As pressure increases on major pharma to bring new compounds into development, so too does pressure to make sure that these compounds are safe to test in humans. As evidenced by last year’s TGN1412 trial, no company can afford to skip steps in ensuring that the compounds tested in humans are safe. Ensuring safety and lack of toxicity during drug development was a central theme of Cambridge Healthtech’s “Trends in Drug Safety” conference, held in San Francisco, and Oxford International’s “Second Biomarkers Discovery and Development Congress” in Manchester, U.K.
Emerging Safety Biomarkers
“Safety-related problems continue to be one of the major causes of drug attrition in preclinical and clinical development,” said Rakesh Dixit, senior director and global head of toxicology at MedImmune (www.medimmune.com). “The search for biomarkers that can be objectively linked to adverse effects and target organ toxicities, and can then be translated from preclinical to clinical development, is becoming an urgent matter for academics, federal agencies, and pharmaceutical companies alike.”
One key point, said Dixit, is that establishing the safety of the compound in question is absolutely crucial. “In the preclinical stages, you can observe what effect a compound has in animals, but when you attempt to examine the effects in man, it is harder to do because of the limited noninvasive tests available.”
One of the primary challenges in using biomarkers to examine safety is that the tools available to reveal that information are already 20–30 years old. “With regard to validating safety, there is really nothing out there that is cutting-edge for detecting low-level adverse responses noninvasively,” stated Dixit.
However, there are newer and noninvasive ways to look at biomarkers, such as analyzing whole blood, plasma, urine, and limited human biopsy samples (e.g., skin, muscle, buccal mucosal cells), which are “probably the best line of analysis in humans right now,” reported Dixit. “The next line would be to do a genetic profile, and that’s easier to do, too. But if you’re doing a clinical trial, repeat muscle biopsies, for example, are harder to get.”
Dixit also discussed the difference in definition between safety biomarkers and toxicity biomarkers. “Some people use these terms interchangeably, but there are subtle differences,” explained Dixit. “The key thing to keep in mind is that toxicology is about how high you can push the dose before things go wrong. Safety is how low you can go in dose response before subtle, low-grade toxicities may appear.
“Safety biomarkers have a higher bar to reach than efficacy biomarkers. They are harder to evaluate. But, the best way to validate is to conduct exploratory safety biomarker analyses and see what you experience in the clinic along with conventional biomarkers. However, the burden of proof is high, and the key challenge facing newer biomarkers is obtaining that proof,” Dixit said.
Establishing Human First-Dose Levels
Christopher Horvath, senior director of toxicology at Archemix (www.archemix.com), noted that while one can evaluate what are considered classic toxicologic endpoints, those traditional measures of what might or might not be safe are not appropriate for some biologics. “For example, with biologics there are issues of superpharmacology—too much of the desired effect—being responsible for the observed adverse effects. In contrast, for small molecules, the observed toxicity is often related to chemical metabolites, so you need to look at secondary (or safety) pharmacology when you get different results from what you first expected from a compound. A critical distinction in the selection of relevant species for nonclinical safety evaluation is that the chosen species should have comparable metabolic profiles for a drug and comparable pharmacologic activity for a biologic,” Horvath said.
Currently, there is regulatory guidance available for establishing a maximum recommended starting dose (MRSD) for a first-in-human (FIH) study. “These parameters focus on the no-observed-adverse-effect level (NOAEL) and toxicity and/or exposure algorithms to arrive at the FIH MRSD,” said Horvath. “But, it’s important to note that the extent to which the information generated during nonclinical development of biologics is relevant to subsequent clinical development depends chiefly on the degree of pharmacologic relevance of the test systems. A NOAEL achieved in a species incapable of displaying either appropriate pharmacologic activity or relevant toxicity is not a good starting place for human safety predictions.”
Horvath pointed out that one needs only to look at the results of the TGN1412 trial last year, wherein six volunteers were administered a dose of 0.1 mg/kg—a dose 500 times lower than the NOAEL dose in monkeys (50 mg/kg)—and all six suffered catastrophic multiorgan failure. “I think all of us are saying the same thing—that if you blindly apply mathematical algorithms, the likely effect is that you leave yourself open to wholly unpredictable results. Unfortunately, many view the role of nonclinical development as telling clinical what they cannot do with respect to toxicity in FIH studies.
“Rather, we should be able to tell people in clinical development what they can do or expect to see in humans, to be able to make better use of animal studies, to better inform and educate your clinicians, and to help guide them in setting pharmacologically relevant doses and concentrations for FIH studies,” Horvath said.
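For reference, the standard NOAEL-based algorithm Horvath alludes to can be sketched as follows. This is a minimal illustration based on the published FDA guidance on estimating a maximum safe starting dose (body-surface-area conversion factors and a default 10x safety factor); the function name and structure are the author's own, not code from any of the companies quoted here.

```python
# Sketch of the NOAEL-based MRSD algorithm (illustrative, per FDA's 2005
# first-in-human dosing guidance; not any speaker's actual code).

# Body-surface-area conversion factors: dividing an animal NOAEL (mg/kg)
# by this factor gives the human equivalent dose (HED).
BSA_CONVERSION = {"mouse": 12.3, "rat": 6.2, "dog": 1.8, "monkey": 3.1}

def mrsd(noael_mg_per_kg: float, species: str, safety_factor: float = 10.0) -> float:
    """Maximum recommended starting dose (mg/kg) for a first-in-human study."""
    hed = noael_mg_per_kg / BSA_CONVERSION[species]  # human equivalent dose
    return hed / safety_factor                       # default 10x safety margin

# Monkey NOAEL of 50 mg/kg, the TGN1412 figure quoted above:
print(round(mrsd(50.0, "monkey"), 2))  # ≈ 1.61 mg/kg
```

Notably, the TGN1412 volunteers received 0.1 mg/kg, roughly 16-fold below even this algorithmic MRSD, yet were still harmed, which is exactly Horvath’s point about blindly applying mathematical algorithms to pharmacologically active biologics.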
Another area of analysis that was covered at the Cambridge confab is hepatotoxicity. “Drug-induced liver toxicity is a major issue, not only for current health care but also drug development,” said Philip Hewitt, head of toxicogenomics for Merck KGaA (www.merck.de) of Darmstadt, Germany. “Toxicogenomics is gaining importance as a tool for toxicity prediction and supports classic toxicity tests for rapid and early toxicity screening.”
Hewitt reported that his group has been looking at both global arrays (of 20,000–24,000 genes) and focused arrays (550 liver-specific genes), concentrating on well-known model compounds. “We have good in vivo data using these technologies, with accurate classification of hepatotoxic compounds using gene-expression signatures. However, we are still having problems with in vitro technologies. This is the area we are currently focusing on.”
One model hepatotoxicant Hewitt’s lab has been studying is lipopolysaccharide (LPS), using pathway analysis tools to aid interpretation of the mechanisms of toxicity. In vivo and in vitro comparisons in gene deregulation after exposure to LPS have been compared. “We see similar responses in vivo and in vitro, with an excellent correlation between the systems, giving us confidence that we are going in the right direction,” said Hewitt. “One of our aims is to eventually be able to predict drug toxicity at low drug levels and after long-term treatment of rat and human hepatocytes.”
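The cross-system check Hewitt describes can be illustrated with a toy calculation: correlating per-gene fold-changes measured in vivo and in vitro after exposure. The numbers below are synthetic (not Merck’s data), and the 550-gene panel size simply mirrors the focused liver array mentioned above.

```python
# Toy illustration (synthetic data) of comparing in vivo and in vitro
# gene deregulation after LPS exposure, as Hewitt's group does.
import numpy as np

rng = np.random.default_rng(1)
in_vivo = rng.normal(0.0, 1.0, 550)               # log2 fold-changes, 550 liver genes
in_vitro = in_vivo + rng.normal(0.0, 0.4, 550)    # similar response plus assay noise

# Pearson correlation between the two systems' per-gene responses
r = np.corrcoef(in_vivo, in_vitro)[0, 1]
print(round(r, 2))  # high r would support "similar responses in vivo and in vitro"
```

A correlation near 1 across the gene panel is the kind of evidence behind Hewitt’s statement of “an excellent correlation between the systems.”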
Genomic Biomarker Usage
“Our lab is heavily involved with translational research in oncology,” said Hans Winkler, senior director of functional genomics at Johnson & Johnson Pharmaceutical Research and Development (www.jnjpharmarnd.com). “We focus on the pharmacodynamic work on all projects that are in preclinical testing, looking closely at targets and effects in vivo and in vitro.”
Winkler noted that with current targeted therapies in oncology, the response rates are relatively low, about 10–15%. “Obviously this is not a good response rate,” Winkler said. “So larger trials are needed to show efficacy.”
One approach to tackle this is gene-expression analysis and profiling. “And once we have the gene-expression profile,” said Winkler, “we can make comparisons and identify those genes that contribute to a response. What we are researching depends on the compound and the clinical activity. In principle, you follow the compound. In research, everyone is going after the major tumors and cancers. But, there are also the hematological indications to pursue.”
In addition, Winkler focused on developing proof of principle and proof of concept. “One important aspect of research is to understand the molecular aspect—the mechanism of action at the molecular level. We do a lot of compound profiling, and in that, we learn how those gene expressions are changed by a compound, which genes work with a compound, and which genes interfere with a compound.”
However, Winkler noted that the way most companies do safety and efficacy studies—usually broken into three steps by three departments doing three different kinds of studies (pharmacology, pharmacokinetics, and safety)—often results in important information falling by the wayside.
“If we want to do safety and efficacy studies effectively, we need to do these three studies at once, and we need to generate more effective data. Right now, we start with 100 compounds and maybe one makes it to development. If we are able to improve success rates, improve ways of assessing clinical safety, and do so in a more integrated fashion, it would be the start of a strong clinical program.”
Markers for Predictive Toxicology
Hugh Salter, associate director of pharmacogenomics at AstraZeneca (www.astrazeneca.com), said, “We’re examining potential hepatotoxicity markers using microarray data taken from in vitro samples. My talk focused on looking at the complexity of microarray data analysis techniques, the building of predictive models, and some of the limitations.”
In general terms, it is possible to build promising predictive models from quite naive data. “We’ve developed an interesting method by which we can describe a set of compounds that allows you to make predictions in a set of independent models. It allows you to form clusters in the data, and you can see how the chemical information overlaps with the biology,” Salter said.
Salter noted that his group does have a route to examine a set of compounds in the in vitro space, but to see how that is relevant to behavior in the in vivo space is a key issue. “One problem is that with microarray data, you are dealing with 30,000 to 40,000 signals, and we need to find the 300 to 400 most important genes to profile—in other words, large numbers of variables, small numbers of samples. We need to flip this equation around so that vastly more samples can be screened.
“The main point is that finding the markers is not as simple as we’d like it to be,” Salter said. “It’s a set of techniques that certainly have promise, and what we’ve done is work on improving the data readouts and technology.”
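The “large numbers of variables, small numbers of samples” problem Salter raises is commonly attacked with a univariate filtering step before any predictive model is built. The sketch below is illustrative only (synthetic data, not AstraZeneca’s method): it ranks ~30,000 probe signals by a two-sample t-statistic and keeps a few hundred top genes, mimicking the 300–400-gene target mentioned above.

```python
# Minimal sketch of univariate gene filtering for "large p, small n"
# microarray data (synthetic example, not any speaker's actual pipeline).
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_tox, n_ctrl = 30_000, 10, 10            # many variables, few samples
tox = rng.normal(0.0, 1.0, (n_tox, n_genes))       # "hepatotoxic" profiles
ctrl = rng.normal(0.0, 1.0, (n_ctrl, n_genes))     # control profiles
tox[:, :300] += 2.0                                # 300 genes truly deregulated

def top_genes(a, b, k=300):
    """Rank genes by absolute Welch t-statistic; return indices of the top k."""
    t = (a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return np.argsort(-np.abs(t))[:k]

selected = top_genes(tox, ctrl)
print(np.mean(selected < 300))  # fraction of truly deregulated genes recovered
```

Even this simple filter recovers most of the planted signal here, but with only ten samples per group it also misses some and admits noise genes, which is precisely why Salter argues for screening vastly more samples.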
Biomarker-Based Nonanimal Toxicity Testing
The keynote presentation at “Biomarkers Discovery and Development” examined the identification, validation, and implementation of biomarker-based, nonanimal toxicity tests in drug development. “We’re not heavily geared toward genomic biomarker assessment—we’re working with a reconstituted epidermis model,” said Stuart Freeman, director of worldwide toxicology, GlaxoSmithKline (www.gsk.com). “We’ll examine skin irritation, sensitization, and look at biomarkers that are predictive of toxicities that are seen in the skin.”
The presentation discussed the identification of key transit biomarkers and relevant end points, the validation of biomarker response to toxic challenge, and the demonstration of validated tests’ effective use in drug development. “While we’re not currently using genomic biomarkers in our skin model, it’s an area of tremendous promise and application,” Freeman said. “As we increase our knowledge base in this area, these techniques will become a mainstream way of assessing risk and toxicity.”
What Freeman’s group looks for is cytokines secreted by skin models. “When it comes to sensitization, there is an immunological response,” said Freeman.
“Secreted biomarkers, like cytokines, improve our ability to pick up sensitization events upstream. Toxicology high-throughput screening can be enhanced by this technology. As soon as you discover a compound, the clock starts ticking,” said Freeman. “Anything that increases the speed of drug development is good business.”