May 1, 2006 (Vol. 26, No. 9)

Refining and Adapting Established Approaches to Drug Development

The field of translational medicine is witnessing rapid maturation as companies focus their efforts on streamlining the passage of drug leads from the bench, through clinical trials, and ultimately to the bedside. In contrast to traditional approaches to early drug development, translational medicine brings human experiments into the discovery process at an earlier stage, allowing them to contribute more to target selection and candidate optimization.

Achieving this degree of extrapolation requires refining and adapting established approaches to drug development, as well as adopting novel paradigms. A recent Cambridge HealthTech conference on translational medicine brought together scientists from a variety of backgrounds to discuss issues surrounding this rapidly maturing discipline.


Attrition refers to the failure of a drug at some point along the drug pipeline and, as Mark Rolfe, Ph.D., vp of discovery oncology at Millennium Pharmaceuticals, pointed out, it can be due to one of several factors.

"The drug might not have the appropriate pharmacokinetic properties to mediate an effect in the intended tissue," Dr. Rolfe said, although improvements in pharmacometrics in recent years have made selecting appropriate molecules easier. A second major attritional variable is toxicity, which, he estimated, is responsible for up to one-third of drug failures. "Toxicity can be target based, in which the drug is hitting the target but generates toxicity in tissues not predicted in preclinical trials."

Bruce H. Littman, M.D., vp of global translational medicine at Pfizer, focused his talk on strategies for mitigating Phase II attrition and cited a disconnect between the different phases of drug development that has not been successfully bridged.

"Phase I delivers pharmacokinetic, safety, and tolerance information but often does not include good pharmacodynamic data demonstrating drug activity on its target," Dr. Littman observed. "Failure in Phase II has often been attributed to compound-specific reasons rather than to the selection of a novel drug target that was not really critical for disease expression. So we could end up studying another compound when the real problem was a bad drug target."

Dr. Rolfe agreed. "A drug can be safe enough and hit the target, but sometimes the ensuing biology doesn't really support efficacy. So, it may not have been a good target in the first place."

According to Dr. Littman, getting a jump on drug efficacy to avoid misinterpreting Phase II data requires examining drug pharmacodynamics. "One of the strategies we have in translational medicine is to translate biomarkers of drug activity from preclinical models to humans."

Dr. Littman also stressed the importance of establishing an airtight drug mechanism as early as possible in the development pipeline. "Phase II transition requires a positive proof-of-mechanism, so failure in Phase II will kill the drug target for that indication." Achieving this level of insight into drug efficacy and mechanism falls in large part to biomarkers and animal models, two key elements in the new discipline of translational medicine.


Biomarker is an umbrella term that extends across the entire spectrum of drug development, encompassing a variety of pharmacogenomic, proteomic, metabolic, pharmacological, and toxicological markers that contribute information on a drug's target, pathway, mechanism, safety, or effect on disease progression.

Dr. Littman classified biomarkers into three types: target, mechanism, and outcome. "Target biomarkers demonstrate drug interaction with targets, such as receptor occupancy using a positron emission tomography ligand," he explained.

"Mechanism biomarkers demonstrate physiological, biochemical, genomic, or behavioral changes downstream of the drug target, such as glucose reduction for a diabetes drug," Dr. Littman added.

Finally, outcome biomarkers act as substitutes for clinical efficacy or safety and are independent of drug mechanism. "Quantifying viral load for antiviral drugs is an example of this."

Another class of biomarkers is toxicity biomarkers, which report on the toxicology of a drug in a specific model system. "Investing in exploratory toxicology earlier in the drug discovery process is particularly important for unprecedented targets, that is, targets where there is no drug already available as a precedent for your approach," said Dr. Rolfe.

For Dr. Littman, a thorough understanding of a drug's mechanism is a critical tool early in the development pipeline, and biomarkers play an important role in decision-making at this stage. "As opposed to past practices, we no longer take mystery mechanisms into human testing." One of the specific benefits of establishing a robust mechanism is in defining the therapeutic index: essentially, a comparison of the amount that causes the therapeutic effect to the amount that causes toxic effects.
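As a minimal sketch of the ratio involved, the therapeutic index is commonly expressed as the median toxic dose (TD50) divided by the median effective dose (ED50); the dose values below are invented for illustration, not figures from the conference.

```python
# Illustrative only: therapeutic index as the TD50/ED50 ratio.
# A larger index implies a wider margin between the dose that
# works and the dose that harms.
def therapeutic_index(td50_mg, ed50_mg):
    """Return TD50 divided by ED50 for a drug."""
    return td50_mg / ed50_mg

# Hypothetical drug: toxic at a median 500 mg, effective at a median 25 mg.
print(therapeutic_index(td50_mg=500.0, ed50_mg=25.0))  # 20.0
```

A drug with an index of 20 has far more dosing headroom than one with an index of 2, which is why establishing the mechanism, and hence the toxicity profile, early matters so much.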

"Most drugs will have toxicity associated with very high doses, so you have to understand the mechanism of action to optimize dose selection for efficacy and minimize off-target toxicity or target-related toxicity," Dr. Littman explained.

"This is much easier to do if you can construct a pharmacokinetic model and select doses that optimize the desired therapeutic index," he added. "We have reported on a novel immunosuppressant that targets JAK3 but also has some unwanted JAK2 activity. By using biomarkers of JAK3 and JAK2 activity we can select doses that optimize JAK3 inhibition and minimize JAK2-mediated toxicity."
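The kind of biomarker-guided dose selection Dr. Littman describes can be sketched roughly as follows. The one-site binding model, the IC50 values, and the candidate dose list are all invented assumptions for illustration; they are not Pfizer's actual model or data.

```python
def inhibition(dose, ic50):
    """Fractional inhibition from a simple one-site binding model (assumed)."""
    return dose / (dose + ic50)

def select_dose(doses, ic50_offtarget, max_offtarget=0.2):
    """Pick the highest candidate dose whose predicted off-target
    inhibition stays under the tolerated threshold."""
    acceptable = [d for d in doses if inhibition(d, ic50_offtarget) <= max_offtarget]
    return max(acceptable) if acceptable else None

doses_mg = [1, 3, 10, 30, 100]  # hypothetical candidate doses
# Assumed potencies: intended target (JAK3-like) IC50 = 2, off-target
# (JAK2-like) IC50 = 50, in the same dose units.
best = select_dose(doses_mg, ic50_offtarget=50)
print(best, round(inhibition(best, 2), 2))  # 10 0.83
```

With these assumed numbers, 10 units is the largest dose that keeps predicted off-target inhibition at or below 20% while still achieving roughly 83% inhibition of the intended target, which is the trade-off the biomarkers are meant to expose.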

Comparative Genomics

The traditional in vivo testing ground for preclinical drug development has been the model organism, generally the mouse or rat. While these organisms have typically been used to provide sometimes crude extrapolations of toxicity and pharmacokinetics to humans, they are of rather more limited use when it comes to interpreting the effect of a drug on a disease.

"In general we know that certain diseases, such as polyglutamine disease or diseases of the immune system or central nervous system, do not reveal themselves in animals," said Peter Groenen, project leader for bioinformatics and pharmacogenetics at NV Organon.

"The recent whole genome comparisons between the rat, mouse, and human genomes have shown that several genes involved in the immune response and reproduction show a very different evolution," Groenen noted, urging caution in the interpretation of results from rodents.

Groenen and his team of bioinformaticists are practitioners of an increasingly important discipline in drug development, comparative genomics, which enables the development of new suites of tools to interrogate and integrate data from the newly sequenced genomes of humans and model organisms. The hope is that the refined approaches to preclinical testing that bioinformatics makes possible will facilitate the extrapolation of animal data to humans.

For example, the sequences of specific drug targets from different genomes can be compared to identify residues potentially critical to the function of the protein. However, Groenen cautioned, challenges lie ahead, particularly in regard to the sheer magnitude of sequencing data being generated. "The current versions of the nucleotide databases GenBank and EMBL contain more than 127,044,948,169 nucleotides and already exceed 80 gigabytes."
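As a toy illustration of the cross-genome comparison Groenen describes, one can take aligned orthologs of a drug-target protein and flag the residue positions that are not conserved across species. The short sequences below are invented placeholders, not real orthologs.

```python
# Hypothetical aligned ortholog fragments of a drug-target protein.
human = "MKTLLVAGGF"
mouse = "MKTLLVSGGF"
rat   = "MKTLIVSGGF"

def divergent_positions(seqs):
    """Return 1-based alignment positions where the sequences differ.

    Conserved positions hint at residues critical to function; divergent
    ones flag places where rodent data may extrapolate poorly to humans.
    """
    return [i + 1 for i, residues in enumerate(zip(*seqs))
            if len(set(residues)) > 1]

print(divergent_positions([human, mouse, rat]))  # [5, 7]
```

Real comparative-genomics pipelines would of course start from a proper multiple sequence alignment and score conservation statistically, but the underlying question, which residues differ between the model organism and the patient, is the same.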

Principal among the goals of translational medicine is gaining detailed knowledge of the interaction of a drug candidate with its target, together with the ability to fine-tune the precise molecular interactions that are often crucial to the drug's efficacy in the clinical phase.

Protein Tomography

Sidec Technologies has spent several years developing and refining a technology that holds promise in this regard (see corporate profile of Sidec in the February 15 issue of GEN, p. 10). Protein Tomography is a three-dimensional imaging protocol that allows the analysis of protein conformation in in situ samples. It uses low-dose electron tomography coupled with the company's algorithm to reconstruct both individual proteins and macromolecular complexes.

In a collaboration with the University of Helsinki, the Karolinska Institute, and the University of California, Davis, Protein Tomography was used to provide insight into the role of the slit diaphragm, a component of the glomerular filtration barrier, in protein filtration. A possible etiological mechanism of congenital nephrotic syndrome was also elucidated using the technique.
