Tracy Vence, GEN
The surest way to bring new drugs to market is also the most obvious—making certain they work before they reach the clinic.
Eleven. That’s the meager percentage of compounds entering clinical trials that will eventually be licensed, according to an estimate from Ismail Kola, Ph.D., and John Landis, Ph.D., who were both affiliated with Merck Research Labs when their approximation was published in Nature Reviews Drug Discovery.
Part of the problem is that, by their nature, preclinical studies are often fraught with complications that generate inconsistent results and, hence, reproducibility issues.
Writing in Nature last March, a former Amgen executive and his academic colleague called for improvements to preclinical cancer research, noting that “clinical trials in oncology have the highest failure rate compared with other therapeutic areas.” In order to increase the robustness and reproducibility of preclinical cancer research, consultant C. Glenn Begley, Ph.D., and the University of Texas MD Anderson Cancer Center’s Lee Ellis, M.D., suggested that a cultural shift would be needed.
“Clearly there are fundamental problems in both academia and industry in the way such research is conducted and reported,” Drs. Begley and Ellis wrote. “Addressing these systemic issues will require tremendous commitment and a desire to change the prevalent culture. Perhaps the most crucial element for change is to acknowledge that the bar for reproducibility in performing and presenting preclinical studies must be raised.”
In a recent PLOS Medicine paper, McGill University’s Jonathan Kimmelman, Ph.D., and his colleagues synthesize guidelines on the design and execution of animal experiments supporting clinical development, in an effort to establish a foundational consensus for scientists working to move basic discoveries to the bedside.
Mining the literature for guidelines addressing the design and execution of preclinical efficacy studies, Dr. Kimmelman et al. extracted common themes, assigned each to specific validity threats, and compiled a handy checklist for rationalizing animal studies. The team also reports having uncovered gaps in published guidelines, and makes recommendations for funding agencies, ethics committees, IRBs, investigators, journals, and regulators as to how best to design, implement, and evaluate preclinical research.
That so few drug candidates progress to—and many fail during—late-stage clinical trials causes more than disappointment among patients in need. The high rate of failure in drug development also drives up the costs of medications that do make it to market.
“There are many reasons why so many drugs fail clinical development. One important reason is that we do a bad job testing drugs in animals,” Dr. Kimmelman tells GEN. “Over the past decades, medicine has adopted a number of practices—like randomization, blinding, reporting standards, prospective registration, replication, et cetera—that prevent spurious findings from being carried over into clinical practice. These practices have only sporadically been taken up in preclinical research.”
At the most basic level, using animals as proxies for human health and disease requires accepting the implicit biases that come with supposing theoretical relationships between, for example, knockout mice and people. No matter how closely mouse models of disease recapitulate human phenotypes, a mouse is a mouse and a person is not.
As Dr. Kimmelman and his colleagues note in their paper, despite even the best intentions, “preclinical researchers might use treatments, animal models, or outcome assessments that are poorly matched to the clinical setting.” Some other potential threats to construct validity—that is, the degree to which inferences are warranted from the sampling particulars of a given experiment—include experimental errors and using disease models that don’t show the same physiological aberrations known to cause the condition in humans.
Of course, there are other, more tractable factors at play that also pose validity threats, such as uncontrolled variables or insufficient replication. In an effort to ensure internal, external, and construct validity, the McGill group compiled a checklist for those involved with in vivo efficacy studies—29 points covering everything from model choice and sample-size determination to randomization procedures and results interpretation, with straightforward “Yes/No” tick boxes. Among other things, one major aim of this work was to provide researchers with “a vocabulary for speaking about validity threats in preclinical research,” Dr. Kimmelman says.
Toward a Consensus
Several groups have voiced concerns over the validity of preclinical research in recent years. Indeed, in its investigation, the McGill team identified more than 2,000 citations denoting preclinical guideline documents. Around half of those covered neurological and cerebrovascular diseases. This was not entirely surprising, Dr. Kimmelman says, because some of the earliest initiatives aimed at improving preclinical research began with stroke drug development, and were swiftly endorsed by researchers studying conditions like Alzheimer’s disease and amyotrophic lateral sclerosis.
Interestingly, though, the researchers found none that explicitly addressed the development of cancer drugs.
“We regard it as encouraging that distinct guidelines are available for different disease areas,” the researchers write in PLOS, noting that validity threats can be disease-specific. For example, they said, the confounding effects of anesthetics present a greater threat to the validity of cardiovascular preclinical studies than to those for cancer because anesthesia can affect heart function, but rarely impacts tumors.
Still, an apparent lack of cancer-specific preclinical research guidelines is a cause for concern. While stopping short of proposing a definitive explanation, Dr. Kimmelman points to the competitive nature of oncology drug development. “Adopting recommendations contained in the guidelines will require a change in culture of preclinical research,” he says. “That culture shift has just not yet penetrated cancer drug development.”
But the need for significant change extends beyond culture. Incentives must also be addressed. “Researchers and research sponsors—like all of us—respond to incentives and disincentives, and they calibrate their behavior against what others are doing,” Dr. Kimmelman says. “The task here is to change both the culture and incentive structure in preclinical research.”
To do so, he notes, will require the involvement of multiple stakeholders. Researchers should work to produce more robust and reproducible results, yes, but sponsors must also act. For their part, funding agencies should stipulate explicit criteria for the conduct of preclinical research, and ethics committees ought to implement more rigorous reviews of proposed studies. Meanwhile, journal publishers ought to establish reporting requirements, and professional societies should promote better practices.
Overall, Dr. Kimmelman suggests it is high time preclinical research be interpreted in context. Achieving promising results in animal models is a critical step in the march toward the clinic, he says, but “the perception that a clinical trial launch is itself a milestone of clinical advance” is misleading. “It is [such a milestone] if the evidence supporting the launch is extensive and sound given the objectives of the trial,” he says. “If it isn’t, trial launch is less a marker of advance than foolish optimism.”
“Threats to validity in the design and conduct of preclinical efficacy studies: A systematic review of guidelines for in vivo animal experiments” was published July 23 in PLOS Medicine.