By Isaac Bentwich, MD, and Amir Bein, PhD
Drug development and regulation are undergoing a quiet revolution. As discussed in the December 2021 issue of GEN, the European Parliament resolution to phase out animal testing has been followed by a similar initiative in the United States, the FDA Modernization Act of 2021. If this bill becomes law, it will remove an 80-year-old statute that mandates reliance on animal studies. The stage is now set for a transformation of how we discover, develop, and regulate drugs, and a new class of artificial intelligence technologies is an important part of this transformation.
According to Nobel laureate Aaron Ciechanover, MD, DSc, “One of the main problems in drug development is the model that we are using—the mouse. The mouse is not human, so there is no wonder that 92% of drugs that are successful in mice are failing in clinical trials in humans.”
To effectively predict human efficacy and safety, we need to find a new drug development path, one that avoids the faulty reliance on mice. If we don’t, we will never solve the drug safety prediction problem.
Drug development has become unbearably slow and expensive. It costs over $2.6 billion per drug, and it takes 12–15 years to bring a drug to market. A big part of the cost stems from the difficulty of predicting which drug candidates will work safely in humans. A stunning 89% of drug candidates that successfully pass animal testing fail in clinical trials (Van Norman GA. J. Am. Coll. Cardiol. Basic Transl. Sci. 2019; 4: 845–854)—trials that cost hundreds of millions of dollars.
Let that percentage sink in: Animal testing is so ineffective at predicting drug safety and efficacy in humans that it is, in fact, wrong close to 90% of the time. A paradigm shift is needed to move basic research, big pharma, and regulatory agencies to a more efficient drug development system.
“We are at the tipping point of the modernization of drug discovery,” notes Robert S. Langer, ScD, a co-founder of Moderna, a lauded Institute Professor at the Massachusetts Institute of Technology, and the most cited engineer in history.
AI is transforming pharma
So, what does the future hold? How can we better predict which drugs will work safely in humans? How do we break free of the faulty reliance on animal testing? We can apply artificial intelligence (AI). It is now emerging as a disruptor of the pharma industry, with AI-pharma companies—several of them young companies with multibillion-dollar valuations—improving various aspects of drug discovery and development. These companies have already shown significant, measurable savings and impact across the pharma value-creation chain, from drug discovery and development to clinical testing and marketing.
AI-pharma processes and companies may be divided into two broad classes. A first class of AI-pharma may be termed “early stage” or “chemical level” AI. This class, which is characterized by the use of various forms of AI in drug discovery, includes companies such as Isomorphic Laboratories (a Google-Alphabet spinout), Recursion ($2.8B), Exscientia ($2.4B), Insitro ($2.5B), XtalPi ($2B), ImmunAI ($1.4B), BenevolentAI ($1B), and Insilico Medicine ($1B). Companies in this class use AI and powerful bioinformatics to invent new molecules, accelerate discovery, improve the quality of new drug leads, find new targets for a disease, find drug candidates that have better molecule-target fit, repurpose existing drugs, and improve our understanding of the mechanisms of action of drug candidates so as to better anticipate and avoid off-target side effects.
A second class of AI-pharma may be termed “late stage” or “clinical level” AI. Here, powerful AI is used to optimize drug utilization and personalization of drugs that have already attained regulatory approval. Tempus ($8B) is an excellent example of this class of AI-pharma companies.
Both these classes of AI-pharma started out relying on existing biology data, with the emphasis now moving more toward generating cutting-edge biology-at-scale data, which is more informative and actionable. Data from single-cell genomics, epigenetics, proteomics, metabolomics, and immune profiling, as well as from protein-folding prediction studies (such as those performed by Google’s revolutionary DeepMind system), can be analyzed by machine learning. As leading AI-pharma companies have shown, this is effective in significantly improving and accelerating many processes in drug discovery, as well as in post-regulatory utilization of existing drugs.
The next frontier: Bio-AI clinical prediction
And yet, with all this impressive progress, a major AI challenge remains largely unaddressed: how to predict which drug candidates will work safely in the human body. Think of it this way: chemical-level AI processes accelerate drug discovery and may deliver higher-quality, better-understood drug candidates, but each new molecule or target still has to be tested to assess its actual effect in the human body.
Currently, this means testing of a drug candidate begins with traditional in vitro lab assays (traditional 2D tissue cultures and other in vitro assays). If that goes well, testing progresses to animal models. Unfortunately, tests that rely on mouse and rat models are consistently 89% wrong in predicting whether a drug candidate is safe and efficacious in the human body, which brings us back to square one. Current AI platforms do not adequately address this problem.
A new class of AI-pharma called Clinical Prediction AI focuses on predicting which drug candidates will work safely and efficaciously in humans. A major difficulty in addressing this challenge is the data itself. Most existing AI-pharma approaches rely on biology-at-scale in vitro data used or generated by traditional tissue culture methods. While easily accessible and no doubt informative, such data and the resulting insights are extremely poor predictors of clinical safety and efficacy in the human body.
To be successful, Clinical Prediction AI requires the generation of data that captures novel biology and is highly predictive of the clinical safety and efficacy of drugs in the human body. Miniaturized “Organ on Chip” technologies, especially those that interconnect multiple organ models, provide data that is highly predictive of pharmacokinetics (Herland et al. Nat. Biomed. Eng. 2020; 4: 421–436) and pharmacodynamics in the human body. However, in their current form, these technologies are unsuited to the task of quickly and inexpensively conducting thousands and, ultimately, millions of experiments and thereby training a robust AI platform.
To significantly improve drug prediction capabilities, a completely new, holistic approach is needed. We can deliver on the real promise of Clinical Prediction AI only if we start by testing known safe and unsafe drugs on a robust humanized in vitro system, comprising miniaturized patients-on-a-chip within an automated high-throughput platform. Automatically generated data then needs to be classified and used to continuously retrain the machine learning algorithm to generate high-fidelity predictions of clinical safety and efficacy.
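The continuous-retraining loop described above can be illustrated schematically. The following is a minimal sketch, not a description of any company's actual system: it uses a toy logistic-regression model in place of a real clinical-prediction algorithm, and the feature vectors (a hypothetical toxicity marker and a viability readout) and labels are invented purely for illustration. The key idea it shows is that each new batch of labeled chip-assay results is folded into the accumulated training set and the model is refit, so predictions improve as automated experiments accumulate.

```python
import math

class SafetyClassifier:
    """Toy logistic-regression classifier standing in for the machine
    learning model that would be retrained on chip-generated assay data."""

    def __init__(self, n_features, lr=0.5, epochs=200):
        self.w = [0.0] * n_features  # one weight per assay feature
        self.b = 0.0                 # bias term
        self.lr = lr                 # gradient-descent learning rate
        self.epochs = epochs
        self.data = []               # accumulated (features, label) pairs

    def _predict_proba(self, x):
        # Sigmoid of the linear score: probability the drug is "safe" (label 1)
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def retrain(self, new_batch):
        """Fold a new batch of labeled assay results into the training set
        and refit on all accumulated data (continuous retraining)."""
        self.data.extend(new_batch)
        for _ in range(self.epochs):
            for x, y in self.data:
                err = self._predict_proba(x) - y  # gradient of log-loss
                self.b -= self.lr * err
                self.w = [wi - self.lr * err * xi
                          for wi, xi in zip(self.w, x)]

    def predict_safe(self, x):
        return self._predict_proba(x) >= 0.5

# Simulate two rounds of automated experiments with hypothetical features:
# x = [toxicity_marker, viability_readout], y = 1 for known-safe drugs.
clf = SafetyClassifier(n_features=2)
clf.retrain([([0.1, 0.9], 1), ([0.9, 0.1], 0),
             ([0.2, 0.8], 1), ([0.8, 0.2], 0)])
clf.retrain([([0.15, 0.85], 1), ([0.85, 0.15], 0)])  # new data arrives; refit
```

In a real platform the classifier would of course be far richer (and the features far higher-dimensional), but the loop structure—generate, label, accumulate, retrain—is the point.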
Clinical Prediction AI is complementary to, and synergistic with, other AI-pharma approaches. It supports chemical-level AI drug discovery by identifying drug candidates early on that are likely to be safe and effective in the human body, and it works well with clinical-level AI by helping personalize drugs. Together, these different AI approaches will help transform drug development, steer its regulation, and change (or eliminate) the role of animal testing.
Isaac Bentwich, MD, is the founder and CEO of Quris, and Amir Bein, PhD, serves as vice president of biology at the company.