Artificial intelligence (AI) is poised to transform the pharmaceutical industry. According to a 2017 report, AI solutions for drug discovery may generate revenues exceeding $4 billion by 2024. Within the bioprocessing sector, AI is attracting interest because it promises to reduce costs and accelerate timelines.

“There’s a huge potential for using AI in manufacturing and bioprocessing,” explains Vimal Mehta, PhD, CEO and co-founder of BioXcel Therapeutics. “If you can start to use AI to optimize production processes or help upscale them, you can meet challenges we face today, such as how to scale up production of a vaccine for COVID-19.”

Building upon mature technologies

Over the last 30 years, the meaning of “artificial intelligence” has evolved, according to Richard D. Braatz, PhD, Edwin R. Gilliland Professor at the Massachusetts Institute of Technology. He prefers to refer to AI as “a technology that provides a solution to a problem that is better than a human’s solution.”

This definition covers a huge range of technologies, including pattern recognition, predictive analytics, and deep learning. Some of these AI technologies are sufficiently mature to enhance drug research and development. “We already have products that are using AI,” declares Per Lidén, digital product management leader at Cytiva (formerly GE Healthcare). One such product is IN Carta, cell image analysis software that includes a machine learning module called Phenoglyphs. With this module, IN Carta can automatically classify cells by phenotype.
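Cytiva has not published how Phenoglyphs works internally, but the general pattern of machine learning-based phenotype classification can be sketched in a few lines. The following is a minimal illustration of that pattern only, not the product’s actual pipeline; the per-cell features, labels, and data are all hypothetical.

```python
# A minimal sketch of machine learning-based cell phenotype classification.
# Feature names, labels, and data are hypothetical; the internals of
# Cytiva's Phenoglyphs module are proprietary.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-cell features extracted from microscopy images:
# [area, eccentricity, mean_intensity, texture_contrast]
X = rng.normal(size=(500, 4))
# Toy phenotype labels (0, 1, 2) tied to cell area so the model can learn.
y = np.digitize(X[:, 0], [-0.5, 0.5])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on expert-labeled cells, then classify new cells automatically.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Once such a classifier has been trained on expert-labeled examples, new cells can be assigned phenotypes automatically, which is the workflow the module automates at scale.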

Cytiva is investigating other AI applications. For example, it is using AI to improve the performance of cell cultures and optimize protein purification. “We’re in an interesting situation as an industry,” remarks Lidén. “We have a great interest in AI when arguably we’re not yet industrialized to the extent of other industries.”

He expects the pharmaceutical industry to skip several early AI technologies and adopt cutting-edge infrastructure, rather like countries that came late to telephony and immediately adopted mobile phones.

BioXcel Therapeutics, meanwhile, is using machine learning and big data to identify novel uses of known drug classes. By reviewing millions of publications and extracting metadata in a few hours, the company quickly creates relationship maps that encompass agonists, receptors, symptoms, and other factors.
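The details of BioXcel’s pipeline are proprietary, but the core data structure it describes, a relationship map linking drugs, targets, and symptoms extracted from the literature, resembles a knowledge graph. Here is a toy sketch in Python using networkx; the records and relations are invented for illustration.

```python
# A toy relationship map built from hypothetical publication metadata, in the
# spirit of the graphs BioXcel describes. Records and relations are invented.
import networkx as nx

# Stand-ins for metadata an NLP pipeline might extract from abstracts.
records = [
    {"drug": "dexmedetomidine", "target": "alpha-2 adrenergic receptor",
     "relation": "agonist"},
    {"drug": "dexmedetomidine", "symptom": "agitation", "relation": "reduces"},
    {"target": "alpha-2 adrenergic receptor", "symptom": "arousal",
     "relation": "modulates"},
]

G = nx.DiGraph()
for r in records:
    source, target = [v for k, v in r.items() if k != "relation"]
    G.add_edge(source, target, relation=r["relation"])

# Query the map: every entity reachable from a given drug.
print(sorted(nx.descendants(G, "dexmedetomidine")))
```

Run over millions of abstracts, a graph like this lets analysts traverse from a drug class to receptors to symptoms and spot connections that no single publication states outright.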

According to Mehta, the company used machine learning to identify a drug candidate called BXCL501, which has promise as a treatment for acute agitation in patients with schizophrenia, bipolar disorder, or dementia, as well as for delirium and opioid withdrawal. The company arrived at BXCL501, a proprietary sublingual thin-film formulation of dexmedetomidine, a sedative that has been on the market since 1999, after deciding to focus on neural symptoms and recognizing that agitation creates a high healthcare burden.

“This technology,” Mehta emphasizes, “allowed us to identify a unique mechanism of a well-known drug for indications no one had thought about.”

Putting AI into practice

Neither of BioXcel’s current drug candidates requires advanced, AI-optimized bioprocessing, but Mehta understands its benefits and anticipates that it could become relevant to his company. “There’s a huge application of AI in manufacturing,” he explains. “There are so many steps involved in manufacturing those drugs and so many variables involved in [batch] production.”

Large biotech companies are beginning to investigate how AI can be incorporated into their manufacturing processes. “I am, for sure, looking into AI and how this technology can be used to optimize our operational processes,” writes Stephan Rosenberger, PhD, global head of digital transformation, LPBN Operations, Lonza.

He indicates that Capsugel, a Lonza company, is using machine learning on high-speed camera images to detect defective drug capsules so that capsule machines may automatically eject them. “We are in the process of collecting information on defects from all capsule machines in the global Lonza-Capsugel production network,” he continues. “Preliminary investigation into machine learning on images has been done, but combining image data with other data, such as raw materials data, is still in development.”
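Lonza has not disclosed the specifics of this system, but the basic loop it describes (score each camera frame with a trained classifier, then eject capsules that exceed a defect threshold) can be sketched as follows. The model, features, and threshold here are placeholders, not Capsugel’s actual implementation.

```python
# A sketch of inline capsule defect detection: score each camera frame with a
# trained classifier and flag capsules for ejection. The model, features, and
# threshold are placeholders, not Lonza-Capsugel's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical training data: 8x8 grayscale crops, flattened to 64 values.
X_train = rng.random((200, 64))
# Toy rule standing in for human-labeled defects: unusually dark frames.
y_train = (X_train.mean(axis=1) < 0.48).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def inspect(frame: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the capsule in this frame should be ejected."""
    p_defect = model.predict_proba(frame.reshape(1, -1))[0, 1]
    return p_defect >= threshold

# Simulated high-speed camera stream feeding the eject decision.
for frame in rng.random((5, 64)):
    print("eject" if inspect(frame) else "pass")
```

Combining image data with raw materials data, as Rosenberger describes, would amount to appending those extra measurements to each frame’s feature vector before training.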

Boehringer Ingelheim is using advanced machine learning algorithms, such as neural networks, to optimize its manufacturing processes as part of a wider commitment to adopting digital technology across the company. Hermann Schuchnigg, product owner of the company’s SMART Digital Twin platform, says that Boehringer Ingelheim wants to understand how to optimize procedures and reduce timelines throughout the process of bringing a new drug to market, from designing the manufacturing process to producing the drug.

“From a manufacturer’s perspective, we are looking at increasing productivity, ensuring robustness, and enabling market readiness as early as possible,” he explains. “Reducing timelines for process development and optimization has an impact on the time to market for new products.”

Optimization begins when a neural network, a set of intelligent algorithms, is “taught” to work with a digital model of a real process. Then the algorithms refine themselves to find the settings that give the modeled process the best production efficiency, robustness, and reproducibility. Besides reflecting individual process stages, such as the chromatography step, the model gives a holistic view of the entire process chain.
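Schuchnigg’s description maps onto a familiar pattern: train a neural network on samples drawn from a digital model of the process, then search the trained network for the best operating point. The sketch below illustrates that general pattern only; the process function, parameter names, and architecture are invented, and this is not Boehringer Ingelheim’s platform.

```python
# A simplified "teach a network on a digital model, then optimize" sketch.
# The process function, parameter names, and architecture are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def digital_model(params):
    """Stand-in for a mechanistic model of, e.g., a chromatography step."""
    flow, salt = params[:, 0], params[:, 1]
    return np.exp(-((flow - 0.6) ** 2) - ((salt - 0.3) ** 2))  # yield peaks at (0.6, 0.3)

# "Teach" the neural network by sampling the digital model.
X = rng.random((400, 2))
y = digital_model(X)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)

# Search the trained network for the most productive operating point.
candidates = rng.random((10_000, 2))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("suggested operating point (flow, salt):", best)
```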

Boehringer Ingelheim is currently evaluating its model as a means of optimizing process design, but the company also intends to use it to optimize process control. To achieve this goal with a real process, the company is working to give that process a full-fledged digital twin that can emulate the real process in real time.

“[If our model were to continuously acquire feedback from] our manufacturing systems, it could iteratively optimize the manufacturing process on the go,” Schuchnigg asserts. “While full-scale operationalization in manufacturing requires further technical and regulatory advancement, an initial prototype of this digital tool is currently in preparation, and the company expects first results later this year.”

Researching bioprocessing approaches

Academic researchers are also examining how AI can improve bioprocessing. For example, a team recently showed how co-culture bioprocesses can be improved with “deep reinforcement learning.” The team, which included Chris P. Barnes, PhD, professor of systems and synthetic biology at University College London, and Brian Ingalls, PhD, associate professor of applied mathematics at the University of Waterloo, Canada, described the work in a paper that appeared last April in PLoS Computational Biology. The paper detailed how a machine learning algorithm could be used to optimize the nutrients entering a bioreactor to sustain multiple microbial strains.

Researchers at University College London and the University of Waterloo, Canada, have developed an AI-driven approach to bioreactor optimization. In this image, a reinforcement learning loop is used to control co-cultures within continuous bioreactors, maintaining populations at target levels and optimizing output. [Chris P. Barnes, PhD, University College London]

The machine learning algorithm received feedback on the abundances of two Escherichia coli strains in five virtual bioreactors running in parallel. By applying reinforcement learning, the researchers trained the algorithm to optimize product output within 24 hours.

According to Ingalls, the study shows that machine learning can perform as well as, or better than, a predictive model in which predetermined equations control the bioreactor system. “There’s no underlying predictive model,” he explains. “Instead, the regulator tries different actions and learns from its experience what works and what doesn’t.”
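The published method used deep reinforcement learning on a realistic co-culture model; the toy sketch below illustrates the same trial-and-error learning loop with simple tabular Q-learning on a crude two-strain simulation. The dynamics, reward, and discretization are invented purely for illustration and should not be mistaken for the model in the paper.

```python
# A toy tabular Q-learning controller for a crude two-strain bioreactor
# simulation. The dynamics, reward, and discretization are invented.
import numpy as np

rng = np.random.default_rng(3)
actions = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # feed nutrient A, B
Q = np.zeros((10, 10, len(actions)))  # discretized (pop1, pop2) action values

def step(pops, feed):
    """Crude dynamics: each strain grows on its own feed and competes weakly."""
    growth = 0.3 * np.array(feed) * pops
    dilution = 0.1 * pops
    competition = 0.05 * pops.prod()
    return np.clip(pops + growth - dilution - competition, 0.01, 4.9)

def bin_state(pops):
    return tuple(np.clip((pops * 2).astype(int), 0, 9))

for episode in range(500):
    pops = np.array([1.0, 1.0])
    for t in range(50):
        s = bin_state(pops)
        # epsilon-greedy: mostly exploit the learned values, sometimes explore
        a = rng.integers(len(actions)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
        pops = step(pops, actions[a])
        reward = -np.sum((pops - 1.0) ** 2)  # keep both strains near target level
        s_next = bin_state(pops)
        Q[s][a] += 0.1 * (reward + 0.95 * np.max(Q[s_next]) - Q[s][a])

print("greedy action at target state:",
      actions[int(np.argmax(Q[bin_state(np.array([1.0, 1.0]))]))])
```

Note that the controller never sees the dynamics in `step`; like the regulator Ingalls describes, it learns only from the feedback its actions produce.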

The study showed that a co-culture bioreactor system could be optimized using a reinforcement learning approach that works without the benefit of a prebuilt mathematical model. Such approaches, Ingalls believes, could be adapted by the pharmaceutical industry to control processes that are even more complex. He notes that the pharmaceutical industry currently finds it hard to optimize for multiple species of microbe at the same time.

“What we’ve done is pretty novel,” says Barnes. Although the study was done using virtual bioreactors, he is now applying for a grant to move the work to benchtop scale. “We hope to demonstrate it in the next year or so in small tabletop bioreactors and then try to scale to 10 L in a pilot plant,” he continues. He adds that this would provide the expertise required to start a company and involve industrial partners. “Once we’ve shown it working,” he declares, “that will be an exciting time.”

Making intelligent decisions

A team led by Braatz is designing an intelligent “decision tree” to help biopharmaceutical companies make better decisions based on the data collected from their processes. The decision tree selects the best data analytics software, based on the characteristics of the data, to produce the most accurate predictive models.
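The MIT system’s actual selection logic is not described here, but the idea of routing a dataset to a model family based on its characteristics can be illustrated with a few invented rules:

```python
# A toy illustration of the decision-tree idea: inspect basic characteristics
# of a dataset and route it to a suitable model family. These rules are
# invented placeholders, not the MIT system's actual logic.
import numpy as np

def select_method(X: np.ndarray, nonlinear_hint: bool = False) -> str:
    n_samples, n_features = X.shape
    if n_samples < n_features:
        return "partial least squares"  # wide data favors latent-variable models
    if nonlinear_hint:
        return "random forest" if n_samples > 500 else "support vector regression"
    corr = np.corrcoef(X, rowvar=False)
    collinear = np.any(np.abs(corr[np.triu_indices(n_features, k=1)]) > 0.9)
    return "ridge regression" if collinear else "ordinary least squares"

X = np.random.default_rng(4).random((100, 5))
print(select_method(X))  # uncorrelated random columns -> "ordinary least squares"
```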

“In case studies, the software performs better than human experts and is fully automated,” he explains. “Consequently, users can focus on defining their problems and making decisions based on the models.”

Braatz and colleagues are currently developing decision trees for addressing more complex problems. They hope that the system will allow companies to speed process development, to bring approved drugs to market faster, and to reduce the losses that are incurred when drug candidates fail late in clinical trials.

Moving to the future

According to Barnes, reinforcement learning works only if there’s continuous feedback between the bioreactor and the algorithm. That requires online measurements such as near-infrared (NIR) spectroscopy, in which the absorption of NIR light by a sample reveals the sample’s composition.
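In practice, online NIR spectra are commonly converted into composition estimates with a chemometric calibration such as partial least squares (PLS) regression. The following sketch uses synthetic spectra; a real calibration would pair measured spectra with reference assay values.

```python
# A minimal chemometric calibration mapping NIR spectra to composition with
# partial least squares (PLS) regression. The spectra here are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n_wavelengths = 200
concentrations = rng.random(100)  # reference assay values for 100 samples

# Synthetic spectra: one absorption peak whose height tracks concentration.
peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 80) / 10) ** 2)
spectra = np.outer(concentrations, peak) + 0.01 * rng.normal(size=(100, n_wavelengths))

pls = PLSRegression(n_components=2).fit(spectra, concentrations)

# A new spectrum from the online probe yields an immediate composition estimate.
new_spectrum = 0.4 * peak + 0.01 * rng.normal(size=n_wavelengths)
print("estimated concentration:", pls.predict(new_spectrum.reshape(1, -1))[0, 0])
```

It is this kind of immediate, probe-to-number feedback that lets a learning controller act on a continuous process in real time.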

“Once you go to continuous processing and you have online measurement, you can apply these AI techniques like reinforcement learning,” he says, pointing out that most companies still rely on batch processing. Relatively few companies have embraced continuous processing, which remains a technology for the future.

GNS Healthcare has developed an “in silico patient” called the Reverse Engineering, Forward Simulation (REFS) platform. According to the company, REFS not only reveals the drivers of disease progression, it also anticipates how individual patients will respond to selected drugs. The platform is designed to answer “what if” questions beyond the scope of conventional AI systems.

Other companies are applying AI to the emerging field of personalized medicine. GNS Healthcare has developed an “in silico patient,” a cause-and-effect model designed to predict which patients will respond (or not) to drug treatment. “We’re not the only people with predictive technology, that is, deep learning,” says Colin Hill, PhD, chairman, CEO, and co-founder of GNS Healthcare. “But no one else has causal AI and cause-effect technology designed for the biopharmaceutical world.”

According to Hill, the pharmaceutical industry could unravel the complexities of human biology more effectively with GNS Healthcare’s technology. The in silico patient, he insists, is an increasingly accurate model of complex human responses, one built from data on a cohort of patients.
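REFS itself is proprietary, but the distinction Hill draws between prediction and causal “what if” reasoning can be illustrated with a toy structural causal model, in which intervening on treatment (rather than merely observing it) yields the causal effect. Every variable and effect size below is invented.

```python
# A toy structural causal model contrasting observation with intervention.
# Variables and effect sizes are invented; this is not the REFS platform.
import numpy as np

rng = np.random.default_rng(6)

def simulate(n, dose=None):
    """dose=None samples treatment observationally; a fixed dose intervenes."""
    biomarker = rng.normal(size=n)  # patient covariate affecting response
    d = rng.binomial(1, 0.5, size=n) if dose is None else np.full(n, dose)
    response = 2.0 * d + 1.0 * biomarker + rng.normal(scale=0.1, size=n)
    return response

# "What if" query: expected response if every patient received the drug,
# versus if none did, i.e., the causal treatment effect (true value: 2.0).
treated = simulate(100_000, dose=1)
untreated = simulate(100_000, dose=0)
print("estimated causal effect:", treated.mean() - untreated.mean())
```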

Some of the company’s clients are applying this technology to optimize the manufacturing of chimeric antigen receptor (CAR) T-cell therapies. “What the example of CAR T-cell therapies shows is that, as AIs try to tame the complexity of human disease, it’s now becoming possible to move toward increasingly personalized treatments,” Hill declares. “And AI is going to be central to that.”
