A “digital bioprocess replica” is, as the name suggests, a computer model of a bioprocess. The aim is to simulate the entire process, including all critical parameters and quality attributes, to enable optimization.
Data-rich bioprocess replicas can accelerate development, according to Jens Smiatek, PhD, from the Institute for Computational Physics at the Universität Stuttgart in Germany.
“Pharmaceutical manufacturing and process development can be improved by the early analysis of relevant critical process parameters (CPPs) and critical quality attributes (CQAs), the efficient determination of proven acceptable ranges (PARs) and less time-consuming exploratory experiments,” he says.
“Hence, deeper process knowledge and control due to the digital bioprocess replica provide a more tailor-made design of the bioprocess with less wet-lab data.”
The potential impact on quality can be significant, notes Smiatek, citing compliance with regulations as a particular benefit.
“Digital bioprocess replicas directly reproduce Quality by Design principles. Hence, deeper process knowledge and control can be achieved, which is in full agreement with regulatory requirements,” he continues.
“Moreover, recent knowledge management initiatives like the knowledge-aided assessment and structured application (KASA) framework can be fully supported by such approaches. In addition, process analytical technology (PAT) methods in automated laboratories can also be connected to digital models, which provide real-time control of the process.”
In Smiatek’s opinion, higher product quality can only be achieved through deeper molecular understanding and process knowledge. Models may help to reach these goals and to broaden our knowledge.
Constructing a digital replica is straightforward, at least at the conceptual level. Process analytical technologies measure multiple critical parameters from each unit operation, and the data are used to build a model.
In practice, there are challenges, with model accuracy being the major hurdle, Smiatek says.
“A digital bioprocess replica strongly relies on scientifically sound unit operation models. The nature of these models is not important, as long as they reproduce the system behavior to a sufficient extent. Thus, one requires a bullet-proof validation and calibration procedure for the models. Such validation and calibration procedures are usually performed against experimental data,” he points out.
“Thus, one definitely needs data from the experimental process to calibrate and validate not only the individual unit operation models but also the digital bioprocess replica.”
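In practice, such a calibration step often amounts to fitting a unit operation model's parameters against wet-lab measurements and checking the residual error. A minimal sketch, assuming a deliberately simple first-order titer model and synthetic data (the model form, parameter names, and all numbers are illustrative, not from any validated process):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical unit operation model: first-order product formation,
# P(t) = p_max * (1 - exp(-k * t)). Purely illustrative.
def product_titer(t, p_max, k):
    return p_max * (1.0 - np.exp(-k * t))

# Synthetic "experimental" calibration data (time in h, titer in g/L)
t_obs = np.array([0.0, 12.0, 24.0, 48.0, 72.0, 96.0])
p_obs = np.array([0.0, 1.1, 1.9, 2.9, 3.4, 3.6])

# Calibrate the model parameters against the wet-lab data
(p_max_fit, k_fit), pcov = curve_fit(product_titer, t_obs, p_obs, p0=[4.0, 0.02])

# Validate: root-mean-square error between the calibrated model and the data
rmse = np.sqrt(np.mean((product_titer(t_obs, p_max_fit, k_fit) - p_obs) ** 2))
print(f"p_max = {p_max_fit:.2f} g/L, k = {k_fit:.3f} 1/h, RMSE = {rmse:.3f} g/L")
```

In a real replica, validation would additionally be run against held-out batches rather than the calibration data itself.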
Data integration is also a complex process, according to Smiatek, who says defining multivariate distribution functions for transfer between the unit operations requires careful planning.
“One wants to exchange data distributions between the unit operation models with mean values and standard deviations and, if possible, all relevant parameters for the process,” he explains. “Therefore, one usually does not want to reduce the design space but to keep it as close as possible to the experimental setup, so as not to integrate out relevant degrees of freedom. Such ignored parameters usually affect the outcomes in terms of hidden complexities.”
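One way to see why full distributions, rather than summary means, should be handed between unit operation models is a small Monte Carlo sketch: when the downstream step responds nonlinearly, the output distribution cannot be recovered from the input mean alone. The unit operations and all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical upstream unit operation: harvest titer as a distribution
# (mean and spread), not a single set point. Numbers are illustrative.
n = 10_000
titer_in = rng.normal(loc=3.5, scale=0.4, size=n)   # g/L

# Hypothetical downstream capture step with a nonlinear yield: off-center
# batches recover less product, so variability in the input propagates
# into a shifted, reshaped output distribution.
yield_frac = 0.95 - 0.02 * (titer_in - 3.5) ** 2
titer_out = titer_in * yield_frac

print(f"in : {titer_in.mean():.2f} +/- {titer_in.std():.2f} g/L")
print(f"out: {titer_out.mean():.2f} +/- {titer_out.std():.2f} g/L")
```

Passing only the input mean through the same yield function would miss exactly the "hidden complexities" Smiatek describes.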
From a technology standpoint, having sufficient computing power is critical, particularly for the analysis of complex phenomena like fluid dynamics.
“Most mechanistic models as well as hybrid models are not very demanding. Standard computers with graphics processing units (GPUs) in combination with differential equation solvers usually do the job,” says Smiatek.
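Mechanistic bioprocess models of the kind Smiatek describes are typically small systems of ordinary differential equations, which standard solvers handle in milliseconds. A minimal sketch assuming simple Monod growth kinetics (all parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical mechanistic cell-culture model: Monod growth kinetics.
# Parameter values are illustrative, not from any validated process.
MU_MAX, K_S, Y_XS = 0.1, 0.5, 0.5   # 1/h, g/L, g biomass per g substrate

def monod(t, y):
    x, s = y                          # biomass, substrate (g/L)
    mu = MU_MAX * s / (K_S + s)       # specific growth rate
    return [mu * x, -mu * x / Y_XS]

# Integrate 100 h of culture from 0.1 g/L biomass and 10 g/L substrate
sol = solve_ivp(monod, (0.0, 100.0), [0.1, 10.0])
x_end, s_end = sol.y[:, -1]
print(f"biomass: {x_end:.2f} g/L, substrate: {s_end:.2f} g/L")
```

A model at this scale needs no special hardware; GPUs and larger clusters only become relevant for the CFD and molecular simulations mentioned next.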
“However, more computing resources are needed for computational fluid dynamics (CFD) or atomistic or molecular models. Fortunately, the current state of the art in computational hardware is sufficient but can still be improved.
“Machine learning routines can nowadays easily be implemented by using standard routines from Python modules like TensorFlow, Keras, or scikit-learn. Hence, the time is right for the development of holistic process models.”
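As an illustration of how few lines such a standard routine takes, the sketch below fits a small scikit-learn neural network mapping two critical process parameters to one quality attribute. The data set is entirely synthetic, and the parameter names are hypothetical:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic data: two hypothetical CPPs (temperature in deg C, feed rate
# in mL/h) mapped to one CQA (titer in g/L) by an invented relationship.
X = rng.uniform([30.0, 0.5], [37.0, 2.0], size=(500, 2))
y = 0.3 * X[:, 0] - 1.5 * (X[:, 1] - 1.2) ** 2 + rng.normal(0.0, 0.05, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Scale the inputs, then fit a small feed-forward network
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.3f}")
```

The point is not the particular model but that the data-driven half of a hybrid replica can be assembled from off-the-shelf components.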
In the future, Smiatek expects digital process replicas to play a significant role in process development. He is less convinced about other Industry 4.0 ideas like machine learning.
“You know, I am a habilitated theoretical physicist, meaning my emotional behavior is usually under full control,” he jokes, adding, “I am not really convinced by recent developments. Enthusiasm is nice, but all coins have two sides.”
A good example involves machine learning routines like neural networks. The algorithms are more than 30 years old, and the underlying math is “really simple, meaning I really can’t be excited by their usage,” he says.
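Smiatek's point about the simplicity of the underlying math is easy to demonstrate: the forward pass of a feed-forward network is just matrix products and a nonlinearity. A sketch with random, untrained weights, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny feed-forward network: 3 inputs -> 4 hidden units -> 1 output.
# Weights are random placeholders; no training loop is shown.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden activations
    return h @ W2 + b2         # linear output layer

y = forward(np.array([[0.2, -0.5, 1.0]]))
print(y.shape)   # one sample in, one prediction out
```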
“I am more impressed by theoretical insights and deeper molecular understanding. Hence, recent analytic techniques like Fourier Transform Infrared Spectroscopy (FTIR) or MIRAS which provide more molecular information are exciting. Furthermore, a theory or a model which rationalizes product stability or cell metabolic pathways is very interesting but far out of reach.”
In his opinion, one should concentrate on the deeper understanding of molecular, biological or fundamental principles instead of process technology. “Such insights would be key to improved APIs,” he says.