January 1, 2015 (Vol. 35, No. 1)
Assembling Information via Modeling to Perform a Biomanufacturing Simulation
A central goal of bioprocess design and analysis is to determine resources required to produce the desired quantity of product. For Charles Siletti, director for planning and scheduling applications at Intelligen, resources include process equipment, materials, utilities, and labor.
Additional considerations include whether to build a new facility or use an existing plant, manufacturing costs, the time required per batch, the time between consecutive batches, which operations are likely to become bottlenecks, process and equipment changes that may improve throughput, environmental impact, and whether the fastest or the least expensive processing is best.
Siletti defines a model as a mathematical description of a process that includes calculation of durations and resource utilization. “The schedule derives from information from the model,” he explains. A scheduling model will not normally calculate durations from scientific principles such as biochemical reaction rates. Instead, such models assume durations and relationships among activities, and focus on resources.
Scheduling addresses future operations—what occurs during the next run. Scheduling tools improve productivity in two ways. Most common is debottlenecking, or identifying the constraints within the process. Constraints arise not just from single processes, but when two or more process lines running simultaneously share resources such as purified water, equipment, or operations like buffer prep. “The goal of scheduling is to identify, before a problem arises, the nature of conflicts and constraints within those shared resources,” Siletti says.
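The shared-resource conflict Siletti describes can be sketched in a few lines. The activity names, resources, and times below are invented for illustration; a real scheduling package would also handle calendars, changeovers, and many more constraints. This is only a minimal sketch of the overlap check itself:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    resource: str   # shared resource, e.g. purified water or a buffer-prep suite
    start: float    # hours from the start of the campaign
    duration: float

    @property
    def end(self):
        return self.start + self.duration

def find_conflicts(activities):
    """Report pairs of activities that book the same resource at overlapping times."""
    conflicts = []
    for i, a in enumerate(activities):
        for b in activities[i + 1:]:
            if a.resource == b.resource and a.start < b.end and b.start < a.end:
                conflicts.append((a.name, b.name, a.resource))
    return conflicts

schedule = [
    Activity("Batch 1 buffer prep", "buffer-prep suite", 0.0, 4.0),
    Activity("Batch 2 buffer prep", "buffer-prep suite", 3.0, 4.0),  # overlaps Batch 1
    Activity("Batch 1 harvest", "centrifuge", 6.0, 2.0),
]
print(find_conflicts(schedule))
# → [('Batch 1 buffer prep', 'Batch 2 buffer prep', 'buffer-prep suite')]
```

Run against a planned campaign, a check like this flags the conflict before the two batches actually contend for the suite, which is exactly the "before a problem arises" goal Siletti describes.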
Scheduling also matters in day-to-day planning of activities and for communicating the production plan to the operators or supervisors executing it. Productivity improves through basic communication. “No one will overbook resources by accident because they know the plan,” he adds.
Understanding Process Operations
For day-to-day scheduling, the model is based on an established process; engineers already have a feel for operation durations and when operations should occur. Those working in the batch-control world might think of the scheduling model as a master template for what the overall process should look like. “Then, as each batch is executed, that information is updated for all scheduled activities and projected forward,” Siletti says.
Schedules used for actual process execution require updating as the process moves forward, and after the run the data are incorporated into the batch record. “Even users who don’t have a scheduling system will likely have a batch record,” Siletti says. For example, if an activity takes more or less time than planned, uses a different resource, or starts earlier or later than expected, that information needs to be updated in the schedule because of its potential effect on future activities.
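The update step described here, feeding an actual duration back into the schedule and projecting the remaining activities forward, can be sketched as follows. The batch plan and durations are hypothetical, and real systems track far more state than a single sequential chain:

```python
def reschedule(activities, actual_durations):
    """Project a batch forward, shifting successors when an activity runs
    longer or shorter than planned.

    `activities` is an ordered list of (name, planned_duration) tuples for one
    batch; `actual_durations` maps names of completed activities to observed
    durations. Returns projected (name, start, duration) tuples.
    """
    projected = []
    t = 0.0
    for name, planned in activities:
        duration = actual_durations.get(name, planned)
        projected.append((name, t, duration))
        t += duration
    return projected

plan = [("inoculate", 2.0), ("ferment", 48.0), ("harvest", 4.0)]
# Fermentation actually ran six hours long; harvest slips accordingly.
print(reschedule(plan, {"ferment": 54.0}))
# [('inoculate', 0.0, 2.0), ('ferment', 2.0, 54.0), ('harvest', 56.0, 4.0)]
```

The same projected starts are what a scheduler would compare against other batches' resource bookings to spot newly created conflicts.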
During process development there may be questions about whether resources can accommodate a new process or product line, and what sort of capacities might be expected. Here the scheduling model is based on anticipated durations and resource requirements.
“The process must be developed to a reasonable extent before you can have a detailed schedule,” according to Siletti. “For some people modeling and simulation are interchangeable terms, but we think of modeling as the act of assembling information required to perform the simulation.”
Both modeling and simulation may occur entirely in silico, at least at project initiation. At some point, actual plant information is fed back into the scheduling program. One way to acquire such information is to go over batch records and ask relevant questions, for example, the average duration of a particular activity or operation.
“If real world information is available that’s obviously the best choice. But when we’re considering a new process design or building something that doesn’t yet exist, we would use process simulation or design calculations,” Siletti tells GEN.
Siletti is not enamored with the notion of basing a schedule on experiments. “A scheduling model is built after the process is established, using experience from the process itself. In situations where you’re building a scheduling model before you have a process, the information feeding into the scheduling model comes from process simulation or design calculations.” Another approach involves feeding pilot-scale data into a simulation, which may then be used for scheduling. The simulation package can recalculate such factors as heat transfer coefficients to production scale, from which a scheduling model will calculate plant capacity.
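The scale-up recalculation Siletti mentions can be illustrated with a lumped heat-transfer balance for a jacketed, well-mixed vessel. The vessel volumes, jacket areas, temperatures, and heat transfer coefficient below are all hypothetical, and real simulation packages use far more detailed correlations; this is only a sketch of why a step's duration recalculated at production scale differs from pilot scale (the larger tank has less jacket area per unit volume):

```python
import math

def heatup_time(volume_m3, area_m2, U, rho=1000.0, cp=4184.0,
                T0=20.0, T1=37.0, Tj=45.0):
    """Time (s) to heat a well-mixed jacketed vessel from T0 to T1 (deg C)
    with jacket temperature Tj, from a lumped energy balance:
        t = (rho * V * cp) / (U * A) * ln((Tj - T0) / (Tj - T1))
    U is the overall heat transfer coefficient (W/m^2/K)."""
    return (rho * volume_m3 * cp) / (U * area_m2) * math.log((Tj - T0) / (Tj - T1))

# Hypothetical pilot (0.1 m^3) vs. production (10 m^3) vessels.
pilot = heatup_time(0.1, 1.0, U=500.0)
prod = heatup_time(10.0, 25.0, U=500.0)
print(f"pilot: {pilot/3600:.2f} h, production: {prod/3600:.2f} h")
```

With these numbers the production heat-up takes about four times as long as the pilot step (the volume grows 100-fold but the jacket area only 25-fold), and it is that recalculated duration, not the pilot-scale one, that a scheduling model would use to estimate plant capacity.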
Scheduling models are operationally limited. By their nature, they cannot calculate or predict how changes in process conditions will affect the product or process. Estimating the impact of changes in temperature or flow rate on duration or cycle time requires a process simulation package. Limits also exist in how conflicts are managed.
Some scheduling systems optimize to achieve the lowest cycle time or maximum plant capacity, which imposes constraints on process size or complexity. Others take a nonoptimization approach, but when those packages encounter conflicts they cannot solve, the person in charge of scheduling must fix the issue manually.
Limitations also exist due to the inherent nature of bioprocesses. Thirteen years ago, Mark Marten, Ph.D., professor of chemical and biochemical engineering at the University of Maryland Baltimore County, evaluated two leading simulation packages for a production process at a major pneumonia vaccine manufacturer. The infectious agent existed in 23 serotypes, each with implications for efficacy and manufacturing. “Keeping track of all that information while transitioning from research to production was challenging,” Dr. Marten says. “They wanted software to model and document the process.”
At the time, he concluded that either software package could simulate the process but that neither completely represented everything occurring within it. He wrote that the ability of simulation software to predict scaleup was “limited.”
Given the advances in computer hardware and software since that time, and a better understanding of bioprocess scaleup, simulation and modeling have improved tremendously. Yet the complexity of biochemical processes suggests that limitations will always exist.
“In the chemical process industry you can get down to first principles when you build models,” explains Dr. Marten. “You can accurately predict outcomes.” But in cells thousands of reactions are occurring simultaneously, so a first-principles model may be unachievable. “People have developed not dynamic models but stoichiometric models that don’t predict anything but steady state, which means they are difficult to use for bioprocess control.
“In cellular systems modelers must guess at rate law parameters. That’s the fundamental difference between simple systems and vastly complex systems like cells.”
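Dr. Marten's point about stoichiometric models can be illustrated with a toy example: at steady state, the stoichiometric matrix applied to the flux vector must be zero (no metabolite accumulates), which constrains flux ratios but says nothing about how the system evolves in time. The two-metabolite, three-reaction network below is invented purely for illustration:

```python
# Toy stoichiometric matrix S: one row per metabolite, one column per reaction.
# Reactions: v1: -> A ;  v2: A -> B ;  v3: B ->
S = [
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
]

def is_steady_state(S, v, tol=1e-9):
    """True if S @ v == 0, i.e. no net accumulation of any metabolite."""
    return all(abs(sum(row[j] * v[j] for j in range(len(v)))) < tol for row in S)

print(is_steady_state(S, [2.0, 2.0, 2.0]))   # True: fluxes balance
print(is_steady_state(S, [2.0, 1.0, 1.0]))   # False: A accumulates
```

A model of this kind can say which flux distributions are feasible at steady state, but because it contains no rate laws it cannot predict how fluxes change over time, which is why such models are difficult to use for bioprocess control.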