Stem cells offer great therapeutic promise, but only if bioprocessors can effectively manage the many variables involved. James Colter, a PhD candidate in biomedical engineering at the Pharmaceutical Production Research Facility at the University of Calgary in Canada, and his colleagues report that human induced pluripotent stem cells (hiPSCs) pose many such bioprocessing challenges.

“The top challenges are coupling downstream cell functionality and clinical safety and efficacy to critical variables in the upstream process,” Colter says. “Whether we’re talking about implementing a robust process to maintain a high degree of homogeneity of a large population with a healthy iPSC phenotype, maintaining differentiated population purity and functionality, or tailoring a process for effective autologous therapy across human populations, these are all examples of challenges that encompass a large number of variables that I think we can more effectively integrate and control.”

To achieve that integration and control, Colter recommends several approaches: high-quality instrumentation for online and offline characterization, robust process protocol development, and effective utilization of the resulting data.

“I’ve heard plenty of arguments that coupling datasets between instrumentation types is unfeasible given the high degree of variability we see in the datasets, which in my mind points to the shortcomings that are currently present in protocols, culture systems, and methods of obtaining the data that I’d like to see more effectively utilized in controlling cell populations at the pluripotent state and downstream when we start to differentiate these cell populations,” he adds.

James Colter, University of Calgary

Reaching these goals, however, will not be easy. “Variability and biological heterogeneity play a role here, but the entire point is to find ways to minimize and account for the variables that give us such results,” Colter says. “This isn’t an easy challenge to address, but there are many different ways to approach the problem.”

Although online monitoring improves the bioprocessing of hiPSCs, Colter believes that the resulting data could be used more effectively.

“Coupling these systems with offline time-series data—omics and quality-assurance data—could be critical in determining key variables in the maintenance and expansion of these populations with the highest quality of phenotypic health,” he says. “These systems clearly exist and are used extensively, but I think there needs to be more integrated assessment of upstream and downstream systems and protocols, with more effort to couple clinical safety and efficacy to the preceding steps.”
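The coupling Colter describes is, at its simplest, a time-alignment problem: continuous online sensor streams must be matched to sparse offline samples taken from the same run. A minimal sketch of that step, using pandas and entirely hypothetical variable names and values (none of this comes from the article), might look like:

```python
import pandas as pd

# Hypothetical online sensor log: frequent bioreactor readings.
online = pd.DataFrame({
    "time_h": [0, 1, 2, 3, 4, 5, 6],
    "dissolved_o2_pct": [95, 93, 90, 88, 85, 83, 80],
    "ph": [7.4, 7.4, 7.3, 7.3, 7.2, 7.2, 7.1],
})

# Hypothetical offline QA samples: drawn far less frequently
# (e.g., viability counts or a pluripotency marker assay).
offline = pd.DataFrame({
    "time_h": [2, 5],
    "viability_pct": [97, 95],
    "oct4_positive_pct": [98, 96],
})

# Tag each offline sample with the most recent online reading at or
# before the sampling time, yielding one coupled time-series table
# that downstream analysis or process control can consume.
coupled = pd.merge_asof(offline, online, on="time_h", direction="backward")
print(coupled)
```

Real pipelines would add run identifiers, sensor-drift correction, and alignment tolerances, but the core idea is the same: put upstream process variables and downstream quality measurements into a single table keyed by time.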

Bioprocessors already strive for integrated assessment in producing hiPSC-based therapies, but Colter suggests “more effectively utilizing the data that is being obtained or more rigorous acquisition of data beyond the basic quality control checks and simple process maintenance automation.” Making those improvements, however, does not necessarily mean adding complexity. Colter doesn’t think “particularly complex algorithms and high-dimensionality approaches are the best answer.”

Instead, he says, “I do think there is plenty of room to improve by innovating how we interpret the data being obtained in these processes—with prior knowledge of the initial population, throughout process development and beyond—and apply it to our systems and protocols in a broader sense.”
