May 1, 2010 (Vol. 30, No. 9)

Ken Niebling, Ph.D.

Host of Therapeutic Classes Require Continuous Updating of Technology

Since biological therapeutics are derived from living organisms, their manufacture and validation present difficulties not encountered in traditional small-molecule drug development. Despite the complexities inherent in developing biologics, there is a growing number of licensed biological products on the market.

IBC’s “Bioassay and Development” meeting held earlier this month in San Francisco addressed the challenges facing researchers in academia and industry. Bioassays are critical for the development of biologics, and many companies now find themselves in a regulatory holding pattern because of poorly developed and inadequately validated potency assays.

To correct this problem, many biotech and pharma firms have earmarked a significant portion of their R&D budgets for developing high-performance bioassays that measure potency accurately and achieve product acceptance quickly with minimal cost.

Many factors affect bioassay performance, explained David Lansky, Ph.D., president of Precision Bioassay. At the meeting, Dr. Lansky spoke about how mismatches between bioassays and models result in imprecision. “Pseudoreplication during sample generation is commonplace in assay design and leads to misleading estimates of precision.”
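To see why this matters, consider a minimal simulation (an illustrative sketch, not Precision Bioassay’s method; all variance figures below are assumed): wells pipetted from the same dilution preparation share that preparation’s error, so their scatter understates the variability seen across truly independent preparations.

```python
# Hypothetical illustration of pseudoreplication: wells drawn from one
# dilution prep share the prep's error, so within-prep scatter is far
# smaller than the prep-to-prep variability that actually limits precision.
import numpy as np

rng = np.random.default_rng(0)
n_preps, n_wells = 6, 4
prep_sd, well_sd = 0.30, 0.05              # assumed: prep error dominates

prep_err = rng.normal(0, prep_sd, size=(n_preps, 1))
wells = 10.0 + prep_err + rng.normal(0, well_sd, size=(n_preps, n_wells))

pseudo_sd = wells[0].std(ddof=1)           # scatter of wells within one prep
true_sd = wells.mean(axis=1).std(ddof=1)   # scatter across independent preps

print(f"pseudoreplicate SD: {pseudo_sd:.3f}")   # ~0.05, looks reassuring
print(f"true replicate SD:  {true_sd:.3f}")     # ~0.3, the honest answer
```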

Precision Bioassay’s Xymp™ bioassay system is a solution to the problem, he said. “It reduces cost, labor, time-to-market, and developmental and regulatory risk.” The three-module software system has an optional component that utilizes robotics to facilitate good plate layouts using statistical designs that implement structured randomization.

“Blocking is a hugely powerful tool, and including the 96-well plate as a block is a useful idea that escapes most bioassay design models,” Dr. Lansky said. He explained that “randomization by the Xymp robotic system allows for the use of more complex software models to analyze the data.”
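A few lines of Python convey the flavor of such a layout (a hypothetical sketch, not the Xymp software itself): each plate receives its own random assignment of samples to rows and dilutions to columns, so the plate acts as a block while remaining compatible with multichannel pipetting.

```python
# Hypothetical strip-plot randomization for 96-well plates (8 rows x 12 cols):
# whole rows get samples and whole columns get dilutions, re-randomized
# independently for each plate, which serves as the experimental block.
import random

SAMPLES = ["REF", "S1", "S2", "S3", "S4", "S5", "S6", "S7"]   # one per row
DILUTIONS = [2 ** -i for i in range(12)]                      # one per column

def strip_plot_layout(seed):
    rnd = random.Random(seed)
    row_samples = rnd.sample(SAMPLES, len(SAMPLES))        # samples -> rows
    col_dilutions = rnd.sample(DILUTIONS, len(DILUTIONS))  # dilutions -> cols
    return {(r, c): (row_samples[r], col_dilutions[c])
            for r in range(8) for c in range(12)}

# A fresh randomization per plate preserves the block structure
layouts = {plate: strip_plot_layout(seed=plate) for plate in (1, 2, 3)}
print(layouts[1][(0, 0)])   # (sample, relative dose) for well A1 of plate 1
```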

Xymp’s second module provides sophisticated outlier detection, which is important because removing outliers before re-scaling the data is a poor strategy, according to Dr. Lansky. “Re-scaling the data usually removes 80 percent of the problems and restores at least near symmetry if not near normality. Researchers need to look at the sources of the variance.”

A common approach in bioassay analysis is to examine small groups of replicates (or pseudoreplicates) for outliers. A much better approach uses all of the data in the assay together, but with milder assumptions about the shape of the dose-response curve (i.e., any smooth curve) than those used in the models fitted to assess similarity and estimate potency. Reproducible randomized experimental designs with more samples allow more complex statistical methods to identify outliers.
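Under simple assumptions, that whole-assay strategy might look like the following sketch (with a four-parameter logistic standing in for “any smooth curve”; the cutoff and function names are illustrative): fit one curve to all wells, then flag wells whose robust residuals are unusually large.

```python
# Hypothetical whole-assay outlier screen: fit a single smooth dose-response
# curve to all wells, then flag points with large residuals relative to a
# robust (median absolute deviation) estimate of scale.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(logdose, lower, upper, logec50, slope):
    """Four-parameter logistic, used here only as a convenient smooth curve."""
    return lower + (upper - lower) / (1 + np.exp(slope * (logec50 - logdose)))

def flag_outliers(logdose, response, cutoff=4.0):
    p0 = [response.min(), response.max(), np.median(logdose), 1.0]
    params, _ = curve_fit(four_pl, logdose, response, p0=p0, maxfev=10_000)
    resid = response - four_pl(logdose, *params)
    mad = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > cutoff * mad      # True marks a suspect well
```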

The third Xymp module implements equivalence testing for similarity via mixed-effects models and delivers graphical summaries and potency estimates, Dr. Lansky explained. He added that there are often important components of variance associated with the assignment of samples to rows or of dilutions to columns, which mixed-effects modeling addresses.
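One way such a model might be expressed (a sketch using statsmodels; the column names and formula are assumptions, not the Xymp implementation) is with fixed dose-response terms per sample and variance components for plate, row, and column assignments:

```python
# Hypothetical mixed-effects fit: fixed effects describe each sample's
# dose-response curve; random effects absorb plate-to-plate differences and
# the variance tied to sample-to-row and dilution-to-column assignment.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("assay_wells.csv")   # assumed columns: response, logdose,
                                      # sample, plate, row, col

model = smf.mixedlm(
    "response ~ logdose * sample",            # fixed effects
    data=df,
    groups="plate",                           # plates as blocks
    vc_formula={"row": "0 + C(row)",          # row-assignment variance
                "col": "0 + C(col)"},         # column-assignment variance
)
fit = model.fit()
print(fit.summary())   # variance components appear below the fixed effects
```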


Raw cell culture bioassay response data from plates: The colors indicate the sample (dark blue is reference) while the color intensity indicates the nominal concentration in each well. There is a different random assignment of samples to rows in each plate and a different random assignment of dilution to column in each plate. This structured strip-plot design is practical to set up by hand, is relatively easy to construct with a lab robot, allows use of a multichannel pipette, protects the assay against location effects, and allows for statistical analysis. [Precision Bioassay]

Contamination

Ned Mozier, Ph.D., a director at Pfizer, shed some light on eliminating sources of contamination that feed into bioassay potency measurements. Dr. Mozier focused on the detection of host-cell proteins (HCPs) and their removal to acceptable levels. He presented a case study that zeroed in on HCP analytical testing and its use in risk assessments for recombinant therapeutic proteins.

Dr. Mozier believes that the use of multiple technologies to detect and evaluate the virtually unlimited variety of HCPs that might occur during bioprocess development leads to the development of robust and well-controlled bioprocesses, which, in turn, minimizes risk as new drugs are brought to market.

The ultimate suitability of an HCP test is based on the results obtained in both detecting and quantifying residual HCP levels in late-stage or registration batches made at commercial scale, he reported. “Risk due to HCPs has traditionally been dependent on a triad of factors,” Dr. Mozier explained. “HCPs depend on the route of administration, the patient’s health status, and host-cell expression and production systems.”

Underestimation and overestimation are both significant issues in host-cell protein detection. “Overestimation is typically caused by nonspecific binding to HCPs, leading to false positives,” he noted. Underestimation results from HCPs escaping detection, because it is difficult to generate assays that detect the full spectrum of HCPs.
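One generic way to probe for such bias (a simple sketch of spike recovery, not Pfizer’s protocol; the numbers are invented) is to add a known amount of HCP standard to a sample and check how much of it the assay reports back:

```python
# Hypothetical spike-recovery check for an HCP immunoassay. Recovery well
# below 100% suggests underestimation (HCPs escaping detection); recovery
# well above 100% can signal nonspecific binding and overestimation.
def spike_recovery(measured_spiked, measured_unspiked, spiked_amount):
    """Percent of the spiked HCP amount that the assay actually reports."""
    return 100.0 * (measured_spiked - measured_unspiked) / spiked_amount

# Example: a sample reads 12 ng/mL alone and 58 ng/mL after a 50 ng/mL spike
print(f"{spike_recovery(58.0, 12.0, 50.0):.0f}% recovery")   # 92% recovery
```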

Future goals for scientists working in bioassay development include the advancement of analytical methods that can detect all of the HCPs present in biologics, improvements in testing sensitivity, and development of new testing systems to better predict the potential immunogenicity risks posed by the presence of HCPs in biologics. The absence of human data and the nearly infinite variety of HCPs potentially present make the latter a challenging problem.

Mary Hu, director of bioassay development and process analytics at Seattle Genetics, emphasized that “product-related impurities are typically defined by the lack of similarity in potency compared to a reference material using binding assays and functional bioassays. Careful consideration should be given to choosing an appropriate binding assay format and to calculating relative potency values when lack of parallelism occurs between product variants and reference material.”
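To make the parallelism point concrete, here is a sketch of a constrained relative-potency fit (an illustration under textbook assumptions, not Seattle Genetics’ method): reference and test share a four-parameter logistic shape and differ only by a horizontal shift, whose antilog is the relative potency. When the curves are not parallel, no single shift fits both, which is exactly the situation Hu cautions about.

```python
# Hypothetical parallel 4PL potency estimate: reference and test share lower
# asymptote, upper asymptote, and slope; log relative potency is the shift.
import numpy as np
from scipy.optimize import curve_fit

def paired_4pl(x, lower, upper, logec50, slope, log_rp):
    logdose, is_test = x                  # is_test: 0 = reference, 1 = test
    shift = logec50 + is_test * log_rp    # parallel curves differ by a shift
    return lower + (upper - lower) / (1 + np.exp(slope * (shift - logdose)))

def relative_potency(logdose, is_test, response):
    p0 = [response.min(), response.max(), np.median(logdose), 1.0, 0.0]
    params, _ = curve_fit(paired_4pl, (logdose, is_test), response, p0=p0)
    return np.exp(-params[-1])            # >1 means the test is more potent
```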

Response Surface Method

Liming Shi, Ph.D., senior staff scientist at Amylin Pharmaceuticals, shared his insights on successful bioassay development. “In order to develop a robust biological assay at low cost, the parameters and the process must be optimized to make it robust against noise factors. Design of experiment studies are used for assay development, especially for complicated multiple factor interaction assays,” he said.

“Response surface method (RSM) is a statistical technique for modeling responses via polynomial equations. The model becomes the basis for 2-D contour maps and 3-D surface plots for the purpose of optimization. Using statistical techniques, the data can be combined to produce a response surface that simultaneously optimizes all endpoints over the studied range for all factors,” Dr. Shi explained.

“During assay development, especially for cell-based potency assays, the major steps are getting the engineered cell line, establishing the biological activity detection system, screening and finding the significant parameters, optimizing the process settings, and eventually, performing the qualification to demonstrate assay accuracy/recovery, specificity, precision, linearity/range, and robustness.

“We screened multiple parameters and locked out three of the most significant parameters, which evoked the biggest changes in assay response. We used second-order polynomial functions to describe the response, which allowed us to determine the interactions between parameters.”

When using RSM for biological drug development, Dr. Shi explained that “once the critical parameters have been determined, the next step is to obtain the optimum settings of the variables. For example, we are interested in determining the optimum levels of cell number and drug stimulation such that yield is maximized. For every combination there is a yield.”
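A minimal sketch of that workflow (illustrative factor names and toy data, not Amylin’s assay): fit a second-order polynomial with an interaction term to DoE results for two coded factors, then search the fitted surface for the maximizing settings.

```python
# Hypothetical response-surface fit for two coded factors (cell number and
# drug stimulation): quadratic model with interaction, then a grid search
# over the fitted surface to locate the yield-maximizing settings.
import numpy as np

rng = np.random.default_rng(1)
cells = rng.uniform(-1, 1, 30)          # cell number, coded units
stim = rng.uniform(-1, 1, 30)           # drug stimulation, coded units
yield_ = (5 - (cells - 0.3)**2 - (stim + 0.2)**2
          + 0.5 * cells * stim + rng.normal(0, 0.05, 30))   # toy response

# Second-order polynomial: intercept, linear, interaction, quadratic terms
X = np.column_stack([np.ones_like(cells), cells, stim,
                     cells * stim, cells**2, stim**2])
coef, *_ = np.linalg.lstsq(X, yield_, rcond=None)

g1, g2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
surface = (coef[0] + coef[1]*g1 + coef[2]*g2
           + coef[3]*g1*g2 + coef[4]*g1**2 + coef[5]*g2**2)
i, j = np.unravel_index(surface.argmax(), surface.shape)
print(f"predicted optimum near cells={g1[i, j]:+.2f}, stim={g2[i, j]:+.2f}")
```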

Jill Crouse-Zeineddini, Ph.D., principal scientist at Amgen, summed up the future challenges of bioassay development during the next decade. “Potency assay development has come a long way. We have seen a progression from the use of animal models to the use of cell culture, and we are now seeing implementation of non-cell-based assays for potency testing.”

New technologies will continue to drive potency method selection, she insisted. “Furthermore, the use of automation, whether applied in a modular approach or in a fully automated capacity, will become more commonplace.” Lastly, she said that “it will be important for potency method development to keep pace with the vast number of therapeutics being developed. New classes of therapeutics, such as those designed to have dual functionality, are one example. Developers will need to ensure potency methods are relevant and fit for purpose to support these types of therapeutics.”

Takeaways: Bioassay Risk Factors

  • Process capability
  • Maximum dose
  • Route of administration (subcutaneous, intramuscular, or intravenous)
  • Frequency of dosing (acute or chronic indications)
  • Preclinical and clinical data