March 15, 2013 (Vol. 33, No. 6)

MaryAnn Labant

A crucial step in drug development, bioassay development can take from three months to a year. Time, personnel, and budget resources, along with reagent, drug lot, and cell-line availability, constrain the process.

Assay reliability is paramount; even seemingly subtle variations can make the difference between a high-performing and a highly variable assay.

Challenges in bioassay development will be the focus of the upcoming IBC “Symposium on Development, Validation and Maintenance of Biological Assays”.

“Ideally, you have responsive and nonresponsive analytical cell banks, key reagents, and several drug lots in place from the start. Additionally, having historical experience with an assay starting from the discovery stage makes assay development and optimization quicker and increases the chances that the assay will run reproducibly when transferred to QC,” says Ken Lewis, Ph.D., principal development scientist, CMC Biologics.

“Frequently, this is not the case and often the analytical tools need to be developed and brought on line very quickly. Depending on the type of drug, for example a biosimilar, there may also be high assay performance expectations from the onset. This makes bioassay development very challenging.”

Trend charts provide empirical evidence of how a bioassay is performing and can be useful communication tools, particularly when troubleshooting an assay. To identify key reagents and procedural steps, factors such as reagent lot and cell passage number are tracked alongside performance parameters such as EC50/IC50, daily maximum and minimum responses, and slope, yielding variance data from multiple plates run by multiple analysts on multiple days.
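
As an illustration of this kind of tracking, the sketch below derives 3-sigma limits for EC50 from a set of historical runs and flags later runs that fall outside them. It is a minimal Python example; the column names (analyst, reagent_lot, ec50), the data, and the choice of a 3-sigma rule are assumptions, not details from the article.

import pandas as pd

def control_limits(history: pd.DataFrame, metric: str = "ec50", k: float = 3.0):
    """Derive a center line and k-sigma control limits from historical runs."""
    center = history[metric].mean()
    sigma = history[metric].std(ddof=1)
    return center - k * sigma, center, center + k * sigma

# Hypothetical qualification runs used to set the limits
history = pd.DataFrame({
    "analyst": ["A", "B", "A", "B", "A", "B"],
    "reagent_lot": ["L1", "L1", "L1", "L2", "L2", "L2"],
    "ec50": [1.02, 0.97, 1.05, 0.99, 1.04, 1.01],  # ng/mL, made up
})
lcl, center, ucl = control_limits(history)

# New runs are trended against the fixed limits; large shifts prompt investigation
new_runs = pd.DataFrame({"reagent_lot": ["L3", "L3"], "ec50": [1.03, 1.62]})
new_runs["out_of_trend"] = (new_runs["ec50"] < lcl) | (new_runs["ec50"] > ucl)
print(new_runs)

In practice each tracked parameter would get its own chart, and any shift would be examined against the factor columns (reagent lot, cell passage number, analyst) recorded with the run.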

The use of platform assays is another common approach; examples include complement-dependent cytotoxicity (CDC) and antibody-dependent cell-mediated cytotoxicity (ADCC) assays. With platform assays, many factors are the same; the main difference is the target cell line. This facilitates development and troubleshooting.

Bioassays should be able to detect both hypo- and hyper-potent materials. Although dilutional linearity is used early in the development process to simulate variable potency samples, well-characterized force-degraded samples should be used to challenge the assay and to test the effects of different types of drug damage.

Force-degraded samples can be prepared using elevated temperature storage, oxidation, agitation, different pH, etc. to damage the protein. Studies using these samples test not only the potency assay but also other proposed stability-indicating release and characterization assays.

“One of the assumptions for potency assays is that the shape of the dose response is the same for reference material and the test samples. Sometimes a degraded sample may have a mixture of fully active and fully inactive product, merely shifting the response curve to the right.

“In other cases, such as an antibody with two binding arms, if both binding arms are needed for full potency but one has been damaged, binding affinity could change along with the shape of the dose-response curve.

“The potency assay should be able to demonstrate that a degraded test product is no longer behaving the same way as the reference material. Degraded samples, which are unique to every project and dependent on the drug mechanism of action, are crucial,” concludes Dr. Lewis.
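
The curve behaviors described above can be illustrated with a four-parameter logistic (4PL) model, a common choice for potency dose-response data, although the article does not name a specific model. In this hypothetical Python sketch, a 50/50 mixture of fully active and fully inactive product merely shifts the fitted EC50 to the right, while a damaged binding arm also changes the upper asymptote and Hill slope, so the curve is no longer parallel to the reference.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, lower, upper, log_ec50, hill):
    """4PL response on a log10 concentration scale."""
    return lower + (upper - lower) / (1.0 + 10 ** (hill * (log_ec50 - log_conc)))

log_conc = np.linspace(-2, 2, 9)   # hypothetical dilution series, log10(ng/mL)
reference = four_pl(log_conc, 0.05, 1.0, log_ec50=0.0, hill=1.0)

# Sample A: 50% fully inactive product -> effective dose halved, curve shifts
# right, shape unchanged (still parallel to the reference).
sample_a = four_pl(log_conc + np.log10(0.5), 0.05, 1.0, log_ec50=0.0, hill=1.0)

# Sample B: a damaged binding arm lowers affinity and the maximal response and
# flattens the slope -> the curve is no longer parallel to the reference.
sample_b = four_pl(log_conc, 0.05, 0.8, log_ec50=0.5, hill=0.6)

for name, resp in [("reference", reference), ("sample A", sample_a), ("sample B", sample_b)]:
    popt, _ = curve_fit(four_pl, log_conc, resp, p0=[0.0, 1.0, 0.0, 1.0])
    print("%s: EC50 = %.2f, Hill slope = %.2f" % (name, 10 ** popt[2], popt[3]))

For sample A only the EC50 moves; for sample B the asymptote and slope change as well, which is the kind of behavior a parallelism assessment should flag.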


At the center of CMC Biologics sits the analytical and formulation group, which develops bioassays and formulations that facilitate both upstream and downstream processes.

Systematic In-Depth Analyses

Rigorous, systematic approaches to problem-solving, such as design of experiments (DOE) techniques, enable scientists to determine, simultaneously, both the individual and interactive effects of parameters that could affect potency assay results. DOE provides insight into the interactions between design elements—helping to produce a robust result.
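
As a minimal illustration of the approach, the sketch below analyzes a replicated 2×2 full factorial in Python, estimating the main effects of two coded factors and their interaction by ordinary least squares. The factors (incubation time and cell number), the coded levels, and the response values are hypothetical assumptions chosen only to show the mechanics.

import pandas as pd
import statsmodels.formula.api as smf

# Coded levels: -1 = low, +1 = high for each factor; responses are made up.
doe = pd.DataFrame({
    "incubation": [-1, -1, 1, 1, -1, -1, 1, 1],
    "cells":      [-1, 1, -1, 1, -1, 1, -1, 1],
    "potency":    [92, 101, 96, 118, 90, 103, 97, 120],  # replicated 2x2 design
})

# "incubation * cells" expands to both main effects plus their interaction.
model = smf.ols("potency ~ incubation * cells", data=doe).fit()
print(model.params)   # effect estimates
print(model.pvalues)  # which effects look real versus noise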

Despite a focus on optimization, variability around response curves may still be observed. A thorough re-examination of the steps involved in the assay method can help to identify simple changes and improvements.

“Good, specialized statistical input is crucial for design and analyses of cell-based potency assays and is very helpful when using DOE to help guide assay development,” comments Souravi Ghosh, Ph.D., senior scientist supervisor, CSL Limited.

At the initial stages of assay qualification, both operational factors and random effects can contribute to assay variation. In-depth analyses of every assay condition can lead to identification of factors that may influence variability.

According to Dr. Ghosh, examples of operational factors include incubation time, cell number, pipetting and dilution errors, as well as cell and plate-handling techniques. These operational variations can be minimized by identifying allowable ranges for each factor.

Variation due to random effects, or conditions over which the operator or method has very little control, can be handled through appropriate experimental design and analysis.

“In the case of a cell-based potency assay that was developed for an antibody-based drug, outlier detection methods were used to remove very unusual observations. This helped reduce the variability observed in response curves.

“Randomization of sample positions and assaying each sample on each of several plates allowed us to get separate estimates for the variation associated with dilution, cells, and plates. These data helped focus our attention on portions of the assay procedure that were causing variation.

“Careful scrutiny of assay conditions can lead to improvements in management of cell-based assay variability; even small changes in the basic assay method can reduce variability of cellular responses,” explains Dr. Ghosh.
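
One simple way to obtain separate variance estimates of the kind described above is a one-way random-effects analysis. The sketch below uses method-of-moments (ANOVA) estimators to split plate-to-plate from within-plate variation; the data, column names, and balanced design are hypothetical, and the article does not specify which statistical model was actually used.

import numpy as np
import pandas as pd

def plate_variance_components(df: pd.DataFrame, value: str = "potency",
                              group: str = "plate") -> dict:
    """Method-of-moments variance components for a balanced one-way layout."""
    groups = df.groupby(group)[value]
    n = groups.size().iloc[0]                    # replicates per plate
    ms_between = n * groups.mean().var(ddof=1)   # between-plate mean square
    ms_within = groups.var(ddof=1).mean()        # pooled within-plate mean square
    return {
        "within_plate_var": ms_within,
        "between_plate_var": max((ms_between - ms_within) / n, 0.0),
    }

# Hypothetical results: three plates, four replicates each
data = pd.DataFrame({
    "plate": np.repeat(["P1", "P2", "P3"], 4),
    "potency": [98, 101, 99, 102, 105, 107, 104, 106, 95, 97, 96, 94],
})
print(plate_variance_components(data))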

Developing Analytical Cell Banks

Cells and cell-derived reagents require distinct characterization and control measures to ensure operational consistency over time. Homogeneous, stable analytical cell banks ensure that starting cellular material for each assay is as consistent as possible.

“Well-characterized cell banks for analytical methods should be the norm across the industry. In practice the state of cell banks often varies widely across, and even within, companies, especially during the early stages of clinical development when analytical methods are first being transferred from research into development/QC,” adds Jonathan Zmuda, Ph.D., associate director, Life Technologies.

Cell banks used for producing biotherapeutics are not the same as analytical cell banks used for testing biotherapeutics. The latter provide a continuous supply of viable cells to generate accurate, reliable results within specified test methods, or to provide the cell-derived reagents used in those methods.

“Industry relies heavily on CHO and NS0 cell lines for the production of biotherapeutics. Bioproduction cell banks are costly to produce and are characterized in far greater detail than is required of cell banks used in analytical methods. In addition, CHO and NS0 are rodent cell lines; such production cell lines are rarely relevant to the mechanism of action of a biotherapeutic targeted against human disease and are therefore not typically used for analytical purposes,” continues Dr. Zmuda.

Detailed guidance on the generation and testing of bioproduction cell banks is provided in ICH Q5D and FDA Points to Consider in the Manufacture and Testing of Monoclonal Antibody Products for Human Use.

Since cells from analytical cell banks never contact the product manufacturing stream, patient safety is not directly at risk from these cell banks.

“Employing the same rigor of testing to analytical cell banks is costly, time consuming, and unnecessary, as the intended purpose of cells used for analytical methods differs from that of cells used for bioproduction. However, in addition to a thorough characterization of the cells’ growth patterns, it is recommended that analytical cell banks minimally undergo identity testing to confirm the cell type, sterility and mycoplasma testing to ensure the cells are free of contamination, and testing to demonstrate that they are responsive in the analytical methods for which they will be used.”


Proper generation, characterization, and storage of cell banks is critical to ensure the long-term performance of cell-based assays and assays that use cell-derived reagents. [Life Technologies]

The Ongoing Parallelism Discussion

“Difference testing is the current practice for demonstrating parallelism and claims that two curves are parallel if a statistical difference between the curves cannot be found. The disadvantage to this approach is that two curves that are not statistically different are not necessarily similar, from a scientific perspective,” explains Todd Coffey, Ph.D., senior CMC statistician, and Mary Hu, Ph.D., director, bioassay development and process analytics, Seattle Genetics.

“Two curves that are highly precise can often be shown to be different when there is little scientific relevance to the differences between them, frustrating bioassay scientists.”

Recently, the United States Pharmacopeia (USP) published three guidance documents containing recommendations for bioassays, and one of their recommendations was to replace difference testing with the equivalence testing paradigm.

In bioassays, a relative potency calculation requires two parallel concentration-response curves. Because of random variability, even curves that are truly parallel will show slight observed differences. Equivalence testing’s objective is to determine whether the differences between curves are within the limits of scientific irrelevance and thus can be considered parallel, or whether the differences are so large that the curves cannot be considered parallel.

Equivalence testing is typically implemented by comparing a metric for parallelism to boundaries. If the parallelism metric is within the limits, the two curves are considered parallel.

The boundary that determines scientific relevance is called the equivalence limit, a critical parameter that is challenging to define. Often, the boundary is unknown, and historical or preliminary data are aggregated and assessed to set the limits based on what could be expected from previous bioassays, or what might be predicted from initial testing of a particular bioassay.
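
A minimal sketch of the equivalence-testing logic, under stated assumptions: take a parallelism metric (here the difference in fitted Hill slopes; ratio-based metrics are also used), form its 90% confidence interval, and declare the curves parallel only if the whole interval lies inside the pre-set equivalence limits. The metric, limits, and numbers below are illustrative, not recommendations from the USP chapters.

import math

def equivalence_check(slope_ref, se_ref, slope_test, se_test,
                      lower_limit=-0.5, upper_limit=0.5, z=1.645):
    """TOST-style check: 90% CI on the slope difference versus equivalence limits."""
    diff = slope_test - slope_ref
    se_diff = math.sqrt(se_ref ** 2 + se_test ** 2)
    ci_low, ci_high = diff - z * se_diff, diff + z * se_diff
    parallel = lower_limit < ci_low and ci_high < upper_limit
    return {"difference": diff, "ci": (ci_low, ci_high), "parallel": parallel}

# Fitted Hill slopes and standard errors for reference and test curves (made up)
print(equivalence_check(slope_ref=1.05, se_ref=0.08, slope_test=0.98, se_test=0.09))

As the article notes, the hard part is choosing the limits themselves, which are typically derived from historical or preliminary curves known to behave as parallel.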

“There are obstacles to equivalence limit implementation. We worked with our software vendor to customize existing templates within the software package,” explain Drs. Coffey and Hu.

“Extensive data review and statistical analysis were used to select a set of curves that were representative of assay variation. After calculating the equivalence limits using this dataset, we then tested their appropriateness against curves that should have theoretically been nonparallel.

“During the assay lifecycle, other sources of variation, such as additional operators, different lots of reagent, or variation in cell performance, may introduce changes in assay variability, which affect the equivalence limits. We developed an assay-monitoring plan and phased approach to reassess the limits throughout the assay lifecycle.

“Due to the recent publication of the USP guidance and the variety of methods for constructing the equivalence limits, a consensus has not yet been reached on many of the technical details, including statistical approaches. As statisticians and bioassay scientists work together, we are looking forward to seeing advancements in best practices in the coming years.”
