November 15, 2017 (Vol. 37, No. 20)

Kathy Liszewski

Developers Have a Range of Options When Optimizing Bioassays for Biologics

A good bioassay, like a good marriage, joins two partners in a mutually beneficial bond. In the case of bioassays, the two partners are optimal assay design and development on one side and accurate statistical analysis on the other, and together they form a lasting partnership.

At the Cambridge Healthtech Institute’s fifth annual Optimizing Bioassays for Biologics conference, held in October 2017 in Washington, DC, experts discussed the many challenges facing the field and weighed in on how to avoid costly mistakes.

Suggested solutions include automating potency bioassay workflows to save time and money, employing carefully selected cell-based assays that reduce development time and satisfy requirements for comparability and quality control, and establishing critical collaborations with expert statisticians and IT personnel.

Avoiding Costly Mistakes

Early in the process of assay development, scientists often must rely on limited product production experience, meager clinical experience, and incomplete knowledge about assay capabilities. “Despite those drawbacks, optimizing bioassays for biologics still requires setting realistic project specifications,” advises David Lansky, Ph.D., president, Precision Bioassay.

He continues, “Scientists often make the biggest mistakes in initial assay design. Examples include always using an assay template with replicates (or more often pseudo-replicates) of each sample and dilution combination in adjacent locations. Another issue is having sample groups and dilutions arranged in ways that are likely to be confounded with location or sequence effects. These designs do not protect estimates of nonsimilarity or potency against bias due to location or sequence effects.”

But there is a solution, according to Dr. Lansky: “Designs that separate replicates into blocks not only protect against these likely sources of bias, but also support variance components analyses that can inform developers about where to focus effort to improve the assay. While changing templates by hand is challenging—it requires even more focus and attention than usual—random placement of samples can be easily achieved using a pipetting robot.

“For perspective: Clinical results submitted to the FDA from nonrandomized trials are generally ignored. Why should important analytical bioassays be held to lower standards?”
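To make the randomized-block idea Dr. Lansky describes concrete, here is a minimal Python sketch; the sample names, dilution series, and block geometry are hypothetical. It places each replicate of every sample-dilution combination in a different block of a 96-well plate and randomizes its position within that block:

```python
# Minimal sketch (hypothetical samples and plate geometry): assign each
# replicate of every sample x dilution combination to a different block
# of a 96-well plate, randomizing its placement within that block.
import random

samples = ["Reference", "Test"]            # hypothetical sample names
dilutions = [1, 2, 4, 8, 16, 32, 64, 128]  # 8-point dilution series

# A 96-well plate split into three 8x4 column blocks (rows A-H),
# giving one replicate of every combination per block.
rows = "ABCDEFGH"
blocks = [[f"{r}{c}" for r in rows for c in cols]
          for cols in ([1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12])]

combinations = [(s, d) for s in samples for d in dilutions]

layout = {}
for block_id, wells in enumerate(blocks, start=1):
    chosen = random.sample(wells, len(combinations))  # random wells within the block
    for (sample, dilution), well in zip(combinations, chosen):
        layout[well] = (sample, dilution, block_id)

for well in sorted(layout):
    sample, dilution, block_id = layout[well]
    print(f"{well}: {sample} 1:{dilution} (block {block_id})")
```

Because each replicate lands in a different region of the plate, location effects that differ between blocks can be estimated and separated from the potency estimate rather than biasing it, which is the protection and variance-components insight Dr. Lansky points to.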

Another mistake is not paying enough attention to establishing state-of-the-art statistical analysis. “Developing these analyses is a bit harder and less well understood,” observes Dr. Lansky. But there are solutions here as well. “The software chosen must include great graphics and highly selective statistics,” he continues. “Combining comprehensive statistical analysis with a well-devised experimental design leads to high precision. Great precision cuts costs and translates to high throughput, which gets the products to market sooner.”

Precision Bioassay offers its Xymp® bioassay solution, which is designed to complement the company’s consulting services. Dr. Lansky explains, “We work closely with a sponsor’s management and lab teams to help refine assay design including development, robustness, validation, method transfer, etc. In the process, we capture the statistical features of the assay SOP in an ‘assay protocol.’ Data summaries are concisely displayed to enable bench scientists, managers, QA, and regulatory staff to quickly understand an assay.”

Dr. Lansky’s take-home message is that companies must carefully weigh early decisions about assay development. “Consulting with experts can be a big help. Careful assay design can go far to protect you and speed your course to market.”

Shrinking Assays Enhances Efficiency

Potency bioassays tend to be complex, costly, and time-consuming. Because of this, there is a push to incorporate automation to save time and money.

“We are always wondering how to do more with less,” notes Robyn Beckwith, Ph.D., associate scientist, analytical development and QC, product technical development, Genentech. “We have developed a strategy that automates workflows for potency assays using a novel low-volume dispensing system (LVDS).”

Dr. Beckwith and colleagues built a unique system that leverages contactless liquid handling technologies, including acoustic ejection and air-pressurized microvalve hardware, to dispense nanoliter- and microliter-scale volumes in an automated assay workflow. She reports, “We miniaturized and automated a potency assay by integrating multiple off-the-shelf liquid handling instruments into a larger system controlled by a unified scheduling software.”

To establish proof-of-concept for the system, Dr. Beckwith evaluated an automated homogeneous enzymatic potency assay in miniaturized 384-well format versus the assay performed manually in 96-well format.

“We demonstrated equivalence of performance in both assays across a linear range. Even with the differences in workflows, we obtained comparable results. This is especially remarkable because incorporating acoustic ejection technologies allows dispensing volumes as low as 2.5 nanoliters. This provides tremendous benefits, especially when moving into the high-throughput arena. Overall, our system can process more samples, is more economical, and performs comparably to manual potency assays.”

The next step will be establishing proof-of-concept for cell-based assays.

“Working with cells is more complex, but we believe the system will provide suitable performance. Further, because it is flexible and customizable, we have the potential for multiplex assays,” she says. “We are excited to share information about this one-of-a-kind system that other scientists could also adapt and utilize for their assays.”

Thaw-and-Use Cell-Based Assays

In order to beat the competition, biosimilar and biobetter drug developers must move quickly to market. Cell-based assays (CBAs) facilitate this by reducing assay development time and satisfying the requirements for comparability and QC lot release.

“Biotherapeutics manufacturers have to implement a panel of assays and tests for both drug product and drug substance,” declares Alpana Prasad, Ph.D., product manager, Eurofins DiscoverX.

“Unfortunately, assay complexity as well as the qualification process can significantly add to the development timelines,” she continues.

“For example, a cell-proliferation assay using human umbilical endothelial cells can take over two weeks and still suffers from variability and low specificity. For some biologics, where no CBAs are available, utilizing animal models takes even longer.”

Although CBAs are becoming the method of choice for potency testing, a question often emerges as to which CBA would provide the best model to test for potency of a given drug. “An ideal cell-based potency assay should mimic the therapeutic’s MOA [mechanism of action], as well as be stability-indicating,” advises Dr. Prasad.

“Often, it is more efficient to implement a commercially available MOA-reflective CBA that reduces qualification and validation timelines and provides overall cost reduction.”

Eurofins DiscoverX has developed more than 36 off-the-shelf qualified bioassays that rely on a receptor’s native biology. “These quantitative and robust bioassays provide a readout of the drug’s MOA and are easily scalable utilizing thaw-and-use cryopreserved cells,” Dr. Prasad comments.

The bioassay kits differ from methods using continuously cultured cells. Dr. Prasad elaborates: “Continuous-culture assays are commonly used and often suffer from assay variability in QC settings, as well as having a higher per-sample cost. However, we have demonstrated equivalent assay performance with cryopreserved cells compared with continuous culture cells. The bioassay kits include all reagents needed for the assay and allow the scientist to use the cells when they need them and not have to keep growing and maintaining cell lines.”

Dr. Prasad’s conclusion is that “using these qualified ‘plug-and-play’ cell-based bioassay kits provides an assay that anyone can set up and that delivers superior performance with high reproducibility while saving time and money. Also, one can skip difficult and time-consuming method development and launch right into assay validation.”

Statistical Analysis: Do the Math

A critical early step in a product’s development is designing a bioassay that can accurately assess its relative potency and biological activity. However, exactly how to set up and interpret such assays can be vexing. Erica Bortolotto, Ph.D., senior scientist, bioassay development, analytical sciences for biologicals, UCB Pharma, warns, “Failure to assess appropriate tests such as parallelism generates meaningless relative potency results that cannot be reported or interpreted.”

A common way to begin the process involves establishing dilution assays that compare product to standard. Such bioassays should generate a quantitative response showing a sigmoidal log-dose or parallel-line dose relationship (Figure 1). These can be analyzed by employing standard statistical methods.

The question is which method to utilize to accurately represent the information generated. Approaches include fitting a nonlinear dose-response model directly to the data or examining slope ratios.

According to Dr. Bortolotto, “In order to determine which statistical test works best, we follow both the guidance of the USP as well as evidence from our own determinations. For example, in the past 10 years, many groups only considered the linear aspect of dilution curves. However, this approach neglects important parts of the dose-response curve, such as the upper asymptote.”
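For readers who want a feel for the parallel-line arithmetic on the linear portion of the curve, the Python sketch below fits a common slope with separate intercepts to simulated log-dose data and converts the intercept shift into a relative potency. The doses, slope, and noise level are invented for illustration; a real analysis would also test parallelism and, as Dr. Bortolotto notes, account for the asymptotes.

```python
# Minimal parallel-line sketch on simulated data: fit a common slope with
# separate intercepts for Reference and Test over the linear log-dose range,
# then read relative potency off the horizontal shift between the two lines.
import numpy as np

rng = np.random.default_rng(0)
log_dose = np.log10([1, 2, 4, 8, 16])             # linear portion only
true_log_rp = np.log10(1.25)                      # simulated 125% potency
ref = 10 + 5 * log_dose + rng.normal(0, 0.2, 5)
test = 10 + 5 * (log_dose + true_log_rp) + rng.normal(0, 0.2, 5)

# Design matrix columns: [intercept_ref, intercept_test, common slope]
x = np.concatenate([log_dose, log_dose])
y = np.concatenate([ref, test])
group = np.concatenate([np.zeros(5), np.ones(5)])
X = np.column_stack([1 - group, group, x])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b_ref, b_test, slope = beta
log_rp = (b_test - b_ref) / slope                 # horizontal shift between lines
print(f"Estimated relative potency: {10 ** log_rp:.2f}")
```

In this parameterization, the log relative potency is simply the intercept difference divided by the common slope, which is why a defensible claim of parallelism is a prerequisite for reporting the result.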

An additional consideration to keep in mind is establishing measures of potency with an assay that will be used as the product continues development. Dr. Bortolotto advises, “Although a binding ELISA is faster to develop for early phases, often one subsequently moves to CBAs for later phases. This becomes a bigger challenge. Thus, it’s important to start as soon as possible to create a well-defined potency assay that will carry the product into the future.”


Figure 1. Scientists at UCB Pharma warn that bioassay analysis can yield meaningless relative potency results if parallelism is not established. One way to establish parallelism is to run dilution assays that compare product to standard. Such assays should generate a quantitative response showing (A) a parallel-line dose or (B) a sigmoidal log-dose. These can then be analyzed by employing standard statistical methods.

Call the Statistician

While carpenters rely on tools such as tape measures and T-squares, statisticians often employ the four-parameter logistic (4PL) function to make sense of bioassays, notes Martin Kane, managing data scientist, statistical and data sciences practice, Exponent. “Potency assays measure two versions of a product against each other. This type of statistical calculation is not particularly new, but there are caveats to using the 4PL. Choosing the best version, along with employing additional statistical tests, is critical.”

Kane uses the Rodbard version of the 4PL. He explains, “The 4PL describes a sigmoidal curve with four parameters: the upper asymptote, the lower asymptote, the slope (at the midpoint of the sigmoidal curve), and the EC50 (half-maximal effective concentration). With these values for two samples, one can derive the relative potency of a product.”
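To make that arithmetic concrete, the short Python sketch below fits a 4PL to simulated Reference and Test data, constraining the asymptotes and slope to be shared (that is, assuming the curves are parallel) and taking the ratio of EC50s as the relative potency. The doses, responses, and parameterization are illustrative assumptions, not the Rodbard implementation Kane uses.

```python
# Minimal 4PL sketch on simulated data: the Reference and Test curves share
# the lower asymptote, upper asymptote, and slope ("parallel" curves), and
# relative potency is read off the ratio of their EC50s.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dose, lower, upper, slope, log_ec50):
    """Four-parameter logistic response as a function of log10 dose."""
    return lower + (upper - lower) / (1 + 10 ** (slope * (log_ec50 - log_dose)))

# Simulated dose-response data (hypothetical doses and response units).
log_dose = np.log10([0.1, 0.3, 1, 3, 10, 30, 100])
rng = np.random.default_rng(1)
ref = four_pl(log_dose, 5, 100, 1.2, np.log10(3.0)) + rng.normal(0, 2, log_dose.size)
test = four_pl(log_dose, 5, 100, 1.2, np.log10(2.4)) + rng.normal(0, 2, log_dose.size)

# Joint "parallel" fit: one set of asymptotes and slope, two EC50s.
def joint(x, lower, upper, slope, log_ec50_ref, log_ec50_test):
    n = x.size // 2
    return np.concatenate([four_pl(x[:n], lower, upper, slope, log_ec50_ref),
                           four_pl(x[n:], lower, upper, slope, log_ec50_test)])

x_all = np.concatenate([log_dose, log_dose])
y_all = np.concatenate([ref, test])
params, _ = curve_fit(joint, x_all, y_all, p0=[0, 100, 1, 0, 0])
lower, upper, slope, log_ec50_ref, log_ec50_test = params

# A lower Test EC50 means the test article reaches half-maximal effect at a
# lower dose, i.e., it is more potent than the Reference.
relative_potency = 10 ** (log_ec50_ref - log_ec50_test)
print(f"Estimated relative potency: {relative_potency:.2f}")
```

Fitting the EC50s on the log-dose scale keeps the optimization stable and turns the relative potency into a simple difference of parameters.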

It seems simple on the surface to obtain the numbers and then just plug-and-play, but that’s not the case.

“Sometimes, comparing historical 4PL calculations with those newly derived can be tricky,” Kane reports. “If, for example, three of the four measures compare equivalently between test and control, sometimes scientists try to force the fit of one or more parameters. This may be wholly inaccurate. It’s usually significantly more complex than that. The best thing to do is to include a professional statistician who can use more sophisticated means to verify that you are determining potency accurately.”

Perceval Sondag, Ph.D., senior manager of statistics, PharmaLex, agrees. “It is important to involve statisticians immediately as the product begins development,” he comments. “A strong collaboration in which scientist and statistician work together allows both to share their unique perspectives. Both should be teaching each other.”

According to Dr. Sondag, a statistician can help scientists to better understand the USP’s requirements and also create simulations exploring realistic scenarios that compare, for example, precise versus imprecise assays and optimal versus suboptimal designs (Figure 2). “One mistake many scientists make is drawing conclusions indirectly. For example, one may declare their data prove similarity simply because they fail to demonstrate non-similarity. This is a big mistake. One should instead seek the guidance of a statistician to directly prove the data are similar.”
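As an illustration of demonstrating similarity directly rather than merely failing to show non-similarity, the sketch below runs a simple TOST-style (two one-sided tests) equivalence check on simulated slope estimates: parallelism is declared only if the 90% confidence interval for the slope difference lies entirely within pre-specified equivalence margins. The margin, sample sizes, and data are invented for illustration and are not PharmaLex’s procedure.

```python
# Minimal TOST-style equivalence sketch on simulated slope estimates:
# similarity (parallelism) is claimed only when the 90% confidence interval
# for the Test - Reference slope difference falls inside the margins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
slopes_ref = rng.normal(1.20, 0.05, 6)    # slope estimates from six reference runs
slopes_test = rng.normal(1.22, 0.05, 6)   # slope estimates from six test runs
margin = 0.15                             # pre-specified equivalence margin (assumed)

diff = slopes_test.mean() - slopes_ref.mean()
se = np.sqrt(slopes_test.var(ddof=1) / 6 + slopes_ref.var(ddof=1) / 6)
df = 10                                   # pooled df, assuming equal variances
t_crit = stats.t.ppf(0.95, df)            # 90% two-sided CI uses the 95th percentile
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

if -margin < ci_low and ci_high < margin:
    print(f"Parallelism demonstrated: 90% CI ({ci_low:.3f}, {ci_high:.3f}) within ±{margin}")
else:
    print(f"Equivalence not shown: 90% CI ({ci_low:.3f}, {ci_high:.3f}) vs margin ±{margin}")
```

By contrast, merely failing a significance test for non-parallelism can reflect nothing more than an imprecise assay; the equivalence approach makes similarity an explicit claim that the data must support, which is Dr. Sondag’s point.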

Dr. Sondag concludes, “Don’t just use software and expect to generate the correct results—ask for help. Involve an expert from the beginning stages so that your assays are designed correctly from square one.”


Figure 2. A bioassay is commonly a comparison between the biological activity produced by a batch of a test product (“Test”) and the biological activity produced by a batch of a reference product (“Reference”). This comparison usually depends on a single measure, the relative potency, which is typically derived from a four-parameter logistic curve. Computing this curve requires statistical approaches to test the parallelism between the two curves. Some approaches, such as those developed by PharmaLex, rely on equivalence tests. PharmaLex aims to derive equivalence margins to test parallelism in bioassays without the need for historical data or expensive additional experiments.
