February 15, 2017 (Vol. 37, No. 4)
Vicki Glaser Writer GEN
Avoid “Bad” Immunogenicity by Using Assays That Are More Sensitive and Can Adapt to Increasingly Complex and Specialized Drugs
With some medical interventions, immunogenicity is the whole point—hence the “good” immunogenicity provoked by vaccines. Immunogenicity is much to be avoided, however, when biotherapeutics are deployed. Biotherapeutics such as recombinant proteins or monoclonal antibodies (mAbs) may instigate “bad” immunogenicity, potentially nullifying any hoped-for benefits or even causing autoimmune reactions.
Biotherapeutic immunogenicity is a growing problem now that biotherapeutics are proliferating and diversifying. What’s more, biotherapeutics are becoming increasingly complex. Consequently, many new biotherapeutics are apt to provoke the immune system in increasingly subtle ways.
Any new biotherapeutic needs to be assessed for its immunogenic potential, typically by means of an assay. Immunogenicity assays, however, pose daunting developmental challenges. The assays need to be sensitive enough to detect anti-drug antibodies (ADAs) and, specifically, neutralizing antibodies (NAbs) that put at risk not only drug efficacy but also patient safety. Drug interference is a particularly difficult problem in immunogenicity assays. Potential solutions include new pretreatment methods and assay formats.
Immunogenicity testing is being driven by three main trends:
1. The need for assays to adapt to increasingly complex drugs.
2. A shift away from the development of blockbuster drugs to more specialized biologics targeting rare diseases.
3. The demand for more sensitive assays.
A prime example of the growing complexity of biotherapeutics is the emergence of multidomain drugs such as fusion proteins, bispecific antibodies, and antibody-drug conjugates (ADCs). This necessitates testing for ADAs that could target each domain, as well as the associated hinge region. Whereas immunogenicity testing of a single-domain drug might require one ADA assay, multidomain drugs compound the assay burden.
Oligonucleotide-based therapies represent another type of biotherapeutic. These therapies, which may involve antisense oligonucleotides, small interfering RNAs, the CRISPR/Cas9 gene-editing system, or other modalities, pose particular challenges for immunogenicity testing.
“We have a poor understanding of the immune responses to therapeutic oligonucleotides and the possible effects of anti-double-stranded DNA antibodies,” says Sebastien Boridy, Ph.D., research scientist, immunogenicity, Charles River Laboratories. These antibodies, he notes, are particularly shifty because they may or may not cross-react with endogenous DNA.
As biopharmaceutical companies focus on developing more targeted therapies tailored to patient subpopulations defined by disease- or patient-specific biomarkers, regulatory bodies are seeking bioanalytical data that are more directly relevant to the target population. For instance, in immunogenicity testing, “the method development and validation should be conducted using the matrices from the target disease population rather than using a blank matrix from normal donors,” suggests Dr. Boridy.
The goal is to be able to better characterize and predict immune responses. This is especially relevant in the field of oncology at present.
Minimizing drug, matrix, and target interference in ADA assays can improve assay sensitivity. Drug and target interference present the biggest challenge. The most straightforward method for improving assay drug/target tolerance is dilution of the test sample. However, assay sensitivity is often compromised with increasing sample dilution; therefore, other options include sample purification or pretreatment and the utilization of alternative assay formats, such as solid-phase extraction with acid dissociation (SPEAD) and affinity capture elution (ACE) methods.
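The dilution trade-off described above can be made concrete with a back-of-the-envelope sketch. All concentrations and limits below are invented for illustration only; they do not come from the article or from any particular assay:

```python
# Illustrative dilution trade-off for an ADA assay.
# Every number here is hypothetical, chosen only to show the effect.
ADA_CONC_NG_ML = 500.0           # ADA level in the neat sample
DRUG_CONC_NG_ML = 100_000.0      # circulating drug (100 µg/mL)
ASSAY_SENSITIVITY_NG_ML = 100.0  # minimum detectable ADA concentration
DRUG_TOLERANCE_NG_ML = 10_000.0  # maximum drug level the assay tolerates

def assess(dilution_factor):
    """Return (ADA, drug, ADA detectable?, drug tolerated?) at a given dilution."""
    ada = ADA_CONC_NG_ML / dilution_factor
    drug = DRUG_CONC_NG_ML / dilution_factor
    return ada, drug, ada >= ASSAY_SENSITIVITY_NG_ML, drug <= DRUG_TOLERANCE_NG_ML

for factor in (1, 5, 10, 50):
    ada, drug, detectable, tolerated = assess(factor)
    print(f"1:{factor}  ADA={ada:.0f} ng/mL  drug={drug:.0f} ng/mL  "
          f"detectable={detectable}  drug_tolerated={tolerated}")
```

With these example numbers, no single dilution works: at 1:5 the ADA is still detectable but the drug exceeds tolerance, while at 1:10 the drug is tolerated but the ADA has fallen below the sensitivity limit. That is precisely why pretreatment and alternative formats such as SPEAD and ACE are pursued.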
Overcoming drug interference is one of the greatest challenges in developing NAb assays. “If you want the assay to be sensitive, then you can have only a small amount of drug in the assay,” insists Shan Chung, Ph.D., senior scientist, BioAnalytical Sciences, Genentech. “Pretreatment procedures that remove drug from the sample are by far the most effective way of ensuring that you have an assay that is sensitive and relatively free of interference, even if your sample can be expected to have a high drug load.”
New assay methods that have much higher drug tolerance are in development. One example of an approach that is still being refined—and is, consequently, not yet in mainstream use—is the immuno-polymerase chain reaction (immuno-PCR) method. It can achieve a sensitivity of 10 ng/mL of ADA within mg/mL quantities of drug, according to Dr. Boridy. That represents an improvement of severalfold over current methods.
The immuno-PCR method uses a conventional immunogenicity bridging assay format with a modified detection method. Detection is achieved using a secondary antibody conjugated to a DNA molecule, which can be amplified with PCR. A substantial increase in assay sensitivity has been reported. “At the same time,” notes Dr. Boridy, “you can reduce drug interference by being able to dilute the sample to a much higher dilution than you would be able to do with another format.”
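The sensitivity gain comes from the exponential nature of PCR amplification: each DNA tag is doubled (approximately) every cycle, whereas an enzymatic label produces signal at a roughly constant rate. A minimal sketch, with invented numbers purely for illustration:

```python
# Why a DNA-tagged detection antibody can outperform an enzyme label:
# PCR amplifies each DNA tag exponentially, while an enzymatic readout
# grows only linearly with incubation time. Numbers are illustrative.
def pcr_signal(tags, cycles, efficiency=0.9):
    # Each PCR cycle multiplies the amplicon count by (1 + efficiency).
    return tags * (1 + efficiency) ** cycles

def enzyme_signal(labels, turnover_per_min=1000, minutes=30):
    # Enzymatic labels convert substrate at a roughly constant rate.
    return labels * turnover_per_min * minutes

tags = 100  # ADA-bound detection antibodies in the well
print(f"immuno-PCR after 30 cycles: {pcr_signal(tags, 30):.3e} amplicons")
print(f"enzyme label after 30 min:  {enzyme_signal(tags):.3e} product molecules")
```

Because even a handful of surviving tags can be amplified to a measurable signal, the sample can be diluted far more aggressively—diluting away the interfering drug—while the ADA remains detectable.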
“Drug manufacturers need to develop ADA assays that are drug tolerant, particularly for drugs with a long half-life that may remain in the circulation for days to weeks,” says Mitra Azadeh, Ph.D., principal scientist, bioanalytical & biomarker development, nonclinical development, Shire. She describes several approaches to reducing drug interference in ADA assays, which include changes to the assay design and platform, modifying the assay format, and optimizing specific parameters or components of the assays.
In assay design, reversal of the capture and detector components could contribute to reduced drug interference. It is worthwhile to compare an ELISA format versus the Meso Scale Discovery® platform to determine which is less affected by drug interference for a particular ADA assay, proposes Dr. Azadeh. One way to try to optimize an assay is to experiment with different formats, such as a sequential versus homogeneous assay.
Adding an acid dissociation step “has proven to increase drug tolerance by 10-fold or greater in various ADA assays,” comments Dr. Azadeh. “ACE and SPEAD technologies have offered significant improvement in acid pretreatment protocols in recent years.” In the coming years, however, she envisions that “bioanalytical scientists would look to find alternatives to the use of acid in their ADA assays, due to the fact that some drugs have shown to be acid-labile.”
Additionally, optimization of variables such as the concentrations of capture and detection antibodies, the minimum required dilution, and even the type of microtiter plate used can yield moderate to significant improvement in drug tolerance.
Dr. Azadeh also points to the early adoption and assessment of technologies such as surface plasmon resonance and bead-based assay platforms. Scientists are also evaluating the use of modified variants of the drug in assays, instead of the intact, standard form, to capture and detect ADAs.
Cell-Based or Non-Cell-Based Assays?
Dr. Chung co-authored a white paper (“Strategies to Determine Assay Format for the Assessment of Neutralizing Antibody Responses to Biotherapeutics”) that appeared last November in the AAPS Journal. According to this paper, the selection of an appropriate NAb assay format—whether a cell-based assay or a non-cell-based assay—should follow a therapeutic mechanism-of-action (MoA) approach rather than the traditional risk-based approach.
Biotherapeutics in general carry an increased risk of immunogenicity, and the FDA has historically expressed a preference for cell-based NAb assays; however, not all biologics necessarily require a cell-based functional NAb assay if you take into consideration their MoA. “The main reason for this debate is the development of therapeutic antibodies,” states Dr. Chung.
Also relevant are the ways therapeutic antibodies may differ from other types of biologic drugs such as enzymes, cytokines, and hormones. For example, antibodies may differ with respect to their MoAs. Some may block the activity of soluble factors, while others may actively induce effector function.
“If the antibody’s MoA is more of a blocking function, for example, it is okay to use a ligand-binding assay,” advises Dr. Chung. This can save the time and cost of developing a cell-based assay for NAb testing, even though the antibody drug may have a high risk for immunogenicity.
Antibody therapeutics present unique challenges for immunogenicity testing, particularly if these drugs are complex. For example, ADCs may have novel or multiple MoAs.
As the white paper indicates, since NAbs against either the mAb or the cytotoxin portion of an ADC would block its cell-killing activity, “a single cell-based NAb assay using readout associated with cell death or proliferation would reflect the therapeutic MoA.” The white paper, however, adds that “if the circulating drug generates significant interference in the cell-based assay and results in false-negatives through potentiating the cytotoxicity effect, a non-cell-based assay could be considered as a replacement to be used for NAb assessment.”
A Model to Simulate Immunogenicity
Tim Hickling, Ph.D., immunogenicity sciences lead, biomedicine design, Pfizer, describes a mathematical model to predict clinical immunogenicity. The model, says Dr. Hickling, is built on “equations that represent the processes of the immune system,” and it was designed by Pfizer “to enable simulations of how a patient’s immune system may respond to a given therapeutic protein.”
According to Dr. Hickling, the simulations rely on input data including 1) which T-cell epitopes may be present in the drug, 2) how much of the drug is in circulation, and 3) how much drug is needed for a clinical effect. Outputs of the simulation include the proportion of patients in which an immune response occurs, the time for a clinical effect to appear, and the magnitude of the immune response.
Pfizer can vary the input data and study the effects. For example, “if a particular T-cell epitope contributes strongly to the immune response, then it is possible to simulate the effect of removing this epitope,” explains Dr. Hickling. “This data may be used to help the protein engineers understand the value of altering the sequence before taking the protein into clinical trials.
“Assessment of immunogenicity can be challenging, not least because of the complexity of the assays used to measure ADAs. However, improvements in ADA assays are beginning to show us how widespread immune responses to therapeutic proteins really are.”
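The kind of input-to-output mapping Dr. Hickling describes—epitope content and drug exposure in, responder fraction out—can be caricatured with a toy logistic model. This is emphatically not Pfizer's mechanistic model; the functional form and every coefficient below are invented for illustration:

```python
import math

# Toy sketch (not Pfizer's model): a logistic mapping from two of the
# inputs Dr. Hickling lists -- T-cell epitope count and circulating drug
# exposure -- to the fraction of patients predicted to mount an ADA
# response. All coefficients are invented for illustration.
def responder_fraction(n_epitopes, exposure_ug_ml,
                       b0=-4.0, b_epitope=0.8, b_exposure=0.02):
    z = b0 + b_epitope * n_epitopes + b_exposure * exposure_ug_ml
    return 1.0 / (1.0 + math.exp(-z))

# Simulating the effect of removing one strong T-cell epitope:
before = responder_fraction(n_epitopes=5, exposure_ug_ml=50)
after = responder_fraction(n_epitopes=4, exposure_ug_ml=50)
print(f"predicted responders: {before:.1%} -> {after:.1%}")
```

Even this crude sketch shows the kind of what-if question such simulations answer for protein engineers: how much the predicted responder fraction drops when a candidate sequence is deimmunized.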
Dr. Hickling identifies several other challenges that may “need to be addressed to make a truly predictive approach viable.” These include understanding how the disease state affects the immune response. “Inclusion of starting material relevant to the disease state in in vitro assays is possible,” notes Dr. Hickling. “This issue encompasses both the function of the immune system and the expression of the target.”
Another key challenge is determining how different populations respond to a drug. Improvements in the technology and cost for performing whole-genome sequencing should help with this issue.
It is also important to determine which product quality attributes really affect immunogenicity outcomes. “Recent research has focused on aggregates as immune stimulators, though in vitro assays have required extreme levels of aggregates to identify enhanced immune signals,” observes Dr. Hickling. Not yet clear is whether the aggregates actually pose a problem or if the in vitro assays simply lack sufficient sensitivity.
Dr. Hickling also points to the lack of in vitro systems that can produce an ADA: “Most in vitro systems look at the T-cell response to a drug and not the B-cell response (that is, the ADA). Improvements in ‘organ on a chip’ systems and knowledge of the germinal center should help in the development of an in vitro immune response that goes all the way to the ADA.”