November 1, 2012 (Vol. 32, No. 19)
Josh P. Roberts
Even use-as-directed assays may require some adaptation based on a host of different considerations. What type and how many samples are to be run, and from what matrix?
How precious are the samples? Is throughput as important as accuracy and reproducibility? How many parameters need to be assayed, and will it be done in monoplex or multiplex? How sensitive and selective does the assay have to be, and over what dynamic range? Will it give consistent results across lots—and can you prove it?
Scientists gathered recently at CHI’s “Biomarker World Congress” to share their insights about developing assays to measure DNA, protein, and even RNA.
There are well-established, reliable methods to visualize where specific proteins and DNA sequences lie in a tissue or in a cell: to wit, respectively, immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH). “Yet there’s a lack of technology to look at RNA in situ,” commented Yuling Luo, Ph.D., founder, president, and CEO of Advanced Cell Diagnostics (ACD). “That is the space that our RNAScope technology fills.”
RNA in situ hybridization (ISH) has undergone incremental improvements over the past 40 years, but not enough to endow it with the sensitivity, specificity, robustness, or simplicity to be routinely used for biomarker analysis and diagnostic applications, Dr. Luo noted. Boosting the signal boosted the background at the same time, he said, yielding only limited improvement in the signal-to-noise ratio.
ACD designed a system “that allows selective amplification of target-specific signal without amplifying nonspecific hybridization signal,” Dr. Luo explained. Two independent oligonucleotide probes simultaneously bind adjacent target RNA sequences and are recognized by a single specific “pre-amplification” molecule.
This, in turn, is bound by up to 20 amplifier molecules, each having 20 binding sites for the label probe—making a “Christmas tree-like hybridization structure.” And since a 1 kb stretch is typically targeted by 20 probe pairs, an RNA molecule can be decorated with 8,000 labels.
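The arithmetic behind that 8,000-label figure can be made explicit. A minimal sketch, using only the per-tier numbers quoted above (the function name and defaults are ours, for illustration):

```python
def labels_per_transcript(probe_pairs=20, amplifiers=20, label_sites=20):
    """Each probe pair nucleates one pre-amplification molecule, which
    carries up to `amplifiers` amplifier molecules, each bearing
    `label_sites` binding sites for the label probe."""
    return probe_pairs * amplifiers * label_sites

# 20 probe pairs x 20 amplifiers x 20 label sites per amplifier
print(labels_per_transcript())  # 8000 labels decorating a single RNA molecule
```

Each probe pair alone accounts for 400 labels (20 amplifiers times 20 sites); tiling 20 pairs across a 1 kb target yields the 8,000 total.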
Because the target RNA resides in the cytoplasm and the assay conditions are all the same, RNAScope can easily be multiplexed, Dr. Luo pointed out. By contrast, “in immunofluorescence analysis, because there are membrane proteins and nuclear proteins, the assay conditions are different—you have to get different antibodies in different locations to work together, which is much harder to do.”
RT-PCR, on the other hand, can assay single RNA molecules but fails to deliver tissue context. And this would be a problem when searching for loss of expression as a tumor biomarker, for example, since “grind-and-bind” methods such as RT-PCR cannot differentiate whether the expression is found in tumor or stroma.
For protein biomarkers—when context is not an issue—the immunoassay “can be the most specific assay you’re going to get,” said Lynn Zieske, Ph.D., vp for commercial solutions at Singulex.
In a typical sandwich ELISA, antibody that has randomly adhered to a plastic plate captures target from a sample of interest. Another antibody, to a different epitope on the same target, then either directly labels the target or itself becomes the target for a direct or indirect label.
The Erenna® immunoassay system offers resolution at subpicogram concentrations—up to two orders of magnitude greater than a standard ELISA, claimed Dr. Zieske. First, antibodies are coated onto paramagnetic particles in a way that orients them for maximum exposure to binding and capture of antigen.
Using particles in suspension “allows us a lot more flexibility in being able to not worry about nonspecific binding—because most nonspecific binding occurs on the plastics of wells.” The antigen is then translated into a signal using a fluorescently conjugated detection antibody.
Once the sandwich is formed and washed, the detection antibody is dissociated so that only the detection antibody, antigen, and elution buffer remain in solution to be read in the Erenna immunoassay reader, which is similar in concept to a capillary flow cytometer.
A laser focuses on the sample within a 100 µm capillary tube, and a single molecule at a time is counted. “All we’re looking for is literally the fluorescence intensity of a molecule,” he said. An algorithm back-calculates the concentration of target in the original sample.
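The back-calculation step can be sketched in miniature. The sketch below assumes a simple linear calibration of single-molecule event counts against calibrators of known concentration; the numbers and the least-squares fit are illustrative only, not Singulex’s actual algorithm:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibrator concentrations (pg/mL) and their detected
# single-molecule event counts (made perfectly linear for simplicity).
conc   = [0.1, 0.5, 1.0, 5.0]
counts = [12, 60, 120, 600]

slope, intercept = fit_line(conc, counts)

# Invert the calibration line for an unknown sample's event count.
unknown_counts = 240
estimate = (unknown_counts - intercept) / slope   # ~2.0 pg/mL
```

In practice the instrument’s algorithm would also handle background events and nonlinearity near the detection limit; the point here is only the shape of the count-to-concentration inversion.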
“We’re focused on very high precision and high sensitivity, though we have a broad dynamic range for targets not requiring exquisite sensitivity,” Dr. Zieske said. Users can run the equivalent of four 96-well assays at a time, without the cross-talk inherent in multiplex assays. He suggested that users “think of it as being a multiple monoplex,” giving the same amount of information while potentially using less precious sample than a multiplex ELISA.
Single System, Multiple Assays
High-density microarrays can whittle the 20,000 targets assayed in just a few samples down to the few dozen or few hundred that seem to be telling a story. After that is “the point when you really want to home in on what markers are important for that particular disease or condition that’s being studied,” said Sherry Dunbar, Ph.D., director of scientific marketing for Luminex. “We think of ourselves as the step after high-throughput screening.”
She touted the ability of the company’s eponymous barcoded bead-based platforms to quickly, and at low cost, multiplex multiple tests: “You can run—depending on which of our analyzers you’re using—a 96-well plate in 17 minutes to an hour, and you can have a multiplexed result, up to 500-plex, on 100 samples in an hour or less. So that’s a lot of data you can get in an hour.”
It’s difficult to get a good picture of what’s happening immunologically by looking at just one cytokine. In a study on ischemic brain injury, researchers used Luminex technology to simultaneously examine the expression of 30 or so cytokines from samples of just 50–200 laser microdissected brain cells. “There was really no other way to do a very extensive analysis on that,” Dr. Dunbar said.
Just as important, though, is that Luminex can do that not just with proteins like cytokines—one of the platform’s original target markets—but can query samples at the genetic and gene expression levels as well. For cytochrome P450, for example, “we have some larger assays that look at a lot of different alleles and then narrow down the key markers that are important for predicting response to different drugs like warfarin,” Dr. Dunbar said. These kinds of studies allow the small contributions of individual biomarkers, or clusters of biomarkers, to be teased out from otherwise impenetrable background noise.
More than 8,000 systems have been placed over the last 12 years, with over 50 Luminex assays having been approved by the FDA for in vitro diagnostic use—including one for HLA matching that has “kind of become the gold standard,” Dr. Dunbar noted. Such a track record reduces the risk associated with regulatory clearance.
Due to the duration of a study, kit shelf life, and other factors, it is not unusual for samples in a biomarker quantification study to be run on different lots of immunoassay kits in support of both preclinical and clinical studies. Yet for a variety of reasons—going up and down the supply chain of critical reagents that go into them—“there is quite a bit of variability in terms of the quality of these commercial kits,” lamented Afshin Safavi, Ph.D., senior vp of BioAgilytix Labs.
Even home-brewed assays rely heavily on the critical reagents, he pointed out. “We don’t call it a kit, but in essence we are developing a kit in our shop internally.”
Like it or not, the researcher needs to shoulder some of the burden of making sure the kits perform the same way lot-to-lot, year-to-year—and if not, to come up with systems to bridge the data that are generated by different lots.
To do this, Dr. Safavi recommended at a minimum running a series of quality controls and samples in both old and new lots. For larger studies “our practice is actually to repeat the lot-bridging process over three consecutive days, and that way you generate a larger number of data points that are more statistically significant,” he said. “And then we come up with a correction factor…to normalize that data to the previous lot.”
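The normalization Dr. Safavi describes can be sketched in a few lines. The readings below are hypothetical bridging samples run on both kit lots, and the simple ratio-of-means correction factor is one common choice, not necessarily the exact statistic his lab uses:

```python
from statistics import mean

def correction_factor(old_lot, new_lot):
    """Ratio of mean old-lot to mean new-lot readings for the same
    bridging samples; multiplying new-lot results by this factor
    normalizes them back to the previous lot's scale."""
    return mean(old_lot) / mean(new_lot)

# Hypothetical readings for the same QC/bridging samples on two kit
# lots, pooled over repeated runs as suggested above.
old = [102.0, 98.0, 100.0]
new = [85.0, 82.0, 83.0]

cf = correction_factor(old, new)          # ~1.2: new lot reads ~17% low
normalized = [round(x * cf, 1) for x in new]
```

Running the bridging over three days, as he recommends, simply feeds more paired readings into the two lists, tightening the estimate of the factor before it is applied to study data.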
With the trend toward multiplex assays, the bridging process becomes more critical than ever, because the proteins in the panel that makes up a multiplex biomarker kit do not always vary in the same way as one another from lot to lot, Dr. Safavi observed.
When applying a correction factor, it is also important to know what tolerance there is for differences, which can depend on the intended use of the kit, the disease area being supported, and the changes you expect to see, he said. If a 500% change is expected from drug treatment, 40% variability will have little effect on the decisions likely to be made from the test; but if only a 20% change is expected, even a 5% shift may degrade the quality of the data generated and impact the decision process.
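That tolerance judgment can be reduced to a simple ratio check. The rule of thumb below, including its threshold, is our own assumption for illustration; the two scenarios are the ones quoted above:

```python
def variability_is_tolerable(expected_change_pct, lot_variability_pct,
                             max_ratio=0.2):
    """Hypothetical rule of thumb: flag kit-lot variability as a concern
    when it exceeds `max_ratio` (here one-fifth, an assumed threshold)
    of the treatment effect the study expects to detect."""
    return lot_variability_pct / expected_change_pct <= max_ratio

# The article's two scenarios:
print(variability_is_tolerable(500, 40))  # True: 40% noise vs. a 500% effect
print(variability_is_tolerable(20, 5))    # False: 5% noise vs. a 20% effect
```

In a real study the acceptable ratio would be set from the assay’s intended use and the statistics of the decision it supports, not a fixed constant.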
Dr. Safavi was quick to emphasize that his comments applied only to research kits used in biomarker studies. “When it comes to pharmacokinetics or immunogenicity assays, they have their own sets of processes for qualifying and bridging reagents and assays and lots.”
A Case in Point
Tests developed in a regulated environment generally adhere to much tighter tolerances, with far more involved in their validation. When Roche developed the cobas 4800 BRAF V600 mutation test as a companion diagnostic to the metastatic melanoma therapeutic Zelboraf, for example, “no less than 25 analytic performance verification studies were required by the PMA [FDA premarket approval] process as well as for European and other regulatory agencies,” recalled Walter Koch, Ph.D., vp and head of global research for Roche Molecular Diagnostics. “Multiple labs, multiple operators, multiple reagent lots over different days, with over 1,400 samples, all showed that the test was highly reproducible.”
Development of the diagnostic and the drug proceeded together apace, taking a mere five years from the IND to their approval last year—“those are pretty much record times, I think,” Dr. Koch said. “It shows the power of patient selection if you’ve got a good hypothesis of the biomarker, a good drug target, and a good drug that hits that target.”
The assay, based on extracting DNA from FFPE samples for PCR, had its challenges to overcome, he pointed out. Formalin reacts with nucleic acids and causes them to degrade (“in the worst case only as much as 10% of the DNA is amplifiable”). Melanin in these highly pigmented cells is an inhibitor of PCR (“we had to devise sample-preparation strategies that eliminated the melanin from carrying into the PCR reactions”). And tumor tissue is heterogeneous and may contain different percentages of tumor content (and “not necessarily all the tumor cells have mutant copies—you can have mixed populations”).
After all that, Roche had a conundrum. The FDA required it to compare the test results with Sanger sequencing, which can miss mutations found in less than about 25% of cells. The cobas test, on the other hand, was able to detect down to about 5% of cells carrying the mutated BRAF gene. Dr. Koch explained: “Our clinical trials had shown that those patients benefited from the drug. Yet the gold standard said, ‘Oh, you’ve got a false negative or a false positive’.” The issue was ultimately resolved by validating the procedure against 454 next-gen sequencing. “That was a challenge, because the agency had not looked at such data before.”
Dr. Koch believes that targeted therapies and companion diagnostics are becoming inextricably linked. He urges coordinating and aligning their development from the beginning, and making sure that the diagnostic is in place and ready to go for the start of pivotal trials.