Feature Articles: Aug 1, 2008
Roundtable Discussion: High-Throughput Screening Challenges
Biotechnology and pharmaceutical scientists are increasingly relying on high-throughput screening (HTS) methods to discover new leads that may, in turn, be transformed into promising therapeutics.
GEN recently held a Current Trends in High-Throughput Screening roundtable discussion at its offices in New Rochelle, NY. The goal was to obtain the perspectives of two veteran pharmaceutical industry researchers and two instrument and software specialists on the present state of HTS in bioresearch and their views on where the technology might be headed in the future.
The following Q&A focuses on the current bottlenecks in HTS assay development and data analysis.
In the September 1 issue of GEN, we will publish another edited section of the discussion that addresses HTS trends.
GEN’s Editor-in-Chief, John Sterling, served as the host for the roundtable discussions. The main participants were Dejan Bojanic, Ph.D., head of lead finding platform, Novartis Institutes for BioMedical Research; Richard M. Eglen, Ph.D., president, biodiscovery, PerkinElmer; Stephan Heyse, Ph.D., head of lead discovery informatics, Genedata; and Berta Strulovici, Ph.D., vp, basic research, and head of automated biotechnology, Merck Research Laboratories.
Sterling: What are the current bottlenecks in analyzing high-throughput screening data?
Strulovici: The largest bottleneck is linked to the implementation of high-content types of assays in high-throughput format. As we move into more complex biology, such as multiparametric assays, we have to analyze a lot more diverse data from cell imaging and so on. This is where the complexity is found. You start having a bottleneck in terms of how you store, annotate, and analyze the data and how you generate knowledge that informs the next set of experiments.
Eglen: Given the fact that several of these imaging systems are now quite high throughput, some of the challenges probably lie in analyzing as well as in archiving or retrieving complex imaging information. It’s very interesting how the worlds of HTS and high-throughput confocal microscopy are now converging. This is generating a real bottleneck in terms of data interpretation.
Bojanic: Analyzing primary screening data is a well-developed process these days. In other words, it isn’t a bottleneck. The real challenge is in validating the hits once they’re discovered. It’s not uncommon to have a hit rate between one half and one percent, and the follow-up work that’s involved in understanding how a molecule interacts with a target can be very involved.
For instance, with biochemical assays, it’s not just a question of measuring inhibition or activation in the assay; the molecules have to be characterized as true binders and not assay artifacts. For certain cellular assays, one can easily measure an effect on a phenotype, but the challenge is to identify the intracellular site of action, which may involve a concerted target-fishing approach.
Heyse: Perhaps the most visible challenge nowadays involves new technologies, like high-content screening, adding to the complexity of the information. Volume is still an issue, in a way, because HTS departments do more and more downstream work. And so analyzing the secondary assays at increased throughput and treating all this data appropriately remain issues.
Distilling meaning out of this data and sharing it with other scientists like medicinal chemists is also quite challenging because every assay is somewhat different. Often, your interpretation depends on the exact experimental protocol, e.g., the ATP concentrations at which kinase inhibition had been measured.
To relate this information to someone who has not done the experiment or who is not on the HTS team is not easy. We’ve seen that HTS data is increasingly used in a broader context beyond the individual project. This means that it becomes more important to standardize and share this information appropriately and to pre-interpret it for the scientists who are not involved with the experiment.
Eglen: Is assay development, with all of these different sorts of targets coming in, also a bottleneck?
Strulovici: That’s a very important point. The initial or primordial assay, which comes from the basic biology labs, is often not an assay that can be performed in a high-throughput format. It requires effort to build the kind of assay that recapitulates the physiology of the target and is in high-throughput format. It also takes time to obtain the right high-quality reagents in large enough amounts to enable large-scale experimentation. So those are also bottlenecks.
Heyse: Has this become easier with, for example, high-content screens? Do you need to optimize them to the same extent as in classical cellular screens?
Strulovici: Yes, high-content screens are a lot more complex, particularly if you are using more antibodies and all sorts of fixation protocols. Not every kind of cell sticks to the bottom and can be washed. The challenge is not just taking the image. It’s in building an assay that is robust enough and can be done in high throughput.
Bojanic: Assay development in terms of scaling it up for HTS is fairly straightforward. The biggest hurdle at the early stage is the procurement of quality reagents. In many cases, getting a good quality protein is tough, and it can take a long time. In addition, engineering cells can also have its challenges.
Additionally, if one has to perform biophysical characterization of biochemical hits, then one would like to have crystal-grade proteins as well. And that puts an extra emphasis on quality in terms of acquiring the reagents. Once the right reagents are procured I would say that assay development, which is configuring the assay for screening, involves a relatively fixed timeline that’s pretty well controlled.
Eglen: Is there still a bottleneck in translating assays performed in the therapeutic area group to those that will be undertaken in an HTS group? Does this require using different techniques including conversion to a highly miniaturized assay format that can be readily automated?
Strulovici: Yes. When we mapped internal activities to see where bottlenecks are, this was identified as one. You need a certain “handshake,” in an abstract sense of the word, and this requires proactive thinking, effort, and planning. Also, not all organizations are structured the same way.
At Merck, my department, automated biotechnology, is a Center of Excellence for all of Merck Research Laboratories globally. Therefore, the handshake is not really a physical handshake. This is easier for some people who have worked with us for a while; they already understand how to do it. But for groups that start fresh, it can be a challenge. When we work with intact cell assays, it is important that the cells are properly characterized, and that’s different from pulling a protein and other reagents from the freezer, which would be the same day in and day out. Cells don’t behave the same way day in and day out. There’s a lot of variability in the biology, from the first day of screening to the last day of screening. That needs to be taken into account when you do assay development.
Bojanic: We put extra emphasis on communication because we recognize that our scientists must work with the disease areas well before a target is nominated. In this way they can be part of the project team. They can manage expectations, and they can educate in terms of what the requirements are. And, in doing so, it really creates an environment where the target nomination, reagent assembly, assay development, and screening strategy are fit for purpose. So there isn’t a throw-it-over-the-wall type of mentality, but a spirit of partnership between screening groups and the disease areas with a common objective.
© 2013 Genetic Engineering & Biotechnology News, All Rights Reserved