January 15, 2010 (Vol. 30, No. 2)

Kathy Liszewski

Improvements in Speed, Data Integration, and Efficiency Are Expanding the Possibilities

The drive to produce more data, more quickly, and at less cost is fueling new strategies for lab automation, especially as applied to the drug discovery process. Overcoming bottlenecks in speed, efficiency, and data integration is among the topics to be discussed at “LabAutomation2010” later this month in Palm Springs.

Flow cytometry has existed for more than 20 years and has taken on a number of new capabilities in that time. Scientists at Vivia Biotech have gone back to the simple roots of flow cytometry, screening primary cells with antibodies, but they’ve magnified the scale and brought it into the realm of personalized medicine. “We wanted to develop assays that are closer to the target than traditional screening methods, and by testing patient samples directly, this brings the technology closer to personalized medicine,” says Teresa Bennett, Ph.D., vp of research.

Vivia Biotech has engineered a fully automated process for analyzing the impact of drugs on blood or bone marrow cells from patients with hematological malignancies. “We do re-profiling screening, looking for new indications of known drugs, and also screen the drugs for a particular indication to determine ex vivo which drugs a patient may be resistant or sensitive to,” Dr. Bennett reports.

“By using specific markers for cells along with assessing apoptosis, we can evaluate both healthy and cancerous cells simultaneously. In a short span of about 48 hours, this approach allows for the ex vivo analysis of thousands of drugs or combinations of drugs on patient samples. Traditional analysis via flow cytometry can screen only 10–100 drugs per sample.”

To accomplish this goal, the company automated the process from the beginning. It incorporated liquid handlers to prepare samples in tissue culture hoods and developed an automated flow-cytometry system to run the assay. Handling this much data also required the development of proprietary software. “One important accomplishment is that we don’t have to analyze each individual well separately; we can use one file for the whole plate, and this is analyzed within a few minutes. Ultimately, this approach could tell much more rapidly how a patient would respond to a drug regimen.”
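
As a rough illustration of that kind of whole-plate readout (Vivia’s software is proprietary, so the marker names, threshold, and file layout below are hypothetical), a single event-level table exported for an entire plate could be summarized per well along these lines:

```python
# Illustrative sketch only: the marker names, threshold, and file layout are
# hypothetical, not Vivia Biotech's proprietary format.
import pandas as pd

def summarize_plate(events: pd.DataFrame,
                    tumor_marker: str = "tumor_marker_intensity",
                    apoptosis_marker: str = "annexin_v_intensity",
                    threshold: float = 1_000.0) -> pd.DataFrame:
    """For each well, report the fraction of tumor-marker-positive cells
    that also stain positive for the apoptosis marker."""
    # `events` is expected to have one row per cell, with a "well" column
    # plus marker-intensity columns.
    tumor_cells = events[events[tumor_marker] > threshold]
    summary = (
        tumor_cells.assign(apoptotic=tumor_cells[apoptosis_marker] > threshold)
                   .groupby("well")["apoptotic"]
                   .mean()
                   .rename("apoptotic_fraction")
                   .reset_index()
    )
    return summary

# One exported file covers the whole plate; a single call summarizes it.
# events = pd.read_csv("plate_001_events.csv")
# print(summarize_plate(events))
```

In practice the gating would be far more sophisticated, but the point stands: one combined file, rather than a separate analysis per well, drives the per-well summary.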

The company has already identified a new drug candidate and will proceed into Phase I/II trials in the fall of 2010. Additionally, a clinical study for personalized medicine testing will begin in early 2010.

Genotype-Correlated Drug Sensitivity

Traditional methods of anticancer drug discovery, development, and approval have generally followed a tissue-centric approach wherein the organ from which the tumor originated has preeminence. An alternative strategy gaining headway places the genotype of the cancer cell center stage and evaluates anticancer agents to uncover genotypes that confer sensitivity.

“This would allow for the stratification of patients for treatment with a particular drug based on their genotype without regard to the tissue from which the tumor originated,” says Sreenath Sharma, Ph.D., assistant professor of medicine at Harvard Medical School and assistant director of Molecular Therapeutics at Massachusetts General Hospital (MGH) Cancer Center.

“We use a collection of more than 1,000 genetically characterized human tumor-derived cell lines from different organs and assess their sensitivity to anticancer drugs that are in or about to enter the clinic,” Dr. Sharma explains. “This study aims to identify specific genotypes that confer sensitivity to particular anticancer drugs and use this information to profile and identify cancer patients most likely to benefit from treatment with the drug.”

To work with so many cell lines and drugs requires automation. “We use lots of liquid-handling workstations for what we call pushing plastic, such as for adding drugs to multiwell plates and fixing and staining cells,” Dr. Sharma reports. “Analysis also requires software able to handle the volumes of data generated in order to create heat maps of drug sensitivity. What we can’t automate is the actual handling of cell lines. Each cell line has its own personality and properties, so here we need the human touch.”
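
To make the heat-map step concrete, here is a minimal sketch, using synthetic values rather than the MGH group’s actual data or pipeline, that plots a cell line-by-drug sensitivity matrix:

```python
# Minimal heat-map sketch with synthetic values; not the actual MGH pipeline.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
cell_lines = [f"line_{i:03d}" for i in range(20)]   # stand-ins for ~1,000 lines
drugs = [f"drug_{j}" for j in range(8)]

# Rows: genetically characterized tumor cell lines; columns: anticancer drugs.
sensitivity = pd.DataFrame(rng.random((len(cell_lines), len(drugs))),
                           index=cell_lines, columns=drugs)

fig, ax = plt.subplots(figsize=(6, 8))
image = ax.imshow(sensitivity.values, aspect="auto", cmap="viridis")
ax.set_xticks(range(len(drugs)))
ax.set_xticklabels(drugs, rotation=90)
ax.set_yticks(range(len(cell_lines)))
ax.set_yticklabels(cell_lines, fontsize=6)
fig.colorbar(image, ax=ax, label="relative drug sensitivity")
plt.tight_layout()
plt.show()
```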

For the future, Dr. Sharma says, “One could fantasize that a cancer patient coming into the clinic gets a tumor biopsy that is genotyped before anything else is done. Based on the genotype of the tumor, the patient is then treated with drugs that specifically target the mutated gene driving his/her tumor. This personalized approach would maximize the benefit from the treatment while at the same time minimizing the side effects of the drug. In some current trials, only 10% of patients respond to therapy, indicating that a lot of individuals are getting treated unnecessarily.

“At MGH, with genotype-based patient preselection, we can increase response rates from 10% to 80% in some cases. The field is definitely headed in this direction as the paradigm changes from less effective tissue-centric therapy to a more specific, molecularly targeted treatment approach that is guided by the genotype of the patient’s tumor.”


Researchers at Harvard Medical School are making headway with a strategy that evaluates anticancer agents to uncover genotypes that confer sensitivity.

Microfluidics and Cell Modeling

Drug discovery often relies heavily on biological knowledge gleaned from working with cells and tissues in functional assays. Miniaturizing cell culture models using microfluidic systems is ramping up data collection and allowing more in-depth biochemical analyses.

Ivar Meyvantsson, Ph.D., engineering manager at Bellbrook Labs, provides some insights into the field. “Microfluidics opens the portal to a new way to culture cells in vessels that expand our ability to control the local cellular microenvironment and, just as importantly, to create three-dimensional models that provide more complex and detailed information. Also, interfacing microfluidics with standard automation makes the models much more accessible to drug discovery scientists than in the past.

“For example, a plate that has 96 structures allows one to set up a stable gradient to perform chemotaxis experiments. The cells can be observed with a microscope, which provides more information about the effects of a drug candidate on living cells than existing solutions do.

“One can determine what population of cells moves and how far. You can employ automated image processing to detect morphological features. In other words, once you’ve established that a compound inhibits chemotaxis you can dig deeper and ask what type of effect it has on the cells.”
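
A hedged sketch of that kind of downstream quantification, assuming cell positions have already been extracted by automated image analysis (the coordinates, units, and threshold here are synthetic placeholders), might compute how many cells migrate up the gradient and how far:

```python
# Hedged sketch: quantifying chemotaxis from already-tracked cell positions.
# The coordinates, units, and threshold below are synthetic placeholders.
import numpy as np

def chemotaxis_metrics(start_xy: np.ndarray, end_xy: np.ndarray,
                       gradient_axis: int = 0, min_displacement: float = 20.0):
    """Return the fraction of cells moving up the gradient by more than
    min_displacement, and their mean displacement along that axis."""
    displacement = end_xy[:, gradient_axis] - start_xy[:, gradient_axis]
    responders = displacement > min_displacement
    mean_disp = displacement[responders].mean() if responders.any() else 0.0
    return responders.mean(), mean_disp

# Synthetic tracks for 200 cells, drifting up the gradient on average.
rng = np.random.default_rng(1)
start = rng.uniform(0, 100, size=(200, 2))
end = start + rng.normal(15, 10, size=(200, 2))
fraction, mean_disp = chemotaxis_metrics(start, end)
print(f"{fraction:.0%} of cells moved >20 units up the gradient; "
      f"mean displacement {mean_disp:.1f} units")
```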

According to Dr. Meyvantsson, such automation can often be employed easily in labs to generate large datasets.

“Because most labs that do this type of work have automated liquid handlers and high-content analysis systems already in place, they can get up and running quickly without any new equipment purchases.”

The new technology still has some challenges to overcome. “We are still just scratching the surface of this emerging technology,” Dr. Meyvantsson notes. “Some challenges that remain are finding the best way to gather and analyze information and improving manufacturing methods. We’ve made a lot of progress, but there’s still a lot of work needing to be done before we realize the full potential of cell modeling in microfluidic devices.”

Handling Data Deluge

The path from hit to therapeutic involves a complex maze of interacting multidisciplinary drug discovery teams. Handling not only the data but also the communication among all of these teams can be a monumental challenge, one that can spell the difference between success and failure and significantly alter the time taken to achieve the overall objective.

“The last five years have seen the industry begin to make changes in how it addresses the problem of siloed data,” explains Andy Vines, Ph.D., product manager for ActivityBase™ at IDBS. “The issue is primarily that when each department chooses its own solutions, its data becomes invisible to the rest of the organization. Communication between groups is often poor as a result, with e-mail or static file types such as PDFs used to send data to other functional disciplines. So, it’s a question of timing and efficiency. Lack of close coordination can be very costly.”

According to Dr. Vines, improving communication between in-house therapeutic program teams can help facilitate the planning and resourcing of screening activities in lead optimization. “Better orchestration and management of all the processes involved in drug discovery among the various disciplines provides a number of important benefits.

“IDBS’ Assay Cascade Solution helps reduce the overall cycle time for biological screening processes. By providing business-intelligence dashboards of compound status, it lets scientific processes be carefully monitored and optimized and gives therapeutic program teams a single portal for progressing molecules through the process. Another benefit is the creation of an audit trail and transparency around the decision-making steps.”

IDBS offers a number of software solutions, such as the ActivityBase Suite, which provides drug discovery data management for biological, chemical, screening, and structure-activity reporting. Additionally, its E-WorkBook next-generation ELN captures data from disparate sources, particularly preclinical data.


According to IDBS, its Assay Cascade Solution helps reduce the overall cycle time for biological screening processes.

Service Architecture

Jeffrey McDowell, Ph.D., senior manager, IS-research informatics at Amgen, agrees that integrated access to data is important for productivity and for providing a complete view of available information for making decisions. He suggests that there are three approaches companies can employ in order to facilitate the integration needed for automation and drug discovery.

“The solution Amgen has implemented uses a service-oriented architecture approach to create a data-integration system. In general there are three approaches to solving integration, which vary depending on where the data connections are made. You can integrate in the data layer, where a consolidated data system (either physical or virtual) is created by refactoring the information into a single or federated system; in the application layer, where the application is responsible for implementing the rules used for making connections between data; or in the service layer, which is a set of independent components that reside external to individual applications and connect via a distributed technology such as the Simple Object Access Protocol (SOAP).”

According to Dr. McDowell, the service-architecture approach ultimately focuses on creating discrete services that are categorized and published through a service registry. This type of architecture not only allows for integration but also is easily extensible, permitting service consumers to discover new resources and make data connections automatically at runtime.

“The consumer application uses these services in a distributed manner, first discovering them at runtime by querying the registry using standardized categorizations. The registry returns services matching the request, which the consumer can then invoke. The service approach is easily extensible, as adding new resources simply involves adding services according to the standards defined by the architecture. And because the approach is data agnostic, with data-type restrictions defined only by the categorization schemes used, the architecture can be extended beyond its currently employed data types.”
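
The pattern Dr. McDowell describes can be sketched in miniature. The following toy, in-process registry, which is not Amgen’s implementation and deliberately omits the distributed SOAP layer, shows how a consumer discovers services by standardized category at runtime rather than binding to them in advance:

```python
# Toy, in-process sketch of registry-based service discovery; not Amgen's
# system, and the distributed SOAP layer is deliberately omitted.
from typing import Callable, Dict, List

class ServiceRegistry:
    """Maps standardized category names to published service callables."""

    def __init__(self) -> None:
        self._services: Dict[str, List[Callable]] = {}

    def publish(self, category: str, service: Callable) -> None:
        self._services.setdefault(category, []).append(service)

    def discover(self, category: str) -> List[Callable]:
        # Consumers query by category at runtime instead of hard-coding endpoints.
        return self._services.get(category, [])

registry = ServiceRegistry()

# Independently owned services publish themselves under a shared categorization.
registry.publish("compound.activity",
                 lambda compound_id: {"source": "assay_db", "ic50_nM": 42})
registry.publish("compound.activity",
                 lambda compound_id: {"source": "hts_store", "percent_inhibition": 87})

# A consumer application discovers whatever matches the category and invokes it.
results = [service("CPD-001") for service in registry.discover("compound.activity")]
print(results)
```

Adding a new data source then amounts to publishing another service under an existing category; consumers pick it up automatically the next time they query the registry.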

Integration for Systems Biology

Besides integrating data from various disciplines within an organization, another layer of complexity emerges when data from systems biology approaches must be integrated. Systems biology studies the interactions and interplay of multiple levels of biological information. “Although integration is conceptually simple, we have about 15 years’ worth of software that costs hundreds of millions of dollars and often still doesn’t work efficiently,” notes John Boyle, Ph.D., senior research scientist and director of informatics at the Institute for Systems Biology.

It’s not all doom and gloom, though, reports Dr. Boyle. “What’s important is simply that people within organizations take a step back and decide what they really need. There are a lot of redundant software solutions, but scientists need to switch to those most pertinent for drug discovery. Optimally, you need to ask three questions when deciding how to handle the problem.

“First, decide what will allow ad hoc data integration. Chips and instrumentation come and go; what is needed is a way to allow integration when things change.

“Second, find something that is easy to use. Scientists don’t have the time to keep learning new systems. It’s inefficient, too. We’ve found the best systems are more natural and typically look more like a file system. This is a nonintrusive approach and easily allows other activities such as e-mail integration.

“Third, the data-integration solution must be easy to adapt. Science constantly changes; new paradigm shifts and new scientific findings must be accommodated. New solutions must have built-in flexibility so that they can rapidly be adapted to new uses, since you never know what will be discovered next.”

Dr. Boyle suggests that the field is in a growth spurt. “We’ve just started, really. We have a few growing pains, but it is clear that some commonalities are emerging. We are going to have to design solutions to be less formal and to meet science halfway. Instead of going for the latest trends, companies simply need to take the time and effort to take a good hard look at what is out there and what is the best way to meet their needs now and in the future.”
