January 1, 2011 (Vol. 31, No. 1)

Vicki Glaser, Writer, GEN

Greater Heights Are Being Reached in Cell-Based Screening and Preclinical Analysis

At the Image Informatics and Computational Biology Unit in the Laboratory of Genetics at the NIH, Ilya Goldberg, Ph.D., head of the unit, and colleagues are developing computational strategies based on pattern recognition to derive and interpret quantitative measurements from morphological assays.

By training computers to recognize patterns in biological image data acquired via automated, high-throughput microscopy, they are, in essence, teaching computers to think like humans and use intuitive thought processes to cull through and analyze the massive amounts of data generated by image-based high-content screens.

Once the computers know what to look for in the images, they can then apply image classifiers developed by Dr. Goldberg’s team that allow the computers to extract the desired information and transform it into quantitative data for analysis. The ability to extract quantitative data from high-content screens will allow researchers to perform a broader scope of assays including, for example, dose-response assays designed to generate standard curves.
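As a simple illustration of what such quantitative output enables, consider fitting a standard curve to per-dose readouts. The sketch below is hypothetical, not the NIH group's code: it assumes mean classifier scores for each dose are already in hand (the doses and scores shown are invented) and fits a four-parameter logistic model with SciPy.

```python
# A minimal sketch, assuming per-dose classifier scores are available;
# fits a four-parameter logistic (Hill) dose-response model to them.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ec50, hill):
    """Four-parameter logistic curve commonly used for standard curves."""
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** hill)

# Invented example data: doses in uM and mean classifier score per dose.
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
score = np.array([0.05, 0.08, 0.20, 0.45, 0.75, 0.92, 0.97])

params, _ = curve_fit(four_pl, dose, score, p0=(0.0, 1.0, 0.3, 1.0))
print(f"EC50 ~ {params[2]:.2f} uM")
```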

Dr. Goldberg will be among the presenters at CHI’s upcoming “High Content Analysis” meeting in San Francisco. He describes a key challenge in developing pattern-recognition models for biological image analysis, a technology initially developed for remote-sensing applications: telling the software how to identify the objects you want to measure, which in the case of cell-based assays are subcellular organelles stained with fluorescently labeled antibodies.

Using positive and negative control sets, the computer is trained to identify differences in morphological characteristics of interest, such as shape or intensity, without first having to identify cellular or subcellular structures in the image. This “pure pattern-recognition” strategy offers advantages over more conventional algorithm-based image processing that relies on locating individual cells or specific organelles, according to Dr. Goldberg. Teaching a computer to differentiate objects and recognize patterns in images, rather than giving it algorithms designed to mimic human thought processes, allows for the development of a more generic, multipurpose analytical approach.

Algorithms and parameter-based methods are designed for specific image-processing tasks. Pattern recognition can be applied more broadly to a variety of image-based quantitative assays.

Compared to humans, computers can distinguish much more subtle differences in cell and organelle morphology using the data obtained from assays performed on automated microscopy systems. With adequate training, computers are better able to pick out the characteristics and specific changes of interest in a cell from the sea of information produced, which includes incidental observations and metadata about how the image was acquired.

Goldberg’s group developed an image-classification system that incorporates more than 2,000 numerical image descriptors. The computer is taught how to apply the image classifier to translate the qualitative information obtained during image analysis to quantitative data. By comparing a variety of control datasets, the computer determines how to distinguish natural experimental variation in an image (and filter it out of the analysis) from aspects of the image that help distinguish it from other images.

The computer is also taught techniques for filtering through the descriptors and ranking them according to which are most effective in helping it distinguish between the different control sets. The computer then uses this image classifier to analyze new datasets.
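The general idea can be approximated in a few lines: score each descriptor by how well it separates the positive and negative control images, then keep only the top-ranked descriptors for classification. The sketch below is a minimal illustration under that assumption; the array names and the Fisher-style scoring are illustrative choices, not the group's published implementation.

```python
# A minimal sketch (not the NIH group's actual code): rank ~2,000
# numerical image descriptors by how well each one separates the
# positive and negative control sets.
import numpy as np

def fisher_scores(pos, neg):
    """Fisher-style discriminant score per descriptor.

    pos, neg: 2-D arrays (images x descriptors) of feature values
    computed from the positive and negative control images.
    """
    mean_diff = (pos.mean(axis=0) - neg.mean(axis=0)) ** 2
    pooled_var = pos.var(axis=0) + neg.var(axis=0) + 1e-12  # avoid /0
    return mean_diff / pooled_var

def rank_descriptors(pos, neg, keep=200):
    """Return indices of the `keep` most discriminative descriptors."""
    scores = fisher_scores(pos, neg)
    return np.argsort(scores)[::-1][:keep]

# Synthetic data standing in for real control-image descriptors:
rng = np.random.default_rng(0)
pos = rng.normal(0.0, 1.0, size=(50, 2000))
neg = rng.normal(0.0, 1.0, size=(50, 2000))
neg[:, 10] += 3.0  # descriptor 10 genuinely differs between classes
print(rank_descriptors(pos, neg, keep=5))  # descriptor 10 ranks first
```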

One downside of this pattern-recognition strategy is the difficulty researchers often have interpreting what the computer is “seeing” and basing its results on, says Dr. Goldberg.

One of the ways the NIH group has tested and applied pattern recognition in high-content screening studies is to characterize the morphological transitions that cells undergo as an organism ages. In a screen designed to study muscle degeneration in worms as a determinant of physiological age, the group trained the computer on a classifier derived from images of the neuromuscular cells that make up the pharynx, collected from age-grouped Caenorhabditis elegans.

The computer learned how to determine physiological age based on quantitative analysis of the tissue architecture and pharynx function. The results led Dr. Goldberg’s group to conclude that rather than being a gradual process, aging in worms occurs in three distinct stages. The group is expanding on this experimental approach to explore how it can apply pattern recognition and quantitative image analysis to identify other types of nonlinear transitions that are governed by biological processes.

Modeling Biological Systems

“Advances in high-content imaging technology, multiparametric image analysis, and automation of screening in cell-based models have enhanced our ability to model and understand disease biology at increasingly earlier stages in drug discovery,” says Beverley Isherwood, Ph.D., team leader in the advanced science and technology laboratory at AstraZeneca. The group is applying high-content analysis in complex cell models to assess drug efficacy in single agent and combination drug screens. They are developing co-culture, 3-D, and kinetic models of biological processes to simulate complex biological systems for profiling the mechanism of action (MoA) of experimental compounds.

One example is a human primary co-culture model of angiogenesis that is useful for performing multiplexed assays capable of identifying vascular modulating agents, obtaining MoA information, and predicting cytotoxicity in a high-throughput screen upstream of preclinical testing.

“This model allows us to evaluate not only direct effects on human endothelial and stromal cells, but also effects on autocrine and paracrine signaling between cells,” Dr. Isherwood explains. The company has put in place end-to-end automation of long-term co-culture screens, from cell plating through to data processing.

The researchers are extending these approaches to ex vivo tissue analysis, enabling the incorporation of kinetic information and the study of cells and tissue in co-culture and 3-D environments, “which is allowing us to apply high-content approaches to more complex models of disease biology” for rapid repositioning of drugs and early-stage predictions of the safety and efficacy of a drug or drug combinations, adds Dr. Isherwood.

In the area of oncology, the group is applying high-content screens to generate compound phenotypic fingerprints by assessing compound activity in cells derived from different patient backgrounds to understand how genetics affects response to treatment. They are exploring the ability to apply phenotypic high-content analytical approaches that integrate both kinetic and endpoint measures of phenotype across in vitro and ex vivo clinical samples.

Evidence of a phenotypic effect in vitro that correlates with a similar finding in clinical samples strengthens the predictive potential of the assays and leads to the identification of biomarkers of efficacy.

Dr. Isherwood believes that, in the future, the application of molecular pathway profiling techniques and emerging imaging modalities such as fluorescence lifetime will make it possible to gather even more information from a screen and to identify endpoints that facilitate the translation of phenotypes to in vivo and clinical outcomes.


Tubule formation in a human primary co-culture model of angiogenesis [AstraZeneca]

Reasoning Across Datasets

The BioAssay Ontology (BAO), developed by the Center for Computational Science at the University of Miami Miller School of Medicine, is designed to provide an ontology-driven, semantic description of data generated from high-throughput biological screens that can be mined in an integrated, inferential fashion.

Combined with software tools that allow users to browse, query, and explore diverse datasets, the BAO provides a standardized approach to facilitate data retrieval and analysis and allow for the integration of data from multiple high-content and/or high-throughput screens.

The ability to search large amounts of diverse data and establish relationships on a conceptual level will allow computers to reason, draw conclusions, and seek answers to biological questions. The BAO project also encompasses a data-curation component in which the results of many annotated assays drawn from PubChem are integrated with the ontology.

The University of Miami group has released a beta version of its BAO software, which will be more broadly available by early 2011. A main goal of the BAO project was to create a technology that could apply inference/reasoning across large datasets, according to Stephen Schürer, Ph.D., assistant professor at the university and a member of the BAO project team.

This would be analogous to typing even a simple question into a search engine, explains Dr. Schürer. While search engines can cull through datasets looking for specific words or phrases, when presented with a question they cannot readily mine a database for relevant information and process that information into an answer.

In an ontology-based system, a concept is not defined merely by a word or phrase, but in a form—and using a standardized vocabulary—that a computational system can understand. A particular word, for example, would not only have a specific meaning associated with it, but would also have relationships that link it to other words and concepts.

“The first version of our software cannot do this yet; it cannot make inferences of scientific relevance,” says Dr. Schürer. But the group is working toward a system that will ultimately be able to integrate the results of screening studies, gene-expression data, findings from knock-out experiments, and knowledge of biochemical pathways, cell-signaling networks, and other cell- and systems-biology information, and present it in a way that allows a computer to identify relationships and make inferences. A computer could then answer questions such as, “In which types of biologies/assays are these compounds active?” and make determinations such as whether a “hit” on a screen is an artifact of a particular screening method or whether the compound is active across multiple assays.
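A toy example conveys the flavor of such a system. The sketch below uses Python's rdflib to encode a few linked concepts and then answer a question by querying over their relationships; the class and property names here are invented for illustration and are not the actual BAO vocabulary.

```python
# A toy illustration (invented names, not the BAO schema) of how an
# ontology links concepts so a computer can query over relationships.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/bioassay#")
g = Graph()
g.bind("ex", EX)

# Concepts are linked to one another, not merely labeled with words.
g.add((EX.KinaseAssay, RDFS.subClassOf, EX.BiochemicalAssay))
g.add((EX.Assay42, RDF.type, EX.KinaseAssay))
g.add((EX.Assay42, EX.testsCompound, EX.CompoundA))
g.add((EX.Assay42, EX.hasReadout, Literal("luminescence")))

# Ask: in which assays (of any biochemical subtype) is CompoundA tested?
q = """
SELECT ?assay WHERE {
    ?assay a/rdfs:subClassOf* ex:BiochemicalAssay .
    ?assay ex:testsCompound ex:CompoundA .
}
"""
for row in g.query(q, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.assay)  # finds Assay42 via the subclass relationship
```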

The challenge is to enable “reasoning across huge datasets,” says Dr. Schürer, describing this as an area of active research in computer science. One solution may involve cloud computing. “Because all the data is directly or indirectly related, inferences across large datasets can likely provide novel, meaningful insights.”

David Andrews, Ph.D., professor of biochemistry and biomedical sciences at McMaster University, will describe his group’s work using automated microscopy to explore cell physiology in a presentation sponsored by PerkinElmer.

Dr. Andrews identifies three main trends in high-content screening for drug discovery. First, there is demand for more realistic assay conditions using live cells and, increasingly, a move toward primary cell cultures. To achieve this, Dr. Andrews suggests that temperature control of the cell cultures during screening is essential, that carbon dioxide is helpful but not required, and that humidity control is important but can be maintained simply by covering the plate with a lid.

Temperature control requires the use of an incubator, which can either be part of the imaging system as a built-in or modular component, or a stand-alone unit that is able to communicate with the imaging and robotics platforms for rapid and efficient transfer of plates to/from the incubator and viewing stage.

The second trend is the emergence of numerical image-analysis technology that is enabling increasingly quantitative output from imaging studies. High-content assays are moving beyond descriptive results, such as the translocation of a molecule from the cytoplasm to the nucleus, to more sophisticated imaging screens capable of generating intensity-based data, such as nuclear/cytoplasmic area or nuclear density measurements, with associated standard deviations. Using numerical image analysis, “we can make more than 500 measurements per cell,” says Dr. Andrews.
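As a concrete, simplified illustration of numerical image analysis (not Dr. Andrews' pipeline), the sketch below segments nuclei in a synthetic image with scikit-image and extracts a handful of per-cell measurements; a production screen would compute hundreds of such features per cell.

```python
# A minimal sketch of per-cell measurement extraction with scikit-image.
import numpy as np
from skimage import filters, measure

# Synthetic stand-in for a nuclear-stain image: two bright blobs.
img = np.zeros((128, 128))
img[20:40, 20:40] = 1.0
img[70:100, 60:90] = 0.6
img += np.random.default_rng(0).normal(0, 0.05, img.shape)

# Segment nuclei and label connected regions.
mask = img > filters.threshold_otsu(img)
labels = measure.label(mask)

# Per-cell measurements; real screens extract hundreds per cell.
table = measure.regionprops_table(
    labels, intensity_image=img,
    properties=("label", "area", "perimeter", "eccentricity",
                "mean_intensity"),
)
for i in range(len(table["label"])):
    print({k: round(float(v[i]), 3) for k, v in table.items()})
```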

Furthermore, computers are able to distinguish many more subtle changes than are visible to the human eye, he notes. These small differences can be quantified and can, for example, allow the software to differentiate a stress response from the early stages of cell death in response to the introduction of a cytotoxic agent, distinguishing necrosis from apoptosis. In this way, high-content screening is helping researchers uncover drug mechanisms, Dr. Andrews adds.

The third trend is the growing use of fluorescence lifetime in high-content screening, which probes the chemistry underlying biological processes and can be used, for example, to measure protein-membrane interactions or pH changes. Dr. Andrews’ group is using a modified beta version of PerkinElmer’s Opera™ confocal microplate imaging reader to explore novel applications of fluorescence lifetime assays.
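Lifetime itself is typically estimated by fitting an exponential decay to time-resolved intensity data. The sketch below is illustrative only: it fits a mono-exponential model, I(t) = A*exp(-t/tau), to a synthetic decay trace with SciPy; the numbers are invented.

```python
# A minimal sketch of fluorescence-lifetime estimation by fitting a
# mono-exponential decay to a time-resolved intensity trace.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

# Synthetic decay: true lifetime 2.5 ns, with added measurement noise.
t = np.linspace(0, 12.5, 256)  # ns
rng = np.random.default_rng(1)
signal = decay(t, 1000.0, 2.5) + rng.normal(0, 10.0, t.size)

(amplitude, tau), _ = curve_fit(decay, t, signal, p0=(500.0, 1.0))
print(f"fitted lifetime: {tau:.2f} ns")  # ~2.50
```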

Advancing High-Content Imaging

Dev Mittar, Ph.D., senior scientist at BD Biosciences, will present cell-based screening data for the identification and characterization of cell-surface markers for a monocyte-macrophage cell differentiation model using flow cytometry and high-content imaging. Together, says Dr. Mittar, “the two complementary technologies provide a more comprehensive single-cell analysis.”

BD Pathway™ high-content cell analyzers are CCD camera-based automated cellular imaging systems that feature a selectable spinning disk that allows for confocal in addition to widefield imaging. The BD Pathway 435 is a benchtop unit designed for fixed-cell assays. The BD Pathway 855 includes an environmental chamber with temperature and CO2 control to support live-cell imaging, including time-lapse studies, in 96- or 384-well plates, and is capable of greater than four-color imaging. A fixed-stage design on both models keeps cell samples stationary while the objective moves across the plate. In this way, “you can image settled suspension cells without disturbing them,” says Dr. Mittar.

BD developed the new Version 1.7 data-management tools for its AttoVision™ software to overcome the problems associated with data overload from high-content screening. The tools are part of a client/server-based system that allows multiple users to access data stored in a central repository, to add metadata to experimental results, and to select from a variety of query functions. The software extracts quantitative data from an image and then analyzes the data, generating graphs, charts, dose/response curves, and other forms of built-in or user-specified reporting tools.

Dr. Mittar will also describe the company’s new BD Lyoplate™ Human Cell Surface Marker Screening Panel, which enables direct, antibody-based profiling of 242 cell-surface markers using either flow cytometry or high-content imaging.


Field of view (FOV) comparison between a CCD and an sCMOS camera. [BioImaging Solutions]

Spinning-Disk Technology

Yokogawa Electric, through its western U.S. distributor BioImaging Solutions, will introduce its CellVoyager™ confocal high-content imaging systems, based on the company’s spinning-disk technology, to the U.S. and European markets in early 2011.

Baggi Somasundaram, Ph.D., sales and marketing specialist for BioImaging Solutions, will demonstrate the capabilities of the benchtop CV1000 system, designed for the research market, during a technology showcase session at the meeting. He will also present the advantages of the CV6000 system, designed for high-throughput HCA in 6-, 24-, 96-, or 384-well plates for drug discovery screening applications.

Dr. Somasundaram describes the CV1000 as a highly automated benchtop confocal imaging system whose precision and resolution make it ideal for basic research and assay development.

The CV6000 offers the same high resolution as the CV1000 at higher throughput, achieved through automation and multiple detectors. It can perform three-color imaging of a 96-well plate in one minute and of a 384-well plate in five minutes. The CV6000 images over a large coverage area “using an advanced five megapixel camera that captures an image field four times larger than the conventional CCD cameras currently used,” says Dr. Somasundaram.

For live-cell imaging, both the CV1000 and CV6000 contain a built-in incubator chamber and feature automated X-Y stage adjustment to facilitate long-term observation of cells as they change over time and in response to the addition of a drug or introduction of other stimuli. The systems include the software needed to perform parallel data processing, and users can customize the data-analysis algorithms.

Yokogawa recently announced a joint development agreement with the German Center for Neurodegenerative Diseases to collaborate on the development and application of cellular assays for HCA screening of compounds against neurodegenerative disease targets. Yokogawa will use this experience to enhance specific functions of the CV6000.

Robert Graves, senior applications scientist at GE Healthcare Life Sciences (www.gelifesciences.com), will present the company’s new Zebrafish Analysis Plug-In for the IN Cell Investigator 1.6 software. The plug-in module enables automated organ-based analysis of zebrafish embryo images acquired from any microscope, and is optimized for use with GE’s IN Cell Analyzer 2000 system, which performs whole-well imaging in 96-well plates.

The system was designed for cellular assays and imaging of small organisms. Applications include testing of drug efficacy and toxicity. The IN Cell Analyzer 2000 images a large field of view at high resolution, capturing an entire well of a 96-well plate in a single image.

Using transmitted light imaging, the system can produce label-free images of zebrafish embryos with sufficiently high resolution to enable organ recognition. Taking advantage of the well-defined and distinct shapes of individual zebrafish embryos, GE scientists created a flexible, geometric digital model that the software can use as a reference to identify specific organs.

The built-in flexibility allows the software to make adjustments to the model to fit it to a particular embryo image; in this way, the software can make measurements even when the zebrafish is somewhat deformed. The IN Cell Analyzer 2000 can also obtain fluorescent images, and these can be linked to the digital model as well, thereby allowing quantification of fluorescence signals from different organs.
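The sketch below gives a highly simplified, hypothetical flavor of such model fitting (it is not GE's algorithm): a least-squares affine transform maps landmark points on a reference shape to the corresponding points detected in an embryo image, so that organ regions defined on the model can be carried over to the image.

```python
# A toy sketch of fitting a reference shape model to an observed embryo
# outline via a least-squares affine transform, so regions defined on
# the model map onto the image.
import numpy as np

def fit_affine(model_pts, image_pts):
    """Least-squares affine transform mapping model points to image points.

    model_pts, image_pts: (N, 2) arrays of corresponding landmarks.
    Returns a function applying the fitted transform to new points.
    """
    n = len(model_pts)
    A = np.hstack([model_pts, np.ones((n, 1))])  # homogeneous coords
    coeffs, *_ = np.linalg.lstsq(A, image_pts, rcond=None)
    return lambda pts: np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs

# Invented landmarks on the model and their (scaled, shifted) positions
# detected in a particular embryo image:
model = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 2.0], [5.0, 1.0]])
image = model * 1.3 + np.array([40.0, 25.0])

transform = fit_affine(model, image)
organ_region = np.array([[4.0, 0.5], [6.0, 1.5]])  # box on the model
print(transform(organ_region))                     # box in the image
```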

What sets the Zebrafish Analysis Plug-In apart is “its organ-based approach,” says Ahmad Yekta, Ph.D., staff scientist at GE Healthcare Biosciences. It can identify 14 different organ regions, yielding 19 system-defined morphometric measurements including length and area. Users can also customize the software with additional measurements that will then be collected automatically, such as measures of curvature, intensity, transparency, or granularity.

Mark Collins, global director of marketing for the Life Science Research-Cellomics business unit of Thermo Fisher Scientific (www.thermofisher.com), will introduce the company’s new personal cell-imaging platform, the CellInsight™, during the HCA technology showcase. Designed to overcome many of the barriers to entry into the field of high-content screening, the CellInsight incorporates many of the features of the company’s ArrayScan VTI HCS Reader and High Content Informatics (HCi) data-management and analysis platform.

In particular, Collins points to the system’s ease of use, its software-guided assay-development feature, and its solid-state construction: the optical light train has no moving parts except the filter and stage, and it is powered by a four-color LED engine, minimizing maintenance needs. For assay development, users select the appropriate algorithm and assay-design modules built into the company’s iDev software, which interactively guides users through training the algorithm to differentiate between positive and negative controls for a particular assay.

Collins compares the cost of the system, about $100,000, to that of a high-end plate reader, making HCA more affordable and accessible for individual laboratories and researchers. Describing its speed, he reports a “time to decision”—from image acquisition through data analysis—of about 3 minutes for a typical benchmark assay in a 96-well plate, less than 15 minutes for a 384-well plate, and about 60 minutes for a 1536-well plate.

“When you want to scale a high-content assay and run it in different laboratories across an organization, a personal cell-imaging system solves the problem,” says Collins.


High-content assay optimization for BD Lyoplate™ human cell surface marker screening: Representative pseudocolored merged images from antibody (green) and DAPI (blue) channels of differentiated THP-1 macrophages from control, CD11b (positive), and CD14 (negative) antibodies, respectively. Graph shows the average intensity quantified from antibody channels from control, CD11b, and CD14 wells (32 wells each) with Z’-factor values for the assay.

Potential in Stem Cells

Improving the workflow of high-throughput, high-content screening in stem cells—from image acquisition to data analysis—is a key area of technology development at Molecular Devices. Evan Cromwell, Ph.D., director of research, describes the use of stem cells as one promising solution to the demand for more biologically relevant assay systems in drug discovery.

Stem cells can be readily grown in culture and expanded to produce ample quantities of cells for screening applications, and they can be induced to differentiate into various cell types of interest.

“To facilitate the development of new stem cell lines for screening of compound libraries for drug efficacy and toxicity early in drug discovery, and to advance their use for therapeutic purposes, an automated solution is needed to monitor stem cell expansion, differentiation, and cell-purification processes,” says Dr. Cromwell.

The main technology challenges at present are on the software side, he adds: there is a need for integrated software tools to optimize workflows and data management and to facilitate the extraction of useful biological information from the large volumes of data and metadata generated. These tool sets need to be flexible enough to allow users to optimize them for a particular application, notes Dr. Cromwell.

Molecular Devices’ most recent generation of MetaMorph® microscopy automation and image-analysis software, MetaMorph NX, offers a “much more intuitive interface,” Dr. Cromwell says. The new user-centered interface streamlines the workflow by integrating hardware setup and providing synchronized views of imaging data. The Dataset View feature keeps all of the images and data that belong to a particular dataset together in one workspace and displays the images in a grid with a user-specified layout. Acquisition parameters are stored with each image.
