November 1, 2017 (Vol. 37, No. 19)
Over the past few years, 3D cell culture models have gained great attention because they can promote levels of cell differentiation and tissue organization unachievable in conventional 2D culture systems. These platforms, such as organ-on-a-chip, biological scaffolding, and microdrop techniques, offer substantial advantages for studying disease mechanisms, drug discovery, and toxicity testing, and potentially for tissue-replacement therapies.
But these systems present formidable technical hurdles to developing universally adaptable methods for standardization and validation, including the need for high-content analysis instruments, methods, and analytics suited to 3D architecture.
At Cambridge Healthtech Institute’s 15th annual High-Content Analysis and 3D Screening meeting, to be held in November 2017 in Boston, participants from academia and pharmaceutical companies will focus on advances in HCA technologies and applications, including screening of 3D and physiologically relevant cellular models and data analysis techniques, and will present case studies and strategies for successful drug discovery.
Speakers from OcellO, the National Institute of Standards and Technology (NIST), and the Ludwig Institute for Cancer Research at the University of California, San Diego, talked to GEN about what they planned to cover in their presentations at the conference.
OcellO, a Netherlands-based contract research organization that offers drug screening and profiling services using 3D cell culture models of disease, says it has expanded the utility of patient-derived xenograft (PDX) mouse models by offering in vitro assays using 3D culture of the same tumor materials. OcellO’s high-content imaging and analysis platform, it says, enables in-depth evaluation of drug effects in a robust, high-throughput format.
Leo Price, Ph.D., OcellO’s CEO, described the challenges of developing meaningful models in cultured human 3D tissues. “More complex biology means higher potential for experimental variation,” he says. “This demands stringent standardization of protocols and adapting existing cell culture pipelines for 3D culture (i.e., for automated liquid handling).”
He further noted that capturing imaging data through a 3D stack requires a collection of about 50 images. “If done by confocal microscopy, this is slow. If done by conventional wide-field fluorescence microscopy, most of the data collected will be out of focus, so image deconvolution is required,” adds Dr. Price.
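As a rough illustration of the kind of post-processing Dr. Price alludes to, the short Python sketch below deconvolves a wide-field z-stack with scikit-image’s Richardson-Lucy routine. The file names, point-spread function, and iteration count are placeholders for illustration; this is not OcellO’s actual pipeline.

```python
# Illustrative only: Richardson-Lucy deconvolution of a wide-field z-stack.
# Assumes a ~50-plane stack and a measured 3D point-spread function saved as
# TIFFs (hypothetical file names); not OcellO's actual workflow.
import numpy as np
from skimage import io
from skimage.restoration import richardson_lucy

stack = io.imread("zstack.tif").astype(np.float64)   # shape: (planes, y, x)
stack /= stack.max()                                 # scale intensities to [0, 1]
psf = io.imread("psf.tif").astype(np.float64)
psf /= psf.sum()                                     # PSF should integrate to 1

# Deconvolve the full 3D stack; 30 iterations is a typical starting point
# that would be tuned per assay.
deconvolved = richardson_lucy(stack, psf, 30)
io.imsave("zstack_deconvolved.tif", deconvolved.astype(np.float32))
```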
Resolving Disparities between in Vivo and in Vitro Studies
“Image analysis of a 3D image stack is more challenging and computationally more demanding than analysis of a 2D image. Compression of the image stack into a single 2D image is an option, but then most information is lost,” he explains. “This includes the spatial association between different fluorescence channels, preventing co-localization analysis.”
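To make the point about lost spatial information concrete, here is a minimal sketch, under assumed array shapes and an arbitrary intensity threshold, of how a maximum-intensity projection can overstate overlap between two fluorescence channels compared with analysis of the full stack.

```python
# Minimal sketch: collapsing a two-channel z-stack to 2D discards the z axis,
# so signals from different depths can appear "co-localized" in the projection.
# File name, array layout, and threshold are assumptions for illustration.
import numpy as np
from skimage import io

stack = io.imread("two_channel_stack.tif")   # assumed shape: (z, channel, y, x)
mip = stack.max(axis=0)                      # shape: (channel, y, x); z is lost

threshold = 100                              # arbitrary intensity cutoff
overlap_2d = np.logical_and(mip[0] > threshold, mip[1] > threshold).mean()
overlap_3d = np.logical_and(stack[:, 0] > threshold,
                            stack[:, 1] > threshold).mean()
print(f"apparent overlap in projection: {overlap_2d:.3f}")
print(f"overlap measured in the full stack: {overlap_3d:.3f}")
```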
Despite technical obstacles, Dr. Price cited advantages of the company’s high-throughput in vitro PDX screening platform: the supply of PDX tumor material is essentially unlimited, and the same tumor can be sourced repeatedly. “[This is] unlike patient tumor, which is limited in amount and can only be obtained once,” he points out. “Tumors are well characterized both genetically and with respect to drug responsiveness. This is generally not the case with a primary tumor from patients.”
Dr. Price also noted that a major headache in drug discovery is translation between in vitro and in vivo studies: results of in vitro studies with cell lines often cannot be recapitulated in vivo, partly because the biology of a 2D monolayer differs from that of a solid tumor, and because the cell lines used in vitro differ from the tumors in mice. “Our in vitro PDX model uses the same tumor cells in vitro and in vivo, enabling better translation,” he says.
He also described examples in which an unexpected drug sensitivity observed in in vivo PDX studies was recapitulated in the company’s in vitro PDX model. “We have also performed drug screens that have identified the potential therapeutic activity of unique molecules,” he says.
While Dr. Price says it’s too early to tell how the company’s screening approach will pan out in follow-up in vivo studies, he points to other success stories, “where in vitro 3D screens have identified novel compounds that have been successful in preclinical studies and are either in or scheduled for clinical studies.”
Visualizing Stem Cells
Anne Plant, Ph.D., chief of NIST’s Biosystems and Biomaterials Division, describes the development of tools that allow panning and zooming across hundreds of fields of view, permitting visualization of stem cell colonies as they expand and merge over days in culture.
Currently, she says, stem cell populations are often characterized using genetic methods. But methods that examine entire populations of cells do not lend themselves easily to comparing individual colonies with one another or to dynamic measurements, and they are unlikely to be sensitive enough to identify rare events.
“Imaging can provide quantitative phenotypic information and temporal and spatial information. Many image features can be quantified, many cells and colonies can be observed, and statistically rare events can be observed if sampling is sufficient,” she explains.
Microsoft’s Deep Zoom, a tiled-image technology similar to the one that underpins Google Maps, enables this type of cell analysis, according to Dr. Plant. The technology was extended to 3D using open-source projects such as OpenSeadragon. “We have developed a visualization system called Dipsomania that enables interactive capabilities and is implemented on top of the Seadragon JavaScript library,” she notes. These tools and NIST-generated data can be accessed at https://isg.nist.gov/deepzoomweb/home.
The pipeline includes applying a stitching routine to compose a single mosaic image from about 400 separate fields of view. A segmentation routine is applied to the aggregate image to identify colony objects, and the mosaic image in the fluorescence channel is corrected for uneven illumination, dark current, and background. A multi-resolution pyramid representation of each corrected and stitched 2D image is created, and a set of temporal gigapixel images is stored on a server, allowing for image transfer and viewing.1
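Two of those steps, correcting a stitched fluorescence mosaic and building a multi-resolution pyramid, might look roughly like the Python sketch below. The file names, background estimate, and tile size are assumptions; the actual NIST pipeline, including its stitching routine, is more sophisticated and is documented at the site above.

```python
# Sketch of two steps from the pipeline described above: flat-field/dark-current
# correction of a stitched fluorescence mosaic, then a multi-resolution pyramid
# for pan-and-zoom viewing. Not NIST's implementation; names are illustrative.
import numpy as np
from skimage import io, transform

mosaic = io.imread("stitched_fluorescence_mosaic.tif").astype(np.float64)
flat = io.imread("flatfield.tif").astype(np.float64)    # illumination profile
dark = io.imread("darkfield.tif").astype(np.float64)    # dark-current image

# Classic flat-field correction: (raw - dark) / (flat - dark), then subtract
# a crude background term and clip negatives.
corrected = (mosaic - dark) / np.clip(flat - dark, 1e-6, None)
corrected -= np.percentile(corrected, 1)
corrected = np.clip(corrected, 0, None)

# Multi-resolution pyramid: halve the image until it fits a single 256-px tile.
pyramid = [corrected]
while min(pyramid[-1].shape) > 256:
    pyramid.append(transform.rescale(pyramid[-1], 0.5, anti_aliasing=True))
for level, img in enumerate(pyramid):
    io.imsave(f"pyramid_level_{level}.tif", img.astype(np.float32))
```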
“The method has allowed us to quantitatively compare preparations of stem cell colonies with parameters including colony size, growth rates, and Oct4 gene expression,” explains Dr. Plant. Potential correlations between these features were examined. Despite differences in plating density and colony size, colonies expanded at similar rates. “We also observed heterogeneous expression of Oct4 within some colonies. Rare events, like loss of Oct4 expression, were identified,” adds Dr. Plant.
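A simplified version of that kind of per-colony measurement, assuming a labeled colony mask and a corrected Oct4 reporter image (hypothetical file names), could be written as follows.

```python
# Sketch: per-colony area and mean Oct4 reporter intensity from a segmented
# label image plus a corrected fluorescence channel. Illustrative only.
from skimage import io, measure

labels = io.imread("colony_labels.tif")           # integer label per colony
oct4 = io.imread("oct4_channel_corrected.tif")    # corrected reporter image

for colony in measure.regionprops(labels, intensity_image=oct4):
    print(f"colony {colony.label}: area={colony.area} px, "
          f"mean Oct4={colony.mean_intensity:.1f}")

# Growth rates would come from repeating this at each time point and comparing
# the area of the same colony label across frames.
```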
Although current users appear to be largely academic researchers, Dr. Plant maintains that NIST’s methodology is also useful for quality control of commercial preparations. “Ongoing work in our labs is addressing the use of image features as the basis of release criteria for cell therapy products,” she adds.
Analyzing Gene Knockdown Effects
Rebecca Green, Ph.D., from the Ludwig Institute for Cancer Research, University of California San Diego (working in the laboratory of Karen Oegema, Ph.D.), notes that while high-content screening for gene profiling has generally been limited to single cells, their team approached gene-function profiling by analyzing effects of gene knockdowns on the architecture of complex tissues in a multicellular organism, the nematode Caenorhabditis elegans (C. elegans).
“In our current work, we use the entire developing organism as our substrate and use RNAi to specifically target the genes required for embryonic development. We hope to map these developmental genes into pathways, and we expect these to include pathways required for neurogenesis, epidermal morphogenesis, and cell-fate specification, which are likely to have significant conservation with human developmental pathways,” Dr. Green says. Conceptually, “our current project is similar to our prior work with the reproductive organ of C. elegans. We need to identify groups of genes that show similar phenotypic ‘fingerprints’ to allow us to ultimately infer biological function.”
But analysis of this developmental dataset is much more complicated. Her team has a detailed time-course of three-dimensional data in two different strain backgrounds, for each RNAi condition. “Altogether, we will have over 27,000 four-dimensional movies to analyze. So, the crux of this project was to come up with automated computational methods to analyze this complicated phenotypic information in a way that deals with an evolving three-dimensional organism,” she says. Dr. Green and colleagues could not use machine-learning-based approaches to analyze this dataset mainly because these techniques require large sample sizes. Instead, they developed automated methods to detect biologically meaningful phenotypic defects. In one strain, the investigators, including Renat Khaliullin, Ph.D.—the postdoctoral fellow Dr. Green credits as the person who enabled the automated approach—dynamically tracked fluorescently labeled endodermal, mesodermal, and ectodermal nuclei to monitor defects in cell-fate specification.
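As a hedged illustration of what dynamic nucleus tracking can involve, the sketch below links nucleus centroids between consecutive frames by nearest-neighbor matching with a distance cutoff; it is a generic stand-in, not the method Dr. Khaliullin and colleagues actually implemented.

```python
# Generic sketch: link nucleus centroids between two consecutive 3D frames by
# nearest-neighbor matching with a distance cutoff. A stand-in for illustration,
# not the authors' published tracking method.
import numpy as np
from scipy.spatial import cKDTree

def link_frames(centroids_t0, centroids_t1, max_step=5.0):
    """Return (i, j) pairs matching nuclei in frame t0 to nuclei in frame t1."""
    tree = cKDTree(centroids_t1)
    dist, idx = tree.query(centroids_t0, distance_upper_bound=max_step)
    return [(i, j) for i, (d, j) in enumerate(zip(dist, idx)) if np.isfinite(d)]

# Made-up (x, y, z) centroids in microns:
t0 = np.array([[10.0, 12.0, 3.0], [40.0, 8.0, 5.0]])
t1 = np.array([[11.0, 12.5, 3.2], [41.0, 9.0, 5.1], [70.0, 70.0, 9.0]])
print(link_frames(t0, t1))   # -> [(0, 0), (1, 1)]
```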
A second strain was designed to allow tracking of morphogenetic changes during epithelial and neuronal development by monitoring changes in tissue position and shape. The team accomplished this by measuring the center of mass and moment of inertia over time for each fluorescently marked tissue, which can be done by considering pixel intensity values in each dimension at each time point.
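In code, those two shape descriptors, an intensity-weighted center of mass and a second-moment (moment-of-inertia) matrix, reduce to a few lines of NumPy; the function below is an illustrative sketch for a single tissue channel in a single 3D frame, not the team’s actual implementation.

```python
# Sketch: intensity-weighted center of mass and second-moment (inertia) matrix
# for one fluorescently marked tissue in one 3D frame. Illustrative only.
import numpy as np

def tissue_moments(volume):
    """volume: 3D array of pixel intensities for one tissue channel."""
    coords = np.indices(volume.shape).reshape(3, -1).T     # (N, 3) voxel coords
    weights = volume.ravel().astype(np.float64)
    total = weights.sum()
    center = (coords * weights[:, None]).sum(axis=0) / total
    centered = coords - center
    # Weighted second central moments; the eigenvalues describe how elongated
    # or compact the tissue is along its principal axes.
    inertia = (centered.T * weights) @ centered / total
    return center, inertia

# Repeating this per tissue channel and per time point yields the time series
# of position and shape used to build phenotypic fingerprints.
```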
As Dr. Green explains, “Unlike our previous work, where the data were scored manually in a binary manner, the features scored in the current study were found using completely automated approaches and the parameters take into account changes in measured feature values over time.”
Armed with phenotypic fingerprints for each gene target, the team sought to identify groups of genes that show similar developmental phenotypes. According to Dr. Green, “Each phenotypic fingerprint can be represented as a unique location on a plot, where the axes are made up of computationally scored features. Genes that cluster together in a neighborhood within this plot are phenotypically similar. Relative distances to other neighborhoods can also be measured. This allows us to ‘map’ gene-gene relationships.”
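A toy version of that neighborhood analysis, with a random stand-in for the real fingerprint matrix and an arbitrary cluster count, might look like this.

```python
# Toy sketch: cluster phenotypic fingerprints so that genes with similar
# feature profiles fall into the same neighborhood, then measure the distance
# between neighborhoods. Random data and cluster count are placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

# rows = gene knockdowns, columns = computationally scored (z-scored) features
fingerprints = np.random.default_rng(0).normal(size=(100, 12))

dists = pdist(fingerprints, metric="euclidean")        # pairwise gene distances
tree = linkage(dists, method="average")
clusters = fcluster(tree, t=10, criterion="maxclust")  # up to 10 neighborhoods

dmat = squareform(dists)
in_a, in_b = clusters == 1, clusters == 2
between = dmat[np.ix_(in_a, in_b)].mean()
print(f"mean distance between neighborhoods 1 and 2: {between:.2f}")
```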
Key considerations that were addressed in this work, she added, include dealing with highly complex 4D data; challenges associated with small sample size; scoring parameters in continuous, rather than discrete, space to capture a range of phenotypes and gene-gene relationships; and retention of rare phenotypes.
“The problems we are solving are problems that anyone dealing with a complex substrate, like a 3D tissue model, would encounter, and we think the concept and approach are highly translatable,” Dr. Green concludes.