Preclinical imaging focuses predominantly on estimating a parameter of interest, while clinical imaging is driven primarily by diagnostic efforts.
Preclinical imaging applications in drug development require multidisciplinary strategies and thoughtful planning. Imaging technology selection and quantitative analysis—the ability to extract numeric information from the image data—play primary roles.
Advancements and challenges in preclinical imaging were discussed by industry, academia, and government thought leaders at the recent GTC conference “Imaging in Drug Discovery and Development.”
Previously, a major preclinical image-processing bottleneck was the manual segmentation of collected data, a slow process that did not provide enough relevant output. Semiautomated and fully automated processing routines did not exist; a week of data collection translated to three weeks of processing.
According to inviCRO, its instrument-agnostic informatics protocols enable users to fully or semiautomatically segment regions of interest; store them in an accessible cloud-storage solution; create aggregate spreadsheets, or arrays of numbers, from the images to plot; and apply statistics, pharmacokinetic models, or automated reporting engines.
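The aggregation step described above can be sketched in a highly simplified form: per-region statistics collected from segmented images and written out as a spreadsheet-ready table. The region names and numbers below are invented for illustration; this is not inviCRO's software.

```python
# Illustrative ROI aggregation: per-region values from segmented scans,
# written to a CSV "aggregate spreadsheet" and summarized with a statistic.
import csv
import io
import statistics

# Mean tracer uptake per segmented region of interest, per scan (invented numbers).
scans = [
    {"scan": "mouse01_day0", "liver": 2.1, "kidney": 3.4, "tumor": 5.8},
    {"scan": "mouse01_day7", "liver": 2.0, "kidney": 3.1, "tumor": 4.2},
]

# Build a spreadsheet-ready table from the segmented image data.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["scan", "liver", "kidney", "tumor"])
writer.writeheader()
writer.writerows(scans)

# A simple statistic across scans, e.g. mean tumor uptake for plotting or modeling.
tumor_mean = statistics.mean(row["tumor"] for row in scans)
print(buf.getvalue())
print(f"mean tumor uptake: {tumor_mean}")
```

From a table like this, the statistics, pharmacokinetic models, or reporting steps mentioned above can each consume the same aggregated numbers.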
“We measure life one voxel at a time,” commented Jack Hoppin, Ph.D., co-founder and managing partner. “Datasets used to be 128 × 128 × 50. Now they are 1,000 × 1,000 × 1,000 × 70, or more. Creating a platform that can maintain and handle that data quantity is complicated.”
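A back-of-envelope calculation shows why the dataset growth Dr. Hoppin cites is a platform problem. Assuming 16-bit voxels (the bit depth is an assumption, not stated in the article):

```python
# Raw storage for the two dataset shapes quoted above, assuming 2 bytes per voxel.
from math import prod

old_shape = (128, 128, 50)          # legacy 3D dataset
new_shape = (1000, 1000, 1000, 70)  # modern 4D dataset (x, y, z, frames)

BYTES_PER_VOXEL = 2  # assumed 16-bit depth

def size_gb(shape):
    """Raw storage in gigabytes for a dense array of 2-byte voxels."""
    return prod(shape) * BYTES_PER_VOXEL / 1e9

print(f"old: {size_gb(old_shape):.4f} GB")  # about 1.6 MB
print(f"new: {size_gb(new_shape):.0f} GB")  # 140 GB for a single dataset
```

Under these assumptions, a single modern dataset is roughly 85,000 times larger than its predecessor.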
An interdisciplinary activity, imaging requires assembly of the right team of experts to maximize return on investment.
“Just defining the question you want to answer with the imaging data may be hard,” noted Dr. Hoppin. “How do you focus your efforts correctly to extract the most useful data without wasting time? You can spend a lot of time trying to automate things you should do manually, and vice versa.”
“Today, everyone acknowledges that imaging analytics, a technical, quantitative approach to data analysis, should be part of the workflow. You think through what your image-processing platform will be a priori—data management, processing, and reporting. It can be the most challenging aspect of the process,” concluded Dr. Hoppin.
In cancer immunology and immunotherapy, a stumbling block arises when blood or tumor tissue is dissociated. When the immune cells are extracted and run through a flow cytometer, the tissue architecture, including the location of the immune cells, is lost. The use of tissue sections allows a better in situ understanding of these complex phenotypes of the tumor microenvironment.
Multiplexing on tissue sections brings different challenges compared to cells in a well or on a plate, and requires special expertise to retrieve similar information.
In a tissue section, entire cells or cell portions may be present. Morphologies, shapes, and the cellular environment add complexity.
In addition, not all cells may be of interest. For example, in cancer, stromal and tumor cells need to be differentiated. Technicians must patiently sit at a microscope, or a computer screen, to figure out which part of the tissue section contains the portion of interest. Automating that process is essential in large studies. In fluorescence imaging, 80–90% of the measured signal may be autofluorescence, a massive challenge.
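A standard way multispectral systems separate marker signal from a dominant autofluorescence background is linear spectral unmixing: each mixed pixel spectrum is modeled as a weighted sum of reference spectra and solved by least squares. The sketch below uses synthetic Gaussian spectra and is not PerkinElmer's implementation.

```python
# Linear spectral unmixing on a synthetic mixed pixel: recover per-component
# contributions (two markers plus autofluorescence) from one measured spectrum.
import numpy as np

wavelengths = np.linspace(420, 720, 31)  # 31 spectral bands (assumed)

def gaussian(center, width):
    """Idealized emission spectrum as a Gaussian over the band range."""
    return np.exp(-((wavelengths - center) / width) ** 2)

# Reference "pure" spectra as columns: two fluorophores and a broad background.
A = np.column_stack([
    gaussian(520, 25),   # marker 1 emission
    gaussian(620, 30),   # marker 2 emission
    gaussian(560, 120),  # broad autofluorescence
])

true_abundances = np.array([1.0, 0.5, 8.0])  # autofluorescence dominates
measured = A @ true_abundances               # the mixed pixel spectrum

# Least-squares unmixing separates the components again.
est, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(np.round(est, 3))  # ≈ [1.0, 0.5, 8.0]
```

Once the autofluorescence component is isolated this way, it can simply be excluded from downstream marker quantification.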
PerkinElmer addresses tissue section throughput and multiplexing issues with the Vectra Automated Multispectral Imaging System and inForm Tissue Finder software, which finds, images, and quantitatively analyzes the portions of interest, speeding up the workflow. The system enables the imaging and analysis of up to 10 markers in the same tissue section.
“With inForm Tissue Finder, you draw exemplary regions on a number of images, such as regions around epithelial cells or stromal cells, and a blank or nontissue. Then through an iterative, train-by-example process, the system uses machine learning to figure out which region of the image contains the various cell types.”
“The whole process takes under 10 minutes,” said James Mansfield, Ph.D., director of tissue analysis applications, life science and technology. “It is used primarily for pathology and toxicology with other applications in oncology, clinical trials, and drug discovery.”
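The train-by-example idea Dr. Mansfield describes can be illustrated with a generic nearest-centroid pixel classifier: the user labels a few example regions, and every remaining pixel is assigned to the closest class in feature space. inForm's actual machine-learning method is proprietary; the features, class names, and numbers below are invented.

```python
# Toy "train by example" tissue classification: hand-labeled example pixels
# define class centroids; unseen pixels go to the nearest centroid.
import numpy as np

# Example pixels per drawn region; rows are (feature1, feature2),
# e.g. two channel intensities (invented values).
examples = {
    "tumor":      np.array([[0.90, 0.20], [0.80, 0.30], [0.85, 0.25]]),
    "stroma":     np.array([[0.30, 0.70], [0.35, 0.65], [0.25, 0.75]]),
    "background": np.array([[0.05, 0.05], [0.10, 0.08], [0.07, 0.04]]),
}

# "Training": one centroid per labeled class.
centroids = {name: pixels.mean(axis=0) for name, pixels in examples.items()}

def classify(pixel):
    """Assign a pixel to the class with the nearest training centroid."""
    return min(centroids, key=lambda name: np.linalg.norm(pixel - centroids[name]))

print(classify(np.array([0.82, 0.28])))  # tumor
print(classify(np.array([0.08, 0.06])))  # background
```

In a real iterative workflow, misclassified regions would be relabeled and the model retrained until the segmentation is acceptable.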
Quantitative Label-Free Drug Imaging
Mass spectrometry imaging (MSI) is used to visualize the spatial distribution of compounds, biomarkers, metabolites, peptides, or proteins by their molecular masses. MSI looks at whole-body sections to quantify molecules in all organs quickly, as compared to LC-MS/MS. It can be used on small organs where high resolution is needed, or dissection is difficult, such as the brain or eye tissues.
“MSI detects thousands of molecules from a simple microlayer of biological tissue, but the tissue is composed of biomolecules that interfere with the quantification process,” said Jonathan Stauber, Ph.D., CEO and CSO, ImaBiotech. “This is termed the biological matrix, or tissue, effect.”
“For example, if we detect one molecule corresponding to a drug at the same concentration in the liver and the brain, we get different signal intensities due to different biological-matrix compositions. This has limited the ability to quantify molecules.
“We have developed a quantification protocol and software, Quantinetix, which take into consideration the biological-matrix effect by normalizing the datasets and the images, making absolute quantification possible,” said Dr. Stauber.
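The matrix-effect correction Dr. Stauber describes can be illustrated with a toy calculation, assuming a standard is spiked at the same known concentration into each tissue type to measure how strongly that matrix suppresses signal. The numbers are invented, and this is not Quantinetix's internal method.

```python
# Toy matrix-effect normalization: the same spiked standard responds differently
# in each tissue; dividing raw drug signal by the per-matrix response factor
# puts all tissues on a common quantitative scale.
standard_signal = {"liver": 1000.0, "brain": 500.0}  # ion counts (invented)

# Response factor per matrix, relative to liver as an arbitrary reference.
ref = standard_signal["liver"]
factor = {tissue: s / ref for tissue, s in standard_signal.items()}  # liver 1.0, brain 0.5

# Raw drug signal: the same true drug level looks twice as low in brain.
raw_drug_signal = {"liver": 500.0, "brain": 250.0}

corrected = {t: raw_drug_signal[t] / factor[t] for t in raw_drug_signal}
print(corrected)  # both tissues now read 500.0 on a common scale
```

After this normalization, a single calibration curve can convert corrected signal to absolute concentration across organs.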
MALDI (matrix-assisted laser desorption/ionization) imaging detects compounds by using a laser that ionizes molecules, while LESA (liquid extraction surface analysis) uses a micropipette to extract and detect biomolecules at a selected position on the tissue section.
ImaBiotech’s services combine these two complementary technologies with Quantinetix software and a biomarker databank to provide images with quantification of drug candidates and markers of efficacy, or toxicity, in a single experiment. This approach allows pharmacokinetic and pharmacodynamic studies in whole-body tissue sections at a resolution of 15 microns.