As pharma looks for better ways to build its drug pipelines, researchers are improving upon existing technologies and broadening their usage. At CHI’s recent “High Content Analysis” conference, presenters discussed new and revamped tools, and showed how these advances are helping pharma companies keep their drug pipelines active and productive.
According to Louis Stancato, Ph.D., senior research advisor, cancer growth and translational genetics at Eli Lilly & Co., phenotypic drug discovery (PDD) was previously laborious, inefficient, and often led to dead ends. “But the advent of higher volume informatics technologies, together with the high content imaging (HCI) piece, really makes PDD possible in a way that wasn’t tenable until now.”
His presentation examined how HCI subpopulation analysis tools enable a high-resolution look into cancer cell function. “This discipline is ideally suited to HCI applications. By incorporating custom in-house informatics tools, we can advance molecules with novel mechanisms of action through the lead-generation process, in particular, chemical series previously discarded owing to perceived failure in conventional targeted approaches.”
Dr. Stancato says that informatics experts help his team design custom algorithms, custom analyses, and data viewers that help find phenotypic fingerprints of interest. “Iteratively, we run our SAR looking at this phenotypic fingerprint in much the same way a chemist would look at an IC50 against an enzyme.
“We might look at upwards of 10 different data points from the same cell, synthesized to give us a number that we can then use to assess our structure-activity relationships. This past summer, we launched an externally focused phenotypic drug discovery effort, called PD2, which is an open, global collaboration with academic and biotech institutions to help discover molecules.”
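The idea of collapsing many per-cell measurements into a single number that can be tracked across a chemical series, much as an IC50 would be, can be sketched as follows. This is a minimal illustration only, not Lilly's actual algorithm: the feature names, the reference profile, and the choice of mean Euclidean distance as the summary statistic are all assumptions.

```python
import numpy as np

def fingerprint_score(cell_features, reference_profile):
    """Collapse multiple per-cell measurements into one number.

    Illustrative sketch only. Here the score is the mean Euclidean
    distance of each cell's feature vector (e.g., ~10 HCI readouts)
    from a reference phenotype profile; a larger score means a
    stronger phenotypic shift.
    """
    cell_features = np.asarray(cell_features, dtype=float)    # cells x features
    reference_profile = np.asarray(reference_profile, dtype=float)
    distances = np.linalg.norm(cell_features - reference_profile, axis=1)
    return float(distances.mean())

# Two hypothetical compounds, each measured on 3 cells x 4 features
untreated = np.zeros(4)                   # reference phenotype (assumed baseline)
compound_a = [[0.1, 0.0, 0.2, 0.1]] * 3   # near-baseline phenotype
compound_b = [[2.0, 1.5, 1.8, 2.2]] * 3   # strong phenotypic shift

assert fingerprint_score(compound_a, untreated) < fingerprint_score(compound_b, untreated)
```

In an SAR campaign, such a score would be recomputed for each analog in a series, letting chemists rank structures by phenotypic potency rather than by activity against a single enzyme.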
Dr. Stancato also examined case studies of molecules that could not have been identified any other way. They were essentially thrown away because they did not work in the targeted setting as originally intended. “And if it weren’t for the imaging approach, we never would have known.”
Dr. Stancato’s group has helped many researchers with their imaging and informatics approaches by showing phenotypically that molecules thought to behave similarly actually behaved differently when examined with his group’s informatics tools. “Everything we do results in a phenotypic change regardless of where it is, and that response will define whether or not the molecules we identify will help patients.”
Leveraging Single-Cell Data
Technology has indeed caught up to the ability to track phenotypic changes in cells, noted Oliver Leven, Ph.D., head of screening services, Europe, Genedata. “High-content screening experiments produce rich information on phenotypic changes of individual cells when subjected to treatment with compounds, siRNAs, or other inducers.”
“Managing the resulting microscope images is one concern, but the larger challenges facing the researcher are the biologically meaningful interpretation and quantification of high-content screening outcomes, especially at higher throughput as HCS is applied more broadly and more often.
“For example, will you be able to distinguish cell subpopulations with differential responses, statistically aggregate them across cells, wells, and replicates, normalize signals and eliminate errors, and separate and quantify phenotypes and effects for thousands of compounds?”
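The aggregation chain Dr. Leven describes, from single cells up through wells and replicates with normalization against controls, can be illustrated with a small sketch. The scheme below is hypothetical: the z-score threshold for calling a cell a "responder," and the median as the replicate-level aggregate, are assumptions, not Genedata's actual method.

```python
import numpy as np

def well_score(per_cell_values, neg_ctrl_mean, neg_ctrl_std, threshold=2.0):
    """Score one well as the fraction of 'responder' cells.

    Hypothetical scheme: a cell responds if its readout, normalized as a
    z-score against the plate's negative controls, exceeds `threshold`.
    """
    z = (np.asarray(per_cell_values, dtype=float) - neg_ctrl_mean) / neg_ctrl_std
    return float(np.mean(z > threshold))

def compound_score(replicate_wells, neg_ctrl_mean, neg_ctrl_std):
    """Aggregate across replicate wells via the median, which
    dampens the effect of a single outlier well."""
    return float(np.median([well_score(w, neg_ctrl_mean, neg_ctrl_std)
                            for w in replicate_wells]))

# Negative controls on the plate define the baseline (assumed values)
neg_mean, neg_std = 1.0, 0.2
replicates = [[1.0, 1.1, 2.1, 2.0],   # replicate wells for one compound
              [0.9, 1.0, 1.9, 2.2]]
score = compound_score(replicates, neg_mean, neg_std)   # → 0.5
```

Run per compound across thousands of wells, this kind of pipeline is exactly what becomes hard to manage without automated, scalable infrastructure.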
These challenges grow considerably when researchers attempt to scale up to higher throughput. Dr. Leven’s presentation addressed the issue of high-content screening analysis within a high-throughput screening infrastructure, an approach that is being adopted more frequently by large pharma companies.
Dr. Leven noted that leveraging HCS data from the complex single-cell datasets—with millions of data points per plate—requires a scalable framework with automated data processing and intelligent management functions, including scientists’ review at any stage of the process.
“One point we made in the presentation is that, when using Genedata Screener for high-content screening analysis, users can reliably and efficiently create a hit list for a complex high-content screen,” said Dr. Leven. “It’s not easy to do, since there are many different features to be evaluated simultaneously, and currently this is done manually on an ad hoc basis without proper support. Today, however, technology is available that enables you to create a hit list with an initial rule set, run your quality control, and at any point go back to adapt the rules, and your changes will be reflected in the hit list—all your hit list criteria and decisions will be documented.”
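The workflow Dr. Leven outlines, an initial rule set that produces both a hit list and a documented trail of decisions, and that can be adapted and re-run at any point, can be sketched as below. The rule names, thresholds, and data fields are hypothetical; Genedata Screener's actual rule engine is not shown here.

```python
# A minimal sketch of rule-based hit selection with an audit trail.
# Every rule name, threshold, and field below is an assumption for
# illustration, not part of any vendor's real API.

rules = {
    "min_responder_fraction": lambda c: c["responder_fraction"] >= 0.4,
    "min_cell_count": lambda c: c["cell_count"] >= 200,   # crude toxicity filter
}

def build_hit_list(compounds, rules):
    """Apply every rule to every compound; record which rules failed,
    so each inclusion/exclusion decision is documented."""
    hits, audit = [], []
    for c in compounds:
        failed = [name for name, rule in rules.items() if not rule(c)]
        audit.append({"id": c["id"], "failed_rules": failed})
        if not failed:
            hits.append(c["id"])
    return hits, audit

compounds = [
    {"id": "CPD-1", "responder_fraction": 0.55, "cell_count": 350},
    {"id": "CPD-2", "responder_fraction": 0.10, "cell_count": 400},
]
hits, audit = build_hit_list(compounds, rules)
# Editing a threshold in `rules` and re-running regenerates both the
# hit list and the audit trail, mirroring the adapt-and-rerun workflow.
```

The key design point is that the rules, not manual picks, define the hit list, so changing a rule later deterministically updates both the list and its documentation.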