High-Level Informatics, Analytics, and Machine Learning Drive Personalized Medicine Approaches
Whether they are concerned with a mutation of a single gene, or mutations in a combination of two or more genes, today’s oncologists look forward to using genomic information to more precisely target and treat cancer. But as more and more researchers delve into the work of discovering which genetic mutations are associated with specific subtypes of cancer, or which drugs are most effective in fighting the cancers identified by their signatures, they begin to test the limits of their informational tools—computing platforms, informatics packages, and analytic algorithms. These digital factors are driving (and sometimes hindering) advances in developing more precise and targeted therapies for individual cancer patients.
Many of today’s challenges to increasing the precision of cancer treatment are directly related to both the complexity of data generated by genetic sequencing and the sheer volume of biomedical information contained in the published literature detailing new discoveries in the root causes of cancer and the drugs and therapies that most effectively treat it.
From biopsy (or blood draw), through sequencing and tissue analysis, to the eventual diagnosis and development of a treatment regimen, clinicians devoted to precision oncology are only as effective as their computational infrastructure is robust. According to David Jackson, Ph.D., chief innovation officer of diagnostic company Molecular Health, the ability to capture, store, and analyze vast datasets generated by molecular technologies is shifting the skills needed to treat cancer.
“Some people have described medicine as more of an art than a science,” Dr. Jackson said. “I think what you are seeing with the evolution of these technologies is [a shift in the other direction. These technologies] are driving medicine away from an art form and making it much more an information science.”
The challenge today is not in generating the sequencing information but in handling the data: integrating it within a broader medical informatics and patient-centric setting to provide clear decision support to the oncologists who rely on it.
Making Sense of the Data
At Molecular Health, the focus of the company is to find ways to use all available knowledge to provide treatment guidance based on each patient’s specific genetic profile. Although companion diagnostics on the market can suggest a particular drug based on a single cancer-causing mutation, the bulk of cancer treatment is significantly more nuanced, taking into account multiple mutations and multiple potential drugs, as well as other patient comorbidities and medications.
“The ultimate approach for making treatment decisions is a tripartite approach that is looking at tumor-specific information, patient-specific information, and then the patient-intrinsic information, like co-medications,” Dr. Jackson explained.
This approach requires an encyclopedia of information and data, not only on available research for known mutations and tumor types, but also on drug efficacy, drug-drug interactions, and the like, all curated from public sources. Molecular Health gathers this information globally, has its team of curators assign relevance scores (peer-reviewed, in-patient studies score highest), and finally runs a patient’s tumor sequence through its proprietary analytics engine to generate a report that lists treatment options ordered by likelihood of efficacy. The report also includes links to relevant studies, should clinicians want to delve deeper into the sources before making a treatment decision.
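The curate-score-rank pipeline can be sketched in miniature. Everything below is illustrative: the weight values, field names, and scoring scheme are assumptions of this sketch, not Molecular Health's proprietary model, which only the company knows.

```python
# Hypothetical relevance weights: peer-reviewed, in-patient studies score highest.
EVIDENCE_WEIGHTS = {
    "peer_reviewed_clinical": 1.0,
    "case_report": 0.6,
    "preclinical": 0.3,
}

def rank_treatments(evidence):
    """Aggregate curated evidence into a ranked list of treatment options.

    `evidence` is a list of dicts with keys: drug, source_type, citation.
    Returns (drug, score, citations) tuples ordered by descending score,
    so the report both prioritizes options and links back to the sources.
    """
    scores, citations = {}, {}
    for item in evidence:
        weight = EVIDENCE_WEIGHTS.get(item["source_type"], 0.1)
        scores[item["drug"]] = scores.get(item["drug"], 0.0) + weight
        citations.setdefault(item["drug"], []).append(item["citation"])
    return sorted(
        ((drug, score, citations[drug]) for drug, score in scores.items()),
        key=lambda entry: entry[1],
        reverse=True,
    )

# Invented example evidence for two hypothetical drugs.
report = rank_treatments([
    {"drug": "drug_a", "source_type": "peer_reviewed_clinical", "citation": "PMID:111"},
    {"drug": "drug_b", "source_type": "preclinical", "citation": "PMID:222"},
    {"drug": "drug_a", "source_type": "case_report", "citation": "PMID:333"},
])
```

Here `drug_a`, backed by a clinical study and a case report, outranks `drug_b`, which rests on preclinical evidence alone, and each option carries its citations along for the physician to inspect.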
That said, the oncologist still has the final say in how each patient is treated. “What is sometimes misconceived in the field—especially among physicians—is that the computer is out to make their job redundant,” Dr. Jackson said. “In reality, this type of technology is doing nothing more than what a doctor would do if they had access to all the literature in the field” to compare the clinical and genomic information of individual patients.
Using this research, a doctor could compile the findings of all relevant studies and use them to create a prioritized, evidence-based approach to treating the individual patient. The problem is the time such research would require.
“The physician would have to return to his office after a one- to two-month sabbatical for that single patient,” Dr. Jackson added. “All these technologies are really doing is harvesting all the evidence and then, through the analytical methods, analyzing the data in the exact same way physicians would—if they had the time.”
Data Flow at the Clinical Level
Although diagnostic companies such as Molecular Health, Foundation Medicine, and others have made significant strides in providing actionable information and clinical decision support matrices for individual patients, the integration of sequencing and treatment data within the clinical setting remains a challenge.
According to Noah Hoffman, M.D., associate director of the Informatics Division in the Department of Laboratory Medicine at the University of Washington Medical Center (UWMC), designing systems that can handle the sequence data across the breadth of a medical system is a difficult task.
“Bringing a laboratory information system up in a laboratory is a fairly major undertaking,” said Dr. Hoffman. “Managing data flow between your laboratory information system and other applications you are using is a complex process.”
Today, even major electronic medical record (EMR) systems, such as those provided by Epic and Cerner, lack the ability to ingest sequencing data. In UWMC’s case, since directors in the clinical laboratory can’t send sequencing data directly to the hospital’s EMR, they instead create narrative accounts of the reports, which are sent to physicians in either a text or PDF format.
“That is fine to support clinical operations, but it leaves an enormous amount of structured data out of the patient record,” Dr. Hoffman stated. “What that does not provide is hyperlinks to relevant literature, or a decision matrix where you put in characteristics of the patients and let the system match the symptoms. It is a very static representation of the data.”
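The gap Dr. Hoffman describes is easy to illustrate by putting the same findings side by side as narrative text and as structured data. The field names below are hypothetical, invented for this sketch rather than drawn from any actual EMR or HL7 schema, and the variants are a made-up example.

```python
# A static narrative report, as it might arrive in the EMR as text or PDF.
narrative_report = (
    "EGFR exon 19 deletion detected; consider targeted therapy. "
    "KRAS: wild type."
)

# The same findings as structured data, which a decision-support
# system could query, match against patient characteristics, and
# link to supporting literature (evidence IDs here are placeholders).
structured_report = {
    "variants": [
        {"gene": "EGFR", "finding": "exon 19 deletion",
         "actionable": True, "evidence": ["PMID:12345678"]},
        {"gene": "KRAS", "finding": "wild type",
         "actionable": False, "evidence": []},
    ],
}

def actionable_genes(report):
    """A query only the structured form can answer reliably;
    the narrative form would need free-text parsing."""
    return [v["gene"] for v in report["variants"] if v["actionable"]]
```

The narrative string supports clinical operations, as Dr. Hoffman notes, but only the structured form lets software answer even a simple question like "which findings are actionable?" without brittle text mining.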
As hospitals look to integrate all data sources into the EMR system and use information contained in EMRs to inform patient care, this represents a significant gap. Physicians in most hospitals are already stretched for time adapting to the use of EMRs within their daily workflow. Needing to go outside the patient record is less than ideal.
“When you ask them to extend their activity into all kinds of external systems, the value of the decision support tool really declines,” Dr. Hoffman concluded. “There is so much activation energy needed to access the data.”
Automating the Tried and True
It wasn’t so long ago that the pathology report—generated in the hospital’s laboratory through the examination of tumor tissue samples—was the primary tool for both diagnosing and choosing a treatment for cancer. As next-generation sequencing has taken on a prominent role in pinpointing treatment, the examination of tissue samples has declined in importance. But that will likely change in the coming years, as new technology promises to automate the analysis of samples on tissue microarrays (TMAs).
One approach to the automation of tissue analysis is being developed by David J. Foran, M.D., at Rutgers University and the Cancer Institute of New Jersey. Working in conjunction with IBM and its World Community Grid, Dr. Foran and colleagues are teaching computers to differentiate among fat, epithelial cells, cancer cells, and the like via image analysis. By combining machine-learning technology with ready access to more than 100,000 tissue samples, these investigators aim to build reference libraries of specific types of cancer.
Dr. Foran believes the information contained in the image libraries can serve as additional information to help inform personalized cancer treatment. “I don’t see this as competing with sequencing; rather, I see it as complementary,” Dr. Foran insisted. “One of the reasons is in the tissue microarray. These are heterogeneous tissues. What tissue microarray allows you to do is localize the signal. So, you could say this particular biomarker has an affinity for the nucleus or the cytoplasm or one specific tissue. That information is often lost when you are doing standard gene sequencing analysis.”
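As a toy illustration of the machine-learning idea, a nearest-centroid classifier can assign a tissue core to a reference class based on a couple of image-derived features. The feature values and class centroids below are invented; real TMA pipelines extract far richer features from stained images and use considerably more sophisticated models.

```python
import math

# Hypothetical per-core features: (mean nuclear size, stain intensity),
# with invented values standing in for measurements from labeled cores.
training = {
    "fat":        [(2.0, 0.10), (2.2, 0.20)],
    "epithelial": [(5.0, 0.50), (5.4, 0.60)],
    "tumor":      [(9.0, 0.90), (8.6, 0.80)],
}

def centroid(points):
    """Mean feature vector of a class's labeled examples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# The "reference library": one centroid per tissue class.
CENTROIDS = {label: centroid(pts) for label, pts in training.items()}

def classify(features):
    """Assign an unlabeled core to the nearest class centroid."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))
```

The point of the sketch is the workflow, not the model: labeled cores define reference classes, and new cores are matched against that library, which is how a growing image archive can keep improving classification.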
There is also significant opportunity for this technology to be used to bolster information about which therapies work for specific cancers and which ones don’t. Dr. Foran pointed out that the TMA technology can be used to unlock information in decades-old cancer tissue samples currently stored in repositories around the country.
“These are patients for whom we already know the outcomes,” noted Dr. Foran. “You could imagine going back retrospectively and pulling out those cases. You could do some of the advanced studies that are available to us today that weren’t available then.”
Drs. Jackson, Hoffman, and Foran agree that better integration of all data relating to the treatment of cancer, and of the information provided to clinicians, is both the promise of, and a limiting factor in, accelerating personalized treatment.
“At present, multiple approaches—gene sequence analysis, tissue microarray analysis, and automated image analysis—are being developed in parallel,” remarked Dr. Foran. “What if these approaches could be combined by a computer? What if the computer could then be used to improve treatment accuracy? That is where things are moving.”
Chris Anderson is the former Chief Editor of Drug Discovery News, which he helped launch in 2005. ([email protected])
This article was originally published in the February 2015 issue of Clinical OMICs. For more content like this and details on how to get a free subscription to this digital publication, go to www.clinicalomics.com.