October 1, 2014 (Vol. 34, No. 17)
Robin Munro, Director, IDBS
Streamline Data Practices to Speed Journey from Laboratory to Clinic
It was inspiring to listen to Howard J. Jacob, Ph.D., professor of physiology and human and molecular genetics at the Medical College of Wisconsin, speaking a few years back about successfully treating a young boy with an extreme form of inflammatory bowel disease using genome sequencing. It shows that with the right timing, the right data analytics, and the right expertise, genomics can be brought successfully to the bedside.
So what needs to happen in genomic medicine to make this exceptional case more commonplace? Now, almost four years later, the ambition and vision are there, but major challenges still lie ahead.
Information sharing has to be guided by organizational policy, and medicine and clinical research need to move away from a muddied data landscape. Data silos still characterize the whole field, even after years of advances in integration technologies. There are plenty of examples where software systems and databases cannot be shared in real time between different groups because of organizational and compliance issues. Data ends up being massaged, copied, and, in some cases, corrupted or left incomplete. To make better use of genomics data, researchers need to make sure that they are also gathering and sharing good-quality clinical data.
The lack of standardized clinical data is another barrier for research-driven studies, and it hampers getting good-quality guidance back to the bedside. As John Quackenbush, professor of computational biology and bioinformatics at Harvard University, rightly notes in a recent article in Nature: “Publicly available datasets rarely include the right clinical information to define appropriate cohorts or test the relevance of a genomic signature.”
Furthermore, data analysis in the context of disease needs to be robust and easy to consume if it is to guide decisions. There is still a gap, and a lack of correlation, between the information generated by next-generation sequencing (NGS) technologies and the massive amounts of information available about disease.
We also have to be mindful of what people will be capable of in the future and ensure that the right ethical safeguards are in place to protect patients and their families. Genomes fundamentally make people identifiable. Legal regulation has to be in place to ensure public safety and to reassure people, so that using genomes routinely in medicine can become commonplace.
These challenges need to be overcome for genomics-driven, or even genomics-supported, medicine to succeed.
Diagnostic companies and CLIA laboratories are increasingly using high-throughput data to design new diagnostics. John Sninsky, vice president of discovery research at Celera, a subsidiary of Quest Diagnostics, is leading a team to discover and develop meaningful new therapies that improve human health. They have implemented a platform that enables deep sequencing analysis. The project focuses on providing an informatics platform for DNA sequencing workflows, data management, and analysis of sequence data. Applications are used to evaluate the technical quality of the sequencing assays and the variants they detect.
They have developed a mutation review portal that queries information from an in-house database designed and managed at Quest Diagnostics. This information is presented in an interactive web portal, together with the Integrative Genomics Viewer (IGV) from the Broad Institute, which helps scientists review the data. Annotation data from providers of curated information on pathways, drugs, diagnostics, and clinical trials is also collected, displayed, and used for reporting on individual patients. Clinical Laboratory Scientists (CLS) review, approve, and submit mutations to clinical directors for final clinical review for each patient. The review process is similar for each assay, which helps standardize the diagnostic techniques and will undoubtedly help with the application of genomic medicine in the clinic.
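A two-stage review like this is, at its core, a small state machine: a mutation is pending until a CLS reviews it, and only CLS-approved mutations reach a clinical director for final sign-off. The sketch below illustrates that workflow in Python; the class, state, and function names are my own illustrative assumptions, not Quest's actual system.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewState(Enum):
    PENDING = "pending"
    CLS_APPROVED = "cls_approved"
    CLINICALLY_APPROVED = "clinically_approved"
    REJECTED = "rejected"

@dataclass
class Mutation:
    gene: str
    variant: str  # e.g., HGVS-style notation
    state: ReviewState = ReviewState.PENDING

def cls_review(m: Mutation, approve: bool) -> Mutation:
    """First-pass review by a Clinical Laboratory Scientist."""
    if m.state is not ReviewState.PENDING:
        raise ValueError("mutation has already been reviewed")
    m.state = ReviewState.CLS_APPROVED if approve else ReviewState.REJECTED
    return m

def clinical_director_review(m: Mutation, approve: bool) -> Mutation:
    """Final clinical review; only CLS-approved mutations are eligible."""
    if m.state is not ReviewState.CLS_APPROVED:
        raise ValueError("mutation has not been approved by a CLS")
    m.state = ReviewState.CLINICALLY_APPROVED if approve else ReviewState.REJECTED
    return m
```

Enforcing the stage order in code, rather than by convention, is one way a portal can keep the review process identical across assays.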
The application of change to the clinic has to be driven by change at the research level. Initiatives like eTRIKS, which encourages data sharing in a collaborative software environment, will help facilitate the standardization of research.
“We need a platform to facilitate data-driven translational medicine using big data analytics and collaborative research for clinical investigators, analysts, and bioinformaticians,” explains Professor Yike Guo, from Imperial College London. “eTRIKS aims to foster new approaches for disease prevention, diagnosis, and treatment, ultimately redefining the way biomedical research is translated to better health.”
Collaborative multi-omics longitudinal clinical studies are the future of better healthcare. To enable better analysis of multi-omics data, standards need to be used to support reproducible research. This can, in turn, allow one study to be validated in the context of another. This is not a trivial challenge, and steps must be taken to harmonize how research data is generated and shared.
Researchers also need to standardize how samples are collected and processed to minimize experimental variation. Variation also arises from how analytical methods are applied: different techniques for identifying features in omics data have been shown to produce considerably inconsistent results when run on the same raw data.
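That inconsistency can be quantified directly: given the feature sets two tools report from the same raw data, the Jaccard index measures what fraction of all reported features both tools agree on. A minimal sketch, with invented variant calls purely for illustration:

```python
def concordance(calls_a: set, calls_b: set) -> float:
    """Jaccard index: fraction of all reported variants on which both callers agree."""
    union = calls_a | calls_b
    return len(calls_a & calls_b) / len(union) if union else 1.0

# Hypothetical variant calls from two pipelines run on the same raw sequence data
caller_a = {"chr7:55249071:C>T", "chr12:25398284:C>A", "chr17:7577538:C>T"}
caller_b = {"chr7:55249071:C>T", "chr12:25398284:C>A", "chr1:115258747:C>T"}

print(concordance(caller_a, caller_b))  # 0.5 — the callers agree on only half of all calls
```

A concordance well below 1.0 on identical input is exactly the kind of method-driven variation that standardized analytical pipelines aim to control.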
Data provenance is a must for improving how data is shared and used by others within and across organizations.
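In practice, provenance means attaching to every derived dataset a record of where it came from and how it was transformed, so a collaborator can trace a result back to its raw input. A minimal sketch of such a record follows; the field names and example values are assumptions for illustration, not any particular standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(source_data: bytes, source_name: str,
                      step: str, tool: str, tool_version: str) -> dict:
    """Capture enough metadata to trace a derived dataset back to its raw input."""
    return {
        "source": source_name,
        # Checksum ties the record to one exact version of the input file
        "source_sha256": hashlib.sha256(source_data).hexdigest(),
        "processing_step": step,
        "tool": tool,
        "tool_version": tool_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical example: recording an alignment step on a raw reads file
rec = provenance_record(b"raw reads...", "sample_042.fastq",
                        "alignment", "bwa", "0.7.10")
print(json.dumps(rec, indent=2))
```

Chaining such records, one per processing step, gives downstream users both the lineage of a dataset and a way to verify that the inputs they hold are the ones actually used.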
To overcome the challenges of poor, hard-to-reproduce data and drive research toward the clinic, a unique new program across Quebec will prospectively collect information about patients with many types of cancer. Scientists conducting serial biopsy trials will capture molecular signatures for omics data profiled from patients’ metastatic cancer tumors. This will help scientists understand cancer subtypes and their characteristics and determine the molecular signatures of therapeutic resistance to specific treatments. When a patient’s tumor presents a specific molecular signature, treatment can then be targeted to match specific proteins. This will improve patient outcomes and help patients avoid needless toxicity.
“Cross-organizational collaboration within translational science is crucial. Our patients want doctors and scientists talking and sharing with each other,” notes Gerald Batist, director of the Segal Cancer Centre at the Jewish General Hospital.
This approach will follow patients throughout their cancer trajectory. A large biobank and accompanying knowledgebase will make it easy to identify and access patient subgroups, supporting patient stratification, cohort identification, and new drug development.
Are We Nearly There Yet?
We are on the right path, but there is a long way to go. Everyone involved in the domain seems to agree that making the right data available to the right people at the right time is essential. It will be the key to making decisions based on standardized analytical methods and, subsequently, to improving the applicability of genomics data to clinical guidance and pathways.
Clinical practice needs to move further in the direction of longitudinal studies, so that future researchers will inherit a legacy of good clinical data, samples, and molecular data. This will help us understand why exceptional responders appear in studies.
As more and more data is generated, only those individuals and groups who evolve the way they work with patients, omics technology, analytics, collaborators, and regulators will really drive success in using genomics in the clinic.