Not so long ago, the omics field was a highly siloed collection of specialized applications and technologies. Now, multiomics is going mainstream. But as researchers revel in single cell resolution, challenges in storing and harnessing the data loom large, just as they did two decades ago at the start of the NGS revolution.

GEN invited a group of multiomics experts to share their predictions of the potential and needs of multiomics in the near future.


The Growing Clinical Impact of Genomics

Madhuri Hegde, PhD
Senior Vice President and CSO
Revvity

Today, genomics laboratories are doing far more than assisting physicians with diagnoses. By integrating genetic data with insights from other omics technologies—such as proteomics, transcriptomics, and epigenomics—medical geneticists can provide a more comprehensive view of an individual’s health profile.

Advancements in sequencing technologies have revealed that approximately 6,000 genes are associated with around 7,000 disorders. These breakthroughs enable medical geneticists to direct patients with rare diseases or conditions to physicians who can offer targeted treatments. Landmark studies, such as the U.K.'s 100,000 Genomes Project and Project Baby Bear, have demonstrated the profound impact of genomics on healthcare decision-making, particularly for rare disease patients. As a result, more companies, including Revvity, are introducing innovative sequencing technologies and services to the market.

Awareness of and access to genetic testing vary significantly between countries globally, influenced by differences in national healthcare systems and the socioeconomic conditions of local populations. The presence of local genomics laboratories and population-specific sequencing efforts is essential to identify genetic variants unique to specific groups. In resource-limited regions, collaborations between public and private organizations can have a particularly transformative impact. Similarly, global pharmaceutical and biotechnology companies contribute by extending the reach of multiomics, whether through free or subsidized rare disease testing or the development of novel treatments for these conditions.

In addition to identifying suitable participants for clinical trials, advanced platforms for biochemical and genetic testing will increasingly be used in 2025 to monitor biomarkers and evaluate the effectiveness of specific therapies. The integration of multiomic data will also drive the next generation of cell and gene therapy approaches, such as CRISPR-based therapies. These programs hold immense promise in the years to come, potentially accelerating the discovery of new therapies and improving the quality of life for countless individuals.

A growing body of research is also exploring the clinical value of omics-based screening in asymptomatic individuals. As genome sequencing continues to uncover new insights and becomes increasingly cost-effective, whole genome sequencing (WGS) will also shift from being a diagnostic tool of last resort to a first-line diagnostic approach.


Multiomics at Single-Cell Resolution

Charles Gawad, MD, PhD
CSO, BioSkryb Genomics

One way to understand the current state of multiomics research is to think back to where we started with bulk genomic studies. Due to technical and cost constraints, investigators utilizing early next-generation sequencing platforms focused on specific regions of the genome or transcriptome. As sample preparation and sequencing technologies have improved while sequencing costs have rapidly decreased, obtaining genomic, transcriptomic, and epigenomic information from the same sample is now possible.

However, integrating these data types requires inference and deconvolution algorithms that only have a limited capacity to determine which changes are likely to occur in the same cells.

More recent technological advancements have enabled multiomic measurements from the same cells, allowing investigators to correlate and study specific genomic, transcriptomic, and/or epigenomic changes in those cells. Similar to bulk sequencing, we are now seeing studies examining more of each cell’s genome, transcriptome, and epigenome as sample preparation technologies continue to improve and sequencing costs continue to decline.

I also anticipate that, in addition to acquiring information from a larger fraction of the nucleic acid content of each cell, we will begin looking at larger numbers of cells and utilizing complementary technologies, such as long-read sequencing, to examine complex parts of the genome and full-length transcripts. Finally, the integration of both extracellular and intracellular protein measurements, including cell signaling activity, will provide another layer for understanding tissue biology.

Central to integrating these complementary measurements from the same cells will be the development of artificial intelligence-based and other novel computational methods to understand how each of these multiomic changes contributes to the overall state and function of that cell.

Single-cell multiomics is still a young field. I am eager to see how technological innovation over the coming years continues to transform our understanding of tissue health and disease at single-cell resolution.


Diving Deeper for Multiomics Data Analysis

Matt Newman
SVP & General Manager, Pharma & Diagnostics Business Development
DNAnexus

This is such an exciting time for multiomics research. Not only do scientists have unprecedented access to proteomics, genomics (e.g., long- and short-read whole genome sequencing, or WGS), and transcriptomics (e.g., RNA-seq), but also to new frontiers in spatial transcriptomics and single-cell platforms. This offers the 360-degree views of disease pathways, from inception to outcome, that are greatly needed to identify treatments and interventions for historically intractable diseases: from incurable genetic disorders to cancer to general aging. Translating this knowledge into the results patients need will require more than pulling large omics sets together and analyzing modalities in siloed workstreams. Rather, it will take new forms of data storage, infrastructure, and analysis: specifically, pulling together streams of large multiomics datasets and mining them holistically for insights that couldn't be achieved with any individual dataset.

While AI allows faster, deeper data dives and a powerful new path for discovery, scientists need analysis tools designed specifically for multiomics data. Most analytical pipelines work best for a single data type, such as proteomics or RNA-seq, so scientists today often have to move data back and forth across multiple analysis workflows to get the answers they're looking for. That is not a robust model for a future in which multiomics becomes a go-to approach for scientific inquiry. While we've seen improvement over time, especially with cloud vendors offering more access to these resources, we need more versatile models to handle the growing scale and diversity of these data.

In 2025, I predict the field will see greatly improved availability of purpose-built analysis tools that can ingest, interrogate, and integrate a variety of omics data types, providing answers that have eluded the biomedical field under our mono-modal paradigms. However, new analysis tools alone won't be sufficient. We'll also need appropriate computing and storage infrastructure, along with federated computing specifically designed for multiomic data.


Network Integration, Clinical Application

Gary J. Patti, PhD
CSO, Panome Bio

Multiomics research, the simultaneous analysis of multiple biological layers, is poised to revolutionize our understanding of complex diseases. Disease states originate within different molecular layers (gene-level, transcript-level, protein-level, metabolite-level). By measuring multiple analyte types in a pathway, biological dysregulation can be better pinpointed to single reactions, enabling elucidation of actionable targets.

Often, when researchers perform multiomics, samples from multiple cohorts are analyzed at different laboratories around the world. This creates harmonization issues that complicate data integration. Moreover, even when datasets can be combined, they are commonly assessed individually, and the results are subsequently correlated. While these approaches have value, they do not maximize information content.
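The harmonization problem can be sketched with a toy example (all data below are invented): two hypothetical laboratories measure the same features, but one carries a systematic additive offset, a simple batch effect. Centering each lab's data on its own feature means removes that offset; real harmonization pipelines use more sophisticated methods, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy measurements of the same 10 features in two labs, where lab B
# has a systematic additive offset (a simple batch effect)
lab_a = rng.normal(loc=0.0, size=(8, 10))
lab_b = rng.normal(loc=3.0, size=(8, 10))

def center_per_batch(*batches):
    # Naive harmonization: subtract each lab's own feature means
    return np.vstack([b - b.mean(axis=0) for b in batches])

combined_raw = np.vstack([lab_a, lab_b])
combined_harmonized = center_per_batch(lab_a, lab_b)

# The lab offset dominates before correction and vanishes after it
print(float(combined_raw[8:].mean() - combined_raw[:8].mean()))
print(float(combined_harmonized[8:].mean() - combined_harmonized[:8].mean()))
```

This per-batch centering is only an illustration; it assumes the labs sampled comparable cohorts, which is exactly the assumption that breaks in real multi-site studies.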

An optimal integrated multiomics approach interweaves omics profiles into a single dataset for higher-level analysis. This approach starts with collecting multiple omics datasets on the same set of samples and then integrating data signals from each prior to processing. The integrated data improves statistical analyses where sample groups (e.g., responders vs. non-responders, diseased vs. healthy, treated vs. untreated) are separated based on a combination of multiple analyte levels.
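As a rough illustration of that workflow, the sketch below (sample and feature counts are invented) scales three omics blocks measured on the same samples, joins them into one matrix, and runs a simple PCA via SVD on the combined data, so that samples can be compared using all analyte levels at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 20 samples profiled on three omics layers
transcriptome = rng.normal(size=(20, 50))
proteome = rng.normal(size=(20, 30))
metabolome = rng.normal(size=(20, 15))

def zscore(block):
    # Standardize each feature so no single layer dominates by scale
    return (block - block.mean(axis=0)) / block.std(axis=0)

# Integrate: scale each layer, then join into one sample-by-feature matrix
integrated = np.hstack([zscore(b) for b in (transcriptome, proteome, metabolome)])

# Higher-level analysis on the combined matrix: PCA via SVD
centered = integrated - integrated.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = u * s  # sample coordinates derived from all omics layers jointly
print(integrated.shape, scores.shape)
```

In a real study the `scores` matrix would feed a statistical test or classifier separating the sample groups (responders vs. non-responders, and so on).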

A key piece to an integrated multiomics approach is network integration, where multiple omics datasets are mapped onto shared biochemical networks to improve mechanistic understanding. As part of this network integration, analytes (genes, transcripts, proteins, and metabolites) are connected based on known interactions (e.g., a transcription factor mapped to the transcript it regulates or metabolic enzymes mapped to their associated metabolite substrates and products). Advances in machine learning and artificial intelligence are enabling the development of more powerful analytical tools to extract meaningful insights from multiomics data.
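The network-integration idea can be sketched with a toy graph (all identifiers and fold changes below are invented for illustration): analytes from different omics layers become nodes, known interactions become edges, and edges whose two endpoints change in the same direction flag concordant multi-layer evidence for a dysregulated step.

```python
# Hypothetical identifiers; edges encode known regulatory/enzymatic links
edges = [
    ("TF:HIF1A", "transcript:VEGFA"),        # transcription factor -> transcript
    ("transcript:VEGFA", "protein:VEGFA"),   # transcript -> translated protein
    ("metabolite:pyruvate", "protein:LDHA"), # substrate -> metabolic enzyme
    ("protein:LDHA", "metabolite:lactate"),  # enzyme -> product metabolite
]

# Per-analyte fold changes from separate omics experiments (made-up values)
fold_change = {
    "TF:HIF1A": 2.1, "transcript:VEGFA": 1.8, "protein:VEGFA": 1.6,
    "metabolite:pyruvate": 0.9, "protein:LDHA": 2.4, "metabolite:lactate": 2.0,
}

# Keep edges where both connected analytes move in the same direction:
# concordant evidence across omics layers for a single interaction
concordant = [
    (a, b) for a, b in edges
    if (fold_change[a] > 1) == (fold_change[b] > 1)
]
for a, b in concordant:
    print(f"{a} -> {b}")
```

Production tools replace this toy dictionary with curated interaction databases and statistical scoring, but the mapping of multi-layer measurements onto a shared network is the core operation.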

The application of multiomics in clinical settings is another significant trend. By integrating molecular data with clinical measurements, multiomics can help patient stratification efforts by predicting disease progression and optimizing treatment plans. Multiomics is particularly helpful for large cohort studies where machine learning approaches can be harnessed to build predictive models of disease course, drug efficacy, and more. 


Integration, Standardization, Collaboration

Joe Lennerz, MD, PhD
CSO, BostonGene

Multiomics research is transforming our understanding of biology by integrating data from genomics, transcriptomics, proteomics, and other domains to reveal comprehensive insights into biological systems. While the field has made progress in recent years, its continued advancement will rely on addressing emerging trends and challenges.

A critical trend in multiomics research is the integration of multiple discrepant data sources. Biological systems are inherently complex, and capturing their full scope requires reconciling data with varying formats, scales, and biological contexts. Advances in computational methods, particularly data harmonization, enable researchers to unify disparate datasets, generating a cohesive and actionable understanding of biological processes. Such integration is critical for biomarker discovery, disease classification, and identifying therapeutic targets.

Another transformative trend is the growing ability to perform multi-analyte algorithmic analysis. By leveraging artificial intelligence and machine learning, researchers can analyze multiomics datasets encompassing genomics, transcriptomics, proteomics, and metabolomics simultaneously. These technologies detect intricate patterns and interdependencies, providing insights that would be impossible to derive from single-analyte studies. As these algorithms evolve, their ability to integrate diverse data modalities into predictive and actionable models will be indispensable, especially for advancing diagnostic accuracy and personalized treatment strategies.

Liquid biopsies exemplify the clinical impact of multiomics, analyzing biomarkers like cell-free DNA (cfDNA), RNA, proteins, and metabolites non-invasively. Recent improvements have enhanced their sensitivity and specificity, advancing early disease detection and treatment monitoring. While initially focused on oncology, liquid biopsies are expanding into other medical domains, further solidifying their role in personalized medicine through multi-analyte integration.

The integration of multiomics plays a crucial role in advancing clinical outcomes, particularly in oncology. By utilizing multi-analyte datasets and advanced computational tools, we are gaining valuable insights into the molecular and immune landscapes of diseases. The ability to combine diverse data modalities and apply sophisticated algorithms reflects broader trends in the field, underscoring the growing importance of actionable insights that inform personalized treatment strategies.

Multiomics research faces challenges that must be addressed to sustain its growth. Standardizing methodologies and establishing robust protocols for data integration are crucial to ensuring reproducibility and reliability. The massive data output of multiomics studies requires scalable computational tools and collaborative efforts to improve interpretation. Moreover, engaging diverse patient populations is vital to addressing health disparities and ensuring biomarker discoveries are broadly applicable.

Looking ahead, collaboration among academia, industry, and regulatory bodies will be essential to drive innovation, establish standards, and create frameworks that support the clinical application of multiomics. By addressing these challenges, multiomics research will continue to advance personalized medicine, offering deeper insights into human health and disease.
