Scientists, including a team from the U.S. Department of Energy Joint Genome Institute (DOE JGI), reported the results of the Critical Assessment of Metagenome Interpretation (CAMI) Challenge, a benchmarking study of computational tools for metagenomes. The CAMI Challenge was led by Alexander Sczyrba, Ph.D., head of the computational metagenomics group at Bielefeld University, and Alice McHardy, Ph.D., head of the Computational Biology of Infection Research Lab at the Helmholtz Centre for Infection Research.

The research results (“Critical Assessment of Metagenome Interpretation—A Benchmark of Metagenomics Software”) are published in Nature Methods.

“Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids and representing common experimental setups,” write the investigators.

“Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.”

“It is very difficult for researchers to find out which program to use for a particular dataset and analysis based on the results from method papers,” said Dr. McHardy. “The datasets and evaluation measures used in evaluations vary widely. Another issue is that developers usually spend a lot of time benchmarking the state of the art when assessing the performance of novel software that way. CAMI wants to change these things and involves the community in defining standards and best practices for evaluation and to apply these principles in benchmarking challenges.”

The CAMI Challenge took place over three months in 2015. Three simulated metagenome datasets were developed using more than 300 draft genomes of bacterial and archaeal isolates sequenced and assembled by the DOE JGI. The datasets also included roughly the same number of genomes from the Max Planck Institute in Germany, along with circular elements and viruses. The simulated datasets were a single-sample dataset of 15 billion bases (15 Gb), a 40-Gb dataset with 40 genomes and 20 circular elements, and a 75-Gb time-series dataset comprising five samples and including hundreds of genomes and circular elements.

“JGI has a strong interest in benchmarking of tools and technologies that would advance the analysis of metagenomes and improve the quality of data we provide to the users. Having published the very first study on the use of simulated datasets for benchmarking of metagenomics tools from the JGI, it is great to see how this methodology has expanded over the years and now through this study, evolving into a model for standardized community efforts in the field,” said Nikos Kyrpides, Ph.D., DOE JGI Prokaryote Super Program head.

“JGI is very vested in not only benchmarking of lab protocols, but also computational workflows,” added DOE JGI Microbial Genomics head Tanja Woyke, Ph.D. “This makes our participation in critical community efforts such as CAMI so important.”

Computational tools were evaluated in three categories. Half a dozen assemblers and assembly pipelines were assessed on how well they reconstructed genome sequences from short-read sequencing data. In the binning challenge, five genome binners and four taxonomic binners were evaluated on criteria including their efficacy in recovering individual genomes.
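To make the genome-recovery criterion concrete, the sketch below computes per-bin completeness and purity, the kind of metrics used to judge how well a binner recovers an individual genome. The function and variable names are hypothetical illustrations, not the CAMI evaluation code itself.

```python
# Minimal sketch (assumed names, not the CAMI tooling): score one genome bin
# against a gold-standard assignment of contigs to genomes.
from collections import Counter

def bin_quality(bin_contigs, contig_to_genome, contig_lengths, genome_sizes):
    """bin_contigs: contig IDs assigned to one bin.
    contig_to_genome: gold-standard genome label per contig.
    contig_lengths: length in bp per contig.
    genome_sizes: total bp per gold-standard genome."""
    # Base pairs of the bin attributable to each true genome
    bp_per_genome = Counter()
    for c in bin_contigs:
        bp_per_genome[contig_to_genome[c]] += contig_lengths[c]
    total_bp = sum(bp_per_genome.values())
    # The genome contributing the most base pairs defines the bin's identity
    genome, matched_bp = bp_per_genome.most_common(1)[0]
    completeness = matched_bp / genome_sizes[genome]   # fraction of that genome recovered
    purity = matched_bp / total_bp                     # fraction of the bin that belongs to it
    return genome, completeness, purity
```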

Finally, ten taxonomic profilers with various parameter settings were evaluated on how well they could predict the identities and relative abundances of the microbes and circular elements. The benchmarking results are available at https://data.cami-challenge.org/results.
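As a rough illustration of how abundance predictions can be scored, the sketch below computes the L1 distance between a true and a predicted relative-abundance profile at one taxonomic rank. The taxa and numbers are made up for the example and this is not the CAMI scoring implementation.

```python
# Minimal sketch (hypothetical profiles): L1 distance between true and
# predicted relative abundances at a single taxonomic rank.
def l1_abundance_error(true_profile, predicted_profile):
    """Each profile maps taxon -> relative abundance (values summing to ~1)."""
    taxa = set(true_profile) | set(predicted_profile)
    return sum(abs(true_profile.get(t, 0.0) - predicted_profile.get(t, 0.0))
               for t in taxa)

# A perfect prediction scores 0; missing or spurious taxa push the error toward 2.
truth = {"Escherichia": 0.5, "Bacillus": 0.3, "Pseudomonas": 0.2}
prediction = {"Escherichia": 0.45, "Bacillus": 0.35, "Clostridium": 0.2}
print(l1_abundance_error(truth, prediction))  # 0.5
```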

“CAMI is an ongoing initiative,” explained Dr. Sczyrba. “We are currently further automating the benchmarking and comparative result visualizations. And we invite everyone interested to join and work with CAMI on providing comprehensive performance overviews of the computational metagenomics toolkit, to inform developers about current challenges in computational metagenomics and applied scientists of the most suitable software for their research questions.”
