
John Sterling, Editor in Chief, Genetic Engineering & Biotechnology News

Last July the New York Times reported on a huge breakdown in the cancer research program at Duke University. Duke scientist Anil Potti, M.D., and his colleagues had published a paper in Nature Medicine in 2006 describing genomic tests that could reportedly match an individual patient’s tumor to the most appropriate chemotherapy. A statistical research team at the University of Texas MD Anderson Cancer Center subsequently examined the Duke data and found discrepancies, problems, and errors. Nevertheless, the Duke group continued to publish scientific papers and launched three clinical trials based on its studies.

In July 2010, it was discovered that Dr. Potti had falsely claimed on his résumé to be a Rhodes scholar. Combined with the questions about the reliability of the Duke data, the Rhodes scholar revelation effectively ended the Potti team’s cancer research program. Four scientific papers were retracted, the clinical trials were halted, and Dr. Potti resigned from the university.

The whole saga came to light again this past Sunday when 60 Minutes aired a segment entitled “Deception at Duke.” The CBS News site sums up the report this way: “What our 60 Minutes investigation reveals is that Duke’s so-called breakthrough treatment wasn’t just a failure—it may end up being one of the biggest medical research frauds ever.”

I am not in a position to decide whether or not fraud was involved in Dr. Potti’s research, and I am sure there will be follow-up stories and further discussion of many aspects of this matter. I do think this case illustrates a more wide-ranging problem in biotech today: How do you collect the large amounts of computer-generated data emanating from most of today’s research projects? How do you analyze and interpret those data? And how and when do you decide that your analysis is sound enough to move the results of your research into the clinic?

If there is a common theme I hear from researchers over and over again, no matter where I travel, it’s that we have to come up with more efficient methods of obtaining, analyzing, interpreting, and sharing data with our colleagues. The fact that the Duke scientists were able to publish their questionable data and results in peer-reviewed journals both before and after the statisticians at MD Anderson raised a red flag clearly demonstrates how serious the data problem is.
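Part of the answer may be as simple as building routine consistency checks into analysis pipelines. As a minimal sketch of what such a check could look like, the short Python script below verifies that a published gene signature actually appears in the expression matrix it was supposedly derived from. The file names, formats, and the check itself are illustrative assumptions on my part, not the actual Duke or MD Anderson workflow.

```python
# A toy "forensic bioinformatics" check: confirm that every gene in a
# published signature is present in the expression matrix it was trained
# on. Mismatches are a cheap early warning for shifted rows, mislabeled
# exports, and similar bookkeeping errors. File names are hypothetical.

import csv

def load_gene_ids(matrix_path):
    """Read gene identifiers (first column) from an expression matrix CSV."""
    with open(matrix_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                       # skip the header row of sample IDs
        return [row[0] for row in reader]

def check_signature(signature_path, matrix_path):
    """Report any signature genes that are missing from the data matrix."""
    matrix_genes = set(load_gene_ids(matrix_path))
    with open(signature_path) as f:
        signature = [line.strip() for line in f if line.strip()]
    missing = [g for g in signature if g not in matrix_genes]
    if missing:
        print(f"{len(missing)} of {len(signature)} signature genes are "
              f"missing from the matrix, e.g., {missing[:5]}")
    else:
        print("All signature genes were found in the expression matrix.")

check_signature("published_signature.txt", "expression_matrix.csv")
```

A check like this catches only the crudest errors, of course, but the point is that it runs automatically every time the data change, rather than depending on a second team of statisticians happening to take a look.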

In last September’s issue of the journal Significance, Darrel Ince, professor of computing at the Open University, emphasized that “universities should support statistics as they support computing: specialist units should give advice on each.” An article in the Journal of the National Cancer Institute in June 2011 was entitled “Duke Scandal Highlights the Need for Genomics Research Criteria.” And NCI itself held a workshop on June 23–24, 2011, on “Criteria for Use of Omics-Based Predictors in Clinical Trials.”

The Duke incident was a fiasco on a number of levels, especially for the patients who had put so much hope in the clinical trials. The growing recognition that data acquisition, analysis, and application in clinical trial settings demand a broader, more interdisciplinary approach is at least a step in the right direction.
