Progress in real-time quantitative PCR (qPCR) technology has been steady since the technique's invention approximately 15 years ago. Recent innovations, and where the technology is headed, will be discussed at Select Biosciences' upcoming conference, "Advances in qPCR."
Jan Hellemans, co-founder and CEO of Biogazelle, emphasizes the critical importance of optimizing both the front end (experimental design) and the back end (data analysis) of the qPCR workflow. Biogazelle’s flagship product, qbasePLUS, is a software solution for the analysis of qPCR data.
Hellemans notes that “there are still misconceptions as to how to best address the problem of inter-run variation with the experimental design.” The key principles, he says, are to “avoid the problem if possible, minimize the problem if it is not avoidable, and to correct for any variation that should actually occur.”
Ideally, all samples for a given gene would be screened on the same plate, he says, adding that it is not necessary to screen the reference gene(s) on the same plate as the gene of interest. "This is a common misconception," he says, acknowledging that keeping each gene on a single plate is not always possible, particularly given the large sample numbers in the increasingly large studies being carried out today.
Measures should also be taken to keep the potential variation as small as possible: using the same qPCR instrument and the same Cq value determination software settings, using the same batch of reagents, and minimizing plate-to-plate variation through standardization. When variation does occur, Hellemans says, at least one sample should be analyzed in both runs to enable correction for the inter-run differences.
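The correction Hellemans describes can be illustrated with a short sketch. Assuming the shared sample's mean Cq difference between the two runs is used as a constant shift (a simplified model; the sample names, replicate values, and data layout below are hypothetical), one run's measurements can be brought onto the other's scale:

```python
import statistics

def interrun_calibrate(run_a_cq, run_b_cq, calibrator):
    """Shift run B Cq values onto run A's scale using a sample
    measured in both runs. Inputs are dicts mapping sample name
    to a list of replicate Cq values (hypothetical layout)."""
    # Correction factor: how much higher the calibrator reads in run B.
    shift = (statistics.mean(run_b_cq[calibrator])
             - statistics.mean(run_a_cq[calibrator]))
    # Subtract that shift from every run B measurement.
    return {sample: [cq - shift for cq in cqs]
            for sample, cqs in run_b_cq.items()}

# Hypothetical data: "calib" is the sample analyzed in both runs.
run_a = {"calib": [24.1, 24.3], "s1": [26.0, 26.2]}
run_b = {"calib": [25.1, 25.3], "s2": [28.4, 28.6]}
corrected = interrun_calibrate(run_a, run_b, "calib")
```

After calibration, the shared sample's mean Cq agrees across the two runs, so the remaining samples can be compared between plates.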
According to Hellemans, imputation methods (commonly used by statisticians but not yet widely adopted in qPCR data analysis) offer a useful way to recover crucial missing data from qPCR experiments. The gold standard for normalizing qPCR expression data, he says, is normalization against multiple validated reference genes. As experiments grow larger, so does the risk that data for one or more of these reference genes will be missing due to technical failure; imputation, he adds, is an effective approach to recovering this missing data.
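The two ideas above can be combined in a brief sketch. It uses simple mean imputation to fill a missing reference-gene value, then takes the geometric mean across reference genes as each sample's normalization factor; the gene names and relative quantities are hypothetical, and real studies would typically use more sophisticated model-based imputation than the per-gene mean shown here:

```python
import math
import statistics

# Hypothetical relative quantities for three reference genes across four
# samples; None marks a value lost to technical failure.
ref_rq = {
    "GAPDH": [1.00, 0.95, None, 1.10],
    "ACTB":  [1.02, 0.90, 1.05, 1.08],
    "HPRT1": [0.98, 0.93, 1.01, None],
}

def impute(values):
    """Simple mean imputation: fill missing entries with the mean
    of the observed values for that gene."""
    mean = statistics.mean(v for v in values if v is not None)
    return [mean if v is None else v for v in values]

filled = {gene: impute(vals) for gene, vals in ref_rq.items()}
n_samples = len(next(iter(filled.values())))

# Per-sample normalization factor: geometric mean across reference genes.
norm_factors = [
    math.exp(statistics.mean(math.log(filled[g][i]) for g in filled))
    for i in range(n_samples)
]
```

Dividing a gene of interest's relative quantity by the matching per-sample factor then yields the normalized expression value.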