November 1, 2013 (Vol. 33, No. 19)

In its formative years, PCR and the holders of its intellectual property had to go through some legal wrangling over the originality of the technique. In point of fact, a paper published 13 years earlier by Nobelist Gobind Khorana had pretty much laid out the theory for PCR, with sequential primed replication of both strands of a target DNA sequence leading to exponential accumulation of that sequence.

Moreover, the novelty of the crucial thermostable Thermus aquaticus DNA polymerase was also questioned in light of an earlier paper by a Russian group. Nonetheless, PCR, as invented by Kary Mullis, prevailed, and the technique dramatically changed how molecular genetics is done.

In vitro nucleic acid amplification actually dates back considerably farther than the invention of PCR. Kudos to whoever should properly get credit for that particular feat. Bacteriophage Qβ has a linear RNA genome, which it replicates using Qβ replicase to form a copied strand, whereupon the original and the copy are both immediately available for yet another round of replication, and so on in a rapid, self-accelerating process.

The mechanism was studied from the perspective of how genetic information can be amplified and evolve in response to selection pressure. Practical application for genetic analyses largely had to await the emergence of PCR-based procedures.

The idea that sequential molecular genetic enzyme reactions, such as those in PCR, could be done without interleaved gel separations, phenol extractions, and ethanol precipitations ran counter to practices then in use in molecular biology. Much more radical still was the notion, a little later, that polymerases could be found that would survive conditions where DNA is denatured at close to boiling temperatures, further simplifying the process.

And what about the still not fully appreciated miracle that primers have a far better than 99% chance of finding any given gene sequence in a large genome within seconds in each temperature cycle, repeated over and over again?

All these mechanistic details aside, the real value of PCR is the unprecedented ability to quickly find any desired sequence in complex genomes such as ours by amplifying the target sequence, leaving all other DNA in the sample as insignificant contaminants of the final reaction products. The exponential amplification of target sequences to easily detectable levels through PCR is commonly regarded as the method’s main value.

For the cognoscenti, PCR’s more significant value lies in its ability to pinpoint specific targets in complex genomes with excellent specificity.

This gets us into the basis for the remarkable technical success of PCR. The two 20-nucleotide oligo primers used in PCR both nominally represent unique sequences in large genomes because 4²⁰ (the number of possible 20-mer sequences) is substantially larger than the 3×10⁹-nucleotide genome.
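For a rough sense of the numbers (a back-of-the-envelope check, taking the human genome as roughly 3×10⁹ nucleotides):

\[ 4^{20} = 2^{40} \approx 1.1 \times 10^{12}, \qquad \frac{3 \times 10^{9}}{1.1 \times 10^{12}} \approx 0.003 \]

so a randomly chosen 20-mer is expected to occur by chance only about three times per thousand genomes; any particular 20-mer is therefore overwhelmingly likely to be either absent from, or unique in, a genome of that size.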

However, since even partially mismatched primers have some probability of priming replication, each of the two primers gives rise to many irrelevant extension products in every PCR cycle, replicating sequences here and there across the genome.

Nonetheless, because only exponentially amplified sequences are detected, it is just those sequences whose two strands are both successfully replicated, cycle after cycle, that rise to detectable levels. This “coincidence detection” by pairs of primers ensures that typically only the desired sequence will emerge from the background noise as a product of exponential amplification.
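To put exponential versus merely linear growth in numbers (an idealized sketch that assumes the correctly flanked target doubles in every cycle, while a sequence extended from only one primer gains at most one new copy per cycle from the original template):

\[ N_{n} = N_{0} \cdot 2^{n}, \qquad 2^{30} \approx 1.1 \times 10^{9} \]

so 30 cycles can in principle turn a single correctly flanked target molecule into roughly a billion copies, whereas a single-primer by-product accumulates only about 30 extra copies over the same run.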

Kary Mullis was quick to realize that the exponential amplification of specific nucleic acid sequences ultimately meant that we could talk about molecules in a very different way. Rather than expressing their amounts in grams or moles, it was now, at least in principle, possible to detect and measure 1.66×10⁻²⁴ moles of a DNA sequence, but no less, because this is just a complicated way of saying a single DNA molecule. It provides a very different perspective on biology when we are able to investigate the nature and actions of individual molecules, rather than treating them, as we do sodium ions, as a cloud of units with identical properties.
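The figure follows directly from Avogadro’s number:

\[ \frac{1\ \text{molecule}}{6.022 \times 10^{23}\ \text{molecules/mol}} \approx 1.66 \times 10^{-24}\ \text{mol} \]

that is, the molar amount corresponding to exactly one molecule.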

Kary thus proposed that we talk about DNA molecules in terms of actual numbers, or numbers per unit volume, without the detour of describing them as small fractions of the very large Avogadro’s number.


Only recently, with the advent of PCR-based molecular counting, has the digital perspective caught on. This technique is being used for absolute and precise enumeration of DNA molecules in various contexts. [Alila Medical Media/Fotolia]

Applications and Alternatives

With the basic PCR technique well established, the world turned to adapting PCR for a broad range of purposes. These range from forensics and authentication of works of art, to molecular computation, total amplification of all DNA in a sample, production of recombinant genetic constructs, and routine clinical diagnostics.

Another activity spurred by the success of PCR was inventing alternative techniques. This was done either with the hope of avoiding broad patents or for convenience.

Examples of such alternative amplification methods are nucleic acid sequence-based amplification (NASBA), strand displacement amplification (SDA), and loop-mediated isothermal amplification (LAMP).

In practice, however, PCR easily holds its ground as the dominant approach for sensitive and specific gene detection.

Is there any room for alternatives to PCR for future genetic analyses? Well, there certainly are situations where PCR may not represent the ultimate solution.

First, PCR’s requirement for target recognition by pairs of oligos carries with it a rapidly increasing risk of cross-reactive detection as more sequences are targeted in the same reaction. The problem has been addressed, for example, by compartmentalizing many individual amplification reactions, something that can also allow digital counts of sequence copy numbers when applied with limiting amounts of sample DNA.
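As an aside, the digital readout from such compartmentalized reactions rests on simple Poisson statistics (a generic sketch, not tied to any particular platform): if the sample is split across n partitions and k of them turn out positive, the average number of target molecules per partition, λ, and the total count are estimated as

\[ \lambda = -\ln\!\left(1 - \frac{k}{n}\right), \qquad \text{total molecules} \approx n\,\lambda \]

where the logarithm corrects for partitions that happen to receive more than one molecule.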

An alternative approach to achieving multiplex gene detection is to use linear probes that incorporate pairs of target-complementary sequences at either end, substituting the polymerase with a DNA ligase that can connect the ends and thus circularize the probe strands when they are properly hybridized to their targets. Unlike PCR, these so-called padlock probes have been shown by us and others to permit tens of thousands of sequences to be targeted in the same reaction.

Incidentally, the circularized probes also lend themselves to localized amplification via a rolling circle amplification mechanism, resulting in easily detectable individual reaction products. This offers an attractive alternative to digital PCR for counting target molecules, with no requirement for compartmentalization of reactions or limiting dilution of samples.

One shortcoming of PCR is the restriction to amplification of DNA molecules, and of RNA molecules only after reverse transcription to DNA. Surely proteins represent another class of macromolecules in great need of amplified detection, but neither PCR nor any other known process is capable of amplifying protein sequences in convenient in vitro reactions.

The proximity ligation reaction developed in our lab meets this challenge by detecting target proteins via binding by pairs of oligonucleotide-modified antibodies; DNA ligation then produces amplifiable reporter DNA strands, as a stand-in for amplifying the actual protein molecules.

A radically different alternative to PCR and other affinity-based molecular detection reactions is simply to analyze all molecules in a sample in order to identify them via the sequence of building blocks of which they are composed.

For protein analysis this is the approach taken via mass spectrometry. For nucleic acids, next-generation sequencing now affords a means of recording the nucleotide sequence of all molecules in a sample, so that the analysis of specific sequences in the sample can subsequently be undertaken entirely in silico.

So, will next-generation sequencing and its successors replace PCR for DNA detection? For my money, this will only be the case to a limited extent. Future molecular tools for DNA analysis are likely to remain dependent on affinity reactions to seek out molecules of interest. They are likely to exhibit greater molecular complexity than current techniques, so as to further improve specificity, efficiency, and convenience of reporting, but they will also be far simpler to use.

Ulf Landegren, M.D., Ph.D. ([email protected]), is a professor in the department of immunology, genetics, and pathology at Uppsala University.
