March 15, 2011 (Vol. 31, No. 6)

Henry I. Miller, M.D., Physician and Fellow, Stanford University

Remedy for the Shrinking Number of New Medicines Is Not More Gov’t Involvement

Because of concerns within the Obama administration about the glacial pace of new drugs coming out of the pharmaceutical industry’s pipeline, the feds have announced a billion-dollar government drug development center to help create new medicines. It’s typical government-think: Having over-regulated drugs to the point at which industry is unable to innovate in spite of record investment in R&D and powerful new research tools, the bureaucrats intend to take over. They want to be both the arsonist and the firefighter.

Last December an NIH advisory panel approved a new National Center for Advancing Translational Sciences that will attempt to “translate” basic research findings into pharmaceuticals. Since then, NIH and the Department of Health and Human Services have been fast-tracking the center, hoping to have it up and running by October.

Getting new generations of drugs to patients who need them is a worthy goal, but delegating that task to the NIH is a bad idea. This federal agency has long focused on funding—and to a lesser extent conducting—basic research. It excels at both. The NIH should maintain those roles and leave the development and manufacture of commercial products to the private sector. NIH's resources are a zero-sum game, so expenditures on commercial enterprises inevitably reduce the funding available for basic research.

This is not the first proposal to federalize private-sector research and development of pharmaceuticals. Following the spate of anthrax-containing letters shortly after the terrorist attacks of September 11, 2001, two independent reports called for the federal government to take over some vaccine production.

In 2002, the Institute of Medicine (which is not part of the NIH but is the health arm of the National Academy of Sciences) called for the creation of a National Vaccine Authority.

Similarly, the Gilmore Commission, which studied ways to counteract terrorism involving weapons of mass destruction, recommended “the establishment of a government-owned, contractor-operated national facility for the research, development, and production of vaccines for specified infections.”

Dismissing a primary role for the private sector, it argued that “direct government ownership or sponsorship is likely to be the only reasonable answer for producing vaccines” for such diseases as anthrax and smallpox. (Since those recommendations, these and other similar vaccines have been successfully produced by drug companies under government-funded contracts.)

These sorts of recommendations ignore the federal government's poor track record at producing pharmaceuticals. Consider, for example, the National Pituitary Agency's decades-long production of human growth hormone for short children. This program, conducted from 1963 to 1985 under the auspices of the NIH, was a haphazard operation.

The hormone was prepared from human pituitary glands recovered from cadavers, and the absence of rigorous collection guidelines and purification procedures permitted contamination of the formulated drug with the agent that causes Creutzfeldt-Jakob disease, the human counterpart of bovine spongiform encephalopathy. As a result, several dozen recipients died a lingering and gruesome death.

Had this been a private-sector enterprise, competition and the threat of liability would have spurred innovation in the form of frequent updating of the drug's purification and formulation with state-of-the-art technologies and would have required rigorous adherence to government regulations. But when government itself is the manufacturer, these forces tend to be attenuated and the shield of government safety regulation is weakened. The nation's drug regulator, the FDA, is a sibling agency of the NIH, and their common political interests and fraternal relationship compromised rigorous oversight of the NIH's production of human growth hormone.

Governments may be good at certain things that advance technology, such as the peer-reviewed funding of basic research and precommercial development, but they are rarely leaders in technological innovation. And what qualifies them to act as venture capitalists, choosing which commercial products are the most promising and deserving of funding?

Having said all that, we do need to find ways to stimulate pharmaceutical development, and I am sympathetic with NIH Director Francis Collins’ lament, “I am a little frustrated to see how many of the [basic science] discoveries that do look as though they have therapeutic implications are waiting for the pharmaceutical industry to follow through with them.” But his colleagues at the FDA bear much of the responsibility for drug companies’ unwillingness or inability to “follow through.”

At a time when drug development should have been spurred by huge increases in R&D expenditures—which almost quadrupled to more than $65 billion between 1995 and 2009—and by the exploitation of numerous new technologies, drug approvals have been dismal. Fred Hassan, former CEO of Schering-Plough, expressed his frustration about regulation this way: "What will it take to get new drugs approved? The point is, we don't know."

Kenneth Kaitin, Ph.D., director of the Tufts Center for the Study of Drug Development, described the obstructionist culture at the FDA as having caused it to become viewed as “an agency that is supposed to keep unsafe drugs off the market, not to speed access to lifesaving drugs.”

From 1996 to 1999, the FDA approved 176 new medicines; from 2007 to 2010, the number fell to 88. Bringing a new drug to market now requires 12 to 15 years on average, and costs have shot up to more than $1.4 billion—in no small part because the average length of a clinical trial increased 70% between 1999 and 2006. Perhaps the most ominous statistic of all is that drug manufacturers recoup their R&D costs on only one in five approved drugs.

Interestingly, most of the commentary about the new NIH center has been somewhat tangential, focusing on the need to break up an existing center because of a statutory limit on the number that NIH can have. The designated casualty is the National Center for Research Resources, although a far more rational choice would have been the worthless National Center for Complementary and Alternative Medicine. Its mission is "to define, through rigorous scientific investigation, the usefulness and safety of complementary and alternative medicine interventions and their roles in improving health and health care." The problem is that many of its projects are trivial, and the interventions tested have consistently proven worthless. The latest study found, for example, that cranberry juice cocktail was no better than placebo at preventing recurrent urinary tract infections. This sort of "research" is an affront to the NIH investigators who are at the cutting edge of their scientific fields.

The remedy for the shrinking number of new medicines is not to federalize drug development—a flawed, unwise solution to a problem that government created in the first place. Rather, we need a new regulatory culture at the FDA that restores the balance by assuring the safety of drugs and medical devices, is committed to getting innovative new products to patients who need them, and treats drug and device developers in a fair and predictable way. When it comes to governmental involvement in society’s critical problems, sometimes less is more.

Henry I. Miller, M.D. ([email protected]), is the Robert Wesson Fellow in Scientific Philosophy and Public Policy at Stanford University’s Hoover Institution and is a fellow at the Competitive Enterprise Institute. He was the founding director of the Office of Biotechnology at the FDA.