February 15, 2010 (Vol. 30, No. 4)
Jonathan E. Rackoff
Individuals Who Report Ethical Misconduct Unfortunately Often Put Their Jobs on the Line
In November 2008, Suzanne Stratton, Ph.D., was summarily dismissed from her job at the Carle Foundation Hospital in Urbana, IL. The events leading up to her firing, at least those currently known, paint a chilling picture of rogue research, an institutional culture unwilling to support bedrock bioethical values, and the substantial risks conscientious employees take in blowing the whistle on misconduct in science.
As vice president for research at Carle, Dr. Stratton was charged with ensuring the clinic’s compliance with the federal regulations governing research with human participants.
Shortly before she was dismissed, Dr. Stratton warned hospital administrators about lax informed-consent practices, inadequate continuing review, protocol noncompliance, inadequate IRB record keeping, willful interference with IRB independence, and improper self-referrals.
Dr. Stratton’s concerns have been largely vindicated. The Office for Human Research Protections (OHRP)—the federal agency charged with oversight of all human-subjects research conducted or supported by DHHS—found multiple regulatory violations and threats to patient safety. OHRP has issued two letters of criticism so far, new patient enrollment has been halted, and the investigation continues.
Dr. Stratton, however, has not been reinstated. Last November she brought suit against Carle in federal court. Whether her claims have merit will be revealed in the crucible of litigation. But even if she prevails—an outcome that is far from certain—a successful recovery could take years. Dr. Stratton’s experience highlights a serious weakness in our current system of clinical research oversight: Whistle-blower protection is woefully inadequate.
Research-ethics whistle-blowers are critically necessary to supplement a system hamstrung by numerous structural obstacles to effective oversight. These problems have been fully described in the medical ethics literature.
NIH bioethicists point to federal regulations not universally applicable to all human subjects, a loophole that allows troubling research to be performed outside federal reach; FDA regulations that diverge in substance from their NIH counterparts, yet overlap in scope; IRBs that are underfunded, overworked, and subject to inherent and unmanaged conflicts of interest; IRB members who focus excessively on the formalities of consent documents; time-consuming and duplicative IRB review of inconsistent quality; clinical investigators under no obligation to complete ethics training, as well as poorly defined curricular material for those who are; and a persistent therapeutic misconception, which prevents doctors and patients alike from recognizing the bright line between research and ordinary clinical care.
Such challenges substantially reduce OHRP’s capacity for effective system-wide compliance oversight. Multiple legislative reforms have been proposed to relieve the pressure; but, even if enacted—and to date all have failed—the problem of sheer volume would likely remain. Not counting research entities falling under FDA’s exclusive jurisdiction, OHRP is charged to oversee more than 7,200 IRBs and over 30,000 approved federal-wide assurances (FWAs).
Research governance structures employing interacting administrative processes to continuously improve quality and safety may help, but they cannot prevent all recurrence of misconduct. Nor would continuous, comprehensive auditing be practical; the marginal cost of prevention would increase unreasonably fast in relation to gains. The scale of the U.S. research enterprise is simply too large; the relative importance of many ethical lapses is too small (e.g., minor clerical problems).
To be sure, stepped-up enforcement activity has returned high-profile sanctions against over a dozen prominent academic centers in recent years—most notably, the near-total suspension of human-subject research at Johns Hopkins. But extra OHRP scrutiny is no panacea. It did not prevent well-publicized deaths of research participants at several major centers over the same period.
Where misconduct threatens patient safety, immediate corrective action is critical. Nowhere is this truer than at comparatively tiny community programs such as Carle, which may escape OHRP notice for years. Only physician-investigators, hospital administrators, IRB professionals, nurses, and social workers—the very persons directly involved in the local research effort—are well positioned to respond in time. Only an insider has the access, the perspective, and the institutional knowledge and commitment necessary to detect and report ethical lapses efficiently. And only when the whistle is blown can federal regulators investigate and remediate the violations.
Unfortunately, whistle-blowing remains a risky endeavor. By some estimates, over one-third of those who report scientific misconduct will face serious retaliation. They may suffer verbal harassment or intimidation, negative performance appraisals, demotions or, as Dr. Stratton experienced, even outright firing.
Recent empirical data on the full impact of research-ethics whistle-blowing on the careers of the whistle-blowers is lacking. There is, however, limited empirical evidence to suggest that members of the research community are acutely aware of the danger. Of several thousand federal scientists surveyed by the Union of Concerned Scientists, over 40% feared retaliation for speaking out.
This awareness of vulnerability affects their behavior. According to the Ethics Resource Center, approximately 25% of government employees who observe misconduct decline to report it because of potential repercussions. Narrowed to clinical trials, that statistic almost certainly gets worse. Lacking a credible promise of legal sanctuary, prospective whistle-blowers’ fear of retaliation may heavily chill their speech.
Patchwork of Protections
As of this writing, reporting ethical misconduct involving federally sponsored clinical research is not expressly protected by any single statute, regulation, or common-law rule. Rather, research-ethics whistle-blowers are forced to rely on a diverse patchwork of protections.
Frequently, these provisions are too narrow in scope, inadequate in remedy, or poorly enforced. Some protect reporting only if a law was actually violated; others require only a reasonable belief that one was. Some cover enumerated laws, while others limit by jurisdiction, and still others do not discriminate at all—any infraction will do. Many focus only on specific policy priorities (e.g., threats to public health and safety). Moreover, safeguards can depend on the recipient of the whistle-blower’s report, be restricted to public employees, or apply widely.
They have many sources, falling into four general categories: (1) federal statutes, (2) regulations and agency guidance, (3) state statutes, and (4) state common law causes of action. Consider each in turn:
Federal statutes. Most federal statutes are topic-specific and of no benefit to research-ethics whistle-blowers. There are two exceptions: the Whistleblower Protection Act of 1989 (WPA) and the federal False Claims Act (FCA). Technically, the WPA applies to violations of “any law, rule, or regulation . . . abuse of authority, or a substantial and specific danger to public health or safety.” In practice, however, the WPA has no teeth. Only government servants are protected, and, after the Supreme Court’s opinion in Garcetti v. Ceballos, disclosures made in the course of official duties, or made to supervisors or co-workers, are excluded.
Retaliation claims under the WPA are decided independently by the Merit Systems Protection Board; yet, since 2000, only three whistle-blowers have prevailed before it. The U.S. Court of Appeals for the Federal Circuit has jurisdiction on appeal. But the circuit has steadily narrowed the scope of WPA protection by increasing whistle-blowers’ burdens of proof and production. The result: less than 1.5% of whistle-blowers now win their appeals. These are astonishingly low rates of success for so broad a law.
Nor is the FCA any more potent. Because institutions with FWAs promise to comply with 45 C.F.R. 46 as a condition of DHHS funding, allegations of research misconduct fall under the aegis of the FCA’s anti-retaliation provisions. Failure to obtain fully informed consent from participants, IRB approval that deviates from IND conditions, nondisclosure of investigator conflicts of interest—all have been reported in this way. But FCA cases seeking remedies for ethics lapses are rare, and most have failed. Courts recognize that the FCA was never intended as a vehicle for compliance oversight.
Regulations and agency guidance. Neither OHRP nor the regulations it administers speak to whistle-blowers in human-subjects research. The DHHS Office of Research Integrity (ORI) does protect whistle-blowers who make good faith allegations of scientific misconduct. Under 42 C.F.R. Part 50.103(d)(13), institutions applying for research-related grants are required to “undertak[e] diligent efforts to protect the[ir] positions and reputations.”
Section 493(e) of the Public Health Service Act—a whistle-blower protection provision enacted by Section 163 of the NIH Revitalization Act of 1993—provides overlapping coverage. But, because “scientific misconduct” is narrowly defined as “fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community,” none of these provisions may reach violations of human-subjects protections.
State statutes. While virtually every state has enacted whistle-blower legislation in some form, and at least 18 expressly protect employees who report health and safety violations, statutory language varies considerably from jurisdiction to jurisdiction, and judicial interpretation has magnified the differences.
Dr. Stratton has the benefit of Illinois’ relatively broad Whistleblower Protection Act, which covers violations of any “federal law, rule, or regulation.” But the very same misconduct in another jurisdiction might be far more dangerous to report. Especially in the context of DHHS-funded research, similarly situated whistle-blowers currently receive disparate legal treatment based merely on the accident of their geography.
State common law. The common law tort of retaliatory discharge in violation of public policy may be available to research-ethics whistle-blowers not covered by state statutes. Over 40 states and the District of Columbia have recognized the tort in some form. But here again, the elements of the cause of action vary substantially from state to state. Courts have achieved no consensus on the elements of the public-policy claim. In some cases it is pre-empted by weaker state statutory provisions. And plaintiffs have achieved mixed results.
Proposals to reform whistle-blower laws are nothing new. Historically, the human-subjects research community has paid little attention. That is beginning to change. Dr. Stratton’s case illustrates why. Robust whistle-blower protection may well be essential to effective oversight of clinical trials—especially at the many hundreds of community hospitals across the nation.
Federally funded cancer research is dependent on these sites. Collectively, they enroll over one-third of all participants in the nation’s cancer trials. Ethical misconduct at community centers is thus a grave threat to public trust. With the absolute level of NIH and industry funding for research declining for the first time in decades, according to a recent study published in the Journal of the American Medical Association, it is a threat we can ill afford.
Whistle-blowers can help. But, in order to come forward, they require better safeguards. Empirical research strongly indicates that prospective whistle-blowers engage in a cost-benefit analysis in which personal risks are weighted heavily. Thus, disincentives must be removed, and whistle-blowing regarded—even rewarded—as virtuous.
How best to accomplish this goal requires further study, although an effective reform proposal will likely involve at least three components: First, the regulatory gaps must be addressed. The federal regulations should be amended to include express, dedicated protection for those who report violations of ethics rules applicable to human-subject research.
OHRP and the FDA should promulgate guidance for institutions on responding to allegations. And the definition of “scientific misconduct” should be amended to reach human-subjects protections, thereby bringing research-ethics violations under the joint jurisdiction of ORI’s whistle-blower provisions.
Second, uniformity and certainty as to applicable law must be promoted. To that end, the WPA should be amended to encompass most public and private employment relationships. This includes employees of CROs, pharmaceutical and medical device firms, private IRBs, and FDA contractors. Likewise, a wider range of external report recipients, as well as up-the-chain disclosures made internally, should be covered.
The scope of covered wrongdoing should be broad enough to cover any duty—whether emanating from statute, regulation, guidance, or policy, whether judicially created or professional, whether international, federal, state, or municipal—owed to or potentially relevant to the protection of human subjects. Potential recovery should be expanded beyond the usual equitable remedies of reinstatement and back pay to include full compensatory damages. A case might also be made for punitive damages and reasonable attorneys’ fees, although both impose trade-offs that must be carefully weighed.
For misconduct under OHRP jurisdiction, the federal WPA should also preempt competing state or common-law protections—even if more protective. Overprotection may foster over-reporting, dramatically slowing the pace and raising the costs of research without justification. To avoid this inefficiency, and as a prophylactic against employee abuse, a good faith element is essential. An objectively reasonable belief in the occurrence of the reported violation should be a prerequisite to protection.
Finally, anti-retaliation provisions are necessary only because whistle-blowing is regarded as disloyal, disobedient, and a breach of trust. This perception is deeply misguided. Human-subjects research is unique. The primary duty of anyone involved in a clinical trial is to the patient/subjects without whom no research would be possible and whose sacrifices benefit us all. Protecting their welfare, their autonomy, their dignity, and their rights supersedes other obligations in all but the rarest circumstances.
Clinical research is a social good. To promote and perpetuate it, all stakeholders in the research endeavor—in our labs and in the classroom, from our research centers to our corporate offices, on cancer wards before research subjects and on camera before the public—must strive to foster an ethical culture aligned with these values.
Jonathan E. Rackoff, J.D. ([email protected]), is a senior associate at Sidley Austin. The opinions expressed herein are the author’s own. They do not necessarily reflect any position or policy of Sidley Austin or its clients, past or present.