In November 2008, Suzanne Stratton, Ph.D., was summarily dismissed from her job at the Carle Foundation Hospital in Urbana, IL. The events leading up to her firing, at least those currently known, paint a chilling picture of rogue research, an institutional culture unwilling to support bedrock bioethical values, and the substantial risks conscientious employees take in blowing the whistle on misconduct in science.
As vice president for research at Carle, Dr. Stratton was charged with ensuring the clinic’s compliance with the federal regulations governing research with human participants.
Shortly before she was dismissed, Dr. Stratton warned hospital administrators about lax informed-consent practices, inadequate continuing review, protocol noncompliance, inadequate IRB record keeping, willful interference with IRB independence, and improper self-referrals.
Dr. Stratton’s concerns have been largely vindicated. The Office for Human Research Protections (OHRP)—the federal agency charged with oversight of all human-subjects research conducted or supported by DHHS—found multiple regulatory violations and threats to patient safety. OHRP has issued two letters of criticism so far, new patient enrollment has been halted, and the investigation continues.
Dr. Stratton, however, has not been reinstated. Last November she brought suit against Carle in federal court. Whether her claims have merit will be revealed in the crucible of litigation. But even if she prevails—an outcome that is far from certain—a successful recovery could take years. Dr. Stratton’s experience highlights a serious weakness in our current system of clinical research oversight: Whistle-blower protection is woefully inadequate.
Research-ethics whistle-blowers are critically necessary to supplement a system hamstrung by numerous structural obstacles to effective oversight. These problems have been fully described in the medical ethics literature.
NIH bioethicists point to federal regulations not universally applicable to all human subjects, a loophole that allows troubling research to be performed outside federal reach; FDA regulations that diverge in substance from their NIH counterparts, yet overlap in scope; IRBs that are underfunded, overworked, and subject to inherent and unmanaged conflicts of interest; IRB members who focus excessively on the formalities of consent documents; time-consuming and duplicative IRB review of inconsistent quality; clinical investigators under no obligation to complete ethics training, as well as poorly defined curricular material for those who are; and a persistent therapeutic misconception, which prevents doctors and patients alike from recognizing the bright line between research and ordinary clinical care.
Such challenges substantially reduce OHRP’s capacity for effective system-wide compliance oversight. Multiple legislative reforms have been proposed to relieve the pressure; but, even if enacted—and to date all have failed—the problem of sheer volume would likely remain. Not counting research entities falling under FDA’s exclusive jurisdiction, OHRP is charged with overseeing more than 7,200 IRBs and over 30,000 approved Federalwide Assurances (FWAs).
Research governance structures employing interacting administrative processes to continuously improve quality and safety may help, but they cannot prevent all recurrence of misconduct. Nor would continuous, comprehensive auditing be practical; the marginal cost of prevention would increase unreasonably fast in relation to gains. The scale of the U.S. research enterprise is simply too large; the relative importance of many ethical lapses is too small (e.g., minor clerical problems).
To be sure, stepped-up enforcement activity has returned high-profile sanctions against over a dozen prominent academic centers in recent years—most notably, the near-total suspension of human-subjects research at Johns Hopkins. But extra OHRP scrutiny is no panacea. It did not prevent well-publicized deaths of research participants at several major centers over the same period.
Where misconduct threatens patient safety, immediate corrective action is critical. Nowhere is this more true than at comparatively tiny community programs such as Carle, which may escape OHRP notice for years. Only physician-investigators, hospital administrators, IRB professionals, nurses, and social workers—the very persons directly involved in the local research effort—are well positioned to respond in time. Only an insider has the access, the perspective, and the institutional knowledge and commitment necessary to detect and report ethical lapses efficiently. And only when the whistle is blown can federal regulators investigate and remediate the violations.
Unfortunately, whistle-blowing remains a risky endeavor. By some estimates, over one-third of those who report scientific misconduct will face serious retaliation. They may suffer verbal harassment or intimidation, negative performance appraisals, demotions or, as Dr. Stratton experienced, even outright firing.
Recent empirical data on the full impact of research-ethics whistle-blowing on the careers of the whistle-blowers is lacking. There is, however, limited empirical evidence to suggest that members of the research community are acutely aware of the danger. Of several thousand federal scientists surveyed by the Union of Concerned Scientists, over 40% feared retaliation for speaking out.
This awareness of vulnerability affects their behavior. According to the Ethics Resource Center, approximately 25% of government employees who observe misconduct decline to report it because of potential repercussions. Narrowed to clinical trials, that figure almost certainly worsens. Lacking a credible promise of legal sanctuary, prospective whistle-blowers may find their fear of retaliation heavily chilling their speech.