The field of safety toxicology is entering a new era, shaped by the burden of evaluating an ever-growing number and diversity of chemical entities to determine their impact on the environment and human health. The need to establish a new way of obtaining critical information on the possible toxic effects of particular chemical exposures is further heightened by the cost, time, and extensive animal experimentation that traditional risk assessment requires.
In particular, the need for this information is urgent in light of new regulatory requirements that are proposed or already in place in the U.S. and abroad. One approach to these issues is to develop a series of in vitro measurements that can not only serve as a first-line estimate of a compound’s potential toxicity, but also provide insight into possible mechanisms of that toxicity.
Such information from a large number of compounds can be rapidly assembled through high-throughput screening of relevant in vitro assays and then combined with a database of compounds with known in vivo toxicity data to extract possible similarities.
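One way to picture this combination of screening data with a database of known in vivo outcomes is a simple similarity-based read-across. The sketch below is purely illustrative, assuming synthetic assay profiles and hypothetical in vivo labels; it is not a description of any specific published method.

```python
import numpy as np

# Illustrative read-across sketch (all data synthetic and hypothetical):
# rows of reference_profiles are compounds, columns are in vitro assays;
# in_vivo_toxic holds known outcomes (1 = toxic) for those compounds.
rng = np.random.default_rng(0)
reference_profiles = rng.normal(size=(5, 8))       # 5 compounds x 8 assays
in_vivo_toxic = np.array([1, 1, 0, 0, 1])

# A query compound with only in vitro data, here constructed to resemble
# the first reference compound plus a little noise
query_profile = reference_profiles[0] + rng.normal(scale=0.1, size=8)

def cosine(a, b):
    """Cosine similarity between two activity profiles."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

similarities = np.array([cosine(query_profile, p) for p in reference_profiles])

# Read-across: borrow the in vivo label of the most similar reference compound
nearest = int(np.argmax(similarities))
predicted_label = int(in_vivo_toxic[nearest])
```

In practice the profiles would be concentration-response parameters from the screening campaign rather than raw values, and more than one neighbor would be consulted, but the principle of transferring in vivo knowledge across in vitro similarity is the same.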
For the pharmaceutical industry, human safety concerns for new chemical entities are a continual issue, since these compounds are targeted for human use. Within this industry, a finite set of assays is used to test for potential safety liabilities before compounds are introduced into human populations.
The questions arise as to whether these existing in vitro datasets are comprehensive enough and whether they are used in the best way to address toxicity concerns, especially in predicting the effects of compounds developed outside traditional pharmaceutical chemical space.
The ultimate success of a predictive toxicology approach to answering some of these safety concerns requires several key elements. Primary among these is the availability of well-defined in vitro measures that reflect the response of important biological processes to xenobiotics. A vital part of this process is the selection of assays that provide data on the inhibition or activation of pathways that are central to mediating an adverse effect.
Many of the targets in such pathways may not have been identified yet. In these cases, we rely on correlative analysis with in vivo data to highlight those assays that provide a reliable marker of possible safety concern, even though we presently may not understand their role in that effect.
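Such correlative analysis can be as simple as asking which assay readouts track a known in vivo outcome across a compound set. The following sketch uses entirely synthetic data and a plain Pearson correlation as an assumed ranking criterion; it is meant only to show the shape of the calculation, not any particular validated protocol.

```python
import numpy as np

# Illustrative sketch (synthetic data): rank assays by how strongly their
# readouts correlate with a known in vivo toxicity outcome, with no
# mechanistic understanding of the assay target assumed.
rng = np.random.default_rng(1)
n_compounds, n_assays = 50, 6
in_vivo = rng.integers(0, 2, size=n_compounds).astype(float)  # 0/1 outcomes

assay_data = rng.normal(size=(n_compounds, n_assays))
assay_data[:, 2] += 2.0 * in_vivo   # make assay 2 artificially track the outcome

# Pearson correlation of each assay column with the in vivo outcome
corrs = np.array([np.corrcoef(assay_data[:, j], in_vivo)[0, 1]
                  for j in range(n_assays)])

# Assays ordered from most to least predictive by |r|; the top entries
# are candidate markers of safety concern even without a known mechanism
ranking = np.argsort(-np.abs(corrs))
```

A real analysis would correct for multiple testing and confirm the association on held-out compounds, but the point stands: an assay can earn its place as a reliable marker purely through correlation with in vivo data.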
Additional factors contributing to this approach are the quality and depth of the in vitro dataset. The data must come from high signal-to-noise assays that unambiguously define a reliable response and can differentiate it from artifacts, such as the autofluorescence common to certain classes of compounds.
These results will reflect both positive and negative information, both of which are important to the ultimate goal of predicting responses in humans. It is also imperative that the screening matrix be complete; that is, all compounds must be tested in all the relevant assays. Completeness allows compounds to be ranked for activity on a continuous scale, maximizing statistical power.
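The completeness requirement can be made concrete with a small compound-by-assay matrix. In the sketch below the values are made-up potencies (imagined as -log AC50, an assumption for illustration) and missing results are marked with NaN:

```python
import numpy as np

# Hypothetical screening matrix: rows = compounds, columns = assays.
# Values are illustrative potencies (e.g., -log AC50); NaN marks a
# measurement that was never made.
matrix = np.array([
    [6.2, 4.8, 5.5],
    [5.1, np.nan, 4.9],   # compound 2 lacks one assay result
    [7.0, 6.1, 5.8],
])

# The matrix is complete only if every compound was tested in every assay
complete = not np.isnan(matrix).any()

# Continuous-scale ranking is only meaningful over the complete rows;
# here, order the fully tested compounds by potency in assay 0
complete_rows = ~np.isnan(matrix).any(axis=1)
order = np.argsort(-matrix[complete_rows, 0])
```

The single NaN makes the matrix incomplete and forces compound 2 out of the ranking, which is exactly the loss of statistical power that a full screening matrix avoids.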
Finally, the compound sets must be information rich, covering a wide chemical space with enough overlap among compounds within that space to allow statistical confidence in conclusions about specific chemical and biological interactions.