Feature Articles: Aug 1, 2008
A Sensible Approach to Interpreting Research
Association Does Not Equal Causation, Making It Important to Follow Certain Guidelines in Analysis
You wouldn’t know it from the way health news, both good and bad, is reported, but the truth is that just because you see two things happening at the same time doesn’t mean that one caused the other. For example, if computer use and street crime both increased in the 1980s, it doesn’t necessarily mean that one was due to the other.
Similarly, scientific studies that show an association between some factor (a food, perhaps, or a chemical) and a health effect do not necessarily imply that the factor actually causes the health effect. Many studies reported in the news showing such correlations are merely preliminary reports—they usually cannot justify a claim of causation without additional research, experimentation, and replication.
The following are some warnings to keep in mind the next time you hear people say, for instance, that those who eat avocados tend to live longer (or whatever the latest connection is).
> One person or a few people with personal stories about taking a certain pill and feeling better may be a fluke. To really determine if a drug works, randomized trials need to be conducted—studies in which human volunteers are randomly assigned to receive either the agent being studied or an inactive placebo, usually under double-blind conditions (where neither the participants nor the investigators know which substance each individual is receiving), and their health is then monitored for a period of time.
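Random assignment of the kind described above can be sketched in a few lines. This is a minimal illustration, not the protocol of any actual trial; the participant names, group labels, and seed are all invented. The key idea is that assignments are generated by chance and kept in a lookup separate from the participant list, which is what makes blinding possible:

```python
import random

def randomize(participants, seed=None):
    """Randomly assign each participant to 'treatment' or 'placebo'.

    The assignment table is generated by chance and held apart from
    the participant roster, so neither participants nor investigators
    need to see who received which agent (the basis of double-blinding).
    """
    rng = random.Random(seed)
    return {p: rng.choice(["treatment", "placebo"]) for p in participants}

# Hypothetical roster of 100 volunteers.
volunteers = [f"subject_{i}" for i in range(100)]
assignments = randomize(volunteers, seed=42)

# Chance, not anyone's judgment, decides the split between the two arms.
print(sum(1 for arm in assignments.values() if arm == "treatment"),
      "assigned to treatment out of", len(assignments))
```

Because assignment is random, factors the investigators never thought to measure tend to balance out across the two groups, which is exactly why a handful of personal anecdotes cannot substitute for a trial.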
>The findings of animal experiments may not be directly applicable to the human situation. Animal experiments (such as the ones in the news almost daily suggesting that a new carcinogen has been discovered) may not correlate to health effects in humans because of genetic, anatomic, and physiologic differences between species or because of the use of unrealistically high doses in the animal experiments—doses bearing no resemblance to the small amounts of the tested substance that humans encounter in everyday life.
>Test tube and Petri dish experiments are useful for defining and isolating biologic mechanisms but are not necessarily directly applicable to humans. That is to say, a lot of things can go wrong in a Petri dish, and the cell cultures can be easily damaged or influenced in ways that cells in our bodies are not.
>Observational epidemiologic studies are those done in human populations, in which researchers collect data on people's exposures to various agents and relate these data to the occurrence of diseases or other health effects among the study participants. The findings from studies of this type are more relevant than animal studies, but the associations detected in such studies are not necessarily causal. A disease may be more common among a certain ethnic group, for instance, but it may turn out that this correlation has nothing to do with an underlying genetic cause; rather, it might simply be that one geographic area had a higher prevalence of the disease and, for historical reasons unrelated to the disease itself, also had more people of a certain ethnic group.
The quality of new studies should be assessed before their results are touted. Those that include appropriate statistical analysis and have been published in peer-reviewed journals carry greater weight than those that lack statistical analysis and/or have been announced in other ways. Activist groups and cranks often prefer the immediate issuing of press releases to the slower and more technically demanding process of getting published in professional journals.
Claims of causation should never be made lightly. Premature or poorly justified claims of causation can mislead people into thinking that something they are exposed to is endangering their health, when this may not be true, or that a useless or even dangerous product is capable of creating desirable health effects.
When faced with exciting new studies, all of us need to keep the following pointers in mind:
>Focus on the study design, not just the conclusions. What kind of study was it? Human? Animal? In vitro? Epidemiologic? Some study designs are more reliable than others, and findings derived from better-designed studies should carry more weight.
>Find out about possible confounding, that is, the presence of other possibly unknown causal factors that cannot be separated from the factors being studied. If two populations differ in thousands of ways, it may be difficult to pin the blame for a difference in health outcomes on just one difference between the two populations.
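A toy simulation can make confounding concrete. In this sketch, a single hidden factor drives both a hypothetical exposure and a hypothetical outcome, so the two end up strongly correlated even though neither causes the other. All variable names and numbers here are invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical scenario: a hidden factor ("age") influences both an
# exposure ("coffee") and an outcome ("disease"). Neither coffee nor
# disease causes the other; both merely track the confounder.
n = 10_000
age = [random.random() for _ in range(n)]           # hidden confounder
coffee = [a + 0.3 * random.random() for a in age]   # exposure tracks age
disease = [a + 0.3 * random.random() for a in age]  # outcome tracks age too

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Exposure and outcome are strongly correlated despite no causal link
# between them: the confounder alone produces the association.
print("correlation:", round(corr(coffee, disease), 2))
```

An observational study that measured only coffee and disease would see a striking association here; only by measuring and adjusting for the hidden factor could the spurious link be exposed. That is why unmeasured differences between populations make single-factor blame so hazardous.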
>Scrutinize animal tests with care. Were there appropriate controls? Were the results statistically significant? Did the study use well-accepted methodology? Is this animal a good model for possible reactions in humans? Do effects occur only at doses far higher than those humans actually encounter?
>Check out the bona fides of a study and its authors. Completed studies published in peer-reviewed scientific journals carry much more weight than other types of reports. Studies that include appropriate statistical analysis of the data also carry more weight than those that do not. Studies produced by authors not affiliated with a university, medical center, or other established research organization should receive careful scrutiny.
>Consider results in context. New research results need to be interpreted in the context of related previous research. Checking with reputable health professionals and organizations that can contribute expertise and balance can be very helpful in putting findings into the appropriate context.
>Beware of the overinterpretation of study results by scientists themselves. Because researchers tend to be enthusiastic about their own work, some may exaggerate their findings, sometimes suggesting the possibility of causation when the data support only an association.
>Fight the temptation to fill explanatory vacuums. Human beings dislike uncertainty. People are unsettled when the reason for an occurrence cannot readily be found. It is natural, therefore, to embrace any explanation, however unlikely, for an unexplained phenomenon.
And always use your wits. The first response to any incredible finding should be to question its credibility. If a little skepticism ruins a good story, it wasn’t such a good story in the first place.
Useful Criteria to Consider When Evaluating Whether an Association Is Causal
Temporality: For an association to be causal, the cause must precede the effect.
Strength: Scientists can be more confident in the causality of strong associations than weak ones.
Dose-response: Responses that increase in frequency and/or severity as exposure increases are more convincingly supportive of causality than those that do not show this pattern.
Consistency: Relationships that are repeatedly observed by different investigators, in different places, circumstances, and times are more likely to be causal.
Biological plausibility: Associations that are consistent with the scientific understanding of the biology of the disease or health effect under investigation are more likely to be causal. For instance, years ago, activists argued that power lines caused leukemia; some were so determined to keep that theory alive that they insisted the entire fields of physics and biology should be revised.
© 2013 Genetic Engineering & Biotechnology News, All Rights Reserved