This post is part of The Pump Handle’s Public Health Classics series.
By Sara Gorman
Does cigarette smoking cause cancer? Does eating specific foods or working in certain locations cause disease? Although we have established beyond doubt that cigarette smoking causes cancer, questions of disease causality still challenge us, because it is never a simple matter to distinguish a mere association between two factors from an actual causal relationship between them. In an address to the Royal Society of Medicine in 1965, Sir Austin Bradford Hill attempted to codify the criteria for determining disease causality. A medical statistician and epidemiologist, Hill was primarily concerned in this address with the relationships among sickness, injury, and the conditions of work. What hazards do particular occupations pose? How might the conditions of a specific occupation cause specific disease outcomes?
In an engaging and at times humorous address, Hill delineates nine criteria for determining causality. He is quick to add that none of these criteria can be used independently and that even as a whole they do not represent an absolute method of determining causality. Nevertheless, they represent crucial considerations in any deliberation about the causes of disease, considerations that still resonate half a century later.
The criteria, which Hill calls “viewpoints,” are as follows:
- Strength. The association between the proposed cause and the effect must be strong. Hill uses the example of cigarette smoking here, noting that “prospective inquiries have shown that the death rate from cancer of the lung in cigarette smokers is nine to ten times the rate in non-smokers.” Even when absolute rates are low, a strong relative association can point toward causality. For example, during London’s 1854 cholera outbreak, John Snow observed that the death rate among customers supplied with polluted drinking water by the Southwark and Vauxhall Company was low in absolute terms (71 deaths in 10,000 houses). Yet compared with the death rate in houses supplied with the purer water of the Lambeth Company (5 in 10,000), the association was striking (see the sketch after this list). Even though the mechanism by which polluted water causes cholera (transmission of the bacterium Vibrio cholerae) was then still unknown, the strength of this association was sufficient in Snow’s mind to correctly assign a causal link.
- Consistency. The effects must be repeatedly observed by different people, in different places, circumstances and times.
- Specificity. Hill admits this is a weaker criterion, since diseases may have many causes and etiologies. Nevertheless, the specificity of the association, meaning how limited the association is to specific workers and sites and types of disease, must be taken into account in order to determine causality.
- Temporality. Cause must precede effect.
- Biological gradient. This criterion is also known as the dose-response relationship. A good indicator of causality is whether, for example, death rates from lung cancer rise linearly with the number of cigarettes smoked, so that a smaller exposure produces a smaller effect and a larger exposure a larger one. This is indeed the case; the more cigarettes a person smokes over a lifetime, the greater the risk of developing lung cancer.
- Plausibility. The cause-and-effect relationship should be biologically plausible. It must not violate the known laws of science and biology.
- Coherence. The cause-and-effect hypothesis should be in line with known facts and data about the biology and history of the disease in question.
- Experiment. This would probably be the most important criterion if Hill had produced these “viewpoints” in 2012. Instead, Hill notes that “Occasionally it is possible to appeal to experimental, or semi-experimental, evidence.” An example of an informative experiment would be to take preventive action as a result of an observed association and see whether the preventive action actually reduces incidence of the disease.
- Analogy. If one cause results in a specific effect, then a similar cause may be expected to result in a similar effect. Hill uses the example of thalidomide and rubella, noting that, by analogy, even slighter but similar evidence involving another drug or another viral disease in pregnancy might be accepted.
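To make the arithmetic behind the “strength” viewpoint concrete, here is a minimal sketch in Python (purely illustrative, using only the figures quoted in the Strength item above) of the rate ratio in Snow’s comparison: the absolute death rates are tiny, yet the relative difference is roughly fourteen-fold.

```python
# Minimal sketch of the rate ratio behind John Snow's comparison,
# using only the figures quoted above (deaths per 10,000 houses).
deaths_southwark_vauxhall = 71   # houses supplied with polluted water
deaths_lambeth = 5               # houses supplied with purer water
houses = 10_000                  # denominator for both rates

rate_polluted = deaths_southwark_vauxhall / houses   # 0.0071 -- low in absolute terms
rate_pure = deaths_lambeth / houses                  # 0.0005

print(f"rate ratio: {rate_polluted / rate_pure:.1f}x")  # ~14x -- a strong relative association
```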
The impact of Hill’s criteria has been enormous. They are still widely accepted in epidemiological research and have even spread beyond the scientific community. In this short yet captivating address, Hill managed to propose criteria that would shape epidemiological research for decades to come. One wonders how Hill would respond to the plethora of reports published today claiming a cause-and-effect relationship between two factors on the basis of an odds ratio of 1.2 with a statistically significant p-value of less than 0.05. While such an association may indeed be real, it is far weaker than those Hill discusses under his first criterion (“strength”). Hill does say, “We must not be too ready to dismiss a cause-and-effect hypothesis merely on the grounds that the observed association appears to be slight.” Yet he also wonders whether “the pendulum has not swung too far” in substituting statistical significance testing for biological common sense. Claims that environmental exposures, foods, chemicals, and types of stress cause a myriad of diseases pervade both the scientific and the popular literature today. In evaluating such claims, Hill’s sobering ideas, though now half a century old, remain useful guidance.
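To illustrate the contrast Hill is drawing, the brief Python sketch below uses a hypothetical 2x2 table (the counts are invented for illustration, not taken from any study) to show how an odds ratio of roughly 1.2 can still come out “statistically significant” at p < 0.05 once the sample is large enough, a far cry from the nine- to ten-fold association Hill cites for smoking and lung cancer.

```python
# Hypothetical example: a weak association (odds ratio ~1.2) that is
# nonetheless "statistically significant" because the sample is large.
from scipy.stats import fisher_exact

# Rows: exposed / unexposed; columns: cases / non-cases (invented counts)
table = [[1_200, 10_800],   # exposed:   1,200 cases among 12,000 people
         [1_020, 10_980]]   # unexposed: 1,020 cases among 12,000 people

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}")  # ~1.20 -- a weak association by Hill's standard
print(f"p-value    = {p_value:.2e}")     # well below 0.05 despite the modest effect
```

The significance here reflects the size of the sample rather than the strength of the association, which is exactly the distinction Hill’s first viewpoint asks us to keep in mind.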
Sara Gorman is a PhD candidate at Harvard University. She has written extensively about HIV, TB, and women’s and children’s health for a variety of public health organizations, including Save a Mother and Boston Center for Refugee Health and Human Rights. She most recently worked in the policy division at the HIV Law Project.
This is a very well-written post, and I am intrigued by one aspect: the way in which the “Bradford-Hill Criteria” are worded and the way in which their application is inferred. An example, quoting: “Plausibility. The cause-and-effect relationship should be biologically plausible. It must not violate the known laws of science and biology.” While I agree 100% that this would be a scientifically correct way of applying the criteria, this wording differs from Hill’s original 1965 article, and I wonder how this (current) interpretation arose. Are there perhaps any authors after Hill who interpreted the criteria that way and published this interpretation (i.e., secondary sources)? The reason for asking is that I accidentally stumbled across a very major medical literature error in the area of evidence-based medicine (EBM). Note that in the formal EBM process it is not mandatory to seek biological plausibility; whatever comes out of systematic reviews and meta-analyses can be (and often is) taken at face value without querying whether it makes scientific sense. Our article describing the issue is here: http://dx.plos.org/10.1371/journal.pone.0044277.
I am not sure if my question goes too far, but I would appreciate it if perhaps an “offline” discussion with the post author would be possible.
Thanks so much for your comment. Hill’s wording is a bit slippery. He says “it would be helpful if the causation we suspect is biologically plausible.” But then he goes on to say that we may not be able to “demand” this. So you are correct to pick up on the discrepancy here. I think Hill is saying that causation should be biologically plausible, or at least probable, but there may be cases in which we are limited to a certain extent by our current biological knowledge. I did see a secondary source that interprets Hill’s biological plausibility criterion closer to the way that I’ve interpreted it (http://www.sciencebasedmedicine.org/index.php/causation-and-hills-criteria/). Regardless of Hill, I agree with you that it is sometimes the case that assertions are made in evidence-based medicine literature that are statistically true but biologically implausible, and that this can prove problematic.