CADDIS Volume 1: Stressor Identification
The Step-by-Step Guide Introduction
This page briefly discusses the fundamental principles of causal analysis upon which the Stressor Identification Guidance Document and CADDIS are based. Causation is essential to science and everyday life, but it has been a topic of controversy in the philosophy of science, and causal analysis can pose practical difficulties. We based our approach on three particularly useful concepts for analyzing causes.
Discrimination of Site and Other Data
Because our purpose is to determine the cause of impairment of a particular ecosystem, we focus on discovering evidence of causal associations at the impaired site. Associations made using only data from the site are highly relevant but may be misleading, because of coincidental associations or confounding. Evidence from other field and laboratory studies is less relevant to the site, but can indicate which associations are likely to be causal. We use evidence from both sources, but discriminate between them during the analysis so that each is properly interpreted.
We have adapted and integrated three methods for determining the most likely cause.
Refutation. Karl Popper (1968) and others have argued quite convincingly that a causal hypothesis can be falsified with greater confidence than it can be accepted. We may tentatively believe that the rooster's crow causes the sun to rise, because of the co-occurrence and temporal sequence of these two events, but we reject that hypothesis the first morning that we prevent the rooster from crowing. Similarly, we reject candidate causes when the evidence from the site allows.
Diagnosis. As in medicine, we can make a very strong case for a cause if it is known to cause certain consistent signs in the impaired organisms or biotic communities. A set of diagnostic symptoms may come from the conventional signs of poisoning or disease or may come from the pattern of loss or gain of species that are indicators of particular conditions.
Strength of Evidence. Sir Austin Bradford Hill (1965) addressed the question "Does smoking cause lung cancer?" by organizing the evidence in terms of a set of causal criteria and then considering the overall strength of that evidence. We adapted that approach by identifying types of evidence (causal criteria) that can be applied to information from the site and from elsewhere (summary tables of types of evidence). We also adapted Mervyn Susser's (1986a) idea of assigning scores to each type of evidence based on whether it strengthens (+) or weakens (-) the case for a candidate cause and on the strength of that type of evidence, represented by the number of plus or minus (+ or -) signs (summary table of scores).
The three methods are integrated during the analysis of the strength of the fifteen types of evidence. If a type of evidence from the case is sufficient to refute a candidate cause, that candidate cause is rejected (scored R), and evidence from elsewhere need not be considered. If a set of diagnostic symptoms exists for a candidate cause and they are observed in the impaired system, the symptoms are scored D for diagnostic and that candidate cause is confirmed. Candidate causes that are neither rejected nor diagnosed are evaluated for their consistency (i.e., do all evaluated types of evidence strengthen or all weaken the candidate cause?). If inconsistencies occur, explanations are sought, based on background knowledge.
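The decision logic described above can be sketched in code. This is a minimal illustration, not part of the CADDIS guidance itself: the candidate causes, evidence types, and score values below are invented for the example, and real analyses weigh evidence with expert judgment rather than simple arithmetic.

```python
# Hypothetical sketch of the scoring logic described above.
# Scores per type of evidence: "R" = refuted by evidence from the case,
# "D" = diagnostic symptoms observed, or an integer from -3 to +3
# (negative weakens, positive strengthens the candidate cause).

def evaluate(candidate):
    """Return a verdict for one candidate cause, given its evidence scores."""
    scores = candidate["scores"].values()
    if "R" in scores:
        return "rejected"    # refuted; evidence from elsewhere need not be considered
    if "D" in scores:
        return "diagnosed"   # confirmed by a set of diagnostic symptoms
    numeric = [s for s in scores if isinstance(s, int)]
    # Consistency check: do all evaluated types of evidence point the same way?
    consistent = all(s > 0 for s in numeric) or all(s < 0 for s in numeric)
    return ("consistent" if consistent else "mixed", sum(numeric))

# Invented example candidates for an impaired stream reach.
candidates = {
    "low dissolved oxygen": {"scores": {"co-occurrence": "R"}},
    "metals toxicity": {"scores": {"co-occurrence": 2, "stressor-response": 1}},
    "excess sedimentation": {"scores": {"co-occurrence": 1, "temporal sequence": -2}},
}

for name, cand in candidates.items():
    print(name, "->", evaluate(cand))
```

Candidates that are neither rejected nor diagnosed carry their consistency flag and net score into the final comparison; when evidence is mixed, the analyst seeks an explanation from background knowledge rather than relying on the arithmetic alone.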
Although it is not possible to prove causation, particularly in ecoepidemiological assessments that do not control the conditions of exposure and response, it is possible to compare candidate causes of a particular effect and determine which is best supported by the evidence. Hence, the last step in the causal analysis is a comparison of the candidate causes to identify the most probable cause.