Title: Late lessons from early warnings: towards precaution and realism in research and policy.
Author: Gee D, Krayer von Krauss MP.
Journal: Water Sci Technol. 2005; 52(6):25-34.
PubMed ID: 16304931.

Abstract: This paper focuses on the evidentiary aspects of the precautionary principle. Three points are highlighted: (i) the difference between association and causation; (ii) how the strength of scientific evidence can be considered; and (iii) the reasons why regulatory regimes tend to err in the direction of false negatives rather than false positives. Because obtaining evidence of causation can take many decades of research, the precautionary principle can be invoked to justify action when evidence of causation is not available but there is good scientific evidence of an association between exposures and impacts. It is argued that the appropriate level of proof is context dependent, since "appropriateness" rests on value judgements about the acceptability of the costs, the distribution of those costs, and the consequences of being wrong. A complementary approach to evaluating the strength of scientific evidence is to focus on the level of uncertainty: if decision makers are made aware of the limitations of the knowledge base, they can compensate by adopting measures aimed at providing early warnings of unanticipated effects and mitigating their impacts. It is often overlooked that the Bradford Hill criteria for evaluating evidence are asymmetrical: the applicability of a criterion increases the strength of evidence for the presence of an effect, but the inapplicability of a criterion does not increase the strength of evidence for the absence of an effect. The paper then asks why there are so many more examples of regulatory "false negatives" than of "false positives", and puts forward two main reasons: (i) methodological bias within the health and environmental sciences; and (ii) the dominance of short-term economic and political interests in decision-making. Sixteen features of methods and culture in the environmental and health sciences are presented; of these, only three tend to generate "false positives". It is concluded that although these features of scientific methods and culture produce robust science, they can lead to poor regulatory decisions on hazard prevention.