Journal of Medicine and Philosophy
2004, Vol. 29, No. 3, pp. 281–299

The Precautionary Principle and Medical Decision Making
David B. Resnik
The Brody School of Medicine at East Carolina University, Greenville, NC, USA

ABSTRACT

The precautionary principle is a useful strategy for decision-making when physicians and
patients lack evidence relating to the potential outcomes associated with various choices.
According to a version of the principle defended here, one should take reasonable measures to
avoid threats that are serious and plausible. The reasonableness of a response to a threat depends
on several factors, including benefit vs. harm, realism, proportionality, and consistency. Since a
concept of reasonableness plays an essential role in applying the precautionary principle, this
principle gives physicians and patients a decision-making strategy that encourages the careful
weighing and balancing of different values that one finds in humanistic approaches to clinical
reasoning. Properly understood, the principle presents a worthwhile alternative to approaches to
clinical reasoning that apply expected utility theory to decision problems.

Keywords: cancer screening tests, evidence-based medicine, expected utility theory, medical
decision-making, precautionary principle, probability

I. INTRODUCTION

One of the oldest rules in medical decision-making is the adage ‘‘an ounce of
prevention is worth a pound of cure.’’ For most diseases, the harms and costs
entailed by preventative measures are much less than the harms and costs
associated with the disease. The rule appears to be a straightforward
application of expected utility theory (EUT) to clinical reasoning. For
example, to decide whether one should immunize children against a disease,
one should calculate the expected utilities of different choices and pick the

Address correspondence to: David B. Resnik, J.D., Ph.D., Department of Medical Humanities,
The Brody School of Medicine at East Carolina University, Greenville, NC 27858, USA.
E-mail: resnikd@mail.ecu.edu


option with the highest expected utility. EUT provides a scale for comparing
the expected costs and benefits of immunizing children against the expected
costs and benefits of not immunizing children.
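To make the calculation concrete, here is a minimal sketch in Python; the probabilities and utilities are hypothetical values chosen only for illustration, not clinical data.

```python
# A minimal sketch of an expected-utility comparison for an immunization
# decision. All probabilities and utilities are hypothetical illustrative
# values, not clinical data.

def expected_utility(outcomes):
    """Sum the probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical outcomes for immunizing: a rare adverse reaction vs. no harm.
immunize = [(0.001, -50), (0.999, 0)]
# Hypothetical outcomes for not immunizing: contracting the disease vs. staying well.
dont_immunize = [(0.05, -200), (0.95, 0)]

print(expected_utility(immunize))       # -0.05
print(expected_utility(dont_immunize))  # -10.0
# EUT recommends the option with the higher expected utility: immunize.
```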
What should one do when one does not have enough evidence to apply EUT
to a medical decision? Suppose that a 48-year-old male with no family history
of prostate cancer wants to know whether he should have a prostate specific
antigen (PSA) test to detect this disease. Prostate cancer is a serious illness that
mostly affects older males. 3% of all men in the United States die from prostate
cancer, but more than two-thirds of prostate cancer patients are 75 or older, with
a median age of death at 77 (Centers for Disease Control, 2003). The PSA test is
a relatively inexpensive way of detecting early-stage prostate cancer. Although
many medical experts recommend that men who are 50 or older receive the
PSA test, there is widespread disagreement in the medical community about
the merits of administering the test because of insufficient evidence that finding
and treating early-stage prostate cancers saves quality-adjusted life years
(QALYs) (Centers for Disease Control, 2003). Many prostate cancers remain
localized and have no significant effect on male health. Moreover, treatments
for prostate cancer, such as surgery or radiation, can have harmful side effects,
such as impotence and incontinence. The test often yields inconclusive results
because it has a sensitivity of 86% and a specificity of 33%, which means that it
misses 14% of prostate cancers (false negatives) and raises false alarms for 67% of
the men it tests who do not have the disease (false positives) (Hoffman, Gilliland,
Adams-Cameron, Hunt, & Key, 2002).1 The decision to perform (or not perform) the
PSA test has been and continues to be fraught with clinical, ethical, and legal
controversy (Gerard & Frank-Stromborg, 1998).2
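The practical force of these figures can be shown with a short calculation. The sketch below combines the cited sensitivity and specificity with an assumed 10% prevalence of detectable cancer among tested men (an illustrative figure, not a clinical estimate) to compute the predictive value of a positive result.

```python
# A sketch of why weak specificity undermines a screening test. The 10%
# prevalence is a hypothetical assumption; the sensitivity (86%) and
# specificity (33%) are the figures cited in the text.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result reflects actual disease."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(0.86, 0.33, 0.10)
print(f"PPV = {ppv:.1%}")  # ~12.5%: most positive results are false alarms
```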
How should one make a medical decision when faced with such
uncertainty? One possible approach to take under these conditions would be
to appeal to the precautionary principle (PP), a rule for decision-making that
many political groups and activists have urged that society adopt in response
to environmental and public health threats. According to a popular version of
the PP, lack of scientific proof should not be used as an excuse for failing to
take reasonable measures to avert a serious threat (European Commission,
2000). While the PP makes intuitive sense, there are many difficulties with
interpreting and applying this principle (Cranor, 2001; Soule, 2000). Under
some interpretations, the PP is an extremely risk-aversive, anti-science rule
(Resnik, 2003). In order to use the PP to make rational decisions, one must
articulate a version of this principle that is reasonable and not exceedingly
risk-aversive.

In this paper, I will argue that, properly understood, the PP can provide
physicians and patients with a useful approach to medical decisions when
EUT does not apply or does not yield clear recommendations. Since a concept
of reasonableness plays an essential role in applying the PP, this principle
gives physicians and patients a decision-making strategy that encourages the
careful weighing and balancing of different values that one finds in humanistic
approaches to clinical reasoning. I will defend this thesis as follows. In Section
II, I will explain how the PP differs from EUT. In Section III, I will explicate
and defend a version of the PP that could play a role in many types of practical
decisions. In Section IV, I will present an overview of different approaches to
medical decision-making and argue that the practical difficulties with
implementing EUT in medical decision-making create a niche for the PP. In
Section V, I will apply the version of the PP defined in Section III to a case
study.

II. EXPECTED UTILITY THEORY AND THE PRECAUTIONARY PRINCIPLE

It is important to understand the relationship between the PP and other
decision-making strategies that use EUT, such as risk assessment or cost-benefit
analysis (Foster, Vecchia, & Repacholi, 2000). Understanding the relationship
between EUT and the PP will help us to see more clearly how the PP challenges
the risk assessment paradigm that dominates environmental, public health, and
medical decision-making.
If we view a rational decision-maker as someone who takes effective means
to his or her ends, then EUT is a rule that applies when the decision-maker
does not know which outcomes will result from various choices, but he or she
can assign probabilities and values to the various outcomes associated with
different options.3 In short, he or she faces what decision theorists call a
decision under risk. For example, suppose I can buy a $10 lottery ticket for a
1/1,000,000 chance at winning $1 million. According to EUT, the rational
choice is the choice that maximizes one’s expected utility, which is calculated
by multiplying probabilities and utilities and summing the results (Resnik,
1987). If we measure utility in dollars, EUT would recommend that I do not
buy the lottery ticket, because my expected utility for buying the ticket is
−$9, while my expected utility for not buying the ticket is $0. EUT has
many different applications in practical decision-making and public policy,
including utilitarianism (in ethics and political philosophy), cost-benefit
analysis (in business and economics), environmental risk management,
insurance underwriting, and evidence-based medicine.
Suppose that one faces a decision where one does not have enough evidence
to apply EUT because one cannot objectively estimate the probabilities of
various outcomes. This would be a situation known in decision theory as a
decision under ignorance. According to one commonly used approach to
decisions under ignorance, called the ‘‘maximin’’ approach, we should make
the choice that maximizes our minimum outcome (Resnik, 1987). For
example, if I don’t know my odds of winning the lottery, I shouldn’t play,
because by not playing I avoid the worst outcome, i.e., buying the ticket and
not winning. The problem with this strategy is that you also lose the
opportunity for the best outcome: you can’t win the lottery if you don’t play.
Some theorists argue that the maximin approach is too pessimistic and risk-
aversive, since it recommends that we always take steps to avoid worst-case
scenarios. Thus, decision theorists have proposed a variety of other rules for
making decisions under ignorance. There is not sufficient space in this essay to
explain all of these decision rules in detail or discuss their strengths and
weaknesses. For the purposes of this essay, I will simply note that decision
theorists do not agree on which is the best rule for making decisions under
ignorance (Moser, 1990).
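For concreteness, here is a minimal sketch of the maximin rule applied to the lottery example, with dollar payoffs standing in for utilities.

```python
# A minimal sketch of the maximin rule for a decision under ignorance:
# with no probabilities available, choose the act whose worst outcome
# is best. Dollar payoffs stand in for utilities.

acts = {
    "buy ticket": [-10, 999_990],  # lose the $10 stake, or win $1M net of it
    "don't buy":  [0, 0],          # nothing ventured, nothing gained
}

best_act = max(acts, key=lambda act: min(acts[act]))
print(best_act)  # "don't buy": its worst outcome ($0) beats -$10
```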
These controversies reflect fundamental disagreements about attitudes
toward risk and the management of uncertainty. Although formal decision
rules and theories can help define and clarify these controversies, they cannot
resolve them because these controversies reflect disputes about values (Audi,
2001). Some people like to minimize all risks, while others are willing to
tolerate risks in order to take advantage of opportunities.
The best way to understand the PP, I believe, is as an approach to making
decisions under ignorance. Consider a statement of the PP in Principle 15 of
the United Nations Conference on Environment and Development at Rio de Janeiro:
In order to protect the environment, the precautionary principle shall be
widely applied by States according to their capabilities. Where there are
threats of serious or irreversible damage, lack of full scientific certainty
shall not be used as a reason for postponing cost-effective measures to
prevent environmental degradation. (United Nations, 1992, p. 10)
This principle sounds a lot like EUT, since it recommends that we take cost-
effective measures to address threats. If the PP were simply a dressed-up version
of EUT, then it would not challenge the status quo of environmental and
public health risk assessment, since risk assessment approaches already attempt
to take cost-effective measures to prevent harm (Cranor, 1993; Goklany, 2001).
Where the PP differs from EUT is in the ambiguous phrase ‘‘lack of full
scientific certainty.’’ The PP authorizes decision-makers to make choices even
when they lack scientific certainty. If we are to understand the PP as challenging
the status quo – and both proponents and opponents of the PP agree that it
challenges the status quo – then, by implication, the status quo (i.e. risk
assessment) must not authorize choices when one lacks ‘‘full scientific
certainty.’’ To properly distinguish the PP from EUT, it is important to define
this ambiguous phrase as well as other terms contained in the PP.

III. DEFINING THE PRECAUTIONARY PRINCIPLE

Opponents of the PP argue that it is vague and ambiguous, and even proponents of
the PP agree that it requires additional interpretation and clarification (Cranor,
2001; Resnik, 2003; Soule, 2000). One legal scholar has identified 14 different
versions of the PP in various treaties and declarations (Vanderzwaag, 1999).
The first ambiguity that one must resolve concerns the interpretation of the
phrase ‘‘lack of full scientific certainty.’’ Certainty is an epistemological
concept related to degree of proof (Goldman, 1986). The concept of proof is
important in many disciplines, including science, law, mathematics, and
philosophy. Although the PP has been invoked in legal documents and treaties,
I will focus on scientific proof, not legal proof, in this essay.4
Ever since the seventeenth-century philosopher René Descartes reflected on
his knowledge of himself, God, and the external world, philosophers have
argued about whether any beliefs are or could be certain. The debate is too
long and complex to summarize here. For our purposes, we need only note that
there is a robust consensus that scientific knowledge is not certain (Kitcher,
1993). Scientific knowledge may be confirmed, verified, proven, accepted,
justified, reliable or entrenched, but it is not certain (Goldman, 1986). Thus,
the phrase ‘‘scientific certainty’’ is a misnomer. So, the first step to developing
a coherent interpretation of the PP would be to replace this oxymoronic phrase
with something more useful. For the purposes of this essay I will adopt a
probabilistic interpretation of ‘‘scientific proof.’’ To offer proof, in science, is
to offer evidence that has some bearing on the degree of probability assigned
to a statement or hypothesis (Howson & Urbach, 1989).
If we understand ‘‘scientific proof’’ probabilistically, the natural question to
ask is, ‘‘what degree of probability is required for scientific proof?’’ There is
no general answer to this question, because a great deal depends on the
practical applications and implications of the statement we are attempting to
prove (Rudner, 1953). For example, if the statement is ‘‘Medicine X is safe
and effective,’’ we would require a high degree of proof, e.g. probability
≥ 95%, before we would use this statement in deciding whether to approve a
new drug for human use, because the consequences of a mistake are very
great. If the statement is, ‘‘Medicine X tastes better than Medicine Y’’ we
would require only a probability > 50%, since the consequences of making a
mistake are not very great. If the statement is ‘‘this tumor is cancerous,’’ we
might require only a 20% probability for this statement, because it would still
be worth examining the tumor even when there is a small chance that it is
cancerous. Thus, the amount of proof required may vary from case to case,
depending on our practical goals and circumstances. Even so, scientific proof
still must involve an assignment of objective probabilities, where ‘‘objective
probabilities’’ are based on facts that are independent of our subjective beliefs,
such as statistical frequencies or logical relationships. If we cannot assign an
objective probability to a statement, then it lacks scientific proof (Resnik,
2003).5
Some versions of the PP frame the issues of proof in terms of evidence of
causal relationships. For instance, the Declaration of the Third International
Conference on the Protection of the North Sea calls for actions to protect the
North Sea even when there is no evidence proving a causal connection
between an emission and its effects on the North Sea (North Sea Conference,
1990). Although the evidence about causal relationships between actions and
threats is important in deciding whether to take steps to minimize or prevent
those threats, evidence for causal relationships ultimately derives from
statistical probabilities (Pearl, 2000). To prove that smoking causes lung
cancer, one may conduct a linear regression analysis to establish that the rate
of smoking (the independent variable) predicts the rate of lung cancer (the
dependent variable) in a given population (Johnson & Bhattacharya, 1985).
Although discovering biological mechanisms and processes that explain the
pathology of lung cancer can aid our understanding of the relationship
between smoking and lung cancer, one does not need to find these mechanisms
and processes in order to use statistical methods to prove this causal
relationship. Thus, talk of ‘‘causal relationships’’ is a red herring when it
comes to understanding the PP, since the essential epistemological concept is
the concept of probability. Without probability, there can be no scientific proof.
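As a rough illustration of this statistical route to causal claims, the sketch below fits a regression line to population data; the data points are invented purely for illustration and carry no empirical weight.

```python
# A sketch of the regression described above: smoking rate (independent
# variable) predicting lung cancer rate (dependent variable) across
# populations. The data points are invented purely for illustration.

import numpy as np

smoking_rate = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
cancer_rate = np.array([0.005, 0.009, 0.015, 0.019, 0.026])

slope, intercept = np.polyfit(smoking_rate, cancer_rate, 1)
r = np.corrcoef(smoking_rate, cancer_rate)[0, 1]
print(f"slope = {slope:.4f}, r = {r:.3f}")  # a strong positive association
```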
Having explored these epistemological issues, we can now understand one
of the main critiques of the PP. According to many critics, people use the PP to
justify taking actions against threats that are not probable or even plausible
(Goklany, 2001). Like Chicken Little, we can always imagine nightmare
scenarios that are logically possible in order to claim that we need to take
precautionary measures against them. Thus, the first qualification that one
must add to the PP is that the threats must be at least plausible (or credible or
believable) (Cranor, 2001; Resnik, 2003). Without this qualification, the PP is
excessively risk-aversive and irrational.
What’s the difference between a probable threat and one that is merely
plausible? A threat is probable when we have enough data to assign an
objective probability to a statement describing the threat. Prostate cancer is a
probable threat for a man because we have good data from numerous studies
indicating that 3% of men die from prostate cancer. Thus, the statement ‘‘John
will die of prostate cancer’’ has a probability, P, of 0.03, where ‘‘John’’ is a
man. We can use Bayes’ Theorem to update this probability in light of new
evidence, such as information about John’s family history relating to prostate
cancer and his dietary risk factors.6
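As a sketch of what such an update looks like, the code below applies the odds form of Bayes’ theorem to the 3% baseline figure; the likelihood ratio attached to a positive family history is a hypothetical value chosen for illustration.

```python
# A sketch of Bayesian updating of the 3% baseline risk. The likelihood
# ratio attached to a positive family history is hypothetical.

def bayes_update(prior, likelihood_ratio):
    """Update a probability using the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

posterior = bayes_update(prior=0.03, likelihood_ratio=2.5)  # hypothetical LR
print(f"{posterior:.1%}")  # ~7.2%: the new evidence raises the baseline risk
```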
A threat is merely plausible when we do not have enough evidence to assign
an objective probability to it, but we have some evidence for it. For example,
the statement ‘‘I will have a flat tire on my way to work in Greenville, NC’’ is
plausible, because we have some evidence to believe that this event could
happen, yet we do not have enough evidence to assign an objective probability
to this statement.7 Although I have never had a flat tire on my way to work in
Greenville, I have evidence for this threat based on my general knowledge of
tires and driving. On the other hand, the statement ‘‘I will be attacked by a
pack of poodles on my way to work in Greenville, NC’’ is not probable or even
plausible. While the statement is logically possible, I simply do not have
enough evidence about the statement to assign it a probability or even regard it
as plausible.
Now, one might object that the statements I am calling ‘‘plausible’’ are
actually ‘‘probable’’ on the grounds that we can always assign a subjective
probability to any statement, where a subjective probability is our best guess,
given our background knowledge. According to some approaches to scientific
reasoning, we can assign subjective probabilities to all statements and then use
Bayes’ theorem to update those probabilities in light of new evidence (Howson
& Urbach, 1989). In the long run, our probability assignments will reflect the
evidence we have gathered rather than our initial biases. Thus, I can assign a
probability, albeit a small one, to the statement ‘‘I will be attacked by a pack of
poodles on my way to work in Greenville, NC.’’
The problems with the subjective approach to probability are well known,
and I will not review all of these arguments and counterarguments here.
Briefly, the main drawback with the subjective approach is that Bayesian
updating may not overcome our initial biases. New evidence may not convert
subjective probabilities into objective ones, especially when we have
conducted only a few tests (Earman, 1992). Although there are some formal
and rational constraints on the assignment of subjective probabilities, such as
the axioms of probability theory and prohibitions against the possibility of a
Dutch Book, initial subjective probabilities can vary greatly: one person could
regard the probability of a statement as 99% while someone else could regard
its probability as only 1% (Resnik, 1987). After a few tests, these initial
assignments might change to 80% and 20%, but this would still not be enough
to overcome initial biases.
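This convergence worry can be made vivid with a small numerical sketch. Below, the two initial assignments are encoded as Beta priors (one convenient, but not obligatory, way to model subjective probabilities), and both agents update on the same five test results.

```python
# A sketch of how strong subjective priors resist a few observations.
# Two agents encode initial probabilities of 99% and 1% as Beta priors
# (an illustrative modeling choice) and observe the same five results.

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

successes, failures = 3, 2  # five test results seen by both agents

priors = {"optimist": (99, 1), "skeptic": (1, 99)}  # prior means 0.99 and 0.01
for name, (a, b) in priors.items():
    posterior = beta_mean(a + successes, b + failures)
    print(name, round(posterior, 3))
# optimist ~0.971, skeptic ~0.038: a few tests barely move strong priors
```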
One might also object that the statements that I am calling ‘‘plausible’’ are
really ‘‘nomologically possible,’’ where ‘‘nomological possibility’’ is understood
as a type of possibility narrower than logical possibility. A statement is
nomologically possible if it is consistent with scientific laws (Pargetter, 1984).
For example, it is logically possible for a rock to fly up in the sky when I
release it from my hand but nomologically impossible because this event
would be inconsistent with the law of gravity. My reply to this objection is that
‘‘nomological possibility’’ does not capture the full sense of what we mean by
‘‘plausibility.’’ It is nomologically possible that my car will fall apart into a
thousand pieces on the way to work, but I do not think that this is plausible.
The second ambiguity related to defining and applying the PP has to do with
the concept of reasonableness, which is another important part of the principle.
Many versions of the PP require that responses to threats be reasonable (Cranor,
2001). For example, the European Commission (2000) holds that the measures
taken in response to a threat should be proportional to the level of the threat,
consistent with other measures already taken, and based on an examination of
potential costs and benefits of responding to threat, including economic costs
and benefits. Elsewhere, I argue that responses to a threat should take a realistic
attitude toward the threat (Resnik, 2003). For example, it would be
unreasonable to respond to a threat that is impossible to prevent. It would
also be unreasonable to take ineffective measures against a preventable threat.
To see why the PP should be constrained by a concept of reasonableness,
consider possible responses to the threat of having a flat tire on the way to
work. I regard this threat as plausible – but what should I do about it? Suppose
that I have several options, such as (1) do nothing, (2) don’t go to work, and (3)
take a jack and spare tire. Option 1 is an insufficient response to the threat
and does not do anything to minimize the potential cost of a flat tire. Option 2,
on the other hand, is an overreaction to the threat that does not allow me to
enjoy the benefits of going to work. Option 3 appears to be the most
reasonable option, because it uses a level of protection proportional to the level
of threat and it balances benefits and costs.
The concept of ‘‘reasonableness’’ is not a very precise notion. It is not
identical to the more formal and precise concept of ‘‘rationality.’’ A rational
person is someone who takes effective means to their chosen ends (Rawls,
2001). A reasonable person, on the other hand, is someone who carefully
balances competing goals, exhibits discretion and good judgment, is non-
dogmatic and self-critical, and listens to reasons and arguments (Audi, 2001;
Rawls, 2001). It is possible that a person could act rationally but not act
reasonably, if that person pursued his goals in a dogmatic, uncritical, or
unbalanced way (Audi, 2001). The concept of ‘‘reasonableness’’ can play a
very important role in the assessment of human conduct because it involves
the careful balancing and weighing of competing norms and goals that
characterizes moral and political decision-making (Rawls, 2001).8
One of the advantages of using a concept of reasonableness to define the PP
is that there may be more than one reasonable response to a particular threat.
Consider a storeowner who wants to prevent burglaries when his store is
closed at night. He could choose among several different security systems,
employ a security service, or hire his own security personnel. All of these
options could be reasonable, depending on how he weighs and considers the
different values that hinge on his decision. In EUT, on the other hand, there is
one and only one choice that is rational, unless two or more choices have the
same expected utility.9
A final ambiguity relating to the interpretation and application of the PP has to
do with the ‘‘seriousness’’ of a threat. What makes one threat more serious
than another? Intuitively, we can say that seriousness depends on two factors:
(1) the potential for harm and (2) reversibility. Although most people would
agree that the potential for harm makes a difference in our assessment of
threats, why would reversibility matter? Reversibility is important because
some threats may be impossible to reverse (or undo) once they occur, while
other threats can be reversed. In the short run, a reversible threat may seem to
be very serious, but in the long run, an irreversible threat may be more serious.
For example, consider two potential threats to a house: damage to the roof
and damage to the foundation. In the short run, hail or wind damage to the
roof may cause a great deal of harm to the house, but in the long run, damage
to the foundation may be more serious, because this harm may not be
reversible. It is usually not too difficult to replace damaged shingles or other
roofing materials, but it may be difficult or impossible to repair a cracked
foundation.
Having defined some of the key terms in the PP, we can now offer a
definition of this principle:
PP (definition): One should take reasonable measures to prevent or mitigate
threats that are plausible and serious.
This is a very short definition of the PP. The concepts of reasonableness,
plausibility, and seriousness distinguish the PP from EUT, which uses
‘‘probability’’ instead of ‘‘plausibility,’’ ‘‘utility maximization’’ instead of
‘‘reasonableness,’’ and ‘‘harm’’ instead of ‘‘seriousness.’’

IV. MEDICAL DECISION-MAKING

Having defined a version of the PP and discussed the difference between the
PP and EUT, I am now prepared to show how one might apply the PP to
medical decisions. Before beginning this discussion, it will be useful to
distinguish between medical decision-making and public health decision-
making. By ‘‘medical decision-making,’’ I mean the decision to perform a
medical intervention, such as testing, treatment, or prevention, in an individual
patient (Albert, Munson, & Resnik, 1999). Medical decisions take place
within the context of the physician-patient relationship. The physician and
patient (or his or her surrogate) both have different roles and rights in these
decisions (Beauchamp & Childress, 2001).
By ‘‘public health decision-making,’’ I mean the decision to conduct an
intervention or a study in a patient population (Kass, 2001). For example, the
decision to screen all men who are 50 years old or older for prostate cancer
would be a public health decision. Public health decisions also address the
social, political, and economic aspects of medicine. Scientific and medical
experts can make recommendations concerning the merits of public health
interventions, but non-experts (i.e., politicians and the public) must approve of
any policy before it is adopted. This paper will focus on medical decisions, not
public health decisions.
The basic issue underlying the debate about the nature of medical decision-
making (or clinical reasoning) can be stated as a general question: ‘‘is
medicine a science or an art?’’ This question is as old as medicine itself
(Little, 1995; Porter, 1997). Most people who have put some serious thought
into the question recognize that medicine is both a science and an art. The
science of medicine involves the application of scientific knowledge, theories,
principles, and methods to clinical problems; the art of medicine resides in the
relationship between the physician and patient, the practical skills of the
physician, and the decision about whether to perform a medical intervention
(Clouser & Zucker, 1974). Both scientific facts and human values play roles in
medical decisions. Scientific facts play a role in understanding the clinical
problem as well as potential solutions. Human values play a role in choosing
among different options (Beauchamp & Childress, 2001).
Even though most people agree that medicine is both a science and an art,
some authors view it as more like a science than an art, while others view it as
more like an art than a science. Those who emphasize the scientific aspects of
medicine argue that it is possible to apply formal methods, such as logic,
statistics and decision theory, to clinical reasoning. Although this approach
recognizes that values still play a role in medical decisions, it holds that one
can use quantitative methods to understand how values enter into clinical
decisions (Albert, Munson, & Resnik, 1999; Gorovitz & MacIntyre, 1976;
Schaffner, 1986). EUT is a quantitative method that has had considerable
influence on medical decision-making. In the PSA testing case mentioned at
the beginning of the paper, one could assign values and probabilities to the
outcomes associated with the different choices (test vs. don’t test) to
determine what one should do. EUT is not a value-free approach to decision-
making, because values play an important role in the calculation of expected
utility. However, since EUT is a quantitative approach to decision-making,
many people mistakenly view it as value-free (or objective).
During the 1990s, evidence-based medicine (or EBM) was the new
buzzword for the scientific approach to clinical reasoning. According to EBM
proponents, many medical decisions and accepted practices are based on
tradition, prejudice, intuition and other factors that have little to do with the
evaluation of scientific evidence relating to the outcomes of controlled
experiments. The EBM approach recommends that doctors make decisions
based on a precise weighing of risks and benefits in light of the best available
scientific evidence (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).
Medical decisions should maximize benefits and minimize risks for the
patient. The medical community can assess safety, efficacy, and cost-
effectiveness by studying the outcomes of various treatments and procedures.
Ideally, all medical decisions should be based on evidence from randomized
clinical trials, the so-called ‘‘gold standard’’ for medical research (Sackett
et al., 2000). EBM proponents also endorse the development of practice
guidelines and electronic databases to provide physicians with EBM
recommendations and up-to-date information (Boyle, 2000).
Those who do not accept the scientific approach to medical decision-
making are skeptical about attempts to use quantitative methods to understand
the role of values in decision-making. Although these methods can help our
understanding of clinical decisions, qualitative informal methods, such as
intuition and experience, as well as moral, political, religious, and aesthetic
values should play an important role in decision-making (Downie &
Macnaughton, 2000; Little, 1995; Murphy, 1976). Clinical reasoning is not
simply an application of quantitative methods to clinical problems; it involves
judgments and discretion as well as a careful weighing and balancing of
different values. It is humanistic, not scientific.
There are several reasons why clinical reasoning is not simply an
application of quantitative formal methods to clinical problems. First, doctors
must frequently make medical decisions with incomplete evidence (Miller,
1990). Very often there is not enough time to gather all the evidence one might
need to make a definitive diagnosis, and one must accept a working diagnosis
in order to begin treatment (Gorovitz & MacIntyre, 1976). For example, if a
78-year-old female has a 3-day history of a productive cough, fever, difficulty
breathing, and a chest x-ray consistent with left lower lobe pneumonia, a
doctor may begin antibiotic therapy before he or she knows the laboratory
results from a sputum culture, even though those results are essential for
determining the type of pathogen that is infecting the patient (Tierney,
McPhee, & Papadakis, 2002).
Second, the sheer complexity of medical problems can make it difficult to
develop definite solutions to clinical problems in real time (Gorovitz &
MacIntyre, 1976; Miller, 1990). Even the diagnosis and treatment of a
relatively simple human medical problem, such as pneumonia, may involve
many different organ systems as well as many different tests and treatments.
Moreover, it is frequently the case that a patient may have more than one medical
problem or illness. Human beings are complex, living systems with many
different interacting parts, not controlled experiments.
Third, human values play a key role in determining whether to initiate a
particular test or treatment (Gorovitz & MacIntyre, 1976; Resnik, 1995). All
tests and treatments have risks and benefits. Patients and doctors must choose
among different therapeutic options by weighing risks and benefits.
Furthermore, doctors and patients do not always agree on the best way to
balance benefits and risks, since risk/benefit calculations often depend on
quality of life judgments (Little, 1995). Although science can help us to
determine the most effective means of promoting benefits or avoiding risks, it
cannot help us weigh and balance benefits and risks. To compare benefits and
risks, we must appeal to moral, aesthetic, political or religious values
(Beauchamp & Childress, 2001).
Fourth, human values play an important role in defining many of the
foundational concepts in healthcare, such as ‘‘health’’, ‘‘disease’’, ‘‘normal
functioning’’, ‘‘dysfunction’’, ‘‘disability’’, and ‘‘quality of life’’. We often
appeal to our conception of the good life to classify specific traits as diseases.
A person with a disease is someone who deviates from these normative ideals.
For example, at one time physicians and psychiatrists regarded homosexuality
and promiscuity as diseases. Since we often regard diseases as ‘‘bad’’ traits,
we feel morally obligated to treat all diseases in order to restore the person to
the good life. Although some scholars, such as Boorse (1975), have made
brilliant attempts to define these concepts in objective, scientific terms, many
other scholars are skeptical of such attempts.10
I do not intend to settle this ongoing debate about the nature of clinical
reasoning in this paper. However, I would like to show how the PP can shed
some light on this debate. First, as we saw earlier, the PP applies to decisions
under ignorance. One of the strongest arguments for regarding medicine as an
art is that physicians and patients must frequently make important choices
when they do not have enough evidence to assign objective probabilities to the
outcomes associated with different choices. The PP can offer physicians
and patients some useful guidance when they must make decisions under
ignorance. Instead of attempting to maximize utility, physicians and patients can
make choices that are reasonable responses to plausible and serious threats.
Second, the PP is essentially a qualitative method for making decisions.
Instead of using quantitative concepts like ‘‘probability’’ and ‘‘utility
maximization,’’ it employs qualitative concepts, such as ‘‘plausibility’’ and
‘‘reasonableness.’’ To apply the PP to any particular problem, one must make
judgments regarding the plausibility and seriousness of a threat, as well as the
reasonableness of different responses to the threat. All of these concepts,
especially the concept of reasonableness, involve the type of subtle weighing
and balancing that one finds in humanistic approaches to decision-making,
rather than the precise calculation of expected utilities that one finds in
scientific and quantitative approaches. Although quantitative methods for
decision-making certainly have a place in clinical reasoning, the PP can offer
useful guidance when these methods do not apply.

V. APPLYING THE PRECAUTIONARY PRINCIPLE TO MEDICAL DECISIONS

Let us return to the PSA test that we mentioned at the beginning of this paper.
To apply the PP to this case, we must first ask: is the threat plausible? The
answer to the question is ‘‘yes,’’ because we have some evidence that the
patient may develop prostate cancer.
Next, we should ask: is the threat serious? Again, the answer to this
question would be ‘‘yes.’’ Although some prostate cancers remain sequestered
in the prostate and do not metastasize, other cancers have the potential to cause
great harm, including pain, disability, and death. However, since some
prostate cancers respond to treatment, some of the harms associated with
prostate cancer may be reversible. Although prostate cancer is not as serious a
threat as a cancer that is highly aggressive or untreatable, it is still a serious
threat.
Since the threat of prostate cancer is plausible and serious, our decision to
take (or not to take) precautionary measures will depend on the reason-
ableness of these measures. A reasonable response would be one that is
proportional to the degree of the threat, consistent with other decisions, carefully
weighs benefits and harms, and takes a realistic attitude toward the threat and
its prevention. A PSA test would appear to meet the proportionality
requirement, since this is a relatively minor intervention to address a major
problem. The PSA test would also appear to meet the realism requirement,
since a screening exam could help one avoid some of the adverse outcomes of
prostate cancer. On the other hand, having the test may be inconsistent with
decisions to forego other screening exams. For example, suppose the
patient has a PSA exam for prostate cancer, but he does not have a
colonoscopy for colon cancer (at age 48) or a chest CT-scan for lung cancer.
Would this be a consistent approach to cancer screening? Having the test may
also not reflect a careful weighing of risks and benefits: although the test has
some potential benefits, there are also some risks. For example, the test could
yield a false positive or a false negative result. If the test yields a false positive
result, the patient would need to undergo other tests to determine that the
elevated PSA level is not due to cancer. If the test yields a false negative
result, the patient will be lulled into a false sense of security and may not take
proper steps to mitigate the effects of his cancer. Finally, if the test yields a
true positive result, the patient will need to choose among different options,
which may have undesirable complications, such as impotence and
incontinence. It may be the case that watchful waiting would be the best
option for the patient. If this is the case, then why even bother with the test at
all?
As one can see, most of the key issues in the decision whether to have the
PSA test relate to the reasonableness of different responses. In the EBM
literature, the discussion of PSA testing focuses on the issues relating to
maximizing expected utility. To date, there is widespread disagreement among
physicians and patients about the expected utility of the PSA test. Although
the PP does not provide a quick and easy solution to the problem of PSA
testing, it does offer a different and useful perspective on the problem. The PP
advises decision-makers to focus their discussion on what would constitute a
reasonable response to the threat of prostate cancer, rather than on which option
has the highest expected utility. The PP does not necessarily provide us with a
clear and unambiguous answer to our original question, but it helps us to
frame the problem in a useful way. The PP instructs us to approach the
problem from the point of view of reasonableness rather than from the
perspective of cost-effectiveness. By thinking about the reasonableness of
various responses, we can consider not only questions about costs and benefits
but also questions about proportionality, consistency, and realism.11

VI. CONCLUSION

In this paper, I have argued that the precautionary principle can offer
physicians and patients a useful method for making choices when they do not
have enough evidence to apply expected utility theory to medical decisions.
When physicians and patients lack adequate scientific proof relating to the
potential outcomes associated with various choices, they should take
reasonable measures to avoid health threats that are serious and plausible.
Although the PP may not yield definite solutions to all medical dilemmas, it
can help focus our attention on questions that physicians and patients do not
always consider when making medical decisions, such as questions about the
reasonableness of different options. The reasonableness of a response to a
health threat depends on several factors, including benefit vs. harm, realism,
proportionality, and consistency. Since the PP requires that decisions be
reasonable, it encourages the careful weighing and balancing of different
values that one finds in humanistic approaches to clinical reasoning, and it
constitutes a worthwhile alternative to simplistic applications of expected
utility theory that one finds in some approaches to clinical reasoning.12

NOTES

1. A sensitivity of 86% is not very impressive, and a specificity of 33% is very unimpressive.
For comparison, self-administered HIV tests have a sensitivity and specificity of 99% or
greater (The One-Minute HIV Test, 2003).
2. Many other cancer screening exams, such as mammography to detect early breast cancer
and computerized tomography to detect early lung cancer, have also been very contro-
versial. See Gates (2001) and Grann and Neugut (2003).
3. I follow the traditional analysis of rationality used in philosophy of science, decision
theory and economics: a rational agent is someone who takes effective means to his ends.
Some writers, such as Gert (1998), view irrationality as the more basic notion: a rational
agent is someone who does not make irrational choices. Others, such as Simon (1982),
argue that the concept of rationality must take human limitations into account: a rational
agent is someone who takes satisfactory means to his ends, given his ignorance and lack of
time to decide, where ‘‘satisfactory’’ does not mean ‘‘the best possible,’’ but only ‘‘good
enough.’’ For further discussion, see Audi (2001).
4. The three main degrees of proof recognized in the U.S. legal system include ‘‘preponder-
ance of evidence,’’ ‘‘clear and convincing,’’ and ‘‘beyond reasonable doubt’’ (Black’s Law
Dictionary, 1999). If one translates these concepts into probabilistic terms, ‘‘preponderance
of evidence’’ = probability > 50%; ‘‘beyond reasonable doubt’’ = probability ≥ 95%; and
‘‘clear and convincing’’ = probability between 60% and 95%.
5. One must be careful to distinguish between de re and de dicto senses of the terms
‘‘plausibility’’, ‘‘probability’’, and ‘‘possibility’’. De re (or ‘‘of the thing’’) uses of these
terms occur when one applies them to a particular thing, process, or event. For example, the
statement ‘‘It will probably rain tomorrow,’’ uses probability in a de re sense, since it is
applying this term to events in the world. On the other hand, the statement ‘‘It is probable
that it will rain tomorrow’’ uses ‘‘probability’’ in a de dicto (or ‘‘of the speech’’) sense
because it applies ‘‘probability’’ to another statement, i.e. ‘‘it will rain tomorrow.’’ In this
paper, I will be using the de dicto sense of the words ‘‘probable’’, ‘‘possible’’, and
‘‘plausible’’, rather than the de re sense.
6. Bayes’ theorem provides a method for calculating conditional probabilities based on prior
probabilities and the evidence. The theorem implies the following equation:
P(H|E) = [P(H) × P(E|H)] / P(E)
where ‘‘P(H|E)’’ = ‘‘the probability of the hypothesis, given the evidence from a test;’’
P(H) = ‘‘the probability of the hypothesis prior to testing;’’ P(E|H) = ‘‘the probability of
the evidence, given the hypothesis, prior to testing;’’ and P(E) = ‘‘the probability of the
evidence, prior to testing.’’ See Howson and Urbach (1989).
7. The plausibility of a hypothesis is a function of the evidence pertaining to the hypothesis as
well as our background knowledge and epistemic values, such as simplicity, explanatory
power, consistency, fruitfulness, and the like. For example, one might argue that a
hypothesis is plausible because it has some evidence in its favor, is consistent with
our background knowledge, and provides a simple and powerful explanation of some
phenomena. See Resnik (2003).
8. I recognize that people may disagree about the definition of ‘‘reasonableness’’ as well as
the various criteria for determining reasonableness, but I will not address this problem in
this essay. For further discussion, see Audi (2001) and Rawls (2001). I also note that the law
has its own concept of reasonableness, which in some ways is similar to the concept of
reasonableness discussed here (Black’s Law Dictionary, 1999).
9. See the discussion of rationality in note 3.
10. For a review of these debates, see Brown (1985); Kovács (1998).
11. In both of these examples, the key questions focused on the reasonableness of the responses
to health threats, but other examples might focus on questions relating to the plausibility or
seriousness of the threats. For instance, if there is very little evidence for a health threat, then
plausibility becomes a key issue. The threat posed by the use of smallpox as a biological
weapon raises important questions relating to plausibility. Is it plausible that terrorists
would be able to use smallpox as a weapon? For further discussion, see Fauci (2002).
12. I would like to thank Loretta Kopelman and Douglas Weed for helpful comments.

REFERENCES

Albert, D., Munson, R., & Resnik, M. (1999). Reasoning in medicine: An introduction to
clinical inference, (2nd ed.). Baltimore: Johns Hopkins University Press.
Audi, R. (2001). The architecture of reason. New York: Oxford University Press.
Beauchamp, T., & Childress, J. (2001). Principles of biomedical ethics (5th ed.). New York:
Oxford University Press.
Black’s Law Dictionary. (1999). Minneapolis: West Publishing.
Boorse, C. (1975). On the distinction between disease and illness. Philosophy and Public
Affairs, 5, 49–68.
Boyle, P. (Ed.). (2000). Getting doctors to listen: Ethics and outcomes data in context.
Washington, DC: Georgetown University Press.
Brown, W. (1985). On defining disease. The Journal of Medicine and Philosophy, 10, 311–328.
Centers for Disease Control. (2003). Prostate Cancer Screening: A Decision Guide [On-line].
Available: http://www.cdc.gov/cancer/prostate/decisionguide/index.htm.
Clouser, D., & Zucker, A. (1974). Medicine as art: An initial exploration. Texas Reports on
Biology and Medicine, 32, 267–274.
Cranor, C. (1993). Regulating toxic substances. New York: Oxford University Press.
Cranor, C. (2001). Learning from the law to address uncertainty in the precautionary principle.
Science and Engineering Ethics, 7, 313–326.
Downie, R., & Macnaughton, J. (2000). Clinical Judgment. New York: Oxford University Press.
Earman, J. (1992). Bayes or bust? Cambridge, MA: MIT Press.
European Commission. (2000). Communication from the commission on the precautionary
principle. Brussels: European Commission.
Fauci, A. (2002). Smallpox vaccination policy – the need for dialogue. New England Journal of
Medicine, 346, 1319–1320.
Foster, K., Vecchia, P., & Repacholi, M. (2000). Risk management, science, and the
precautionary principle. Science, 288, 979–981.
Gates, T. (2001). Screening for cancer: Evaluating the evidence. American Family Physician,
63, 513–522.
Gerard, M., & Frank-Stromborg, M. (1998). Screening for prostate cancer in asymptomatic
men: Clinical, legal, and ethical implications. Oncology Nursing Forum, 25, 1561–1569.
Gert, B. (1998). Morality: Its nature and justification. New York: Oxford University Press.
Goklany, I. (2001). The precautionary principle: A critical appraisal of environmental risk
assessment. Washington, DC: The Cato Institute.
Goldman, A. (1986). Epistemology and cognition. Cambridge: Harvard University Press.
Gorovitz, S., & MacIntyre, A. (1976). Toward a theory of medical fallibility. The Journal of
Medicine and Philosophy, 1, 51–71.
Grann, V., & Neugut, A. (2003). Lung cancer screening at any price? Journal of the American
Medical Association, 289, 355–356.
Hoffman, R., Gilliland, F.D., Adams-Cameron, M., Hunt, W.C., & Key, C.R. (2002). Prostate-
specific antigen testing accuracy in community practice. Biomed Central Family
Practice, 3, 19–23.
Howson, C., & Urbach, P. (1989). Scientific reasoning. New York: Open Court.
Johnson, R., & Bhattacharya, J. (1985). Statistics: Principles and methods. New York:
John Wiley and Sons.
Kass, N. (2001). An ethics framework for public health. American Journal of Public Health,
91, 1776–1782.
Kitcher, P. (1993). The advancement of knowledge. New York: Oxford University Press.
Kovács, J. (1998). The concept of health and disease. Medicine, Health Care, and Philosophy,
1, 31–39.
Little, M. (1995). Humane medicine. Cambridge: Cambridge University Press.
Miller, R. (1990). Why the standard view is standard: People, not machines, understand
patients’ problems. The Journal of Medicine and Philosophy, 15, 581–591.
Moser, P. (Ed.). (1990). Rationality in action. Cambridge: Cambridge University Press.
Murphy, E. (1976). The logic of medicine. Baltimore: Johns Hopkins University Press.
North Sea Conference. (1990). Final Declaration of the Third International Conference on
Protection of the North Sea. International Environmental Law, 1, 662–673.
Pargetter, R. (1984). Laws and modal realism. Philosophical Studies, 46, 335–347.
Pearl, J. (2000). Causality. Cambridge: Cambridge University Press.
Porter, R. (1997). The greatest benefit to mankind: A medical history of humanity. New York:
W.W. Norton.
Rawls, J. (2001). Justice as fairness: A restatement. Cambridge, MA: Harvard University Press.
Resnik, D. (1995). To test or not to test: A clinical dilemma. Theoretical Medicine, 16,
141–152.
Resnik, D. (2003). Is the precautionary principle unscientific? Studies in the History and
Philosophy of Biological and Biomedical Sciences, 34, 329–344.
Resnik, M. (1987). Choices: An introduction to decision theory. Minneapolis: University of
Minnesota Press.
Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science,
20, 1–6.
Sackett, D., Straus, S.E., Richardson, W.S., Rosenberg, W., & Haynes, R.B. (2000). Evidence-
based medicine: How to practice and teach EBM, (2nd ed.). London: Wolfe Publishing.
Schaffner, K. (1986). Exemplar reasoning about biological models and diseases: A relation
between the philosophy of science and the philosophy of medicine. The Journal of
Medicine and Philosophy, 11, 63–80.
Simon, H. (1982). Models of bounded rationality. Cambridge, MA: MIT Press.
Soule, E. (2000). Assessing the precautionary principle. Public Affairs Quarterly, 14, 309–329.
The One-Minute HIV Test. (2003). [On-line]. Available: http://www.1-minute-aids-test.com/
Tierney, L., McPhee, S., & Papadakis, M. (2002). Current medical diagnosis and treatment.
New York: McGraw-Hill.
United Nations. (1992). Agenda 21: The UN program of action from Rio. New York: United
Nations.
Vanderzwaag, D. (1999). The precautionary principle in environmental law and policy: Elusive
rhetoric and first embraces. Journal of Environmental Law and Practice, 8, 355–385.
