
Chapter 27

Facts, Values, and Objectivity


Heather Douglas

Although concern over values in social science spans a century, no serious commentator
has argued that values have no relevance for social science. Even Max Weber, the figure
most associated with the ideal of value-neutrality for social science, is quite clear that
social science cannot proceed without values. However, how values do and should play a role in social science has been a central issue. Questions at the forefront of
discussion include: When are values legitimate in social science? When are they
necessary? When are they a threat to objectivity? How should objectivity as an ideal for
science be understood? And does social science face greater problems concerning values
and objectivity than natural science?

In this essay, I will first review the key positions on values in social science from the
twentieth century. With this background in place, it will be clearer both how to parse the
various roles for values in social science and what these roles mean for the objectivity of
social science. Using recent work, I will map the terrain of values in social science and
then turn to a discussion of objectivity in light of this terrain. Objectivity turns out to be a
rather complex concept, with multiple facets which can bolster our confidence in social
science work. Perhaps most intriguing, I will discuss arguments that the objectivity of
science itself is underwritten by the social. Rather than suggesting that the social
undermines the possibilities for objectivity, as was often presumed by commentators from
the first part of the twentieth century, such an understanding places the social at the center
of scientific objectivity.

Values and Social Science: A Look Back

The central questions on values in social science were addressed repeatedly over the
course of the twentieth century. Although there is much agreement on how best to
understand values in science, there are also points of disagreement. In particular, there
have been major shifts over whether the social sciences face unique challenges with
respect to values and objectivity. Max Weber, for example, thought that the complexities
that social scientists attempted to study meant that social science had unique difficulties
in achieving the proper stance with respect to values, placing objectivity further from
reach. In contrast, mid-twentieth century philosophers influenced by the unity of science
movement argued that social sciences faced no special challenges, that the natural
sciences had similar kinds of difficulties with respect to values. At the same time, the
mid-century saw the formation of a much more definitive and clear value-free ideal.
Debate over that ideal helped to map the terrain for values in science.

Weber on Values in Science

Max Weber had a more complex view of the relationship between values and science than
is often recognized, as can be seen in his 1904 essay, "Objectivity in Social Science and Social Policy." It was an editorial essay, describing his vision for the scope and nature of the journal Archiv für Sozialwissenschaft und Sozialpolitik, of which he was assuming editorship (along with Werner Sombart and Edgar Jaffé) (Weber, 1949: iv). The essay was
to clear the ground for the kind of forum he thought should exist for the journal and for
social science in general. In this methodological essay of great subtlety, he argues that although there is a clear conceptual distinction between "is" and "ought," and social science is interested in the development of descriptive "is" claims, social science cannot proceed
without value judgments. Despite the dependency of social science on values, he also
argues that social science, as an empirical science, must maintain some boundaries
between science and values, in order to protect the value of empirical science.

Foundational to Weber's understanding of the necessity of values in social science is his awareness that the complexity of human social life is simply too overwhelming and variable for us to ever completely capture all social facts. Because of this, the social
scientist must proceed with some sense of what is significant. What is significant requires
some kind of value judgment, Weber argues. Where we choose to look in gathering data
and what kind of data it seems worthwhile to gather requires values. We must value
something to find it significant enough to measure, to pluck it from the complexity of
human social life, and to see it as a set of phenomena worthy of study. Culture is "a finite segment of the meaningless infinity of the world process, a segment on which human beings confer meaning and significance" (Weber, 1949: 81). Because we must decide what is important enough to study, the "significance of cultural events presupposes a value-orientation towards those events" (p. 76). The conferring of significance on social phenomena begins the process of structuring social science research. "The very recognition of the existence of a scientific problem coincides, personally, with the possession of specifically oriented motives and values" (p. 61).
Social science cannot have clearly defined problems without values to indicate what is
significant. Weber emphasizes this point in refuting economics' claim of being a wholly objective and value-free approach to social science, writing: "There is no absolutely objective scientific analysis of culture ... or social phenomena independent of special and one-sided viewpoints according to which ... they are selected, analyzed, and organized for expository purposes" (p. 72). Social science requires values as presuppositions to direct our attention and structure our basic concepts. Thus, when Weber argues for value-neutral social science, he does not mean social science proceeding without values altogether. Indeed, he eschews any sort of naïve inductive positivism (p. 78).

In addition, social science itself, indeed all science, is dependent upon being valued by
society. Weber suggests that this need not be the case, that we could live in a culture that
did not value the pursuit of empirical truths: "It should be remembered that the belief in the value of scientific truth is the product of certain cultures and is not a product of man's original nature" (p. 110). The very fact that empirical truths, and the social science that produces them, are valued at all is based on a particular cultural value.

Not only are values essential to the conduct of social science, but also social science has
several ways in which it can usefully comment on values, according to Weber. First, it
can tell us whether, given our ends, a particular means would be effective in achieving
those ends (pp. 52-53). Thus, we can use social science to critique the ends based on their
viability as ends. Indeed, the assessment of the means could help us to see how
problematic our ends are, by unveiling other possible consequences of the means which
we might use to achieve those ends (pp. 52-53). Although this is a crucial service social
science can offer social policymaking, Weber thought social science cannot ultimately
answer the question of whether the attainment of a desired end would cost too much
in terms of the predictable loss of other values (pp. 52-53). How to make such
tradeoffs in practice is beyond the bounds of social science, even as the discovery of the
necessity of such tradeoffs lies well within its purview.

Second, social science can help to clarify the values we do have by "making explicit and developing in a logically consistent manner the ideas which actually do or which can underlie the concrete end" (p. 53). Social science, in clarifying what we believe our ends
to be, can help provide a rational understanding of those ends (p. 54). In addition, social science can judge the internal consistency of one's ideals and ends (p. 54). By providing a more precise description of values, and clarification on the historical development of our values, social science can make our values open to clearer analysis and understanding, even if it cannot determine those values. In addition to providing helpful assistance to society, such clarity about the nature of values ultimately aids the objectivity of the social scientist, helping to prevent confusion between the scientist's values and the way reality is.

Thus, according to Weber, there are multiple ways in which values are crucial to social
science, and social science can be useful for understanding values. Yet he was at pains to
make clear that he thought there were important boundaries between values and science
as well. Maintaining these conceptual boundaries was central to scientific objectivity.

Given all these interrelationships between science and values, in what does scientific objectivity consist according to Weber? Weber rejects some easy answers: "An attitude of moral indifference has no connection with scientific objectivity" (p. 60). Social scientists are not to pretend to have no moral response to the phenomena they study. Weber does not believe in the cold-hearted scientist. Nor is a middle path among moral extremes to be taken as objective: "To mediate between antagonistic points of view ... has nothing whatsoever to do with scientific objectivity. Scientifically the middle
course is not truer even by a hair's breadth than the most extreme party ideals of the right or left" (p. 57). Finding the moderate position within one's culture may have political, but not scientific, advantages. Thus, moral indifference and artificial neutrality are not laudable goals for the social scientist.

More central to Weber's understanding of objectivity is his warning against the social
scientist blurring their moral responses with their empirical work. Weber calls on social
scientists to be clear about the values and ideals that structure their conceptual approach:
it should be constantly made clear to the reader (and above all to one's self!)
exactly at which point the scientific investigator becomes silent and the evaluating
and acting person begins to speak. In other words, it should be made explicit just
where the arguments are addressed to the analytical understanding and where to
the sentiments.
(Weber, 1949: 60)
Weber stresses that "is" claims should still be kept distinct from "ought" claims in the
conduct and discussion of empirical social science, even if one understands that values
structure the starting points and the ideas with which one begins any social scientific
investigation.

Weber admonishes that such clarity should also be maintained in the use of ideal types
that structure our understanding of complex social phenomena. Weber argues that these
ideal types (e.g. the ideal type of medieval Christianity) are essential to gaining any
traction with the concrete reality of social life, but we are never to confuse the ideal type
with the complex reality from which the ideal type is abstracted. Further, we should not
conflate the use of an ideal type in social science with an endorsement of it as an
appropriate ideal. Doing social science properly requires "a sharp, precise distinction between the logically comparative analysis of reality by ideal-types in the logical sense and the value-judgment of reality on the basis of ideals" (Weber, 1949: 98). The social
scientist was not to decide which ideals were the correct ideals nor whether a society
should meet certain ideals, but rather the social scientist should focus on what ideals help us best understand a culture and how descriptively close that society is to an ideal.
Weber makes clear that he thinks science can confer no legitimacy upon the ultimate ends
or core values one holds. "As to whether the person expressing these value-judgments should adhere to these ultimate standards is his personal affair; it involves will and conscience, not empirical knowledge" (p. 54). Thus, social science cannot and should not
tell us what our ideals should be. It can only inform our understanding of our ideals in the
ways sketched above.

With these separations of science and value, Weber can understand science as producing universal truths, albeit in a constrained sense. A social science journal, to the extent that it is scientific, should be a place where those truths are sought which "can claim, even for a Chinese, the validity appropriate to an analysis of empirical reality" (Weber, 1949: 59). While noting that variations in what is significant will occur among both
cultures and individuals within cultures, he explicitly rejects a science subjective to the individual: "It obviously does not follow from this that research in the cultural sciences can only have results which are subjective in the sense that they are valid for one person and not for others. ... For scientific truth is precisely what is valid for all who seek the truth" (p. 84). Presumably, even if one disagreed with the starting choices of significance
or basic ideas, one could see the difference between a properly and an improperly done
analysis following from those starting ideas. For Weber, scientific objectivity is to be
found in analysis that all who examined it would find acceptable. He does not claim that
the social sciences could discover eternal truths about human social life; in fact, he is explicitly skeptical about this possibility (pp. 104-105). But within a given framework,
there should be broad agreement about the results of science. Once one is working with a
set of ideals and categories (shaped by what one thinks is significant), objective,
empirical truths are available:

The objective validity of all empirical knowledge rests exclusively upon the
ordering of the given reality according to categories which are subjective in a
specific sense, namely, in that they present the presuppositions of our knowledge
and are based on the presupposition of the value of those truths which empirical
knowledge alone is able to give us. (Weber, 1949: 110)
Given the starting points of valuing empirical science and a particular set of ideas about
what is significant in social life, objective truths are possible as long as one does not
confuse the normative and the descriptive as one's scientific work goes forward.

Values and Social Science in the Mid-Twentieth Century

Two different lines of inquiry into the relationship between social science and values
sprouted from Webers work. One, which we shall not pursue here, centered on the need
for a science of ethics. Critics of Weber, such as Leo Strauss, argued that Weber's views
necessarily lead to nihilism, as each person is forced back to their own subjective value
choices, with no scientific or universal framework for comparing their morals (Strauss,
1953). In order to avoid such nihilism, a science of ethics was needed. This science
would presumably demonstrate the universal and scientific validity of the correct
framework of ethics. Discussion of a social science of ethics, which could determine
universally valid ends, was a serious endeavor in the 1950s. Authors such as Strauss,
Gottshalk (1952), and Hartmann (1950) pursued such a science with vigor, but their
efforts were relegated to the academic backwaters by 1960. The problem of drawing a
normative justification from a descriptive account remained unsolved.

A second line of inquiry was to have greater traction. This line concerned itself with the
question of how different social science really was from natural science, and whether
there were qualitative differences in how important values were for the two types of
science and for the objectivity of each. For Weber, the complexity of social phenomena and the need for some sense of significance marked social science as having distinctive
methodological issues when it came to values and objectivity. Other thinkers disputed
these conclusions, most notably George Lundberg and Ernest Nagel.

That social science was qualitatively distinct from natural science, facing unique
methodological problems, seemed to explain the development of social science for some
observers. As the social sciences developed in the twentieth century and carved out a
place for themselves in the academy, the question arose of why social science did not
seem to be producing results as quickly or effectively as the natural sciences. In many of
these discussions, it was argued that social science was more inextricably tied with values
than natural science, and thus was unable to produce the same kind of reliable objective
results. For example, Julian Huxley argued in 1940 that while values are deliberately
excluded from the purview of natural science, social science could not avoid serious and
problematic entanglements with values. He believed that "to understand and describe a system involving values [as any human social system does] is impossible without some judgment of values" (quoted in Lundberg, 1941: 350).

In response to this line of thought, others defended social science as not being
importantly distinct from natural science with respect to the role of values in science.
Arguments that there were no sharp differences between the natural and social sciences
were part of the logical empiricist effort to unify the sciences, an effort originally
motivated by the need to roll back the fascist impulses separating the natural and social
sciences (Uebel, 2007: 254). In the postwar context, both George Lundberg and Ernest
Nagel argued that social science had no special problems in relation to values and that
whatever challenges existed were present for natural as well as social science. (Rudner
(1966) argues for a similar position concerning objectivity in social science.
Unfortunately he seriously misconstrues Weber in that discussion.) Unpacking these
arguments allows for insight into their views on the role of values in social science. For
the sake of brevity, I will focus on Nagels arguments here, as they provide one of the
clearest and most thorough expositions on this point.

In the final three chapters of The Structure of Science (1961), Nagel undertakes a careful
examination of social science. The chapter of most relevance for our purposes,
"Methodological problems of the social sciences," focuses on the question of whether
there are distinctive and insuperable challenges faced by the social sciences, challenges which might prevent them from ever achieving the kind of powerful and overarching explanatory laws Nagel thought characterized the physical sciences (Nagel, 1961: 450). He argues that there are no such special challenges: although social science might not yet have achieved such laws, there is nothing inherent in its subject matter or methodologies that would keep it from such achievement. In addition to addressing such potential methodological pitfalls as limited opportunity for controlled experimentation (pp. 450-459), the historical and cultural contingency of social phenomena (pp. 459-466), and the potential for a study to influence the particular behavior under study (pp. 466-473), Nagel tackles the subjective nature of social
science's subject matter (pp. 473-485), and the role of values in social science (pp. 485-502). For each of these, Nagel argues that the problem can be adequately addressed or
that the natural sciences face a similar problem (or both). Thus, Nagel systematically
undermines the idea that there is something distinctive about the social sciences that
creates difficulties for their objectivity.

For example, a key objection to the potential objectivity of social science had been the
fact that the subject matter of the social sciences often centers on subjective human states,
such as internal emotions. Although social science inquiry often concerns the internal
states of human actors, Nagel sees this as no insurmountable obstacle to objective
evidence in social science. Objective observational evidence is needed, according to
Nagel, to check our imputation of internal states to human actors, but we can gather such
evidence by examining behavior, and it is no more mysterious than imputing internal
states to matter that we cannot see (such as electrical current), yet still have evidence for
(p. 484). Empathy with other humans can assist the social scientist in developing hypotheses about human behavior, but evidence concerning observable behaviors serves to test those hypotheses (pp. 484-485).

More central to our concerns here, Nagel's examination of values in social science
continues his theme of similarity between natural and social science. He discusses four
different locations where values may have a role in social science (legitimately or not)
and finds little difference between social and natural sciences. Examining these roles will
show both where Nagel thought values posed a methodological problem for science and
where values played acceptable roles.

First, Nagel notes the importance of values in selecting what one will study, a similar role
to what Weber called the value-laden choice of what seems significant. While Weber
saw this as a particular problem for social science, for any delineation of culture
automatically contained value judgments on what is significant, Nagel sees parallels with
the selection of research problems in natural science: "There is no difference between any of the sciences with respect to the fact that the interests of the scientist determine what he selects for investigation" (p. 486). That such a selection necessarily occurs in social (or
natural) science is no obstacle to scientific objectivity. Values can (and should!) direct a
scientists attention without threatening scientific objectivity.

Nagel next addresses the problem of one's values influencing the conclusions one draws,
purportedly a particularly acute problem for social science because so many social
scientists are hoping to reform society in view of some ideal they hold, an ideal that
reflects their own values. Nagel argues that solving this problem depends on the
conceptual distinction between facts and values, and that if this distinction holds (and he
defends it in the following section), then it should be possible to work on keeping values
from unduly influencing social science results (p. 489). He notes that one way to
counteract the potential undue influence of values is to have social scientists "abandon the pretense that they are free from all bias" and instead state their value assumptions as explicitly and fully as they can (p. 489). This is not to be done with an
eye towards gaining agreement among all social scientists on the correct values, nor so
that social science can settle the question of the correct values. Rather it is so that
questions of fact and value can be more readily disentangled. Here we see echoes of
Weber's approach to the problem, calling on social scientists to be explicit in their values
and to work to keep the normative and the descriptive conceptually distinct.

Nagel, however, cautions against expecting too much from value explicitness:
Although the recommendation that social scientists make fully explicit their value
commitments is undoubtedly salutary, and can produce excellent fruit, it verges on
being a counsel of perfection. For the most part we are unaware of many
assumptions that enter into our analyses and actions, so that despite resolute
efforts to make our preconceptions explicit some decisive ones may not even
occur to us. But in any event, the difficulties generated for scientific inquiry by
unconscious bias and tacit value orientations are rarely overcome by devout
resolutions to eliminate bias. They are overcome, often only gradually, through
the self-corrective mechanisms of science as a social enterprise.
(Nagel, 1961: 489)

It is this suggestion, that the social aspects of scientific practice are the appropriate remedy for bias and thus a key location for the objectivity of science, that will be taken up by feminist critics of science, as we will see below. At any rate, the problem of
potentially pernicious bias is a problem that runs throughout science, and so it is no
special problem for social science.

Nagel then tackles the issue of whether values are actually distinguishable from facts (p.
490). He argues that a key distinction in the nature of value judgments is needed to clarify
the matter. There are value judgments that express approval or disapproval, that is, "appraising" value judgments, and there are value judgments that assess whether some entity has a property and how much of the property it has, that is, "characterizing" value judgments (pp. 490-491). The former express the ought-claims that must be kept
conceptually distinct from factual claims, whereas the latter are part of making factual
claims. Nagel notes that sometimes our language can embed the appraisal in the
characterization; it is hard not to hear disapproval in terms like "deceitful" or "mercenary"
(p. 494). (More recent commentators have noted these difficulties, as I will discuss
below.) Yet Nagel thought it possible to distinguish the disapproval from the description.
Nagel argues that there are "no good reasons for thinking that it is inherently impossible to distinguish between the characterizing and appraising judgments implicit in many statements, whether the statements are asserted by students of human affairs or natural scientists" (p. 494). The distinction between normative and descriptive statements, while
potentially tricky in practice, is not conceptually undermined. In addition, this is not a
problem unique to social science. This problem also occurs in natural science, with terms
like anemic (when applied to organisms) or inefficient (when applied to a pumping
system). Thus, there is again no special problem for social science.

Finally, Nagel addresses the concern over the use of values in assessing available
evidence (a variant on the arguments of Rudner (1953), discussed in more detail below).
He notes that in deciding whether statistical evidence is sufficient for supporting or
rejecting a hypothesis, one must make a choice between risking two types of error:
rejecting a true hypothesis and accepting a false one (Nagel, 1961: 496). As no inference
rule can minimize both errors simultaneously, the scientist makes the choice based on
which errors are to be more assiduously avoided, and that choice can be based on the
scientist's valuation of the consequences that may follow from the two kinds of error (p.
497). Nagel does not think that the specific consequences of error, and thus valuations of
those specific consequences, are always involved in this choice. Instead, the choice may
be guided solely by more general commitments to the development of scientific
knowledge, such as "to conduct his inquiries with probity and responsibility" (p. 498). As we will see below, Nagel is mirroring a key aspect of the value-free ideal which was
developing at this time. Regardless of how it is handled, the need for such a choice is
found in both natural and social science, so again no special problem of values in social
science presents itself.


Thus, for each of the four challenges values pose for social science, none are
insurmountable (indeed the first is no problem but an acceptable aspect of the scientific
enterprise), and none are distinctive to the social sciences. For Nagel, there is no special
problem for social science concerning values. The problem of values in science was a
general one, resolved by most philosophers of science with the value-free ideal.

The Value-Free Ideal for Science

While the preponderance of argument was shifting away from considering social science
as distinct from natural science in methodological challenges concerning values,
philosophers of science were settling on a new ideal for values in science generally.
Discussions over values in science gained greater precision, and forced a reconsideration
of legitimate and illegitimate roles for values in science.

The new ideal emerged from debates begun over arguments made by C. West Churchman
and Richard Rudner about the indispensability of value judgments in scientific practice (Churchman, 1948; Rudner, 1953). Rudner's 1953 essay, "The scientist qua scientist makes value judgments," most clearly laid out the argument. In that essay, Rudner
acknowledges the importance of values for both deciding to do science and for the
selection of projects one is to pursue (Rudner, 1953: 1). This role for values, however, he
calls "extra-scientific" or prescientific, and thus it has not been shown to be any part of "the procedures of science" (pp. 1-2). In order to show that values are an essential aspect of doing science properly, Rudner focuses on the need for scientists to accept or reject hypotheses. Because no hypothesis is ever completely proven by inductive evidence, in accepting a hypothesis the scientist "must make the decision that the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis" (p. 2). In making this assessment, Rudner argues that it is "the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis" that must be considered by the scientist (p. 2). Because values are needed
to decide whether the available evidence is sufficient to warrant accepting or rejecting a
hypothesis, values are essential to all scientific reasoning.

Rudners argument was well publicized at the time (he gave a talk based on the paper at
the joint Philosophy of Science Association/American Association for the Advancement
of Science meeting in December 1953, and a shortened version of the paper was
published in Scientific Monthly in 1954). It drew several critical responses. For example,
Richard Jeffrey critiqued one of Rudners key presumptions, arguing that scientists
properly speaking never accept or reject hypotheses (Jeffrey, 1956). Instead, Jeffrey
suggested that they merely assign probabilities to them. However, Rudner had already
considered this line of argument, and countered it by noting that one still had to accept or
reject the probability, which required the same kind of value judgment (Rudner, 1953: 4).

A response that gained more traction among philosophers of science was put forth by
Isaac Levi (Levi, 1960, 1962). Levi argued that Rudner was correct in noting the need for
a value judgment, but that scientists should be constrained in the values used to make
such a judgment, constrained to consider only the canons of scientific inference (Levi, 1960: 355). As philosophers of science developed this idea, the canons became the set of epistemic or cognitive values, such as simplicity, scope, explanatory power, consistency, and predictive accuracy (Kuhn, 1977). Thus the value-free ideal for science became codified as the "cognitive values only" rule when assessing the strength of evidence for a
hypothesis. Cognitive values were to fill the gap between the set of available evidence
and the hypothesis under scrutiny, not social or ethical values. Philosophers of science
from Ernan McMullin (1983) to Hugh Lacey (1999) and Sandra Mitchell (2004) have
defended this ideal.

Although disputed by Leach (1968) and Scriven (1974), this view remained the standard
ideal for science through the rest of the twentieth century. What philosophers of science
usually mean when they say science should be value-free is that when assessing the
sufficiency of a body of evidence with respect to a theory, only cognitive values should
be used. Thus, the value-free ideal should not be taken literally to mean no values in science; instead it should be taken to mean "cognitive values only" in science, once a research project is underway.

It must be noted, however, that most cognitive values have little direct epistemic import.
Whether a theory is simple, has broad scope, or has explanatory power provides no clear
indication that it is true (Laudan, 2004; Wylie and Nelson, 2007). The history of science is littered with explanatory theories (caloric, ether), broadly scoped theories (mechanical theories of matter), simple theories (Newton's spacetime), and even precise theories (Kepler's spacing of the planets with Platonic solids) that have gone by the wayside. If
cognitive values do not indicate that a theory is more likely to be true, why are they held
up as canonical in science? The justification has usually been a descriptive one: scientists just do value theories of scope, simplicity, explanatory power, and precision (Kuhn, 1977). These values are internal to science and part of its historical functioning. If
one examines the scientific community as a closed community driven by its own internal
dynamics, then cognitive values are indeed the only justifiable values to be utilized.

Upon reflection, such a view of the functioning of science, natural or social, should be
suspect. As many have noted, particularly for the social sciences, the social relevance of
the work cannot be ignored. Social science does not function as a closed community
divorced from the society that it studies, nor should we want the complete separation that the value-free ideal seems to demand. As this author (Douglas, 2003a, 2009: Chapter
4) and others (Forge, 2008) have argued, scientists have unavoidable moral
responsibilities to consider the consequences of their work, particularly the consequences
of error. If so, the value-free ideal must be rejected. A new map of values in science is
needed.

Mapping the Terrain of Values in Social Science

The entanglement between science and values remains as complex as, or more complex than, in Weber's account. In agreement with Weber, it is widely acknowledged that science can
assist our investigation of human values both through descriptive accounts of values and
by clarifying the means required to meet various ends we might hold. As Hempel noted in
the mid-1960s, science can tell us little about what our categorical values should be, but
can greatly inform our instrumental values (Hempel, 1965). If we have certain goals (e.g.
smaller recidivism rates for released prisoners), social science can help inform how to
reach those goals (e.g. which kinds of social interventions reduce recidivism rates). The
science informs our instrumental values, and may even assist us in assessing our goals, if
the means of achieving them appear too costly. But a direct assessment of the worthiness
of our goals is outside the bounds of social science.

More central for the philosophy of social science are the ways that values play important
and legitimate roles in the scientific process. Key locations for the influence of values
include: (1) the value placed on science by society, exemplified by the social support
given to science; (2) the decision to pursue a particular research project because of the
value placed on the knowledge likely to be produced, whether it is a value to society as a whole, a value to the particular field's internal questions, or the quirky interest of the
investigator; and (3) the ethical restrictions on methodological means to pursue a research
project, particularly when working with human subjects. The importance of values for
these decisions is not disputed, nor is the need for values at these locations seen as a
threat to the legitimacy or objectivity of science produced, despite the strong and direct
role values must play in shaping the decisions.

Despite the acceptability of values in selecting research projects and restricting
methodological options, values must not direct the scientist to construct a methodology
that will most certainly confirm their favorite theory. As feminist critics of science have
noted, failure to gather and examine the full range of possible evidence can assure that
the results of a study conform to researcher (and societal) expectations (e.g. Wylie, 2002:
186). This is a pernicious and problematic role for values, as it undermines our
confidence in the empirical reliability of the results. If evidence that could contravene a
theory is systematically left ungathered or ignored, because it presents an undesirable
challenge to the theory, then values are playing a determinative role in shaping the
outcome of the results. This gets to the heart of concerns over values in science, that we
could confuse our desire that the world be a particular way with evidence that it actually
is. If values blind us to unpleasant evidence, the value of the empirical enterprise is
undermined.

This same concern arises in the use of values to assess the sufficiency of evidence to
support a theory. The value-free ideal holds that in the interpretation of results, only
cognitive values should have any influence. It was hoped that this would maintain the
needed objectivity in theory assessment. However, as scientists have taken on a more
prominent public role as authorities whose guidance for policy issues is sought, the
"cognitive values only" ideal has looked increasingly suspect. First, in the weighing of the importance of uncertainty (the key role for values in science by Rudner's argument), it is doubtful that cognitive values will provide an appropriate weighing. As noted above,
cognitive values are poor indicators of epistemic reliability. The presence of a cognitive
value like scope or simplicity may give one hope that uncertainty can be diminished as
research moves forward, as theories exemplifying cognitive values are easier to work
with, easier to draw additional testable implications, and thus more fruitful in general.
With the ease of further testing, discovery of error and epistemic refinement becomes
more likely. But policymakers and the general public need to use the research now.
Policymaking does not have the luxury of the long view. Thus, some appropriate
weighing of uncertainty with respect to current use is needed.

Jeffrey's proposal that scientists only report the probabilities they attach to hypotheses
might seem an attractive option. And many scientists do report their work with such
probabilities. Nevertheless, they still need to bridge an inductive gap, assessing whether
the reported probabilities are the correct ones to use. In addition, there are decisions of
interpretation prior to the final assessment of the strength of the initial hypothesis in the
face of the new evidence that will remain hidden. The scientists have to decide whether a
characterization of data is sufficiently reliable before they can decide whether that data sufficiently supports the hypothesis. Thus, the uncertainties to be assessed in a piece of scientific research can run deep. It is the significance of those uncertainties, particularly with respect to the consequences if the science proves inaccurate, that is of heightened
concern to policymakers and the public.

It is for these reasons that some (including myself) have argued that, given the
importance of scientists as public experts, social and ethical values are essential in the
internal reasoning of science. Social scientists need the values to weigh the significance
of error in their work, evaluating the probable consequences if their claims prove
incorrect (see Douglas, 2003b for an example). However, in using values to assess the
sufficiency of evidence, one must be careful not to confuse the values with the evidence.
In order to do this, it is crucial that the values assess the importance of uncertainty only.
Thus, the less uncertainty is present, the less the values will shape the choice of theory. In
this way, both cognitive and social values play the same role: as hedges against the uncertainty and the consequences of error (Douglas, 2009: Chapter 5). If values are used
to direct choices in a stronger way, if values serve as the reason in themselves for a theory
choice, we have confused the normative and the descriptive in precisely the ways that
Weber and Nagel warned us against. Our values are not a good indication, in themselves,
of the way the world is.

Language and Values in Social Science

So far I have discussed the role of values in explicit social science reasoning. However,
as both Weber and Nagel noted, our language can make a clear distinction between the
normative and descriptive difficult. Values can become embedded in our ideas and ideal
types, and carry with them connotations of approval and disapproval. They can also
become embedded in the ways that we define key concepts. Thus, we need to consider
how the language we use to describe social phenomena and the ways we define social concepts encode values into our accounts, for better or for worse.

Some terms used by social scientists unnecessarily carry with them evaluative
connotations, and in doing so can obscure the nature of social phenomena.

The danger of this is seen in John Dupré's discussion of scientific explanations of rape (Dupré, 2007: 32-35). Some sociobiologists draw parallels between the human phenomenon of rape and what they see as similar behavior in animals, which they also call "rape." In utilizing the term "rape" for both animal and human forced copulation, they seek to explain rape in terms of a reproductive strategy employed by those who cannot otherwise attract mates. While such naturalized explanations of rape are not meant to
justify rape, they have problematic effects on our normative take on rape, altering our
understanding of the crime, making it appear natural even if still morally wrong. More
problematically, the explanations ignore crucial evidence about rape in humans. Much
human rape is reproductively futile, targeting women who are not fertile. In addition, it is
phenomenologically far more about violence and control than reproduction. Thus, to seek
an explanation for the behavior along naturalistic lines, and to include animal behaviors
in the construct of "rape" obscures key evidence about the nature of rape among humans.
Here concerns over the role of values in overly shaping both hypotheses and the
methodological approach utilized come to the fore. In attempting to naturalize rape, the
scientists ignore the evidence that goes against their account, and in doing so, devalue the
experience of the victims of the crime. It would be both normatively preferable and
descriptively more accurate to use a less fraught term for the animal behaviors,
dismantling the explanatory parallels.

In other cases, the normative connotations embedded in language choice and conceptual
construction are helpful in revealing phenomena. Social scientists construct categories,
such as "spousal abuse" or "alcoholism," and in doing so they are seeing certain
behaviors as having similar enough characteristics to be able to be grouped together
(Root, 2007). Unlike in the rape case discussed above, the parallels among phenomena
drawn here reveal further aspects of the phenomena, rather than obscuring key evidence
about it. Placing disparate events under the same label and then studying them
collectively can then generate awareness about a new social problem and potential
remedies. The act of categorization allows the problem, such as spousal abuse, to be
seen, even if social actors did not see the behaviors as problematic or even similar prior
to the work of the social scientist. Here values directing the attention of the social
scientist produce new insights, just as the use of the value-laden term "abuse" helps to
properly reveal the nature of the phenomena. The social scientist sees and cares about a
particular potential problem, generates the study that codifies the problem, gathers
important evidence about it, and places it before the public's eye, ultimately shifting
normative judgments about that problem.

Thus, even when our language is fraught with normative and descriptive elements, Weber's admonishment to work for clarity on one's normative commitments stands. The extent to which a new category helps us to see connections among phenomena, or instead obscures key evidence about them, provides one basis for that assessment.

Yet many cases of normatively laden language in social science are more difficult to assess.
Consider an example drawn from economics, a field riddled with value judgments at its
foundations, recounted in Hausman and McPherson (2006), concerning "involuntary unemployment." This term is supposed to capture the situation where one is out of work
and cannot find employment. But economists debate whether this situation ever actually
exists; that is, whether or not the failure to find a job is simply a reflection of one's
unwillingness to take a lower-paying or more menial job, and thus the unemployment is
in some sense voluntary. McPherson and Hausman note that economists in this debate
draw upon two different senses of "involuntary": whether one has choices for employment but all the jobs are so poor the situation feels coercive, versus whether one has literally no choices (which very rarely, if ever, occurs). If one defines "involuntary" in
the second sense, almost no unemployment is involuntary. If one defines it in the first
sense, it can occur at substantial rates. McPherson and Hausman point out that ethicists
usually do not consider choices under duress voluntary. Thus, they argue that because
the term "voluntariness" is at root a moral notion, economists should use the term "voluntary" in a way informed by ethical theories of freedom and choice (pp. 37-38). But the problem runs deeper than being clear about what "voluntary" should mean. In the
case of unemployment, whether or not the person facing choices about jobs is in a
coercive situation will be a matter of dispute. Does the person have reasonable
alternatives or not? Answering this question will depend on what one considers to be
reasonable expectations, another morally fraught decision (pp. 275-277).

This example can be used to illustrate two points. First, there is a deep potential
inconsistency in the skepticism about involuntary unemployment. Economists who doubt
the existence of involuntary unemployment but consider the marketplace the best
protection for individual autonomy are imposing their view of what should count as
reasonable alternatives on the workforce. If autonomy is a value to be defended, we
should each be able to define for ourselves what would be a reasonable employment
option, and thus involuntary unemployment is a robust phenomenon. Second, the
example shows how deep and complex the normative judgments embedded in the
language of social science can be. In this case, the normative judgments are rather
inaccessible, and they are at the heart of the definition of the concept. As Nagel noted, it
can be difficult to fully explicate the value commitments embedded in social science
methodologies. And this might mean social science does have a special problem with
respect to values: not a difference in kind from natural science, but a sufficient difference in degree that social scientists need to pay special attention, as Weber suggested.

With the potential for normative commitments so deeply embedded that they are difficult to explicate, social scientists need to be especially careful to examine and clarify the normative commitments driving their work and to make sure such commitments are not causing them to ignore evidence. Even with attention to these methodological concerns, it seems that there is no place in scientific practice where values do not have some role to play.
Values shape the projects to be pursued, the methodologies employed, the
characterization and interpretation of evidence, and the assessment of hypotheses. Given
this, what are we to make of objectivity?

Objectivity in Social Science

With the entanglements between science and values discussed above, it seems we may
have breached the normative/descriptive divide, and opened science to rampant
relativism. Is there anything left of objectivity? If we construe objectivity as a basis for
the trust and endorsement of a result, various aspects of objectivity can bolster our
confidence in social science.

First, as should be clear from all of the above discussion, there is always a need for some
restraint on values in science. In this restraint we find a key aspect of objectivity. Social
science must be protected from the conflation between a desire that the world be a certain
way and evidence that it is so. This can be accomplished by distinguishing between two
different roles for values in science, a direct role and an indirect role. A direct role occurs
when values are a reason for a particular choice, when the value directs that choice on its
own. This is a legitimate role for values in social science when one is selecting a
particular project: the scientist chooses the project because of the value s/he places in
that project. It is also a legitimate role when values dictate that a methodological
approach is ethically unacceptable, such as the use of human subjects without their
consent. However, it is an illegitimate role when values dictate a particular result, one the
scientist wants. This can happen through a pernicious shaping of methodology (so that
only desired evidence is produced) or through a problematic assessment of evidence (so
that only desired evidence is noticed). Values should not direct the results of research in
this strong way. To do so undermines the very value we place in science, that it can
capture aspects of the world that may be surprising (even if unwelcome) to us.

The importance of values in the assessment of evidence and hypotheses is better captured
by the indirect role for values. In this role, the values are not assessing the desirability of
the hypothesis or theory per se, but rather the uncertainty around the hypothesis. As
noted above, cognitive, ethical, and social values all have a role to play here. Keeping
values to their proper roles requires some detachment, or disinterestedness, a traditional
aspect of objectivity (Douglas, 2004; Merton, 1973).

There is more to objectivity, though, than a stance taken with respect to values in
scientific reasoning about evidence. As much recent work on objectivity has noted
(Daston and Galison, 1992; Lloyd, 1995), objectivity is a complex concept carrying with
it multiple meanings. Following Fine (1998), one can characterize these meanings as all
providing a key epistemic function, of indicating bases for trustworthiness. Indeed, it is a
particularly strong kind of trustworthiness of interest to us, that of "I trust this and you should too." With a pluralistic understanding of objectivity, we can have multiple bases
for evaluating the trustworthiness of social science claims. There are several kinds of
things we can look at when deciding whether a claim should be considered objective. We
can examine the evidential basis; we can examine the thought processes of the scientists
(the role of values in the reasoning, discussed above); we can examine the social
processes scientists utilized to produce the claim (Douglas, 2004).

One key basis for assessments of objectivity rests on multiple lines of independent
evidence. As Alison Wylie describes, when different evidential sources, none of which is
causally dependent on the others, point towards the same hypothesis, our confidence in
the objectivity of that hypothesis is substantially increased (Wylie, 2002: 191198). For
example, Wylie describes how isotopic analysis of skeletal remains, paleobotanical
analysis of the remains of plants found in households, and other lines of evidence point
towards an account of how the development of the Inka state changed the lives of people
in the Andes, particularly in a gender-differential way (p. 193). Without the independence
of these lines of evidence, we would have substantially less confidence in the theory.
Utilizing different lines of evidence to date historical artifacts (chemical composition,
isotopic analysis, stylistic considerations, provenance discussions) also exemplifies this
aspect of objectivity. When the evidence from independent lines of inquiry converges,
the claim supported by the evidence appears substantially more objective, although
assessments of independence must be made carefully (pp. 206-209).
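
A minimal probabilistic sketch (an illustration, not Wylie's own formalism) shows why this causal independence matters. If two lines of evidence $e_1$ and $e_2$ are conditionally independent given the hypothesis $H$ and given its negation, the posterior odds on $H$ factor as

$$\frac{P(H \mid e_1, e_2)}{P(\neg H \mid e_1, e_2)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(e_1 \mid H)}{P(e_1 \mid \neg H)} \cdot \frac{P(e_2 \mid H)}{P(e_2 \mid \neg H)},$$

so two individually modest lines of support (say, a likelihood ratio of 3 from each) jointly raise the odds by a factor of 9. A line of evidence that is causally dependent on another contributes no separate factor, which is why the assessments of independence must be made carefully.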

Another aspect of objectivity in social science is the ability to utilize the claim to
consistently and reliably perform tasks in the world. The scientific claim becomes a tool,
ready to intervene in processes in a regular and predictable way (Hacking, 1983). An example of this sort of manipulable objectivity might be found in behavioral economics: the utilization of "inertia" or a "status quo" bias in structuring programs to increase savings rates (Thaler and Benartzi, 2004). Because we are consistently loath to make changes to the status quo once a pattern is in place, behavioral economists were able to structure a savings program that sets increases in contributions in the future, when employees get raises. Such programs proved wildly popular and greatly increased the savings rates of participants, as intended. Scientists were able to use the status quo effect as a tool to get increased retirement contributions; further uses of this effect would increase the sense of manipulable objectivity.

Manipulable objectivity nevertheless poses a challenge to social science, both in terms of feasibility and desirability. It is not obvious that we should develop this kind of knowledge, even if it does exist. There would be moral concerns about having knowledge of human behavior or cultural development so precise that one could direct the actions of people. And even if such knowledge could be developed at one time, revealing the nature of the technique often undermines its effectiveness among human actors. We are such adaptable creatures that it is difficult, and disturbing, to imagine knowledge that could function in such a tool-like manner, intervening in the world reliably. As with any
tool, there can arise moral concerns over its use, particularly when used to manipulate
human behavior. In the savings case, all participation was fully informed, voluntary, and
the goal was generally paternalistic. Less benign goals could raise more serious worries.
In addition, some tools may lose effectiveness if the humans on whom they are used become
aware of the knowledge underlying the tool, a problem of negative reflexivity. However, some results are likely to be robust even with full awareness of the human actors. Finally, no ability to use a tool to manipulate events guarantees that one has captured one rather than several phenomena (or vice versa) under one's categories. The ready manipulability gives the theory underlying one's interventions its objectivity, but whether one has characterized the phenomena fully and correctly can still be an open question.

Another aspect of objectivity focuses on practices that reduce the need for individual
judgment. It can be useful to have agreed upon processes for working through data such
that the same data set will always produce the same outcome, regardless of the
practitioner. Developing this kind of "mechanical" or "procedural" objectivity was
crucial to the rise of social statistical measures (such as the census or crime rates) in the
nineteenth century (Porter, 1995). Yet we should keep in mind that such procedural
objectivity does not eliminate biasing factors; instead it merely makes sure that the biases
(or values) are encoded into the procedure, thus eliminating the need for individual
judgment among the users of the procedure. The procedure itself becomes the location for
contestation. For example, utilizing market values to assess the worth of aesthetic
experiences may provide a procedurally objective way to assess aesthetic worth, but then
whether the market values can properly capture the aesthetic value becomes the locus of
dispute.

Just as crucial for social science are the aspects of objectivity that arise from group efforts
to vet scientific claims. It can be helpful to distinguish here between simple agreement
among multiple observers (all the observers of a particular event record the same
description, what I have called "concordant objectivity") and agreement that arises from the social discourse of science (what I have called "interactive objectivity") (Douglas, 2004). Both senses of objectivity depend upon some diversity of participants to strengthen the claim to objectivity; that is, the more diverse the observers or discussants
who come to agree, the more confidence we have that the claim is objective. An example
of concordant objectivity can be found in the phenomenon of preference reversals. Both
the social scientists who first theorized the existence of preference reversal behavior and
skeptics who doubted the robustness of the phenomenon found the same behavior (Angner, 2002: 289). If all who look see the same thing, we have some basis for claiming
an objective result.

However, in facing the pervasive challenges of values in social science discussed above,
it is the discursive, interactive aspect that has come to the fore. Indeed, increasing the
diversity among scientists has had a profound and positive influence on social science.
For example, as feminist critics of science have noted, the entry of large numbers of
women into the sciences has helped to reveal problematic blinders and biases. Women have
helped to develop alternative hypotheses, to problematize unquestioned presuppositions,
and to point out where inferences were based on shoddy work. Longino (1990) discusses
how women critiqued the traditional centrality of man-the-hunter stories in theories of
human evolution by introducing an equally plausible alternative story centered on
woman-the-gatherer. The development of this alternative account undermined the
obviousness of the original account, which had been shaped in part by the sexist
presuppositions of male anthropologists, and exposed the presence of those
presuppositions (Longino, 1990: 106-111). Or consider Wylie and Nelson's account of how
an increasing number of women in archaeology altered the kinds of evidence gathered and
the kinds of hypotheses explored, thus greatly enriching accounts developed by
archaeologists (Wylie and Nelson, 2007: 64-70); they suggest the same for increased
diversity of class background (p. 63). Increasing the diversity of participants in
science forces scientists to consider a wider array of plausible theories, to argue for
their claims under greater scrutiny, and thus to work harder to convince their fellow
scientists. In this process of vetting, our confidence in the
claims ultimately produced is bolstered. The social process of science is central to the
objectivity of its results.

Indeed, in Longino's account, science is objective precisely because it is an interactive
social process (Longino, 1990: 76-80; Longino, 2002: 128-135). It must be properly
structured to maximize its objectivity, but it is in its interactive social nature that its
objectivity lies. By 'properly structured', Longino is concerned not just with the
diversity of the participants but also with the nature of the discourse. For example, is
intellectual authority properly distributed, that is, as equally as possible? Are there recognized
avenues for critique and response, and are participants responsive to critique? Are there
shared standards for argument and are these also subject to reflection and critique? It is in
the shared epistemic efforts of a diverse group of people that we can have the most
confidence that problematic bias will be revealed and excised.

Conclusion

With the development of the social at the core of scientific objectivity, thinking about
the objectivity of social science has come full circle. At the beginning of the twentieth
century, Weber argued that social science's focus on the social presented unique problems
for its objectivity. Philosophers of science at the middle of the twentieth century argued
that there were no qualitatively different challenges for social science compared to
natural science with respect to objectivity and values. At the beginning of the twenty-first
century, it is the social that provides a key resource in assessing and achieving objectivity
for all science, social or natural.

We should perhaps not be surprised by this conceptual turn. As noted above, Nagel wrote
in the early 1960s that the social processes of science were essential to grappling with
problematic roles for values in science. And even before Nagel, in 1935 Ludwig Fleck
suggested that the notion of a fact, an objective claim that required no further
contestation, depended on the social aspects of science: that it was first and foremost a
reflection of the group consensus on a topic, even if it shifted its form and content over
time (Fleck, 1979). If the very nature of the fact depends on the social, it should not
surprise us that the social has become a focal point for understanding objectivity in both
natural and social science. Objectivity is to be found in epistemically useful detachment,
disunity, diversity, disagreement, and discourse.

Thus, despite the entanglements between values and social science catalogued above,
there are still plenty of resources with which to assess the objectivity of a scientific claim.
The entanglements between the normative and the descriptive cast doubt on the
possibility of any truly value-free statement of fact, but that need not mean we can have
no objective statements. Central to such objectivity is the maintenance of at least a
conceptual distinction between the descriptive and the normative. Even as values shape
the projects pursued, the acceptability of methodologies, the concepts employed, and the
sufficiency of evidence, we must refrain from conflating values and evidence, or from
allowing values to determine the results of our studies. No particular descriptive claim
can be cut free from the value judgments needed to do science, but we can still maintain
the boundary that keeps the normative from dictating the descriptive, the core concern in
the discussion of science and values.

Acknowledgements

I would like to thank Anna Alexandrova, Ted Richards, and Eric Schliesser for comments
on this essay, and Erik Angner, Paul Roth, and Alison Wylie for help with examples.
Deficiencies remaining are wholly my own.

References
Angner, Erik (2002) 'Levi's account of preference reversals', Economics and Philosophy,
18: 287-302.
Churchman, C. West (1948) 'Statistics, pragmatics, induction', Philosophy of Science, 15:
249-268.
Daston, Lorraine and Peter Galison (1992) 'The image of objectivity', Representations,
40: 81-128.
Douglas, Heather (2003a) 'The moral responsibilities of scientists: Tensions between
autonomy and responsibility', American Philosophical Quarterly, 40(1): 59-68.
Douglas, Heather (2003b) 'Hempelian insights for feminism', in Sharyn Clough (ed.)
Siblings Under the Skin: Feminism, Social Justice, and Analytic Philosophy. Aurora,
Colorado: Davies Publishing. pp. 283-306.
Douglas, Heather (2004) 'The irreducible complexity of objectivity', Synthese, 138(3):
453-473.
Douglas, Heather (2007) 'Rejecting the ideal of value-free science', in Harold Kincaid,
John Dupré, and Alison Wylie (eds) Value-Free Science? Ideals and Illusions. New York:
Oxford University Press. pp. 120-139.
Douglas, Heather (2008) 'The role of values in expert reasoning', Public Affairs
Quarterly, 22(1): 1-18.
Douglas, Heather (2009) Science, Policy, and the Value-Free Ideal. Pittsburgh: University
of Pittsburgh Press.
Dupré, John (2007) 'Fact and value', in Harold Kincaid, John Dupré, and Alison Wylie
(eds) Value-Free Science? Ideals and Illusions. New York: Oxford University Press. pp.
27-41.
Fine, Arthur (1998) 'The viewpoint of no-one in particular', Proceedings and Addresses
of the APA, 72: 9-20.
Fleck, Ludwig (1979) Genesis and Development of a Scientific Fact. Chicago: University
of Chicago Press. (First published in 1935.)
Forge, John (2008) The Responsible Scientist. Pittsburgh: University of Pittsburgh Press.
Gotshalk, D.W. (1952) 'Value science', Philosophy of Science, 19: 183-192.
Hacking, Ian (1983) Representing and Intervening. New York: Cambridge University Press.
Hartman, Robert (1950) 'Is a science of ethics possible', Philosophy of Science, 17:
238-246.
Hausman, Daniel and Michael McPherson (2006) Economic Analysis, Moral Philosophy, and
Public Policy. New York: Cambridge University Press.
Hempel, Carl G. (1965) 'Science and human values', in Aspects of Scientific Explanation.
New York: The Free Press. pp. 81-96.
Jeffrey, Richard (1956) 'Valuation and acceptance of scientific hypotheses', Philosophy
of Science, 23: 237-246.
Kuhn, Thomas (1977) 'Objectivity, value judgment, and theory choice', in The Essential
Tension. Chicago: University of Chicago Press. pp. 320-339.
Lacey, Hugh (1999) Is Science Value-Free? Values and Scientific Understanding. New York:
Routledge.
Laudan, Larry (2004) 'The epistemic, the cognitive, and the social', in Peter Machamer
and Gereon Wolters (eds) Science, Values, and Objectivity. Pittsburgh: University of
Pittsburgh Press. pp. 14-23.
Leach, James (1968) 'Explanation and value neutrality', British Journal for the
Philosophy of Science, 19: 93-108.
Levi, Isaac (1960) 'Must the scientist make value judgments?', Journal of Philosophy,
57: 345-357.
Levi, Isaac (1962) 'On the seriousness of mistakes', Philosophy of Science, 29: 47-65.
Lloyd, Elizabeth (1995) 'Objectivity and the double standard for feminist
epistemologies', Synthese, 104: 351-381.
Longino, Helen (1990) Science as Social Knowledge. Princeton, NJ: Princeton University
Press.
Longino, Helen (2002) The Fate of Knowledge. Princeton, NJ: Princeton University Press.
Lundberg, George (1941) 'The future of the social sciences', The Scientific Monthly, 53:
346-359.
Martin, Michael and Lee C. McIntyre (eds) (1994) Readings in the Philosophy of Social
Science. Cambridge, MA: MIT Press.
McMullin, Ernan (1983) 'Values in science', in Peter D. Asquith and Thomas Nickles
(eds) Proceedings of the 1982 Biennial Meeting of the Philosophy of Science Association,
Volume 2. East Lansing: Philosophy of Science Association. pp. 3-28.
Merton, Robert (1973) 'The normative structure of science', in Norman W. Storer (ed.)
The Sociology of Science. Chicago: University of Chicago Press. pp. 267-278.
Mitchell, Sandra (2004) 'The prescribed and proscribed values in science policy', in
Peter Machamer and Gereon Wolters (eds) Science, Values, and Objectivity. Pittsburgh:
University of Pittsburgh Press. pp. 245-255.
Nagel, Ernest (1961) The Structure of Science: Problems in the Logic of Scientific
Explanation. New York: Harcourt, Brace & World, Inc.
Porter, Theodore (1995) Trust in Numbers: The Pursuit of Objectivity in Science and
Public Life. Princeton, NJ: Princeton University Press.
Root, Michael (2007) 'Social problems', in Harold Kincaid, John Dupré, and Alison
Wylie (eds) Value-Free Science? Ideals and Illusions. New York: Oxford University
Press. pp. 42-57.
Rudner, Richard (1953) 'The scientist qua scientist makes value judgments', Philosophy
of Science, 20: 1-6.
Rudner, Richard (1966) Philosophy of Social Science. Englewood Cliffs, NJ: Prentice-Hall.
Scriven, Michael (1974) 'The exact role of value judgments in science', in K.F.
Schaffner and R.S. Cohen (eds) PSA 1972. Dordrecht, Holland: D. Reidel Publishing
Company. pp. 219-247.
Strauss, Leo (1953) Natural Right and History. Chicago: University of Chicago Press.
Thaler, Richard and Shlomo Benartzi (2004) 'Save more tomorrow: Using behavioral
economics to increase employee savings', Journal of Political Economy, 112: S164-S187.
Uebel, Thomas (2007) 'Philosophy of social science in early logical empiricism: The case
of radical physicalism', in Alan Richardson and Thomas Uebel (eds) The Cambridge
Companion to Logical Empiricism. New York: Cambridge University Press. pp. 250-277.
Weber, Max (1949) The Methodology of the Social Sciences (Edward A. Shils and Henry
A. Finch, trans.). Glencoe, IL: Free Press.
Wylie, Alison (1994) 'Evidential constraints', in Michael Martin and Lee C. McIntyre
(eds) Readings in the Philosophy of Social Science. Cambridge, MA: MIT Press. pp.
747-765.
Wylie, Alison (2002) Thinking from Things: Essays in the Philosophy of Archaeology.
Berkeley: University of California Press.
Wylie, Alison and Lynn Hankinson Nelson (2007) 'Coming to terms with the values of
science: Insights from feminist science studies scholarship', in Harold Kincaid, John
Dupré, and Alison Wylie (eds) Value-Free Science? Ideals and Illusions. New York:
Oxford University Press. pp. 58-86.