Underdetermination of Scientific Theory

At the heart of the underdetermination of scientific theory by evidence is the simple idea that the evidence available to us at a given time may be insufficient to determine what beliefs we should hold in response to it. In a textbook example, if I know that you spent $10 on apples and oranges and that apples cost $1 while oranges cost $2, then I know that you did not buy six oranges, but I do not know whether you bought one orange and eight apples, two oranges and six apples, and so on. A simple scientific example can be found in the rationale behind the important methodological adage that “correlation does not imply causation”. If playing violent video games causes children to be more aggressive in their playground behavior, then we should (barring complications) expect to find a correlation between time spent playing such video games and aggressive behavior on the playground. But that is also what we would expect to find if children who are prone to aggressive behavior tend to enjoy and seek out violent video games more than other children, or if propensities for playing violent video games and for aggressive playground behavior are both caused by some third factor (like being bullied or general parental neglect). So a high correlation between time spent playing violent video games and aggressive playground behavior (by itself) simply underdetermines what we should believe about the causal relationship between the two. But it turns out that this simple and familiar predicament only scratches the surface of the various ways in which problems of underdetermination can arise in the course of scientific investigation.
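The arithmetic behind the textbook example can be made explicit by enumerating every purchase consistent with the evidence. The following short sketch (an illustration added here, not part of the entry's own text) lists all baskets of apples at $1 and oranges at $2 that total $10:

```python
# Enumerate every basket of apples ($1 each) and oranges ($2 each)
# consistent with the single piece of evidence: $10 spent in total.
APPLE_PRICE, ORANGE_PRICE, TOTAL = 1, 2, 10

solutions = []
for oranges in range(TOTAL // ORANGE_PRICE + 1):
    remainder = TOTAL - oranges * ORANGE_PRICE
    if remainder % APPLE_PRICE == 0:
        solutions.append((remainder // APPLE_PRICE, oranges))

for apples, oranges in solutions:
    print(f"{apples} apples, {oranges} oranges")
# The evidence rules out six oranges ($12 > $10), yet it leaves six
# distinct baskets open, from (10 apples, 0 oranges) to (0 apples, 5 oranges).
```

The single constraint eliminates some possibilities (six oranges) while leaving several hypotheses equally consistent with the data, which is precisely the predicament the paragraph describes.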

1. A First Look: Duhem, Quine, and the Problems of Underdetermination


The scope of the epistemic challenge arising from underdetermination is not limited only to scientific contexts, as is perhaps most readily seen in classical skeptical attacks on our knowledge more generally. René Descartes ([1640] 1996) famously sought to doubt any and all of his beliefs which could possibly be doubted by supposing that there might be an all-powerful Evil Demon who sought to deceive him. Descartes’ challenge appeals to a form of underdetermination: he notes that all our sensory experiences would be just the same if they were caused by this Evil Demon rather than an external world of rocks and trees. Likewise, Nelson Goodman’s (1955) “New Riddle of Induction” turns on the idea that the evidence we now have could equally well be taken to support inductive generalizations quite different from those we usually take them to support, with radically different consequences for the course of future events.[1] Nonetheless, underdetermination has been thought to arise in scientific contexts in a variety of distinctive and important ways that do not simply recreate such radically skeptical possibilities.

The traditional locus classicus for underdetermination in science is the work of Pierre Duhem, a French physicist as well as historian and philosopher of science who lived at the turn of the 20th Century. In The Aim and Structure of Physical Theory, Duhem formulated various problems of scientific underdetermination in an especially perspicuous and compelling way, although he himself argued that these problems posed serious challenges only to our efforts to confirm theories in physics. In the middle of the 20th Century, W. V. O. Quine suggested that such challenges applied not only to the confirmation of all types of scientific theories, but to all knowledge claims whatsoever. His incorporation and further development of these problems as part of a general account of human knowledge was one of the most significant developments of 20th Century epistemology. But neither Duhem nor Quine was careful to systematically distinguish a number of fundamentally distinct lines of thinking about underdetermination found in their work. Perhaps the most important division is between what we might call holist and contrastive forms of underdetermination. Holist underdetermination (Section 2 below) arises whenever our inability to test hypotheses in isolation leaves us underdetermined in our response to a failed prediction or some other piece of disconfirming evidence. That is, because hypotheses have empirical implications or consequences only when conjoined with other hypotheses and/or background beliefs about the world, a failed prediction or falsified empirical consequence typically leaves open to us the possibility of blaming and abandoning one of these background beliefs and/or ‘auxiliary’ hypotheses rather than the hypothesis we set out to test in the first place.
But contrastive underdetermination (Section 3 below) involves the quite different possibility that for any body of evidence confirming a theory, there might well be other theories that are also well confirmed by that very same body of evidence. Moreover, claims of underdetermination of either of these two fundamental varieties can vary in strength and character in any number of ways: one might, for example, suggest that the choice between two theories or two ways of revising our beliefs is transiently underdetermined simply by the evidence we happen to have at present, or instead permanently underdetermined by all possible evidence. Indeed, the variety of forms of underdetermination that confront scientific inquiry, and the causes and consequences claimed for these different varieties, are sufficiently heterogeneous that attempts to address “the” problem of underdetermination for scientific theories have often engendered considerable confusion and argumentation at cross-purposes.[2]

Moreover, such differences in the character and strength of various claims of underdetermination turn out to be crucial for resolving the significance of the issue. For example, in some recently influential discussions of science it has become commonplace for scholars in a wide variety of academic disciplines to make casual appeal to claims of underdetermination (especially of the holist variety) to support the idea that something besides evidence must step in to do the further work of determining beliefs and/or changes of belief in scientific contexts. Perhaps most prominent among these are adherents of the sociology of scientific knowledge (SSK) movement and some feminist science critics who have argued that it is typically the sociopolitical interests and/or pursuit of power and influence by scientists themselves which play a crucial and even decisive role in determining which beliefs are actually abandoned or retained in response to conflicting evidence. As we will see in Section 2.2, however, Larry Laudan has argued that such claims depend upon simple equivocation between comparatively weak or trivial forms of underdetermination and the far stronger varieties from which they draw radical conclusions about the limited reach of evidence and rationality in science. In the sections that follow we will seek to clearly characterize and distinguish the various forms of both holist and contrastive underdetermination that have been suggested to arise in scientific contexts (noting some important connections between them along the way), assess the strength and significance of the heterogeneous argumentative considerations offered in support of and against them, and consider just which forms of underdetermination pose genuinely consequential challenges for scientific inquiry.

2. Holist Underdetermination and Challenges to Scientific Rationality

Duhem’s original case for holist underdetermination is, perhaps unsurprisingly, intimately bound up with his arguments for confirmational holism: the claim that theories or hypotheses can only be subjected to empirical testing in groups or collections, never in isolation. The idea here is that a single scientific hypothesis does not by itself carry any implications about what we should expect to observe in nature; rather, we can derive empirical consequences from an hypothesis only when it is conjoined with many other beliefs and hypotheses, including background assumptions about the world, beliefs about how measuring instruments operate, further hypotheses about the interactions between objects in the original hypothesis’ field of study and the surrounding environment, etc. For this reason, Duhem argues, when an empirical prediction is falsified, we do not know whether the fault lies with the hypothesis we originally sought to test or with one of the many other beliefs and hypotheses that were also needed and used to generate the failed prediction:

A physicist decides to demonstrate the inaccuracy of a proposition; in order to deduce from this proposition the prediction of a phenomenon and institute the experiment which is to show whether this phenomenon is or is not produced, in order to interpret the results of this experiment and establish that the predicted phenomenon is not produced, he does not confine himself to making use of the proposition in question; he makes use also of a whole group of theories accepted by him as beyond dispute. The prediction of the phenomenon, whose nonproduction is to cut off debate, does not derive from the proposition challenged if taken by itself, but from the proposition at issue joined to that whole group of theories; if the predicted phenomenon is not produced, the only thing the experiment teaches us is that among the propositions used to predict the phenomenon and to establish whether it would be produced, there is at least one error; but where this error lies is just what it does not tell us. ([1914] 1954, 185)

Duhem supports this claim with examples from physical theory, including one designed to illustrate a celebrated further consequence he draws from it. Holist underdetermination ensures, Duhem argues, that there cannot be any such thing as a “crucial experiment” (experimentum crucis): a single experiment whose outcome is predicted differently by two competing theories and which therefore serves to definitively confirm one and refute the other. For example, in a famous scientific episode intended to resolve the ongoing heated battle between partisans of the theory that light consists of a stream of particles moving at extremely high speed (the particle or “emission” theory of light) and defenders of the view that light consists instead of waves propagated through a mechanical medium (the wave theory), the physicist Foucault designed an apparatus to test the two theories’ competing claims about the speed of transmission of light in different media: the particle theory implied that light would travel faster in water than in air, while the wave theory implied that the reverse was true. Although the outcome of the experiment was taken to show that light travels faster in air than in water,[3] Duhem argues that this is far from a refutation of the hypothesis of emission:

in fact, what the experiment declares stained with error is the whole group of propositions accepted by Newton, and after him by Laplace and Biot, that is, the whole theory from which we deduce the relation between the index of refraction and the velocity of light in various media. But in condemning this system as a whole by declaring it stained with error, the experiment does not tell us where the error lies. Is it in the fundamental hypothesis that light consists in projectiles thrown out with great speed by luminous bodies? Is it in some other assumption concerning the actions experienced by light corpuscles due to the media in which they move? We know nothing about that. It would be rash to believe, as Arago seems to have thought, that Foucault’s experiment condemns once and for all the very hypothesis of emission, i.e., the assimilation of a ray of light to a swarm of projectiles. If physicists had attached some value to this task, they would undoubtedly have succeeded in founding on this assumption a system of optics that would agree with Foucault’s experiment. ([1914] 1954, p. 187)

From this and similar examples, Duhem drew the quite general conclusion that our response to the experimental or observational falsification of a theory is always underdetermined in this way. When the world does not live up to our theory-grounded expectations, we must give up something, but because no hypothesis is ever tested in isolation, no experiment ever tells us precisely which belief it is that we must revise or give up as mistaken:

In sum, the physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed. ([1914] 1954, 187)

The predicament Duhem here identifies is no mere rainy day puzzle for philosophers of science, but a methodological challenge that consistently arises in the course of scientific practice itself. It is simply not true that for practical purposes and in concrete contexts there is always just a single revision of our beliefs in response to disconfirming evidence that is obviously correct, most promising, or even most sensible to pursue. To cite a classic example, when Newton’s celestial mechanics failed to correctly predict the orbit of Uranus, scientists at the time did not simply abandon the theory but protected it from refutation by instead challenging the background assumption that the solar system contained only seven planets. This strategy bore fruit, notwithstanding the falsity of Newton’s theory: by calculating the location of a hypothetical eighth planet influencing the orbit of Uranus, the astronomers Adams and Leverrier were eventually led to discover Neptune in 1846. But the very same strategy failed when used to try to explain the advance of the perihelion of Mercury’s orbit by postulating the existence of “Vulcan”, an additional planet located between Mercury and the sun, and this phenomenon would resist satisfactory explanation until the arrival of Einstein’s theory of general relativity. So it seems that Duhem was right to suggest not only that hypotheses must be tested as a group or a collection, but also that it is by no means a foregone conclusion which member of such a collection should be abandoned or revised in response to a failed empirical test or false implication. Indeed, this very example illustrates why Duhem’s own rather hopeful appeal to the ‘good sense’ of scientists themselves in deciding when a given hypothesis ought to be abandoned promises very little if any relief from the general predicament of holist underdetermination.

As noted above, Duhem thought that the sort of underdetermination he had described presented a challenge only for theoretical physics, but subsequent thinking in the philosophy of science has tended to the opinion that the predicament Duhem described applies to theoretical testing in all fields of scientific inquiry. We cannot, for example, test an hypothesis about the phenotypic effects of a particular gene without presupposing a host of further beliefs about what genes are, how they work, how we can identify them, what other genes are doing, and so on. In the middle of the 20th Century, W. V. O. Quine would incorporate confirmational holism and its associated concerns about underdetermination into an extraordinarily influential account of knowledge in general. As part of his famous (1951) critique of the widely accepted distinction between truths that are analytic (true by definition, or as a matter of logic or language alone) and those that are synthetic (true in virtue of some contingent fact about the way the world is), Quine argued that all of the beliefs we hold at any given time are linked in an interconnected web, which encounters our sensory experience only at its periphery:

The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. But the total field is so underdetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to reevaluate in the light of any single contrary experience. No particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole. (1951, 42–3)

One consequence of this general picture of human knowledge is that all of our beliefs are tested against experience only as a corporate body—or as Quine sometimes puts it, “The unit of empirical significance is the whole of science” (1951, 42).[4] A mismatch between what the web as a whole leads us to expect and the sensory experiences we actually receive will occasion some revision in our beliefs, but which revision we should make to bring the web as a whole back into conformity with our experiences is radically underdetermined by those experiences themselves. To use Quine’s example, if we find our belief that there are brick houses on Elm Street to be in conflict with our immediate sense experience, we might revise our beliefs about the houses on Elm Street, but we might equally well modify instead our beliefs about the appearance of brick, our present location, or innumerable other beliefs constituting the interconnected web. In a pinch, we might even decide that our present sensory experiences are simply hallucinations! Quine’s point was not that any of these are particularly likely or reasonable responses to recalcitrant experiences (indeed, an important part of his account is the explanation of why they are not), but instead that they would serve equally well to bring the web of belief as a whole in line with our experience. And if the belief that there are brick houses on Elm Street were sufficiently important to us, Quine insisted, it would be possible for us to preserve it “come what may” (in the way of empirical evidence), by making sufficiently radical adjustments elsewhere in the web of belief.
It is in principle open to us, Quine argued, to revise even beliefs about logic, mathematics, or the meanings of our terms in response to recalcitrant experience; it might seem a tempting solution to certain persistent difficulties in quantum mechanics, for example, to reject classical logic’s law of the excluded middle (allowing physical particles to both have and not have some determinate classical physical property like position or momentum at a given time). The only test of a belief, Quine argued, is whether it fits into a web of connected beliefs that accords well with our experience on the whole. And because this leaves any and all beliefs in that web at least potentially subject to revision on the basis of our ongoing sense experience or empirical evidence, he insisted, there simply are no beliefs that are analytic in the originally supposed sense of immune to revision in light of experience, or true no matter what the world is like.

Quine recognized, of course, that many of the logically possible ways of revising our beliefs in response to recalcitrant experiences that remain open to us nonetheless strike us as ad hoc, perfectly ridiculous, or worse. He argues (1955) that our actual revisions of the web of belief seek to maximize the theoretical “virtues” of simplicity, familiarity, scope, and fecundity, along with conformity to experience, and elsewhere suggests that we typically seek to resolve conflicts between the web of our beliefs and our sensory experiences in accordance with a principle of “conservatism”, that is, by making the smallest possible number of changes to the least central beliefs we can that will suffice to reconcile the web with experience. That is, Quine recognized that when we encounter recalcitrant experience we are not usually at a loss to decide which of our beliefs to revise in response to it, but he claimed that this is simply because we are strongly disposed as a matter of fundamental psychology to prefer whatever revision requires the most minimal mutilation of the existing web of beliefs and/or maximizes virtues that he explicitly recognizes as pragmatic in character. Indeed, it would seem that on Quine’s view the very notion of a belief being more central or peripheral or in lesser or greater “proximity” to sense experience should be cashed out simply as a measure of our willingness to revise it in response to recalcitrant experience. That is, it would seem that what it means for one belief to be located “closer” to the sensory periphery of the web than another is simply that we are more likely to revise the first than the second if doing so would enable us to bring the web as a whole into conformity with otherwise recalcitrant sense experience. 
Thus, Quine saw the traditional distinction between analytic and synthetic beliefs as simply registering the endpoints of a psychological continuum ordering our beliefs according to the ease and likelihood with which we are prepared to revise them in order to reconcile the web as a whole with our sense experience as a whole.

It is perhaps unsurprising that such holist underdetermination has been taken to pose a threat to the fundamental rationality of the scientific enterprise. The claim that the empirical evidence alone underdetermines our response to failed predictions or recalcitrant experience might even seem to invite the suggestion that what systematically steps into the breach to do the further work of singling out just one or a few candidate responses to disconfirming evidence is something irrational or at least arational in character. Imre Lakatos and Paul Feyerabend each suggested that because of underdetermination, the difference between empirically successful and unsuccessful theories or research programs is largely a function of the differences in talent, creativity, resolve, and resources of those who advocate them. And at least since the influential work of Thomas Kuhn, one important line of thinking about science has held that it is ultimately the social and political interests (in a suitably broad sense) of scientists themselves which serve to determine their responses to disconfirming evidence and therefore the further empirical, methodological, and other commitments of any given scientist or scientific community. Mary Hesse suggests that Quinean underdetermination showed why certain “non-logical” and “extra-empirical” considerations must play a role in theory choice, and claims that “it is only a short step from this philosophy of science to the suggestion that adoption of such criteria, that can be seen to be different for different groups and at different periods, should be explicable by social rather than logical factors” (1980, 33). 
Perhaps the most prominent modern-day defenders of this line of thinking are those scholars in the sociology of scientific knowledge (SSK) movement and in feminist science studies who argue that it is typically the career interests, political affiliations, intellectual allegiances, gender biases, and/or pursuit of power and influence by scientists themselves which play a crucial or even decisive role in determining precisely which beliefs are abandoned or retained when faced with conflicting evidence (classic works in SSK include Bloor 1991, Collins 1992, and Shapin and Schaffer 1985; in feminist science studies, see Longino 1990, 2002, and for a recent review, Nelson 2022). The shared argumentative schema here is one on which holist underdetermination ensures that the evidence alone cannot do the work of picking out a unique response to failed predictions or recalcitrant experience, thus something else must step in to do the job, and sociologists of scientific knowledge, feminist critics of science, and other interest-driven theorists of science each have their favored suggestions close to hand. (For useful further discussion, see Okasha 2000. Note that historians of science have also appealed to underdetermination in presenting “counterfactual histories” exploring the ways in which important historical developments in science might have turned out quite differently than they actually did; see, for example, Radick 2023.)

In response to this line of argument, Larry Laudan (1990) argues that the significance of such underdetermination has been greatly exaggerated. Underdetermination actually comes in a wide variety of strengths, he insists, depending on precisely what is being asserted about the character, the availability, and (most importantly) the rational defensibility of the various competing hypotheses or ways of revising our beliefs that the evidence supposedly leaves us free to accept. Laudan usefully distinguishes a number of different dimensions along which claims of underdetermination vary in strength, and he goes on to insist that those who attribute dramatic significance to the thesis that our scientific theories are underdetermined by the evidence defend only the weaker versions of that thesis, yet draw dire consequences and shocking morals regarding the character and status of the scientific enterprise from much stronger versions. He suggests, for instance, that Quine’s famous claim that any hypothesis can be preserved “come what may” in the way of evidence can be defended simply as a description of what it is psychologically possible for human beings to do, but Laudan insists that in this form the thesis is simply bereft of interesting or important consequences for epistemology, the study of knowledge. Along this dimension of variation, the strong version of the thesis asserts that it is always normatively or rationally defensible to retain any hypothesis in the light of any evidence whatsoever, but this latter, stronger version of the claim, Laudan suggests, is one for which no convincing evidence or argument has ever been offered. More generally, Laudan argues, arguments for underdetermination turn on implausibly treating all logically possible responses to the evidence as equally justified or rationally defensible.
For example, Laudan suggests that we might reasonably hold the resources of deductive logic to be insufficient to single out just one acceptable response to disconfirming evidence, but not that deductive logic plus the sorts of ampliative principles of good reasoning typically deployed in scientific contexts are insufficient to do so. Similarly, defenders of underdetermination might assert either the nonuniqueness claim that for any given theory or web of beliefs there is at least one alternative that can also be reconciled with the available evidence, or the much stronger claim that all of the contraries of any given theory can be reconciled with the available evidence equally well. And the claim of such “reconciliation” itself disguises a wide range of further alternative possibilities: that our theories can be made logically compatible with any amount of disconfirming evidence (perhaps by the simple expedient of removing any claim(s) with which the evidence is in conflict), that any theory may be reformulated or revised so as to entail any piece of previously disconfirming evidence, or so as to explain previously disconfirming evidence, or that any theory can be made to be as well supported empirically by any collection of evidence as any other theory. And in all of these respects, Laudan claims, partisans have defended only the weaker forms of underdetermination while founding their further claims about and conceptions of the scientific enterprise on versions much stronger than those they have managed or even attempted to defend.

Laudan is certainly right to distinguish these various versions of holist underdetermination, and he is equally right to suggest that many of the thinkers he confronts have derived grand morals concerning the scientific enterprise from much stronger versions of underdetermination than they are able to defend, but the underlying situation is somewhat more complex than he suggests. Laudan’s overarching claim is that champions of holist underdetermination show only that a wide variety of responses to disconfirming evidence are logically possible (or even just psychologically possible), rather than that these are all rationally defensible or equally well-supported by the evidence. But his straightforward appeal to further epistemic resources like ampliative principles of belief revision that are supposed to help narrow the merely logical possibilities down to those which are reasonable or rationally defensible is itself problematic, at least as part of any attempt to respond to Quine. This is because on Quine’s holist picture of knowledge such further ampliative principles governing legitimate belief revision are themselves simply part of the web of our beliefs, and are therefore open to revision in response to recalcitrant experience as well. Indeed, this is true even for the principles of deductive logic and the (consequent) demand for particular forms of logical consistency between parts of the web itself! So while it is true that the ampliative principles we currently embrace do not leave all logically or even psychologically possible responses to the evidence open to us (or leave us free to preserve any hypothesis “come what may”), our continued adherence to these very principles , rather than being willing to revise the web of belief so as to abandon them, is part of the phenomenon to which Quine is using underdetermination to draw our attention, and so cannot be taken for granted without begging the question. 
Put another way, Quine does not simply ignore the further principles that function to ensure that we revise the web of belief in one way rather than others, but it follows from his account that such principles are themselves part of the web and therefore candidates for revision in our efforts to bring the web of beliefs into conformity (by the resulting web’s own lights) with sensory experience. This recognition makes clear why it will be extremely difficult to say how the shift to an alternative web of belief (with alternative ampliative or even deductive principles of belief revision) should or even can be evaluated for its rational defensibility. Each proposed revision is likely to be maximally rational by the lights of the principles it itself sanctions.[5] Of course we can rightly say that many candidate revisions would violate our presently accepted ampliative principles of rational belief revision, but the preference we have for those rather than the alternatives is itself simply generated by their position in the web of belief we have inherited, and the role that they themselves play in guiding the revisions we are inclined to make to that web in light of ongoing experience.

Thus, if we accept Quine’s general picture of knowledge, it becomes quite difficult to disentangle normative from descriptive issues, or questions about the psychology of human belief revision from questions about the justifiability or rational defensibility of such revisions. It is in part for this reason that Quine famously suggests (1969, 82; see also pp. 75–76) that epistemology itself “falls into place as a chapter of psychology and hence of natural science.” His point is not that epistemology should simply be abandoned in favor of psychology, but instead that there is ultimately no way to draw a meaningful distinction between the two. (James Woodward, in comments on an earlier draft of this entry, pointed out that this makes it all the harder to assess the significance of Quinean underdetermination in light of Laudan’s complaint or even know the rules for doing so, but in an important way this difficulty was Quine’s point all along!) Quine’s claim is that “[e]ach man is given a scientific heritage plus a continuing barrage of sensory stimulation; and the considerations which guide him in warping his scientific heritage to fit his continuing sensory promptings are, where rational, pragmatic” (1951, 46), but the role of these pragmatic considerations or principles in selecting just one of the many possible revisions of the web of belief in response to recalcitrant experience is not to be contrasted with those same principles having rational or epistemic justification. Far from conflicting with or even being orthogonal to the search for truth and our efforts to render our beliefs maximally responsive to the evidence, Quine insists, revising our beliefs in accordance with such pragmatic principles “at bottom, is what evidence is” (1955, 251).
Whether or not this strongly naturalistic conception of epistemology can ultimately be defended, it is misleading for Laudan to suggest that the thesis of underdetermination becomes trivial or obviously insupportable the moment we inquire into the rational defensibility rather than the mere logical or psychological possibility of alternative revisions to the holist’s web of belief.

In fact, there is an important connection between this lacuna in Laudan’s discussion and the further uses made of the thesis of underdetermination by sociologists of scientific knowledge, feminist epistemologists, and other vocal champions of holist underdetermination. When faced with the invocation of further ampliative standards or principles that supposedly rule out some responses to disconfirmation as irrational or unreasonable, these thinkers typically respond by insisting that the embrace of such further standards or principles (or perhaps their application to particular cases) is itself underdetermined, historically contingent, and/or subject to ongoing social negotiation. For this reason, they suggest, such appeals (and their success or failure in convincing the members of a given community) should be explained by reference to the same broadly social and political interests that they claim are at the root of theory choice and belief change in science more generally (see, e.g., Shapin and Schaffer, 1985). On both accounts, then, our response to recalcitrant evidence or a failed prediction is constrained in important ways by features of the existing web of beliefs. But for Quine, the continuing force of these constraints is ultimately imposed by the fundamental principles of human psychology (such as our preference for minimal mutilation of the web, or the pragmatic virtues of simplicity, fecundity, etc.), while for constructivist theorists of science such as Shapin and Schaffer, the continuing force of any such constraints is limited only by the ongoing negotiated agreement of the communities of scientists who respect them.

As this last contrast makes clear, recognizing the limitations of Laudan’s critique of Quine and the fact that we cannot dismiss holist underdetermination with any straightforward appeal to ampliative principles of good reasoning by itself does nothing to establish the further positive claims about belief revision advanced by interest-driven theorists of science. Conceding that theory choice or belief revision in science is underdetermined by the evidence in just the ways that Duhem and/or Quine suggested leaves entirely open whether it is instead the (suitably broad) social or political interests of scientists themselves that do the further work of singling out the particular beliefs or responses to falsifying evidence that any particular scientist or scientific community will actually adopt or find compelling. Even many of those philosophers of science who are most strongly convinced of the general significance of various forms of underdetermination remain deeply skeptical of this latter thesis and thoroughly unconvinced by the empirical evidence that has been offered in support of it (usually in the form of case studies of particular historical episodes in science).

Appeals to underdetermination have also loomed large in recent philosophical debates concerning the place of values in science, with a number of authors arguing that the underdetermination of theory by data is among the central reasons that values (or “non-epistemic” values) do and perhaps must play a central role in scientific inquiry. Feminist philosophers of science have sometimes suggested that it is such underdetermination which creates room not only for unwarranted androcentric values or assumptions to play central roles in the embrace of particular theoretical possibilities, but also for the critical and alternative approaches favored by feminists themselves (e.g. Nelson 2022). But appeals to underdetermination also feature prominently in more general arguments against the possibility or desirability of value-free science. Perhaps most influentially, Helen Longino’s “contextual empiricism” suggests that a wide variety of non-epistemic values play important roles in determining our scientific beliefs in part because underdetermination prevents data or evidence alone from doing so. For this and other reasons she concludes that objectivity in science is therefore best served by a diverse set of participants who bring a variety of different values or value-laden assumptions to the enterprise (Longino 1990, 2002).

3. Contrastive Underdetermination, Empirical Equivalents, and Unconceived Alternatives

Although it is also a form of underdetermination, what we described in Section 1 above as contrastive underdetermination raises fundamentally different issues from the holist variety considered in Section 2 (Bonk 2008 raises many of these issues). John Stuart Mill articulated the challenge of contrastive underdetermination with impressive clarity in A System of Logic, where he writes:

Most thinkers of any degree of sobriety allow, that an hypothesis of this kind is not to be received as probably true because it accounts for all the known phenomena, since this is a condition sometimes fulfilled tolerably well by two conflicting hypotheses...while there are probably a thousand more which are equally possible, but which, for want of anything analogous in our experience, our minds are unfitted to conceive. ([1867] 1900, 328)

This same concern is also evident in Duhem’s original writings concerning so-called crucial experiments, where he seeks to show that even when we explicitly suspend any concerns about holist underdetermination, the contrastive variety remains an obstacle to our discovery of truth in theoretical science:

But let us admit for a moment that in each of these systems [concerning the nature of light] everything is compelled to be necessary by strict logic, except a single hypothesis; consequently, let us admit that the facts, in condemning one of the two systems, condemn once and for all the single doubtful assumption it contains. Does it follow that we can find in the ‘crucial experiment’ an irrefutable procedure for transforming one of the two hypotheses before us into a demonstrated truth? Between two contradictory theorems of geometry there is no room for a third judgment; if one is false, the other is necessarily true. Do two hypotheses in physics ever constitute such a strict dilemma? Shall we ever dare to assert that no other hypothesis is imaginable? Light may be a swarm of projectiles, or it may be a vibratory motion whose waves are propagated in a medium; is it forbidden to be anything else at all? ([1914] 1954, 189)

Contrastive underdetermination is so-called because it questions the ability of the evidence to confirm any given hypothesis against alternatives, and the central focus of discussion in this connection (equally often regarded as “the” problem of underdetermination) concerns the character of the supposed alternatives. Of course the two problems are not entirely disconnected, because it is open to us to consider alternative possible modifications of the web of beliefs as alternative theories between which the empirical evidence alone is powerless to decide. But we have already seen that one need not think of the alternative responses to recalcitrant experience as competing theoretical alternatives to appreciate the character of the holist’s challenge, and we will see that one need not embrace any version of holism about confirmation to appreciate the quite distinct problem that the available evidence might support more than one theoretical alternative. It is perhaps most useful here to think of holist underdetermination as starting from a particular theory or body of beliefs and claiming that our revision of those beliefs in response to new evidence may be underdetermined, while contrastive underdetermination instead starts from a given body of evidence and claims that more than one theory may be well-supported by that evidence. Part of what has contributed to the conflation of these two problems is the holist presuppositions of those who originally made them famous. After all, on Quine’s view, we simply revise the web of belief in response to recalcitrant experience, and so the suggestion that there are multiple possible revisions of the web available in response to any particular evidential finding just is the claim that there are in fact many different “theories” (i.e. candidate webs of belief) that are equally well-supported by any given body of data. [ 6 ] But if we give up such extreme holist views of evidence, meaning, and/or confirmation, the two problems take on very different identities, with very different considerations in favor of taking them seriously, very different consequences, and very different candidate solutions. Notice, for instance, that even if we somehow knew that no other hypothesis on a given subject was well-confirmed by a given body of data, that would not tell us where to place the blame or which of our beliefs to give up if the remaining hypothesis in conjunction with others subsequently resulted in a failed empirical prediction. And as Duhem suggests in the passage cited above, even if we supposed that we somehow knew exactly which of our hypotheses to blame in response to a failed empirical prediction, this would not help us to decide whether or not there are other hypotheses available that are also well-confirmed by the data we actually have.

One way to see why not is to consider an analogy that champions of contrastive underdetermination have sometimes used to support their case. If we consider any finite group of data points, an elementary proof reveals that there are an infinite number of distinct mathematical functions describing different curves that will pass through all of them. As we add further data to our initial set we will eliminate functions describing curves which no longer capture all of the data points in the new, larger set, but no matter how much data we accumulate, there will always be an infinite number of functions remaining that define curves including all the data points in the new set and which would therefore seem to be equally well supported by the empirical evidence. No finite amount of data will ever be able to narrow the possibilities down to just a single function or indeed, any finite number of candidate functions, from which the distribution of data points we have might have been generated. Each new data point we gather eliminates an infinite number of curves that previously fit all the data (so the problem here is not the holist’s challenge that we do not know which beliefs to give up in response to failed predictions or disconfirming evidence), but also leaves an infinite number still in contention.
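The curve-fitting predicament can be made concrete in a few lines of code. In this illustrative sketch (the particular data points and functions are my own, not drawn from the literature), two distinct polynomials agree exactly on every observed data point yet diverge at points not yet tested; varying a single coefficient yields a whole continuum of further curves with the same property:

```python
# Illustration: any finite data set is fit exactly by infinitely many curves.
# Three data points generated by y = x**2:
points = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]

def f1(x):
    # the "intended" curve
    return x ** 2

def f2(x, c=5.0):
    # adds a term that vanishes at every observed x, so f2 agrees with f1
    # on all the data for ANY coefficient c -- a continuum of rival curves
    return x ** 2 + c * x * (x - 1.0) * (x - 2.0)

assert all(abs(f1(x) - y) < 1e-9 for x, y in points)
assert all(abs(f2(x) - y) < 1e-9 for x, y in points)  # fits the same data
assert f1(3.0) != f2(3.0)  # yet the curves diverge at an untested point
```

Gathering a new data point at x = 3 would eliminate f2 for this particular coefficient, but for any enlarged data set the same construction (adding a term that vanishes at every observed x) leaves infinitely many rivals still in contention.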

Of course, generating and testing fundamental scientific hypotheses is rarely if ever a matter of finding curves that fit collections of data points, so nothing follows directly from this mathematical analogy for the significance of contrastive underdetermination in most scientific contexts. But Bas van Fraassen has offered an extremely influential line of argument intended to show that such contrastive underdetermination is a serious concern for scientific theorizing more generally. In The Scientific Image (1980), van Fraassen uses a now-classic example to illustrate the possibility that even our best scientific theories might have empirical equivalents: that is, alternative theories making the very same empirical predictions, and which therefore cannot be better or worse supported by any possible body of evidence. Consider Newton’s cosmology, with its laws of motion and gravitational attraction. As Newton himself realized, exactly the same predictions are made by the theory whether we assume that the entire universe is at rest or assume instead that it is moving with some constant velocity in any given direction: from our position within it, we have no way to detect constant, absolute motion by the universe as a whole. Thus, van Fraassen argues, we are here faced with empirically equivalent scientific theories: Newtonian mechanics and gravitation conjoined either with the fundamental assumption that the universe is at absolute rest (as Newton himself believed), or with any one of an infinite variety of alternative assumptions about the constant velocity with which the universe is moving in some particular direction. All of these theories make all and only the same empirical predictions, so no evidence will ever permit us to decide between them on empirical grounds. [ 7 ]
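The invariance behind van Fraassen’s Newtonian example can be checked numerically. The following sketch (a toy one-dimensional simulation of my own devising, not van Fraassen’s) boosts every body by the same constant velocity and confirms that the relative configuration, which is all that observers inside the universe can measure, comes out exactly the same:

```python
# Toy illustration of Galilean boost invariance (assumed setup, not from
# the text): evolve two bodies with simple Euler integration.
def evolve(positions, velocities, accel, dt, steps):
    ps, vs = list(positions), list(velocities)
    for _ in range(steps):
        vs = [v + a * dt for v, a in zip(vs, accel)]
        ps = [p + v * dt for p, v in zip(ps, vs)]
    return ps

ps = [0.0, 10.0]   # two bodies on a line
vs = [1.0, -1.0]
acc = [0.5, -0.5]  # constant internal accelerations (in full Newtonian
                   # mechanics these depend only on relative positions,
                   # so the same invariance holds there too)

boost = 7.3        # any constant "velocity of the universe as a whole"
rest = evolve(ps, vs, acc, 0.1, 100)
moving = evolve(ps, [v + boost for v in vs], acc, 0.1, 100)

# The relative configuration is identical in the two "universes":
assert abs((rest[1] - rest[0]) - (moving[1] - moving[0])) < 1e-9
```

The absolute positions differ by boost × elapsed time, but no measurement of separations or relative velocities made from inside can detect this, which is the sense in which the boosted variants are empirically equivalent.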

Van Fraassen is widely (though mistakenly) regarded as holding that the prospect of contrastive underdetermination grounded in such empirical equivalents demands that we restrict our epistemic ambitions for the scientific enterprise itself. His constructive empiricism holds that the aim of science is not to find true theories, but only theories that are empirically adequate: that is, theories whose claims about observable phenomena are all true. Since the empirical adequacy of a theory is not threatened by the existence of another that is empirically equivalent to it, fulfilling this aim has nothing to fear from the possibility of such empirical equivalents. In reply, many critics have suggested that van Fraassen gives no reasons for restricting belief to empirical adequacy that could not also be used to argue for suspending our belief in the future empirical adequacy of our best present theories. Of course there could be empirical equivalents to our best theories, but there could also be theories equally well-supported by all the evidence up to the present which diverge in their predictions about observables in future cases not yet tested. This challenge seems to miss the point of van Fraassen’s epistemic voluntarism: his claim is that we should believe no more but also no less than we need to take full advantage of our scientific theories, and a commitment to the empirical adequacy of our theories, he suggests, is the least we can get away with in this regard. Of course it is true that we are running some epistemic risk in believing in even the full empirical adequacy of our present theories, but this is the minimum we need to take full advantage of the fruits of our scientific labors, and the risk is considerably less than what we assume in believing in their truth: as van Fraassen famously suggests, “it is not an epistemic principle that one might as well hang for a sheep as a lamb” (1980, 72).

In an influential discussion, Larry Laudan and Jarrett Leplin (1991) argue that philosophers of science have invested even the bare possibility that our theories might have empirical equivalents with far too much epistemic significance. Notwithstanding the popularity of the presumption that there are empirically equivalent rivals to every theory, they argue, the conjunction of several familiar and relatively uncontroversial epistemological theses is sufficient to defeat it. Because the boundaries of what is observable change as we develop new experimental methods and instruments, because auxiliary assumptions are always needed to derive empirical consequences from a theory (cf. confirmational holism, above), and because these auxiliary assumptions are themselves subject to change over time, Laudan and Leplin conclude that there is no guarantee that any two theories judged to be empirically equivalent at a given time will remain so as the state of our knowledge advances. Any judgment of empirical equivalence is thus both defeasible and relativized to a particular state of science, and so there is no foundation for a general pessimism about our ability to distinguish theories that are empirically equivalent to each other on empirical grounds. Although they concede that we could have good reason to think that particular theories have empirically equivalent rivals, this must be established case-by-case rather than by any general argument or presumption.

One fairly standard reply to this line of argument is to suggest that what Laudan and Leplin really show is that the notion of empirical equivalence must be applied to larger collections of beliefs than those traditionally identified as scientific theories—at least large enough to encompass the auxiliary assumptions needed to derive empirical predictions from them. At the extreme, perhaps this means that the notion of empirical equivalents (or at least timeless empirical equivalents) cannot be applied to anything less than “systems of the world” (i.e. total Quinean webs of belief), but even that is not fatal: what the champion of contrastive underdetermination asserts is that there are empirically equivalent systems of the world that incorporate different theories of the nature of light, or spacetime, or whatever (for useful discussion, see Okasha 2002). On the other hand, it might seem that quick examples like van Fraassen’s variants of Newtonian cosmology do not serve to make this thesis as plausible as the more limited claim of empirical equivalence for individual theories. It seems equally natural, however, to respond to Laudan and Leplin simply by conceding the variability in empirical equivalence but insisting that this is not enough to undermine the problem. Empirical equivalents create a serious obstacle to belief in a theory so long as there is some empirical equivalent to that theory at any given time, but it need not be the same one at each time. On this line of thinking, cases like van Fraassen’s Newtonian example illustrate how easy it is for theories to admit of empirical equivalents at any given time, and thus constitute a reason for thinking that there probably are or will be empirical equivalents to any given theory at any particular time, assuring that whenever the question of belief in a given theory arises, the challenge posed to it by contrastive underdetermination arises as well.

Laudan and Leplin also suggest, however, that even if the universal existence of empirical equivalents were conceded, this would do much less to establish the significance of underdetermination than its champions have supposed, because “theories with exactly the same empirical consequences may admit of differing degrees of evidential support” (1991, 465). A theory may be better supported than an empirical equivalent, for instance, because the former but not the latter is derivable from a more general theory whose consequences include a third, well supported, hypothesis. More generally, the belief-worthiness of an hypothesis depends crucially on how it is connected or related to other things we believe and the evidential support we have for those other beliefs. [ 8 ] Laudan and Leplin suggest that we have invited the specter of rampant underdetermination only by failing to keep this familiar home truth in mind and instead implausibly identifying the evidence bearing on a theory exclusively with the theory’s own entailments or empirical consequences (but cf. Tulodziecki 2012). This impoverished view of evidential support, they argue, is in turn the legacy of a failed foundationalist and positivistic approach to the philosophy of science which mistakenly assimilates epistemic questions about how to decide whether or not to believe a theory to semantic questions about how to establish a theory’s meaning or truth-conditions.

John Earman (1993) has argued that this dismissive diagnosis does not do justice to the threat posed by underdetermination. He argues that worries about underdetermination are an aspect of the more general question of the reliability of our inductive methods for determining beliefs, and notes that we cannot decide how serious a problem underdetermination poses without specifying (as Laudan and Leplin do not) the inductive methods we are considering. Earman regards some version of Bayesianism as our most promising form of inductive methodology, and he proceeds to show that challenges to the long-run reliability of our Bayesian methods can be motivated by the empirical indistinguishability (in several different and precisely specified senses) of hypotheses stated in any language richer than that of the evidence itself, and that these challenges do not amount simply to general skepticism about those inductive methods. In other words, he shows that there are more reasons to worry about underdetermination concerning inferences to hypotheses about unobservables than to, say, inferences about unobserved observables. He also goes on to argue that at least two genuine cosmological theories have serious, nonskeptical, and nonparasitic empirical equivalents: the first essentially replaces the gravitational field in Newtonian mechanics with curvature in spacetime itself, [ 9 ] while the second recognizes that Einstein’s General Theory of Relativity permits cosmological models exhibiting different global topological features which cannot be distinguished by any evidence inside the light cones of even idealized observers who live forever. [ 10 ] And he suggests that “the production of a few concrete examples is enough to generate the worry that only a lack of imagination on our part prevents us from seeing comparable examples of underdetermination all over the map” (1993, 31) even as he concedes that his case leaves open just how far the threat of underdetermination extends (1993, 36).

Most philosophers of science, however, have not embraced the idea that it is only lack of imagination which prevents us from finding empirical equivalents to our scientific theories generally. They note that the convincing examples of empirical equivalents we do have are all drawn from a single domain of highly mathematized scientific theorizing in which the background constraints on serious theoretical alternatives are far from clear, and suggest that it is therefore reasonable to ask whether even a small handful of such examples should make us believe that there are probably empirical equivalents to most of our scientific theories most of the time. They concede that it is always possible that there are empirical equivalents to even our best scientific theories concerning any domain of nature, but insist that we should not be willing to suspend belief in any particular theory until some convincing alternative to it can actually be produced: as Philip Kitcher puts it, “give us a rival explanation, and we’ll consider whether it is sufficiently serious to threaten our confidence” (1993, 154; see also Leplin 1997, Achinstein 2002). That is, these thinkers insist that until we are able to actually construct an empirically equivalent alternative to a given theory, the bare possibility that such equivalents exist is insufficient to justify suspending belief in the best theories we do have. For this same reason most philosophers of science are unwilling to follow van Fraassen into what they regard as constructive empiricism’s unwarranted epistemic modesty. Even if van Fraassen is right about the most minimal beliefs we must hold in order to take full advantage of our scientific theories, most thinkers do not see why we should believe the least we can get away with rather than believing the most we are entitled to by the evidence we have.

Champions of contrastive underdetermination have most frequently responded by trying to establish that all theories have empirical equivalents, typically by proposing something like an algorithmic procedure for generating such equivalents from any theory whatsoever. Stanford (2001, 2006) suggests that these efforts to prove that all our theories must have empirical equivalents fall roughly but reliably into global and local varieties, and that neither makes a convincing case for a distinctive scientific problem of contrastive underdetermination. Global algorithms are well-represented by Andre Kukla’s (1996) suggestion that from any theory T we can immediately generate such empirical equivalents as T′ (the claim that T’s observable consequences are true, but T itself is false), T″ (the claim that the world behaves according to T when observed, but some specific incompatible alternative otherwise), and the hypothesis that our experience is being manipulated by powerful beings in such a way as to make it appear that T is true. But such possibilities, Stanford argues, amount to nothing more than the sort of Evil Deceiver to which Descartes appealed in order to doubt any of his beliefs that could possibly be doubted (see Section 1, above). Such radically skeptical scenarios pose an equally powerful (or powerless) challenge to any knowledge claim whatsoever, no matter how it is arrived at or justified, and thus pose no special problem or challenge for beliefs offered to us by theoretical science. If global algorithms like Kukla’s are the only reasons we can give for taking underdetermination seriously in a scientific context, then there is no distinctive problem of the underdetermination of scientific theories by data, only a salient reminder of the irrefutability of classically Cartesian or radical skepticism. [ 11 ]

In contrast to such global strategies for generating empirical equivalents, local algorithmic strategies instead begin with some particular scientific theory and proceed to generate alternative versions that will be equally well supported by all possible evidence. This is what van Fraassen does with the example of Newtonian cosmology, showing that an infinite variety of supposed empirical equivalents can be produced by ascribing different constant absolute velocities to the universe as a whole. But Stanford suggests that empirical equivalents generated in this way are also insufficient to show that there is a distinctive and genuinely troubling form of underdetermination afflicting scientific theories, because they rely on simply saddling particular scientific theories with further claims for which those theories themselves (together with whatever background beliefs we actually hold) imply that we cannot have any evidence. Such empirical equivalents invite the natural response that they simply tack on to our theories further commitments that are or should be no part of those theories themselves. Such claims, it seems, should simply be excised from our theories, leaving over just the claims that sensible defenders would have held were all we were entitled to believe by the evidence in any case. In van Fraassen’s Newtonian example, for instance, this could be done simply by undertaking no commitment concerning the absolute velocity and direction (or lack thereof) of the universe as a whole. Note also that if we believe a given scientific theory when one of the empirical equivalents we could generate from it by the local algorithmic strategy is correct instead, most of what we originally believed will nonetheless turn out to be straightforwardly true.

Stanford (2001, 2006) concludes that no convincing general case has been made for the presumption that there are empirically equivalent rivals to all or most scientific theories, or to any theories besides those for which such equivalents can actually be constructed. But he goes on to insist that empirical equivalents are no essential part of the case for a significant problem of contrastive underdetermination. Our efforts to confirm scientific theories, he suggests, are no less threatened by what Larry Sklar (1975, 1981) has called “transient” underdetermination, that is, theories which are not empirically equivalent but are equally (or at least reasonably) well confirmed by all the evidence we happen to have in hand at the moment, so long as this transient predicament is also “recurrent”, that is, so long as we think that there is (probably) at least one such (fundamentally distinct) alternative available—and thus the transient predicament re-arises—whenever we are faced with a decision about whether to believe a given theory at a given time. Stanford argues that a convincing case for contrastive underdetermination of this recurrent, transient variety can indeed be made, and that the evidence for it is available in the historical record of scientific inquiry itself.

Stanford concedes that present theories are not transiently underdetermined by the theoretical alternatives we have actually developed and considered to date: we think that our own scientific theories are considerably better confirmed by the evidence than any rivals we have actually produced. The central question, he argues, is whether we should believe that there are well confirmed alternatives to our best scientific theories that are presently unconceived by us. And the primary reason we should believe that there are, he claims, is the long history of repeated transient underdetermination by previously unconceived alternatives across the course of scientific inquiry. In the progression from Aristotelian to Cartesian to Newtonian to contemporary mechanical theories, for instance, the evidence available at the time each earlier theory dominated the practice of its day also offered compelling support for each of the later alternatives (unconceived at the time) that would ultimately come to displace it. Stanford’s “New Induction” over the history of science claims that this situation is typical; that is, that “we have, throughout the history of scientific inquiry and in virtually every scientific field, repeatedly occupied an epistemic position in which we could conceive of only one or a few theories that were well confirmed by the available evidence, while subsequent inquiry would routinely (if not invariably) reveal further, radically distinct alternatives as well confirmed by the previously available evidence as those we were inclined to accept on the strength of that evidence” (2006, 19). In other words, Stanford claims that in the past we have repeatedly failed to exhaust the space of fundamentally distinct theoretical possibilities that were well confirmed by the existing evidence, and that we have every reason to believe that we are probably also failing to exhaust the space of such alternatives that are well confirmed by the evidence we have at present. 
Much of the rest of his case is taken up with discussing historical examples illustrating that earlier scientists did not simply ignore or dismiss, but instead genuinely failed to conceive of the serious, fundamentally distinct theoretical possibilities that would ultimately come to displace the theories they defended, only to be displaced in turn by others that were similarly unconceived at the time. He concludes that “the history of scientific inquiry itself offers a straightforward rationale for thinking that there typically are alternatives to our best theories equally well confirmed by the evidence, even when we are unable to conceive of them at the time” (2006, 20; for reservations and criticisms concerning this line of argument, see Magnus 2006, 2010; Godfrey-Smith 2008; Chakravartty 2008; Devitt 2011; Ruhmkorff 2011; Lyons 2013). Stanford concedes, however, that the historical record can offer only fallible evidence of a distinctive, general problem of contrastive scientific underdetermination, rather than the kind of deductive proof that champions of the case from empirical equivalents have typically sought. Thus, claims and arguments about the various forms that underdetermination may take, their causes and consequences, and the further significance they hold for the scientific enterprise as a whole continue to evolve in the light of ongoing controversy, and the underdetermination of scientific theory by evidence remains very much a live and unresolved issue in the philosophy of science.

  • Achinstein, P., 2002, “Is There A Valid Experimental Argument for Scientific Realism?”, Journal of Philosophy , 99: 470–495.
  • Bloor, D., 1981 [1976], Knowledge and Social Imagery , Chicago: University of Chicago Press, 2nd edition.
  • Belot, G., 2015, “Down to Earth Underdetermination”, Philosophy and Phenomenological Research , 91: 455–464.
  • Bonk, T., 2008, Underdetermination: An Essay on Evidence and the Limits of Natural Knowledge , Dordrecht, The Netherlands: Springer.
  • Butterfield, J., 2014, “On Underdetermination in Cosmology”, Studies in History and Philosophy of Modern Physics , 46: 57–69.
  • Carman, C., 2005, “The Electrons of the Dinosaurs and the Center of the Earth”, Studies in History and Philosophy of Science , 36: 171–174.
  • Chakravartty, A., 2008, “What You Don’t Know Can’t Hurt You: Realism and the Unconceived”, Philosophical Studies , 137: 149–158.
  • Cleland, C., 2002, “Methodological and Epistemic Differences Between Historical Science and Experimental Science”, Philosophy of Science , 69: 474–496.
  • Collins, H., 1992 [1985], Changing Order: Replication and Induction in Scientific Practice , Chicago: University of Chicago Press, 2nd edition.
  • Currie, A., 2018, Rock, Bone, and Ruin , Cambridge, MA: MIT Press.
  • Currie, A. and Sterelny, K., 2017, “In Defence of Story-telling”, Studies in History and Philosophy of Science (Part A), 62: 12–21.
  • Descartes, R., [1640] 1996, Meditations on First Philosophy , trans. by John Cottingham, Cambridge: Cambridge University Press.
  • Devitt, M., 2011, “Are Unconceived Alternatives a Problem for Scientific Realism”, Journal for General Philosophy of Science , 42: 285–293.
  • Duhem, P., [1914] 1954, The Aim and Structure of Physical Theory , trans. from 2nd ed. by P. W. Wiener; originally published as La Théorie Physique: Son Objet et sa Structure (Paris: Marcel Riviera & Cie.), Princeton, NJ: Princeton University Press.
  • Earman, J., 1993, “Underdetermination, Realism, and Reason”, Midwest Studies in Philosophy , 18: 19–38.
  • Feyerabend, P., 1975, Against Method , London: Verso.
  • Fletcher, S.C., 2021, “The Role of Replication in Psychological Science”, European Journal for Philosophy of Science , 11: 1–19.
  • Forber, P. and Griffith, E., 2011, “Historical Reconstruction: Gaining Epistemic Access to the Deep Past”, Philosophy and Theory in Biology , 3. doi:10.3998/ptb.6959004.0003.003
  • Gillies, D., 1993, “The Duhem Thesis and the Quine Thesis”, in Philosophy of Science in the Twentieth Century , Oxford: Blackwell Publishers, pp. 98–116.
  • Glymour, C., 1970, “Theoretical Equivalence and Theoretical Realism”, Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1970: 275–288.
  • –––, 1977, “The Epistemology of Geometry”, Noûs , 11: 227–251.
  • –––, 1980, Theory and Evidence , Princeton, NJ.: Princeton University Press.
  • –––, 2013, “Theoretical Equivalence and the Semantic View of Theories”, Philosophy of Science , 80: 286–297.
  • Godfrey-Smith, P., 2008, “Recurrent, Transient Underdetermination and the Glass Half-Full”, Philosophical Studies , 137: 141–148.
  • Goodman, N., 1955, Fact, Fiction, and Forecast , Indianapolis: Bobbs-Merrill.
  • Halvorson, H., 2012, “What Scientific Theories Could Not Be”, Philosophy of Science , 79: 183–206.
  • –––, 2013, “The Semantic View, If Plausible, Is Syntactic”, Philosophy of Science , 80: 475–478.
  • Hesse, M., 1980, Revolutions and Reconstructions in the Philosophy of Science , Brighton: Harvester Press.
  • Kitcher, P., 1993, The Advancement of Science , New York: Oxford University Press.
  • Kovaka, K., 2019, “Underdetermination and Evidence in the Developmental Plasticity Debate”, British Journal for the Philosophy of Science , 70: 127–152.
  • Kuhn, T., [1962] 1996, The Structure of Scientific Revolutions , Chicago: University of Chicago Press, 3rd edition.
  • Kukla, A., 1996, “Does Every Theory Have Empirically Equivalent Rivals?”, Erkenntnis , 44: 137–166.
  • Lakatos, I., 1970, “Falsification and the Methodology of Scientific Research Programmes”, in Criticism and the Growth of Knowledge , I. Lakatos and A. Musgrave (eds.), Cambridge: Cambridge University Press, pp. 91–196.
  • Laudan, L., 1990, “Demystifying Underdetermination”, in Scientific Theories , C. Wade Savage (ed.), (Series: Minnesota Studies in the Philosophy of Science, vol. 14), Minneapolis: University of Minnesota Press, pp. 267–297.
  • Laudan, L. and Leplin, J., 1991, “Empirical Equivalence and Underdetermination”, Journal of Philosophy , 88: 449–472.
  • Leplin, J., 1997, A Novel Defense of Scientific Realism , New York: Oxford University Press.
  • Longino, H., 1990, Science as Social Knowledge , Princeton: Princeton University Press.
  • –––, 2002, The Fate of Knowledge , Princeton: Princeton University Press.
  • Lyons, T., 2013, “A Historically Informed Modus Ponens Against Scientific Realism: Articulation, Critique, and Restoration”, International Studies in the Philosophy of Science , 27: 369–392.
  • Magnus, P., 2006, “What’s New About the New Induction?”, Synthese , 148: 295–301.
  • –––, 2010, “Inductions, Red Herrings, and the Best Explanation for the Mixed Record of Science”, British Journal for the Philosophy of Science , 61: 803–819.
  • Manchak, J., 2009, “Can We Know the Global Structure of Spacetime?”, Studies in History and Philosophy of Modern Physics , 40: 53–56.
  • Mill, J. S., [1867] 1900, A System of Logic, Ratiocinative and Inductive, Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation , New York: Longmans, Green, and Co.
  • Miyake, T., 2015, “Underdetermination and Decomposition in Kepler’s Astronomia Nova”, Studies in History and Philosophy of Science , 50: 20–27.
  • Nelson, L. H., 2022, “Underdetermination, Holism, and Feminist Philosophy of Science”, Synthese , 200(50), first published online 27 February 2022. doi:10.1007/s11229-022-03569-2
  • Norton, J., 2008, “Must Evidence Underdetermine Theory?”, in The Challenge of the Social and the Pressure of Practice: Science and Values Revisited , M. Carrier, D. Howard, and J. Kourany (eds.), Pittsburgh: University of Pittsburgh Press, pp. 17–44.
  • Okasha, S., 2000, “The Underdetermination of Theory by Data and the ‘Strong Programme’ in the Sociology of Knowledge”, International Studies in the Philosophy of Science , 14: 283–297.
  • –––, 2002, “Underdetermination, Holism, and the Theory/Data Distinction”, The Philosophical Quarterly , 52: 303–319.
  • Pietsch, W., 2012, “Hidden Underdetermination: A Case Study in Classical Electrodynamics”, International Studies in the Philosophy of Science , 26: 125–151.
  • Quine, W. V. O., 1951, “Two Dogmas of Empiricism”, reprinted in From a Logical Point of View , 2nd edition, Cambridge, MA: Harvard University Press, pp. 20–46.
  • –––, 1955, “Posits and Reality”, reprinted in The Ways of Paradox and Other Essays , 2nd edition, Cambridge, MA: Harvard University Press, pp. 246–254.
  • –––, 1969, “Epistemology Naturalized”, in Ontological Relativity and Other Essays , New York: Columbia University Press, pp. 69–90.
  • –––, 1975, “On Empirically Equivalent Systems of the World”, Erkenntnis , 9: 313–328.
  • –––, 1990, “Three Indeterminacies”, in Perspectives on Quine , R. B. Barrett and R. F. Gibson, (eds.), Cambridge, MA: Blackwell, pp. 1–16.
  • Radick, G., 2023, Disputed Inheritance: The Battle over Mendel and the Future of Biology , Chicago: University of Chicago Press.
  • Ruhmkorff, S., 2011, “Some Difficulties for the Problem of Unconceived Alternatives”, Philosophy of Science , 78: 875–886.
  • Shapin, S. and Schaffer, S., 1985, Leviathan and the Air Pump , Princeton: Princeton University Press.
  • Sklar, L., 1975, “Methodological Conservatism”, Philosophical Review , 84: 384–400.
  • –––, 1981, “Do Unborn Hypotheses Have Rights?”, Pacific Philosophical Quarterly , 62: 17–29.
  • –––, 1982, “Saving the Noumena”, Philosophical Topics , 13: 49–72.
  • Stanford, P. K., 2001, “Refusing the Devil’s Bargain: What Kind of Underdetermination Should We Take Seriously?”, Philosophy of Science , 68: S1–S12.
  • –––, 2006, Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives , New York: Oxford University Press.
  • –––, 2010, “Getting Real: The Hypothesis of Organic Fossil Origins”, The Modern Schoolman , 87: 219–243.
  • Tulodziecki, D., 2012, “Epistemic Equivalence and Epistemic Incapacitation”, British Journal for the Philosophy of Science , 63: 313–328.
  • –––, 2013, “Underdetermination, Methodological Practices, and Realism”, Synthese: An International Journal for Epistemology, Methodology and Philosophy of Science , 190: 3731–3750.
  • Turner, D., 2005, “Local Underdetermination in Historical Science”, Philosophy of Science , 72: 209–230.
  • –––, 2007, Making Prehistory: Historical Science and the Scientific Realism Debate , Cambridge: Cambridge University Press.
  • van Fraassen, B., 1980, The Scientific Image , Oxford: Oxford University Press.
  • Werndl, C., 2013, “On Choosing Between Deterministic and Indeterministic Models: Underdetermination and Indirect Evidence”, Synthese: An International Journal for Epistemology, Methodology and Philosophy of Science , 190: 2243–2265.


Related Entries

confirmation | constructive empiricism | Duhem, Pierre | epistemology: naturalism in | feminist philosophy, interventions: epistemology and philosophy of science | Feyerabend, Paul | induction: problem of | Quine, Willard Van Orman | scientific knowledge: social dimensions of | scientific realism


I have benefited from discussing both the organization and content of this article with many people including audiences and participants at the 2009 Pittsburgh Workshop on Underdetermination and the 2009 Southern California Philosophers of Science retreat, as well as the participants in graduate seminars both at UC Irvine and Pittsburgh. Special thanks are owed to John Norton, P. D. Magnus, John Manchak, Bennett Holman, Penelope Maddy, Jeff Barrett, David Malament, John Earman, and James Woodward.

Copyright © 2023 by Kyle Stanford <stanford@uci.edu>


The Duhem-Quine Thesis


Jerzy Kmita

Part of the book series: Synthese Library (volume 191)


As we noted in the preceding chapter, W. V. Quine rejects every construction of the nature of Carnap’s concept of an observational language (either strict or extended) as substantively incorrect since—in his opinion—it is based on the false assumption that the vocabulary serving to verbalize our knowledge about the world also includes predicates, elsewhere referred to as primitively observational, each of which is equipped, independently of all other predicates and therefore in a ‘natural’ way, with a denotation in the form of a definite observable relation. This assumption is false because first of all, there are no such terms in our language which denote objects (in the wider meaning of the word) of one kind or another in a manner totally independent of that which all the remaining defined terms denote. Secondly, the denotations of these terms, in particular of the terms regarded by Carnap as primitive observational predicates, are not assigned to the terms directly and ‘naturally’, but rather on the basis of a specified set of ontological-semantic assumptions. It is this very set of assumptions that designates the complex distribution, so to speak, of references of individual elements of the lexical system. A distribution of this kind can be carried out in a variety of ways—corresponding to a variety of systems of ontological-semantic assumptions, each of which is capable of accounting for the purely empirical data represented by ‘stimulus meanings’, that is by types of physically characterized situations determining positive or negative replies to appropriate occasion sentences. Thus, the choice of one of the possible ontological-semantic systems is empirically arbitrary. It is dictated by considerations of a formal-technical nature. This is true both when we acquire our own mother tongue and when we conduct linguistic research into some foreign language.

W. V. Quine, ‘Two Dogmas of Empiricism’, in From a Logical Point of View: 9 Logico-Philosophical Essays , Harvard University Press, Cambridge, 1964, p. 44.


J. Giedymin, ‘Odpowiedź’ (‘Reply’), in Teoria i doświadczenie (Theory and Experience) , Warszawa, 1966, p. 165.

W. V. Quine, Ontological Relativity and Other Essays , New York-London, 1969, p. 29.


Adam Mickiewicz University, Poznań, Poland

Jerzy Kmita (Professor of Logic and Methodology of Science)

Copyright information

© 1988 PWN—Polish Scientific Publishers, Warszawa


Kmita, J. (1988). The Duhem-Quine Thesis. In: Problems in Historical Epistemology. Synthese Library, vol 191. Springer, Dordrecht.



Publisher: Springer, Dordrecht

Print ISBN: 978-94-010-7136-9

Online ISBN: 978-94-009-1421-6



Than Christopoulos, May 30, 2023

The Duhem-Quine Thesis and the Critique of Falsificationism: Rethinking Theory Evaluation

Many Christians face accusations of making their worldview unfalsifiable by offering responses to objections that opponents perceive as post-hoc rationalizations. Is this truly the decisive blow to Christianity that some claim it to be? Or do such accusations stem from a misunderstanding of how effective theory comparison works? To explore this question, we will delve into a concept known as the Duhem-Quine thesis, which sheds light on how Christianity possesses the explanatory flexibility to accommodate data that might initially appear to challenge it. Through this investigation, we aim to clarify the role of reason in understanding how Christianity can reconcile seemingly contradictory information.


The Duhem-Quine thesis, also known as the Duhem-Quine problem or the underdetermination of theory by evidence, is a concept in the philosophy of science that addresses the relationship between theories and evidence. It is named after the French physicist Pierre Duhem and the American philosopher Willard Van Orman Quine, both of whom made significant contributions to our understanding of the complex nature of scientific theories and the challenges involved in testing them.

One of Duhem's most influential works, "La Théorie physique, son objet et sa structure" (The Aim and Structure of Physical Theory), published in 1906, laid the foundation for what would later become known as the Duhem-Quine thesis. In this work, Duhem explored the fundamental aspects of scientific theories and their interconnectedness, focusing on the holistic nature of theory evaluation.

Duhem argued that scientific theories are not isolated entities but are composed of a network of interconnected hypotheses, auxiliary assumptions, and background knowledge. He emphasized that when testing a hypothesis or evaluating a theory, it is crucial to consider the entire theoretical framework rather than isolating individual components. This perspective challenged the traditional notion of straightforward hypothesis testing and called for a more comprehensive evaluation of scientific theories.

Duhem's work paved the way for subsequent developments in the philosophy of science, leading to the formulation of the Duhem-Quine thesis, which expanded on Duhem's ideas and highlighted the underdetermination of theory by evidence.

While Duhem's original work focused primarily on the physical sciences, his ideas have had far-reaching implications for the evaluation of scientific, historical, and even philosophical theories more broadly. His emphasis on the holistic nature of theories and the interconnectedness of hypotheses and assumptions has influenced various fields, including the philosophy of religion.

Therefore, when discussing the Duhem-Quine thesis, it is important to recognize the seminal contribution of Pierre Duhem through his influential work "La Théorie physique, son objet et sa structure." Duhem's insights into the complex structure of scientific theories and his recognition of the interdependencies within them laid the groundwork for a deeper understanding of the challenges inherent in theory evaluation and the development of the Duhem-Quine thesis as a significant concept in the philosophy of science.

So What Exactly Is The Point?

The thesis suggests that it is impossible to test a single scientific hypothesis in isolation because any experiment or observation is influenced by a network of assumptions and auxiliary hypotheses. According to Duhem and Quine, when a hypothesis is tested and produces a result that conflicts with predictions, it is difficult to determine which part of the network of beliefs and assumptions is responsible for the discrepancy. This means that it is challenging to pinpoint whether a particular hypothesis or auxiliary assumption is false based solely on empirical evidence.
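The logical core of this point can be put schematically (a standard textbook reconstruction, not Duhem's or Quine's own notation). A hypothesis H yields an observable prediction O only in conjunction with auxiliary assumptions A1, ..., An, so a failed prediction refutes only the conjunction:

```latex
(H \land A_1 \land \cdots \land A_n) \rightarrow O,
\qquad
\neg O \;\therefore\; \neg (H \land A_1 \land \cdots \land A_n)
```

Modus tollens tells us that the conjunction is false, but not which conjunct to give up: logic alone leaves open the choice between rejecting H and revising one of the auxiliary assumptions.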

Keep this in mind when we return to apply this to the debate about God’s existence…

The Duhem-Quine thesis challenges the traditional notion of straightforward hypothesis testing and highlights the holistic nature of scientific theories. It suggests that theories are not confirmed or refuted in isolation, but rather as part of a larger web of beliefs and assumptions. Therefore, if empirical evidence contradicts a theory, scientists have the flexibility to revise or modify any component of the theory, including auxiliary hypotheses, background assumptions, or even the entire theoretical framework.

This thesis has important implications for the philosophy of science, as it emphasizes the role of scientific communities in evaluating and revising theories based on evidence. It also highlights the inherent uncertainty and subjectivity involved in scientific inquiry, as scientists must make judgments and decisions about which parts of the theoretical network to modify when faced with conflicting evidence.

It's worth noting that while the Duhem-Quine thesis challenges the idea of conclusive hypothesis testing, it does not imply that all theories are equally valid or that science is arbitrary. Rather, it underscores the complexity of theory confirmation and the need for critical evaluation and ongoing refinement in scientific practice.

The Duhem-Quine thesis challenges the traditional view of falsificationism (the claim that the researcher's main task is to invalidate a theory by observation or experiment, treated as a definitive method of theory evaluation). By highlighting the holistic nature of scientific theories and the interconnectedness of hypotheses and auxiliary assumptions, the Duhem-Quine thesis sheds light on the limitations of falsificationism. On this view, falsificationism is an outdated and inadequate approach to theory evaluation: it oversimplifies the complexity of scientific inquiry and fails to account for the subjective and context-dependent aspects of theory assessment.


We will argue that falsificationism is therefore false, and that this is why the accusation that Christians render their worldview unfalsifiable by offering responses opponents perceive as post-hoc rationalizations is not the blow some people think it is. Falsificationism, popularized by Karl Popper, has long been regarded as a cornerstone of scientific methodology. It posits that scientific theories should be evaluated based on their ability to be falsified through empirical evidence (Popper, K. R. (1959). The Logic of Scientific Discovery. Routledge.). However, the Duhem-Quine thesis challenges this notion by highlighting the inherent complexity and interdependence of scientific hypotheses and auxiliary assumptions. As noted by philosopher Peter Lipton, "The Duhem-Quine thesis poses a significant challenge to the simplistic idea that theories can be tested and falsified in isolation" (Lipton, P. (2004). Inference to the Best Explanation. Routledge.). The reasons why can be broken into four main parts.

The Holistic Nature of Scientific Theories:

Scientific theories consist of a network of interconnected hypotheses, auxiliary assumptions, and background knowledge. The Duhem-Quine thesis argues that when a hypothesis is tested and conflicts with predictions, it is difficult to isolate which specific component of the theoretical network is responsible for the discrepancy. As philosopher Paul Hoyningen-Huene explains, "The Duhem-Quine thesis emphasizes the holistic character of scientific theories and the fact that a single hypothesis cannot be tested in isolation" (Hoyningen-Huene, P. (2006). Reconstructing Scientific Revolutions: Thomas S. Kuhn's Philosophy of Science. University of Chicago Press.). This holistic perspective suggests that the failure of a prediction does not necessarily imply that the specific hypothesis being tested is false. Instead, it calls for a critical examination of the entire theoretical framework and auxiliary assumptions.

Underdetermination of Theory by Evidence:

The underdetermination of theory by evidence, a key aspect of the Duhem-Quine thesis, further undermines the validity of falsificationism. Since evidence alone cannot unequivocally identify the cause of a discrepancy between theory and observation, scientists are faced with multiple plausible explanations. As philosopher Imre Lakatos points out, "The Duhem-Quine thesis reveals that a single falsifying observation cannot definitively refute a theory, as there are always alternative hypotheses and auxiliary assumptions that can be modified to accommodate the conflicting evidence" (Lakatos, I. (1970). Falsification and the Methodology of Scientific Research Programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the Growth of Knowledge (pp. 91-196). Cambridge University Press.). Consequently, the falsification of a specific hypothesis does not necessarily lead to the rejection of an entire theory. Rather, it prompts scientists to consider alternative explanations and revise auxiliary assumptions, rendering falsificationism an incomplete and limited methodology.

Context-Dependent Theory Evaluation:

Falsificationism assumes that theories can be evaluated independently of the wider scientific context. However, the Duhem-Quine thesis emphasizes the subjectivity and context-dependency of theory evaluation. The choice of which hypotheses or auxiliary assumptions to modify or discard when faced with conflicting evidence is influenced by various subjective factors, such as scientific judgment, theoretical preferences, axiology, and societal norms. As philosopher Bas C. van Fraassen argues, "The Duhem-Quine thesis highlights the subjective nature of theory evaluation and the fact that scientists make judgments about which components of the theoretical network to revise based on a range of subjective factors" (van Fraassen, B. C. (1980). The Scientific Image. Oxford University Press.). This subjectivity calls into question the objectivity and universality of falsificationism as a theory evaluation tool.

Refinement and Evolution of Scientific Theories:

The Duhem-Quine thesis encourages a more nuanced approach to theory evaluation, one that recognizes the iterative and evolutionary nature of scientific inquiry. Instead of viewing the falsification of a single hypothesis as a definitive rejection of a theory, the thesis suggests that scientific theories are subject to continual refinement and revision. As physicist and philosopher Nancy Cartwright argues, "The Duhem-Quine thesis promotes the idea that scientific theories are not static entities but rather dynamic frameworks that can be refined and modified in response to empirical evidence and theoretical advancements" (Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science. Cambridge University Press.). Scientists have the flexibility to modify auxiliary assumptions, reformulate hypotheses, or even reconstruct the entire theoretical framework in light of new evidence and theoretical insights.

Counter Examples to Falsificationism:

With this in place, let us consider a few examples that demonstrate why this view of theory comparison should be preferred over falsificationism.

Several historical examples provide concrete illustrations of the limitations of falsificationism.

The phenomenon of black-body radiation

One example is the phenomenon of black-body radiation, which posed a challenge to classical physics. Rather than abandoning the entire theory, physicists developed new auxiliary assumptions and theoretical frameworks, leading to the formulation of quantum mechanics (Planck, M. (1914). The Theory of Heat Radiation. P. Blakiston's Son & Co.).

In the late 19th century, physicists were attempting to understand the radiation emitted by an idealized object known as a black body. According to classical physics, the energy emitted by a black body should increase without bound as the frequency of radiation increases, which is known as the ultraviolet catastrophe (Planck, 1914). However, experimental observations contradicted this prediction and showed that the energy distribution followed a different pattern.

To address this discrepancy, Max Planck introduced a new auxiliary assumption in 1900 that revolutionized our understanding of black-body radiation. Planck proposed that energy could only be emitted or absorbed in discrete packets, or quanta, rather than continuously (Planck, 1914). This assumption, now known as Planck's quantum hypothesis, provided a successful explanation for the observed energy distribution and laid the foundation for the development of quantum mechanics.
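As a rough numerical illustration (a sketch for this essay, not part of Planck's original presentation; constants are rounded), we can compare the classical Rayleigh-Jeans radiance, which grows without bound with frequency, against Planck's law, which agrees with it at low frequency but is suppressed at high frequency:

```python
import math

# Physical constants in SI units (rounded)
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def rayleigh_jeans(nu, T):
    """Classical spectral radiance: grows as nu**2, without bound."""
    return 2.0 * nu**2 * k * T / c**2

def planck(nu, T):
    """Planck's law: the quantum hypothesis suppresses high frequencies."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

T = 5000.0  # temperature in kelvin
for nu in (1e13, 1e14, 1e15):  # increasing frequency in Hz
    print(f"nu = {nu:.0e} Hz:  classical {rayleigh_jeans(nu, T):.3e}  "
          f"Planck {planck(nu, T):.3e}")
```

At 10^13 Hz the two laws nearly agree; by 10^15 Hz the classical value keeps climbing while Planck's prediction has collapsed toward zero. That runaway classical growth is the "ultraviolet catastrophe" that the quantum hypothesis resolves.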

The significance of this example lies in the fact that rather than rejecting the entire classical physics framework, physicists introduced a new auxiliary assumption that led to a paradigm shift in our understanding of the fundamental nature of energy and matter. This example demonstrates the flexibility of scientific theories to adapt and incorporate new insights, even when faced with conflicting evidence.

To delve deeper into the topic, a scholarly paper by Planck himself, titled "The Theory of Heat Radiation," provides an in-depth exploration of the development of his quantum hypothesis and its implications for our understanding of black-body radiation (Planck, 1914). This seminal work discusses the experimental evidence, the challenges posed by classical physics, and the formulation of the quantum hypothesis as a solution.

By examining the historical context and the specific scientific advancements related to black-body radiation, we gain a deeper appreciation of how the Duhem-Quine thesis challenges the simplistic view of falsificationism. Rather than outright falsifying classical physics, the observed discrepancies prompted the development of new auxiliary assumptions and theoretical frameworks, leading to a more comprehensive understanding of the physical phenomena involved.

Newton’s gravitational theory

Consider the case of Newton’s gravitational theory. Using Newtonian gravitation, Alexis Bouvard published tables in 1821 predicting the orbit of Uranus, then believed to be the planet farthest from the Sun, but observations consistently showed that the actual trajectory deviated from this prediction. No serious scientist thought that a theory as well confirmed as Newton’s should be immediately rejected because of a failed prediction. Rather, many revised some of the auxiliary assumptions, including the assumption that Uranus was in fact the outermost planet. Two scientists working independently, Adams and Leverrier, posited that there must be another planet whose position and mass were affecting Uranus’ trajectory. They calculated where this planet was supposed to be and how massive it would be. Eventually, the planet Neptune was discovered by direct observation.

The solar neutrino problem

The solar neutrino problem also illustrates this point. Neutrinos are nearly massless particles that interact with ordinary matter only through the weak nuclear force (and gravity), so they can pass through almost any massive object. Our sun emits a vast number of neutrinos from its core, and analyzing this flux is one of the main ways of studying the sun’s inner workings. In the 1960s, given what was then known about the sun and about neutrinos, scientists predicted a certain flux of solar neutrinos, but experiments detected only about a third of that number. This discrepancy became known as the solar neutrino problem. The hypothesis under test was the Standard Solar Model, and the auxiliary hypotheses included assumptions about the nature of neutrinos, about the instruments measuring the flux (the first detector was a huge tank of dry-cleaning fluid), and about the experimental setup as a whole. Scientists didn’t simply reject the Standard Solar Model in the face of this discrepancy (after all, it was well confirmed by many other observations and experiments) but scrutinized these other assumptions. They hypothesized that neutrinos were more complex than initially thought: that they come in several kinds and oscillate between them in flight, so that most of the neutrinos arriving from the sun were of kinds the original detectors could not see. This hypothesis was eventually confirmed by experiments in the late 1990s and early 2000s.
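The arithmetic behind the resolution can be sketched in a few lines. This is a deliberately crude back-of-envelope model: it assumes the sun's electron neutrinos mix roughly evenly among the three flavours on the way to Earth and that the early detectors saw only the electron flavour (real survival probabilities are energy-dependent, so one third is only an approximate average).

```python
# Back-of-envelope sketch of the solar neutrino deficit under the
# (simplified) oscillation hypothesis: electron neutrinos produced in
# the sun arrive mixed roughly evenly among three flavours, and the
# early chlorine detectors registered only the electron flavour.
N_FLAVOURS = 3
predicted_flux = 1.0                    # normalised Standard Solar Model prediction
detectable_fraction = 1.0 / N_FLAVOURS  # fraction still in the detectable flavour

observed = predicted_flux * detectable_fraction
print(f"Expected fraction detected: {observed:.2f}")  # about one third
```

The point for the Duhem-Quine thesis: the "missing" two thirds could be blamed either on the core hypothesis (the Standard Solar Model) or on an auxiliary assumption (that neutrinos do not change flavour), and the data alone could not say which.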

Did Adams, Leverrier, and the scientists working on the solar neutrino problem proceed in an unscientific manner by not dropping their theories immediately? Surely they didn’t! So there must be something wrong with Popper’s falsifiability criterion. It is true that a theory that consistently fails its tests should eventually be abandoned, but that concession already takes us away from Popper’s criterion: how many disconfirmations would one need in order to reject a theory? The answer depends on each particular situation, and so, contrary to Popper’s suggestion, a failed prediction by itself is not enough to decide whether a theory should be rejected or retained.

Dark Matter

There are also modern examples that illustrate the application of the Duhem-Quine thesis in contemporary scientific inquiry. One such example is found in the field of cosmology, specifically in the study of dark matter.

Dark matter is a hypothetical form of matter that does not interact with light or other electromagnetic radiation, making it invisible to direct detection. Its existence is inferred from its gravitational effects on visible matter and the large-scale structure of the universe. Various hypotheses and theories have been proposed to explain the nature of dark matter, including the existence of new particles beyond the Standard Model of particle physics.

In the quest to detect and understand dark matter, scientists rely on a combination of theoretical models, astrophysical observations, and experimental data. However, due to the elusive nature of dark matter, the complexity of astrophysical systems, and the limitations of observational techniques, the identification and characterization of dark matter remains a challenging endeavor.

The Duhem-Quine thesis is relevant in this context because the evaluation of competing dark matter hypotheses and theories is not solely based on isolated empirical tests. Instead, it involves a network of interdependent assumptions, such as the nature of particle interactions, the distribution of dark matter, and the behavior of gravity on large scales. When a specific dark matter hypothesis is confronted with observational data, it is difficult to pinpoint which assumptions within the broader theoretical framework are responsible for any discrepancies. Consequently, alternative explanations or modifications to auxiliary assumptions are considered to account for the observed phenomena.

For example, in recent years, there have been intriguing discrepancies between the predictions of dark matter simulations and observations of certain galactic structures. These observations, such as the unexpected distribution of dark matter in dwarf galaxies or the "too big to fail" problem, have raised questions about the standard dark matter paradigm. Scientists have proposed alternative explanations, including modifications to the properties of dark matter or the incorporation of additional astrophysical processes, to address these inconsistencies.

The ongoing research on dark matter and the attempts to reconcile theoretical predictions with observational data exemplify the holistic and interconnected nature of scientific theories as highlighted by the Duhem-Quine thesis. It underscores the need for critical evaluation, revision of auxiliary assumptions, and refinement of theoretical frameworks in light of empirical evidence and emerging insights.

While the specific application of the Duhem-Quine thesis in the context of dark matter is complex and subject to ongoing debate, it serves as a modern illustration of the challenges and considerations involved in theory evaluation within a complex scientific domain.

How Does This All Tie In?

Now you may be wondering why any of this matters for the debate over God’s existence and whether Christianity is true.

The Duhem-Quine thesis and its implications for theory evaluation, particularly its emphasis on holism and the underdetermination of theory by evidence, can be applied to the debate about the truth of Christianity. While it is important to note that matters of faith and religious belief extend beyond the realm of scientific inquiry, the Duhem-Quine thesis offers a framework that allows for a nuanced understanding of the complexity and flexibility of Christian theism in response to challenges and criticisms.

Problem of Evil:

One aspect of the Duhem-Quine thesis is its recognition of the holistic nature of scientific theories. Similarly, in the context of Christianity, the belief system encompasses a comprehensive worldview that includes theological doctrines, moral teachings, and explanations for the nature of God, humanity, and the world. When confronted with the problem of evil, which questions how the existence of a benevolent and all-powerful God can be reconciled with the presence of suffering and injustice in the world, the Duhem-Quine framework allows Christians to approach the issue holistically.

By considering the problem of evil within the broader theological framework of Christianity, believers can explore various interconnected aspects, such as free will, the consequences of human choices, the role of suffering in spiritual growth, and the ultimate redemption and restoration of creation. This holistic perspective enables Christians to address the problem of evil not as a standalone challenge to the truth of Christianity but as an integral part of a comprehensive theological narrative that encompasses the entire human experience.

Furthermore, the Duhem-Quine thesis encourages Christians to engage in critical reflection and revision of auxiliary assumptions within their theological framework. This can involve theological debates and discussions that seek to refine and develop responses to the problem of evil, drawing on diverse philosophical, ethical, and theological perspectives. The flexibility provided by the Duhem-Quine framework allows Christians to explore and consider different explanations and solutions, acknowledging the complexity and interconnectedness of their belief system.

Historical Debates:

The history of Christianity is replete with debates and disagreements about its truth claims, ranging from theological doctrines to historical events such as the life, death, and resurrection of Jesus Christ. These debates have involved the critical evaluation of various pieces of evidence, interpretation of historical texts, and philosophical arguments.

Applying the Duhem-Quine thesis to historical debates about the truth of Christianity allows for an understanding of the interconnectedness of different historical and theological claims. Rather than evaluating isolated pieces of evidence or events in isolation, the Duhem-Quine framework prompts scholars and theologians to consider the larger historical and theological context.

For example, the debate surrounding the historical evidence for the resurrection of Jesus Christ requires considering a network of interconnected beliefs, such as the reliability of the Gospel accounts, the theological significance of the resurrection, and the coherence of the overall Christian worldview. The Duhem-Quine framework encourages scholars to engage in a holistic evaluation that takes into account multiple lines of evidence, historical context, and theological implications, allowing for a more nuanced understanding of the debate.

Scientific Insights:

Beyond philosophical and historical debates, scientific discoveries and insights can also be viewed through the lens of the Duhem-Quine thesis within the context of Christian theism. As scientific knowledge advances, new findings may raise questions or appear to challenge certain interpretations or beliefs. However, the Duhem-Quine framework encourages Christians to approach these scientific challenges with a holistic perspective.

For instance, the theory of evolution is often discussed in relation to Christianity, particularly in the context of the creation account in the book of Genesis. The Duhem-Quine thesis invites Christians to evaluate the relationship between scientific theories and their theological understanding in a comprehensive manner. This involves considering the theological richness of creation accounts, the symbolism and metaphorical nature of biblical texts, and the compatibility of evolutionary theory with theological concepts such as divine providence.

By adopting a holistic approach, Christians can engage in a nuanced evaluation that recognizes the limits and strengths of scientific knowledge while exploring how scientific insights can enrich their understanding of the world and their faith. This allows for a fruitful dialogue between science and Christian theology, where both domains contribute to a deeper comprehension of the complexities of existence.

The application of the Duhem-Quine thesis to the debate about the truth of Christianity thus provides a comprehensive framework that acknowledges the complexity and interconnectedness of belief systems. It allows for more nuanced evaluation, critical reflection, and flexibility in addressing challenges, historical debates, and scientific advancements within the context of Christian theism.

Moreover, the Duhem-Quine thesis highlights the flexibility and explanatory power of Christian theism in response to historical challenges. By recognizing the interconnectedness of beliefs within the broader Christian worldview, proponents of Christian theism have the ability to revise auxiliary assumptions, reinterpret historical events, or incorporate new evidence while maintaining the core tenets of their faith. This flexibility allows for a dynamic engagement with historical debates and the incorporation of emerging insights and scholarship.

It is important to note that applying the Duhem-Quine thesis to the debate about the truth of Christianity does not claim to provide conclusive proof or disproof of religious claims; it offers a framework for evaluation, not a verdict.

The argument that Christianity, or any religious belief, is "unfalsifiable" in light of the Duhem-Quine thesis, confirmational holism, and Bayesianism rests on a misunderstanding of the nature of theory evaluation and of the flexibility of these frameworks. While it is true that religious beliefs, by their nature, may not lend themselves to direct empirical testing in the same way as scientific hypotheses, this does not render them unfalsifiable or reduce them to post hoc rationalizations.

Holistic Evaluation and Coherence:

Confirmational holism, as emphasized by the Duhem-Quine thesis, recognizes the interconnectedness and interdependence of beliefs and assumptions within a theoretical framework. In the case of Christianity, the evaluation of its truth claims involves examining the coherence and consistency of its various doctrines, theological concepts, and historical narratives. This holistic evaluation is not a post hoc rationalization but a rigorous assessment of the internal consistency and logical coherence of the belief system.

By considering the broader theological context, Christians can critically evaluate how different elements fit together, ensuring that their beliefs are mutually reinforcing and logically sound. In this process, potential conflicts or inconsistencies can be identified and addressed, leading to a more robust and coherent understanding of their faith.

Bayesianism and Reasoned Evaluation:

Bayesianism, a framework for probabilistic reasoning, provides a valuable tool for theory evaluation, including religious beliefs. Bayesianism recognizes that beliefs are updated based on the available evidence and the assessment of the likelihood of various hypotheses. In the context of Christianity, Bayesian reasoning allows believers to weigh the evidence, consider arguments from philosophy, history, theology, and personal experiences, and make reasoned judgments about the plausibility and coherence of their faith.

Contrary to the notion of post hoc rationalizations, Bayesianism encourages a proactive and critical evaluation of evidence, ensuring that beliefs are not held dogmatically but are open to revision in light of new information. It enables believers to assess the cumulative impact of various pieces of evidence and arguments, and make rational decisions about the credibility and coherence of their beliefs.
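The Bayesian point above can be illustrated with a minimal numerical sketch. All probabilities here are made up for illustration; the sketch only shows the structural claim that how much a failed prediction E should lower confidence in a core hypothesis H depends on the status of the auxiliary assumptions.

```python
# Minimal Bayesian sketch: the same failed prediction E damages the core
# hypothesis H much more when the auxiliary assumptions are solid than
# when a shaky auxiliary could also explain the failure.
# All numbers are illustrative, not drawn from any real case.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.9  # strong prior confidence in the core hypothesis H

# Case 1: auxiliaries are certainly correct, so a failed prediction is
# very unlikely if H is true -> confidence in H collapses.
case1 = posterior(prior, p_e_given_h=0.01, p_e_given_not_h=0.5)

# Case 2: a shaky auxiliary could also explain the failure, so the
# failed prediction is not so unlikely even if H is true -> confidence
# in H survives largely intact.
case2 = posterior(prior, p_e_given_h=0.30, p_e_given_not_h=0.5)

print(f"P(H | E), solid auxiliaries: {case1:.2f}")
print(f"P(H | E), shaky auxiliary:   {case2:.2f}")
```

This is the Duhem-Quine insight in probabilistic dress: blame for a disconfirmation is apportioned across the whole web of hypotheses, and Bayesian updating makes that apportionment explicit rather than ad hoc.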

Exploratory Flexibility and Open Inquiry:

The flexibility inherent in the Duhem-Quine thesis and these frameworks allows for open inquiry and exploration of alternative explanations and interpretations. Rather than being a weakness, this flexibility is a strength that aligns with the practices of science itself.

In scientific inquiry, hypotheses are often refined, modified, or even replaced in response to new evidence and theoretical advancements. Similarly, within the realm of religious belief, Christians have the freedom to engage in critical reflection, refine their understanding of theological concepts, and explore different approaches to address philosophical and historical objections.

This flexibility does not undermine the credibility of Christian theism but fosters a dynamic engagement with intellectual challenges and encourages continuous growth and refinement of theological perspectives.

It is important to note that the evaluation of religious beliefs is multifaceted and extends beyond the boundaries of empirical testing. The frameworks of the Duhem-Quine thesis, confirmational holism, and Bayesianism provide a comprehensive approach that acknowledges the complexity of belief systems, encourages critical evaluation, coherence, and reasoned judgment, and allows for the flexibility necessary to respond to challenges and engage in open inquiry. This holistic and reflective approach does not render religious beliefs unfalsifiable or post hoc rationalizations but provides a robust framework for evaluating their plausibility and coherence in a thoughtful and intellectually rigorous manner.

We have covered a lot of ground here, so here is the basic idea. When atheists argue that Christianity is "unfalsifiable" or that responses to objections are merely "post hoc rationalizations," they overlook the nuances of theory evaluation and the frameworks we employ in scientific and philosophical discourse. The Duhem-Quine thesis, confirmational holism, and Bayesianism offer valuable insights into how we assess the credibility of beliefs, including religious ones.

First, holistic evaluation and coherence are key. Christians engage in a rigorous assessment of the internal consistency and logical coherence of their faith. It's not about post hoc rationalization, but about critically examining the interconnectedness of beliefs within a broader theological framework.

Second, Bayesianism and reasoned evaluation play a crucial role. Christians weigh the available evidence, consider arguments from philosophy, history, theology, science and personal experiences, and make reasoned judgments about the plausibility and coherence of their faith. This is a proactive and intellectually rigorous process, not mere post hoc rationalization in an attempt to dishonestly avoid disconfirmation.

Finally, exploratory flexibility and open inquiry are vital. Just as scientific hypotheses like the ones we covered earlier are refined and modified based on new evidence, Christians have the freedom to engage in critical reflection, refine their understanding, and explore alternative explanations. This flexibility fosters growth and refinement, rather than being an admission of unfalsifiability.


In conclusion, the charges of "unfalsifiability" and "post hoc rationalization" leveled against Christianity by atheists are misplaced. By employing the Duhem-Quine thesis, confirmational holism, and Bayesianism, Christians engage in a thoughtful and intellectually robust evaluation of their beliefs. They assess coherence, employ reasoned judgment, and embrace exploratory flexibility. This comprehensive approach allows for a more nuanced understanding of the faith and counters the misconceptions surrounding its evaluation.

In light of the Duhem-Quine thesis, falsificationism appears outdated and inadequate as a comprehensive theory evaluation methodology. The holistic nature of scientific theories, the underdetermination of theory by evidence, the subjectivity of theory evaluation, and the refinement and evolution of theories all challenge the notion that falsification alone can provide conclusive assessments of scientific theories. Embracing a more nuanced and holistic approach to theory evaluation allows for a deeper understanding of the complex dynamics within scientific inquiry and promotes the progress of scientific knowledge.

The inclusion of the counterexamples to falsificationism further strengthens the argument that falsification alone is insufficient for comprehensive theory evaluation and that we ought to follow the light of reason that the Duhem-Quine thesis, confirmational holism, and Bayesianism provide.

"The Structure of Scientific Revolutions" by Thomas S. Kuhn

`"Conjectures and Refutations: The Growth of Scientific Knowledge" by Karl Popper

"W. V. Quine: From a Logical Point of View" by W. V. Quine

"The Duhem Thesis and the Quine Thesis" by Pierre Duhem

"Confirmation, Empirical Progress, and Truth Approximation: Essays in Debate with Theo Kuipers" edited by Roberto Festa, Peer D. H. Grunwald, and Franz W.

"Holism, Entrenchment, and the Future of Empirical Theory" by Paul Hoyningen-Huene

"The Quine-Duhem Thesis: A Critical Appraisal" by Frederick Grinnell

"Theory and Reality: An Introduction to the Philosophy of Science" by Peter Godfrey-Smith

"Underdetermination: An Introduction" by Paul Hoyningen-Huene

"Inference to the Best Explanation" by Peter Lipton

Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science. Cambridge University Press.

Hoyningen-Huene, P. (2006). Reconstructing Scientific Revolutions: Thomas S. Kuhn's Philosophy of Science. University of Chicago Press.

Hoyningen-Huene, P. (2013). Systematicity: The Nature of Science. Oxford University Press.

Lakatos, I. (1970). Falsification and the Methodology of Scientific Research Programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the Growth of Knowledge (pp. 91-196). Cambridge University Press.

Lipton, P. (2004). Inference to the Best Explanation. Routledge.

Planck, M. (1914). The Theory of Heat Radiation. P. Blakiston's Son & Co.

Popper, K. R. (1959). The Logic of Scientific Discovery. Routledge.

van Fraassen, B. C. (1980). The Scientific Image. Oxford University Press.
