6. Underdetermination

Now, why would anyone be a scientific antirealist? Obviously, if one is a metaphysical antirealist, then one is automatically a scientific antirealist as well, but we will focus on the argument specific to science. So the debate is not about whether or not we should believe in trees and rocks, but about whether we should believe in unobservable/theoretical entities. Historically, one important motivation for antirealism has been the argument from underdetermination. Suppose that you see someone play the piano in one of the classrooms here, but that the playing is really bad. How would you explain it? (Bad player, badly tuned piano, good player with a hand injury?) The idea is that, often, in everyday life as well as in science, there is more than one theory, explanation or law compatible with the evidence. Why, then, should we believe in one theory rather than another? Instead, the argument goes, we should suspend our judgement.

6.1.1 Weak underdetermination

Here is the argument (p.163):
1) Suppose some theory T is known, and all the evidence is consistent with T.
2) There is another theory T* that is also consistent with all the available evidence for T. (T and T* are weakly empirically equivalent in the sense that they are both compatible with the evidence we have gathered so far.)
3) If all the available evidence for T is consistent with some other hypothesis T*, then there is no reason to believe T to be true and not T*.
Therefore, there is no reason to believe T to be true and not T*.

That two theories are underdetermined in this way happens all the time in science. To decide between T and T*, we need to collect further evidence, in particular evidence that will discriminate between T and T* by focussing on a difference in the predictions of the two theories. That is the idea of using a crucial experiment, as we saw in chapter 1. However, we can only gather a limited number of facts, while it is always possible that there is another theory that fits the available facts. Maybe we just haven't thought of it yet. So the underdetermination argument can be strengthened in this way: for any theory T, there is always another theory T' such that:
1) T and T' are weakly empirically equivalent.
2) If T and T' are weakly empirically equivalent then there is no reason to believe T and not T'.

One possible reply is to deny (2), that is, to reply that the mere existence of a weakly empirically equivalent theory T* does not mean that we have no reason to prefer T to T*. For instance, Popper argued that if T* is ad hoc and entails no other empirically falsifiable predictions, then it should be ignored. Of course, Popper would not allow belief in T either, but this strategy can be adapted by those who accept induction, by requiring that any rival theory provide additional empirical content, not just explain the same facts that T does. Example: evolutionary theory (the world is very old, and fossils are traces of past life) vs. a creationist theory saying that God created the world only six thousand years ago but included fossils and other such things to make it look as if the world were older and life subject to evolution.

6.1.2 Strong underdetermination

However, there is also a strong version of the underdetermination argument. The gist of this argument is that all the evidence we could ever have is not sufficient to rule out an alternative hypothesis, and that if we cannot know that this hypothesis is false, then we cannot know what we think we know.
Here is an example due to Descartes:
1) It is possible that there is an evil demon that is creating for you the illusion that there is a real world in which you work, play, eat and sleep, in which you have friends and family, and so on.
2) If you cannot distinguish the possibility of being in an evil demon's illusion from being in a real world, then it is possible that you are in an evil demon's illusion now.
3) If you cannot eliminate the possibility that you are in an evil demon's illusion, then you do not know that you are not in such an illusion now.
Therefore, you do not know that you are in a real world rather than in an evil demon's illusion.

The general form of the strong underdetermination argument is as follows (p.168):
1) We think we know p.
2) If we know p then we must know that q is false (because p implies that q is false).
3) We cannot know that q is false.
Therefore, we cannot know p after all.
(A schematic rendering of this form is given at the end of this subsection.)

The strength of this kind of argument rests on the fact that it is an extension of a kind of reasoning we use in ordinary life. For example: (1) you think you see your friend on the other side of the hallway; when asked why you think so, you say that the person you see has the same size, clothes, and hair colour as your friend. But, (2) if you really know that it is your friend you see, then you must know that it is not someone else with the same size, clothes and hair colour; (3) you cannot know that it is not someone else with the same size, clothes and hair colour; and so you cannot really know for sure that it is your friend.

Now, of course, in ordinary life you can always go and check whether it is in fact your friend, or ask her later whether it was her. In other words, there is additional evidence that can be collected. But the strong underdetermination argument is about cases where there is no such additional evidence to be had, or rather, where any new evidence will be compatible with both hypotheses. That is, p and q are perfectly indistinguishable.

One way to reply is to say that we should only consider relevant alternatives. It makes sense to reserve judgement on the identity of the person on the other side of the room if all you have to work with is the size, clothes and hair colour, because there is a real possibility that it could be someone other than your friend. But if you go closer to that person, and even talk with her for a while, then surely you now know that it is your friend. It would be irrelevant to consider the possibility of a perfect clone or an extraterrestrial in disguise. But how do we decide which alternatives are relevant and which are not? One way is to use Occam's razor and to choose the less metaphysically extravagant alternative. If you recall, this is what Hume does to defend his view of causation: Hume's argument is an instance of the strong underdetermination argument allied with Occam's razor. Another option is to do what the logical positivists did and rule out as meaningless any hypothesis that cannot be ruled out (or in) by experience. In other words, any empirically equivalent theories are in fact completely equivalent, for the parts of them that talk about unobservable entities are meaningless. Thus, it makes no sense to try to distinguish between the following:
Everything that happens is a random result of physical forces.
Everything that happens is designed for a reason by God.
Everything that happens is determined by a previous cause.

In any case, if scientific realists claim to have knowledge beyond the empirical facts, then they may be vulnerable to a strong form of the underdetermination argument.
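As a side note, the logical skeleton of the general form above can be displayed in standard epistemic notation (this rendering is not in the textbook), writing K(p) for "we know that p":

\[
(2)\ K(p) \rightarrow K(\neg q), \qquad (3)\ \neg K(\neg q) \quad \Longrightarrow \quad \neg K(p)
\]

The step from (2) and (3) to the conclusion is simply modus tollens; the conclusion then contradicts (1), the claim that we took ourselves to know p.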
Strong underdetermination: the Duhem-Quine thesis

In science, the strong underdetermination argument is usually run by appeal to the Duhem-Quine thesis. How can the Duhem-Quine thesis be used to yield strong underdetermination in science?

The Duhem problem
To derive a prediction from a hypothesis, one also needs numerous auxiliary assumptions. Therefore, only theoretical systems as a whole can be confirmed or falsified (this is also called confirmational holism; a schematic rendering is given at the end of this subsection). The problem can be addressed by repeating the experiment while varying the auxiliary assumptions: using different equipment and different scientists, varying the initial conditions and the specific facts about the particular experiment, and so on. So the Duhem problem by itself yields only a weak form of underdetermination.

Quine's holism
Quine's holism is more radical than the Duhem problem. Quine argued that the Duhem problem can be extended so that mathematics and logic are included among the auxiliary assumptions: "any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system" (Quine, 1953). According to Quine, we rely on pragmatic considerations to solve this extreme underdetermination problem. For example, it is very unlikely that we would ever change the laws of logic, because it would be too inconvenient; it is always easier to change other parts of the system.

What about mathematics? There is actually one historical case where the evidence supporting a physical theory has been taken as thereby falsifying a part of mathematics: general relativity vs. Euclidean geometry. That is one reason why radical confirmational holism is worth discussing.

General relativity vs. Euclidean geometry
Euclidean geometry was thought to be the a priori science of the structure of physical space: its theorems follow deductively from apparently self-evidently true axioms.
► E.g., "between any two points there is a straight line".
It was the paradigmatic model of science from the Ancient Greeks to Newton.
Einstein's general relativity is formulated in terms of Riemannian geometry, so that the geometry of physical space (and time) is curved rather than flat.
► So it is false that "between any two points there is a straight line".
If the most empirically adequate physics does not employ Euclidean geometry, there is no good reason to regard Euclidean geometry as a priori knowledge of space.

Euclid's revenge: Poincaré's strategy
The mathematician and philosopher Henri Poincaré (1854-1912) offered a way to always preserve Euclidean geometry: to maintain empirical equivalence, just add forces acting on all bodies in such a way as to mimic the effects of a non-Euclidean geometry. Our measuring instruments will also be affected by these forces. Poincaré's strategy seems ad hoc and contrived, but how do we decide between the two formulations? Take the simplest? Then what counts as simpler: simpler mathematics and metaphysics, or simpler space-time geometry? This is a paradigmatic example of strong underdetermination in science.

Strong underdetermination argument for scientific theories (p.174)
► (i) For every theory there exists an infinite number of strongly empirically equivalent but incompatible rival theories.
► (ii) If two theories are strongly empirically equivalent then they are evidentially equivalent.
► (iii) No evidence can ever support a unique theory more than its strongly empirically equivalent rivals, and theory-choice is therefore radically underdetermined.
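To make the Duhem point mentioned above explicit, here is one standard way of putting it schematically (the notation is not the textbook's): write H for the hypothesis under test, A_1, ..., A_n for the auxiliary assumptions, and O for the observable prediction. A failed prediction then refutes only the whole conjunction:

\[
(H \wedge A_1 \wedge \cdots \wedge A_n) \rightarrow O, \qquad \neg O \quad \Longrightarrow \quad \neg (H \wedge A_1 \wedge \cdots \wedge A_n)
\]

That is, at least one of H, A_1, ..., A_n is false, but the evidence alone does not say which. Quine's radicalisation of this point is that, in principle, even the mathematics and logic used in the derivation count among the commitments that could be revised.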
Responses to the strong underdetermination argument
1. The strong empirical equivalence thesis ((i) above) is incoherent.
2. The strong empirical equivalence thesis is false.
3. Empirical equivalence does not imply evidential equivalence ((ii) is false).
4. Theory choice is underdetermined ((iii) is true): reductionism, conventionalism, or antirealism.

1. The alleged incoherence of the strong empirical equivalence thesis
There are three ways to try to argue this.

A) The idea of empirical equivalence requires a clear description of the observable consequences of a theory. However, there is no non-arbitrary distinction between the observable and the unobservable. In other words, there is a grey area between what is clearly observable, like a tree, and what is clearly unobservable, like an electron.
Two replies: 1) The distinction need not be sharp to be non-arbitrary, as long as there are some clear cases. 2) It can be made sharp in other ways, for example by treating as observational those terms of a theory that can be understood independently of that particular theory.

B) The observable/unobservable distinction changes over time, and so what the empirical consequences of a theory are is relative to a particular point in time. E.g., Ptolemaic and Copernican astronomy between 1560 and the early seventeenth century; wave and particle optics in the eighteenth century.
Two replies: 1) This only means that we must relativise the notion of empirical equivalence, so that we have synchronic but not diachronic empirical equivalence. Still, at any given time and for any theory, there will be an empirically equivalent theory. 2) We can understand 'theories' to mean total (as opposed to partial) theories, i.e., ones that predict all the phenomena, not just those in one area of science. Then there will be no change over time, and it is still a problem for scientific realism if even a total theory has empirically equivalent but incompatible rivals.

C) Theories only have empirical consequences relative to auxiliary assumptions and background conditions. So the idea of 'the empirical consequences of a theory' is itself incoherent: theories by themselves have no definite empirical consequences, and by changing the auxiliary assumptions we use, we can show that two theories are not empirically equivalent after all.
Reply: If we reformulate the argument so that 'theory' refers to empirically equivalent total theories, then all the possible auxiliary assumptions are already taken into account. The question left open is whether there are any examples of empirically equivalent total theories.

2. The strong empirical equivalence thesis is false
There is no reason to believe that there will always be strongly empirically equivalent rivals to any theory, because: a) cases of strong empirical equivalence are rare, and b) the only strongly empirically equivalent rivals available are not genuine theories. True, we can easily generate a theory that is empirically equivalent to a theory T by adding propositions to T that entail nothing empirical. However, this seems artificial, a 'cheap trick'; the result, arguably, is only a pseudo-theory. E.g., given any theory T, let T' be the assertion that the empirical predictions of T are true but the theoretical entities posited by T do not exist. But why should we not accept these constructions as acceptable rivals? After all, they are assertible, they have a truth value determined by truth conditions, and they have empirical content making them testable like any other theory.
To reject them anyway implies that there are non-empirical yet rational grounds for preferring one theory to another.

3. Empirical equivalence does not imply evidential equivalence
Many realists argue that two theories may predict all the same phenomena yet have different degrees of evidential support. There are rational non-empirical features of theories (called superempirical virtues), such as non-ad-hocness, novel predictive power, elegance, and explanatory power, that give us a reason to choose one among the empirically equivalent rivals. There are many historical cases where scientists have justified their preference for one theory over its empirically equivalent rivals by appeal to superempirical virtues such as simplicity, explanatory power, or coherence with other parts of science. However, there is no generally accepted way to rank these virtues, nor agreement about how to proceed when they pull in different directions, and all theories have both superempirical virtues and vices. Bas van Fraassen argues that superempirical virtues do not give us reason for belief (they are not epistemic), but merely reason to adopt a theory for practical purposes (they are pragmatic). So, to defend scientific realism, one needs to explain why we cannot treat the superempirical virtues as merely pragmatic.

4. Theory choice is underdetermined
If we accept the conclusion of the underdetermination argument that theory choice is underdetermined, how do we choose which theory to adopt?
1) Reductionism: if two theories are observationally equivalent, then they are simply different formulations of the same theory. That is what the logical positivists wanted to do.
2) Conventionalism: the choice between observationally equivalent theories is made by convention. Just as it does not matter whether we drive on the left or on the right, as long as everybody follows the same convention, which theory we choose is not important as long as we choose one. This option is unavailable to the realist.
3) Antirealism: for example, social constructivism, according to which the choice between observationally equivalent theories is made not by superempirical virtues but by social, psychological and ideological factors. Antirealism here means denying that we should believe our scientific theories (i.e., denying the epistemological component of scientific realism), and it can take the form of either atheism or agnosticism about unobservable entities.

Constructive empiricism
This is the view advocated by Bas van Fraassen, e.g. in his book The Scientific Image (1980). Van Fraassen is responsible for renewing the debate about scientific realism. He accepts semantic and metaphysical realism, but rejects the epistemological component of scientific realism. (Epistemic requirement: truths about S are knowable and we do in fact know some of them, and hence the terms of S successfully refer to things in the world.) For van Fraassen, the aim of science is only to be empirically adequate, that is, to fit with the observable facts, not to give us a literally true story of what the world is like. Also, to accept a theory commits us only to the belief that the theory is empirically adequate, not that it is true. By observables, van Fraassen means all the phenomena that could be observed, not just those that have been observed so far. Thus, to accept a theory as empirically adequate is to believe something that goes beyond what logically follows from the data. In short, for van Fraassen, science is concerned only with what is observable, not with what is unobservable.
To accept a theory is to believe that it fits with what is observable, but this does not mean believing in the theoretical/unobservable entities posited by the theory. Science does not need to explain the regularities we observe.

Objections to constructive empiricism

(1) The observable and the unobservable
Constructive empiricism (CE) seems to say that what exists is what is observable. But the line between the observable and the unobservable is vague and the two domains are contiguous with one another; moreover, what we can observe depends on accidents of human physiology (e.g., we could have evolved a natural electron detector, but we did not). Hence, CE gives metaphysical significance to an arbitrary distinction.
Van Fraassen's reply to (1): True, what exists does not depend on what is observable, but what we can know does. Thus, van Fraassen's antirealism is epistemological, not metaphysical. Furthermore, by 'observable', van Fraassen means 'observable-to-us': our own observational capacities are relevant to our own epistemology.

(2) Acceptance and belief
Van Fraassen accepts that (a) all language is theory-laden to some extent; (b) even the observable world is described using terms that putatively refer to unobservables (e.g., 'microwave oven'); and (c) acceptance of a theory involves a commitment to interpret and talk about the world in its terms. Critics argue that this makes CE incoherent, saying that there is nothing more to realism than accepting the world-picture of science. In other words, the criticism is that there is no real difference between believing a theory and accepting it: "Believing a theory is nothing over and above the mental state responsible for using it." (P. Horwich, 1991)
Van Fraassen's reply to (2): Scientists use theories without believing them all the time (e.g., Newtonian physics), so there is a difference between using/accepting a theory and believing it.

(3) Arbitrarily selective scepticism
The underdetermination problem is the only positive argument for adopting CE instead of realism; but all the data we presently have underdetermine which theory is empirically adequate (the problem of induction), just as they underdetermine which theory is true, and so CE is just as vulnerable to scepticism as scientific realism. Thus, van Fraassen's advocacy of CE demonstrates an arbitrarily selective scepticism. Realists use inference to the best explanation (IBE) to solve the underdetermination problem. So either van Fraassen also uses IBE to do so, in which case it is arbitrary to use it only for conclusions about observable (including future) events and not about unobservables, or else he does not use IBE at all, and then he cannot solve the underdetermination problem.
Van Fraassen's reply to (3)? Before we see van Fraassen's reply to (3), we need to say more about explanation and inference to the best explanation (chapter 8).