SEXING ARGUMENTS

Whether an argument is deductive or inductive depends only on purely
formal features of the argument, and how it is to be evaluated depends
only on what kind of argument it is. I call this the logistical approach
and it is the approach that is embedded in tradition. An opposing
approach that bases the distinction on psychological features of the
situation will be shown to be studying a different thing and to have
different aims; were it not that its proponents also want to use
arguments as traditionally described, laid down as sets of propositions
whose premises stand in some relation or other to their conclusion,
there would be no conflict between the two approaches. A different
conceptual apparatus is required to model psychological and epistemic
features, and “arguments” should be left to logicians. It is, however,
conceded that logic gives only a partial account of justification.
What is an argument?
There have been two rival theories of how best to make the distinction between
deductive and inductive arguments, although there is a broad consensus over what
such a distinction needs to do and what conditions it needs to satisfy: these are that
every argument should be able to be uniquely designated as a valid deductive
argument, an invalid deductive argument, a valid inductive argument, or an invalid
inductive argument. Whatever the distinction is, it must be mutually exclusive and
exhaustive. I suggest in addition that these designations should accord with a certain
degree of intuitiveness. The methodological principle here is simple: if it seems
strange, or counter-intuitive, or absurd to assert or to judge that p, then the most likely
explanation is that p is false, and we should choose the most likely explanation in so
far as a rival account has the burden of proof and must show that in some way the
likely explanation does not satisfy the other conditions or falls short of its own stated
aims, or achieves them only by making ad hoc exceptions. Presuming that the most
likely explanation of why some particular argument is judged to be, e.g., deductive, is
because it is deductive, I will show in turn that the most likely explanation of why it is
deductive is a logistic explanation; other explanations do not meet the burden of proof
and furthermore involve counter-intuitive consequences.
Just as a chicken-sexer distinguishes baby chicks into male and female without
being able to tell how, so also we distinguish between good and bad arguments,
between arguments that are deductive or inductive, without necessarily knowing how
we do this, and without the theoretical apparatus needed to describe what we are
doing even if we did know. In many contexts it is enough of an explanation to say that
the reason p is judged to be false is that p is false, but a philosophical context
demands an account of those cognizable conditions that sexers’ intuitions track.
What is being sexed? The obvious answer is “arguments”. The sexer need say
no more than this since he is entitled to take “argument” as a mere name for an
unstructured and unclassified entity. The analyst, on the other hand, is immediately
faced with an ambiguity in this answer; just as the term “statement” might mean the
act of stating or what is stated, and “prediction” might mean the act of predicting
or what is predicted, so also “argument” can mean the act of arguing or what is
argued. Is what is being sexed the object/product of the act, or the act itself? The
exploration into the underlying structure, and the kind of analytical enterprise
enjoined, will vary greatly depending on how we answer.
Logicians have always been concerned with the actual structural relations that
obtain between the premises and conclusion of the object of argument. The premises
and conclusion by themselves individuate the argument from others and determine the
logical nature of the relation, i.e., whether the argument is deductive or inductive, and
thereby what norm is appropriate for evaluating the argument and whether that norm
is satisfied. It is these norms, I claim, that the sexer’s intuitions are tracking in sorting
between good and bad arguments.
It has recently been claimed (Goddu 2002) that distinguishing between
deductive and inductive arguments is an unnecessary distraction, and that all that
needs to be evaluated is the strength with which the premises support the conclusion
and the strength with which the premises are required by the context to support the
conclusion, bypassing the deductive/inductive distinction altogether. His claims vary
from a modest deflationary position that the inductive/deductive distinction is
unimportant – “I see no reason even to attempt to divide arguments into deductive and
inductive kinds. The work we wish to accomplish with arguments can be achieved
without appealing to this distinction” – to the more extreme position that it doesn’t
exist when only a few sentences later he says more hyperbolically, “without appealing
to some mythical distinction between inductive and deductive arguments” (Goddu
2002, 15 [my italics]. See also pages 12-13 where he accuses everybody from Locke
to Hempel of making a distinction that doesn’t actually exist). This equivocation
infects his entire paper. It is also unclear whether he thinks that it is mythical on the
grounds that there is no distinction on a purely conceptual level and that the concepts
themselves are incoherent in some way, or whether he believes a distinction can be
made conceptually but that such concepts do not actually apply to what we usually
call arguments, or do not apply in any way that is useful.
Goddu bases this contention on cases such as the following: the standard of strength
required for a jury to find a defendant guilty in a criminal trial is higher than that
required in a civil trial. The
argument “E; so, C” is held by Goddu to be potentially a “good” or “adequate”
argument in a civil trial but a “bad” or “inadequate” argument in a criminal trial; the
context determines what standards need to be met, and then we evaluate whether they
are met. But what is C here? Goddu (2002, 6) suggests that it is simply that the
defendant is guilty, but this is not the case since all the members of a jury can believe
that the defendant is guilty and yet still deliver a verdict of “not guilty” because they find
reasonable doubt.1 There is a difference between believing somebody to be guilty and
“finding” them guilty in the legal sense; the context determines which conclusion is
legally relevant, so in this case the conclusion should be “There is no reasonable
doubt about the guilt of the defendant”. The premises support a set of distinct but
often related conclusions, e.g. {“The defendant is guilty”, “There is reasonable doubt
that the defendant is guilty”, “The defendant is more likely to be guilty than not”}
from which the context selects the one that is appropriate, rather than having one
conclusion (“The defendant is guilty”) and the context determining the strength. The
strength of the argument for each of these conclusions is determined solely by the
premises and is either strong or weak without bringing any issues about context into
the equation. Where we have different contexts, as is the case in criminal versus civil
trials, we simply have different demonstranda, not different “strengths”.
But do juries evaluate the conclusion at all? Discursive dilemmas have
revealed the dangers of this kind of procedure. Often the rules of inference to be used
in reaching the conclusion have already been agreed to or otherwise stipulated. The
issue for a jury is only indirectly that of reaching a conclusion; directly, it is that of
evaluating the premises. If all members of the jury agree as to the facts of the case
(normally these are something like “Did the defendant perform the act?”, “Is the act
correctly described?”, “Was the defendant aware that what he was doing was so
describable?”, “Is the responsibility of the defendant diminished in some way?”) then
the conclusion follows by a mechanical application of the rule.2 The great advantage
of deductive rules of inference is that once the premises are accepted and the rule
applied, the conclusion cannot be denied without self-contradiction. But to know that
a deductive rule is appropriate one needs to know first that the argument is a
deductive argument.3 The issue is whether we can tell that something is a deductive
argument without being told, or having it stipulated or contextually determined in any
way, beforehand; in other words, after it has been translated into a formal language.
Sometimes, this is all that we have to go on.
Someone trying to dispense with the distinction may say at this point “OK,
maybe context cannot entirely determine which norm to use, but it can restrict them,
and anyway, how many norms are there to choose between? Just evaluate by all of
those not otherwise eliminated. We can call some norms the norms of deduction and
others the norms of induction, but this is unnecessary verbiage.” Admittedly, the
logician could proceed by trial and error: he could evaluate the argument as if it were
deductive and check if the relation is deductive validity, and if it is deductively valid
then this was the right norm and the argument was a deductive argument, and if it is
not deductively valid, then he can check if it is a probabilistic relation. In other words,
he would evaluate validity first, and then classify the argument as an afterthought.
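To fix ideas, here is a minimal computational sketch of this “evaluate first, classify afterwards” procedure. It is only an illustration, confined to toy propositional arguments (where validity is decidable by brute force); the function names and the crude measure of probabilistic support are my own assumptions, not anything proposed in the literature discussed here.

from itertools import product

ATOMS = ["p", "q", "r"]  # hypothetical propositional atoms for the illustration

def rows():
    # every assignment of truth-values to the atoms
    for values in product([True, False], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def deductively_valid(premises, conclusion):
    # valid iff no assignment makes all premises true and the conclusion false
    return all(conclusion(v) for v in rows() if all(prem(v) for prem in premises))

def support_ratio(premises, conclusion):
    # crude stand-in for a probabilistic relation: the proportion of
    # premise-satisfying assignments in which the conclusion also holds
    models = [v for v in rows() if all(prem(v) for prem in premises)]
    return sum(conclusion(v) for v in models) / len(models) if models else None

def evaluate_then_classify(premises, conclusion):
    if deductively_valid(premises, conclusion):
        return "deductively valid -- so classified, after the fact, as deductive"
    return "not deductively valid -- support ratio %.2f" % support_ratio(premises, conclusion)

# Modus ponens: p -> q, p; therefore q
print(evaluate_then_classify([lambda v: (not v["p"]) or v["q"], lambda v: v["p"]],
                             lambda v: v["q"]))
# p -> q; therefore q (not valid; only partial support)
print(evaluate_then_classify([lambda v: (not v["p"]) or v["q"]],
                             lambda v: v["q"]))

Note that the first problem raised below still applies outside such toy cases: for richer languages, failing to find a proof of validity is not the same as establishing invalidity.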
There are two problems with this approach. The first is that being unable to
prove that an argument is deductively valid is not to prove that it is deductively
invalid. The second is that this makes all inductive arguments into a type of invalid
deductive argument. We could get around this by saying that everything that is not a
deductively valid argument is not a deductive argument, period. This would mean that
there are no deductively invalid deductive arguments. Alternatively we might say that
all arguments that are not deductively valid deductive arguments are simply
deductively invalid deductive arguments. This would mean that there are no inductive
arguments.
What we want is a way of making a distinction between inductive and
deductive arguments prior to evaluating them so that we evaluate them according to
the validity norms appropriate for that type of argument, i.e., we apply deductive logic
to deductive arguments and inductive logic to inductive arguments. This seems to be
the activity of the argument-sexer.
How do we make the distinction between inductive and deductive arguments?
Unfortunately, the logical approach struggles to make much sense of inductive
arguments as a class of their own, and seems rather to think of them as bad deductive
arguments. This is not surprising since the logician’s resources for making a
distinction are meager, having only the premises and conclusion to go on. One
traditional thought is that deductive arguments always argue from the general to the
particular while inductive arguments always argue from the particular to the general.
A general statement is taken to be a statement containing an (un-negated) universal
quantifier; otherwise (i.e., if it contains an (un-negated) existential quantifier or no
quantifiers at all) it is particular. Let us call this Logical Criterion 1, or LC1. But although
this is often the case, it is not always. Consider the argument
All Xs are Y
All Ys are Z
All Xs are Z
which is a deductively valid deductive argument if anything is, but consists of only
general statements. Or consider
This X is this Y
This Y is this Z
This X is this Z
which apparently consists of only particular (in this case identity) statements.
Perhaps the logician could amend his criterion to
(LC2) If the argument argues from the particular to the general then it is inductive.
Otherwise, it is deductive.
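Read mechanically, LC2 amounts to a simple classification rule. The sketch below is only illustrative: it assumes the statements have already been sorted into “general” and “particular” in the sense just defined, and it reads “argues from the particular to the general” as “all premises particular and conclusion general”; both assumptions are mine.

def classify_lc2(premise_kinds, conclusion_kind):
    # premise_kinds: list of "general" / "particular"; conclusion_kind likewise
    if all(k == "particular" for k in premise_kinds) and conclusion_kind == "general":
        return "inductive"
    return "deductive"

# All Xs are Y; all Ys are Z; so all Xs are Z
print(classify_lc2(["general", "general"], "general"))        # deductive
# n observations of black ravens; so all ravens are black
print(classify_lc2(["particular"] * 3, "general"))            # inductive
# All men are mortal; Socrates is a man; so Socrates is mortal
print(classify_lc2(["general", "particular"], "particular"))  # deductive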
Are there counter-examples to this? A counter-example would be either:
i. A deductive argument that argues from the particular to the general.
ii. An inductive argument that argues from the particular to the particular.
iii. An inductive argument that argues from the general to the particular.
iv. An inductive argument that argues from the general to the general.
Let’s look at some candidates for (i):
“My dog is a Dane. Therefore, anyone who feeds my dog will be feeding a Dane.”
“One is a lucky number. Three is a lucky number. Five is a lucky number. Seven is a
lucky number. Nine is a lucky number. Therefore, all odd numbers between 0 and 10
are lucky.”
The common feature of both is that they contain classes that are completely
enumerated in the premises – the conclusion of the first could be false if I owned more
than one dog, and the conclusion of the second could be false if there were an odd number
between 0 and 10 other than 1, 3, 5, 7, or 9. So the arguments are incomplete as they stand.
What do we need to add to make them deductively valid? In the first case, something
like “There does not exist a dog that is mine and not a Dane”. This is a general
statement, the negation of an existentially quantified sentence being equivalent to a universally
quantified sentence. Similarly, the second requires “There does not exist an odd
number between 0 and 10 that is not 1, 3, 5, 7, or 9.” Although in this latter case the
unexpressed premise turns out to be an analytic truth, the form of the argument is
nonetheless from the general to the particular; hence, it is no counter-example to
LC2.4
What about (ii)? It could be argued that all inductive arguments from the
particular to the general can also be thought of as arguments from the particular to the
particular, since each is an attempt to infer what will happen on the next occasion. For n
observations of black ravens, the inference to “All ravens are black” and the inference
to “The next raven will be black” might be considered to be on a par. But I think the
illusoriness of this becomes evident as soon as you consider probabilities less than
certainty. If only m of the n observed ravens have been black then you can infer that
a proportion m/n of ravens are black, but you cannot infer that the next raven will be black, or even
that it has a probability of m/n of being black.
How about inductive arguments that argue from the general to the particular,
case (iii)? Here is a candidate from Weddle (1979, 3):
(CE)
It is likely that all As are Bs
X is an A
It is likely that X is a B
This looks like an inductive argument stating that X’s being A offers some, but not
conclusive, support for the conclusion that X is B. The presence of the linguistic
indicator “likely” seems to be significant, but this need not be so, for the very similar
(CE*)
As are likely Bs
X is an A
X is a likely B
appears to be deductive.
It should be noted, though, that there is a genuine logical difference between
these two uses of “likely”. In CE “likely” denotes a metalinguistic operator that
qualifies the strength of the relation between the premises and the conclusion and can
be written as:
∀x. A(x) →^L B(x)
A(X)
⊢^L B(X)
Call this the metalinguistic reading and note that the superscript L occurs twice, both
times connected to an inference (firstly the conditional, then the entailment).
In contrast, “likely” as it occurs in CE* is a modal operator that operates on
the statements rather than the relation and can be written as:
∀x. A(x) → L(B(x))
A(X)
L(B(X))
Call this the modal reading.
Weddle argues that CE can always be transformed into CE*, metalinguistic
readings into modal readings, by a process he calls “hedging.” He says (1979, 3):
The . . . inference above stated a probabilistic connection between its premises
and rain [the conclusion]. But the arguer only said that it was likely to rain.
The connection between those premises and the likelihood of rain is not
similarly probabilistic. We could not reasonably grant those premises . . . and
yet deny that it is likely to rain.
Therefore, Weddle says, CE* is deductively valid, and since CE* is a transformation
of CE, CE is deductively valid. All arguments that are deductively valid are deductive
arguments. Therefore, CE is a deductive argument. If hedging is an acceptable
procedure then CE – and any other argument from the general to the particular
containing probabilistic inferences – does not constitute a counter-example to LC2.
The disputants seem to agree that CE* is deductively valid, and to agree on what makes
an argument deductively valid. The remaining possible objections seem to be:
Objection 1: CE* is not a transformation of CE (hedging should not be allowed)
Objection 2: A transformation of CE is not identical to CE, so it may not have the
same logical properties.
Objection 3: Not all deductively valid arguments are deductive arguments!
I will be treating these in reverse order.
When is a deductively valid argument not a deductive argument?
Objection 3 is almost guaranteed to leave the logician (and, I suggest, the argument-sexer) nonplussed. On the face of it the objection seems absurd. However, this
absurdity, Bowles (1990) claims, is due to the fact that our previous judgments are
informed by the logical theory anyway and that it is begging the question to state that
it is absurd to have deductively valid inductive arguments. Likewise, Vorobej (1992,
108) writes: “To insist that every instance of modus ponens, say, must be a deductive
argument is simply to beg the question.” Sorting arguments into inductive or
deductive, they suggest, depends on whether the person making the argument
intended5 it to be inductive or deductive. Hence, even though an argument may be
deductively valid, this does not mean that it is a deductive argument, the relation
actually obtaining between the premises and conclusion of the argument being no
longer all-important. Wilbanks (2009) calls the psychological approach the speaker-determined thesis (SDT) and the logistical approach the speakerless thesis (SLT).
A number of related objections to the SDT can be considered at this point: the
speaker may not have a determinate strength of support in mind; by temperament
some arguers will under- or over-estimate the strength of the support; arguers may not
have the conceptual resources to make the distinction between a logically necessary
connection and a probabilistic one, or even between different degrees of probabilistic
support; or if they have the conceptual resources, they may still lack the linguistic
resources to make their intentions explicit in the argument itself.
I do not think that these objections are very strong. Being indeterminate
between two things is not a third thing that falls outside of a deductive/inductive
distinction altogether. Also, as argument-sexers, we should not be required to be able
to make all our beliefs and intentions explicit. There may be a problem in interpreting
from the argument-as-product what the speaker takes the relation to be, but this shows
the limitation of looking at arguments and arguers as static objects instantiating
particular relations rather than dynamically as part of an ongoing inter-subjective
process, with arguers as players in a game or series of games in which they will make
many moves. It is vital to note that the SDT’s emphasis, unlike the logician’s, is on
the act rather than the product of arguing, and it conceives the aim of the
inductive/deductive distinction as allowing an analysis of whether the arguer has
satisfied the felicity conditions, rather than the truth conditions, of the speech acts
constituting his arguing.5 It is conceded that language may be misleading with regard
to the strength of support, but rather than trying to weigh up different linguistic
indicators that may well be misleading anyway, Vorobej suggests that the analyst try
to work out one thing alone – the speaker’s belief – which although difficult is already
presupposed by a view that sees argument as process and is concerned with whether
speakers satisfy rules of discourse; Vorobej describes it as “a principal tenet of both rationality
and ethics . . . that others have a personal point of view that first of all deserves a
hearing, and second is something from which we as a community could possibly
benefit” (Vorobej 1992, 107). Over time the analyst will learn the particular idioms,
temperaments, and dispositions of the arguers, and when unsure should adopt the
Principle of Charity and make the argument as strong as possible. Bowles (1994)
deals with these objections, and many more, in a way that is mostly satisfactory.
It is difficult to deny that the SDT provides a theory that is consistent with its
proponents’ own admitted methodological aims, and they do not hide the fact that
their definitions will not match the judgments given by the logicians or by tradition
(which they will argue is the same thing). They do provide a distinction between
deductive and inductive arguments that is mutually exclusive and exhaustive, thus
satisfying the first condition. The question is whether they have satisfied the
additional condition of explaining away counter-intuitive results such as the claim that
a deductively valid argument might not be a deductive argument. Remember the
methodological principle introduced at the beginning: if it seems strange, or counter-intuitive, or absurd to assert or to judge that a deductively valid argument is not a
deductive argument, then the most likely explanation is that it is false. If the
argument-sexer classes a deductively valid argument as a deductive argument
irrespective of what the arguer thinks about it (supposing that the sexer is even in a
position to make some kind of educated guess at this) then the theorist must have a
better explanation of why this should be wrong than simply “it is begging the
question”. Theory informs our judgments, but judgments also inform theory.
I submit that the SDT does not meet the burden of proof. One indication that
the burden of proof is not met is that Wilbanks, who is sympathetic to the SDT,
nevertheless tries to justify making an exception in the case of deductively valid
arguments. Her approach combines the SLT and SDT by claiming that the speaker
generally determines the deductive/inductive distinction and that the valid/invalid
distinction depends on a match between the actual relation between the premises and
the conclusion and the relation that the speaker claims or believes to obtain. She
makes the interesting suggestion that we give a different judgment when the speaker
over-estimates the support her premises offer than when she under-estimates it. If the
actual relation between premises and conclusion is weaker than claimed, then the
argument is invalid, but if it is stronger than claimed, then it is non-valid.
Although generally the speaker determines whether the argument is deductive
or inductive, there is an exception: “The speaker does not claim that the conclusion
follows necessarily from the premises but claims that it is rendered probable to some
degree by them; nevertheless, the conclusion in fact follows necessarily from them”
(Wilbanks 2009); hence, this argument is deductively valid. This is a deductive
argument for Wilbanks, but is neither valid nor invalid but non-valid. This
accommodates the counter-intuitiveness of the idea of deductively valid inductive
arguments (although a non-valid deductively valid deductive argument is perhaps
only marginally less counter-intuitive) and would annul Objection 3. What is
interesting here is simply that a need is felt to make such an exception at all.
There is evidence that the proponents of the SDT are putting into the argument
things that do not belong there. Toulmin (2003) does this by complaining that modal
terms do not make some statement about the probability of something, but rather
inform the hearer that something can be taken in such and such a way. This results in
distinguishing ‘warrants’ from their ‘backing’. However, the fact that the use of
modal terms in speech acts such as the uttering of predictions has certain felicity
conditions and perlocutionary effects has nothing to do with the meaning of the
terms6, and it is the meaning we must evaluate if we are to know what to do. It is this
that we want to know most of all – whether we should take an umbrella when we
leave the house, whether we should really add that extension – and not whether all the
rules of discourse have been followed. Now, although the general statements that
occur in the premises, unless they are themselves necessary truths, will require
‘backing’, this does not mean to say that this backing should be made explicit in this
argument, or even belongs with this argument. The truth of the premises of an
argument is not in question with respect to the argument in which they occur as
premises, and it is irrelevant to the argument whether they are logically necessary
truths, factual truths reached by valid inductions, or happy guesses. Of course, there
could be a further argument in which some premise of the first argument is the
conclusion and which if invalid renders the first argument unsound, but this is another
argument and another story.
In summary, although I do not say that the SDT is inconsistent, I do believe
that it is insufficiently motivated. We need much better reasons to accept something
as strange as Objection 3 than that to deny it is to “beg the question”. Ultimately their
hopes rest on the accusation that the logicians simply cannot sort arguments in a way
consistent with their aims. This brings us back to Weddle’s Claim and Objection 2.
When are two arguments the same?
What about Objection 2? The SDT has rather too easy an answer to the fact that
arguments of one type can be transformed into arguments of another type. This is
because the individuation conditions for arguments that come from their view are
conditions for individuating acts, and they include, in addition to the premises and the
conclusion, the relation that the speaker attributes as obtaining between them – or rather,
not the type of relation attributed, but the actual act of attributing. So, when an
argument that looks inductive is rewritten so as to look deductive, as CE is rewritten
as CE*, it is open to the objector to say “Suppose that you can do that. What is that to
do with me? When you attribute a different degree of support, eo ipso you produce a
different argument.”7
Consider what happens to the argumentation-structure when we consider the
results of such transformations as distinct (and in so doing consider a different way of
making the deductive/inductive distinction): “The distinction then is that in a
deductive argument the premises need to be taken together to constitute a reason,
whereas in an inductive argument a combination of reasons is needed to make the
conclusion more or less probable” (Henkemans, 109). Once you have one deductive
argument for a conclusion, the conclusion becomes detached from its premises,
rendering any other arguments, whether deductive or inductive, superfluous. This is
the feature of deductive arguments known as monotonicity – that the removal of
superfluous premises or the addition of further premises cannot affect whether the
conclusion follows. The argumentation-structure reached thereby is called subordinative to
indicate the fact that there is a single, conclusive chain of inferences from the
premises to the conclusion.
In contrast, no argument can ever become superfluous and conclusions are
never detached from their premises when supported by induction, irrespective of
whether each premise, or subset of premises, supports the conclusion independently in
a convergent argumentation-structure or when combined with the other premises in a
coordinative argumentation-structure. Inductive arguments are non-monotonic. The
reason for this is that a conclusion can be highly probable with regards to one
reference class but highly improbable with regards to another. Let us illustrate this
with a counter-example to our counter-example:
(CE-CE*)
Cs are unlikely Bs
X is a C
X is an unlikely B
This modal reading seems as deductive as CE* did, but if we combine them we get:
As are likely Bs
X is an A
Cs are unlikely Bs
X is a C
X is a likely B ∧ X is an unlikely B
CE* and CE-CE* give opposing verdicts, leading to a contradictory conclusion. It is
for this reason that Hempel, in his account of the Inductive-Statistical model of
explanation, insisted on a metalinguistic reading. There is no inconsistency between
CE and
(CE-CE)
∀x. C(x) →^0.01 B(x)                    (i.e., 1% of Cs are Bs)
X is a C
⊢^0.01 It is unlikely that X is a B     (i.e., relative to the fact that 1% of Cs are Bs, X is not likely to be a B)
since here the conclusion is not detached from its grounds, the probability being
attached to the relation. This is usually called logical probability and will be discussed
in greater detail when we deal with Objection 1.
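The point can be put computationally. The following toy sketch (the numbers and the representation are entirely my own, not Hempel's formalism) shows why detaching an unqualified modal verdict about X from each reference class invites contradiction, whereas the metalinguistic reading, which keeps the probability attached to the relation, does not.

P_B_GIVEN_A = 0.95   # "As are likely Bs"
P_B_GIVEN_C = 0.01   # "Cs are unlikely Bs"
THRESHOLD = 0.5
x_classes = {"A", "C"}   # X belongs to both reference classes

# Modal reading: detach an unqualified verdict about X from each premise pair.
detached = set()
if "A" in x_classes and P_B_GIVEN_A > THRESHOLD:
    detached.add("X is a likely B")
if "C" in x_classes and P_B_GIVEN_C < THRESHOLD:
    detached.add("X is an unlikely B")
print(detached)   # both verdicts at once: the contradictory conjunction of CE* and CE-CE*

# Metalinguistic reading: the probability qualifies the inference relative to a class,
# and no unqualified verdict about X is ever detached.
relativised = {"relative to being an A": P_B_GIVEN_A,
               "relative to being a C": P_B_GIVEN_C}
print(relativised)   # two different relations; no contradiction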
For the moment, I just want to consider two possibilities. The first is a
convergent argumentation-structure with inductive supports I1, I2 and I3. Suppose
that we transform I1 to a deductive support D1. Now I1 conclusively – rather than
probabilistically – supports D1 and D1 conclusively supports the conclusion, making
a subordinative structure so peculiar as to be almost unintelligible. If we are consistent
with what we said above, I2 and I3 have now become superfluous and should be
eliminated, destroying the convergent characteristic of the structure completely. On
the other hand, we could have transformed I2 and eliminated I1 and I3. The whole
notion of argumentation-structure seems to become unintelligible if we allow
inductive arguments to be transformed into equivalent but token-distinct deductive
arguments. It is much more intelligible to say that the argument was never inductive
in the first place but a (possibly enthymematic) deductive argument.
Suppose that we transform each of I1, I2, and I3 into D1, D2, and D3
respectively. Is the demand to eliminate all but one of the deductive supports well-grounded? Consider
     D1                                   D2                               D3
1    p → q                                r → s                            p
2    q → r                                ¬s → ¬r                          p tonk r
3    p                                    s
4    q (1,3 M.P.) | p → r (1,2 H.S.)      r (1,3 M.P.) | r (2,3 M.T.)      r (T.E.)
5    r (2,4 M.P.) | r (4,3 M.P.)
The chain of inferences for D1 can be carried out the way I have done on the left-hand
side – with two applications of modus ponens – or on the right-hand side – with one
application of hypothetical syllogism and one of modus ponens. Suppose that the
person faced with whether or not to accept the conclusion r has, for whatever reason,
an aversion to hypothetical syllogism, and is not as confident in it as he is in modus
ponens. Therefore, eliminating the left-hand side derivation will make the support for
r weaker for the person concerned; their inferring may not be doxastically justified.
The rule that says to eliminate alternatives in a monotonic system considers only
truth, and runs roughshod over this kind of justification. I suggest that this is unfair,
and that the general principle that the more ways you can reach a particular
conclusion, the more confidence you may have in it – just as the more witnesses testify
independently to some fact, the more readily it should be accepted as true – need
not be abandoned, even in cases like this where we are only considering different
derivations from the same premises.
Likewise, if the person believes that some particular lemma, say q, is false,
then he might lose confidence in the truth of the conclusion, or he might lose
confidence in the truth of the premises, in the validity of the rules of inference, or in
the correctness of his application of the rules of inference. On the other hand, if he has
D2 and D3 both supporting r, then his confidence in r becomes stronger again, as
might his confidence in the troublesome lemma. Thus, it may turn out that the person
believes the conclusion more on the basis of invalid deductive arguments (D2 affirms
the consequent on the left-hand side and denies the antecedent on the right-hand side,
while D3 uses the improper logical connective tonk, which has the introduction rule of
∨ and the elimination rule of ∧) than on valid deductive arguments. Should he notice
this, at this point the person may lose all confidence in his ability to transform
inductive supports into deductive ones and trust only in circular arguments.8
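For the record, the difference in status between D1's rules and D2's pseudo-rules can be checked by brute force. The sketch below is my own illustration; in particular it assumes the reading of D2's second premise as ¬s → ¬r given in the table above.

from itertools import product

def implies(a, b):
    return (not a) or b

def counterexamples(premises, conclusion, n_atoms):
    # assignments that make every premise true but the conclusion false
    return [vals for vals in product([True, False], repeat=n_atoms)
            if all(p(*vals) for p in premises) and not conclusion(*vals)]

# Modus ponens over (p, q): p -> q, p; therefore q
print(counterexamples([lambda p, q: implies(p, q), lambda p, q: p],
                      lambda p, q: q, 2))                           # [] -- valid
# Hypothetical syllogism over (p, q, r): p -> q, q -> r; therefore p -> r
print(counterexamples([lambda p, q, r: implies(p, q), lambda p, q, r: implies(q, r)],
                      lambda p, q, r: implies(p, r), 3))            # [] -- valid
# Affirming the consequent over (r, s): r -> s, s; therefore r
print(counterexamples([lambda r, s: implies(r, s), lambda r, s: s],
                      lambda r, s: r, 2))                           # counterexample: r false, s true
# Denying the antecedent over (r, s): not-s -> not-r, s; therefore r
print(counterexamples([lambda r, s: implies(not s, not r), lambda r, s: s],
                      lambda r, s: r, 2))                           # counterexample: r false, s true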
What this seems to suggest is that the monotonicity of deductive arguments is
only an idealization, as is Toulmin’s distinction between warrant-using and warrant-establishing arguments, which is also sometimes used to make the inductive/deductive
distinction (e.g. Yezzi 1992). All arguments both use and establish their warrants. To
say that an argument is deductively valid is not to say that its conclusion is true, only
that it must be true if the premises are true. If we do not believe the conclusion, then
we are free to abandon one or more of the premises or one or more of the rules of
inference. What road we take must be determined at least in part by other arguments
for the same conclusion, or arguments that utilize some of the same premises or rules
and seem to lead to acceptable conclusions, and such like. This suggests one way in
which evaluating deductive arguments is not as different from evaluating inductive
arguments as we might like to think.
So monotonicity is not a norm for the evaluation of deductive argumentation. I
think the matter goes deeper than this. Consider the so-called inclusion fallacy.
Reasoners have been found to assent more readily to the inference
Robins have an ulnar artery
Birds have an ulnar artery
than they do to the inference
Robins have an ulnar artery
Ostriches have an ulnar artery
despite the fact that ostriches are included in the class of birds – which we can
formalize as the categorical conditional ∀x. (Ostrich(x) → Bird(x)) – and logically
should be an easier condition to meet (Osherson et al. 2008, 325). This itself is called
an inductive argument from classification, but contains another categorical
conditional
∀x. (Bird(x) → HasUlnarArtery(x)) → ∀y. (Ostrich(y) → HasUlnarArtery(y))
What is striking about this fallacy is that as soon as you change the “all” to any
probability, however high, this ceases to be a fallacy at all. What these results show is
that we treat the second categorical conditional as probabilistic, viz.
∀x. (Bird(x) →^L1 HasUlnarArtery(x)) →^L2 ∀y. (Ostrich(y) →^L3 HasUlnarArtery(y))
and argue in the following way. Whatever general features are distributed through
birds are distributed to a higher extent in those that are more typical, like robins, and
to a lesser extent in those that are less typical, like ostriches. This is just what it means
to be “typical”. Since the distribution is higher in birds taken as a whole than in
ostriches, it is easier to assent to the inference from robins to birds than from robins to
ostriches. This suggests that we do not generally give any special significance to “all”,
and by any pragmatic standard there is no reason why we should since what we need
most is guidance on what to expect.
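A toy calculation may make the typicality explanation vivid. The numbers below are invented purely for illustration; the only point is that a feature carried over at a high rate to typical birds is thereby distributed through birds as a whole at a higher rate than through an atypical subclass such as ostriches.

typical_share = 0.9    # assumed proportion of birds that are robin-like (typical)
rate_typical  = 0.95   # assumed rate at which a robin-borne feature extends to typical birds
rate_atypical = 0.5    # assumed (lower) rate for atypical birds such as ostriches

rate_birds_overall = typical_share * rate_typical + (1 - typical_share) * rate_atypical
print("distribution through birds as a whole: %.2f" % rate_birds_overall)
print("distribution through ostriches alone:  %.2f" % rate_atypical)
# The first figure exceeds the second, so assent to "robins have it, therefore birds
# have it" comes more easily than assent to "robins have it, therefore ostriches have
# it" -- which looks fallacious only if the "all" is read non-probabilistically.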
A similar analysis seems to work for what is called premise non-monotonicity
(Osherson et al. 2008, 325):
Flies require trace amounts of magnesium for reproduction
Bees require trace amounts of magnesium for reproduction
is assented to more readily than
Flies require trace amounts of magnesium for reproduction
Orangutans require trace amounts of magnesium for reproduction
Bees require trace amounts of magnesium for reproduction
After adding a premise, the inference is sometimes withdrawn; hence it is non-monotonic. Normally, adding information should leave the strength of an argument
the same or make it stronger; at first sight it seems odd that the argument becomes less strong.
Again, the issue is typicality. Flies are typical insects, so any feature distributed
through flies will be expected to be distributed to a lesser extent through insects as a
whole. But the information about orangutans implies that the distribution through the
population of insects may not be relevant, and it is the distribution through some
larger class that is relevant to the inference – a class, moreover, of which the fly is not
typical. Therefore, the strength of the inference is downgraded accordingly. The point
is that talk of differing distributions through populations makes no sense at all if the
“all” is taken in the way familiar from syllogistic reasoning rather than
probabilistically.
It should be noted that the arguments above argue from the general to the
general. It was these types of arguments that caused us to abandon LC1 for LC2, and
now they seem to pose a problem for LC2 as well because they look like inductive
arguments, are evaluated as if they are inductive arguments, and have the form of type
(iv) counter-examples, but LC2 says that they are deductive arguments. I will argue in
the next section that they are deductive because circular. It will be shown that the
answer to Objection 1 will also answer this problem.
What is a prediction?
This leads us finally to Objection 1. I have already more or less said what the problem
is here, which is whether the modal reading can get around the reference-class
problem. There are some situations where it can, namely those where the requirement
of total evidence is satisfied, i.e., where the premises have all the evidence possibly
relevant to the conclusion. For instance, if in CE-CE* being an A and being a C are
the only factors affecting the likelihood of being a B then it is a simple mathematical
calculation to determine the likelihood of Xs being B given the ratios of their
observed instances in A and C, and the argument is once again deductive. Possibly it
is this that Weddle (1979, 3) alludes to when he asks what “prevents [the arguer] from
providing the conclusive grounds of deductive arguments? Now of course poor
arguments called inductive, based on insufficient evidence, will give only some
grounds for their conclusions. But is this the case for the careful ones?” If the
argument is good, then the “hedging” Weddle suggests seems admissible.
Freeman (1983) objects that the “hedged” modal reading does not reflect what
is really being stated and gives the example of John, who is usually happy but not too
keen on parties, about whom it is said “If John comes, he is usually unhappy about
something”. The “usually” is better read as stating something about the relation
between John’s coming to the party and his being unhappy, than about his being
unhappy simpliciter. This may be so, but it is not clear how much of a difference there
really is here. Suppose that we are at the party and we see John enter. Accepting the
conditional premise, we draw the conclusion, in the absence of any information to the
contrary, that John is unhappy, and the same conclusion is drawn on both the modal
and the metalinguistic reading. The issue is really that the modal reading depends, as
the metalinguistic reading does not, on additional information that we may not have.
We may be able to fill in all the unexpressed premises so that the requirement
of total evidence is satisfied, but why should we assume this?9 Note that what we are
talking about when we consider unexpressed premises is not the mere logical
possibility of filling them in in such a way as to make the argument deductively valid
– this is always trivially possible, e.g., by making the unexpressed premise a
contradiction from which anything at all can be entailed. To not express something is
an aspect of performance, perhaps a ‘negative’ speech act; it is pragmatic rather than
logical. Why should we even assume (if Cartwright is right and some laws are
inherently ceteris paribus) that it is satisfiable even in principle? Why should we
assume that the inductive argument is a good one? I maintain that we need support for
any assertion that total evidence has been supplied, but getting this support is just as
difficult, if not more so, than the original argument was to evaluate without it.
Freeman’s approach is different though. He seems to challenge the very
intelligibility of the modal reading, on the grounds, I think, that it amounts to a
singular probability. The choices for a modal reading are between subjective
probability and relative frequency, so if these are shown to be unintelligible as
singular probabilities then the metalinguistic reading, in which this singular
probability is a logical probability, is the only live option and wins by default. I will
not be discussing subjective probability but only Freeman’s account of relative
frequency.
The relative frequency theory works by extrapolating from a finite series to an
infinite series. Of course, a person may have more or less confidence that what he has
observed so far is a representative sample, i.e., has converged on the same value as
the infinite series will in the long run, but I will argue that this has nothing to do with
the singular probability in question, which is what is predicted (the object), but only
with whether he is entitled to utter that prediction (the act); it is an assertibility or
felicity condition. The object of prediction is explained by the relative frequency
theory as being elliptical for a restatement of the results of our frequency series.
Reichenbach insists that we need only a class-meaning for probability
statements, and that a singular probability statement has the same class-meaning as its
associated statement about the class. Although such statements are not, for
Reichenbach, true or false, we can deal with them as if they were true or false. What I
think this leads towards, although it is never explicitly stated by Reichenbach, is that
singular probability statements have the same meaning as, but different performative
functions from, the equivalent statements about the class. The performative function of
a prediction(-act) is to get the listener to take the prediction(-object) as being true and
a guide for action; when the meteorological office tells you that it is likely to rain
tomorrow, then you had better not leave home without an umbrella. However, the
meaning of the prediction is nothing more than the series of observations known to
the meteorological office concerning the frequency of rain in different reference-classes of relevant conditions.10 To give a simple example, the argument “This is a
fair die; therefore, the next throw will probably be greater than a two” says no more
than “This is a fair die”, although it might direct the audience’s attention to “look out”
for numbers greater than two, despite the fact that logically speaking looking for these
numbers is no different from looking for any of the other possible outcomes. A
prediction is more like a promise than a statement.
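The arithmetic of the die example illustrates this, using nothing beyond the equiprobability built into “fair”:

P(next throw > 2 | the die is fair) = |{3, 4, 5, 6}| / |{1, 2, 3, 4, 5, 6}| = 4/6 = 2/3

The value 2/3 is fixed entirely by the reference class described in the premise; uttering the prediction directs the hearer’s attention but adds no further information.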
A similar analysis holds for
Flies require trace amounts of magnesium for reproduction
Bees require trace amounts of magnesium for reproduction
The only reason for making the inference from flies to bees is that they are both
insects; hence we should add the unexpressed premise to give
Flies require trace amounts of magnesium for reproduction
Insects require trace amounts of magnesium for reproduction
Bees require trace amounts of magnesium for reproduction
along with background information (ultimately in the form of frequency series) that
flies and bees are both subsets of insects, and what it means for something to be
typical. The conclusion says no more than this when looked at, as the logician does, in
the context of an argument-as-object. In the context of an argument-as-act, the speech
act of uttering a prediction has all kinds of conditions of satisfaction that are unrelated
to its truth, and has all kinds of effects. Toulmin (2003) correctly notes these, but errs
in making this a part of the argument.
Let’s take stock of where we have got to. We were considering a criterion
(LC2) that claimed that all arguments that argued from general premises, whether
these were universal or probabilistic, were deductive arguments, along with
arguments from the particular to the particular. The only arguments that are inductive
argue from particular premises to general statements, i.e., they have the logical forms
F(a) ∧ G(a)                    F(a) ∧ G(a)
F(b) ∧ G(b)                    F(b) ∧ G(b)
F(c) ∧ G(c)                    F(c) ∧ G(c)
∀x. F(x) → G(x)                ∀x. F(x) →^L G(x)
On the left, a non-probabilistic relation is claimed on the basis of the evidence, while
on the right a probabilistic relation is claimed on the basis of the evidence. A
probabilistic relation can obviously be claimed when there is counter-evidence, e.g.,
F(d) ∧ ¬G(d) or ¬F(d) ∧ G(d). Possible counter-examples were considered and
rejected. LC2, then, makes the deductive/inductive distinction, and the deductively
valid/deductively invalid distinction was never in question. What remains to be made
is the distinction between inductively valid and inductively invalid arguments.
When is an inductive argument valid?
What do we want from the concept of inductive validity? The same thing that we
wanted from the concept of deductive validity, which is to be able to say that a
person’s belief that some proposition p is true is a good reason for their believing that
some other proposition q is true. Such an argument is an argument from sign, and the
requirement is clearly satisfied if we believe that the conditional “If p, then q” is true. The truth of this conditional
gives us an incontrovertible warrant for inferring q. But is the belief in the truth of the
conditional justified? When the belief in some singular propositions gives us a good
reason for believing that the conditional is true, we say that the argument is
inductively valid to some degree and that the truth of some singular proposition is a
good sign of the truth of the conditional, after which we can say that the truth of the
antecedent (in some particular substitution-instance) is a good sign of the truth of the
consequent. The converse is not typically true, and it is usually further inductions that
tell us which way round to write the conditional – being a raven is a good sign for
being black, but being black is not a good sign for being a raven. Something being a
sign for something else is always in virtue of those things matching a description, and
for this we need predicate logic.11
We can talk not only about one thing being a sign for another in virtue of their
descriptions, but also about one thing cohering with another in virtue of their descriptions. I will
not attempt a full-blown account of coherence here. I suggest that the more coherent a
set of propositions is, the greater the set of questions to which it can give
unambiguous answers. Obviously, singular propositions can only answer questions
concerning the objects named in them; they only cohere with each other in the
negative sense that they must be logically compossible in not making contradictory
statements about the same object. My observation of a black raven does not support
your observation of a black raven directly, but only in so far as they confirm an
empirical generalization that automatically generates answers to questions about ravens.
The set of propositions gains coherence in being able to answer questions even
about unobserved ravens. Often it is held that one proposition cannot support another
unless at least one of them is an observation statement or inferred from an observation
statement. I disagree. A logical consequence (such as a substitution-instance) of a
generalization supports that generalization even as the generalization supports the
consequence. It is qua logical consequence of the generalization that my observation
statement coheres with the set of propositions as a whole, and it is qua a contradiction
of a logical consequence of the generalization that my observation statement makes
the set of propositions as a whole incoherent.
Questions are our instruments for measuring coherence and have different
values. Coherence, I suggest, is connected to what Bromberger (1992, 152) calls the
“Machian” or “added” value. Questions have high values because their answers
enable us to answer other questions and to generate more questions. Our scientific
theories have such great coherence, in my sense, because the questions they answer
have this kind of value. This is especially the case where quantitative causal laws are
used. For example, from the equation for the period of a pendulum an answer to the
question “What is the length of this pendulum?” will also provide an answer to “What
is the period of this pendulum?” and vice versa. This is true not only for this
pendulum, but for any pendulum influenced only by gravity (Bromberger 1992, 138).
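The interdefinability in Bromberger’s example can be made explicit with the standard small-angle formula for the period of a simple pendulum (the formula is elementary physics, not something given in the text):

T = 2π√(L/g), and equivalently L = gT²/(4π²)

so, with g known, an answer to “What is the length of this pendulum?” fixes an answer to “What is the period of this pendulum?” and vice versa, which is just the mutual answerability described here.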
Incoherence arises when we get different answers for a question. This could be
because a question is answered by more than one generalization and these answers are
different12 or it could be because of an observation statement: the raven is white, the
period of the pendulum has been measured and is not what was expected. What are
we to do when this happens? One fairly obvious move, which is however rarely made,
is to change from a categorical to a probabilistic conditional. We are not often
prepared to do this when we are dealing with causal sequences and theoretical kinds –
we shy away from genuinely indeterministic laws of nature. But we are at least
equally reluctant to reject the observation, since this would imply that the presence of
a black raven only caused me to have the impression of a black raven in a certain
percentage of cases, even in good light etc., and in other cases caused me to have an
impression of a white raven. The reliability of our senses (whether assisted or
unassisted by instruments like microscopes) is our best confirmed empirical
hypothesis and the one we always appeal to in the final analysis. So, for the sake of
argument we can rule out the move from ∀x. R(x) → B(x) to ∀x. R(x) →^L B(x),13 and
the move from R(a) ∧ ¬B(a) to R(a) ∧ B(a).
There are two other options. We could say that ravens cause a tendency for an
observer to have an impression of blackness, which tendency may be interfered with,
and the answer the generalization gives is the answer that would be true in the
absence of this interference. We see this more with theoretical kinds and causal laws:
magnets cause pieces of iron to tend to move towards the magnet, and an object has a
tendency to move in a straight line at constant velocity even when it isn’t so
moving. By modifying the predicate (“attracts iron”) in this way, the observation (of a
piece of iron not moving towards a magnet) no longer falsifies the law.
Properly formulated laws will already be interpreted this way, so this may not
help us. In this eventuality, we simply deny that what we saw was a raven, i.e., we
move from R(a) ∧ ¬B(a) to ¬R(a) ∧ ¬B(a), again avoiding falsification of the
generalization. If we have what seems like a magnet and yet it does not attract iron,
then we do not think that the law “All magnets attract iron” is false, or probabilistic,
but that what we have is not really a magnet but only seems to be. At this point the
empirical law has become more like an analytic statement or convention. I suggest
that at this point the set of propositions are well on their way to being strongly
coherent.
What I am trying to get to is an objective correlate in terms of coherence of
that point in our process of inquiry when we are not prepared under any circumstances
to give up a general statement. I formulate this as follows:
(Inductive Validity) An inductive argument is inductively valid if our model of the
world (the set of all accepted propositions) would still be more coherent with the
generalization (the conclusion of the argument) than without it, even if this meant
denying that there is anything currently instantiated in the world that matches its
descriptions.
To put it slightly differently, the coherence has become independent of the universe of
discourse; it is rigid in the sense that propositional logic, being a logic of meanings
that apply in any possible world, is rigid but extensional logic, tied to an extensional
interpretation of its terms in some particular world, is not. It is not only this raven that
is not a raven, but nothing that we identified as a raven was really a raven, and not
only this magnet that is not a magnet, but nothing that we identified as a magnet was
really a magnet.
This is a sufficient condition of inductive validity, but I do not claim that it is a
necessary one – I think it is possible that there are different norms of inductive
validity, and there may not be any non-disjunctive set of conditions that will be both
necessary and sufficient. Philosophers who hold that there are non-demonstrative
arguments like abductive and conductive arguments that are neither deductive nor
inductive may take heart from this and incorporate their views as different norms of
inductive validity instead of different forms of argument. The only problem would be
if they were to find a form of argument that did not seem in one way or other to be an
argument from sign.
Conclusion
Is the argument-sexer a proto-logician or a proto-psychologist? I have argued, and I
suspect those taking the psychological approach would not deny, that he is taking a
logistical approach, that he looks at the argumentation as the finished product rather
than as the process from which the product emerges. This is a very practical decision,
because in many cases the product is all that he has to go on. Perhaps the defender of
the speaker-determined thesis would say that this is a false consciousness, that the
sexer’s better instincts are being subverted by the hegemony of classical logic. This
could be true; the hegemony of Aristotelianism held back scientific progress for
centuries. But it is not the most likely explanation. The most likely explanation is that
the sexer’s judgments are right in the majority of cases – that a deductively valid
argument is a deductive argument – and whatever account of the deductive/inductive
distinction we adopt should be able to account also for this fact.
The rival criteria for the deductive/inductive distinction are as follows.
Inductive arguments move from the particular to the general. Everything else is a
deductive argument. This implies that all arguments from the general to the particular
are deductive. Some arguments that look inductive because they use words like
“probably” are actually deductively valid, but is this true of all of them or is there a
further distinction to be made within this class of arguments? It is here that I believe
looks are deceptive, and it has been my primary aim in this paper to justify the
decision to call all such arguments deductive.
Alternatively, the speaker determines what kind of argument it is. If the
speaker thinks that his premises establish his conclusion conclusively then it is a
deductive argument. The speaker cannot be mistaken about what kind of argument he
is giving, although he may over- or under-estimate the actual strength of the relation
between his premises and his conclusion. He may offer an argument that is
deductively valid, but think that the support offered by his premises is only
inconclusive, in which case it is a deductively valid inductive argument. This frankly
revisionary theory is consistent within itself but is insufficiently motivated. There is
no need for its proponents to hijack the logical concept of argument and put into it things
that do not belong there in order to further their own genuine concerns. The place to model
psychological or epistemic factors is in the argumentation-structure.13
The logistic approach should only be abandoned, I believe, if it is proven to be
inadequate to the task of distinguishing arguments across the deductive/inductive and
valid/invalid distinctions in a way that is exhaustive and mutually exclusive. I have
argued that LC2 makes the deductive/inductive distinction, and the deductively
valid/deductively invalid distinction was never really in dispute.
Entire libraries could be written about what makes some inductive arguments
valid and others invalid. I have offered some very brief and speculative hints about
the way I think this research might continue. Firstly, I believe that inductive validity
might be a disjunctive concept, and that we should work from particular cases towards
sufficient conditions for their validity, rather than starting with a theory and working
from there for a unified account intended to work for all cases. I believe that the
argument-sexer is basically as sound in his inductive practices as he is in classifying
arguments. Few people really doubt that induction works – the philosophical problem
has always been to formalize the conditions under which it works, and to justify why it
works.
ENDNOTES
1. In Scottish law there is even a “not proven” verdict that they can deliver in these
kinds of cases. A stronger argument is required to find a defendant guilty than is
required to conclude that he is guilty.
2.
It is mechanical in the normative sense; it will tell you in every case what the
jury’s verdict should be. In real life, juries can engage in “nullification” and may
disregard, misunderstand, or misapply the rule. The rules are normally conveyed in the
judge’s instructions before the jury retires to consider its verdict. The judge can
also err by conveying the wrong rules or by conveying the right rules badly.
3.
Of course, contextual information can sometimes indicate what kind of rule or
norm is appropriate, but this says no more than that in a logic class the teacher
may say “Here are some deductive arguments. Prove them.”
4.
I owe these examples to an anonymous reviewer, who attributes the second
example to Skyrms.
5.
Not all proponents of the SDT privilege the act over the object, an anonymous
reviewer tells me. If so, then the interest of the SDT escapes me. Why should we
care whether an arguer thought he was offering a different kind of argument than
he actually gave unless it is on the grounds that by misspeaking in this way he
failed to successfully perform some speech act, that is to say, unless evaluation
of the argument leads to evaluation of the speaker?
6.
It may be one of the conditions of satisfaction of the speech act that the speaker
intends what he says to be taken in a certain way, but this is not the meaning of
what he says. Similarly, when I make a prediction it is one of the conditions of
satisfaction that I have a high degree of confidence that what I predict will turn
out to be true (a prediction which I believe will be falsified being a kind of
contradiction in conception) and that I intend the hearer to take my utterance in a
certain way, but the meaning of what I say is not that I have such confidence but
will rather refer to my evidence. This point will come up later when I discuss
predictions. I think it is wrong to give in to the temptation to put this the other way
around and claim that what I am saying is that I have this degree of confidence,
with my reasons or evidence for having that confidence being the
condition of satisfaction. I am entitled to assert something if I think it is true
even if it is not and even if I am epistemically blameworthy in thinking that it is.
7.
It seems to suggest also that the same relation can be attributed, and the same
words spoken, and yet the argument will be distinct. In a sense, this makes the
job of classification more difficult, because every token, although it may be
identical in all respects relevant to its evaluation, will have to be classified
separately.
8.
What are we to make of the predicament of the reasoner who does not notice?
His belief in the conclusion is propositionally justified in so far as it is the
conclusion of a logically valid argument. Not only is this argument accessible to
him, but he actually has it; hence, one would think that he is also doxastically
justified in believing what he does. However, his reason for thinking himself
doxastically justified is not in itself a good reason, but is based on bad reasons,
on logical fallacies. Does this kind of epistemic luck defeat his justification?
Does Descartes’ belief that he may have made a mistake in adding 2 and 2
defeat the justification for his belief that 2+2=4? It is beyond the scope of this
paper to discuss cases like this, but my opinion is that, given certain conditions,
what is defeated is not his first-order belief but his second-order belief that his
first-order belief is true.
9.
Yezzi (1992) suggests that we can include it as an assumption analogous to
assuming the meaning-invariance of the terms when we are dealing with
deductive arguments. But I don’t think that this is a fair analogy. A closer
analogy with total evidence would be if the domain were closed. But in such a
closed domain the fallacies of affirming the consequent and argument from
ignorance are not fallacies, e.g., if there is one and only one conditional with q
as a consequent and we know that q is true, then we know that the antecedent is
true also. Total evidence is a stronger assumption than anything connected to
deduction.
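To make the closed-domain point concrete, here is a schematic illustration of my own rather than Yezzi’s. Suppose the domain is closed in the sense that
     p → q
is the one and only conditional in it with q as consequent, and that any truth of q within the domain must be grounded in one of the listed antecedents. Then from p → q and q we may infer p: the form of affirming the consequent is truth-preserving inside that domain, and becomes fallacious only when the domain is opened and q may have grounds not listed among the conditionals.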
10.
Freeman (1983, 6) might object instead on more or less the same grounds on
which he objects to a subjective probability reading of the prediction:
[C]ould the 'likely' in the conclusion be interpreted as expressing a
purely subjective degree of actual belief? This seems unintuitive. For by
citing premises, reasons, isn't one trying to justify his conclusion
objectively and so give some objective evidence for his probability
statement? Is one merely suggesting how he came to hold a certain
belief? . . . When a weatherman says "it is likely to rain tomorrow,"
having just expressed his reasons, is he just expressing his subjective
degree of belief? This interpretation does not seem plausible.
It seems to me that Freeman has run together the expressing of an opinion and
the assertion that one has it. On the interpretation I am urging, the phrase
“objective evidence for his probability statement” is misleading; what the
probability statement states is the evidence, and his reasons are his beliefs that
this constitutes good evidence. Freeman complains that this is deductive but
trivial. This does not seem to be a good objection, since triviality is a
characteristic feature, and no defect, in a deduction. To reiterate, the various
things that a speaker might intend or hope to achieve in uttering a speech act are
not a part of its meaning, at least as far as logic is concerned.
11.
Although we have predicates, because they are interpreted extensionally those
predicates are mentioned but not really used in making inferences, and for this
reason I think first-order predicate logic does not give us what we want, which is
a notation for expressing relations between descriptions. Suppose that we
observe a black raven and symbolize this as R(a) ∧ B(a). All that this really says
is that “a” is being used as a name of an object that is in the intersection of two
sets of objects, but it doesn’t even imply that this is because it shares with the
other members of those sets some common feature, i.e., the property of being a
raven or of being black. Once you have named an object, the truth of any
propositions involving it can simply be read off from the universe of discourse.
What we want is to be able to say that believing something to be a raven is a
reason for thinking it to be black, even, I would say, in universes other than this
one, that is, even if the subject term does not successfully refer. We want to be
able to get back to meanings as in propositional logic but still be able to make
the subject/predicate distinction. Quantifying over predicates may be a means of
doing this, but for the purpose of my argument I will simply stipulate that a
relation between descriptions is being expressed.
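One way of displaying the contrast intended here, in notation introduced purely for illustration and doing no theoretical work of its own, is this. The first-order rendering
     ∀x (R(x) → B(x))
says only that, in the given universe of discourse, the set assigned to “R” is included in the set assigned to “B”. What is wanted instead is something of the form
     Supports(R, B),
read as “believing something to satisfy the description R is a reason for believing it to satisfy the description B”, a relation between the descriptions themselves, and one that can be asserted even of a universe in which nothing satisfies R.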
12.
This is basically the same as the reference class problem mentioned earlier.
13.
Since the relation between being a raven and being black is not a direct causal
relation, this move may not be so unlikely here. When two correlated effects can
be thought to have a common cause, or perhaps contribute to a common function
such as camouflage in insects or the correlation between sharp beaks and sharp
claws in birds of prey, we are more likely to take the probabilistic approach,
knowing that many things can interfere with a causal chain. It is a different
matter with theoretical kinds like pendulums and magnets whose definitions are
always partly functional and take their meanings from their use in the relevant
theories.
REFERENCES
Bowles, George. 1994. The deductive/inductive distinction. Informal Logic vol. 16 no. 3.
Bromberger, Sylvain. 1992. On what we know we don’t know. Chicago and London: The University of Chicago Press.
Freeman, James B. 1983. Logical form, probability interpretations, and the deductive/inductive distinction. Informal Logic vol. 5 no. 2.
Goddu, G. C. 2002. The ‘most important and fundamental’ distinction in logic. Informal Logic vol. 22 no. 1.
Hanson, N.R. 1961. Good inductive reasons. The Philosophical Quarterly vol. 11 no. 43.
Osherson, Daniel N., Smith, Edward E., Wilkie, Ormond, Lopez, Alejandro, and Shafir, Eldar. 2008. Category-based induction. In Reasoning: studies of human inference and its foundations. Ed. Adler, Jonathan E. and Rips, Lance J. New York: Cambridge University Press.
Reichenbach, Hans. 1938. Experience and prediction. Phoenix Books. Chicago: The University of Chicago Press.
Snoeck Henkemans, A. Francisca. 2001. Argumentation structures. In Crucial concepts in argumentation theory. Ed. van Eemeren. Amsterdam: Amsterdam University Press.
Toulmin, Stephen E. 2003. The uses of argument. New York: Cambridge University
Press.
Vorobej, Mark. 1992. Defining deduction. Informal Logic vol. 14 no. 2&3.
Weddle, Perry. 1979. Inductive, deductive. Informal Logic vol. 2 no. 1.
Wilbanks, Jan J. 2009. Defining deduction, induction, and validity. Argumentation. [Available online]. www.springerlink.com/content/142161k11u1366j5/?p=b71977cc717442238e27b7a894b999af&pi=0. Last accessed 13th November 2009.
Yezzi, Ron. 1992. Practical logic. Mankato: G. Bruno & Co.