Philosophers and linguists tend to agree that donkey sentences such as:
(1) If a farmer buys a donkey, he vaccinates it
are deeply problematic. What they tend to disagree about is how best to deal with them. My students sometimes wonder why (1) is thought to be problematic at all. It just says that for all farmers x and all donkeys y, if x buys y, then x vaccinates y, doesn't it? The problem with this reply is that it treats 'a farmer' and 'a donkey', which look like existentially quantified constructions, as universally quantified ones.
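Just to make the contrast explicit, here is the standard first-order rendering; nothing in it goes beyond the paraphrase above. The student's universal reading is

\forall x\,\forall y\,[(\mathrm{Farmer}(x) \wedge \mathrm{Donkey}(y) \wedge \mathrm{Buys}(x,y)) \rightarrow \mathrm{Vaccinates}(x,y)]

whereas translating the indefinites existentially, as their surface form suggests, yields

\exists x\,\exists y\,[\mathrm{Farmer}(x) \wedge \mathrm{Donkey}(y) \wedge \mathrm{Buys}(x,y)] \rightarrow \mathrm{Vaccinates}(x,y)

in which the occurrences of x and y in the consequent fall outside the scope of the existential quantifiers.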
There are numerous alternative proposals in the literature. An old and famous one is David Lewis'. On this view, (1) is to be understood as containing an implicit adverb of quantification (e.g. 'always' or 'generally'), which binds both variables. The adverb is a so-called unselective quantifier: it quantifies over pairs, triples, or whatever (in this case, over pairs of a farmer and a donkey standing in certain relations).
Lewis' view runs into trouble. Consider:
(2) If a farmer buys a donkey, he usually vaccinates it.
Given Lewis' view, (2) is true iff for most farmer-donkey-bought-by-farmer pairs, the farmer in the pair vaccinates the donkey. But consider now a scenario with 31 farmers and 130 donkeys. One rich farmer buys exactly 100 donkeys and vaccinates all of them. The 30 poor farmers buy exactly one donkey each but do not vaccinate it. There are then 100 farmer/donkey pairs (of the right sort) in which the farmer vaccinates the donkey, and 30 farmer/donkey pairs (of the right sort) in which he does not. Since most farmer-donkey-bought-by-farmer pairs are such that the farmer in the pair vaccinates the donkey, Lewis' view predicts that (2) is true. But intuitively, it is false. This is the so-called 'proportion problem'.
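For readers who like to see the tally written out, here is a minimal sketch of the scenario just described; the counts are exactly those in the text, and the variable names are mine.

# Lewis-style unselective reading of (2): true iff most <farmer, donkey-he-bought>
# pairs are pairs in which the farmer vaccinates the donkey.
vaccinated_pairs = 1 * 100        # the rich farmer's 100 donkeys, all vaccinated
unvaccinated_pairs = 30 * 1       # one unvaccinated donkey per poor farmer
total_pairs = vaccinated_pairs + unvaccinated_pairs

lewis_predicts_true = vaccinated_pairs > total_pairs / 2
print(total_pairs, vaccinated_pairs, lewis_predicts_true)   # 130 100 True, yet (2) is intuitively false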
There is also the so-called D-type account (defended by numerous people, including Stephen Neale). On this view, donkey pronouns go proxy for numberless descriptions. For example, in (1) the 'it' goes proxy for 'the donkey(s)'. The D-type approach predicts that (1) is equivalent to 'every farmer who buys a donkey vaccinates every donkey he buys'.
In my talk at the Central Meeting, I argued that a problem arises for this account (Neale acknowledges that this is a problem in a footnote in an article from 1990, and if I remember correctly, he also refers to a personal correspondence he had with Heim).
Consider a different scenario in which the only 10 farmers each buy 10 donkeys and each vaccinate 9 of them. The majority of informants report that (1) is true in these circumstances. But the D-type approach predicts that it should be false. A similar problem arises in the following case:
(3) If a farmer buys a donkey, he sometimes vaccinates it.
The majority of informants report that (3) is true if the only 10 farmers buy 10 donkeys each and vaccinate 3 of them.
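A quick way to see the mismatch for (1) is to compute the D-type paraphrase directly in the first scenario. The data structures and names below are mine, purely for illustration.

# Scenario: the only 10 farmers each buy 10 donkeys and vaccinate 9 of them.
farmers = range(10)
bought = {f: {f"d{f}_{i}" for i in range(10)} for f in farmers}
vaccinated = {f: {f"d{f}_{i}" for i in range(9)} for f in farmers}   # 9 of the 10

# D-type paraphrase of (1): every farmer who buys a donkey vaccinates
# every donkey he buys.
d_type_reading = all(bought[f] <= vaccinated[f] for f in farmers if bought[f])
print(d_type_reading)   # False, yet most informants judge (1) true in this scenario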
I argued that we need to treat donkey pronouns as plural definite descriptions (using plural variables, in line with Boolos), and I then appealed to an earlier paper of mine in which I argued that plural definite descriptions function as partitive constructions at a subsequent level of analysis [my paper is forthcoming in Mind and Language, and also contains a precis of the donkey proposal].
I then said that the force of the partitive construction will depend on the monotonicity properties of the adverb (or of the initial quantifier in the case of relative-clause donkey sentences). As a rule, the partitive that goes proxy for the pronoun inherits the force of the adverb of quantification (or initial quantifier) when that quantifier is upward-entailing in the VP. When the quantifier is downward-entailing in the VP, the partitive has existential force. I also said that this cannot be the whole story, because of so-called weak and strong readings. I won't go over that again.
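As a rough sketch of that rule (and only the rule; weak and strong readings are set aside, as noted), one can think of it as a function from the quantifier's force and its monotonicity in the VP to the force of the partitive. The labels are my own shorthand, not notation from the paper.

def partitive_force(quantifier_force, monotonicity_in_vp):
    """Force of the partitive the donkey pronoun goes proxy for (sketch only)."""
    if monotonicity_in_vp == "up":
        return quantifier_force        # partitive inherits the quantifier's force
    if monotonicity_in_vp == "down":
        return "at least one of"       # existential force
    return None                        # the rule as stated is silent about non-monotone cases

print(partitive_force("all of", "up"))    # 'always'/'every' -> 'all of the donkeys he buys'
print(partitive_force("most of", "up"))   # presumably 'usually' -> 'most of the donkeys he buys'
print(partitive_force(None, "down"))      # 'never'/'no' -> 'at least one of the donkeys he buys'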
Before continuing let me just summarize how to test for monotonicity.
Right-upward monotonic (upward-entailing in VP, or just upward-entailing; e.g. 'some', 'most', 'almost all', 'every', 'all'):
D Fs are Gs
All Gs are Hs
Therefore, D Fs are Hs

Left-upward monotonic (upward-entailing in NP; e.g. 'not every', 'some'):
D Fs are Gs
All Fs are Hs
Therefore, D Hs are Gs

Right-downward monotonic (downward-entailing in VP, or just downward-entailing; e.g. 'no', 'not every', 'at most 3'):
D Fs are Gs
All Hs are Gs
Therefore, D Fs are Hs

Left-downward monotonic (downward-entailing in NP; e.g. 'every'):
D Fs are Gs
All Hs are Fs
Therefore, D Hs are Gs

Non-monotone (neither; e.g. 'exactly 3').
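For concreteness, these tests can also be run mechanically over a small finite model. The following is my own sketch, not anything from the paper; the determiner clauses are the usual generalized-quantifier definitions, and only the second (VP) argument is varied.

from itertools import combinations

DETERMINERS = {
    "some":      lambda F, G: len(F & G) > 0,
    "every":     lambda F, G: F <= G,
    "most":      lambda F, G: len(F & G) > len(F - G),
    "no":        lambda F, G: len(F & G) == 0,
    "not every": lambda F, G: not F <= G,
    "at most 3": lambda F, G: len(F & G) <= 3,
    "exactly 3": lambda F, G: len(F & G) == 3,
}

def subsets(universe):
    xs = list(universe)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def right_monotonicity(det, universe):
    """Is the truth of 'D Fs are Gs' preserved when the VP set G is enlarged
    (right-upward) or shrunk (right-downward)?"""
    up = down = True
    sets = subsets(universe)
    for F in sets:
        for G in sets:
            if not det(F, G):
                continue
            for H in sets:
                if G <= H and not det(F, H):   # 'All Gs are Hs', yet 'D Fs are Hs' fails
                    up = False
                if H <= G and not det(F, H):   # 'All Hs are Gs', yet 'D Fs are Hs' fails
                    down = False
    if up and not down:
        return "upward-entailing in VP"
    if down and not up:
        return "downward-entailing in VP"
    if up and down:
        return "both (trivial on this universe)"
    return "non-monotone in VP"

for name, det in DETERMINERS.items():
    print(name, "->", right_monotonicity(det, {1, 2, 3, 4, 5}))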
What I want to focus on here is Jessica Rett's most interesting example:
(4) Not every farmer who owns a donkey beats it.
Rett took me to be saying that when a quantifier is upward-entailing in the NP, the partitive inherits its force from the quantifier, and that when it is downward-entailing in the NP, the partitive is existential. But that can't be right, as 'every' is not upward-entailing in its NP.
Anyway, if we follow me in taking the partitive to have existential force when the quantifier is downward-entailing in its VP, and if we follow Rett in taking 'not every' to be a quantifier construction, then we should expect (4) to be equivalent to:
(5) Not every farmer who owns a donkey beats at least one of the donkeys he owns.
This may or may not be a good reading of (4). I will get back to this later.
An alternative is to say that 'not every' is not a quantifier construction at all. We might treat 'not' as a sentential operator. 'Every' is then the quantifier, and the relevant quantifier in (4) is then upward-entailing in its VP. So, we'd get:
(6) Not every farmer who owns a donkey beats all the donkeys he owns.
So which one (if any) is the best reading of (4)?
I have now pilot-tested this on students. The majority actually got (5) as the most natural reading. Here is how I tested it (thanks to Barbara Abbott for helping me think through this -- most of what follows is from a correspondence with her, and for the most part I am using her formulations -- all mistakes are mine).
If 'not' is a sentence negation, then
(4) Not every farmer who owns a donkey beats it.
should be true iff the following is false:
(7) Every farmer who owns a donkey beats all the donkeys he owns.
E.g. if one farmer beats all but one of his donkeys (and every other farmer beats all of his), then (4) would be true on this reading.
On the other hand, if 'not every' is a quantifier, then (4) should be true iff (5) is true (since 'not every', as a quantifier, is downward-entailing in the VP):
(5) Not every farmer who owns a donkey beats at least one of the donkeys he owns.
(5) would not be true in the circumstances given above (where every farmer but one beats all of his donkeys, and that one farmer spares just a single donkey). Instead, we would need at least one farmer who spares all of his donkeys, not just one of them.
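Here is a small sketch of that test case with both candidate readings spelled out set-theoretically; the farmer and donkey names are made up, but the pattern is the one just described (one farmer spares a single donkey, the rest spare none).

owns = {
    "farmer1": {"d1", "d2", "d3"},
    "farmer2": {"d4", "d5"},
    "farmer3": {"d6", "d7"},
}
beats = {
    "farmer1": {"d1", "d2"},   # spares d3 only
    "farmer2": {"d4", "d5"},   # beats all of his
    "farmer3": {"d6", "d7"},   # beats all of his
}

def reading_5(owns, beats):
    """(5): not every donkey-owning farmer beats at least one of the donkeys he owns."""
    return not all(len(owns[f] & beats[f]) >= 1 for f in owns if owns[f])

def reading_7(owns, beats):
    """(7): every donkey-owning farmer beats all the donkeys he owns."""
    return all(owns[f] <= beats[f] for f in owns if owns[f])

print("Quantifier reading of (4), i.e. (5):", reading_5(owns, beats))                 # False
print("Sentence-negation reading of (4), i.e. not-(7):", not reading_7(owns, beats))  # True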
In testing this I presented students with sentence (4):
(4) Not every farmer who owns a donkey beats it.
and asked: If one farmer beats all but one of his donkeys (and all the other farmers beat all of theirs), is (4) true? The majority answered 'no'.
So, interestingly, if my proposal is right, then it is indeed the monotonicity properties of 'not every' that matter in this case, and not the monotonicity properties of 'every'. That is, (4) comes out as 'not every farmer who owns a donkey beats at least one of the donkeys he owns'.
Thanks to Barbara Abbott, Jessica Rett, and the audience.
Reflections on Donkey Sentences
Posted by Brit Brogaard, Thursday, May 03, 2007, at 1:06 PM
Labels: Conferences, Language, Papers
7 comments:
Brit,
I haven't thought about donkey science in a while, but I recall that in the work of Makoto Kanazawa (L&P 17.2, 1994) there are connections drawn between the monotonicity properties of the main quantifiers and the interpretation of donkey pronouns. See also Bart Geurts' more recent paper on Donkey Business (L&P 25.2, 2002).
Hi Kai,
Thanks for the references! I will check them out.
Hello, Brit:
When I commented on the paper I was under the impression that you were representing donkey pronouns as partitives in the syntax. But in the discussion that followed it came out that you analyze donkey pronouns as partitives only at some pragmatic level. (Is that right?)
The question, then, is how such a strong dependency between (upward-monotonic) quantifiers and the corresponding partitive is established in the pragmatics (I think this was Jason's point, too). If there is no syntactic relationship between the matrix quantifier/adverb of quantification and the partitive, how does the one value the other?
Best,
Jessica
Hi Jessica,
Thanks for your comments! I think Jason's criticism concerned "free pragmatic enrichment" of the sort posited by e.g. Kent Bach (impliciture) and Scott Soames in his latest work (assertoric content as opposed to semantic value/content). In my talks at the Eastern and Central Meetings I was advancing something like Scott Soames' line with respect to plural definite descriptions. So I was following Neale and others in taking D pronouns to go proxy for non-singular definite descriptions (in my case: plural definites) but was taking plural definites to function as partitives at a subsequent level of analysis. Zoltan Szabo was happy with the general idea underlying this proposal but was arguing that (i) plural definites have existential force, and (ii) what I call a 'subsequent level of analysis' is a pragmatic level rather than a level of assertoric content. When I revised the paper after the Eastern Meeting I didn't really push the semantic line but was open to the possibility that it is in fact pragmatic.
I think the view that the difference between weak and strong readings cannot be captured in the syntax is rather uncontroversial (though in his comments at the Eastern, Zoltan did offer a proposal that would allow for a partitive structure in the syntax -- I am not radically opposed to that idea, but I am not yet convinced that the partitive structure is syntactically real; maybe I am too influenced by semantic minimalists and Greg Carlson's dissertation).
Furthermore, some of the latest research in cognitive science shows that monotonicity profiles play a crucial role in human reasoning. It has been argued, for instance, that natural reasoning utilizes monotonicity profiles, and that reasoners find it easier to perform inferences with certain kinds of profiles (despite there being no relevant difference in the syntax). Monotonicity properties are not syntactic properties but logical properties (or perhaps semantic ones, depending on how you define 'semantic'). So it would not be surprising if monotonicity profiles were utilized at 'subsequent levels of analysis'. When confronted with a donkey sentence, we first "assign" semantic values to its constituents. So the occurrences of 'it' in 'every person who had a credit card paid the bill with it' and 'every person who had a credit card kept it in a safe place' are assigned the same semantic values, but our background info about credit cards then forces different partitive readings. Another relevant proposal in this regard is Jeff King's context-dependent quantifier approach to anaphora. This proposal also utilizes monotonicity profiles (but in a rather different way).