Evidence of Dependency Length Minimization.

Richard Futrell, Kyle Mahowald, and Edward Gibson have a new paper that looks intriguing: “Large-scale evidence of dependency length minimization in 37 languages” (published online before print in PNAS, August 3, 2015, doi: 10.1073/pnas.1502134112); the “Significance” box says:

We provide the first large-scale, quantitative, cross-linguistic evidence for a universal syntactic property of languages: that dependency lengths are shorter than chance. Our work supports long-standing ideas that speakers prefer word orders with short dependency lengths and that languages do not enforce word orders with long dependency lengths. Dependency length minimization is well motivated because it allows for more efficient parsing and generation of natural language. Over the last 20 y, the hypothesis of a pressure to minimize dependency length has been invoked to explain many of the most striking recurring properties of languages. Our broad-coverage findings support those explanations.

There are popularized accounts of it by Michael Balter at Science (“All languages have evolved to have this in common”: “All [languages] have evolved to make communication as efficient as possible”) and by Cathleen O’Grady at Ars Technica (“MIT claims to have found a ‘language universal’ that ties all languages together”: “The idea is that when sentences bundle related concepts in proximity, it puts less of a strain on working memory”); thanks go to Paul, Trevor, and Peter for the various links. I’ll be curious to see what the folks at Language Log have to say when they get around to covering it.
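
For the curious, here is what the measure means in concrete terms. A sentence’s dependency length is the sum of the linear distances between each word and its syntactic head, and “shorter than chance” means that attested word orders come in under what random reorderings of the same dependency tree would give. Below is a minimal sketch in Python; it is not the authors’ code (their baselines are more careful; as I understand it, random projective reorderings of parsed treebank sentences), but the idea is the same:

    import random

    def total_dependency_length(heads):
        """Sum of |dependent - head| distances over all words.

        heads[i-1] is the (1-based) position of word i's head;
        the root's head is 0 and contributes nothing.
        """
        return sum(abs(i - h) for i, h in enumerate(heads, start=1) if h != 0)

    # "The dog chased the cat", analyzed as
    # the->dog, dog->chased, chased = root, the->cat, cat->chased:
    heads = [2, 3, 0, 5, 3]
    observed = total_dependency_length(heads)  # 1 + 1 + 1 + 2 = 5

    def shuffled_length(heads, rng):
        """Dependency length of the same tree under a random word order."""
        order = list(range(1, len(heads) + 1))
        rng.shuffle(order)
        pos = {word: slot + 1 for slot, word in enumerate(order)}
        return sum(abs(pos[i] - pos[h])
                   for i, h in enumerate(heads, start=1) if h != 0)

    rng = random.Random(0)
    baseline = sum(shuffled_length(heads, rng) for _ in range(10000)) / 10000
    print(observed, baseline)  # 5 vs. roughly 8 for this toy tree

The paper’s finding, in these terms, is that across 37 treebanks the observed totals sit reliably below such random baselines.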

Comments

  1. A lot has been written about a tendency in languages to place words with a close syntactic relationship as closely together as possible. Richard Futrell, Kyle Mahowald, and Edward Gibson at MIT were interested in whether all languages might use this as a technique to make sentences easier to understand.

    The idea is that when sentences bundle related concepts in proximity, it puts less of a strain on working memory.

    What’s the claim: close syntactic or close semantic relationships? The article seems to be written to illustrate the fact that incoherent sentences put a strain on working memory. But we already knew that.

  2. Well, it looks as if Michael Balter, at least, didn’t do his homework: “… a much smaller number, such as Arabic and Hebrew, use the VSO order.” The Austronesian language family is the largest in the world, and last I heard, those languages tend strongly toward VSO. (Just one of the things Chomsky didn’t know when he started, all of which I hope some day will be looked at.)

  3. @Ken Miner: Balter is correct that many fewer languages use VSO order than SVO or SOV. No amount of homework would have changed that.

  4. @Stu Clayton: sure, “we already knew that”. But lots of things we intuitively “all know” about speech are wrong or more subtle than we think. Quantifying it is important both to check whether it’s correct and to understand in more detail how it happens.

  5. The thing that irritates me about this isn’t the study: as Peter says above, nailing down the obvious is basic scholarly work. Sometimes the obvious is wrong, and sometimes the study yields new insights that weren’t obvious.

    The irritation is the overselling, as if this was a startlingly new insight.

  6. J.W. Brewer says

    I think the main article is paywalled, so I can’t see their explanation of exactly how they picked their sample of 37 “diverse” languages to work with. But 24 out of the 37 (I found a non-paywalled chart that shows them) are IE languages, and a decent number of the non-IE languages have had longstanding historical contact with IE languages, contact that I would be skeptical should be presumed to have had no impact whatsoever on the phenomena being studied here. Any claim about a universal tendency that does not actually study every single extant language needs a theory as to why the subset studied is a good cross-section of, and thus a proxy for, the larger whole, and . . . I’m skeptical. Of course this means someone else could pick a different 37 on rather different principles, with much less weight given to IE, and see whether they get a similar or different result.

  7. David Marjanović says

    9 % of all languages have VSO order, says the Pffft! of All Knowledge, citing sources and mentioning complications.

  8. Can we all just marvel for a moment at how probabilistic this proposed universal is? This coming from the camp that scoffs at experimentally observed gradation in, say, sentence acceptability, as being ‘just’ performance issues, not evidence about actual human linguistic competence.

  9. Just to clear up something that I suspect may be unclear to some of the commenters: the authors of the study are from MIT’s Department of Brain and Cognitive Sciences, not from MIT’s Department of Linguistics and Philosophy. So they’re not really from the “MIT” that so many of you loathe. If that helps you feel better about the study, makes you less confused about the ‘camp’ that they’re in, etc., then that’s probably good. Personally, as a member of MIT’s Dark Side, I think it’s an interesting study–but don’t let that bias you against it.

  10. It’s very odd to see this sold as evidence for anything Chomskian when the authors themselves (in the abstract, full text is hidden from public view) attribute the effect to characteristics/limitations of the brain as a whole and not a language-specific model of any sort.

  11. (model = module)

  12. Matt: Blame Ars Technica for that, not the authors. Statistical universals motivated by processing are pretty much orthogonal to the kind of universals Chomsky is looking for. This is more in the tradition of John Hawkins.

  13. Yeah, you’re right, it seems that only Ars Technica are invoking Chomsky, so presumably they came up with that interpretation themselves. (And yet the writer of that article “has a background in cognitive science and evolutionary linguistics”!)

  14. This paper looks at dependency length in classical Latin and Greek, and its evolution through time. I don’t think you can conclude anything, as this paper does, from so few data points without control for register. I fully agree, as anyone would who has ever tried to read any classical Latin, that “…languages do optimise dependency length to some extent as their dependency lengths are lower than random. However, they are also not too close to the optimal values.” The scrambling of dependent units in classical Latin is to me the hardest part about learning that language; it is, however, something that people can and do learn to do.
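
    To make the quoted “optimal values” concrete: the optimum for a given tree is whichever word order minimizes its total dependency length. Here is a brute-force sketch in Python; it is workable only for toy sentences, since it tries all n! orders (efficient algorithms for the true optimum exist, but this shows what is being measured):

        from itertools import permutations

        def length_under_order(heads, order):
            """Dependency length of a tree when its words appear in `order`."""
            pos = {word: slot + 1 for slot, word in enumerate(order)}
            return sum(abs(pos[i] - pos[h])
                       for i, h in enumerate(heads, start=1) if h != 0)

        # A five-word toy tree: heads[i-1] is the head of word i, 0 = root.
        heads = [2, 3, 0, 5, 3]
        best = min(length_under_order(heads, order)
                   for order in permutations(range(1, len(heads) + 1)))
        print(best)  # 4: each of the four dependencies can be made adjacent

    Attested orders typically land between the random baseline and that optimum, which is the pattern the quoted passage describes.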

  15. marie-lucie says

    I am annoyed by teleologically oriented papers or theories which seek to demonstrate that languages evolve in order to optimize/maximize/minimize/etc. some quality or other, rather than that speakers make use of language in order to effect modalities of communication which may (or may not) in turn lead to a change in permanent language features.

    Obviously, adults talking to a toddler tend to simplify their normal speech, but do not continue to do so throughout life, their own or the child’s. Competent speakers also simplify in trying to communicate with a foreigner ignorant of their language, and two-way simplification on the part of two language communities can lead to the creation of a pidgin, but such situations are different from the usual everyday exchanges between normally competent speakers of the same language.

    On the other hand, competent speakers can also deliberately seek to complexify for various purposes, such as entertainment (puns, “Pig Latin”, etc., which pre-teen children tend to be very good at), poetry, ceremonial speech, and other examples where minimization, simplification, or improving understanding is not considered a suitable goal: just the opposite, a speaker will be praised for displaying excellence and virtuosity in producing complex, often multilayered speech, and a listener for meeting the challenge of understanding it. Latin poetry is one example in which the listener has to be particularly careful to keep in mind and unscramble the possible dependencies between words deliberately kept by the author at some distance from each other and mixed up with other “broken” dependencies.

  16. Excellent points all, and nice to see you back!
