JC sent me a link to Randy J. LaPolla’s Forward to the Past: Modernizing Linguistic Typology by Returning to its Roots, adding “I have a lot of time for LaPolla”:
This paper argues that linguistic typology, and linguistics more generally, got off to a good start in the 19th century with scholars like Wilhelm von Humboldt and Georg von der Gabelentz, where the understanding was that each language manifests a unique world view, and it is important to study and compare those world views. This tradition is still alive, but was sidelined and even denigrated for many years due to the rise of Structuralism, which attempted to study language structures divorced from their linguistic and socio-cultural contexts. The paper reviews the understandings the early scholars had and points out their similarities with cutting edge current views in cognitive linguistics, construction grammar, and interactional linguistics, which had to be rediscovered due to the influence of Structuralism for so many years. It then argues that we should make linguistic typology (and linguistics more generally) more modern, scientific, and empirical by returning to our roots.
I confess I was taken aback by the start of the Introduction: “There is often an assumption that the comparative study of unrelated languages, i.e. linguistic typology, began with Joseph Greenberg in the 1960’s…” There is?! I guess linguistics has forgotten its history even more than I thought. At any rate, thanks, John!
Hmmm, looks like I’ve already downloaded it twice. I really should read it.
You actually did a post on LaPolla a few years ago, although I can’t find it at the moment.
I’ve never been able to come to grips with LaPolla’s linguistics, and this paper is no exception. He argues against “structure/structuralism”, suggesting instead something that appears vaguely unstructured and therefore hard to pin down. I must admit that I prefer a structured approach. You can accept the existence of “structure” as a starting point in understanding what lies beyond — in fact, it is precisely by pinning down structures that you can understand what they don’t cover.
While I have trouble coming to grips with LaPolla’s linguistics, just as I do “Functional” or “Constructional” linguists, I do find it far more congenial than Chomsky’s linguistics. As LaPolla points out: “Noam Chomsky is the most extreme…, denying the relevance of communication to language structure. Chomsky is in fact more Structuralist than many of its earlier proponents, such as Charles Hockett, not just in divorcing structure from use, but also in the non-empirical assumption that there is a rigid, closed system of language. Hockett, towards the end of his career, said, “Beyond the design implied by the factors and mechanisms that we have discussed, a language has no design. The search for an exact determinate formal system by which a language can be precisely characterized is a wild goose chase, because a language neither is nor reflects any such system. A language is not, as Saussure thought, a system ‘où tout se tient’. Rather, the apt phrase is Sapir’s ‘all grammars leak’.”
I am given to understand that neither Chomsky nor his followers have much good to say about Hockett.
where the understanding was that each language manifests a unique world view, and it is important to study and compare those world views
Is this a kind of anti-Sapir-Whorf, where you only make ways to talk about the things you want to talk about?
(No insult intended, the technical terms are just a bit beyond me!)
It was that each language is the manifestation of the spirit of a people (Volksgeist). A few slippery slopes later, you end up postulating that an Aryan worldview is reflected in all Aryan languages and nowhere else… so the strength of the backlash is understandable, as that of “pots, not people” in archaeology.
Always happy to go along with those who see the study of language as integrally connected with the study of culture, but this is more tract than treatise.
It’s a pretty tendentious take on Structuralism, and particularly on Bloomfield (whose views on language are easy to misconstrue/deliberately misrepresent, as Charles Hockett himself points out in his introduction to The Menomini Language).
More fundamentally, this is a thumping false dichotomy. Structuralist approaches are entirely compatible with a vigorous interest in the anthropological aspects of language. And the implication that structuralism represented a sterile false turn in 20th-century linguistics is frankly ludicrous.
You actually did a post on LaPolla a few years ago, although I can’t find it at the moment.
It’s this one, from 2016. I continue to find him interesting but annoying in exactly the ways y’all are mentioning; why does he have such a bug up his ass about structuralism??
I think the statement about Greenberg is both correct and far off the mark. He mentions his disagreement with Haspelmath but does not really do his position justice. Haspelmath and others in the modern typology field are far from unaware of the history but they do find Greenberg an important inflection point – and I think justifiably.
LaPolla is not wrong about the limits of structural description – I’ve disagreed with Haspelmath on this myself – but he conceives of structuralism too narrowly. I think Croft, whom he mentions, is very much a structuralist, but one who works at a different level of magnification from the likes of Harris or Bloomfield – though not unlike Jakobson in some modes.
LaPolla actually cowrote Syntax: Structure, Meaning and Function with Robert D. Van Valin Jr, who is behind Role and Reference Grammar, a functional theory of grammar encompassing syntax, semantics and discourse pragmatics.
The book is an attempt to “provide a model for syntactic analysis which is just as relevant for languages like Dyirbal and Lakhota as it is for more commonly studied Indo-European languages” (Wikipedia).
To continue quoting, “Instead of positing a rich innate and universal syntactic structure, Van Valin suggests that the only truly universal parts of a sentence are its nucleus, housing a predicating element such as a verb or adjective, and the core of the clause, containing the arguments, normally noun phrases, or adpositional phrases, that the predicate in the nucleus requires”. The theory doesn’t allow abstract underlying forms or transformational rules and derivations.
The book dates back to 1997 and I suspect that LaPolla has moved on from that time, although still in a direction away from structuralism and Chomskyanism.
Escaping Eurocentrism: fieldwork as a process of unlearning by David Gil suggests that the subject-object structure is an import from European languages. He specifically deals with Riau, in which (if I remember rightly) ‘Eat chicken’ can mean ‘the chicken eats’, ‘eat the chicken’, ‘the chicken eaten’, etc., depending on the context. (Unfortunately I can’t find this online in its entirety but it is a wonderful paper.) I also remember people suggesting that Riau was similar to Classical Chinese.
LaPolla’s views are spelt out more fully in his review of The language myth: Why language is not an instinct by Vyvyan Evans.
For more on David Gil, Riau, and “eat chicken”, see discussion at Language Hat, 2004.
This is one of a number of papers fairly plausibly claiming that Gil has somewhat oversold the peculiarity of Riau Indonesian:
https://arts-sciences.und.edu/academics/summer-institute-of-linguistics/work-papers/_files/docs/2010-yoder.pdf
The question interacts interestingly with the matter of “free” word order that we were discussing elsewhere.
I was wondering…!
Van Valin’s “A Concise Introduction to Role and Reference Grammar” (easily found on the Internet) basically presents LaPolla’s ideas, minus the strong negativity towards structuralism:
Role and Reference Grammar [RRG] (Van Valin 1993b, Van Valin & LaPolla 1997, Yang 1998) grew out of an attempt to answer two basic questions: (i) what would linguistic theory look like if it were based on the analysis of Lakhota, Tagalog and Dyirbal, rather than on the analysis of English?, and (ii) how can the interaction of syntax, semantics and pragmatics in different grammatical systems best be captured and explained? RRG takes language to be a system of communicative social action, and accordingly, analyzing the communicative functions of grammatical structures plays a vital role in grammatical description and theory from this perspective. Language is a system, and grammar is a system in the traditional structuralist sense; what distinguishes the RRG conception of language is the conviction that grammatical structure can only be understood and explained with reference to its semantic and communicative functions. In terms of the abstract paradigmatic and syntagmatic relations that define a structural system, RRG is concerned not only with relations of cooccurrence and combination in strictly formal terms but also with semantic and pragmatic cooccurrence and combinatory relations. It is a monostratal theory, positing only one level of syntactic representation, the actual form of the sentence (cf. fn. 3). With respect to cognitive issues, RRG adopts the criterion of psychological adequacy formulated in Dik (1991), which states that a theory should be “compatible with the results of psycholinguistic research on the acquisition, processing, production, interpretation and memorization of linguistic expressions” (1991: 248). It also accepts the related criterion put forth in Bresnan & Kaplan (1982) that theories of linguistic structure should be directly relatable to testable theories of language production and comprehension. The RRG approach to language acquisition, sketched in Van Valin (1991a, 1994, 1998) and Van Valin & LaPolla (1997), rejects the position that grammar is radically arbitrary and hence unlearnable, and maintains that it is relatively motivated (in Saussure’s sense) semantically and pragmatically. Accordingly, there is sufficient information available to the child in the speech to which it is exposed to enable it to construct a grammar.
Interesting stuff, thanks!
I think that LaPolla sees Bloomfield and the other American(ist)s through a Saussurean lens and misses the huge differences between them: roughly speaking, Saussure does French rationalist Structuralism, and Bloomfield et al. do American empiricist structuralism (capitalization intentional), and unfortunately it’s the former that captured the imagination of intellectuals and made Structuralism a thing among non-linguists. I think it’s time to link Gopnik’s riff on fact checkers and theory checkers again.
Mais oui, it works in practice: but does it work in theory?
I have in actual fact noticed a distinct tendency for French-language descriptive grammars of exotic languages to err on the side of over-systematisation, where Anglophone productions are often less pretty but more realistic. (“All grammars leak.”)
I say tendency advisedly; there are plenty of exceptions on both sides. On the French side, Denis Creissels’ absolutely exemplary Mandinka grammar, for example … on the English, pick your own horrid examples. Mind you, most of the Anglophones present “fragments”, finding entire grammars too uninteresting, full of matter with no value to the Great Common Cause.
According to relatively recent edits on Wikipedia, Saussure is actually a “Swiss linguist and semiotician”. The fact that he has been taken over by the semiotics community does tend to underline the abstract nature of Structuralism.
The tag “structuralism” suggests that it was a relatively unified movement, but Wikipedia begs to differ. In fact, the Wikipedia article suggests that American structuralism wasn’t really structuralism at all, and puts it in inverted commas.
Structural linguistics begins with the posthumous publication of Ferdinand de Saussure’s Course in General Linguistics in 1916, which his students compiled from his lectures. The book proved to be highly influential, providing the foundation for both modern linguistics and semiotics. Structuralist linguistics is normally seen as giving rise to independent European and American traditions. (My emphasis).
European structuralism
In Europe, Saussure influenced: (1) the Geneva School of Albert Sechehaye and Charles Bally, (2) the Prague School of Roman Jakobson and Nikolai Trubetzkoy, whose work would prove hugely influential, particularly concerning phonology, (3) the Copenhagen School of Louis Hjelmslev, and (4) the Paris School of Algirdas Julien Greimas. Structural linguistics also had an influence on other disciplines of humanities bringing about the movement known as structuralism.
‘American structuralism’
Some confusion is caused by the fact that an American school of linguistics of the 1910s through 1950s, which was based on structural psychology (especially Wilhelm Wundt’s Völkerpsychologie) and later on behavioural psychology, was nicknamed ‘American structuralism’. This framework was not structuralist in the sociological and Saussurean sense, in that it did not consider language as arising from the interaction of meaning and form. However, ‘American structuralists’ such as Leonard Bloomfield developed methods of formal synchronic analysis. The American linguist Charles Hockett also applied André Martinet’s structural explanation to the emergence of grammatical complexity. Otherwise, there were unsolvable incompatibilities between the psychological and positivistic orientation of the Bloomfieldian school, and the semiotic orientation of the structuralists proper. In the generative or Chomskyan concept, a purported rejection of ‘structuralism’ usually refers to Noam Chomsky’s opposition to the post-Bloomfieldian school, though he is also opposed to structuralism proper.
So we are again confounded by names, and the ex post facto tidying up of history so that ‘movements’ look like they were all of a piece.
So JC is right (I believe this has happened before …)
That would account for the peculiar mismatch between the bogeyman-structuralism that LaPolla complains of and the actual practice of Bloomfield et sim.
I suppose (to be perhaps unduly cynical) if you’re promoting a newish approach, you’ll want to stress what a radical departure it represents from its predecessors. To be fractionally less cynical, you will readily convince yourself that your ideas are more of a break with existing practice than, perhaps, may be the case.
Actually, appealing to the Wisdom of the Ancients to justify your supposedly radical new departures does have some quite respectable antecedents … (and has always needed taking cum grano salis. As the Ancients say.)
I am going to drop Bloomfield’s Language (by convention included in the bibliography of absolutely all modern descriptive grammars) in favour of the Τέχνη γραμματική of Dionysius Thrax. Then I’ll get some respect.
Mind you, most of the Anglophones present “fragments”, finding entire grammars too uninteresting, full of matter with no value to the Great Common Cause.
They! They are neither fysshe nor fowle, and certainly no [Ss]tructuralists by anybody’s definition.
I think WP’s banishment of the Americans to Scarequotia is legitimate, given the wider use of Structuralism.
we are again confounded by names
Stat rosa pristina nomine; nomina nuda tenemus. (“The rose of old remains only in its name; we hold bare names.”)
I’ve read LaPolla’s review that Bathrobe linked in the July 4, 2020 at 2:29 am comment and am confused (as expected). LaPolla’s view is that language is just a part of communication (ok, maybe), but it also comes with this specific definition or description: “the abductive inference of a communicator’s intention in performing an act that the addressee can infer is done with the intention of the addressee inferring the intent behind it (inference of motivations for doing things is done all the time, but it isn’t communication unless the communicator intended for the addressee to infer the intention for the action)”. Can anyone explain what this is supposed to mean? Should I infer from this communication LaPolla’s intention to snow me as a reader with some strange combination of words and establish LaPolla’s intellectual superiority? Is this one of those instances when you cannot understand a statement unless you already know what it means?
I know there are lots of lovers of genetics around these parts (also, professional lovers), so I leave you with this pearl from the end of the review. LaPolla approvingly cites Evans:
“…it is necessary for Chomsky to try to divorce language from communication because if language arose from communication, it would necessarily involve at least two people, and this would conflict with his idea that it was due to a single genetic mutation in one individual 60,000 years ago.”
It is easy to be confused because LaPolla seems to be desperately beating about the bush as he leans backwards to avoid “structuralism”.
As for the passage in question, I understood it as meaning that there is intentionality in communication. Just as children learn that a raised hand indicates the intent to hit, or a glare indicates disapproval, people learn language as a communicative process whereby they read people’s intentions rather than the formalities of their grammar. When we use language (or any other kind of communication), we do so with the understanding that the other person will attempt to read our intentions, just as we look for intentions behind what other people say.
This is related to the way people learn language, picked up bit by bit rather than as an overarching grammatical system. I think this is reasonable. As a way of looking at language I think it is superior to Chomsky’s assumption that language is a formal system hardwired in the brain taking the form of an ability to implement Merge. But I have no idea how LaPolla’s approach is supposed to relate to a theory of language (including the structure of syntax, etc.).
Thanks to ktschwarz for the link back to 2004 and David Eddyshaw for linking to that paper that confirmed the suspicions of commenters on that thread.
John Emerson (zizka)’s comments on Mongolian were accurate enough but I can’t see what is so special about it. Something similar happens in English with Latinate words: ‘differ’ ‘different’ ‘differentiate’ ‘differentiation’ ‘dedifferentiation’ ‘dedifferentiate’….
I had greater difficulty understanding the discussion of Inuktitut. I have the severe shortcoming that I am only able to understand things if there are lots of concrete examples.
That’s only true if Chomsky imagines language arose immediately once the first person with that mutation was old enough to talk.
Can anyone explain what this is supposed to mean?
I admit LaPolla’s sentence is jargon-ridden, but it talks about something inherently complicated: a situation where there are two levels of intention and two levels of inference of that intention. I unpack it as follows:
Alice communicates with Bob when Bob can infer₁ (specifically, make an abductive inference) that she intends₁ for him to infer₂ that she intends₂ to do something else. If Alice does not intend₁ for Bob to infer₂ her intention₂, then it is not communication. So if Alice has an angry face, Bob can infer₂ her intention₂ to do something verbal or physical that people do when they are angry, but it is not communication, because Alice does not intend₁ for Bob to infer₂ anything. Likewise, communication also fails when Bob is not paying attention to Alice, perhaps because he is preoccupied with Carol (inference₁ fails) or when Alice is speaking French and Bob has no French (inference₂ fails).
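If a mechanical restatement helps, here is a minimal toy sketch of that unpacking (my own sketch, not LaPolla’s formalism; Act, intends1_recognition and the boolean flags are invented for illustration, and real abductive inference is of course nothing so crude as a couple of flags):

    # Toy sketch (mine, not LaPolla's); "Act", "intends1_recognition" etc. are invented names.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Act:
        actor: str
        intention2: str             # what the actor means to do or convey (intention-2)
        intends1_recognition: bool  # does the actor intend that intention-2 be recognized? (intention-1)

    def inferred_intention(act: Act, attending: bool, shares_code: bool) -> Optional[str]:
        """The intention, if any, that the addressee manages to infer from the act."""
        if not attending:           # Bob is preoccupied with Carol: inference-1 never gets started
            return None
        if not shares_code:         # Alice speaks French, Bob has no French: inference-2 fails
            return None
        return act.intention2

    def is_communication(act: Act, attending: bool, shares_code: bool) -> bool:
        """Communication requires both intention-1 and a successful inference by the addressee."""
        return act.intends1_recognition and inferred_intention(act, attending, shares_code) is not None

    # An angry face with no communicative intent: readable, but not communication.
    print(is_communication(Act("Alice", "do something angry", intends1_recognition=False), True, True))  # False
    # A deliberate greeting that Bob attends to and understands: communication.
    print(is_communication(Act("Alice", "be friendly", intends1_recognition=True), True, True))          # True

The only point of the sketch is that the test has two independent parts: Alice’s intention₁ that her intention₂ be recognized, and Bob’s actually managing the inference.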
I hope that helps, but maybe not.
I am only able to understand things if there are lots of concrete examples
Me too, which is why people like Alice and Bob are so important to understanding. We domesticated primates are predisposed to be interested in gossip about how Alice and Bob desperately try to communicate their love to each other at a distance, despite the eavesdropping of Eve and the active interference of malicious Mallory and/or the opposition of not-so-malicious Oscar. The trusted Trent and the protective warden Walter, however, are on the side of the star-crossed lovers and want them to achieve a marriage of true minds despite impediments. And so on. (“Gossip is the human version of grooming.”)
the first person with that mutation was old enough to talk
Anyway, we know what happens to children who are raised in complete silence: they don’t learn to talk. I think we can exclude saltation here.
we know what happens to children who are raised in complete silence
They spontaneously acquire Phrygian. It’s been proven by experiment.
JC, thank you for the explanation, but I am sure I am missing something, because it is awfully incomplete and so many smart people cannot possibly think that it is all there is to communication. If I say to someone “It’s a nice morning today”, can anyone tell what my intention is? Is it really important for my listener to guess (abductively infer) my intention? Is it really essential that there is a desire on my part to communicate my intention? Do I really have to have any particular intention? I would say that the answer is “no” for all four. Yet I have quite obviously used language.
It may or may not be true that language emerged as a means for people to communicate more clearly their intentions (is there any evidence?), but it has extended far and wide from there and there is about 0% chance that there is a single explanation for everything relating to language.
there is about 0% chance that there is a single explanation for everything relating to language.
I heartily agree. (Or, really, a single explanation for anything humans do.)
Or for anything that dogs do, or coronaviruses. Descartes claimed that dogs didn’t “do” anything, that they were machines (another simple explanation!).
There is possibly no single explanation for why so many people favor simple explanations.
If I say to someone “It’s a nice morning today”, can anyone tell what my intention is?
It depends. Face to face, I would abduce that you mean to communicate friendliness or at least courtesy, as opposed to starting out with “Get out of my sight, you God-damned stinking son of a bitch.” (Karen Memory seems to have rubbed off on me.) Over the phone/email/text/whatever, it probably carries that meaning too, but also conveys a fact that you think I might like to know.
Is it really important for my listener to guess (abductively infer) my intention?
I don’t think you’d be saying it if you didn’t mean me to know what you meant by it. And thank you for not saying guessing wildly, even if abduction (not from the seraglio) has nothing to do with guessing.
Is it really essential that there is a desire on my part to communicate my intention?
I think so. If I see you creeping about behind the bushes, I abduce that you don’t want to be seen, but you certainly have no intention of communicating that to me: on the contrary.
Do I really have to have any particular intention?
Certainly not just one.
I’m not saying communication isn’t complex. Saying that Alice loves Bob if Alice is both loyal to Bob and devoted to Bob is a definition of love (given appropriate definitions of loyalty and devotion), but that doesn’t make love a simple concept.
If only Phrygian were that easy to revive.
But, JC, I can say “It’s a nice morning today” simply because I feel good and want to say it out loud. I can even say it to my neighbor even if what precipitated the conversation was him stopping by the mailboxes and preventing me from dragging my garbage can up the driveway, which is annoying, and what I really wanted to ask was whether he could move out of the way as fast as possible.
What I probably want to say is that intentions are often complicated, vague, and shifting. And explaining something complex like language with something even more complicated like intentions, which are not even observable, is a strange proposition.
But why would you want to say it out loud?
So that somebody hears it…
Fittingly, so is language.
But why would you want to say it out loud?
So that somebody hears it…
Not so. I frequently say things out loud just for the pleasure of it.
It’s amazing how doggedly people can cling to the “language is for communication” credo when it often so plainly isn’t.
@D.O.:
Thanks, Brett. I haven’t read The Hobbit; if I had, I probably should have just quoted it.
But why would you want to say it out loud?
So that somebody hears it…
Even when it’s true, I hardly have an intention to communicate that I can talk.
Fittingly, so is language.
But that’s why people search for an explanation. It sounds very postmodernist to explain something difficult to understand with something even more difficult to understand.
I suppose that, just as the function of Language is not necessarily to communicate, the function of an explanation is not necessarily to explain …
People certainly talk and even talk in language when they don’t intend to communicate. (I myself sing while walking down the street. Few hear me, because they have earphones on anyway.) But LaPolla’s concern was to say what communication is, not to claim that language is for nothing else.
The function of an explanation is not necessarily to explain …
Of course not. An explanation is a mechanism for stopping further thought. In Luhmann land we call it a Stoppregel. A good explanation causes the hearer to say “oh, I see!” – and that’s the end of that.
I hope what I just said is not taken to be a good explanation. It bears further thought.
I’m not sure that “communication” is such a bad explanation for the genesis and function of language.
Of course we talk out loud, to ourselves. So do infants, even if it’s just babbling. Children at play also love to vocalise, even if they are only talking to themselves, but they are using — perhaps amusing themselves with — language they’ve learnt from others. Do children brought up in silence (no input from other people) utter nonsense syllables? Do they talk to themselves at all? JC’s earlier comment seems to imply not.
Surely it is the need to communicate that causes people (babies) to learn their first language, at least. The fact that people might then use this tool to express their thoughts out loud, even to themselves, does not mean that the tool they are using did not arise from the need to communicate. Without language, which you’ve learnt from other people, you would not have all the paraphernalia in which you are framing those utterances (‘nouns’, ‘verbs’, ‘subjects’, ‘emphasis’, ‘word order’). These are socially imposed, socially learnt from the act of communication, even if the particular speech act in question is not actually meant to communicate with other people.
But LaPolla’s concern was to say what communication is, not to claim that language is for nothing else.
It struck me that LaPolla was claiming that language is communication, and that “grammar” and all the rest arises from the process of learning language for “communication”. That’s behind the idea that ‘a theory should be “compatible with the results of psycholinguistic research on the acquisition, processing, production, interpretation and memorization of linguistic expressions”’ (as quoted from Dik).
It’s Chomsky who says that we don’t need to consider communication, merely the formal patterns of language hard-wired into our brains, which can ultimately be described by the operation “Merge”.
“Good morning!” is usually described as “phatic” (“a phatic expression /ˈfætɪk/ is communication which serves a social function, such as social pleasantries that don’t seek or offer information of intrinsic value but can signal willingness to observe conventional local expectations for politeness” — Wikipedia). As Bilbo showed, of course, it can be used with many other implications as well.
I think what I meant to say but circled around is: Language is learnt from interaction with other people, built up bit by bit. It can’t be described through a few overarching rules; it’s messy (grammar leaks). It isn’t necessarily explained by “hardwiring”.
As for intentionality, making judgements of intentionality (reading intentions) is crucial in interacting with the universe. This also applies to language, in that we attempt to understand the intention of the other person (“What do they want to say?”) in making sense of speech acts, and we assume that the other person is the same. This is perhaps why it is said to be disconcerting to talk to people who have psychological problems since they don’t play according to ordinary assumptions. (David Bowie described how phone conversations with his schizophrenic brother were strange because he got the feeling his brother was just talking to himself.)
If the neighbour fails to shift their car after you’ve said “Nice day today, isn’t it”, they have failed to read, and you have failed to communicate, your intent. A communication breakdown does not mean that the intent to communicate does not exist.
If the neighbour fails to shift their car after you’ve said “Nice day today, isn’t it”, they have failed to read, and you have failed to communicate your intention.
But I won’t fail to communicate! I don’t want to be rude to my neighbor and don’t want to make him feel bad. I am hoping that my presence will become more salient for him and that he, the considerate person he is, will be more accommodating to my needs. But if not, then not; I can wait.
But LaPolla’s concern was to say what communication is, not to claim that language is for nothing else.
As far as I understood him, language is just a form of communication and communication is formulating and reading out the intent. Each of these steps, it seems to me, narrows the scope of its subject considerably.
Surely it is the need to communicate that causes people (babies) to learn their first language, at least. The fact that people might then use this tool to express their thoughts out loud, even to themselves, does not mean that the tool they are using did not arise from the need to communicate.
Maybe, though the earliest stages of language acquisition resemble more the desire of babies to emulate adults in their life rather than pass a message, but in any case, if language became something more than is required by its original purpose, there is no need to go back and insist that the original purpose explains (take it, Luhmann!) everything.
the earliest stages of language acquisition resemble more the desire of babies to emulate adults in their life rather than pass a message,
mmm? single-word utterances/something approximating to ‘milk’ or to ‘more’ seem to me rather a closer-targeted form of yelling or screaming. The babies’ babbling bit (cue cute videos on youtube) is maybe them trying to emulate adults, but it’s not language or pre-/proto-language, because it’s not expressing thought (nor communicating anything).
Another literary riff on “It’s a nice morning”:
—Theodore Sturgeon, “The Girl Who Knew What They Meant”
It’s amazing how doggedly people can cling to the “language is for communication” credo when it often so plainly isn’t.
Yes. Or that some particular use of language is ‘primary’; others are derivative. As in the Chomskyan claim that representing thought is somehow more basic. I guess this gets them off the hook that deep-deep structure seems such a long way from any observed linguistic behaviour — including my own observation of my own thought processes. But of course that would be to mix up performance with competence.
Humans use rocks as slingshots, as markers (cairns), sharpened to skin/butcher animals or to split tree trunks. Is anybody going to claim one use is ‘primary’? As human tools go, rocks are pretty dumb compared to the Swiss Army Knife that is language.
There is an article on Babbling at Wikipedia. (I should have known.)
In their first year children babble before beginning to construct recognisable words. The specific sounds of their babbling are influenced by the linguistic environment (the language the parents speak). Incidentally, if a hearing infant has deaf and/or mute parents or parents who otherwise use a sign language, he or she will still imitate the signs that they see their parents displaying.
By 12 months, babies typically can speak one or more words. These words now refer to the entity which they name; they are used to gain attention or for a specific purpose.
The road from mindless babbling (imitation) to communication appears to be a gradual one.
The road from mindless babbling (imitation) to communication appears to be a gradual one.
Yes: that was always one of the bits of evidence against Chomsky’s innateness arguments. You’d think that if kids (or brains) come ‘pre-programmed’ for language, they’d pick it up quicker than they do. Especially since adults observably simplify and repeat for kids.
That’s contrasted to how quickly somebody can pick up a second language with intense classroom training and/or with a strong incentive to thrive in a foreign context. (I have a suspicion the reason Prof Mair keeps claiming putonghua is quick to learn by purely speaking is because he had a strong love interest; then learning via the written language was no part of the exercise.)
What I find interesting is kids who grow up in a multilingual environment: they typically take much longer to start to speak. But as soon as they ‘twig’ what’s going on, suddenly there’s an explosion in learning and they catch up with the monoglots. Again I don’t see how an innateness argument could explain that. I have an (adoptive) grandson of whom the father is British, the mother French; she insisted on speaking French all the time to him at home, despite living in an English-speaking country. The grandson seemed to take forever to produce anything recognisable as language. But suddenly in a few weeks he could command all the domestic conversation in French and all the outside-world in English. In a few weeks more he was parodying my (doubtless execrable) French accent. It warms your cockles how smart kids are.
Existential Comics: J.L. Austin helps Wittgenstein prove that a hotdog is not a sandwich because the speech act “Hand me that sandwich” failed to accomplish the speaker’s intention of obtaining a hotdog.
Because of examples like this, I would say that the most basic language does not communicate intention but is employed (a) to affirm, register, or warn of the speaker’s location or existence (like a birdcall) or (b) as part of mimicry, which itself can serve various goals, e.g., learning a skill, exercising and training a facility, empathising socially, etc.
“Most basic” = earliest in human evolution? No way to test that, afaik. Earliest in child acquisition? Social mimicry is definitely very early: infants start out babbling generically and then converge on their family’s phonology and intonation, even before they come out with any actual words. It demonstrates “I’m one of you!” and they get huge positive feedback for it, before they can use any words to express intentions.
single-word utterances/something approximating to ‘milk’ or to ‘more’ seem to me rather a closer-targeted form of yelling or screaming
I would have supposed that pre-language in the historical sense passed through such a stage: cries with more and more focused intentions. Other primates have the beginnings of it, don’t they?
Then when a critical mass of a community had Merge, they’d work out a way to string the cries together to express still more focused intentions.
Did they go through a stage where they actually said “VP!” and “NP!”?
Ha!
It seems to me that the primary purpose of rocks for humans is to sit on. This makes them quite different from bayonets.
I’m sure you do that with things like Russian poems, and to yourself, so you can experience and enjoy unusual arrangements of sounds more deeply. (I do that too, just never out loud.) But saying “It’s a nice morning today” to someone would clearly have the purpose of communicating that you, to quote from the same comment, “feel good”.
There are people who talk to themselves in the second person. That’s imagined communication, with themselves as the imaginary conversation partner.
Why do you mention that few hear you? From my cultural background it seems clear: to clarify that your singing isn’t mistaken for an attempt to communicate with total strangers.
(There are culturally different places elsewhere in the US, I’ve been told, where people do spontaneously break into song in public.)
I never sing alone. I hum, whistle, and so on and so forth – but only when I’m alone or when there’s background noise that makes me inaudible outside my skull (and that I can accompany).
…but can communicate willingness to…
So you’re communicating your presence.
Yes. It’s successful communication in a guess culture.
It communicates “I’m one of you”…
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I wonder if it’s a symptom of the autism spectrum that I’m aware of all this communication. After all, I can’t do it all spontaneously and unconsciously; I have to consciously think about how to do most of it, let alone whether.
Edit: man kann nicht nicht kommunizieren (“one cannot not communicate”).
Of course there’s a lot of communication, which is a prime purpose of language, but it’s not the only purpose, and you’re refusing to accept the exceptions.
I’m very specifically refusing to accept that “I can say “It’s a nice morning today” simply because I feel good and want to say it out loud” is not an attempt at communication.
Even if there’s no one around to hear it? Very odd.
Communication with oneself, perhaps? This is not the nineteenth century, after all. We know that the human mind is not a coherent conscious whole, but it possesses preconscious and unconscious factions—dark crannies, where we have misplaced forgotten information or shunted off things we would rather not actively think about.
Saying something aloud can have an observable effect on the speaker’s own mental state.
A Bishop named Berkeley said: “God
Must think it exceedingly odd
If he finds that this tree
Continues to be
When there’s no one about in the Quad.”
Dear Sir, Your astonishment’s odd:
I am always about in the Quad.
And that’s why the tree
Will continue to be,
Since observed by
Yours faithfully,
God.
I mixed up two sentences in the comment I linked to above, so I missed that saying it to someone wasn’t actually implied.
But… almost everything I think in language is part of an imagined communication, generally with someone specific (and in the appropriate language for that partner, as far as possible). If I suddenly switched my vocal cords on, that wouldn’t change.
Even when one is talking to oneself, it’s interesting that this can still take the form of a pretence at communication.
Before he met Bilbo, Gollum talked to himself (‘my precious’) all the time. There was no one to hear.
But even so, he did use language with all its communicative features, e.g., asking himself questions and then answering them, as though he were two people holding a conversation.
Me: baby babbling in its parents’ accent demonstrates “I’m one of you!”
David Marjanović: It communicates “I’m one of you”
Not by LaPolla’s definition of “communicates”: can a 10-month-old baby form an intention? Does a 10-month-old baby have a theory of other minds, such that it intends the others to recognize it as one of them? Seems unlikely; I can’t prove it doesn’t, but you definitely can’t prove that it *does*.
What we can be sure of (assuming, that is, that facial and body language are meaningful at 10 months) is that the baby enjoys hearing itself sound like its parents—it demonstrates to *itself* that it is one of them—and that the parents reward it with attention and approval, forming a positive feedback loop.
But… almost everything I think in language is part of an imagined communication, generally with someone specific (and in the appropriate language for that partner, as far as possible). If I suddenly switched my vocal cords on, that wouldn’t change.
Surely you do not take yourself as representing all of humanity. I assure you I talk out loud without any imagined communication.
I’m very specifically refusing to accept that [blah-blah-blah] is not an attempt at communication.
It almost always is. But what I tried to say, apparently unsuccessfully, is that it often doesn’t communicate anything specific, not for the person saying it and not for the hearer. Look at the Bilbo/Gandalf dialogue about “good morning!”. It’s all spelled out there in detail. And if you take the axiom “language = communication”, but then have to water down the notion of communication to the state where it’s ok to have communication in which neither side knows what they are talking about, you basically are not setting up a useful framework for understanding language.
@Bathrobe: Curiously, in the original text of The Hobbit, Gollum explicitly refers to himself as “my precious.” However, when Tolkien heavily revised “Riddles in the Dark” to accord with The Lord of the Rings,* he stated equally explicitly that “his precious” was the Ring, a convention also followed throughout the sequels. Yet Tolkien did not take out the statement about Gollum referring to himself as “my precious.” This cannot have been accidental, but I have always found it distinctly curious. The two sets of statements are not exactly contradictory; presumably, the author wanted to indicate that Gollum thought of himself and the Ring as a single composite entity, that after so long as its thrall, Gollum could no longer even conceive of being separated from it. However, Tolkien did not continue to use this mixed usage in The Lord of the Rings; there, whenever Gollum talks about “precious,” it always means the One Ring, never himself. Perhaps after seventy-some years without the Ring, Gollum’s attitude had changed somewhat.
* There cannot be many other examples where a retcon to the text of a novel also exists in universe. There are some others, no doubt, but probably not many, and the only other one that springs immediately to mind feels rather like a cheat: Stephen King’s changes to The Gunslinger, when, many years later, he was making it a part of a longer series. The cheat is that the retcon only exists in story because the multiverse of King’s The Dark Tower novels includes the real world, with King as a minor character, and events such as his being mowed down by a speeding minivan are alluded to in the narrative.
Why not?
The simplest theory of mind is just projection, which isn’t even limited to humans, so… why not.
In that case, it’s not a conscious attempt at communication – it’s an innate, evolved activity that ends up as communication (the feedback loop) and may well have been naturally selected because of that function.
You don’t? 🙂
In what ways? Only to enjoy unusual constellations of sounds, or are there others?
I don’t; it’s close to true, for a rather wide value of “communication” that includes purely social functions empty of “content”; but like everything else language has side effects from poetry to Chomsky’s I-language. What I’m trying to communicate here is that a lot more of language is communication than most people seem to be aware of.
~~~~~~~~~~~~~~~~~
Sapir & Whorf to the rescue! Before today I never imagined that “good morning” might be anything other than a wish to another person (however sarcastic etc.). I never imagined it could be short for “this is a good morning”. That’s because guten Morgen is an unambiguous accusative. Of course English is not so constrained…
Sturgeon’s narrator:
David Marjanović:
This, also, is not communication by LaPolla’s definition: the narrator didn’t know (at the time) why he said it, so I have a hard time crediting him with an intention. Furthermore, he must not have intended for the woman to infer his motive, since he was surprised when she did. And then he has trouble inferring *her* intention from her reply, and has to spend some time figuring it out.
You, however, are counting it as communication whenever information is transmitted between people, and the dictionary agrees with that. LaPolla probably should’ve used a more specialized term, or else emphasized that he was using a technical definition and not the ordinary sense. He should’ve provided a definition of intention, too, since we’re having more and more trouble agreeing on what counts as one.
Fair enough, I may have been moving LaPolla’s goalposts. That said:
I read it the opposite way: he had a clear intention that he was aware of, but struggled, through all his emotions, to express it, and so ended up blurting out a random vaguely polite fixed phrase.
I read it as him hoping she’d infer his motive, but not expecting her to after he bungled the execution, and being very happy that she managed anyway.
OK, that’s also a plausible reading of the story! What would LaPolla say when different observers make such different inferences? I guess he might say that we’re not guaranteed to be right, or even to agree with each other, but that the nature of our interaction with language (and literature) is that we are always *trying* to infer intentions, and that linguistics should be centered on that, on what listeners are doing.
As D.O. says, intentions are vague and shifting and not observable—but we can observe how listeners attempt to infer them anyway. (I didn’t get that directly from LaPolla, it was floating around somewhere, but it seems to fit.)
You don’t?
No, I don’t. Of course I started out doing so, like all of us, but I’ve spent decades squeezing it out of myself the way Chekhov squeezed out his inner serf, reminding myself at every opportunity that there are an endless number of different ways of being, thinking, and doing, that mine is just one of them with no claim to specialness, and that my preference for the ways I’m used to is just that, a preference, and other people have just as much cause to prefer theirs. In the same way, I’ve accustomed myself not to favor my own linguistic habits. It can be done.
I once had an exchange on MetaFilter that stuck with me: I said “You shouldn’t present your own beliefs as fact,” and the other person said “If I say something, obviously I think it’s a fact.” But I don’t! For me, of course I say things because I think them, but I don’t assume (nor do I expect others to assume) that what I say and think is a fact, and it may well change as I learn more.
To put it another way, I truly believe that almost everything all of us think we know is wrong, including me.
Not sure if I can go along with you there, Hat …
That’s great! If everyone thought like me, what a dull world it would be.
To quote Samuel Johnson: “I dogmatise and am contradicted, and in this conflict of opinions and sentiments I find delight.”
Hat has the firmly held belief that all firmly held beliefs are wrong.
(Alternative version: Hat holds to the dogma that all dogmas are wrong.)
Since we are debating here what language is for, the opening words of the introduction to the Oxford Handbook of Cognitive Linguistics (which I just downloaded from Academia) are as follows:
Cognitive Linguistics as represented in this Handbook is an approach to the analysis of natural language that originated in the late seventies and early eighties in the work of George Lakoff, Ron Langacker, and Len Talmy, and that focuses on language as an instrument for organizing, processing, and conveying information.
Somehow it just makes you want to ask, “But what exactly is language for?”
Or maybe we should go with Chomsky and say that language just is (due to a beneficial genetic mutation at some point in human evolution).
@ Brett
Thanks for pointing that out. I always remembered “my precious” as referring to the ring, but upon rereading parts of The Hobbit recently, I was slightly surprised to discover that it was meant to refer to Gollum himself.
a beneficial genetic mutation
Too early to say at this point.
Cognitive Linguistics … focuses on language as an instrument for organizing, processing, and conveying information.
That’s what makes this a subfield of linguistics: studying this function and not all the others.
“Language just is” — inb4 David M :), everything is the way it is because it got that way, and (as discussed in the Evans book) it got that way gradually, and at a high cost. We take years to learn to speak, and having an airway specialized for speech means we can choke on food and die. (I think I read there’s some other change to the jaw, too, that facilitates speech at the expense of increased infant death rates due to problems with teeth? Can’t find it.) Rather than what is language for, how about: what functions of language have enough survival value to be worth such a cost? Those are going to shape and constrain it.
Too early — well, if you guys from A Centauri still use it… but maybe you don’t?
@ ktschwarz
I’ve never seen those claims before. Very interesting.
So it wasn’t just one genetic mutation but several? Did Neanderthals lose out because they couldn’t talk? And was “communication” the one clinching purpose of language that made it worthwhile? The questions multiply…
I must say, I don’t think I’ll look at someone with a handsome jawline the same way again.
There seems to be some doubt about the validity of the claims about the larynx (or at least, the issues are not straightforward):
https://en.wikipedia.org/wiki/Origin_of_speech#Larynx
(Interesting article in general.)
Charles Hockett’s 1960 Scientific American article, linked as Note 1 in the Wikipedia article, is quite interesting too: especially where he starts talking about “blending” as part of the origin of distinctively human language.
“Blending” = “Merge”?
Shaken, not stirred.
Yet Tolkien did not take out the statement about Gollum referring to himself as “my precious.”
Surely Gollum, while he still possesses the Ring, calls himself my precious because he cannot distinguish between himself and his addiction, which is why he reacts to its loss like an amputation. But having lost it, he does begin to “revive” (Gandalf’s word) a little, and while he still craves the Ring, he can now tell the difference between it and himself: he is here, whereas the Ring is with Baggins, and so not here.
This cannot have been accidental
A great deal in Tolkien is accidental, or at least unavailable to Tolkien’s ego at the time when he wrote it, if not to his total personality. A locus classicus is this passage from the time when the Nine Walkers are about to leave Rivendell: “Bilbo huddled in a cloak sat silent on the doorstep beside Frodo. Aragorn sat with his head bowed to his knees: only Elrond knew fully what this hour meant to him.”
Now what is Aragorn pensive about? Many answers can be given: fear, apprehension, the knowledge of the perilous nature of the quest (a quest, as Shippey notes, not to find something but to get rid of something), the make-or-break nature of the quest for him personally (either dead or King of both Gondor and Arnor).
But in the earliest draft version we have of this passage, it simply reads: “Bilbo huddled in his cloak stood silent on the doorstep beside Frodo. Trotter sat with his head bowed to his knees.” So at this point Aragorn, or Strider, is a hobbit named Trotter, who wears wooden shoes (unlike most hobbits, who wear either nothing on their feet or boots for muddy weather), and Tolkien has no idea what his history or purposes may be. Yet the image of Trotter/Strider/Aragorn sitting with his head bowed to his knees is already present, despite a total lack of knowledge on Tolkien’s part of why he is doing it.
This surely is accidental, or “accidental”. “A chance-meeting, as we say in Middle-earth”, says Gandalf in a different connection. But in Tolkien’s worlds both primary and secondary there is no mere chance: all has been willed where what is willed must be.
It’s no secret that I know nothing about linguistics – on Structuralism, which was big in European architectural theory in the 1970s (and American, via Anthony Vidler) I do remember the thing about it being intended as a method-of-working rather than as a dogma in itself – but isn’t there a distinction to be drawn here? Whatever else it is, one thing language is for is verbal communication. That and the larynx drawings make me realise that phonetics is a subsection of language but the ‘voice’ has different, additional functions. The first voices might have made patterns of sound for another kind of communication: to get other creatures’ attention perhaps; or for musical (let’s say artistic) reasons (singing). My daughter found that when she plays a song called Gosh, her borzoi dog will start howling – it sounds like these two:
https://www.youtube.com/watch?v=j4REZBwRmJ0
https://www.youtube.com/watch?v=e8pCM3XdPNI
The borzoi, Moira, likes it when several of us howl together. The terrier, Jack, can’t or won’t do it. Moira never howls to Muddy Waters or to Mozart, only to this track above by Jamie xx (xx is part of his name, I think).
It’s no wonder that track sets Moira a-howling. Music can be appreciated by many different kinds of animal – and abhorred too. It is universal in its ability to grate and ingratiate.
BTW, Sparky makes sounds like those dogs when he suspects one of us is about to go out without taking him along. They’re just not so LOUD since he’s a small dog.
Huh. Isn’t Sparky a terrier? I didn’t think they could do the Howling Wolf.
Probably they could; for the full story follow the link to the “Origin of speech” article and scroll down a little.
“…except the weasel.” Or the fallow deer in this case.
it wasn’t just one genetic mutation but several? — Many! One I was surprised to learn just now: monkeys can’t vocalize for more than a couple of seconds at a time. Our ability to speak whole sentences depends on complex breath control that other primates don’t have, requiring far more nerve density in the chest muscles and a wider vertebral canal. And the more sounds you can make in sequence, the more advantage to a brain that can recombine the sounds in different ways.
validity of claims about the larynx
The objection that speech post-dates the anatomically modern vocal tract really isn’t supported by the cite-note: it goes to a paper that not only uses the dubious method of phonemic diversity decay, it concludes (as also cited farther down the same page!) that language is at least 230-150,000 years old, or in their words “language appears early in the history of our species”, not late!
So given that our ancestors at 100,000+ years ago had vocal anatomy intermediate between us and apes, did they in fact use it for symbolic communication in a way intermediate between us and apes? Or was the vocal tract evolution driven by other functions, and language just hopped on the bandwagon at the end? (As writing did with our hands.) In either case, I’m impressed with the idea of language as essentially embodied, rooted in our motor cortex and its ability to coordinate muscle actions. It makes language part of everything else that’s human, and it suggests lots of experimental questions.
Wait, why is that a given?
Eh, “species” or “subspecies”… we separated from the Neandertalers + Denisovans some 600,000 years ago.
A given: Philip Lieberman says so, or at least that’s what I’m getting from him. He talks a lot about head-neck angle, oral-pharyngeal proportion, and the hyoid bone. Here he describes measurements of a Homo erectus, some Neanderthals, and the Skhul V human. You can judge better than me whether I’ve represented his point correctly.
Thanks, I’ve downloaded the paper and will read it.
Has this one been mentioned so far? The language faculty that wasn’t: a usage-based account of natural language recursion.
@bathrobe
One of the obvious sequence learning applications would be for dance (at least for social animals or specific encounters such as fighting or mating). Is there any corresponding research, i.e. do species or herds have a fixed dance “routine”? Presumably birdsong has already been studied in this context and the gene(s) is / (are) different.
I saw something about “entrainment” — synchronizing movements to a beat. Humans and parrots can do it, but “dancing” horses and dogs have just been taught to move rhythmically and the musician follows them, not the other way around. I don’t know if it’s connected to this idea, though.
The Frontiers journals are free, you can read them at the source without passing through the NCBI.
Me on July 13th:
I still haven’t, of course. 🙁
I’ve now read the whole Frontiers paper and highly recommend it. (Just ignore the sentence about the age of PIE.)
I agree.
The basic theses seem to be
(i) The thing that modern human beings are particularly good at is not recursion but sequence learning.
(ii) Human beings have not evolved to produce language: rather, languages have “evolved” to fit human sequence-learning abilities.
Fave quotes:
“There appears to be a qualitative difference between communicative systems employed by non-human animals, and human natural language: one possible explanation is that humans, alone, possess an innate faculty for language. But human “exceptionalism” is evident in many domains, not just in language; and, we suggest, there is good reason to suppose that what makes humans special concerns aspects of our cognitive and social behavior, which evolved prior to the emergence of language, but made possible the collective construction of natural languages through long processes of cultural evolution.”
“We suggest that, in light of the lack of a plausible evolutionary origin for the language faculty, and a re-evaluation of the evidence for even the most minimal element of such a faculty, the mechanism of recursion, it is time to return to viewing language as a cultural, and not a biological, phenomenon.”
And there is a respectful nod to St Ludwig!
Good stuff.
Sounds good to me.
(Just ignore the sentence about the age of PIE.)
Also ignore the opening sentence — at which I did a double-take:
Humans’ in general? I feel my language faculty has been expanding, not least thanks to the discussions Hat hosts.
I think they mean ‘Generativists’ claim as to the scope of the language faculty’. And the first sentence of the Abstract kinda hints at that. ‘language faculty’, being a technical term there, should be in scare-quotes or italics at least.
Christiansen and Chater build up an impressive body of evidence; but it’s diffuse. I’m pretty sure Generativists would a) deny it counts as evidence; and/or b) repudiate it tout court on grounds it’s talking about performance not competence. I foresee another round of people talking past each other, with Generativists building another moat around their impregnable (to falsification) fortress.
I think the whole human — non-human dichotomy is quite redundant; a product of Kantian thinking (although I know people who will defend Kant no matter what). “Humans” are not special, what kind of cognitive dissonance do you have to keep up to think that?
” (ii) Human beings have not evolved to produce language: rather, languages have “evolved” to fit human sequence-learning abilities.”
And sequence-learning does not necessarily involve recursion.
People who will defend Kant no matter what:
1. His mother.
2. Isn’t it a bit late? I mean, quite a lot of people in art & architecture refer to the Critique of Judgement as a sort of benchmark but I wouldn’t say that’s exactly defending him.
3. Anyway, they’re all humans not spiders, so QED.
AJP Crown: Sorry, I never really got to talk about it with him. He’s a great guy! But he really liked Kant.
4: People who teach Kantian philosophy in southern US universities.
EDIT: Spiders?
EDIT2: You thought I was talking about Kant, not people I know teaching him? I was talking about a philosophy professor that I know who really likes Kant.
Just say “Transcendental!” and waggle your eyebrows.
Works every time. Of course, you’ve got to have the eyebrows for it.
That reference is lost on me, languagehat.
It’s a combination of Kant’s transcendental arguments and Groucho Marx. Nothing deep, I’m afraid.
All tautological.
I just learned of an exotic Kant defense gambit, in one of the essays by Blumenberg in Die nackte Wahrheit (no boob pics, sorry). He quotes from a letter by Kant to Moses Mendelssohn, in which K claims he deliberately fuzzes and beats around the bush in his books because the truths he has discovered would be too much for people to bear if explained clearly.
So a sow’s ear really is a silk purse in disguise !
Over recent decades, the language faculty has been getting smaller.
Natural attrition, no new professors were hired.
BTW, whatever happened to Comparative Literature ? It’s not in the news much nowadays.
Go, go, go, says Kant: mankind cannot bear very much philosophy.
Humankind.
(Not political correctness, rhythm.)
whatever happened to Comparative Literature ? It’s not in the news much nowadays.
I miss those headlines. Like that time when libraries gave up on figuring out if In Cold Blood should be classified as fiction or nonfiction. Remember that headline? TRUMAN DEFEATS DEWEY.
Ha!
It’s explicitly talking about both, and questioning the distinction – I thought you’d like it. 🙂
The term language faculty is so strongly associated with generativism that it isn’t necessary to spell that out.
I thought you’d like it
Thanks David, yes I did; but then I don’t need persuading that the claims for ‘language faculty’ are bunkum.
And it’s the Generativists who cling to the performance/competence distinction as the last fig-leaf of their defence; they’re not going to give that up despite a mountain of evidence that nobody’s monkey-brain can process 6-ply embedding.
I was struck by the way they described one of their early examples, noting that center embedding is much harder to process than what they called “tail recursion.” Although they did not explicitly point this out, it seemed clear that the reason the tail recursion was easier to comprehend was exactly the same as the reason that tail calls in programming are easy to optimize—there is no need to keep the old frame around as you move to a new clause (language) or function (programming). Because you are not going to have to pop backwards after the clause/function terminates, going back through the history of embeddings/calls, it is possible to drop a lot of extraneous tracking information that takes up valuable processor memory.
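To spell the analogy out, here is a minimal Python sketch of my own (the function names and toy sentences are invented for illustration; nothing here comes from the paper): a right-branching, tail-recursive chain can be worked through with no memory of earlier clauses, while a center-embedded one forces you to stack the unfinished halves and pop back through them later.

def process(chunk):
    # stand-in for whatever comprehension work a clause fragment requires
    print("processing:", chunk)

def parse_right_branching(clauses):
    # e.g. "Gromit thinks / Wallace hopes / Wendolene likes cheese"
    # each clause is dealt with before the next begins, so a plain loop
    # suffices and nothing has to be revisited on the way back out
    for clause in clauses:
        process(clause)

def parse_center_embedded(outer_parts, innermost):
    # e.g. "The rat the cat the dog chased bit died"
    # the left halves are processed going in, but their right halves are
    # unfinished business that must be kept until the innermost clause ends
    pending = []
    for left_half, right_half in outer_parts:
        process(left_half)
        pending.append(right_half)
    process(innermost)
    while pending:
        process(pending.pop())   # pop back through the saved frames

parse_right_branching(["Gromit thinks", "Wallace hopes", "Wendolene likes cheese"])
parse_center_embedded([("the rat", "died"), ("the cat", "bit")], "the dog chased")

The second function is the one that needs a growing pile of saved state; the first is the kind a compiler can turn into a loop, and (if the analogy holds) the kind human hearers find easy.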
noting that center embedding is much harder to process than what they called “tail recursion.”
Thanks Brett, everybody calls it ‘tail recursion’, so no need for scare quotes. And everybody knows that is easier to process.
The interesting thing, going back to Pullum’s critique of Chomsky’s early papers, is that in Postal’s formalism (which Chomsky borrowed from, but never properly attributed), there’s no distinction between embedding vs tail recursion; they all get called ‘recursion’; you can use either mechanism to generate the same (infinite) set of sentences; then Generativists see ‘recursion’ as one phenomenon within the language faculty.
This wasn’t very apparent in Syntactic Structures [1957], because that didn’t carry the formal proofs; rather it cited Three Models for the Description of Language [1956]. Despite that alleged date, there were ‘technical problems’ with parts of it which meant it didn’t appear until nearly a decade later, and then it turned out to carry much weaker proofs that failed to support the arguments in SS. Of course by then the Generativist horse had bolted.
One of Pullum’s papers presents several alternative formalisms that can distinguish between embedding vs tail recursion, and that seem to accord better with observed performance.
Recursion: see recursion.
Tail recursion: If you are sick of this, see recursion; otherwise, see tail recursion.
“Island constraints” have been a big thing in generative grammar for a long time. This paper On the Nature of Island Constraints seems to present a reasonably accessible introduction that is not angled from the point of view of an insider.
The problem is that certain structures interfere with “unbounded dependencies” (an example of an “unbounded dependency” being What does Gromit think that Wallace hopes that Wendolene likes?)
Three things struck me.
1. I have trouble agreeing with some of the examples. Especially:
Why did they say that nobody left?
meaning “What was the reason that they gave for nobody leaving?”
This seems fine for me, something I would say myself. And yet it’s given as an example of a syntactic environment “where extraction is reliably judged to be unacceptable”. Who are the people who make these judgements? Do they judge such sentences to be unacceptable merely because they WANT them to be unacceptable?
2. I can’t understand why they think it’s difficult to account for the following (asking “What was a man accused of stealing?”):
a. What did the journalist accuse a man of stealing? (This is ok)
b. * What did the journalist accuse a man who stole? (This is not ok)
I cannot help feeling that being puzzled by this is only possible if you are playing around with formalisms, trees, embeddings, and movements in an abstract sense. “He accused a man of stealing a watch” and “He accused a man who stole a watch” have completely different structures. Is there a need to try to explain this “enigma” by positing the relative clause as an environment which inhibits wh extraction?
3. In the example of an unbounded dependency given (What does Gromit think that…), it seems to be “tail recursion” that is involved. Many puzzling examples do not involve tail recursion. Is this a case where generativism has got stuck on different kinds of recursion?
Are there any mathematically / logically / computer-minded people who could comment on this?
meaning “What was the reason that they gave for nobody leaving?”
I find that sentence acceptable. (Enter immediate caveats about ‘acceptable’ being a fairly useless concept.)
I don’t find it to have the meaning you give. Indeed I can’t really contrive a context where it would carry that meaning.
I think: ‘they’ said that nobody left (which might or might not have been true). The question is asking about the ‘they’: what was the motivation for them saying that?
I don’t find the question to be asking about the motivation of those who left. Indeed I don’t find any presupposition that anybody at all left. Perhaps the ‘they’ were lying, and the questioner knows that, and is asking why ‘they’ lied.
I presume this example is trying to demonstrate the meaning you give, but failing because of the island constraint. For me to express that meaning, I’d go round the houses:
What was the reason for leaving, did they (those who told you) explain?
Here we do have a presupposition that somebody left; and a presupposition that ‘they’ said so truthfully.
But yes wrt your ‘Spinning’: very rapidly if you ponder over these examples, you find anything acceptable, and you find you can contrive it to have almost any meaning. Don’t think we need to rehash the debate about Yes/No ‘acceptability’ being useless.
My interpretation, with rising intonation is something like:
Tell me again why, in their opinion, nobody left?
The other interpretation is also possible, but with different intonation.
Yes your Gromit sentence is tail recursion. (I’d use Montague grammar/lambda calculus to formalise the ‘hoisting’ of what Wendolene likes to the Wh- focus of the question.)
Gromit thinks Y
Y = Wallace hopes Z
Z = Wendolene likes x
What is the x such that Gromit … x?
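Roughly, as a back-of-envelope lambda-calculus rendering of my own (not a worked-out Montague fragment; the predicate names are just labels):

\text{what}\bigl(\lambda x.\,\mathrm{think}(\mathrm{gromit},\ \mathrm{hope}(\mathrm{wallace},\ \mathrm{like}(\mathrm{wendolene},x)))\bigr)

i.e. the wh-word binds the x sitting in the innermost clause, and the question asks for which x the whole right-branching chain holds.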
The three clauses don’t exhibit embedding, because there’s no constituent of the outer clause that continues to the right of the inner clause. In particular, no agreement or co-ordination.
Other languages (especially with case inflection) might work differently — for example that the ‘target’ of a hope/belief/like must be in some sort of oblique case, and that that case attaches to the hoisted Wh-. I’ve tried using English pronouns in your frame, but I get nominative case all through, except for the object of Wendolene’s like — which case isn’t carried to the What.
“What did the journalist accuse the man of having stolen” is fine, innit? The “of” is obligatory to link the verb “accuse” to the misconduct alleged, which is one of the two complements of the verb accuse you’re generally interested in. Who was accused and what was he/she accused of doing? Obviously you can drag out the detail in multiple steps. What was he accused of? Stealing. Stealing what? A car. What kind of car? A Datsun. What color/year/model of Datsun? Etc. But it’s hard to have a good accusation without an “of,” because just saying “Alice accused Bob” will almost inevitably provoke at least the first stage “accused him of what” follow-up unless it’s somehow clear from context provided in the prior discourse. I think “What did the journalist accuse the man who stole of” would be syntactically okay because it gets in the mandatory “of,” but would be sort of bonkers on pragmatics grounds, for the same reason that the presumed answer “The journalist accused the man who stole of stealing” is sufficiently redundant that saying it is bizarre.
@AntC: I think the point about tail recursion is that it is amenable to optimization or easy comprehension, but these are not automatic features. In an interpreted computer language, where the program is simply stepped through, line by line, tail recursion is not an improvement over normal recursion, since the interpreter has no way of knowing that it can drop the frame when it reaches a tail call; it does not know that nothing else is coming after the call. However, with a compiled program, it is possible (although not necessary) for the compiler to recognize tail calls and implement them so as to drop unnecessary frames when they occur. Yet it seems that our human language interpreter can recognize (probable) tail calls based on their preceding context and thus process them in streamlined fashion.
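For concreteness, here is the sort of rewrite a tail-call-eliminating compiler can perform, sketched in Python purely for illustration (Python itself never does this optimization; the function names are invented):

def count_clauses(remaining, total=0):
    # tail-recursive form: the recursive call is the very last thing done,
    # so nothing in the current frame is needed once it is made
    if not remaining:
        return total
    return count_clauses(remaining[1:], total + 1)

def count_clauses_loop(remaining, total=0):
    # what tail-call elimination amounts to: reuse the frame, i.e. a loop
    while remaining:
        remaining, total = remaining[1:], total + 1
    return total

print(count_clauses(["Gromit thinks", "Wallace hopes", "Wendolene likes cheese"]))
print(count_clauses_loop(["Gromit thinks", "Wallace hopes", "Wendolene likes cheese"]))

Both return the same answer; only the second runs in constant stack space as written, though a compiler that recognizes the tail position can give the first the same property.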
@Brett I see no reason to suppose humans processing language bears any similarity to computer processing/algorithms. Exactly the same way I see no reason to suppose some ‘efficient’ description/formalism of a grammar by a linguistician bears any similarity to human mental processes.
There’s no evidence to talk in terms of stacks or ‘dropping frames’. No evidence to talk in terms of ‘top down’ or ‘bottom up’. No evidence to talk in terms of a lexing phase vs a parsing phase vs an interpreting phase. You won’t find in the brain a phonology department, a morphology department, a word-order department.
I see no reason to suppose any human mentally processes anything in “streamlined fashion” — it seems to me all terribly ad-hoc and error-prone. Also I see no evolutionary advantage in the sort of ‘efficiency’ compiler- or interpreter-writers worry about. Humans don’t seem to be constrained by the single-CPU impedance of the von Neumann-Turing model. In fact I’d say there’s strong evidence of massively parallel processing, with different processing pathways coming up with differing answers at about the same time, causing malaprops, false starts, contradictions, misapprehensions, etc, etc. I think there’s powerful abilities to backtrack and re-interpret long after the beginning of an utterance got parsed; whereas for computer-based parsers backtracking is anathema. I say the brain is a sophisticated adaptable organism, powerful in general but not tailored for anything in particular — especially not tailored for language. Very possibly the language production mechanism shares very little with the language comprehension mechanism, directly contradicting Chomskyan models.
I like the Christiansen and Chater paper because they look at signs/symptoms of humans processing language, including which brain areas light up. A plausible explanation for why humans seem so poor at processing embedding (as compared to computer-powered parsers) is that humans exactly don’t operate a call stack and don’t pile frames on to it. Perhaps the brain is just rubbish at rapidly allocating/deallocating chunks of memory. Perhaps everything in memory is distributed like a hologram.
After all, evolution is littered with species whose adaptation is not ‘efficient’ but merely ‘just enough’ to survive. It’s only when the environment changes or competition moves in that ‘inefficiency’ gets exposed; and even then what looked like inefficiency before can now look like adaptability.
I’m not saying there’s anything ‘wrong’ with aiming for ‘efficiency’/economy of description in a grammar for an observed natural language. I’m saying merely: don’t imagine you are thereby modelling something you can find inside people’s heads.
Why did they say that nobody left?
cp Tell me again why, in their opinion, nobody left?
Hmm. That latter is no longer a question, it’s an imperative. You can append a question mark and say it with rising intonation all you like, its grammar is different/it wouldn’t demonstrate the same syntactic phenomenon.
I’m not sure that latter exhibits the breaking of an island constraint. This is very difficult evidence to assess, because Generativists don’t want to say the original sentence is ungrammatical/it’s not a case of Yes/No. They want to say it’s not susceptible of one interpretation/parsing, even though it’s grammatical under another parsing.
The first formulation has two clauses/two verbs ‘say’ and ‘left’. My reading is that the ‘why’ can apply only to the outer clause/verb viz. ‘say’. The reading that the Generativists say is not grammatical, is for the ‘why’ to apply to the inner clause/verb ‘left’.
The latter formulation has only one verb/clause ‘left’ — if we ignore the imperative ‘Tell me’ appearing outside the ‘why’. If I change it to exhibit two clauses again, say:
Why do they opine that nobody left?
For me now again the ‘why’ attaches to the ‘opine’/can’t attach to the ‘left’.
(To answer obliquely at first.) The reason I look to Montague grammar formalisms first is because of one Montagovian who made a strong impression. As a visiting lecturer, he gave more-or-less the same lecture to a Linguistics seminar and then to a Philosophy seminar. This was the late ’70s, when Chomsky ruled in one department and Wittgenstein/Austin/Grice ruled in the other. He was able to cut through a great deal of metaphysics from either source with a simple question:
What are the truth conditions? Irrespective of all sorts of subtleties of topic-comment/wh-movement, meaning-is-use, performatives, implicatures, etc: if two utterances in context don’t carry the same truth conditions, including the same presuppositions, they can’t possibly mean the same.
Then your latter ‘opinion’ reformulation to me doesn’t carry the same truth conditions as the earlier ‘say that’. Of course I mean in my idiolect. We need a survey of English speakers in general.
Do both formulations carry the presupposition that nobody left? Or does one carry a weaker presupposition that ‘they’ claim (or opine) nobody left (which could be a possibly deliberately false claim/claimed opinion)?
My reformulation was an attempt to explicate how I understood the meaning, not a challenge to find different presuppositions.
The rising intonation, rising to the very end of the sentence and not falling halfway through, is a kind of request for confirmation or clarification, or possibly an expression of incredulity. “They gave a reason for nobody having left, what was it again?” — rising intonation in the second sentence.
Here is a completely artificial hypothetical example because I couldn’t think of a better one.
Wife: He was complaining that he couldn’t get his father to lend him some money to buy a house.
Husband: Wait a minute, why d’he say he wanted to buy a house? (Rising intonation, incredulous and trying to reconfirm what’s going on.)
Wife: So he could get his father a place to live in.
Husband: (Even more incredulous). WTF. If he wants to buy his father a house, why is he trying to get a loan from his father? (Last sentence with falling intonation)
If you can’t play that in your head then maybe you just haven’t encountered this kind of question.
If you can’t play that in your head …
Deep into the paper you started this from, there’s a lengthy section on ‘satiation effects’. If the linguist or their language informants confront enough of these sentences, their brains eventually cave in and accept any old porridge.
For example, upon re-analysing informants’ responses, it’s noticeable that a higher proportion of sentences in the first half got rejected than in the second half, although the tests were designed with ‘random’ mixes throughout of dodgy sentences. That’s a familiar ‘pop goes the weasel’ effect in child development tests.
I experienced that myself: I came across sentences the paper claimed were unacceptable; but I judged acceptable/interpretable. Then the paper told me they were merely repeats of earlier sentences — back at a point when I agreed they were unacceptable.
This has led some formalists to claim these sentences are all grammatical/there is no such thing as island constraints. (Especially because although all languages seem to exhibit island constraints, the objectionable vs acceptable structures seem to vary wildly.) Merely: people hit processing limits, as in the difficulties with embedding; and they can train themselves to supply more bandwidth to make a long-distance link across the island(s).
Or a cynic might say: people just stop objecting and wish the linguist would get on with it, so they can go get their free cup of tea and biscuit.
I daresay if somebody produced one of these sentences live to me, and with seeming honest intent that they wanted me to understand, I would cope — possibly after a bit of clarification. I’d readily put it down to performance limitations of speaker and/or hearer.
So I’ve given up counting the fucks I don’t give about island constraints. Did you have a substantive point?
I think this has zilch to do with ‘satiation effects’ and a lot to do with the fact that we speak slightly different varieties of English. I have no particular desire to belabour this any further since you’ve made it eminently clear, at some length, that you don’t agree.
This exchange has, however, left me wondering: If ‘satiation effects’ are an occupational hazard of linguists, is it possible that being heavily steeped in logic might also subtly skew linguistic intuitions?
What I took away from the paper (before I fell asleep last night) is that center embedding per se may be a second order effect of having both left- and right-branching structures in a system of sequences, and that what amounts to a game of Chinese telephone will tend to produce a system where one of those dominates. Which means that people’s sequence memorizing faculty will get very little training on center embedded structures.
Loose speculation follows: So if you (general) have to engage a slightly higher-level processing facility to parse a center embedding, maybe you can take one level in your stride, but anything funkier will punt to general cognition the first time. That you (individual) may quickly train yourself to parse them more easily (“satiation”), does not mean that you should expect success if you use them in an application letter (“acceptability”).
@Bathrobe: I have learned that I, who am very heavily steeped in logic, evidently have a completely different notion of how to judge grammaticality than AntC. I don’t feel grammaticality is something that can be decided by a snap judgement; thus, to me, a sentence may be “grammatically correct” even if it is not practical to parse it under realistic conversational circumstances.
I can only repeat my views: grammaticality means conformity to a grammar, and grammars are theories devised by grammarians. Acceptability judgments, as messy and unreliable as they are, are the raw data from which grammars of natural languages are constructed. Grammaticality judgment is a category mistake, unless indeed the grammar in question is so complicated that it takes the full power of human judgment to figure out whether a sentence conforms to it.
[Append discussion of judgment vs. inference here.]
“Why did they say that nobody left?” — I’ve been in conversations where questions of this form were misinterpreted, and it took some time to recognize what went wrong and backtrack, so now I try to be extra careful when I hear or say something like that.
if somebody produced one of these sentences live to me, and with seeming honest intent that they wanted me to understand, I would cope — possibly after a bit of clarification. — THIS! Because you are accustomed to figuring out speakers’ intentions, as discussed at length above. I bet that “satiation” also has a lot to do with guessing some context or intention for odd sentences: if you’ve done that once, it gets easier the next time. And you can see it happening in the response to Pullum’s conundrum of the stranded prepositions: Is “Who did you talk near today?” acceptable? At first glance, usually no, but then people start imagining contexts where it makes sense, and it starts to sound better. (I still doubt that it would be produced spontaneously, even in the right context; but that can be investigated empirically.)
I bet that “satiation” also has a lot to do with guessing some context or intention for odd sentences
I’m not so sure about this. I feel that one problem with linguistics (and many other approaches to language, too) is that, despite supposedly putting the spoken language first, it tends to deal with the written medium, since that is the principal medium in which linguistic matters are presented and discussed. It’s difficult to break the habit of falling back onto standard written English as the object of study. A simple example: when discussing the difference between “The dog is a wily animal”, “Dogs are wily animals”, and “A dog is a wily animal”, it’s easy to try to explain them through their respective nuances and forget that only two of the three are likely to be found in ordinary speech. Speaking for myself (because I am sure there are some people who would disagree), “The dog is a wily animal” is something I would most likely produce in ordinary speech if I were consciously slipping into a “book-like” style. That should be the first point in any explanation, not the last.
Since words on the page or the computer screen have no intonation, it’s also easy enough to fall back on ordinary unstressed, unmarked delivery. But to discuss utterances without reference to intonation and delivery is essentially to ignore the spoken medium. I was surprised on this or another thread when someone said, “Oh yes, it could have that meaning, but only if pronounced with a certain intonation” (or words to that effect), as though intonation were something extraneous or exceptional. It is as though the delivery of a nine-year-old reading out loud, or a speech synthesiser were the norm, and any attempt to impose intonation were somehow “exceptional”.
I would put it the other way round. Just as any actor who could only deliver written lines in the intonation of a nine-year-old or a speech synthesiser would be a failed actor, any linguist who could not imagine intonation in reading written sentences would be a failed linguist. I submit that the same goes for context — whether you are an actor or a linguist. It’s not “satiation”; it’s an essential part of being any kind of linguist at all.
@ Bathrobe,
A simple example: when discussing the difference between “The dog is a wily animal”, “Dogs are wily animals”, and “A dog is a wily animal”, it’s easy to try to explain them through their respective nuances and forget that only two of the three are likely to be found in ordinary speech. Speaking for myself (because I am sure there are some people who would disagree), “The dog is a wily animal” is something I would most likely produce in ordinary speech if I were consciously slipping into a “book-like” style
an example (but of what? I wish I knew in specific detail) might be the conservative policy intellectual Cliven Bundy speaking (not writing) the abstract language of conservative policy intellectuals from a century ago like Madison Grant or Lothrop Stoddard when he says,
“I want to tell you one thing more I know about the Negro.”
https://www.nytimes.com/2014/04/24/us/politics/rancher-proudly-breaks-the-law-becoming-a-hero-in-the-west.html
@Bathrobe I’m not so sure about this. I feel that …
All we’ve ended up at is that your idiolect/sense of acceptability differs from mine, which differs from others’. Differences of judgment are in line with the surveys cited in that paper. We don’t need to overthink why. In ordinary life (whether spoken medium or written) we’re trying to communicate and understand. We don’t stop ourselves to think: that syntax was a bit odd, I wonder if I’m finding it acceptable. We don’t usually go to crowded theatres (when that was possible) to get ambiguous/dubiously syntactic utterances thrown at us at random — Theatre of the Absurd not excepted.
So if I were the questioner, faced with wanting to know about the situation you started with, I simply wouldn’t ask it that way. There’s plenty of ways to ask that avoid breaching island constraints.
Did they say why nobody left? — ‘why’ now inside scope of ‘they say’
Amongst those there, did any give a reason nobody left? — ‘they say’ out of the picture altogether
Nevertheless, if some questioner did put it with your wording, I think I’d treat it as if asked “Have you stopped beating your wife yet?” I wouldn’t go ahead and just guess what they wanted to know and answer that — as @ktschwartz says.
BTW I did not discuss “without reference to intonation and delivery”. I said quite clearly adding your suggested intonation didn’t make the utterance acceptable for me. You are not going to browbeat me into your judgment. I just disagree. Full stop.
it’s an essential part of being any kind of linguist at all
Just stop insulting people. I don’t claim to be a professional linguist. I do claim to be a competent speaker of (my dialect of) English. And I’m not going to put up with you telling me what I find acceptable or interpretable: refraining from browbeating informants’ (linguistic) judgments is an essential part of being any kind of social scientist. Reasonable people can reasonably disagree.
I said quite clearly adding your suggested intonation didn’t make the utterance acceptable for me.
Just stop insulting people.
I wasn’t replying to you; I was responding to ktschwartz about “satiation”.
You are looking for intention where there was none.
(In responding to ktschwartz, I was disagreeing with the use of “satiation” as an easy way of discounting judgements. I’m well aware of “satiation”; I’ve referred to it at one of our threads without using that term. In the case we’re quibbling over, “satiation” is completely irrelevant since it wasn’t an example that became more and more acceptable the more I looked at it. It is acceptable to me without overthinking it.)
I have already agreed that we speak slightly different varieties of English. What else is there to say?
Neurophysiological dynamics of phrase-structure building during sentence processing
According to most linguists, the syntactic structure of sentences involves a tree-like hierarchy of nested phrases, as in the sentence [happy linguists] [draw [a diagram]]. Here, we searched for the neural implementation of this hypothetical construct. Epileptic patients volunteered to perform a language task while implanted with intracranial electrodes for clinical purposes. While patients read sentences one word at a time, neural activation in left-hemisphere language areas increased with each successive word but decreased suddenly whenever words could be merged into a phrase. This may be the neural footprint of “merge,” a fundamental tree-building operation that has been hypothesized to allow for the recursive properties of human language.
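If it helps to see the arithmetic the authors seem to have in mind, here is a toy Python sketch of my own (the bracketing format and names are invented; this is not the paper’s code): a running count of “open nodes” rises with each word and falls whenever words can be merged into a phrase, which is the qualitative pattern reported for the intracranial recordings.

def merge_trace(tokens):
    # tokens use "[" to open a phrase and "]" to close (merge) one,
    # e.g. "[ happy linguists ] [ draw [ a diagram ] ]".split()
    stack = []    # currently open nodes: words and already-merged phrases
    marks = []    # where each open phrase began
    trace = []
    for tok in tokens:
        if tok == "[":
            marks.append(len(stack))
        elif tok == "]":
            start = marks.pop()
            phrase = stack[start:]
            del stack[start:]
            stack.append(phrase)        # merge: several open nodes become one
        else:
            stack.append(tok)           # each new word adds an open node
        trace.append((tok, len(stack)))
    return trace

for tok, n in merge_trace("[ happy linguists ] [ draw [ a diagram ] ]".split()):
    print(tok, n)

The count climbs word by word (happy 1, linguists 2) and drops at each closing bracket, crudely mirroring the reported rise in activation per word and the sudden decrease at a merge.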
Thus far, a tentative conclusion is: some epileptic people build trees intracranially. So what else is new?
Seems to have nothing much to do with the distinctively Chomskyan concept “merge”; possibly the authors don’t realise its foundation-myth status as part of the whole elaborate Chomskyan edifice, and think it’s just a fancy way of saying that mainstream linguists pretty much all agree that human language involves nested constituents. Some neuroscientists seem to imagine that Chomskyanism is a sort of common-ground acquis communautaire among all mainstream linguists.
“Nested constituents” first struck me as a potentially fruitful notion for political humor. It then occurred to me that it’s just another way of talking about hierarchies, chains of command and demand etc. That’s acquis as can be.
https://tvtropes.org/pmwiki/pmwiki.php/Main/EpilepticTrees
Gosh. Some people here in the past have warned against tvtropes. Now I see why. It’s a landscape of rabbit holes for febrile imaginations.
Clearly you are Jesus in Purgatory, like everybody else.
The paper seems to take it for granted that “phrase-structure grammars” are the norm. Would it apply to dependency grammar, or is it an argument against it?
Seems to me that the only linguistic concept actually invoked for real in the paper is “phrase”, together with the scarcely remarkable idea that the brain actually recognises a phrase as such after the constituents have been heard rather than before (!) or during; accordingly any kind of grammar that recognises phrases as a thing would be consistent with their results. The linguistic discussion (such as it is) apart from that is flimflam.
Their only alternative model seems to be:
We also tested a variety of alternative models sharing the notion that probabilistic predictability, rather than phrase structure, is the driving factor underlying linguistic computations
No linguist AFAIK has ever suggested a model of grammar based on anything like this. That’s how Google creates its suggestions. Perhaps that’s how neuroscientists conceptualise language …
Moreover, Chomskyanism is fundamentally about imaginary models of utterance production and only rather peripherally about how people actually interpret utterances on the fly in real life. This sort of experiment really has no bearing on whether the (unfalsifiable) Chomskyan framework is of any value even in practical terms, let alone theoretical.
Inevitably, the example sentences (of which we are only given a few samples, not the complete set) were in English; it would be much more interesting to do this with Warlpiri (say), to see at what point the “Aha! I’ve parsed it!” drop in neural activation occurred. That even might be interesting in the context of dependency vs phrase-structure grammar. It seems unlikely that any of their English specimens would be of much use for this purpose.
It seems (sadly) unlikely that the researchers’ linguistic knowledge extends so far as would be needed for them to realise that languages other than English ought to be investigated at all, or indeed that there are other theories of grammar.
We also tested a variety of alternative models sharing the notion that probabilistic predictability, rather than phrase structure, is the driving factor underlying linguistic computations
I found the paper through this discussion on the “emergent vs generative” debate. So I guess it is explicitly arguing against the emergent approach.
https://www.glossa-journal.org/articles/10.5334/gjgl.500/#B52
It does seem, to me, to be how a lot of reading works, how writing works (many, probably most, typos are not phonological, but consist of writing more common instead of less common sequences), and how the parsing of German subordinate clauses often works before the speaker has arrived at the verb or the separable verb prefix.
Good point; though again, it relates to how we parse utterances, which need not be related to the actual structure of the utterances themselves in any simple way (and indeed, clearly isn’t.)
It occurs to me that their experiment is not going to distinguish between the drop in processing that occurs after you’ve processed a non-garden-path sentence successfully, from one where the end of the phrase is contrary to expectation; either way, the brain is going to have a wee rest after it’s solved a bit of the input before it ploughs on.
It won’t have occurred to them to actually use garden-path sentences in the input on purpose, any more than it occurred to them that they need to look at languages in which phrases can be discontinuous and/or don’t exist at all, like Bininj Gunwok.
But then they are really only concerned to prove, once again, the wonderfulness of the One True Theory, not to investigate meaningful alternatives. (A pointless exercise, after all, when the truth has already been revealed to us by the Master.)
A further confounding factor will be that in English, phrases are demarcated by suprasegmental cues of stress and pitch, so that you actually can tell in advance that there is more to come before a phrase is finished, and to some extent you can even tell how deeply nested the phrase is within the whole utterance as you go along. All of this will be missed if you think that the written form of the input adequately reflects the spoken input, as the researchers seem to have done.
(This is in repositorio-aberto.up.pt › bitstream, but you have to Google the title to find it)
Interview with Professor CEDRIC BOECKX
(Catalan Institute for Advanced Studies & Universitat Autònoma de Barcelona)
Cedric Boeckx is a Research Professor at the Catalan Institute for Advanced Studies (ICREA), and a member of the Center for Theoretical Linguistics at the Universitat Autònoma de Barcelona. Most recently he was an Associate Professor of Linguistics at Harvard University. He is the author and editor of various books on syntax, minimalism and language (from a biolinguistic perspective). He is also the founding co-editor, with Kleanthes K. Grohmann, of the Open Access journal Biolinguistics.
[….]
One thing that should be clear is that many linguists come into linguistics because of their love of languages. That’s an interest that they retain beyond the Chomskyan vision. So, what they are really interested in is still this love of languages but it’s not a love of language. I’ve never met someone who says: “I love languages because they are so similar”. It’s usually “I love languages, because they are so different”. So what you get is a focus on language variation, but variation of languages. And that love is much stronger than the interest in the biological foundations.
I haven’t done this very carefully but I think it’s true that if you look among the prominent departments of linguistics, I think that it’s the case that earlier on, that is, shortly after the cognitive revolution by Chomsky, Lenneberg, Halle and others, many of the young PhDs came from non-linguistic backgrounds; you had people coming from mathematics… but what’s interesting is that more recently almost all linguists came from linguistics departments themselves embedded in language departments, and I think that part of that means that the philological tradition is now much stronger than it was at the beginning of the cognitive revolution.
….the genius of Chomsky, at least the early Chomsky of, say, Syntactic Structures, was to essentially wed two traditions – or maybe more than two –, but certainly it was to use the tradition that, you know, is biological or philosophical, the Cartesian tradition, and the philological tradition, that is, early generative grammar was really about constructions and language specific, like passive in Hungarian, or so… and progressively, as we learn more about language, we have come to realize, perhaps implicitly, perhaps explicitly, that these two traditions can be studied independently of one another. That is, you can focus on the primitives, those are not language specific, or you can focus on passives in Hungarian. And there what you see is that most linguists actually are going back to, when faced with that choice, the constructions. It’s no accident that approaches like Construction Grammar [see Goldberg 1995] are very popular because it’s this philological tradition clothed in formal terms that appeals to many.
[…]
Well, I mean, one of the things that’s true is that even though deans and presidents of universities like to talk about interdisciplinarity, it rarely happens, or at least they rarely allow the structures for it to happen to be built. It’s all good to say, but it’s very difficult to make it happen.
So, I can’t predict the future but I think that as linguistics keeps pursuing current lines, the tension between biology and philology will become more and more manifest, and that people will have to choose whether they go to the biology department or to the languages department, and so you will find both, of course. The concern is not the language department because those will always exist. The concern would be for those students who go more into the biology of things, whether there will be a place for them to pursue that sort of work, that is: once they finish a PhD in linguistics or biolinguistics, can they apply for a post-doc position a biology department? That’s what we should, I mean, people who believe in that approach, that’s what we should try to guarantee somehow. We should try to convince deans and presidents to create a structure, an institutional structure that makes it possible for people who didn’t choose the philological tradition to have job prospects. That’s very hard. And, until now, it wasn’t so much of a concern, because so few actually took that biological path, because they were still in the transition period when the tension between philology and biology was not so apparent. I think that perhaps the main thing that minimalism in linguistics made apparent is that tension between biology and philology. Jan Koster, a good generative linguist, said that minimalism boils down to the following: linguistics is not “philology by other means” [Koster 2003]. I think that’s exactly right. But if that tension is only apparent now, we should worry about the next generation. Those that, until now, could still fall within a linguistics department that did both, or that tolerated the biology, will be told “well, ok, not in this philology department, but someplace else”. Where is that “someplace else”? We should worry about that. The only way to do that is to really talk a lot a more to the other disciplines. Otherwise, they won’t realize we exist.
The problem is that Chomskyans are prone to grossly overestimating the degree to which their sort of abstract theorising links up with biology at all. They’ve really never got away from imagining that they are describing an imaginary “language organ”; they’ve just got more sophisticated in imagining the character of the “organ.” This organ remains dissociated from real-world neurology for all but True Believers who assume that the One True Theory is proven beyond doubt already, so that any conceivable neurological observation simply confirms its Truth yet further.
In fact, the very last people biologists with an interest in language should get in contact with are Chomskyans, who are liable to present to other scientists views which are in fact highly speculative and controversial as if they represented a settled consensus among all linguists worthy of serious consideration.
Cedric Boeckx is Chomskyan after a fashion but only because he belongs to that generation. What he says is rather different:
Question: After more or less a century without discussing the issue of the origin of the human language – it was even declared as a kind of taboo among linguists – linguistics has recently turned itself again to this area of research. The biolinguistic program has played a decisive role in this shift. Moreover, it is a clear example of how linguists, biologists, anthropologists and other scientists can work together. We know that, right now, this is an increasing area of knowledge, with new discoveries being made on a regular basis. In your opinion, will it ever be possible to answer some classical questions such as knowing when and where did language appear or whether all languages descend from one common, ancestor language?
I would, if I may, disagree with part of the question or how it’s phrased. I think it’s not the case that people didn’t think or speculate about the origin of language for a while. It’s true that there’s the famous ban coming from France [in 1866, by the Sociéte Linguistique de Paris] and other linguistic societies about the origin of language but, if you look at the literature, there’s actually a fair amount of work that’s been going on, even during the supposed ban. Chomsky himself, actually – he’s often said to be one of the linguists who didn’t talk about evolution – has an interesting paper in the 1970s [Chomsky 1976a], at the time when it was not supposed to be allowed. Part of the reason why people went back to this topic has to do with a series of changes that took place in linguistics and allied disciplines in the mid-1990s and afterwards. Specifically the fact that people changed perspectives on various things made it possible to ask questions about the origin of language. For example, there’s been a shift, that’s now well documented, in comparative psychology, where people used to take different species and try not to compare them but to contrast them, say, “humans have language but our closest relatives don’t”. Things like that. For some reason, recently comparative psychologists have just started to approach the same question but differently, namely asking “if language is not this unique thing, if it’s like a decomposable entity, would it be possible for other species not to have language as a whole but certain parts of it?”, and that somehow revived the topic. In linguistics as well, there has been sort of a softening of the nativist position, where nativism is still, I believe, the norm, that is, it’s the case that humans acquire language, but not other species does. But people have softened this in saying perhaps not everything that enters into human language as a biological entity really is specific to language or specific to humans. Even the die-hard nativists have now allowed or even actively explored the possibility that a lot of what we thought to be highly specific to language and to humans may not be. That reopened a set of questions about the origin of language. A third factor, and I think perhaps a more important one, has been the shift in biology itself from a strict Neo-Darwinist position – the modern synthesis – to something broader, that many people call evo-devo [evolutionary developmental biology], although evo- devo is more like a bunch of fields as opposed to a unique one. For linguistics evo-devo has been good, because its philosophical roots are shared in an interesting way with the roots of linguistics, at least as a cognitive science. Chomsky has this important book called Cartesian Linguistics [Chomsky 1966], that traces back the philosophical conceptual history of the field, and if you look, the evo-devo literature goes back to roughly the same philosophical work, so there is a commonality there that can be exploited when it comes to asking about the origin of language [see Boeckx 2009, 2011]. I think that’s been a shift in biology itself, that’s been exploited in language but also in other cognitive areas, to approach the origin question. So, it’s a combination of factors that have led to something fruitful. It’s also been the work, I think, of people who have really studied this topic seriously for a long time, and for a long time were not very, perhaps, prominent, but then have gained prominence. 
I’m thinking of Derek Bickerton, who had important books, that now are on everyone’s reading list and citation list, but for while his work was isolated as an enterprise. Also, the group in Edinburgh, with James Hurford, has done significant work. They were probably the first to establish a program in evolutionary linguistics, and I think that the fruits of those efforts are now becoming apparent, even though the beginning of it goes way back. Now, to your question, or sub-question, whether we will ever know, it depends on what the questions are. I was heavily influenced by Richard Lewontin, at Harvard, who’s written this very pessimistic article saying that there are certain questions about the origin of human cognition that we’ll never be able to answer [Lewontin 1998], so let’s not ask them, because it’s a waste of time. I was influenced a lot by that, so I think that there are certain questions that we’ll never be able to answer scientifically [see Chomsky 1976b]. We may have interesting and coherent speculations about them, but answers that we can test… maybe not. There are certain questions that we’ll be able to answer. For example, the spread out of Africa may be able to tell us that in order for our story about the origin of language to be consistent we’d better say that there was just a single group of individuals in which language emerged and that from there has diversified, so that could be answered. There is a particular branch of biology, again specifically evo-devo, that has dealt with things like deep homology and genes that have been conserved for a while, and that may actually be the genetic equivalent of fossils, and may tell us a lot about language. Once we understand more about the genetic basis of human language we may be able to use those genes and see if we can actually use some of them as fossils in order to answer some of those questions. Maybe we won’t be able to answer the question about what was the thing that made language particularly adaptive at first, what was the function that made it stay, but other questions we’ll be able to answer. It will depend a lot on the information we get from the biologists. I think a lot of that information won’t come from linguistics, actually. It’s just us the linguists being able to exploit what they, the life scientists, can give us.
Bickerton is the bioprogram chap. His views are not very popular among creolists nowadays, with good reason.
We may have interesting and coherent speculations about them, but answers that we can test… maybe not.
I’m much more comfortable with scholars who say things like that. They may well be wrong, but they’re not crackpots.
Evo-devo is just a detail in the “strict Neo-Darwinist position”. It deals with how you get phylogeny from ontogeny: how evolution can and cannot happen by heritable modifications to development trajectories (truncating or extending them, speeding them up or slowing them down, or changing course altogether). That just happened to be underresearched before modern clearing & staining methods were developed (the older embryological literature is full of drawings that you can choose to believe or not) and especially before development genetics emerged.
@David Marjanović: My outsider impression is that describing evo-devo as in opposition to the Darwinian synthesis is an (anti)-shibboleth—typically indicating that someone has gotten most of their information from the popular press and cultural osmosis, rather than from talking to people that know the field. It is rather like what I tell my electricity and magnetism students, that they can probably safely ignore anything they hear about E & M from someone who thinks “e.m.f.” stands for “electromagnetic field.”
Punk eek was similarly hyped in the popular press (not by Gould himself, IIRC) as massive damage to “the strict Neo-Darwinist position”. But you need to hold a microscope to the fossil record to even see the difference between that and traditional gradualism, and there was actually never a reason (neodarwinian or otherwise) to expect constant evolutionary rates – when the environment changes fast, you get strong directional selection, and when it’s stable, you get stabilizing selection.
In On the Origin of Species by Means of Natural Selection, Darwin himself wrote briefly about punctuated equilibrium. Since evolutionary rates were very hard to observe in practice, he did not have that much to say about the subject, except that it seemed a natural inference that species might be stable over long periods of relatively unchanging conditions, then evolve quickly when a change in circumstances produced strong selection pressures—that is, large numbers of non-random deaths.
I don’t think Gould ever overtly claimed that Darwin had not introduced the idea first, but he appeared happy to let the popular press hype him as the true originator of the idea. However, Gould was better about pointing out that punctuated equilibrium was absolutely not contrary to the original formulation of Darwin and Wallace.
gould’s final position (in Structure) was that taking punk eek as the statistically predominant mode of speciation was a significant revision to the structure that darwin had set up – a refining of darwin’s fundamental insights, but in no way anti-darwinian. the core difference, as he traces the history, is with darwin’s basic assumption about the status of the fossil record as testimony to change.
darwin insisted that the stasis that the fossil record reveals as the dominant condition of species should be dismissed, and the record interpreted as faulty, based on an axiom of constant incremental change. gould & his collaborators (with solid evidence from cases of unusually dense evidence) rejected that axiom, arguing that the fossil record should be interpreted as accurate (if clearly limited in any given site by the details of deposition &c), and the stasis it documents understood as an actual phenomenon that must be explained. punk eek is a central part of gould & co’s proposed explanation – of speciation, but, just as importantly, of stasis – which has held up quite well as more evidence and analysis have accumulated over the last 40ish years.
in Structure, gould gives a fascinating and dishy account of the development of the theory and the controversy around it. my sense is that both the idea that there’s something anti-darwinian about punk eek and that punk eek = gould come directly from the highly ad hominem attacks on the proposal from daniel dennett and his ilk (and i use the word ilk advisedly), which involved everything from red-baiting gould to misrepresenting his and his collaborators’ published views.
(i’ll freely admit some partisanship – gould lived next door to a childhood friend of mine, until gentrification displaced my friend. plus i think he’s right.)
I read Dennett’s attack on Gould as well as two others by Maynard Smith and Conway Morris, and I came to the same conclusion you did.
I think that at the beginning Gould oversold the originality of his ideas, but in the end he moderated his position. I especially think he was right on the whole question of inevitability.
Bickerton may not be popular among other creolists, but his wikipedia bio has some amazing sentences which tend to suggest he was a much more interesting personality than Gould, such as:
“Bickerton also wrote several novels. His novels have been featured in the works of the Sun Ra Revival Post Krautrock Archestra, through spoken word and musical themes.” (NB: I was hitherto unfamiliar with the SRRPKA, despite being both a Sun Ra fan and something of a krautrock aficionado.)
and
“To answer questions about creole formation, in the late 1970s Bickerton proposed an experiment that involves marooning on an island six couples speaking six different languages, along with children too young to have acquired their parents’ languages. The NSF deemed the proposed experiment unethical and refused to fund it.”
I have also wanted to do that experiment. But The Man hates me because I’m “just some asshole on the internet”, so he won’t give me the finding I need.
“funding”
Me too. I wanted to document the long-extinct Phrygian language by repeating Psammetichos’s experiment, raising two babies in isolation until they start speaking Phrygian. Instead of a check, the NSF sent me back a drawing that kind of looked like “🔩+⚾️”. I don’t get it.
“Boltball”? What does that mean?
Oh. I thought it was “Boltbaseball”. I still don’t get it.
skrewball?
“Barbell-Ball” as pronounced by a non-rhotic drunk = “babel,” which may kinda sorta refute Psammetichos?
Maybe it means something in Phrygian.
Most of the fossil record still isn’t detailed enough to reveal this or any alternative. Where it is detailed enough, we see both in different cases.
The use of the term “species”, as always, assumes some species concepts but not others… most concepts allow for “cryptic species” that almost by definition can’t be distinguished in the fossil record.
Isn’t “species” pretty relativistic now, with several or many definitions of the term as well as a multitude of arguments about whether a given organism really is a separate species?
I think D.O. was right.
By coincidence, I ran into an image of a manuscript letter for sale, written by Alexander von Humboldt, commenting on the Phrygian word βεκός, the one supposedly spoken by those children Psammetichos did his little experiment on.
“Βέκος, also βέκκος […] in Phrygian […] also in Cypriot […]” – interesting.
Yes. My point is that punk eek is about species & speciation only under some species concepts, not all.
My original deadpan “Boltball” was in response to seeing the comment on my phone. The first emoji was displayed clearly as a screw, oriented diagonally, and I was just riffing on the fact that it looked like a machine screw, not a wood screw. When I looked again later on my desktop computer, the emoji was rendered very differently, and I could see how it might look, a la J.W. Brewer, like a vertical barbell. Had I seen it depicted that way first, I would have been a bit slower getting the rebus.
That’s why God created Emojipedia.
(Wait, were they calling me a nutball? How rude!)
My wife makes a version of Viennese crescent cookies that are round rather than crescent-shaped, and we call them nutballs. They are a holiday special, so I am enjoying them in rotation with the Florentines she also makes.
Huh, Google has “Viennese crescent cookies” as a search suggestion, and using it shows me (currently sitting in Vienna) that they’re Vanillekipferl. I’ll help make some later today; this is the first time I see them associated with Vienna specifically. There are usually hazelnuts in them, but replacing them by ground almonds is no loss; the important part is the vanilla.
And putting egg in the dough is for losers. 🙂
My wife does not put egg in the dough!
She says she’s made them with hazelnuts as well as almonds, but usually the latter.
seal of approval
She says the seal of approval means a great deal to her, but wants me to correct my misstatement: she makes them with walnuts, not almonds. I was just copying your word.
That’s a genuinely novel creation, then.
J. Trabant, writing in International Encyclopedia of the Social & Behavioral Sciences (2001) about Wilhelm von Humboldt (1767–1835), claimed that Humboldt’s typology is “rather a myth than a creature of the author”:
All the languages of the world were to be described according to what Humboldt calls their ‘inner coherence’ (innerer Zusammenhang), their particular structure. The languages of the world had mostly been grasped according to the traditional categories of Greek and Latin grammar. But the structural properties of a language can only be captured through its own categories. Humboldt envisaged an encyclopedia of the languages of the world, a new Mithridates. He started a whole series of descriptive works on different languages of the world of which only the Mexican Grammar (Humboldt 1994) has been published up to now. A second kind of structural studies aimed at a revised form of general grammar (which was also based on the Greco-Latin grammar): comparative investigations of linguistic categories throughout the languages of the world. Humboldt’s article on the dual (1827) is an example of that research. After the completion of these structural descriptions, research into the parental relationships of languages (genealogy) or into structural similarities among languages (typology, classification) can be undertaken.
What is often considered as Humboldt’s main contribution to linguistics, the so-called Humboldtian ‘typology,’ is neither central to his linguistic aims nor did Humboldt propose typology as an alternative to the dominant linguistic school, that is, to the historicocomparative studies of languages. On the contrary, in his last book, his unfinished work on Kavi, Humboldt eventually envisaged a comparative study of the Malayo-Polynesian languages which would follow Grimm’s and Bopp’s work on the Germanic and Indo-European languages. Humboldtian typology is a linguistic myth created by superficial readings of his texts. Humboldt was very sceptical about the classifications of the Schlegels, and eventually he explicitly refuted the legitimation of classifying languages, since languages are individuals and should be described as such.
The summit or keystone of Humboldtian linguistics is therefore the investigation of the ‘character’ of languages. This is just the opposite of classification and typology, of grouping similar entities together. It is a tentative attempt to grasp the very individuality of each individual language. Humboldt was not sure whether ‘characterization’ was still a scientific task. But whatever it is, it has to be tackled since it is the ultimate aim of linguistics to see how the human mind uses the instruments it has created, to investigate the very end of language: speech and discourse. The—necessary—structural description of a language is only ‘comparable to its dead skeleton’ (1903–36, VI, p. 147), the life of a language is discourse (verbundene Rede), a life giving ‘character’ to that language. Therefore, only languages that have a multifaceted literature yield the possibility of being described as having characters.
Nearly two centuries after Humboldt’s sketch of a huge linguistic program, it can be said that the structural description of human languages he envisaged was the main task and achievement of linguistics in the twentieth century. It is not by chance that Humboldt is present in the seminal theoreticians of modern linguistics (Saussure (see Saussure, Ferdinand de (1857–1913)), Bloomfield (see Bloomfield, Leonard (1887–1949)), Hjelmslev). The nineteenth century was not very Humboldtian, with its emphasis on diachronical research as well as in its naturalistic research methods. Humboldt’s ‘keystone’ of linguistics, the ‘characterization’ of languages, was attempted by some literary-minded linguists (e.g., Vossler), but was discredited by nationalistic and ideological interpretations of linguistic structures. Since the structural descriptions of the languages of the world have now virtually been completed, it is not astonishing that in actual linguistics typological attempts are again very important. The so-called Humboldtian typology is often quoted in that context. But it is—even more than the other famous Humboldtian invention, ‘Humboldt’s University’—rather a myth than a creature of the author. Without any foundation in Humboldt’s work is also the widespread conviction that Humboldt discovered fundamental principles of Chomskyan linguistics, like the ‘infinite use of finite means.’ Humboldtian linguistics as a whole is in direct opposition to that kind of linguistic research: it is a hermeneutical research into the cultural diversity of human languages in historical communities, preferentially through (literary) discourse, and not a naturalistic investigation of the structure of the universal human mind considered as a universal grammar.
Very interesting, thanks!
Chomskyans, who are liable to present to other scientists views which are in fact highly speculative and controversial as if they represented a settled consensus among all linguists worthy of serious consideration.
Naturally. To a Chomskyite, there are no non-Chomskyite linguists worthy of serious consideration.
Bathrobe: “Neurophysiological dynamics of phrase-structure building during sentence processing
According to most linguists, the syntactic structure of sentences involves a tree-like hierarchy of nested phrases, as in the sentence [happy linguists] [draw [a diagram]]. Here, we searched for the neural implementation of this hypothetical construct. Epileptic patients volunteered to perform a language task while implanted with intracranial electrodes for clinical purposes. While patients read sentences one word at a time, neural activation in left-hemisphere language areas increased with each successive word but decreased suddenly whenever words could be merged into a phrase. This may be the neural footprint of “merge,” a fundamental tree-building operation that has been hypothesized to allow for the recursive properties of human language.”
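A toy illustration, entirely my own and not anything from the paper: the rise-and-drop pattern described above can be made concrete by counting how many unmerged units sit on a parse stack after each word of a pre-bracketed sentence; the count climbs with each new word and stops climbing (or falls) right after a phrase closes and its pieces are merged into a single unit. A minimal Python sketch, where the bracketing and the function name are invented for illustration:

# Toy sketch: track how many unmerged units sit on a parse stack
# as each word of a pre-bracketed sentence arrives.
def open_node_counts(tokens):
    """tokens: words plus '[' and ']' marking phrase boundaries."""
    stack = []    # words and already-merged phrases
    counts = []   # number of unmerged units recorded after each word
    for tok in tokens:
        if tok == "[":
            stack.append(tok)
        elif tok == "]":
            # "merge": collapse everything back to the matching '[' into one phrase
            phrase = []
            while stack[-1] != "[":
                phrase.append(stack.pop())
            stack.pop()                            # drop the '['
            stack.append(tuple(reversed(phrase)))  # one merged unit
        else:
            stack.append(tok)
            counts.append(sum(1 for x in stack if x != "["))
    return counts

# [happy linguists] [draw [a diagram]]
sentence = ["[", "happy", "linguists", "]",
            "[", "draw", "[", "a", "diagram", "]", "]"]
print(open_node_counts(sentence))  # [1, 2, 2, 3, 4]; no rise at "draw",
                                   # because [happy linguists] has just merged

The study itself measured intracranial activity, of course; the stack count here only mimics the word-by-word profile the summary predicts.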
I developed stress-induced epilepsy in my 30s; I have seizures every other year. This sounds a lot like that: I am reading on my phone, the grammar blurs, and then I know I will have a seizure and have to sit somewhere safe. The doctor I went to about it said she’d never seen anything like it and wanted me for research, but I haven’t had one since. She also said the MRI was weird, unlike anything she had seen before.
My mother’s MRI revealed the signs of grand mal epilepsy, and indeed her grandfather often had seizures (which he interpreted as religious visions). On the other hand she never had a seizure, not once in 60+ years.
@jc
Maybe she only had them in her sleep and no one noticed (did she ever bite her tongue in her sleep?).
@John Cowan: It’s also not trivial to determine what kinds of seizures somebody will experience just by looking at an MRI. While your mother may never have had a grand mal seizure, there are lots of other types that are less obvious, and people may not realize they are having them. For example, my father has always denied that he suffers occasional absence seizures.
She definitely never bit her tongue; it would have been discussed in the family. An absence seizure is possible, but usually they are noticeable to one’s intimates, and hers didn’t see anything. (At one point, I was briefly thought to have had one at school because I was too upset to talk but not showing my upset, but I told them afterwards that I remembered the whole thing.) From what my mother told me, the neurologist wasn’t too surprised: in the end, epilepsy is usually diagnosed from the patient’s reports, unless they happen to have a seizure in the office.
Gale was waiting in a neurologist’s office one day, and when the nurse called her name to be seen, another patient grabbed her purse by the strap and started trying to pull it off her shoulder. Gale squawked and pulled back to no avail: the patient remained silent and pulled all the harder. Finally Gale, the nurse, and the receptionist pried the strap out of the patient’s hand and Gale went into an examination room. When she came out, the receptionist told her that the other patient had had a seizure and had no idea what she was doing.