Or so say the authors of the study described by Michelle Starr:
Our Neanderthal cousins had the capacity to both hear and produce the speech sounds of modern humans, a study published in 2021 found. Based on a detailed analysis and digital reconstruction of the structure of the bones in their skulls, the study settled one aspect of a decades-long debate over the linguistic capabilities of Neanderthals. “This is one of the most important studies I have been involved in during my career,” said palaeoanthropologist Rolf Quam of Binghamton University back in 2021.
“The results are solid and clearly show the Neanderthals had the capacity to perceive and produce human speech. This is one of the very few current, ongoing research lines relying on fossil evidence to study the evolution of language, a notoriously tricky subject in anthropology.”
The notion that Neanderthals (Homo neanderthalis [sic: should be neanderthalensis]) were much more primitive than modern humans (Homo sapiens) is outdated, and in recent years a growing body of evidence demonstrates that they were much more intelligent than we once assumed. They developed technology, crafted tools, created art and held funerals for their dead. Whether they actually spoke with each other, however, has remained a mystery. Their complex behaviors seem to suggest that they would have had to be able to communicate, but some scientists have contended that only modern humans have ever had the mental capacity for complex linguistic processes.
Whether that’s the case is going to be very difficult to prove one way or another, but the first step would be to determine if Neanderthals could produce and perceive sounds in the optimal range for speech-based communication. So, using a bunch of really old bones, this is what a team led by palaeoanthropologist Mercedes Conde-Valverde of the University of Alcalá in Spain set out to do. […]
Having the anatomy capable of producing and hearing speech doesn’t necessarily mean that Neanderthals had the cognitive ability to do so, the researchers cautioned. But, they point out, we have no evidence that the Sima hominins exhibited the complex symbolic behavior, such as funerals and art, that we’ve found associated with Neanderthals. This difference in behavior parallels the difference in hearing capacity between Neanderthals and Sima hominins, which, the researchers say, suggests a coevolution of complex behaviors and the ability to communicate vocally.
“Our results,” they wrote in their paper, “together with recent discoveries indicating symbolic behaviors in Neanderthals, reinforce the idea that they possessed a type of human language, one that was very different in its complexity and efficiency from any other oral communication system used by non-human organisms on the planet.”
You can read more details at the link; we’ll doubtless never know if Neanderthals actually used language, but it’s a perennially fascinating idea which we’ve chewed over before (2013, 2022). Thanks, Bathrobe!
Just gonna link the same Neanderthal high pitched voice theory I did last time:
https://www.youtube.com/watch?v=o589CAu73UM
If this turns out to be true, however, where does it leave Chomsky’s speculations about the origins of human language?
“The notion that Neanderthals … were much more primitive than modern humans (Homo sapiens) is outdated, and in recent years a growing body of evidence demonstrates that they were much more intelligent than we once assumed.”
1. primitive as opposed to intelligent. ha-ha
2. what do they mean “once assumed”?
“Once assumed”? It was conventional to assume that the Neanderthals were apes and brutes and Cro-Magnons were perfectly sophisticated. Actually it was “fashionable to assume” that the non-white / non-Aryan people … etc.
But did I get your question right?
@Dmitry, just kidding. It was not a serious question.
I think “Neanderthals are more intelligent” [than Homo sapiens, without “than we once assumed”] would sound better….
____
Now, treating it as a serious question… Well, not unlike the colonisers and colonised: some implied feeling of superiority based on (a) presumably better tech, (b) the fact that homo sapiens “won” (or contributed more to the gene pool of humanity) and (c) that homo sapiens is “we”.
All of this makes the idea that “we are smarter” boring, the other option “interesting”. But I don’t think I ever heard anyone claiming that “we are smarter”.
There’s not any actual definitive evidence that homo sapiens circa 20,000 BCE possessed language either.
I propose the theory that the use of language was a consequence of the cultivation of cereal crops (and spread later to hunter-gatherers).
Homo erectus knew how to make boats. Could you teach someone to make a boat without language? Maybe, I suppose.
maidhc, that hypothesis is baffling, and frankly batty. When Europeans reached the southernmost point of South America they found groups with language in spite of all populations being hunter-gatherers for a long way north. Moreover, if Vajda’s Dene-Yeniseian hypothesis holds up, then that establishes language use across northern Asia and North America before populations in either place ever began cultivation.
Homo erectus knew how to make boats. Could you teach someone to make a boat without language? Maybe, I suppose.
IIRC in some corvid species there had been recorded incidents of teaching tool-making… maybe not as complicated as boats though? How complicated were boats in those days?
…OTOH I’m not sure if we necessarily know whether corvids actually have language. For all I know they do and it just hadn’t been deciphered yet because it works too differently from human language, and/or because it’s hard to get sufficient amounts of good data.
(Do we even have evidence for the boats’ existence other than “had to reach those islands somehow”? Indonesia isn’t that far off mainland and must have been less so in lower-sea-level times, and I imagine Homo erectus were probably hardier than modern humans – were they already pursuit predators by then? – so I wouldn’t necessarily rule out a 30-mile swim. A better question there is how they’d have known there was anything to look for.)
“Indonesia”
https://en.wikipedia.org/wiki/Sundaland
Did the article mention widespread speculation that the language Neanderthals used was Welsh?
I wouldn’t necessarily rule out a 30-mile swim.
Of a whole population including women and kids, enough to make a viable breeding population on the other side?
Neither would I set out on a 30-mile canoe voyage without a crew I could yell at.
(Although, say, the straits between what are now the islands of Indonesia were much narrower due to sea level being lower, there’s a very deep channel through there (Timor Trough/Sunda Trench, uncrossable by pre-human land fauna — Lydekker Line on @drasvi’s map), with the tide from Indian to Pacific Ocean tearing through. You couldn’t paddle/swim; a boat/raft couldn’t just drift. You’d need to understand tides and currents and prevailing winds and weather patterns. You’d have to explain why we’re going at dawn tomorrow, not just any old time.)
Am I missing something or do they have zero bone-based (or other) evidence of anatomical ability to talk versus just a suggestive inference from ability to hear in what they claim is the optimal frequency range for listening to (current) human speech?
There is no shortage of publications about their ability both to talk (discussed since the 1970s) and to hear… this article is about hearing. The question is whether their observations are massively more suggestive.
‘Importantly, because these consonants are voiceless, they do not propagate across the landscape and are limited to short-range intraspecific communication. Indeed, voiceless consonants may represent “…the evolutionarily oldest group of consonants” [37].’
Sounds funny, but the ref 37 is Lameira, A. R., Maddieson, I., & Zuberbühler, K. (2014). Primate feedstock for the evolution of consonants. Trends in Cognitive Sciences, 18(2), 60–62. (sci-hub, academia) – a brief note attempting to draw attention to the following “Given their importance, it is surprising that comparative animal research has ignored voiceless calls (i.e., oral sounds produced without vocal-fold vibration) almost entirely and instead focused on voiced calls or ‘vocalisations’.“.
Indeed.
____
Some other claims:
“Of particular interest here is the ability to voluntarily control the anatomical structures for speech production (i.e., the larynx and the supralaryngeal vocal tract) to the point that organisms may learn to produce new calls. Although non-human primates seem largely incapable of vocal learning, vowel-like signals have been identified in monkeys [15], whereas evidence for consonant-like signals appears to be more typical in great ape communication. A preliminary conclusion is thus that the two main building blocks of speech – protoconsonants and protovowels – have evolved separately but that natural selection has favoured their joint use over the past 10 million years.”
“Particularly relevant is that, in great apes, there is good evidence for social learning, fine-tuning, and sensorimotor feedback in the production of voiceless calls, in contrast to voiced calls [6–9].”
“For example, at Suaq (Sumatra) and Sabangau (Borneo), orangutans produce raspberries during nestbuilding…” – What? I did not know “raspberry”:) But it is explained above: “The voiceless-call repertoire of great apes may be unusually rich and includes ‘clicks’, ‘smacks’, ‘raspberries’, ‘kiss sounds’, and ‘whistles’”
“By contrast, this ability to discriminate vowels and consonants appears to require very little experience and is successfully achieved by human infants of less than 12 months [13 Lemasson, A. et al. (2013) Exploring the gaps between primate calls and human language.], indicating an important role for acoustic cues” – nothing new (though it depends on how they define “discrimination”), but the suggestion ([14] Caramazza, A. et al. (2000) Separable processing of consonants and vowels. Nature 403, 428–430) that vowels and consonants are processed separately in the brain, and their own suggestion of independent evolution, puts it in a new light.
And here is the source of the funny phrase: “An intriguing possibility is thus that voiceless consonants are the evolutionarily oldest group of consonants that share a common evolutionary origin with the voiceless calls of modern great apes.“
Irony, incidentally, co-evolved with the ability to place the tongue firmly in the cheek.
because these consonants are voiceless, they do not propagate across the landscape
This will come as news to southern Ghanaians, who are accustomed to attracting attention (e.g. of waiters) with an (extremely propagating) hissed sssss.
If you’re going to spin just-so stories, it’s good to at least begin with facts that are actually, like factual.
That’s common in Morocco too
Somewhere a snake hissed…
Sima de los Huesos – bone cave, cave of the bones. Not to be confused with Peştera cu Oase (cave with bones), the same thing in Romania.
Well, “suggestive inference” undersells it; mammalian inner ears are very well researched, and computed microtomography lets you see the 3D shape of inner ears in any reasonably well-preserved skull (or just petrosal bone), so reconstructing the ability to hear voiceless consonants at the kinds of volume you find in ordinary conversations should be doable with quite narrow error bars.
Yes, the ability to hear voiceless consonants well must logically precede the ability to use them – but not by a lot of time. It could have evolved stepwise: maybe a small improvement made languages with one voiceless consonant phoneme possible, then the ability to distinguish between them improved…
How far does it reach? If you want to cover a distance, don’t you better yodel?
The sound power transmission curve for the Sima de los Huesos remains (fig. 2 of the paper, https://doi.org/10.1038/s41559-021-01391-6) falls off from the Modern/Neanderthal curve only at about 3.2 kHz. That accounts for the more dramatic “occupied bandwidth” numbers, in the graph shown in the popular press, but that is not so significant for speech comprehension. Burst spectra for [p], [k], [t] peak at about 300, 2500, and 4000 Hz, respectively. A Sima human would hear the difference, even if the [t] would sound muffled, similar to a modern human with slight aging-related hearing loss (or wearing a hat over their ears). Plus, formant reshaping near the stops would mostly compensate for that. Higher-frequency sibilants like [s] would sound muffled too, but that’s it.
So yes, Neanderthals could hear as well as us, but the Sima folks were not far behind, and could distinguish speech sounds well enough.
Anyway:
Stoessel et al., 2023, Auditory thresholds compatible with optimal speech reception likely evolved before the human‐chimpanzee split (here, open access):
By “suggestive inference” I had meant getting from (a) their ears are well-suited to hearing the frequencies characteristic of (modern) human speech to (b) therefore they must have been both uttering and hearing speech. But according to Stoessel et al. as block-quoted by Y, (b) doesn’t actually follow from (a) …
Interesting – but I don’t think this can compensate for the opposite properties of the inner ear.
Outer-Middle is what Conde-Valverde et al. looked at. I haven’t seen anything about what the inner ear of Neanderthals was capable of.
Oh… yes, they looked at the middle ear! I seem to have confused this with a talk on the inner ear (IIRC) I saw a few months ago. Haven’t had time to read the paper yet (or see if it’s accessible).
“e.g., short external ear canal, small tympanic membrane, large oval window”
Sounds like a description of someone’s dwelling. Large oval window….Someone who owns a tambourine, tympanon.
“Unexpectedly we find that external and middle ears of chimpanzees and bonobos transfer sound better than human ones in the frequency range of spoken language.”
Aha, unexpectedly wild animals hear better…. Perhaps they are more sensitive to smells too? (though, it seems humans can see relatively well)
“How far does it reach? If you want to cover a distance, don’t you better yodel?”
Towards an evolutionary history of yodel…
Could you teach someone to make a boat without language?
Gorillas can teach other gorillas to make nests without language.
My mother likes to tell the story of how my little brother and I were watching handymen working at our house when we were boys. I was able to recount what they did but couldn’t perform the work (I still have two left hands today), while my brother couldn’t recount it but was actually able to do the same work they did after watching them.
I wonder if one ability actually contradicts the other.
Teaching people to do cataract surgery, I found that it was extremely difficult (and indeed, pointless) to try to put into words what I was doing right exactly and they were doing wrong: people just have to watch and learn, at least after being put through the basics (“Cut here.”) The vital fine points of technique just don’t lend themselves to verbal description.
For my own part, I never learnt anything of any value about microsurgical technique from a book; only from watching and imitating experts, and from constructive criticism from the rare few experts who actually could put into words what I was doing wrong, as opposed to just saying: “No! Not like that! Like this …”
The teaching experience often involved watching a trainee and thinking “that would be a lot easier if I were doing it”, without being able to consciously explain just why it would be a lot easier.
@de
If you had had the experience of moving furniture through a doorway (hint: just placing the furniture in a single orientation and pushing/pulling does not always work) with an Asperger’s sufferer, you might be able to verbalise your surgical “solutions” (although maybe practicing with a dummy rather than a live patient would be a good idea).
I shall pass your tip on to the Deanery.
It reminds me that one of the most skilful cataract surgeons I have ever known was a source of continual inspiration to me when I was studying for the Fellowship examinations. “Well, even X passed this exam. How hard can it be?”
I found that it was extremely difficult (and indeed, pointless) to try to put into words what I was doing right exactly and they were doing wrong
I recently expostulated on this, in terms perhaps too general to be immediately grasped – something like “words are useless for effective cognition transfer”. That includes conveying how to do something (I am talking along the lines of Ryle here. For “convey” think vermitteln, which does not suggest any kind of transportation).
As you see, the difficulty is only exacerbated at the meta-level, where you try to put into words something regarding the putting of things into words.
Part of the problem is merely that it takes time to produce words to good effect. Depending on the subject that time may be considerable. As a result the attention of your audience may flag, or you may yourself fall asleep from exhaustion.
I read about investor Charlie Munger, who cashed out his holdings a couple of days ago. In the 1950s he’d had a botched cataract operation, and due to continuing pain opted to have the eye removed. I wonder how common that was. I’m surprised that such an old and relatively superficial surgery could end so badly.
“Cut here.” … “No! Not like that! Like this …”
DE would respond properly to your surprise. Meanwhile: Due to the nature of cataract surgery, posterior capsule tears may occur at any point during the operation.
Here is a study that interfered with the object of the study: Capsular bag-related complication rates were reported in 0.36% of surgeries for senior and 7.03% for resident surgeons at the beginning of the study, compared to 0.32% and 1.32%, respectively, at the end of the study.
I wonder how common that was. I’m surprised that such an old and relatively superficial surgery could end so badly
More common then than now. Complications are actually fairly common, but in most cases if you handle them properly at the time the final outcome is still pretty good. (This is actually the best argument against handing off routine cataract surgery to trained nurses/technicians/whatever, a plan often favoured by administrators keen to cut costs and undermine clinicians: cataract surgery is all rather samey, until suddenly it isn’t.)
Removing an eye is pretty drastic even if it’s blind and painful. You can often relieve even intense pain better by deliberately damaging the relevant pain nerves.
In the old days, surgeons tended to remove eyes more readily because of the spectre of
https://en.wikipedia.org/wiki/Sympathetic_ophthalmia
but (a) it’s much less common than was once thought* and (b) if you do have the enormous misfortune to get it, the treatment is also a lot better.
I don’t know about “superficial.” The operation takes place entirely inside the eye (something I find is not actually common knowledge) and the eye is (technically) part of the central nervous system.
* There is, however, a famous legal case from Australia, where the surgeon was found liable when the unfortunate patient had the one-in-a-million (literally) bad luck of getting this, because the patient was able to show that he had specifically asked whether there was any risk to his other eye from the cataract surgery, and had been told (understandably), No.
The American historian William Prescott
https://en.wikipedia.org/wiki/William_H._Prescott
went blind from sympathetic ophthalmia. The perils of bun fights among undergraduates …
I’m listening to a presentation on the uses of generative AI and thinking of how tough it would be to generate ideas for how to do (uncommon, work-related) physical activities, or organize physical work, given the unlikelihood that relevant discussions are in the training data, and reading DE’s comment in that context really crystallized my sense of the issue.
By “relatively superficial” I meant, not going very deep in, which I assume gives less chance for incidental damage or infection.
Thanks @DE. I’m relieved to be hearing all this only now, rather than before my cataract surgery of a few months ago.
The surgeon is mad keen to operate on my other (dominant) eye. He was predicting I’d jump at the chance when I realised how much it had improved the sight in my clouded-over eye. (Which was indeed improved, but not so far as to be actually better than the dominant one — IMO, but of course my mere opinion doesn’t sway the fella.)
Complications are actually fairly common, but in most cases if you handle them properly at the time the final outcome is still pretty good. (This is actually the best argument against handing off routine cataract surgery to trained nurses/technicians/whatever, …
Yeah, similar arguments appear in the airline industry to suggest pilots are over-trained and/or most of the job could be automated … until an engine snuffs out just as you’re about to touch down in a gusty cross-wind.
I just read a paper, published today, that has a section:
By “relatively superficial” I meant, not going very deep in
There is no body cavity that cannot be reached with a #14 needle and a good strong arm.
http://gomerpedia.org/wiki/Laws_of_the_House_of_God
which I assume gives less chance for incidental damage or infection
The eye has almost no defence against infection if it actually gets inside, which is fortunately very unlikely; unless some surgeon has just gone and made a hole in it.
https://en.wikipedia.org/wiki/Endophthalmitis
(This is one of the very few ophthalmic conditions, apart from major trauma, where the ophthalmic resident actually has to get out of bed in the middle of the night. By morning it will be too late.)
Thanks! I have now learned the word hypopyon, excellent for Scrabble.
I also learned a lot of other things, which sedation will keep me from thinking about if and when I get cataract surgery.
… which sedation will keep me from thinking about if and when I get cataract surgery.
The modern trend (sez the doc) is very local anaesthetic, which meant I was awake and thinking throughout. The hardest thing was to remember to keep still and not allow my eye to follow all the machinery swimming in and out of its visual field.
I found that it was extremely difficult (and indeed, pointless) to try to put into words what I was doing right exactly and they were doing wrong
I don’t know about that. On the teaching side, I find I agree, but I myself learn best from verbal explanations.
Complications are actually fairly common, but in most cases if you handle them properly at the time the final outcome is still pretty good. (This is actually the best argument against handing off routine cataract surgery to trained nurses/technicians/whatever, a plan often favoured by administrators keen to cut costs and undermine clinicians: cataract surgery is all rather samey, until suddenly it isn’t.)
Per contra, every bit of that is still true if we replace cataract surgery with delivering babies passim, and yet most of that is not done, or is no longer done, by surgeons.
delivering babies
Yeah. Just how did so many babies manage to get delivered over the — oh — hundreds of thousands of years of human history before there were surgeons or even trained midwives? (I’m calling to mind the rather graphic description of the delivery in Perfume. Or indeed the (alleged) advent in the stable.)
Cataract surgery in pre-history not so much. (Did the Surgeon-General collect statistics on adverse outcomes of ‘Couching’ in C6th BCE?)
Well, if there’s a complication that needs a surgeon, and surgeons haven’t been invented yet, somebody dies. Or goes blind. People do take risks knowingly. Neoteny reputedly developed to keep the numbers sustainable.
@David Eddyshaw: Although William Prescott* was not completely blind, he largely stopped traveling after his vision became very poor. He has subsequently been criticized for not visiting the sites in Latin America that he wrote about in his histories of the Spanish conquest, but to me that has always seemed an unfair slam on someone with such severe visual problems. Given his condition, how much value would there have been in his making an in-person inspection of Tenochtitlan?
* “Don’t throw your breadcrust until you can see the whites of their eyes.”
Yeah. Just how did so many babies manage to get delivered over the — oh — hundreds of thousands of years of human history before there were surgeons or even trained midwives?
I’ve read some descriptions (with photos…) of modern baby deliveries recently and now I’m almost legitimately wondering how anyone managed to cut the umbilical cords in the (probably) millions of years before scissors became common. Knives? Sharp stones? Were the cords just ripped off with a strong yank? How do modern nonhuman great apes do it – surely they don’t use scissors?
(Trained midwives, and to a lesser extent surgeons, are some of the oldest professions in recorded human history, but of course recorded human history isn’t all that old in the first place. Unfortunately AFAIK midwifery doesn’t really leave archaeological traces, so it’s probably hard to tell how much older it is.
Surgery, at least, is unlikely to be pre-Paleolithic because of the required tools… much less requirement in midwifery. For all I know there could be midwife apes.)
The umbilical cord in other animals:
Knives? Sharp stones? Were the cords just ripped off with a strong yank?
In retrospect I have no idea why I hadn’t thought of the “teeth” option – or of the “it just falls off by itself eventually” option.
The second option is far less trivial.
PS. is placenta tasty and do humans eat it as well?
No. Sometimes placentas are given a decent burial. They are the spiritual doubles of the newborn.
i’m sure i know someone who’s eaten their (or their offspring’s) placenta, but i’ve got no reports on tastiness. this, however, is a lot more common in my neck of the woods.
There is definitely a placenta-eating subculture in the U.S., although I don’t have a good handle on size – quite possibly <1% of birthgivers. Probably smaller than the placenta-burying subculture. Before the birth of my third child I was attending a class on the https://en.wikipedia.org/wiki/Bradley_method_of_natural_childbirth, and the teacher at one point digressed into a very non-judgmental whatever's-right-for-you-is-right-for-you discussion of whether or not to eat the placenta.
Well, placenta does mean “cake” originally…
They say you can’t eject your cake and eat it too, but apparently that’s not true.
There is a special birthcake day for the shadow bro, if I remember correctly.
Well, my assumption was that a behaviour common for primates can also be common for one species in particular, that is, homo sapiens, and that perhaps western educated industrialized [rich democratic*] people are just weird, as usual.
Also my assumption was that if a monkey spontaneously wants to eat (and even share!) something, it must be tasty.
___
*not sure that “rich” has much to do with what is meant by weird. Same with “democratic”. Also “democratic” sounds like a good thing, but in English it sometimes irritates me, because it totally sounds like a religious concept.
J1M: For all I know there could be midwife apes.
Now I want to know too!
Monkey midwifery has been observed. Here’s a detailed photographic report from Peking University. A quick search returns documented instances with other primates (such as Bonobos), though they seem to be rare enough each is still worth a scientific article.
Perhaps not so rare: all say that observing a daytime birth in the wild is itself a rare event (perhaps worthy of an article), apparently depending on s
My two links above are about the same topic (but those are journal publications. Accessible in sci-hub). The former is also 2014, but the team is from Xi’an (and monkeys are from the mountains to the south of Xi’an), not Peking, and they don’t reference each other.
The idea of cooking and eating the placenta was brought up in our childbirth classes twenty years ago. Some of the moms-to-be seemed to like the idea of reconsuming “their” placentas, but when I pointed out that the placenta was (overwhelmingly) part of the baby, not the mother, most of them suddenly found the idea kind of squicky.
Well, it’s a grey area (who the placenta belongs to).
Genetically it must rather be the baby’s. But: the baby will consume whatever useful was there with the milk, no?
Sure, you could say it’s the mother in the, You are what you eat, sense, but genetically, the placenta is made up of the baby’s cells, which I think is more relevant.
Re:
“Declaration of AI use
We have not used AI-assisted technologies in creating this article.”
Nobody used spell check? Google? Autocomplete?
My understanding is that ChatGPT is not categorically different from those tools, it’s just had better marketing. We really need more clear terms for these things.
My understanding is that ChatGPT is not categorically different from those tools
Then your understanding is wrong. Spell check and autocomplete essentially have huge dictionaries and cunning ways to look them up. Some have ways for the end-user to add their own dictionary entries. Nothing in those tools tries to check whether a proffered entry makes any ‘sense’ as a ‘word’, or checks any corpus to see whether it appears. They don’t embody any attempt at ‘sense’.
Google isn’t merely a dictionary, but OTOH isn’t much more than a counter of occurrences and co-occurrences of words or phrases. It has no notion of ‘syntax’ beyond what it sees in common phrases. It has some small ability to detect morphology.
So if these tools appear ‘smart’, it’s only that they can do a huge number of very dumb tasks far quicker than can our monkey brains.
We really need more clear terms for these things.
“A.I.” has been bumbling along as an umbrella term since the 1970s — and doesn’t include all the technologies you’re wilfully trying to conflate. Wikipedia on ChatGPT offers three technical terms (each for a different aspect of the technology) in its first paragraph. I don’t see a problem of terminology. (Which is not to say I don’t see a problem with ChatGPT in other respects.)
The distinction is that ‘chatbots’ — even back to the venerable ELIZA of 1966 — continually scan the whole of the prior conversation, looking for repeating themes. (Where a ‘theme’ is identified from common semantic collocations, not merely lexemes.) Spellcheckers (for example) don’t look back to see if the word you’re typing now has appeared before in the text/whether it might be a typo for a close homograph. Indeed if you start typing an absurdist script of non-sequiturs, it’ll care only about your spelling. (Hehe case in point: I just mis-spelled ‘typo’ there as ‘type’ — probably muscle memory. Not a peep from the wavy line, even though the sentence then made no sense.)
I’m wondering if your message was generated by some sort of chatbot?, wildly flailing about trying to make connections where there are none.
AntC: Spellcheckers (for example) don’t look back to see if the word you’re typing now has appeared before in the text/whether it might be a typo for a close homograph.
Actually, my understanding is that some current spellcheckers do operate this way.
AntC – OK, so imagine I said “grammar checker” instead of spellchecker and adjust your reaction accordingly.
How is ChatGPT on a different plane of “intelligence” than a grammar checking algorithm or an auto-complete algorithm?
It combs through what it’s been fed and assembles what it thinks are plausibly common groups of letters and words used by humans in similar situations. It doesn’t “know” the “meanings” of the things it outputs, it just knows that humans will be more satisfied by certain combinations than others.
I don’t think that’s an incorrect description, and therefore, I have to say, the insinuation that I’m an artificial lifeform seems unwarranted!
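That “combs through what it’s been fed and assembles plausibly common groups of words” picture can be made concrete with a toy sketch. The following bigram model is vastly simpler than the neural networks behind ChatGPT (a hypothetical classroom example, not anyone’s actual implementation), but it shows text being assembled purely from which words have been observed to follow which — with no notion of meaning anywhere:

```python
import random
from collections import defaultdict

# Tiny training "corpus" -- the model only ever knows word adjacency.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which in the training text.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit words by repeatedly picking an observed successor at random."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = successors.get(words[-1])
        if not options:  # dead end: this word never had a successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))  # fluent-looking word salad, zero comprehension
```

Every output is “fresh, never-seen-before text” in the sense above, yet each individual word pair was lifted from the training data — frequency-matching, not understanding.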
I think the bigger difficulty with these formal disavowals of such tools is that if you’re unprincipled enough of a researcher to use them, you’ll also be unprincipled enough to lie about it.
And with academic publishing/editing/critical review being cut to the bone, who’s going to hold you to account? Use of AI is going to get harder to detect for all except a tiny circle of experts in the field. There’s already enough dubious science, with papers getting withdrawn long after publication in prestigious journals.
“grammar checker” instead of spellchecker
Notice the “checker” word. It’s not like you can throw random text at these tools and they’ll turn it into glittering Nobel-prize winning prose. They’ll just say ‘nope’.
ChatGPT/chatbots _produce text_ — fresh, never-seen-before text — even if it’s a sort of mashing together of other people’s text.
It doesn’t “know” the “meanings” of the things it outputs,
Indeed. And that’s why we’re all being careful to put that anthropomorphic terminology in scare quotes. But then any chatbot can mimic that practice. So I’m not yet convinced you’re not machine-generated. Realistically, folk are using the tools to generate a précis of domain knowledge, then lightly editing it. (Just like schoolkids cribbing a piece of homework.) How much human critical thought went into the production line?
My understanding is that ChatGPT is not categorically different from those tools, it’s just had better marketing. We really need more clear terms for these things.
The generic term is “neural network”, possibly with comments to the effect of “large” and/or “multiply recurrent”; the slightly more specific generic term for what ChatGPT is, that excludes (e.g.) Midjourney, is “large language model”.
But once we’re into language-model territory, there’s probably no good way to distinguish what ChatGPT does from what (e.g.) Google Translate and Grammarly do — even though the former is supposed to be an “AI” tool and the latter two aren’t. At least, not without resorting to the kind of arbitrary numeric borders that show up in legal acts (and/or similarly arbitrary narrow-details-of-protocol borders, which also show up in legal acts sometimes, but not as often, because they tend to be too easy to get around).
(As it happens, Grammarly, in particular, had been explicitly declared to be AI-assisted for ages, back when that was a positive buzzword to have.
It is also sufficiently common in article writing that I wouldn’t be at all surprised if some of their workers did use Grammarly and just didn’t happen to realize it probably counted as AI-assisted.)
Spellcheckers (for example) don’t look back to see if the word you’re typing now has appeared before in the text/whether it might be a typo for a close homograph. Indeed if you start typing an absurdist script of non-sequiturs, it’ll care only about your spelling.
Pretty sure the blue/green wavy underlines in MS Word (commenting on grammar – as opposed to the red ones for words not in its dictionary) have been a thing since the 1990s.
…but I’m not very confident because I can’t even recall properly if they were blue or green. Google says they could have been either in different contexts?
Use of AI is going to get harder to detect for all except a tiny circle of experts in the field
“Or else it won’t, you know.” —Thomas A. Cowan
It’s really just a matter of looking for signs of actual intelligence. Despite the marketing hype and the deliberate misdirection invoked by the name “Artificial Intelligence”, these systems have none whatsoever. Nada. Zilch.
Confusion can only arise in domains where it is usual to accept responses which do not require actual intelligence at all. LLMs can potentially be useful in identifying these; they include things which have, traditionally, been supposed (wrongly) to require intelligence, like composing legal opinions, writing undergraduate essays, or composing papers for Nature explaining how Bayesian methods have solved comparative linguistics.
It will not (in fact) surprise those familiar with these genres to be told that such things can be manufactured automatically without the need for intelligence of any kind.
“Intelligence” aside, how useful are current AI systems, fed on words alone, and no other experiences?
Well, a person blind from birth would not be the best color consultant, but could still give some useful information based on reading alone: they could tell that rooms are better painted light colors than dark, that roses are red and violets are blue, and that primary colors are suitable for children but not for European adults, but mixing them with white makes pastels, which are suitable under some circumstances, etc.
I think current AI is up to this kind of inference, because color is well-described in the written sources on which it is based. It might trip on trick questions which require more than correlating word co-occurrences, say, “what color is intermediate between green and red?”
Autocomplete? For writing a scientific paper? It would drive you mad in half a minute because most of the words you need are too rare for it. I had to use a word processor with autocomplete once, and, well, there are good reasons why 1) MS Word doesn’t have one and 2) the one I used (something for Linux in 2001 or so) never got any noticeable market share.
Uh, what? Peer reviewers aren’t paid and never were. And readers of the published paper…
The purpose of the disclaimer is the same as usual: if you’re found to have violated it, you can’t plead ignorance. Instead, the publisher will “retract” the paper to preempt a scandal.
They’re still blue and still highlight perceived mistakes in grammar or punctuation – and they still routinely fail in long/complex sentences, showing errors where there aren’t any.
Me, I’m glad of them; I don’t take them as a personal insult, and they occasionally alert me to typos that might have slipped by me.
Green wavy underlines are grammatical errors. Blue straight underlines are wrong-word errors, like no for know. Blue wavy underlines are inconsistent formats.
OT: from a Canadian website:
#
in every year since 2009, the second Tuesday in October has been celebrated worldwide as “Ada Lovelace Day”.
#
The monthly security update release of Windows is published on the second Tuesday of each month.
Huh, interesting – I’ve never seen those, probably for lack of opportunities! I’ll need to try if I can trigger them.