Ted Chiang is not only a good writer but a sharp and interesting thinker, a combination that is sadly rare. This LARB interview with Julien Crockett (archived) is well worth reading in full, but I’ll pull out the passage about language:
Your work often explores the way tools mediate our relationship with reality. One such tool is language. You write about language perhaps most popularly in “Story of Your Life” (1998), the basis for the film Arrival (2016), but also in “Understand” (1991), exploring what would happen if we had a medical treatment for increasing intelligence. Receiving the treatment after an accident, the main character grows frustrated by the limits of conventional language:
I’m designing a new language. I’ve reached the limits of conventional languages, and now they frustrate my attempts to progress further. They lack the power to express concepts that I need, and even in their own domain, they’re imprecise and unwieldy. They’re hardly fit for speech, let alone thought. […]
I’ll reevaluate basic logic to determine the suitable atomic components for my language. This language will support a dialect coexpressive with all of mathematics, so that any equation I write will have a linguistic equivalent.
Do you think there could be a “better” language? Or is it just mathematics?
Umberto Eco wrote a book called The Search for the Perfect Language (1994), which is a history of the idea that there exists a perfect language. At one point in history, scholars believed the perfect language was the language that Adam and Eve spoke in the Garden of Eden or the language angels speak. Later on, scholars shifted to the idea that it was possible to construct an artificial language that was perfect, in the sense that it would be completely unambiguous and bear a direct relationship to reality.
Modern linguistics holds that this idea is nonsensical. It’s foundational to our modern conception of language that the relationship between any given word and the concept it is assigned to is arbitrary. But I think that many of us can relate to the desire for a language that expresses exactly what we mean unambiguously. We’ve all tried to convey something and wished there were a word for it, but that’s not a problem of English or French or German—that’s a problem of language itself. And even though I know a perfect language is impossible, the idea continues to fascinate me.
As for the question of whether mathematics could be a better language, the reason that mathematics is useful is precisely what makes it unsuitable as a general language. Mathematics is extremely precise, but it’s limited to a specific domain. Scientists who speak different languages can use the same mathematics, but they still have to rely on their native languages when they publish a paper; they can’t say everything they need to say with equations alone. Language has to support every type of communication that humans engage in, from debates between politicians to pillow talk between lovers. That’s not what mathematics is for. We could be holding this conversation in any human language that we both understand, but we couldn’t hold it in mathematical equations. As soon as you try and modify mathematics so that it can do those things, it ceases to be mathematics.
I grew up in a French household, and I often feel that there are French words and expressions that better capture what I want to express than any English word or expression could.
Eco writes that when European scholars were arguing about what language Adam and Eve spoke, each one typically argued in favor of the language he himself spoke. So Flemish scholars said that Adam and Eve obviously must have spoken Flemish, because Flemish is the most perfect expression of human thought.
For the Adam-and-Eve theory, cf. the first LH post; we discussed Arrival in 2016. And he has a lot of sensible things to say about LLMs and so-called AI; I can’t resist quoting this snippet:
LLMs are not going to develop subjective experience no matter how big they get. It’s like imagining that a printer could actually feel pain because it can print bumper stickers with the words “Baby don’t hurt me” on them. It doesn’t matter if the next version of the printer can print out those stickers faster, or if it can format the text in bold red capital letters instead of small black ones. Those are indicators that you have a more capable printer but not indicators that it is any closer to actually feeling anything.
My favourite word is “ineffable”.
I was kinda predisposed to sniff at this, because I reckon the MacGuffin of “Story of Your Life” is basically higher technobabble (admittedly, definitely “higher”), but everything Chiang says in this interview is extremely sensible. Especially about “AI.”
The corpo-rat pushers of “AI” are not stupid, and although some among them probably do believe their own hype (I suspect that the less intelligent, like Eglon*, may really do so), I am sure that most do not. The hype is to distract people from what they are really up to with this.
* He thinks he’s the protagonist of a Heinlein novel based on an original idea by Ayn Rand.
My favourite word is “ineffable”
You don’t say!
That kind of language already exists: Ithkuil.
I don’t believe that it’s impossible for language to express exact thoughts; it’s just not efficient. It would require too much constant effort for very little perceivable gain. It’s easier to communicate faster and “good enough” and then repeat in other words if you notice that people misunderstood your point.
And people seem to forget that we’re not machines. We didn’t evolve to communicate abstract ideas in a precise manner so much as to convey social cues. Most of our communication throughout history has been non-verbal and about communicating stuff like “I’m trustworthy”, “I’m interested in what you’re saying”, etc.
A lot of communication is also about either simple stuff like “please, pass me the mug” or an excuse to spend time with other people and tell them that you like them and dislike everyone outside your tribe.
The corpo-rat pushers of “AI” are not stupid, and although some among them probably do believe their own hype …
Indeed. Then this is a worry: AI should replace some work of civil servants, Starmer to announce. I’d be pretty certain that a technobabble Sir Humphrey would be less entertaining than Nigel Hawthorne.
I wrote the above before seeing this. Sometimes you can’t even make it up:
Replacing government workers with AI appears to be the endgame of DOGE as well. For whatever nefarious reason, the Trump administration doesn’t seem to be publicizing that aspect of the project. It is of course quite possible that Trump is completely unaware of the details.
@AntC:
Too right about Starmer and AI.
But then, he replaced the head of the Competition and Markets Authority with a man who used to be head of Amazon UK.
There are some pretty suspect advisers in there …
(I am espousing the traditional Russian trope here: the Tsar loves us like his children, but he is poorly advised by bad counsellors.)
Cory Doctorow pointed out that one of the criteria for successful deployment of “AI” is that it shouldn’t matter when it fails, because the victims of the failure have no real options for redress. So it actually makes sense for a regime without any actual interest in governing in the interests of the people. But in the UK, this has to be attributed merely to culpable gullibility.
I don’t believe that it’s impossible for language to express exact thoughts; it’s just not efficient.
There is no such thing as “exact thoughts.” There are, of course, exact statements in mathematics, but we do not think in mathematics (except, of course, for mathematicians, who do not count as “we” because they are not here, they are doing mathematical thinking somewhere else).
I’d say it’s impossible in principle, no matter how sophisticated our thought processes might be.
The world itself does not obligingly divide up into elements which could all be labelled precisely. One of the many implausibilities of the Tractatus, even if one were to accept the picture theory of how language relates to the world, is that nobody can say what an “atomic fact” would actually look like.
Even the purportedly transcendent world of mathematics rests on philosophical foundations of sand, a fact that most mathematicians cope with by never thinking about it. (Much like the rest of us.)
a fact that most mathematicians cope with by never thinking about it.
He who doubts from what he sees
Will ne’er believe, do what you please.
If the Sun and Moon should doubt,
They’d immediately go out.
(except, of course, for mathematicians, who do not count as “we” because they are not here, they are doing mathematical thinking somewhere else).
I’m here! If a long-time-reader-first-time-commenter counts.
There was enough non-technobabble to hold onto in the description of Heptapod B in Story of Your Life that it was one of Sai’s inspirations for our joint conlang UNLWS. Admittedly, some of the more fanciful properties that Sai was keen on, like single strokes that wind up forming part of many different words, aren’t realised in UNLWS as more than poetic devices.
Excellent! I was hoping to lure stray mathematicians out of the woodwork. And I’m glad to hear Chiang’s story was useful.
Kleist comes to mind here:
“Nur weil der Gedanke, um zu erscheinen, wie jene flüchtigen, undarstellbaren, chemischen Stoffe, mit etwas Gröberem, Körperlichen, verbunden sein muß: nur darum bediene ich mich, wenn ich mich dir mitteilen will, und nur darum bedarfst du, um mich zu verstehen, der Rede, Sprache, des Rhythmus, Wohlklangs usw.”
[“Only because thought, in order to appear, must, like those fleeting, unrepresentable chemical substances, be bound to something coarser, something corporeal: only for that reason do I make use of speech, language, rhythm, euphony, and so on when I want to communicate myself to you, and only for that reason do you need them in order to understand me.”]