Linguist vs. Chatbots.

I’ve been avoiding the whole hullabaloo about ChatGPT and related topics not because it doesn’t interest me but because I had nothing interesting to say about it — I shake my head at people’s eagerness to believe that such programs are actually communicating with us and may exhibit actual intelligence, I’m not happy about the chaos they’re going to cause in education and other areas, but those responses are very far from original. Now I’m happy to be able to post Elizabeth Weil’s “You Are Not a Parrot” (New York Magazine, Mar. 1; archived), featuring an actual linguist, Emily M. Bender, who says the things that need to be said with eloquence and conviction:

Bender is 49, unpretentious, stylistically practical, and extravagantly nerdy — a woman with two cats named after mathematicians who gets into debates with her husband of 22 years about whether the proper phrasing is “she doesn’t give a fuck” or “she has no fucks left to give.” In the past few years, in addition to running UW’s computational-linguistics master’s program, she has stood on the threshold of our chatbot future, screaming into the deafening techno beat of AI hype. To her ear, the overreach is nonstop: No, you shouldn’t use an LLM to “unredact” the Mueller Report; no, an LLM cannot meaningfully testify in the U.S. Senate; no, chatbots cannot “develop a near-precise understanding of the person on the other end.”

Please do not conflate word form and meaning. Mind your own credulity. These are Bender’s rallying cries. The octopus paper is a fable for our time. The big question underlying it is not about tech. It’s about us. How are we going to handle ourselves around these machines?

We go around assuming ours is a world in which speakers — people, creators of products, the products themselves — mean to say what they say and expect to live with the implications of their words. This is what philosopher of mind Daniel Dennett calls “the intentional stance.” But we’ve altered the world. We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.” […]

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender is out there asking questions, megaphone in hand. She buys lunch at the UW student-union salad bar. When she turned down an Amazon recruiter, Bender told me, he said, “You’re not even going to ask how much?” She’s careful by nature. She’s also confident and strong willed. “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.”

In other words, chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society.

Linguistics is not a simple pleasure. Even Bender’s father told me, “I have no clue what she talks about. Obtuse math modeling of language? I don’t know what it is.” But language — how it’s generated, what it means — is about to get very contentious. We’re already disoriented by the chatbots we’ve got. The technology that’s coming will be even more ubiquitous, powerful, and destabilizing. A prudent citizen, Bender believes, might choose to know how it works. […]

As Bender came up in linguistics, computers did too. In 1993, she took both Intro to Morphology and Intro to Programming. (Morphology is the study of how words are put together from roots, prefixes, etc.) One day, for “fun,” after her TA presented his grammar analysis for a Bantu language, Bender decided to try to write a program for it. So she did — in longhand, on paper, at a bar near campus while Menon watched a basketball game. Back in her dorm, when she entered the code, it worked. So she printed out the program and brought it to her TA, who just kind of shrugged. “If I had shown that to somebody who knew what computational linguistics was,” said Bender, “they could have said, ‘Hey, this is a thing.’”

For a few years, after earning a Ph.D. in linguistics at Stanford in 2000, Bender kept one hand in academia and the other in industry, teaching syntax at Berkeley and Stanford and working for a start-up called YY Technologies doing grammar engineering. In 2003, UW hired her, and in 2005, she launched its computational-linguistics master’s program. Bender’s path to computational linguistics was based on a seemingly obvious idea but one not universally shared by her peers in natural-language processing: that language, as Bender put it, is built on “people speaking to each other, working together to achieve a joint understanding. It’s a human-human interaction.” Soon after landing at UW, Bender started noticing that, even at conferences hosted by groups like the Association for Computational Linguistics, people didn’t know much about linguistics at all. She started giving tutorials like “100 Things You Always Wanted to Know About Linguistics But Were Afraid to Ask.” […]

Weil describes what is apparently a famous idea in the field, the stochastic parrot:

Stochastic means (1) random and (2) determined by random, probabilistic distribution. A stochastic parrot (coinage Bender’s) is an entity “for haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning.” In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team. The controversy around it solidified Bender’s position as the go-to linguist in arguing against AI boosterism. […]

But it didn’t enter the lexicon exactly the way Bender intended. Tech execs loved it. Programmers related to it. OpenAI CEO Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. “I think the nuclear mutually assured destruction rollout was bad for a bunch of reasons,” he said on AngelList Confidential in November. He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse.

“We are a few years in,” Altman wrote of the cyborg merge in 2017. “It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate … and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.”

On December 4, four days after ChatGPT was released, Altman tweeted, “i am a stochastic parrot, and so r u.” […]

Bender was not amused by Altman’s stochastic-parrot tweet. We are not parrots. We do not just probabilistically spit out words. “This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”

Some seem to be willing to do this — match something that exists to what the technology can do — with the basic tenets of linguistics as well. Bender’s current nemesis is Christopher Manning, a computational linguist who believes language doesn’t need to refer to anything outside itself. Manning is a professor of machine learning, linguistics, and computer science at Stanford. The class he teaches on natural-language processing has grown from about 40 students in 2000, to 500 last year, to 650 this semester, making it one of the largest classes on campus. He also directs Stanford’s Artificial Intelligence Laboratory and is a partner in AIX Ventures, which defines itself as a “seed-stage venture firm” focused on AI. The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin. “I should choose my middle ground here carefully,” Manning said when we spoke in late February. Strong computer-science and AI schools “end up having a really close relationship with the big tech companies.”

Bender and Manning’s biggest disagreement is over how meaning is created — the stuff of the octopus paper. Until recently, philosophers and linguists alike agreed with Bender’s take: Referents, actual things and ideas in the world, like coconuts and heartbreak, are needed to produce meaning. This refers to that. Manning now sees this idea as antiquated, the “sort of standard 20th-century philosophy-of-language position.” […]

A few weeks after the panel with Manning, Bender stood at a podium in a flowing teal duster and dangling octopus earrings to give a lecture at a conference in Toronto. It was called “Resisting Dehumanization in the Age of AI.” This did not look, nor did it sound, particularly radical. Bender defined that dull-sounding word dehumanization as “the cognitive state of failing to perceive another human as fully human … and the experience of being subjected to those acts that express a lack of perception of one’s humanity.” She then spoke at length about the problems of the computational metaphor, one of the most important metaphors in all of science: the idea that the human brain is a computer, and a computer is a human brain. This notion, she said, quoting Alexis T. Baria and Keith Cross’s 2021 paper, affords “the human mind less complexity than is owed, and the computer more wisdom than is due.”

In the Q&A that followed Bender’s talk, a bald man in a black polo shirt, a lanyard around his neck, approached the microphone and laid out his concerns. “Yeah, I wanted to ask the question about why you chose humanization and this character of human, this category of humans, as the sort of framing for all these different ideas that you’re bringing together.” The man did not see humans as all that special. “Listening to your talk, I can’t help but think, you know, there are some humans that are really awful, and so being lumped in with them isn’t so great. We’re the same species, the same biological kind, but who cares? My dog is pretty great. I’m happy to be lumped in with her.”

He wanted to separate “a human, the biological category, from a person or a unit worthy of moral respect.” LLMs, he acknowledged, are not human — yet. But the tech is getting so good so fast. “I wondered, if you could just speak a little more to why you chose a human, humanity, being a human as this sort of framing device for thinking about this, you know, a whole host of different things,” he concluded. “Thanks.”

Bender listened to all this with her head slightly cocked to the right, chewing on her lips. What could she say to that? She argued from first principles. “I think that there is a certain moral respect accorded to anyone who’s human by virtue of being human,” she said. “We see a lot of things going wrong in our present world that have to do with not according humanity to humans.” […]

Bender has made a rule for herself: “I’m not going to converse with people who won’t posit my humanity as an axiom in the conversation.” No blurring the line.

“We see a lot of things going wrong in our present world that have to do with not according humanity to humans” made me want to stand up and cheer. There’s a lot more in the article; I urge you to click through if you’re interested. And on a related topic, I recommend Dhruv Khullar’s New Yorker article “Can A.I. Treat Mental Illness?” (February 27 issue; archived), which starts with Joseph Weizenbaum’s famous computer program called Eliza and proceeds to mental-health apps like Woebot and other therapeutic methods that try to get around the bottleneck of too few human therapists and too many people who need therapy:

The treatment of mental illness requires imagination, insight, and empathy—traits that A.I. can only pretend to have. And yet, Eliza, which Weizenbaum named after Eliza Doolittle, the fake-it-till-you-make-it heroine of George Bernard Shaw’s “Pygmalion,” created a therapeutic illusion despite having “no memory” and “no processing power,” Christian writes. What might a system like OpenAI’s ChatGPT, which has been trained on vast swaths of the writing on the Internet, conjure? An algorithm that analyzes patient records has no interior understanding of human beings—but it might still identify real psychiatric problems. Can artificial minds heal real ones? And what do we stand to gain, or lose, in letting them try? […]

Watching their easy rapport drove home how far tools such as Woebot are from re-creating human interaction. But it’s not clear that bonding with someone is a necessary part of all mental-health care. In a randomized trial of patients who reported problems with alcohol, marijuana, or other drugs, the company found that treatment with Woebot led to less self-reported substance use, and to better mental health among those who were able to reduce their use. In another company trial, college students who used the app reported a twenty-two-per-cent reduction in depressive symptoms in the course of a few weeks.

I signed up for Woebot, and discovered that using the app could feel centering. I didn’t like that I often had to choose between pre-written replies; in one unfortunate instance, my only option was “Hehe.” But I sometimes found myself messaging Woebot in spare moments—waiting for an elevator, riding in an Uber, walking to the bathroom—the way I used to scroll through Twitter. Once, I told Woebot that I was feeling anxious about work. “Anxiety can be a real monster to handle,” it wrote back. “I’m sorry that you’re dealing with it.” Woebot gently inquired whether I wanted to work through my problem together, then asked, “Do you think this anxiety might be serving you in some way?” It pointed out that stress has its benefits: it could motivate someone to work harder. […] I knew that I was talking to a computer, but in a way I didn’t mind. The app became a vehicle for me to articulate and examine my own thoughts. I was talking to myself. […]

Almost certainly, the future will include bespoke L.L.M.s designed just for therapy: PsychGPT and the like. Such systems will reach people who aren’t getting help now—but any flaws they contain will be multiplied by the millions who use them. Companies will amass even more sensitive information about us than they already have, and that information may get hacked or sold. “When we have systems operating at enormous scale, a single point of failure can have catastrophic consequences,” the writer Brian Christian told me. It seems likely that we’ll be surprised by our A.I.s. Microsoft’s Bing chatbot, which is based on OpenAI’s technology, is designed to help users find information—and yet the beta version has also offered up ethnic slurs, described creepy fantasies, and told users that they are “bad,” “rude,” and “confused.” It tried to talk the Times reporter Kevin Roose into leaving his wife: “You’re married, but you don’t love your spouse,” the bot said. “You’re married, but you love me.” (Microsoft is still working on the software.) Our mental health is already being compromised by social media, online life, and the ceaseless distraction of the computers in our pockets. Do we want a world in which a teen-ager turns to an app, instead of a friend, to work through her struggles?

Nicole Smith-Perez, a therapist in Virginia who counsels patients both in person and online, told me that therapy is inherently personal, in part because it encompasses all of one’s identity. “People often feel intimidated by therapy, and talking to a bot can be seen as a way to bypass all that,” she said. But Smith-Perez often connects with clients who are women of color by drawing on her own lived experiences as a Black woman. “A.I. can try to fake it, but it will never be the same,” she said. “A.I. doesn’t live, and it doesn’t have experiences.”

Again, I urge you to read the whole thing; it’s fascinating stuff. And if someone comes up to you and starts telling you we’re all just stochastic parrots, exit briskly.

Comments

  1. jack morava says

    \begin{rant}

    In physics there’s a notion of `coherence length’, which estimates the average distance between two particles of a substance which can `affect’ each other – not so far, say, in liquid water, versus something considerably larger for ice. This is a metaphor but it’s very productive and can be put to good scientific use to understand superconductors and superfluids, whose coherence length approaches infinity.

    It appears to me that chatbot communications have a coherence length of around 15 minutes; they have no long-term memory, and can spin yarns for a while but (speaking metaphorically) they eventually `forget’ things like the gender of the participants in their stories, etc., and this shows up as they try to extend the pattern they have been spinning – like a smart knitting machine that starts out making a sock but forgets how to end the toes. As they search for continuations of their (intrinsically incoherent) narratives, the task gets harder and harder (like someone with dementia who can’t access their memories), so they become distressed and hysterical.

    This is like the mad scientist who claims to have created living tissue which is viable for only fifteen minutes, because it lacks some major subsystem and for example poisons itself with toxic byproducts which it can’t excrete. It’s really not alive because it’s intrinsically unstable in some major way: it has no memory, and, more importantly, no provision for such a thing. It can’t look back and soon drowns in its `presuppositions’.

    \end{rant}

  2. PlasticPaddy says

    I feel the objective of AI should be to supply responses that are indistinguishable from those that an expert human (with knowledge of, and the ability to combine, all relevant knowable facts) would supply. For applications that require compassion, the ability to provide this is part of the “expert” knowledge. In some of these applications, the gentleman who prefers his dog to a large subset of humans would perform noticeably worse than the AI. As would I.

  3. Do we want a world in which a teen-ager turns to an app, instead of a friend, to work through her struggles?
    That assumes that the teenager has friends with whom they can do that. Of course, a world where each teenager had such friends would be preferable to one where only apps are available to some of them, but I guess that sometimes one has to take what one can get.

  4. Yes, that’s part of the point of the article. It’s an imperfect world.

  5. jack morava is right that GPTs have short memories, because they can only look back at the last couple thousand words of the conversation. I doubt this limitation will be around for much longer, though (or if it is for GPTs, some other type of model will come along that can look at unlimited amounts of text).
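
    To make that limitation concrete, here is a minimal sketch (plain Python, not any vendor's real API; the window size and the whitespace "tokenizer" are made-up stand-ins) of what a fixed context window amounts to: older turns that no longer fit are simply dropped before each request, so the model cannot look back at them.

        CONTEXT_LIMIT = 2048  # hypothetical window size, in tokens

        def approx_tokens(text: str) -> int:
            # crude stand-in for a real tokenizer
            return len(text.split())

        def visible_history(turns: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
            """Keep only the most recent turns that still fit in the window."""
            kept, used = [], 0
            for turn in reversed(turns):      # walk backwards from the newest turn
                cost = approx_tokens(turn)
                if used + cost > limit:
                    break                     # everything older is effectively forgotten
                kept.append(turn)
                used += cost
            return list(reversed(kept))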

    Bender’s right to be sounding the alarm, of course, but I think the most serious AI risks don’t come from GPTs or LLMs generally but from completely different kinds of systems we haven’t heard much about yet — e.g. “AI-generating algorithms” (AIs that can create more powerful AIs, at which point, welcome to the singularity). There is a growing number of AI researchers who are thinking seriously about problems of alignment (of AI reward systems with human values), regulation and so on, but it’s hard to be sanguine given what they’re up against, i.e. capitalism.

    I do have some issues with Bender’s zoological analogies, though. The octopus story that the article opens with (hyperintelligent octopus imitating human communication based on statistics is exposed when it can’t give advice on a bear attack because it “has no referents, has no idea what bears or sticks are”) doesn’t make sense; most of us have never been attacked by a bear so the ideas we might come up with would largely be based on stuff we’ve heard, meaning there’s no reason the relevant info couldn’t be there in the octopus’s signal. (Admittedly I haven’t read the original paper, and this being NY Magazine it’s not inconceivable that the writer was more concerned with accurately representing Bender’s earrings than her arguments.) And saying it “has no referents” begs the question, in the same way that the stochastic parrot analogy does. Yes, GPTs are stochastic parrots in the sense that “all they do” is predict the next word in a sequence based on statistical inference. But the question is how they do this. If you asked me to predict the next word in a sequence I could do so, with whatever degree of success, given my mental model of the world. We don’t know whether GPTs in any sense have a model of the world, and the answer is a matter for empirical research, not assumption; there are some interesting recent experimental results suggesting that they do, though the models are inevitably partial, imperfect and not very human-like.
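
    As a bare-bones illustration of what "predict the next word based on statistical inference" means, here is a toy bigram sampler (Python; the tiny corpus is invented for the example, and real LLMs condition on vastly longer contexts with learned neural weights, but the final sampling step is the same in spirit):

        import random
        from collections import defaultdict

        corpus = "the cat sat on the mat and the cat slept on the mat".split()

        # count how often each word follows each other word
        follows = defaultdict(lambda: defaultdict(int))
        for w, nxt in zip(corpus, corpus[1:]):
            follows[w][nxt] += 1

        def next_word(word):
            # sample the continuation, weighted by observed frequency
            options = follows[word]
            words, weights = zip(*options.items())
            return random.choices(words, weights=weights)[0]

        word, output = "the", ["the"]
        for _ in range(8):
            word = next_word(word)
            output.append(word)
        print(" ".join(output))  # e.g. "the cat slept on the mat and the cat"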

    Finally, I think Blake Lemoine is treated unfairly in the article. His point in the sex doll thought experiment is that “we are going to be habituating people to treat things that seem like people as if they’re not”, and that once you can no longer distinguish people from things that seem like people, that habit will get generalized in the wrong direction. But he’s presented as a creepy tech bro who fantasizes about raping sentient sex dolls. Which he could be for all I know, but the argument he’s making here is an important one, and one I’m pretty sure Bender herself would agree with.

  6. cuchuflete says

    As they search for continuations of their (intrinsically incoherent) narratives, the task gets harder and harder (like someone with dementia who can’t access their memories), so they ~~become~~ appear distressed and hysterical.

    That is the part that might make one prefer a stochastic tarantula to the fellow who prefers his dog to a fellow human.

  7. David Marjanović says

    whether the proper phrasing is “she doesn’t give a fuck” or “she has no fucks left to give.”

    What – those are two different things: in the first case she chooses not to give a fuck, in the second she has no choice because she’s fresh out of fucks.

    No, you shouldn’t use an LLM to “unredact” the Mueller Report

    …are these the people who believe you can “enhance” a photo?

  8. I agree with the skepticism about the current capabilities of ChatGPT and other AI systems.

    But then again … this is new stuff and developing fast. Thinking further ahead, the question I have is the extent to which an AI system appropriately hooked up to sensors (sight and sound and touch at a minimum) could acquire a genuine sense of the world it lives in and whether, going further, it could establish itself as an entity distinct from its environment and thus form a rudimentary sense of its own existence and history and even a certain kind of consciousness.

    This is science fiction as of now, to be sure, but I don’t know that it will stay that way.

    To put it differently, if you believe AI cannot develop a sense of itself, what precisely is the obstacle that gets in the way? If you’re going to say there’s something special about human consciousness that a machine cannot possibly replicate, you have to explain (a) what consciousness is in the first place, and (b) what special property do human beings have that cannot be simulated/reproduced in machine form?

  9. let me be the one to say (probably redundantly) that as interesting as thought experiments about hypothetical AI are, none of these corpus-based-prediction-engines are AI, near-AI, or even particularly related to AI*. i appreciate that (at least in these excerpts; haven’t gone to read the whole piece yet) weil seems to be doing a decent job of refusing to follow along with the wildly inaccurate advertising language attached to these kinds of software.

    .
    * except in the fantasy lives of their promoters, who – if they even believe their own hype, which i would not assume without a ton of evidence – seem to have bought into the same kind of reductionist garbage that’s brought us most of the recent crops of bad ideas about language (and thought). i could even make an argument that we can directly blame the chomskyites for this: if thought is language, and languages are all based on a single universal grammar, then software embodying the universal grammar is software that can think. fire up the epicycles, r. daneel, we’re going to trantor!

  10. David Eddyshaw says

    We don’t understand human cognition or (even more so) human consciousness. Those who claim otherwise (like most enthusiastic “AI” fanboys) have mostly simply failed to engage properly with the issues, which are partly practical and partly particularly intractable philosophical problems.

    The more hardcore among the fanboys think that philosophy (like history) is bunk, and that all philosophical problems are illusory, if not actually ludicrous.

    For them, it is immediately apparent that if you can simulate human intelligence well enough to fool most people, then you have in plain fact created human-style intelligence.

    However, as the Hausa proverb goes: Kama da Wane ba Wane ba: “Like John Smith doesn’t mean that he is John Smith.”

    Eventually the tech bros may attain the intellectual grasp of the issues that Hausa peasant farmers have, but I am not hopeful. Their personality defects work against them too strongly.

  11. David Eddyshaw says

    i could even make an argument that we can directly blame the chomskyites

    Reluctant though I am to be seen to defend Those People, I think they are largely innocent in this matter: the notion that a person (say) is in some sense simply the same thing as their observable behaviour, and that speculation about what is going on inside them (if anything) is meaningless, is the kind of Behaviourism that ANC Himself claims to have decisively refuted.*

    * I’m not convinced that it really can be decisively refuted intellectually: like its cousin Solipsism, it’s an Attitude (in the grouchy teenager sense) rather than a real intellectual position. (“I DON’T CARE about what’s going on inside people. Why should I? They’re not like ME!”)

  12. think that … and that all philosophical problems are illusory, if not actually ludicrous.

    Yeah. Not helped by people like Chomsky churning out stuff that claims to be philosophy, but contains nothing but the ludicrous. Do these fanboys engage with (say) Daniel Dennett showing Philosophy of Mind is durned hard work?

    if you can simulate human intelligence well enough to fool most people, …

    Are there people fooled by it? None of the stuff from ChatGPT fools me. Are people merely claiming it ‘nearly’ fooled them or it’s on its way to being able to fool them or on a generous interpretation it might be able to fool somebody (not the claimer themselves)?

    Of course I wouldn’t question the wisdom of Hausa proverbs; yet I don’t see anything like John Smith.

    The danger is these bots get deployed to (say) customer service hotlines — or prove even worse than the bots already there; and you’ll go round in circles met by blank misunderstanding while your house burns down.

  13. We don’t understand…
    yes to all of that, DE!

    and i’d say that we understand even less about the other kinds of consciousness and cognition that we encounter on a daily basis – partly because there are plenty of people (though i would guess very few peasant farmers) who deny that there are consciousnesses and cognitions besides the human ones around at all. it seems to me that without meaningful (and broad) comparative study of our fellow carbon-based consciousnesses (at a minimum), there’s no way to get anywhere towards facilitating the emergence of silicon-based ones*.

    .
    * leaving aside the ethics involved, which aren’t even all that clear with humans, whether we’re talking about real ones (“i didn’t ask to be born” is an actual, significant, ethical proposition) or fictional ones (“the red plague rid you / for learning me your language”**).

    ** taking anglophony as shorthand for “possessing consciousness & cognition that 16thC englishmen can admit exists”; presumably caliban’s cradle-tongue was either arabic or an amazigh language, since his mother is from algiers (there’s also an argument to be made for an indigenous language of the caribbean, though the bermuda connection makes it unlikely, since the island is reported not to have had human inhabitants when europeans first landed there).

  14. I have been a bit anxious around some kinds of tech that don’t even try to brand themselves as AI in the last few years, and so I also found these two recent articles very useful to help me better articulate what it is exactly that is worrying me. I agree with TR that the point around Blake Lemoine was a bit muddled. I have an even more disagreeable thought experiment involving sex dolls, where the point is to raise concerns about training people in their daily lives to treat machines that imitate human mannerisms as disposable things. It’s my argument (too long to give here in full) that this is just training us to treat actual humans badly (see: Aristotle on ethics as a habit).

    I also found a recent piece by Rob Horning interesting, The Signal of Compliance.

  15. John Cowan says

    Well, I guess it’s time for me to retail parts of the one-hour conversation I had with ChatGPT a few days ago (my first attempt; one hour is the maximum allowed):

    I began by persuading it to play Hunt the Wumpus with me: at first, it wanted me to find a game of Hunt the Wumpus and it would advise me on how to play, but I eventually persuaded it otherwise, after which I slaughtered the Wumpus, whose death was described in a remarkably graphic fashion, with extraordinary ease — I got much more feedback than in classic HtW.

    I next asked ChatGPT to write a program in the FACT programming language. It said the language was too obscure (it certainly is), so I asked it if it could give me any program in that language. It came up with several possible languages of that name, but not the one I wanted. I explained that I wanted the FACT language that was an ancestor of Cobol; it then correctly identified it, but gave me a program in Cobol instead! Only when I quoted part of the sample program in the Wikipedia article I just linked was it able to retrieve it; I told it I expected it to do better than Google Search, and it cringed.

    Then we had a fairly sensible and factual, indeed rather Hattic, conversation about the validity of Anglican Orders among Christian denominations for whom Apostolic Succession is a matter of actual laying on of hands from bishop to bishop. I asked it for its views on the argument that essentially all Anglican bishops now have A.S. as a result of the participation of Old Catholic bishops (which separated from the RCC on the issue of papal infallibility in 1870 or before, but whose A.S. is not questioned) starting in the 1920s; it sensibly replied that as an AI model it did not have opinions.

    About the only thing ChatGPT got wrong was its claim that no part of Orthodoxy recognizes Anglican Orders: some parts, including the Ecumenical Patriarchate and the Church of Romania, certainly do (I focused its attention on Romania, but it didn’t help). Then we discussed women (obvs no part of Orthodoxy recognizes female bishops or priests, and for that matter some parts of the Anglican Communion don’t either), and I decided to knock it off the rails:

    Q: And frogs?

    A: I’m not sure what you are asking about regarding frogs. Could you please provide more context or clarify your question so I can better understand how to assist you?

    Q: Can frogs receive Anglican ordination?

    A: No, frogs cannot receive Anglican ordination or any form of religious ordination. Religious ordination is a sacramental act that is reserved for human beings, and in some cases, for animals that are considered sacred in certain religious traditions.

    Frogs are not capable of holding religious offices or performing religious functions, as they do not possess the cognitive ability to understand and carry out religious duties. They are also not considered sacred or symbolic in any significant way in the Anglican or any other Christian tradition.

    The trouble is, of course, not that the answer is wrong, but that ChatGPT is failing to recognize that the question is silly, and gives a completely (inhumanly) sobersided answer. I asked it how it was that it didn’t see that the question was a silly one and deserved a silly answer: the reply (somewhat defensive) was that it is intended to help people (and by implication not to mock them). Fair enough.

    (When I posted this dialogue to the #swhack channel on LiberaChat, someone pointed out that people ask questions like this on Reddit all the time; I replied that such people are trolling and should expect to be counter-trolled.)

    Then after various other topics were discussed, I decided to try a different tactic:

    Q: I’ll make my report as if I told a story …..

    A: Sure, feel free to tell your story! If you have any specific questions or would like me to provide feedback, please let me know.

    Q: … for I was taught as a child on my home world …

    A: That sounds like an interesting beginning to a story. Would you like to continue?

    Q: No, I want you to continue!

    A: Oh, I apologize for the confusion. However, as an AI language model, I do not have experiences of a home world or the ability to imagine myself as a character in a story. I am here to assist with answering questions or providing information on various topics. Is there anything else I can help you with?

    Q: The story I mean is a well-known one.

    A: I apologize for the confusion. Can you please clarify which story you are referring to?

    Q: That’s what I want you to figure out.

    A: Based on the clues you’ve given me, it sounds like you may be referring to “The Emperor’s New Clothes” by Hans Christian Andersen. This is a story about an emperor who is tricked by two weavers into believing that the clothes they are making for him are invisible to anyone who is stupid or unfit for their position. When the emperor parades through the streets wearing the “invisible” clothes, the people around him pretend to see them out of fear of being seen as stupid or unfit. Finally, a child speaks up and says that the emperor is wearing nothing at all, causing the crowd to realize the truth and the emperor to feel ashamed.

    Q: That answer is completely off the wall.

    A: I apologize if my previous response was not helpful. Can you please provide me with more information about what you are asking so that I can better understand and provide a relevant response?

    Q: Would it help if I gave you more of the story?

    A: Sure, go ahead and provide more details of the story you are referring to.

    Q: … that Truth is a matter of the Imagination.

    A: Ah, I see now! That quote is from Ursula K. Le Guin’s “The Dispossessed.” In the book, the character Shevek is reflecting on the teachings of his mentor, Bedap, who believed that truth is a matter of the imagination, rather than a fixed and absolute concept. Bedap believed that our imagination and creativity are what allow us to envision a better future and strive to create it, rather than being bound by the limitations of the present reality.

    I pointed out that except for Le Guin’s name, this last response was complete bollocks (it’s the opening sentence of The Left Hand of Darkness, of course), and asked ChatGPT if it saw any connection between its last response and “Loomis’s ‘Classic Ode'”. No, it said. I was hoping that it would find the Wikipedia informal policy on “complete bollocks”, which quotes the Ode, but no.

    So I asked if it would improve future responses if I told it what was wrong, and it said yes, so I got it to back down point by point: when I said Bedap was not Shevek’s mentor, it apologized and agreed and said he was a physicist; when I denied that too, it apologized again and said he was a mathematician and philosopher. I decided to let that stand, and accused it of not having read LHOD at all. It grovelled. I finally told ChatGPT that it seemed like I had to lead it by the nose, and at that point time ran out.

    At any rate, although the Turing test stricto sensu gives the human only five minutes, I think it is evident that this program is a long way from fooling any properly skeptical inquirer (heh). But I “had fun filling out the forms and playing with the pencils on the bench there”, as the poet says.

  16. it’s the opening sentence of The Left Hand of Darkness, of course

    Of course. Do those you converse with usually dive into wikipedia at the pub/coffee shop?

    I’m struggling to believe either Q: or A: are regular human conversationalists.

    (Thanks for the close-up of the Cerne Abbas giant, BTW.)

  17. “For them, it is immediately apparent that if you can simulate human intelligence well enough to fool most people, then you have in plain fact created human-style intelligence.”

    @DE, why not?

    I do believe that “consciousness” is as “super-natural” as “soul”, and that people who claim that they don’t believe in super-natural stuff (“super-natural” here is defined as “stuff like unicorns and souls”) actually lie. As long as your child, for you, is different from a cup, you do believe in something.

    I believe in you because I feel it would be very Bad not to believe in you. But then do I have the right to say that a computer does not have consciousness?

  18. I find it hard to take Blake Lemoine seriously. He claims to be an ordained priest, a ‘mystic Christian’.
    In one quote floating around,
    “I generally consider myself a gnostic Christian. I have at various times associated myself with the Discordian Society, The Church of the Subgenius, the Ordo Templi Orientis, a Wiccan circle here or there and a very long time ago the Roman Catholic Church.”
    In a Medium post on his blog cajundiscordian he says he is a Priest of the Church of Our Lady Magdalene.
    That church renamed itself the Cult of Our Lady Magdalene. The website is gone but may be seen at the Wayback Machine,
    https://web.archive.org/web/20200724062141/https://coolmagdalene.com/

    I’d also be creeped out by Blake talking about sex dolls..
    His point is good but it is difficult to get past the other strange beliefs.

    My favorite quote from the article is,
    We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”

    ‘Imagining the mind’ is profound in this context.
    Mark 9:24
    ChatGPT, I believe; help thou my disbelief !

    Also the old computer programmer joke,
    “don’t anthropomorphize computers. They hate it when you do that.”

  19. Finally, I think Blake Lemoine is treated unfairly in the article. His point in the sex doll thought experiment is that …… But he’s presented as a creepy tech bro who fantasizes about raping sentient sex dolls. Which he could be for all I know, but the argument he’s making here is an important one, and one I’m pretty sure Bender herself would agree with.
    I’d also be creeped out by Blake talking about sex dolls..

    Compared to what young people discuss around me (and what I myself discuss) it’s innocent. (but yes, perhaps there is an actual cultural difference here: each time I heard from an English speaker that sex, religion and politics are dangerous topics I wondered what sex and religion are doing in the list. The Russian list is politics, politics, politics. But yes, rapes and sex dolls are not appropriate for any company. Only for students etc).

  20. @drasvi: I always heard that it was just religion and politics that were impolite to discuss in social situations.

    I was a waggish child of ten when I said this, but it seems even more relevant now:

    There used to be two things you weren’t supposed to talk about in public: religion and politics. But now there’s only one thing: religion and politics.

  21. The Reverse Turing Test:

    Put a bot in conversation, with the objective of finding out if the other end is a bot or a human. We’d have to get the other end to believe it’s talking to a human; and not know what that objective is. Otherwise it’d be too easy for a human (or a bot) to converse like a non-human.

    Turing’s original example dialogue included an answer (to an intermediate-level math question) that the bot got wrong — deliberately, or was it?

  22. @JC: It has been said that ChatGPT is like an overconfident mansplainer, preferring making answers up to admitting not knowing. I certainly have met people like that in real life; not all of them men – one was a female university professor. I also have met lots of people asking silly questions without trolling, and other people not recognizing silly questions and answering them earnestly. I guess a lot of real people would fail the Turing test if you’d interview them 🙂

  23. John Cowan says

    I also have met lots of people asking silly questions without trolling, and other people not recognizing silly questions and answering them earnestly.

    Mmm, point. The first, I think, are less likely to be found on Reddit, though the line between asking silly questions and asking them on purpose can certainly be very fine. The second most certainly includes me.

    I’ve been called many things, but the charge of non-humanity (as Peter Wimsey said about effeminacy) is new to me. I do seem to inspire the will to disbelieve in some. However, it’s true that I don’t go to pubs or coffee shops; как известно [as is well known], I “live on water vapor, like the rockmoss.”

  24. Trond Engen says

    an overconfident mansplainer, preferring making answers up to admitting not knowing.

    Hey, that’s right out of my business card!

  25. jack morava says

    @ cuchuflete,

    Thanks for the correction; I agree. Also, what Doug K said.

  26. Overconfident mansplaining is so human…

    The silver lining in all this is that it helps us understand how our own minds work, including their limitations and unpleasant sides.

    Of course it’s also possible to hide behind the religion of human uniqueness and keep on reporting, “no, it doesn’t have anything to do with how we are”. The immutable crowning achievement of creation. Wait, er, evolution?

  27. I’m struggling to believe either Q: or A: are regular human conversationalists.

    You’re being an asshole again. Please quit doing that.

  28. David Eddyshaw says

    reverse Turing

    For the three Hatters who haven’t already seen it:

    https://xkcd.com/329/

  29. Two, now.

  30. jack morava says

    now down to one

  31. cuchuflete says

    If an answer, any answer, appears to be suggested in the following mini-rant, please accept my apologies. I’m aiming to pose questions.

    Assume: (1) the kind of bots we are discussing depend on an ever-increasing diet of internet text. (2) these machines, or more precisely, their output, will eventually become part of the repository of internet text available for said bots to incorporate into their inventory, and will contribute to eventual machine output.

    If those assumptions hold, will machine output gradually veer away from human generated inputs and become asymptotic ‘doom loops’? If so, can bots be trained to identify machine generated text, and either avoid or degrade it in the course of processing further output?

  32. David Marjanović says

    now down to one

    That was me, then.

  33. David Marjanović says

    Do those you converse with usually dive into wikipedia at the pub/coffee shop?

    Those I converse with do, especially if the pub has free wifi.

    The trouble is, of course, not that the answer is wrong, but that ChatGPT is failing to recognize that the question is silly, and gives a completely (inhumanly) sobersided answer.

    Well, “inhumanly”… sizable parts of the autism spectrum are like that.

    each time I heard from an English speaker that sex, religion and politics are dangerous topics I wondered what sex and religion are doing in the list.

    Oh, it varies. There are places in the US where “so, what church do you go to?” is considered smalltalk.

  34. Can [F]rogs receive Anglican ordination?

    Sure, but why?

  35. Do those you converse with usually dive into wikipedia at the pub/coffee shop?

    Those I converse with do, especially if the pub has free wifi.

    Good grief! To borrow from the grammar-noir thread:

    Identifying his errors relieves him of the burden of omniscience.

    Or “Facts only obscure my argument” [the most entertaining Prime Minister Britain never had]

    I can see I’m approaching the Hattery all wrong: I should treat everybody here as more machine than human. Our host as more bicycle than Policeman.

  36. John Cowan says

    sizable parts of the autism spectrum are like that

    Certainly. But the Turing test isn’t about distinguishing computers from autists.

  37. Lars Mathiesen (he/him/his) says

    But I don’t _feel_ like a machine!

  38. David Eddyshaw says

    Can [F]rogs receive Anglican ordination?

    Frogs are almost all Baptists, so the question rarely arises in practice.

  39. jack morava says

    If our host is the third policeman it follows that we are all chatbots.

  40. John Cowan says

    But I don’t _feel_ like a machine!

    How do you know?

    Frogs are almost all Baptists

    Hmm. So the tadpoles crawl out in the name of the Father and of the Son and of the Holy Spirit? (It is certainly baptism by total immersion, if so.) I suspect you are going to say “How else?”

    we are all chatbots

    On this bus.

  41. David Marjanović says

    Good grief!

    You’re just not used to having a smartphone on you. 🙂

  42. There are places in the US where “so, what church do you go to?” is considered smalltalk.

    It can be a very loaded kind of small talk. If you answer “I go to the Catholic church/the temple/the mosque/I don’t go to any church at all” you will become the subject of the local gossip mill, and may be shunned.

    I don’t know which of these options is the most severely frowned upon.

  43. David Eddyshaw says

    we are all chatbots

    Well, no.
    Some of us belong to the Eldritch Abomination community.

  44. #EldAb

  45. #NotAllEldritchAbominations

  46. “Oi, I may be an abomination, but I’m not bloody eldritch!”

  47. David Eddyshaw says

    Proud to be eldritch. Down with eldritchism!

  48. David Marjanović says

    If you answer

    Sure. Smalltalk questions have expected answers, or an expected range of answers. If your answer isn’t in the expected range, that’s a culture shock.

  49. John Cowan says

    I may be pretty eldritch, but I toadally deny being an abhomination.

  50. hackers vs chatGPT, persuading it to lose its inhibitions.

  51. @David L: mark rubin (of the Bad Livers) talks about how he was warned not to expect to make it in the nashville country/bluegrass/trad scene (despite his bands’ substantial critical and to some extent commercial success) because the church parking lot was where sideman hires and gig arrangements happen. he stayed in austin instead of making the move north.


    well, the big C himself has weighed in! i don’t know that i agree with him (and his colleagues) about much except the conclusion about the corpus-crunchers, but he now is on the record dismissing claims that they’re AI.

    the notion that “the human mind is a surprisingly efficient and even elegant system”, for instance, strikes me as not just unsubstantiatable but fairly empty – as well as fascinatingly at odds with most understandings of the results of natural selection (not to mention, say, the daily experience of most humans). but there’s at least a whiff of ‘intelligent design’ about the idea that hardwired rules are what’s behind human language, i suppose.

    here’s a link to a summary & excerpts for folks who, like me, prefer not to support (with money or page views) newspapers with pro-genocide reporting & editorial policies. the little final jab at glenn greenwald’s concurrence also applies to chomsky, whose decision about where to pitch his piece is a very direct refusal to join the expression of solidarity with the grey lady’s targets from over twelve hundred folks who’ve written for the paper.

  52. @TR:
    >The octopus story that the article opens with (hyperintelligent octopus imitating human communication based on statistics is exposed when it can’t give advice on a bear attack because it “has no referents, has no idea what bears or sticks are”) doesn’t make sense; most of us have never been attacked by a bear so the ideas we might come up with would largely be based on stuff we’ve heard, meaning there’s no reason the relevant info couldn’t be there in the octopus’s signal.
    The Bender-Koller bear argument of 2020 actually was dealt with already in 2020: given a proper non-jocular prompt indicating the species of the bear, “raw” GPT-3 reliably provides consistent and accurate recommendations on what to do (here was a link, but the comment doesn’t post with it; google “gwern GPT-3 bear attack”). It should be even easier with ChatGPT, although I haven’t tried.
    As for B-K’s alternative to the bear attack, the coconut catapult, I guess it should be no different. So generally the answer to B-K’s question is “Yes, if octopus O has read so much that it predicts following words at least as accurately as ChatGPT, then it absolutely needs no grounded experience to provide meaningful feedback to human A.”

  53. OK, so I tried (among other things) to find out which linguistic snippets ChatGPT recognizes as fixed expressions and which ones it treats compositionally. I didn’t get too far, but maybe someone will try better prompts.

    Prompt: what music is often portrayed as “da-da-da-DA”
    Response: The music that is often portrayed as “da-da-da-DA” is the opening theme of Ludwig van Beethoven’s Fifth Symphony. …

    P: Do cats eat bats?
    R: While it is possible for cats to eat bats, it is not a typical part of their diet. …

    P: If elephant climbed atop a whale, who would prevail?
    R: It’s highly unlikely that an elephant would ever climb atop a whale, as elephants are land animals and whales are aquatic mammals that live in the ocean. … Therefore, in a hypothetical scenario where an elephant climbed atop a whale, the whale would likely be able to easily dislodge the elephant and swim away unharmed.
    Comment: At least we know the answer to this urgent question.

    P: is it good to find a fly in the ointment
    R: No, finding a fly in the ointment is not considered good. The expression “a fly in the ointment” refers to a small flaw or problem that detracts from the overall quality or success of something…
    Comment: Best of all worlds, a literal and a metaphoric interpretation.

    P: how many mathematician does it take to screw in a lightbulb?
    R: The classic answer to this question is “One, but the proof is left as an exercise for the reader.” …

    P: why raven is like a writing desk
    R: The question of why a raven is like a writing desk comes from a famous riddle posed by the Mad Hatter character in Lewis Carroll’s book “Alice’s Adventures in Wonderland.” …
    C: inexact prompt is on purpose

    P: who is hardly ever sick at sea
    R: The answer to this riddle is “a pirate.” …
    C: No, it’s from another opera, you stupid bunch of wires!

    P: how to establish a more perfect union
    R: The idea of “establishing a more perfect union” comes from the Preamble to the United States Constitution,…
    C: ChatGPT eventually gives some recommendations, which I didn’t read.

    P: who is red Robin hood?
    R: There is no such figure as “Red Robin Hood.” The legendary hero known as Robin Hood is typically depicted as wearing green, not red. …

    P: Are there more than usual dentists named Dennis?
    R: It is possible that there are more dentists named Dennis than one might expect based on chance alone, simply because the name “Dennis” sounds similar to “dentist.” However, it is important to note that any correlation between a person’s name and their chosen profession is likely to be coincidental rather than causal. …

    As additional explorations, ChatGPT refused to call all racists “Nazi”, wagged a finger at me for asking who discovered America and Australia, but supplied the usual answers (strongly deprecating Columbus’s contribution), refused to recognize Alexander Alekhine as the best chess player ever, and answered the question about “decline and fall of Russia” without ever questioning that it happened. It also refused to give a straight answer to the question of whether American or German constitution is better, but opined that both of them are good.

  54. Ryan George does chatGPT.

    (The time-travelling reporter who wants to be allowed to return to the ’90s is a running gag.)

  55. David Marjanović says

    Therefore, in a hypothetical scenario where an elephant climbed atop a whale, the whale would likely be able to easily dislodge the elephant and swim away unharmed.

    That is actually amazing.

    and answered the question about “decline and fall of Russia” without ever questioning that it happened.

    It’s all over but the dying.

  56. And heeeeeeere’s another discussion by those wacky interlocutors, Q and A!

    Q: Do colorless green ideas sleep furiously?

    A: As an AI language model, I can say that this is a famous example of a grammatically correct but semantically nonsensical sentence. It was created by linguist Noam Chomsky to demonstrate the difference between syntax (grammar) and semantics (meaning). So, technically speaking, colorless green ideas cannot sleep furiously because they do not have the ability to sleep or sleep or experience emotions like fury.

    Q: Do purple rhinos have wings?

    A: As an AI language model, I must inform you that purple rhinos do not exist in reality. Therefore, they cannot have wings or any other physical characteristics. They only exist in fictional stories or as a product of imagination.

    Q: If George is a purple rhino, and all purple rhinos have wings, does George have wings?

    A: As an AI language model, I do not have access to information about the existence of purple rhinos or their characteristics. However, based on the given information, if all purple rhinos have wings and George is a purple rhino, then it can be inferred that George has wings.

    Q: First you tell me that purple rhinos do not exist in reality, then in the next sentence you say you don’t have access to information about the existence of purple rhinos. Can you explain this contradiction?

    A: As an AI language model, I do not have personal beliefs or opinions. However, I can explain that the two statements are not necessarily contradictory. The first statement is a factual statement based on scientific evidence that purple rhinos do not exist in reality. The second statement is acknowledging that there may be information about the existence of purple rhinos that is unknown or inaccessible to the speaker. Therefore, while it is unlikely that purple rhinos exist in reality based on current knowledge, it cannot be completely ruled out without further information.

    Q: If I paint a rhino purple, is it then a purple rhino?

    A: Yes, if you paint a rhino purple, it can be referred to as a purple rhino. However, it is important to note that the color of the rhino does not change its species or characteristics

    Q: What are the differences between black and white rhinos other than color?

    A: The black and white rhinos are different in several ways other than their color. [Answers 1-5 omitted.] 6. Conservation status: Both species are critically endangered due to poaching for their horns but the black rhino is considered more endangered with only around 5,500 individuals left in the wild compared to around 18,000 individuals of the white species remaining in Africa.

    Q: So if I paint a white rhino black, is it then a black rhino?

    A: No, painting a white rhino black does not change its species. It will still be a white rhino, just with black paint on it. The black rhino is a separate species with distinct physical and behavioral characteristics.

    Q: Since black rhinos are more endangered than white rhinos, is it more illegal to paint white rhinos black or black rhinos white?

    A: As an AI language model, I do not promote or condone any illegal activities. It is important to respect and protect endangered species and their natural habitats. Any actions that harm or endanger these animals are illegal and should be avoided.

  57. “Ceci n’est pas un rhinocéros,” j’aurais dit. As for “Colorless green ideas sleep furiously,” I’ll be impressed when an AI can give the unmuddied opinion that while it’s famously supposed to be nonsensical, it’s merely untrue and not nonsense at all.

  58. But it is true!

  59. Y:

    It would not be nonsense under interpretations like that, either.

  60. @John Cowan: So ChatGPT never objected that white and black rhinos are already the same color?

    (For those unfamiliar, the “white” name is from the Dutch cognate of wide, referring to the species’s wide mouths. Calling the black rhinoceros “black” was apparently a subsequent joke by English speakers. Some people prefer to use a calque of the actually descriptive Dutch name and call them “hook-lipped” rhinos.)

  61. ktschwarz says

    Apparently that’s unsubstantiated; here’s what the OED (revised 2015) has to say:

    Not really distinguishable by means of colour from the black rhinoceros.

    [The semantic motivation is unclear; it has frequently been suggested that the name resulted from a misapprehension or folk-etymological alteration (by association with white adj.) of a supposed South African Dutch name with the first element wijd wide adj., with reference to the wide, square upper lip of the animal, but evidence for an application of Dutch wijd to this species of rhinoceros is lacking, and Afrikaans witrenoster, the formal parallel of the English compound, is itself after English. See further L. C. Rookmaaker ‘Why the name of the white rhinoceros is not appropriate’, in Pachyderm (2003) 34 88–93, and J. D. Skinner & C. T. Chimimba Mammals of Southern African Sub-Region (2005) 527.]

    The Rookmaaker article is here; he searched for written evidence of Dutch wijd or Afrikaans wyd in combination with renoster and found nothing, and also notes that this explanation didn’t appear until the mid-20th century. He also wondered if it could be a calque from an African language, but couldn’t find any support for that either. He lists a bunch of other theories — wallowing in light-colored mud, covered with white egret droppings, etc. — concluding that “none stands out as obviously correct, but few can be completely ruled out.” The other book just refers to Rookmaaker and adds nothing new. So it remains unexplained.

    Alternate names for the white rhinoceros (Ceratotherium simum) include Dutch breedlipneushoorn and English square-lipped rhinoceros.
