Meghan O’Gieblyn’s Babel (n+1 Issue 40, Summer 2021) is an essay about OpenAI’s GPT-3, described by a friend of hers who works in the industry as “autocomplete on crack.” (I posted about it briefly last year.) She has plenty of interesting things to say; here are some excerpts:
From what I could tell, the few writers who’d caught wind of the technology were imperiously dismissive, arguing that the algorithm’s work was derivative and formulaic, that originality required something else, something uniquely human — though none of them could say what, exactly. GPT-3 can imitate natural language and even certain simple stylistics, but it . . . cannot perform the deep-level analytics required to make great art or great writing. I was often tempted to ask these skeptics what contemporary literature they were reading. The Reddit and Hacker News crowds appeared more ready to face the facts: GPT-3 may show how unconscious some human activity is, including writing. How much of what I write is essentially autocomplete?
Writers, someone once said, are already writing machines; or at least they are when things are going well. The question of who said it is not really important. The whole point of the metaphor was to destabilize the notion of authorial agency by suggesting that literature is the product of unconscious processes that are essentially combinatorial. Just as algorithms manipulate discrete symbols, creating new lines of code via endless combinations of 0s and 1s, so writers build stories by reassembling the basic tropes and structures that are encoded in the world’s earliest myths, often — when things are going well — without fully realizing what they are doing. The most fertile creative states, like the most transcendent spiritual experiences, dissolve consciousness and turn the artist into an inanimate tool — a channel, a conduit. I often think of the writer who said she wished she could feel about sex as she did about writing: That I’m the vehicle, the medium, the instrument of some force beyond myself.
[…]
GPT-3’s most consistent limitation is “world-modeling errors.” Because it has no sensory access to the world and no programmed understanding of spatial relationships or the laws of physics, it sometimes makes mistakes no human would, like failing to correctly guess that a toaster is heavier than a pencil, or asserting that a foot has “two eyes.” Critics seize on these errors as evidence that it lacks true understanding, that its latent connections are something like shadows of a complex three-dimensional world. The models are like the prisoners in Plato’s cave, trying to approximate real-world concepts from the elusive shadow play of language.
But it’s precisely this shadow aspect (Jung’s term for the unconscious) that makes its creative output so beautifully surreal. The model exists in an ether of pure signifiers, unhampered by the logical inhibitions that lead to so much deadweight prose. In the dreamworld of its imagination, fires explode underwater, aspens turn silver, and moths are flame colored. Let the facts be submitted to a candid world, Science has no color; it has no motherland; It is citizens of the world; It has a passion for truth; it is without country and without home. To read GPT-3’s texts is to enter into a dreamworld where the semiotics of waking life are slightly askew and haunting precisely because they maintain some degree of reality. […]
GPT-3 has a temperature gauge, which can be adjusted to determine the randomness and creativity of its output. If you give it the prompt: My favorite animal is — a low-temperature response will be a dog. If you turn up the temperature, it might answer a dragon, the goldfish, or a Wibblezoo (not a real animal). Turn it on high, ask it to produce titles for PBS documentaries, and it will suggest: That Time a Mutant Super Robin Nearly Wiped Out England, It’s Not Easy With Floofs on the Moon, and How Darth Vader Helped with the Founding of America. Turn the temperature down to zero and let it improvise, without giving it any prompt, and the output becomes redundant to the point of absurdity […] This tendency to get stuck in repetitive loops, the logical endpoint of a technology that has been trained to maximize statistical likelihood, is known as degenerate repetition.
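For the technically curious, the temperature knob is easy to sketch: the model assigns a score to every possible next token, those scores are divided by the temperature, and the result is turned into probabilities to sample from. What follows is a minimal illustration of that generic mechanism, not OpenAI’s actual code; the toy vocabulary, scores, and function name are invented for the example.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores ("logits"), scaled by temperature."""
    rng = rng or np.random.default_rng()
    if temperature <= 0:
        # Temperature zero amounts to greedy decoding: always pick the likeliest token,
        # which is what produces the repetitive "degenerate" loops described above.
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                        # subtract the max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Toy illustration: "a dog" dominates at low temperature; oddities creep in as it rises.
vocab = ["a dog", "a cat", "a dragon", "a Wibblezoo"]
logits = [4.0, 3.0, 1.0, -1.0]
for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_next_token(logits, t)] for _ in range(1000)]
    print(t, {w: picks.count(w) for w in vocab})
```

Raising the temperature flattens the distribution, which is why the high-temperature answers drift toward dragons and Wibblezoos, while turning it to zero locks the model onto its single likeliest continuation, over and over.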
O’Gieblyn tries hypnosis to jump-start her writing process, which has gotten stalled:
Twice my mind dimmed to almost nothing, words passing through me without leaving a trace, hands moving with an unthinking authority I’d only ever associated with the piano, never the keypad. Pure language, untainted by thought. I thought of the women I’d witnessed as a child in church, standing in the aisle of the sanctuary, faces upturned to the panel lighting, babbling a steady stream of nonsense that sounded, at its most convincing, like Aramaic, but that more often resembled baby talk. The words, they insisted, were not their own, a claim that has been confirmed, eerily enough, by neuroimaging. During glossolalia and automatic writing, the frontal lobe — responsible for language processing and the unified self — goes black. Where do the words come from?
Syntax is the last to go. The mind wanes, the ego recedes, meaning unravels. All that remains is the structure of language, the deep-rooted knowledge that nouns must follow prepositions, that verbs must be conjugated in the singular or plural. And in moments of total darkness, when even this structure dissolves, there persists somewhere in the limbic basement of the brain the rhythm of language, its cadence and flow, the dialect of pure sound we babbled as infants and grunted to one another on the savannah before symbols fell from the sky. […]
When I read my hypnosis texts again, more closely, it occurred to me that many of the phrases were not just iconic or universal but plagiarized. Petals like faces on the black ground was an inversion of Ezra Pound. I was a daisy fresh girl — that was lifted wholesale from Lolita. The technical term is cryptomnesia: concealed recollection. Memories misrecognized as inspiration. Every writer has experienced it. You pick up a book you read years ago, and there it is: an image, a metaphor, the exact words you’d believed were your own. It is difficult in such moments to avoid darker conclusions about the relationship between thought and language. Perhaps the French theorists were right: we are never really creating but merely drawing in our sleep from that stagnant reservoir of secondhand ideas. The writer can only imitate a gesture that is always anterior, never original; he simply invents assemblages from the assemblages which invented him.
Eventually she gets to the problems associated with the language generator, beginning:
Cryptomnesia is endemic to language models that use internet-scale data sets. Some memorization is necessary for the technology to work at all, but the largest models are prone to “eidetic memorization,” an industry euphemism for plagiarism. A study published in 2019 discovered that roughly 1 percent of GPT-2’s output was verbatim text consumed during its training period. This included news headlines, exclusive software licenses, passages from the Bible and the Koran, whole pages of Harry Potter books, snippets of Donald Trump speeches, source code, and the first eight hundred digits of pi. The model occasionally regurgitated personal information — real usernames, phone numbers — that it had come across during its training, and in one case, staged a fictional conversation about transgender rights between two real Twitter handles. Once or twice, I’d spotted the plagiarism myself. One researcher posted on his blog a poem GPT-3 had written, titled “The Library of Babel,” after the short story by Jorge Luis Borges. The last stanza of the poem was lifted, in its entirety, from Byron’s Don Juan.
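(A technical aside: “eidetic memorization” of the kind such studies measure is usually detected by scanning generated text for long spans that also occur word for word in the training data. Here is a crude, purely illustrative sketch of that sort of check; the function name, window size, and approach are my own simplification, not the methodology of the study cited above.)

```python
def verbatim_overlaps(generated: str, corpus: str, n: int = 8) -> list[str]:
    """Return every n-word span of `generated` that occurs verbatim in `corpus`."""
    corpus_words = corpus.split()
    corpus_ngrams = {tuple(corpus_words[i:i + n])
                     for i in range(len(corpus_words) - n + 1)}
    gen_words = generated.split()
    hits = []
    for i in range(len(gen_words) - n + 1):
        window = tuple(gen_words[i:i + n])
        if window in corpus_ngrams:
            hits.append(" ".join(window))
    return hits

# A hit of eight or more consecutive words is a strong hint of memorization rather
# than coincidence; real studies work over far larger corpora and use suffix arrays
# or hashing rather than an in-memory set.
```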
And I like this passage, near the end:
I could find meaning in literally anything, so how was I supposed to know what was significant and what wasn’t?
When I posed the question to Liz, she texted back, I think you get to decide what’s meaningful? The point of automatic writing, as she understood it, is not that the images have intrinsic significance, but that it yields a catalogue of motifs that the conscious mind could then construe into a satisfying narrative.
I realized it was this — decisive interpretation — that I’d been refusing. At some point, I had taken to obsessively reading my hypnosis texts as if they were holy writ, as though the words and images held some kind of mysterious authority. Archetypes, as Jung pointed out, have no absolute meaning unto themselves. They exist in the unconscious as empty forms, not unlike the axial system of a crystal, or — to update his analogy — the symbolic logic of a computer. Without the act of deliberate interpretation by a conscious subject, the primal, repressed imagery would continue to haunt the patient as though it were some external specter, governing her life in ways that seemed determined and inarguable. “When an inner situation is not made conscious, it happens outside, as fate,” he said.
Yes, decisive interpretation — deciding for ourselves what’s meaningful — is, I think, at the core of being adult human beings. No computer program will do that for us.
Incidentally, there are footnotes for the quoted passages, and it’s worth hovering over them, because a few of them are “GPT-3.” Gotcha! (Thanks for the link go to that champion link-trawler Trevor.)
Just because something is automatic, in the sense of not needing conscious input, it does not follow that it is automatic, in the sense of being machine-like. The subconscious is not mechanical. (Mine can do cryptic crosswords, unlike my conscious mind. I rarely arrive at the answers by actual ratiocination.)
… like failing to correctly guess that a toaster is heavier than a pencil,
Next she will be telling us that an uninflated dirigible is not a lighter than air craft.
Bringing in Jung’s ideas about the unconscious is virtually never a good idea:
True, although almost vacuously so.
False, at least for the unconscious as Jung postulated it.
Weirdest condensed matter physics metaphor of the year.
Imagine Chomsky doing this.
I wouldn’t call a neural net “mechanical” either.
Apparently it’s Jung’s.
I wouldn’t call a neural net “mechanical” either.
However, if you subscribe to the notion that all mental phenomena are ultimately “mechanical” ex hypothesi, that would apply just as much to conscious as to unconscious processes. (More so, I think, in my own case: my conscious mind is forever attempting to impose a plausible veneer of would-be mechanical rationalisations on the Monsters from the Id*.)
*Scilicet the part of me that can actually do crosswords, remembers to breathe, and can ride a bike. (Great film, though.)
My sons just saw The Forbidden Planet for the first time last month.
In the next week I actually have to write 5 to 10 pages of some incredibly boring but faux-enthusiastic crap for my work, about my work, the real content of which is “I do this job because you pay me money and I want you to pay me more money, and I agree to do the job, though I would rather do something else.” Gosh, I need me some GPT-3.
As an undergrad in the 1980s, I played with computer-generated haiku, for which I set up a grammatical parser and a vocab list. It produced some amazing samples, all lost now. The most impressive ones achieved their shock value through unexpected juxtaposition.
Now, looking at samples of haiku-by-GPT-3 (for example, here: https://www.amazon.com/dp/B08S3GMHZD/ref=dp_kinw_strp_1/356-9611533-3208258), I see lots of mind-numbing cliché: just using a parser/vocab list was much more entertaining.
However, even with the parser, after a while, one would find a random pattern of sorts, so my literary friends would usually start out being amazed, but then would lose interest completely after about 5 minutes of generating these things.
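If anyone wants to play with the old approach, here is a rough sketch of the general idea as I remember it: a tiny template grammar plus a vocabulary list bucketed by syllable count. The words and templates below are placeholders, not my lost originals.

```python
import random

# Slots name a part of speech plus an (approximate) syllable count;
# fixed templates add up to the 5-7-5 pattern. The "shock value" comes
# entirely from random juxtaposition.
VOCAB = {
    "adj2":  ["silver", "hollow", "sleeping", "broken"],      # two-syllable adjectives
    "noun1": ["moon", "frost", "smoke", "bone"],               # one-syllable nouns
    "noun2": ["river", "sparrow", "thunder", "mirror"],        # two-syllable nouns
    "verb2": ["dissolves", "returns", "forgets", "unfolds"],   # two-syllable verbs
}

TEMPLATES = [
    ["the", "adj2", "noun2"],                       # 1 + 2 + 2 = 5 syllables
    ["noun1", "verb2", "beneath", "the", "noun1"],  # 1 + 2 + 2 + 1 + 1 = 7
    ["adj2", "noun1", "verb2"],                     # 2 + 1 + 2 = 5
]

def haiku() -> str:
    lines = []
    for template in TEMPLATES:
        words = [random.choice(VOCAB[slot]) if slot in VOCAB else slot
                 for slot in template]
        lines.append(" ".join(words))
    return "\n".join(lines)

print(haiku())
```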
Sure, but neural nets do not manipulate narrative tropes. They manipulate strings, which are in a tenuous relationship to words, which are in a tenuous relationship to utterances, which are in a tenuous relationship to narrative tropes.
Clarifying those tenuous relationships requires a knowledge model (and a genre model) on top of your language model, and AFAIK that particular problem is still where Minsky and Papert left it.
@Alon Lischinsky: Sure, but neural nets do not manipulate narrative tropes.
Au contraire, says xkcd. (Also.)
My thesis was on that. I found nothing interesting. Brett: EDIT: My attempt to install FreeBSD on a laptop led to the creation of this comic: https://xkcd.com/349/
neural nets do not manipulate narrative tropes.
even in silicon-based environments, you don’t need a neural net to do that! the intrepid researchers at the heart of Foucault’s Pendulum manage it just fine with Basic (or a fictionalized equivalent), if i’m remembering right.
thank you for this, fascinating.
It used to be monkeys writing Shakespeare; now it’s computers performing eidetic memorization. In both cases humans have to assign meaning to the output; otherwise it’s merely babble. That decisive interpretation is, as hat says, the core of being human.
Per D.O.’s comment – agreed, I think GPT-3 has a bright future in marketing.
Quote:
GPT-3’s most consistent limitation is “world-modeling errors.” Because it has no sensory access to the world and no programmed understanding of spatial relationships or the laws of physics, it sometimes makes mistakes no human would.
This is the same problem that self-driving cars have, and always will have, for the same reason. It’s nicely summarized by Moravec’s Paradox. This is well known to philosophers of science, though insufficiently publicized.
As Dr Moravec wrote,
It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
Yes. See also Drawing on the Right Side of the Brain (Betty Edwards).
Previous discussion of lateralization in the brain.
Or, more pithily, the blain.