OUR NEW COMPUTER OVERLORDS.

Ben Zimmer has an interesting piece in The Atlantic on the recent Jeopardy victory of IBM supercomputer Watson over the two human competitors with the most winnings, Ken Jennings and Brad Rutter. I watched the last two nights, and it was fairly depressing from a petty human-centric point of view; our guys never had a chance against the mighty Watson. But, as Zimmer says, for all Watson’s data, it would not have been able to make anything of the “complex use of language” involved in Jennings’s quip “I, for one, welcome our new computer overlords.” Zimmer deflates IBM’s hype about the tournament and briefly discusses the distance between what computers can do and “full-fledged comprehension of natural language.” Well worth a read.

Comments

  1. I’ve been waving this definition at people (not the ones at IBM) who claim that Watson understands natural language: you understand (and speak) a natural language if you are capable of holding an adult-level conversation in it on any culturally relevant topic for a reasonable length of time. Watson doesn’t meet this definition, nor was it meant to. (Of course there are loopholes, like children, stuttering and locked-in syndrome.)

  2. As an ex-programmer, I was very impressed by the job the Watson people did. But I was astonished when people spoke of Watson “understanding natural language.” No no no, nothing remotely like that. It was a great feat of pattern-matching and association, and my hat’s off to them, but it’s hardly tantamount to understanding natural language.

  3. Hate to say this, but I did not know the Simpsons reference, so the entire point of the ‘computer overlords’ comment slipped past me. As a deadpan welcome to Watson it sounds flippant and mildly amusing, but without the Simpsons reference it completely lacks the outrageous subtext of obsequiousness and selling out that makes it so funny.
    I would suggest, also, that this is a culturally narrow reference. Have you ever tried to explain a joke to someone from a foreign culture? It’s not always easy. Sardonic and ironic humour doesn’t necessarily travel very well. In fact, lots of things about language and humour don’t travel very well. So from Zimmer’s point of view, Watson’s inability to understand this particular type of humour is proof that computers can never master the subtleties of human humour. For me, it just proves that knowing humour in your own culture doesn’t entitle you to generalise about humour in language at large, nor to assume that computers could never master it.

  4. Leonardo Boiko says:

    @Bathrobe: I’m not so sure about that. I’m not exactly an expert in multiculturalism, but I’d wager almost all cultures have irony—at least all I’ve seen. A few jokes may not travel intact but they do travel, even if surrounded by explanations. Besides, there is such a thing as a global culture. Memes spread and cross-pollinate freely between cultural areas and languages. At this point, the Simpsons, Batman, and Indiana Jones are world heritage as far as I’m concerned. I’m Brazilian and grew up watching the Simpsons.
    In any case the point is that, even if you miss this particular in-joke, you at least have the capacity to understand in-jokes. Watson doesn’t. He can’t even understand it after the fact like you did. No one is saying computers will never be able to “really understand” language, just that right now they can’t – and the overlords remark felt to me like a dead-on highlight of Watson’s limitations.

  5. to me, the one big difference between a computer’s language (communication) skills and a human’s is the ability to understand what is NOT said, rather than just the words. Arguably, there may be a mathematical sequence to describe – and interpret – a gesture, a glance, a smell, a facial expression, a non-verbal sound (tapping, hissing, huffing, tutting, oohing etc.), but the meaning of what is not said, only implied – I can’t see how a machine can possibly ever do it? (I was reading Nathalie Sarraute, whose whole method is based on that)
    I haven’t seen the programme, but I remember Kasparov argued that Deep Blue had an unfair advantage over him: having access to all the chess games in its memory. It’s like being allowed to use cribs at an exam. In the rematch Kasparov was allowed to use a chess database – and won. How was this addressed with Watson?

  6. Leonardo Boiko says:

    > I can’t see how a machine can possibly ever do it?
    Sashura, if you’re a materialist, you just have to look in a mirror. The human brain (or, more cautiously, the human body) is able to do it, so physical objects can do it, so it must be possible for a constructed artefact to do it. It’s conceivable that such an artefact might be very different from our computers and programs (or not). But if you don’t subscribe to a supernatural or religious worldview, we ourselves are proof that natural-language processing is possible.
    You speak of databases. It could be argued that all those subtle implications are simply pointers to a kind of social database. For example, “Got a clock?” in my language is effectively the same request as “What time is it?”, not a question about ownership of timepieces. How do I know this? Well, I learned it through interactions with other people. What is “not said” are conventions, cultural norms and other information that we humans acquire through our education and experience. It’s conceivable that future computers could acquire the same “database” in the same way, or in a different way.
    Even if you do think human consciousness has a supernatural foundation, human-like computers are not such a stretch, I think. Try to imagine what our technology would look like to a first-century person, then try to picture what it will be like two thousand years from now—or four, or ten, or a hundred thousand. (I’m being rhetorical, it’s impossible ;) ).
    But as another ex-programmer (is there a club?), I agree with your intuition in that I don’t think our present-day computers are able to “really understand” human communication.
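    The “social database” idea above can be caricatured in a few lines: a lookup table from conventionalized phrasings to the speech acts they actually perform. This is a toy sketch, not any real system’s design, and every entry in the table is an invented illustration:

    ```python
    # A toy illustration of the "social database" idea: indirect,
    # conventionalized phrasings map to intents that their literal
    # wording does not express. All entries are invented examples.
    CONVENTION_TABLE = {
        "got a clock?": "REQUEST:current_time",
        "can you pass the salt?": "REQUEST:pass_salt",
        "do you know what time it is?": "REQUEST:current_time",
    }

    def interpret(utterance):
        key = utterance.strip().lower()
        # Fall back to a literal reading when no convention is stored --
        # exactly the failure mode the comment attributes to machines.
        return CONVENTION_TABLE.get(key, "LITERAL:" + key)

    print(interpret("Got a clock?"))      # REQUEST:current_time
    print(interpret("Got a stopwatch?"))  # LITERAL:got a stopwatch?
    ```

    The point of the sketch is the fallback branch: anything not already in the acquired “database” gets read literally, which is roughly what happens when a machine (or an outsider to the culture) meets an unfamiliar convention.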

  7. j. del col says:

    How do we know Watson didn’t understand the remark?
    Perhaps it was just being polite by not responding with “Resistance is futile!”

  8. I suspect that the question here is even deeper. Or perhaps it’s near-identical: what do you think?
    http://www.futilitycloset.com/2011/02/23/the-parrot-of-atures/

  9. @John Cowan. That’s the Turing Test, which says that all that matters is external behavior. That should have been demolished in the same devastating attack with which Chomsky demolished behaviorist explanations of language, which say the same thing. That people are still quoting the Turing Test 45 years after its sell-by date expired says something. I’m not sure what.
    To me, for a computer to “understand” something means that it builds an internal structure that’s arguably similar to what a human would build in its head. That doesn’t mean neuron-level simulation. ’60s-era AI was trying to do this, but it failed because it was using very wrong notions of how people’s brains actually process language. Today’s massive text corpus approaches are headed in the wrong direction. I think we’ve got enough data out of modern brain imaging and psychological lab work for a fresh start.

  10. @bathrobe
    I have to agree, I didn’t know the Simpsons reference either. However, I certainly recognized the phrasal template – it had a fairly large run in the places I lurk on the internet a few years ago, and can still be seen occasionally.
    As far as being able to recognize it, come on! All it requires is a recognition algorithm, and that was solved so long ago that it’s hoary with age. In the context of Jeopardy, of course, it’s the Simpsons reference that’s important. In a more general language recognition context, there are hundreds if not thousands of such templates, most of which have the status of idioms.
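    For what it’s worth, the template-recognition part really is old technology. A minimal sketch of what the comment describes, using the overlords snowclone as a pattern with one open slot (the function name and the idea of returning the slot filler are mine, purely for illustration):

    ```python
    import re

    # The "I, for one, welcome our new X overlords" snowclone reduces
    # to a fixed frame with a single open slot. A general recognizer
    # would hold hundreds of such templates and treat a match as
    # evidence of an idiomatic, non-literal reading.
    OVERLORD_TEMPLATE = re.compile(
        r"i,? for one,? welcome our new (?P<slot>.+?) overlords?",
        re.IGNORECASE,
    )

    def match_overlord_snowclone(text):
        """Return the slot filler if the text instantiates the template."""
        m = OVERLORD_TEMPLATE.search(text)
        return m.group("slot") if m else None

    print(match_overlord_snowclone("I, for one, welcome our new computer overlords."))
    ```

    Of course, recognizing the frame only tells you that something idiomatic is going on; supplying the Simpsons subtext is the part no lookup table gives you.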

  11. I haven’t seen the programme, but I remember Kasparov argued that Deep Blue had an unfair advantage over him: having access to all the chess games in its memory. It’s like being allowed to use cribs at an exam. In the rematch Kasparov was allowed to use a chess database – and won.
    Eh?
    Kasparov won the first match 4-2, and lost the 1997 rematch 3 1/2-2 1/2. Whereupon “Kasparov accused IBM of cheating and demanded a rematch, but IBM refused and dismantled Deep Blue”. (This is from Wikipedia, but it is also my recollection.)

  12. I’m in Bathrobe’s camp, doubled. I’ve never watched the Simpsons, and I had to look up Jeopardy in Wikipedia as I’d never heard of it, probably because I live in the UK. It seems to be Reverse Trivial Pursuit.
    (I should correct the first statement. I watched one episode years ago and couldn’t get on with it, which I am told exposes my obtuseness because everyone else in the world – except Bathrobe – thinks it’s brilliant.)

  13. John Roth:
    It is indeed the Turing Test, although Turing’s original spec only requires that “an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning”, which strikes me as too short a time.
    But what Chomsky refuted was the claim that a specific, rather limited, theory of behavior was sufficient to account for the full range of human language. He did not refute the idea that behavior is what matters. Indeed, all of Chomsky’s own theories are duck theories: as long as the object of study quacks like a duck, it does not matter if it has a syrinx inside or not. His “language acquisition device” is an abstract model, not something that necessarily has a specific neurological referent.
    What is more, all understanding is “merely” apparent understanding: none of us knows whether our fellow humans have gears in their heads instead of brains, nor in fact do we know if we have gears in our own heads (unless we go look). “I think, therefore I have no access to the level where I sum.”

  14. I should elaborate. I don’t mind the Simpsons. I’ve watched it a few times and the frequent flashes of social cynicism are engaging. I also agree that computers can necessarily understand that kind of humour or get their circuits around a lot of things that human beings take in their stride.
    But my point remains. Zimmer’s example only works within a cultural in-group (even if its memes are now allegedly spanning the world). You can sit in California (or wherever, it doesn’t matter) and assume that the rest of the world is just like California, but it’s just not so. And to cite an example of typical Californian behaviour as proof that computers will never ‘get it’ just doesn’t wash. Sorry.

  15. Oops.
    I also agree that computers can’t necessarily understand that kind of humour…

  16. I’d wager almost all cultures have irony—at least all I’ve seen
    To some degree that’s true. But in my experience with Japanese culture, irony is precisely one of the areas where Japanese people don’t always get it. Not that Japanese culture lacks irony, but it’s certainly not as obvious as in Western culture, and not necessarily expressed in the same way. If a Japanese person, like Watson, didn’t get it, that in itself would disprove Zimmer’s point, or at least his headline example.

  17. John Cowan: What is more, all understanding is “merely” apparent understanding: none of us knows whether our fellow humans have gears in their heads instead of brains, nor in fact do we know if we have gears in our own heads (unless we go look). “I think, therefore I have no access to the level where I sum.”
    John, I agree 80% with that, i.e. with 100% of the “behaviorist” take (in the spirit of Mead) although not as to the exact wording (Luhmann, say, puts these things more carefully) – in particular not as to “unless we go look”, which I take to be mild-mannered, avuncular approval of introspection as a tool for understanding.
    But you amaze me: a few weeks ago you complained in the following words about something similar that you thought I had said: “It is beyond preposterous to assert that everything one says “on the Internet” should be treated as fictional including the identity of the speaker.” Have you changed your mind ? Or is the above quote merely part of your paraphrase of Chomsky’s views ?

  18. Consider the fact that we seem to disagree about certain things, and perhaps even disagree as to whether we are only apparently in disagreement, or are actually in disagreement, or maybe even in agreement. And even when we are in agreement about something, one or both of us may later decide that our agreement was deceptive, and that we had just not noticed that we were talking at cross-purposes. Doesn’t all this suggest that, from a theoretical point of view, it is reasonable to say that all understanding is in practice “apparent” – including such theoretical understanding ?
    Note that I am saying that it’s reasonable, not that it’s right. One of the implications of such theoretical analysis is that there can be no “right” thinking – but also no “wrong” thinking. There are only more or less useful ways of thinking for particular purposes. Nobody gets hurt by such general views – except those who make a living by telling other people how to run their lives.

  19. Kasparov
    This is what I refer to:
    he was denied access to Deep Blue’s recent games, in contrast to the computer’s team that could study hundreds of Kasparov’s.
    After the loss Kasparov said that he sometimes saw deep intelligence and creativity in the machine’s moves, suggesting that during the second game, human chess players, in contravention of the rules, intervened.
    But you are right – he lost the rematch, not the first match, my memory failed me.

  20. John Emerson says:

    Zimmer reports this morning that he got laid off. The Times seems to be hellbent on self-destruction.
    I blame myself. I could have known that my praise would get him fired.
    http://www.facebook.com/bgzimmer

  21. Thanks for the mention, John. Though “On Language” is indeed coming to a close, I still have my day job at the Visual Thesaurus (where I’ll continue to write the Word Routes column). You can follow me on Facebook and Twitter for further updates.

  22. John Emerson says:

    Only the good die young.

  23. Grumbly: By “go look” I meant surgery, although CT or MRI would do as well, of course. No, no, introspection is far too faulty a tool. As I have said elsewhere (following Dennett in this case), there are no qualia, but there certainly seem to be qualia, a fact which needs explanation.
    On your second comment, we are indeed at cross-purposes. By “understanding” I meant not “comprehension” but something more like “consciousness”. Because comprehension is limited, there is a principle of charity: assume that you are misunderstanding your fellow sophont rather than that they are willfully deceiving you — but “fool me once, shame on you; fool me twice, shame on me.”
    So I abate not a jot of what I said before. Whether or not I am a robot (and I can’t be certain, though I’m betting I’m not), I am certainly not a (philosophical) zombie, again because there are no such things. I find it useful to treat myself as an intentional system for the same kinds of reasons I find it useful to treat you as one. That involves telling both of us how to run our lives, if the way you or I run them includes hurting other people, as by lying to them.

  24. As I have said elsewhere (following Dennett in this case), there are no qualia, but there certainly seem to be qualia, a fact which needs explanation.
    It would make just as much sense to say that there seem to be no qualia (from a scientific point of view), but there certainly are qualia (from the viewpoint of experience). Over the past hundred years at the very least, being-and-appearance metaphysics have been shown to be absolutely worthless for theoretical purposes – though they live on in everyday discussions. I could also say that this view of Dennett’s is nonsense, although it seems to make sense – or else that it seems to make sense, although it is nonsense. Both of these claims would need explanation – but no explanation is forthcoming in terms of being and appearance. It is on the basis of such considerations that, as I said in a previous post, I never use the words “fiction” and “identity”, but only mention them. They create more problems of analysis than they apparently help to solve.
    I have similar reservations about this business of “intelligence” when the word is used in a theoretical context. It seems to me that it is too often used in an unintelligent way, as the history of ideas can help us to see. Ben Zimmer’s article is a nice piece of debunking, particularly in that he brings out the stage-managed aspects of Watson’s “understanding”. For various reasons not clear to me, however, very many people grab the wrong end of that shtick. They are savvy enough to know that an advertising claim to the effect that “Dash washes cleaner than any other detergent” doesn’t make it so – and yet they ooh and aah at any old rabbit pulled out of the hat, provided it is pulled with scientific aplomb.
    The issues conjured up around understanding, comprehension, and intelligence on the part of man-made machines are issues of talking heads. In everyday life, one assesses the understanding, comprehension and intelligence of a thing on the basis of the results produced by that thing (one is reminded of the “success semantics” of Ramsey), not by what it says. Someone may well think that he can entrust his financial investments to a computer, on the basis of its performance on a Jeopardy show – but that would probably be a stupid thing to do, just the kind of unintelligent thing one is not surprised to find a human being doing, or a computer.
    By the way, I’ve signed this comment with my real name. That should help to reduce the moral outrage you experience at my supposed defense of mendacity. As a result, you will be able to spend more time just being annoyed with my views. Don’t say I never did anything for you !

  25. John Cowan: Whether or not I am a robot (and I can’t be certain, though I’m betting I’m not), I am certainly not a (philosophical) zombie, again because there are no such things.
    John, I forgot to address this in my last comment. Who are you betting with ? I hope he doesn’t have horns and a tail. Anyway, what’s wrong with being a robot ? If you can’t imagine that, you might want to watch the film Blade Runner. As for zombies, the history of philosophy is full of them. Zombies are just folks like me and you, only a bit more obsessive about certain things.

  26. my real name
    You have? Where?
    I’m fine with being a robot, except if I were, I’d hardly have such wetware faults as diabetes, asthma, and sleep apnea (though chronic anxiety might be in the cards), so I’d say the odds are against it.
    And when I say seem I refer to appearances: there seems to be a back-and-front-switched copy of me in the mirror, but there isn’t. Moving the word to the other side of the remark (or the mirror) is not a defensible rhetorical strategy.

  27. Hockessin says:

    This blog site has got some very helpful information on it! Thanks for helping me.

  28. Any real non-spammer people who say “blog site”?
