I’ll take “Cheap Publicity Stunts” for $1,000, Alex

IBM’s assault on Jeopardy! isn’t a triumph for artificial intelligence. It’s an embarrassment.

by Colby Cosh

Having lived through the hype over IBM’s 1997 Deep Blue challenge to human chess players, I find myself intensely irritated at IBM’s 2011 assault on Jeopardy! The Globe’s tech reporter leads off his rumination with “On the surface, it has all the makings of a gimmick…”. So did Deep Blue; but let it be recalled that in the fullness of time, after public quarrels and investigative reports and documentaries allowed us to attain a historical perspective, the project actually turned out to be…a gimmick.

IBM didn’t exactly cheat in the Deep Blue showdown, but the company refused to let Garry Kasparov study the computer’s games the way he could have for a top human opponent. When Kasparov nonetheless figured out how to lead the computer into traps by studying tactical weaknesses of artificial intelligence, the company, fearing for its prestige, brought in human chessmasters—ringers—to tweak the program’s position-evaluation algorithm and prevent an awkward defeat. Ken Jennings is joining battle, not with an artificial mind, but with a coterie of corporate drones to whom sportsmanship comes second.

The general arc of computer-chess development, and the perpetually disappointing history of AI, were largely unaffected by the Deep Blue-Kasparov contest. Indeed, the main influence of the exhibition was probably the way it intensified research into anti-computer chess styles. Human-versus-computer competition basically reached a stalemate after 2002’s 4-4 draw between Vladimir Kramnik and Fritz, in which the inherent intellectual limitations of the machine and the physiological and nervous ones of the man more or less ended up cancelling out.

Every article about Watson, IBM’s Jeopardy!-playing device, should really lead off with the sentence “It’s the year 2011, for God’s sake.” In the wondrous science-fiction future we occupy, even human brains have instant broadband access to a staggeringly comprehensive library of general knowledge. But the horrible natural-language skills of a computer, even one with an essentially unlimited store of facts, still compromise its function to the point of near-parity in a trivia competition against unassisted humans. Surely this isn’t a triumph for artificial intelligence, or for IBM, so much as it is a self-administered black eye?

Jeopardy!, after all, doesn’t demand that much in the way of language interpretation. Watson has to, at most, interpret text questions of no more than 25 or 30 words—questions which, by design, have only a single answer. It handles puns and figures of speech impressively, for a computer. But it doesn’t do so in anything like the way humans do. IBM’s ads would have you believe the opposite, but it bears emphasizing that Watson is not “getting” the jokes and wordplay of the Jeopardy! writers. It’s using Bayesian math on the fly to pick out key nouns and phrases and pass them to a lookup table. If it sees “1564” and “Pisa”, it’s going to say “Galileo”.
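To make that concrete: reduced to a toy, the mechanism just described looks something like the sketch below. The fact table and the scoring are invented for illustration (Watson’s actual DeepQA pipeline generates and ranks thousands of candidate answers with vastly more machinery), but the shape of the trick is the same: extract keywords, consult stored facts, report the best-scoring match with a confidence number.

    # A deliberately crude rendering of "pick out key nouns, consult a
    # lookup table": score stored facts by keyword overlap with the clue.
    # The fact table is hypothetical; the real system does far more.

    FACTS = {
        "Galileo": {"1564", "pisa", "astronomer", "telescope"},
        "Shakespeare": {"1564", "stratford", "playwright"},
        "Napoleon": {"1769", "corsica", "emperor"},
    }

    def answer(clue):
        words = set(clue.lower().replace(",", "").split())
        # Normalized keyword overlap stands in for an answer-confidence score.
        scores = {name: len(words & keys) / len(keys) for name, keys in FACTS.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]

    print(answer("Born in Pisa in 1564, he trained his telescope on Jupiter"))
    # ('Galileo', 0.75) -- "pisa", "1564" and "telescope" all match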

So why, one might ask, are we still throwing computer power at such tightly delimited tasks, ones that lie many layers of complexity below what a human accomplishes in having a simple phone conversation? The Globe’s Omar el Akkad tells us, in a sidebar, that the University of Alberta’s world-leading poker software “can beat pretty much the best”…but in a two-player limit game, i.e., an unrealistically pure test of odds calculation that is to no-limit hold ’em what a grade-school track meet is to a Formula 1 race. (The roots of that U of A research program go back almost 20 years.) Meanwhile, “Computer chess players can now beat all but the very best humans”—but that was more or less the state of affairs already attained in 1997 when Kasparov fought Deep Blue. And the obliteratingly total lack of progress toward the gold and silver Loebner Prizes (annual implementations of the famous Turing test) is such an embarrassment that the jury has been quietly adjusting the bar from year to year to keep things interesting.
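As for what “an unrealistically pure test of odds calculation” means: stripped to its core, the arithmetic a limit-poker bot is ultimately optimizing is pot odds. Here is a minimal sketch with invented numbers (the U of A program’s actual game-theoretic machinery is, of course, far more sophisticated):

    # Pot odds in fixed-limit poker: with bet sizes fixed, calling is
    # correct whenever your chance of winning exceeds the fraction of
    # the final pot that your call contributes.

    def should_call(pot, cost, win_probability):
        pot_odds = cost / (pot + cost)  # share of the final pot you must pay
        return win_probability > pot_odds

    # Hypothetical spot: $70 already in the pot (including the bet you
    # face), $10 to call, and you estimate a 20% chance of winning.
    print(should_call(pot=70, cost=10, win_probability=0.20))
    # True: 0.20 > 10/80 = 0.125, so the call is profitable in the long run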

El Akkad’s claim is that “Scientists, engineers and entrepreneurs keep pushing the boundaries of artificial intelligence”, but it would almost certainly be more accurate to state that, as Hubert Dreyfus predicted, they keep smacking into those limits without ever breaking through to the accurate imitation of mindlike activity. Dreyfus is, professionally, a specialist in incomprehensible European nonsense; but he was for decades the leading figure among artificial-intelligence pessimists, and his career has effectively been a long series of successful bets against fast AI development. It is rare for a philosopher to be able to claim strictly scientific falsifiability grounds for a finding, but Dreyfus and other AI skeptics arguably can.


  1. The person who wrote this article definitely hasn't programmed before, and definitely doesn't know about the challenges faced by programming a computer, which is great at following things exactly the way you told it to, to be able to do what you think it should. Humans are smart, yes, and you shouldn't take that for granted.

    • I make my living programming computers and I think the article is decent enough. Your comment is pretty incomprehensible though.

    • 10 PRINT "U OBV1OUSLY KNOW NOTH1NG OF C0MPUT0RZ"
      20 GOTO 10

      • You may be onto something, Dominic. An infinite loop printing a poorly-typographed text string with no punctuation. Very weak programming, indeed.

      • Damnit Colby! First rule of coding in QBasic: use labels for your loops, i.e.:

        10 Loop1:
        20 PRINT "ALL YOUR BASE ARE BELONG TO US"
        30 GOTO Loop1

        God knows that QB code was hard enough to read.

        And screw this lousy article. Sure it's a cheap stunt, but I've been looking forward to this for MONTHS! And I will most definitely be putting hard cash down on the computer.

        • No real programmer would use a GOTO statement. And hell, I use FORTRAN!

          • FORTRAN forever!! But I do allow myself the occasional GOTO…I know, I know, bad form.

            Oh, yeah, and C Sharp C Schmarp….;-) [mutters quietly to self]

      • Not to argue against your central thesis here, but when it comes to the technical aspect of this challenge, you have to agree that it's hard to fault computer programmers for failing to design a machine that rivals the finest pattern-matching device known to man; a device once referred to by Isaac Asimov as "the most complex and orderly aggregation of matter in the universe" that, depending on your beliefs, was either designed by a deity or was never designed at all. AI researchers have arguably the highest conceivable standards to meet when it comes to thinking about thinking, and it's hard to fault them for failing to live up to the naive expectations of science fiction.

        • I guess I ought to mention I do write software for IBM, though I've never had any contact with the Jeopardy! project, and I have never done any AI. My opinions are not IBM's, blah blah, and I'm not trying to defend IBM in any way. I'm just trying to point out that your expectations of the AI field are naive. AI researchers face some of the hardest intellectual challenges known to mankind.

          • By "the naive expectations of science fiction" I presume you mean "the naive expectations deliberately created by IBM promotional materials and employees".

          • Well no, I meant whatever it was that led you to this opinion:

            “It's the year 2011, for God's sake….the horrible natural-language skills of a computer, even one with an essentially unlimited store of facts, still compromise its function to the point of near-parity in a trivia competition against unassisted humans.”

            I'm not sure what made you expect that one of the hardest possible computing goals should be solved by now, but I didn't think it was that much of a leap to assume it was Dr. Theopolis et al.

  2. The author is defiantly not a programmer, because if he was then he wouldn't have made the foolish mistake of thinking linguistic programming involves just a search of keywords through a database. This viewpoint automatically gives him a bias, because he himself is more concerned with the sanctity of Jeopardy than with knowing how complex natural language processing can be and how difficult it is to replicate. While a good chess AI these days can be done by someone in high school, language processing will always be orders of magnitude more complex.

    • "Linguistic programming involves just a search of keywords through a database" is not what I wrote. Defiantly or otherwise.

      • "Jeopardy!, after all, doesn't demand that much in the way of language interpretation. "

        I may have even agreed with you if not for these biased qualifiers littered throughout your article. The point is not that IBM has created a machine that can target a narrow field of natural language. The point is that it has successfully implemented a machine that can quantify the English language while preserving the structures of the language for contextual analysis. I guess the whole point of an op-ed is to rant about how everything fails to meet your expectations, but at least don't try to pass off your claims as if you're some kind of professional journalist. Of course, I'm not in journalism and never understood the point of it, so I guess I'm a hypocrite as well; but from reading what you have written, you sound like someone who's never understood the point of computer science, or even mathematics in general, so you just try to write sensationalist titles so you can pretend that you're contributing to mankind. If you really care, try understanding what Watson's NLP means to the niche that it's actually targeting rather than mocking the same people who try to explain to you why the world isn't black or white.

    • You read this article and thought, "wow, Colby sure cares about the sanctity of Jeopardy"?????

  3. Ah! The old "computers will take over" argument, or "they will get to be smarter than humans."

    As long as the main architecture is the flow of ones and zeros, computers will remain dumb. Yes, they can digest a whole bunch of data in record time, but basically all computers still observe the true/false system. There are some arguments that when you make them super fast, let us say 100 times faster than the fastest today, computers will be able to make decisions based on background knowledge. Still nothing for out-of-range ideas, the once-in-a-lifetime good call, and so forth.

    • On an organic level we're just as much a computer program as a real computer program is. We have DNA, which for all intensive purposes lays out the classes, structures, mechanics, and processes of an organic body. Our brains don't feel pain, but we process it by using organic chemicals as signals. Everything that is input into the brain is just another signal for it to interpret and get accustomed to.

      • No, it doesn't. DNA specifies how to construct proteins. That's pretty much all it does. All the other things people think it does are usually figments of their overactive (and non-computer-programmed) imaginations.

        Oh, and it's "all intents and purposes". Pet peeve.

    • Ya, computers are stupid!

  4. There seems to be some confusion over the difference between a computer and artificial intelligence ….perhaps it would help a great deal if we worked on natural intelligence first.

    Being a transhumanist, I can tell you that simply via extrapolation, the technological singularity is expected by 2045.

    • Good to know and believe me when I say that all of us wish you the best with the transplant.

      • Since it doesn't involve transplants, I don't need any good wishes.

        Hence my remark that we need to upgrade natural intelligence.

        • All right, my attempt at a cheap joke backfired on me.

          I have no idea what you are talking about. 'Natural intelligence' could be anything. The drug dealer who outsmarts all his competitors has natural intelligence. Following a trend in the hope of cashing in could be natural intelligence.

          • LOL okay, at least you're honest.

            'Natural intelligence' is what humans are supposed to have….that is IQ…as opposed to 'artificial intelligence'… 'machines' that can think, not just record and remember.

            Street-clever, or street-smarts isn't 'intelligence' per se….it's a survival technique. Urchins in ancient cities had it.

            We are already becoming 'cyborgs' with the devices we use….and it won't be that long until there are man/machine hybrids. Many are experimenting with it now.

          • A chatsite that worked without constant 'pinging' would also be a major step forward.

  5. I…only…lost…because…I…was…not…programmed…to..give…my…response…in..the..form…of…a…question.

    • Good one!!!!

    • I'll go out on a limb and guess that the nerds at Think Inc. have thought of that.

  6. Publicity stunt, yes. Cheap? Maybe compared to IBM's total advertising budget, but certainly not cheap in terms of dollar amounts any of us live with.

  7. By the by, Dreyfus' co-authored book on Foucault is generally recognized to be the best such monograph in the literature (and there are loads of competitors). It must be pretty cool to be world-class in two totally unrelated fields.

    • Yeah, but it's Foucault… so, no one should care.

      • Charming.

  8. The author, like many others today, is foolishly caught in the trough of disillusionment, a stage in the hype cycle of any groundbreaking technology. First comes excessive hype, then the trough of disillusionment, and then, for true breakthroughs such as the inevitable arrival of AI, surprise among the small-minded naysayers who doubted it (and shot down the pioneering technologists) when it finally arrives. At which point they quickly change their view to "I knew it would arrive, I'm surprised it took so long".

    • No. You need to make a case that doesn't depend on repeating the same mantra decade after decade. It's been forty, in some cases fifty years since people started saying "AI is inevitable, resistance is useless". Model after model has been discarded. I'm not a technological pessimist by nature but I AM GETTING OLD; rhetoric like yours is beginning to sound a little street-corner-preachery.

      • A sense of humour…..I like that in a 40 year old blogger!

        • Colby isn't 40 yet – give him a few months.

          And you and I know that 40 isn't old.

          • I was pointing out Colby is merely a kid….and yet he's miffed that we haven't yet created….in his lifetime, mind you….what it took Mother Nature millions of years to do.

          • PING! #$%&*!

    • AI is only "inevitable" if you accept the modern materialist/atheist framework that assumes that only matter and energy exist, and mind is a pure epiphenomenon arising from purely random evolutionary processes. Since we are just machines made of meat, we will be able to create better machines.

      This is, of course, the default framework for the modern "educated" person. It requires ignoring the vast amount of evidence that contradicts it, or labeling it as superstition. "Strong AI" and the resultant "Singularity" will continue to be "just around the corner" so long as this framework persists…

  9. Ray Kurzweil has an interesting book on the subject of AI ("The Singularity is Near"). He believes AI will occur. His general point is that we are now just beginning to understand how our minds truly work. As our understanding of our own thought processes improves, it will provide suitable models for developing AI.

    From my own perspective, I see elements of AI in a range of technologies today, particularly from the perspective of situational awareness and response. Examples include cars that can park themselves and autonomous underwater vehicles used for bathymetric surveys. Similarly, some of Google's work on image-based search, and other face-recognition software, are all parts of the large puzzle that is required for AI.

    • None of those examples is a generational leap beyond simple computer-follows-instructions programming, and genuine AI would require one.

  10. Perhaps what I found most disappointing about the Deep Blue incident was that immediately after the competition was over they tore the machine down so there could never be a rematch. It made for a very childish victory celebration.

    In the case of Jeopardy!, I find it hard to view this as much more than a demonstration of voice-to-text software. Simply entering Jeopardy! 'answers' into Google (or, perhaps more reliably, a similar style of search over a sanitised reference tome) is about the equivalent of what the computer needs to do once it has them. Granted, having fought with voice-to-text implementations before, I expect that will be an impressive demonstration, but hardly a case of AI.

  11. I have spent the past three years of my life trying to develop a "fully functional A.I. program." Like the author, I have also become doubtful that my original idea, as conceived, will happen in the next decade at least. It seems like the future of A.I. is things like Watson, things like Google Goggles, things like the ever-evolving machine translation tools we have out there.

    Will we have Hal or R2D2 in the next 30 years? I am beginning to think not. But will we have ever-improving voice recognition, and problem-specific tools that surprise us with their accuracy and usefulness? I'm excited to say, "Yeah!"

    • CyberMatt has it about right.

  12. "Surely this" what, Colby? Finish your sentences.

  13. We're still "throwing computer power at such tightly delimited tasks" because we're still really bad at them. The unstated hypothesis behind NLP as a subgoal of AI is that languages are codecs for logics, and that you can "crack" the code without general intelligence. For instance, if I say to you "The blag that troved the frump in 1564 also yurted the dongle in 1602." you don't need to know anything about the real-world referents of those words to understand trove(blag_a, frump, year=1564)&yurt(blag_a, dongle, year=1602). You map the sentence into a logical form without knowledge of its content. There are lots of ambiguities, but the hypothesis is we just need word co-occurrence statistics and maybe a concept ontology to find the few viable interpretations. (I've put a toy sketch of this mapping at the end of this comment.)

    Now, you could argue that our lack of success at this task is evidence that the hypothesis is wrong and NLP is non-viable without Artificial General Intelligence (or in other words, NLP is "AI complete"). Personally I don't believe that, which is why I've devoted the last six years of my life to the problem, with the intention of continuing on until it's solved, I'm convinced it's insoluble, or I die. I just think we haven't solved it yet because it's really that hard.

    We're definitely making progress, though. Watson _does_ use symbolic reasoning, parsing the sentences into logical forms and reasoning with them. And it does some impressive processing to crunch web text and Wikipedia text into a usable database of facts. But it also uses an "ensemble" of methods, and at the moment it's unclear which ones contribute how much to success.

    If Jeopardy! were structured differently, maybe we'd have had a draw or a human victory. I think Ken and Brad almost certainly knew a higher percentage of correct answers to the questions than the machine. But the machine always wins the tie-breaker (the buzzer) when both contestants know an answer, so when it gets to a certain point it runs away with the competition (the little simulation at the end of this comment shows the effect).

    The point is that crossing that threshold where you can answer the questions really is that difficult, could not be done before, and is a useful subgoal. I look forward to reading the papers that come out of this research and expect them to have made some worthwhile incremental progress.
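    To make the "codec" hypothesis concrete, here's a toy sketch of mapping that nonsense sentence into the logical form I gave. It uses nothing but surface patterns, and the crude de-inflection table is obviously a cheat; a real parser handles vastly more structure and ambiguity:

      import re

      SENTENCE = ("The blag that troved the frump in 1564 "
                  "also yurted the dongle in 1602.")

      # One surface pattern: "The X that V1ed the Y in YEAR also V2ed
      # the Z in YEAR." No knowledge of what the words mean is needed.
      pattern = re.compile(
          r"The (\w+) that (\w+) the (\w+) in (\d{4}) "
          r"also (\w+) the (\w+) in (\d{4})\."
      )
      subj, v1, obj1, y1, v2, obj2, y2 = pattern.match(SENTENCE).groups()

      def lemma(verb):
          # Hopelessly crude de-inflection, enough for this one sentence.
          return {"troved": "trove", "yurted": "yurt"}.get(verb, verb)

      print(f"{lemma(v1)}({subj}_a, {obj1}, year={y1}) & "
            f"{lemma(v2)}({subj}_a, {obj2}, year={y2})")
      # trove(blag_a, frump, year=1564) & yurt(blag_a, dongle, year=1602)

    And the buzzer effect is easy to simulate with made-up rates. Even granting the humans a higher knowledge rate, a machine that wins nearly every buzz-off takes most of the board:

      import random

      # Invented parameters: the humans know 90% of answers, the machine
      # only 85%, but the machine wins 95% of races when both buzz in.
      HUMAN_KNOW, MACHINE_KNOW, MACHINE_BUZZ = 0.90, 0.85, 0.95

      def machine_share(n_clues=100_000):
          won = 0
          for _ in range(n_clues):
              human = random.random() < HUMAN_KNOW
              machine = random.random() < MACHINE_KNOW
              if machine and (not human or random.random() < MACHINE_BUZZ):
                  won += 1
          return won / n_clues

      print(f"machine answers ~{machine_share():.0%} of clues")  # ~81%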
