I’ll take “Cheap Publicity Stunts” for $1,000, Alex

IBM’s assault on Jeopardy! isn’t a triumph for artificial intelligence. It’s an embarrassment.

Having lived through the hype over IBM’s 1997 Deep Blue challenge to human chess players, I find myself intensely irritated at IBM’s 2011 assault on Jeopardy! The Globe’s tech reporter leads off his rumination with “On the surface, it has all the makings of a gimmick…”. So did Deep Blue; but let it be recalled that in the fullness of time, after public quarrels and investigative reports and documentaries allowed us to attain a historical perspective, the project actually turned out to be…a gimmick.

IBM didn’t exactly cheat in the Deep Blue showdown, but the company refused to let Garry Kasparov study the computer’s games the way he could have for a top human opponent. When Kasparov nonetheless figured out how to lead the computer into traps by studying tactical weaknesses of artificial intelligence, the company, fearing for its prestige, brought in human chess masters—ringers—to tweak the program’s position-evaluation algorithm and prevent an awkward defeat. Ken Jennings is joining battle, not with an artificial mind, but with a coterie of corporate drones to whom sportsmanship comes second.

The general arc of computer-chess development, and the perpetually disappointing history of AI, were largely unaffected by the Deep Blue-Kasparov contest. Indeed, the main influence of the exhibition was probably the way it intensified research into anti-computer chess styles. Human-versus-computer competition basically reached a stalemate after 2002’s 4-4 draw between Vladimir Kramnik and Fritz, in which the inherent intellectual limitations of the machine and the physiological and nervous ones of the man more or less ended up cancelling out.

Every article about Watson, IBM’s Jeopardy!-playing device, should really lead off with the sentence “It’s the year 2011, for God’s sake.” In the wondrous science-fiction future we occupy, even human brains have instant broadband access to a staggeringly comprehensive library of general knowledge. But the horrible natural-language skills of a computer, even one with an essentially unlimited store of facts, still compromise its function to the point of near-parity in a trivia competition against unassisted humans. Surely this isn’t a triumph for artificial intelligence, or for IBM, so much as it is a self-administered black eye?

Jeopardy!, after all, doesn’t demand that much in the way of language interpretation. Watson has to, at most, interpret text questions of no more than 25 or 30 words—questions which, by design, have only a single answer. It handles puns and figures of speech impressively, for a computer. But it doesn’t do so in anything like the way humans do. IBM’s ads would have you believe the opposite, but it bears emphasizing that Watson is not “getting” the jokes and wordplay of the Jeopardy! writers. It’s using Bayesian math on the fly to pick out key nouns and phrases and pass them to a lookup table. If it sees “1564” and “Pisa”, it’s going to say “Galileo”.
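To make that concrete, here is a toy sketch in Python of the kind of move just described: extract the key tokens from a clue and rank stored answers by naive-Bayes-style evidence. The fact table and probabilities are wholly invented for illustration, and this is nothing like IBM’s actual DeepQA architecture.

```python
# Toy illustration only: a hypothetical fact table and naive-Bayes-style
# scoring, not a description of Watson's real pipeline.
import math
import re

# Hypothetical table: answer -> P(token appears in a clue about that answer)
LIKELIHOOD = {
    "Galileo":     {"1564": 0.8, "pisa": 0.7, "telescope": 0.6},
    "Shakespeare": {"1564": 0.8, "stratford": 0.7, "playwright": 0.6},
}
MISS = 0.01   # floor probability for tokens an answer isn't associated with
VOCAB = {t for likes in LIKELIHOOD.values() for t in likes}

def rank(clue: str) -> list[tuple[str, float]]:
    # Keep only the "key nouns and phrases" the lookup table knows about.
    toks = {t for t in re.findall(r"[a-z0-9]+", clue.lower()) if t in VOCAB}
    scores = {
        ans: sum(math.log(likes.get(t, MISS)) for t in toks)
        for ans, likes in LIKELIHOOD.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

# "1564" and "pisa" jointly point at a single row, so Galileo tops the list.
print(rank("Born in Pisa in 1564, he pointed a telescope at Jupiter"))
```

The point of the toy is that no understanding of the sentence, let alone the joke, is required; co-occurring keywords do all the work.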

So why, one might ask, are we still throwing computer power at such tightly delimited tasks, ones that lie many layers of complexity below what a human accomplishes in having a simple phone conversation? The Globe’s Omar el Akkad tells us, in a sidebar, that the University of Alberta’s world-leading poker software “can beat pretty much the best”…but in a two-player limit game, i.e., an unrealistically pure test of odds calculation that is to no-limit hold ’em what a grade-school track meet is to a Formula 1 race. (The roots of that U of A research program go back almost 20 years.) Meanwhile, “Computer chess players can now beat all but the very best humans”—but that was more or less the state of affairs already attained in 1997 when Kasparov fought Deep Blue. And the obliteratingly total lack of progress toward the gold and silver Loebner Prizes (annual implementations of the famous Turing test) is such an embarrassment that the jury has been quietly adjusting the bar from year to year to keep things interesting.
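For a sense of what “pure odds calculation” means here, the core arithmetic of a limit-poker decision fits in a few lines of Python. The numbers below are the textbook flush-draw count, offered as an illustration of the genre, not as anything drawn from the U of A program.

```python
# The raw odds arithmetic at the heart of limit poker: the chance of
# completing a four-card flush draw with two cards to come.
from math import comb

outs, unseen = 9, 47  # 9 suited cards left among the 47 unseen after the flop
p_miss = comb(unseen - outs, 2) / comb(unseen, 2)  # both remaining cards miss
print(f"Flush by the river: {1 - p_miss:.1%}")     # ~35.0%
```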

El Akkad’s claim is that “Scientists, engineers and entrepreneurs keep pushing the boundaries of artificial intelligence”, but it would almost certainly be more accurate to state that, as Hubert Dreyfus predicted, they keep smacking into those limits without ever breaking through to the accurate imitation of mindlike activity. Dreyfus is, professionally, a specialist in incomprehensible European nonsense; but he was for decades the leading figure among artificial-intelligence pessimists, and his career has effectively been a long series of successful bets against fast AI development. It is rare for a philosopher to be able to claim strictly scientific falsifiability grounds for a finding, but Dreyfus and other AI skeptics arguably can.
