Thursday 24 February 2011

AI Is Creeping Up On You

In movies, Artificial Intelligence, aka AI, always arrives with a bang.
The machines wake up, realize their power and immediately launch a nuclear holocaust or trap us in the Matrix or something similarly
unpleasant. I strongly suspect this will never happen. Instead, as the decades go by, we will increasingly be surrounded by AI at many levels – while vigorously insisting all through that it’s “no big deal”.

A milestone for artificial intelligence was achieved last week in a three-day Jeopardy contest held from February 14 – 16. For those unfamiliar with Jeopardy, it is a version of our beloved quiz contests, with some differences.
For one, the clues are often presented in deliberately convoluted language, frequently carrying more than one meaning. As a further twist, the quizmaster presents the question as an “answer”, and the contestant must present the answer as a “question”.
For instance, rather than asking “Who wrote Hamlet and Macbeth?” the host will say, “This is the author of Hamlet and Macbeth” and the contestant will answer “Who is Shakespeare?”
Rather than the straightforward scoring system of quizzes, each clue comes with a “dollar value”, which is added to or deducted from the contestant’s total depending on their answer.
There are also several “Daily Double” clues, where the contestant can wager a sum of money all the way up to their total “earnings” till that point.

The score of the contestant is the total amount of “money accumulated”.
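For the programming-minded, the scoring rules above can be captured in a tiny sketch. This is a toy model of my own, not anything official – the real show has further wrinkles (such as Final Jeopardy wagering) that I've left out:

```python
class Contestant:
    """Toy model of Jeopardy scoring as described above."""

    def __init__(self, name):
        self.name = name
        self.score = 0  # running total of "money accumulated"

    def answer(self, clue_value, correct):
        # A clue's dollar value is added for a right answer
        # and deducted for a wrong one.
        self.score += clue_value if correct else -clue_value

    def daily_double(self, wager, correct):
        # On a Daily Double, the contestant may wager any sum
        # up to their total "earnings" till that point.
        wager = min(wager, max(self.score, 0))
        self.score += wager if correct else -wager


player = Contestant("Watson")
player.answer(400, correct=True)    # score: 400
player.answer(800, correct=False)   # score: -400
```

So a contestant who answers boldly and wrongly can easily end up in negative territory – part of what makes the wagering strategy interesting.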


The tournament last week featured two superstars of the Jeopardy world – Brad Rutter, the biggest all-time money winner on the show, and Ken Jennings, record holder for the longest championship streak.

But the spotlight was on the non-human entrant, Watson – a supercomputer designed by IBM running natural language processing software.
The clues were sent to Watson as a text message at exactly the same time they were made visible to the other contestants. Watson would have to unravel the language in the clue, find the answer, and press the buzzer before the other contestants did to have a chance at scoring.
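Watson's real architecture (IBM calls it DeepQA) involved hundreds of parallel analysis and scoring algorithms, but the basic loop – parse the clue, rank candidate answers, buzz only when confident – can be caricatured in a few lines. Everything below (the keyword matching, the toy knowledge base, the confidence threshold) is my own invented stand-in, not how Watson actually worked:

```python
def parse_clue(clue):
    # Toy stand-in for natural language analysis:
    # just lower-case the clue and split it into words.
    return clue.lower().rstrip("?.!").split()


def score_candidates(keywords, knowledge):
    # Rate each known entity by keyword overlap with the clue.
    # (Watson combined hundreds of evidence-scoring algorithms;
    # this uses exactly one, and a crude one at that.)
    return {entity: len(set(keywords) & facts)
            for entity, facts in knowledge.items()}


def respond(clue, knowledge, confidence_threshold=2):
    keywords = parse_clue(clue)
    scores = score_candidates(keywords, knowledge)
    best = max(scores, key=scores.get)
    if scores[best] >= confidence_threshold:
        return f"Who is {best}?"  # confident enough to buzz in
    return None                   # stay silent rather than guess


# A two-entry toy knowledge base.
knowledge = {
    "Shakespeare": {"author", "hamlet", "macbeth", "plays"},
    "Newton": {"gravity", "calculus", "apple", "physicist"},
}

print(respond("This is the author of Hamlet and Macbeth", knowledge))
# -> Who is Shakespeare?
```

The confidence threshold is the crucial bit: buzzing in with a wrong answer costs money, so a system that knows when it *doesn't* know is worth far more than one that always guesses.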


The first day of the match on Feb 14 ended with Watson and Rutter tied at $5,000, with Jennings trailing at $2,000. The internet was abuzz with theories about how the champions were merely “warming up” before trouncing the machine over the next two days.
All such speculations were crushed on Day Two, which ended with Jennings at $4,800, Rutter at $10,400… and Watson massively ahead with $35,734!

The final day ended with Jennings at $24,000, Rutter at $21,600 and Watson at $77,147 – a thoroughly convincing victory.

The answer to “Jeopardy world champion” is now “Who is Watson?”


Apart from the immense entertainment, a pleasant aspect of the program was the graceful acceptance of defeat by the humans. The affable Ken Jennings even quipped, “I, for one, welcome our computer overlords.”


This was a marked contrast to the acrimonious ending of a similar Man vs. Machine event fourteen years earlier, when the IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov in a six-game match played in 1997.

Kasparov proved to be a very poor loser – storming away after the last game, being conspicuously absent at the prize distribution ceremony and accusing the IBM team of cheating.
IBM retaliated by refusing a re-match and decommissioning Deep Blue.

The whole episode remains mired in controversy and bad feeling.


Computer chess advanced considerably over the next decade.

In November 2006, the reigning world champion Vladimir Kramnik played Deep Fritz.

In contrast to Deep Blue which was specially designed software running on a customized supercomputer, Fritz was a commercially available program running on a high-end laptop.

Nevertheless, the computer won the six-game match with 2 wins and 4 draws.


Since then, interest in human-computer chess matches has waned. Though not proved by actual play, it is quietly acknowledged that today’s best chess programs like Rybka running on a supercomputer would trounce any human chess player.


Lame Excuses

What amuses me about both the Watson and Deep Blue incidents is the subsequent proliferation of excuses from the human side for why these incidents were “nothing special” and “not really artificial intelligence”. The excuses fall into roughly four categories, which I list below in decreasing order of silliness, along with my responses.


Excuse 1:
“Deep Blue and Watson were both supercomputers with top end hardware. So it’s no big deal that they could do what they did”
Response:
And your point is? I can similarly imagine a rabbit saying, “It’s no big deal that humans are so intelligent, given their big brains and all.”
The power of the hardware is part of what makes the system impressive. I agree that Watson wouldn’t have won if it was running on a laptop, but I can bet you that Jennings wouldn’t do too well after a frontal lobotomy either.

Also note how quickly we jump from “A computer can never do X” to “It’s no big deal that a computer can do X”!!


Excuse 2:
“The computer isn’t really thinking. It is only doing what its program tells it to do”
Response:
This is in strong competition for the silliness top spot.
If a human had beaten the world chess champion, would you have agreed that he or she was thinking?

Conversely, why not argue that when Kasparov plays “he isn’t really thinking. He is only doing what the firing of neurons in his brain tells him to do”?


Excuse 3:
“The computer has no credit in this. The credit belongs entirely to the humans who programmed it”
Response:
No wait, it’s not the credit of the programmers at all, but of the genes and environment that shaped their brains. No wait, actually all credit is due to the process of evolution which shaped those genes. No wait…
See how this goes?

My point is, if we follow any consistent standard for giving credit, we should certainly congratulate the programmers for designing Watson or Deep Blue, but after that we must credit the systems for their subsequent performance.


Excuse 4:
“Computers may be able to play chess and win Jeopardy, but they cannot invent new technology or compose music or *fill in the blanks*”
Response:
The sentence above is missing a “Yet” at the end.

One must remember that the first ‘computers’ in society were not machines, but a group of people, mostly women, working in science laboratories. They were so called because of their ability to perform complex arithmetic accurately and repeatedly – an ability much valued and taken to indicate great mental stamina.

Fifty years ago, anyone would have agreed that playing chess well required intelligence, and a high degree of intelligence, at that.

Talking computers which understand language have traditionally been science-fiction territory – a hallmark of intelligent machines and droids of the far future.


But every time real computers reach one of these milestones, the significance of the event is denied and the bar of “true intelligence” reset several notches higher.

The current list of “what computers can never do” includes “appreciating poetry” and “falling in love”. True, perhaps, but the question is, do they need to?

The goal of AI is not to create artificial humans, any more than the goal of aircraft designers is to create a machine which flaps its wings and lays eggs.


I personally believe that Artificial Intelligence will not take the form of an all-encompassing, godlike Supermind, so beloved of science fiction authors and fans.
Instead, as the centuries roll on, we will see a proliferation of specialized applications tailored to specific tasks – applications that we would definitely call intelligent, but our descendants may not.
Ultimately, the only remaining special feature of human intelligence may be the ability to invent excuses for why we are special!

11 comments:

  1. Have you heard about technological singularity?

    ReplyDelete
  2. Interesting! Yes, AI might creep up on humanity. It happened with Chess, but in a sense chess programs are still not commoditized. Robbolito and other free programs can beat the strongest humans, but apparently there is still only a handful of humans capable of writing such advanced programs.

    It's quite possible that Watson-style AI will become even more widespread. As I said in my post, I think Google and other companies will provide commercial versions. Of course, it's possible that what runs on PCs will be a client, with the actual AI algos running in the cloud.

    I think a lot of the debates about "true" AI stem from a poor definition of intelligence. What constitutes intelligence is a quantitative question rather than qualitative, I think. The cutoff for adaptability and sophistication of algorithm to be called intelligent is arbitrary -- in that sense, "i = 0; while (i < 100) i++;" is an example of primitive intelligence! AI really started the day we began programming computers (and not necessarily digital ones).

    But I think what the AI community usually means when they talk about AI does not include chess, and does include natural language processing.

    ReplyDelete
  3. Hey... nice post... you know, a couple of days back I was reading about a (Japanese) robot that will take care of elderly people (I don't exactly know the details), and Japanese people are very optimistic about it. The interesting part was that there was a guy commenting on the cons of such a robot and its effect on society. I found it really funny and meaningless to think like that, and that exactly explains your point: "Ultimately, the only remaining special feature of human intelligence may be the ability to invent excuses for why we are special!"

    "One must remember that the first ‘computers’ in society were not machines, but a group of people, mostly women, working in science laboratories"

    and hey I feel so proud about it :)

    ReplyDelete
  4. Nice post. The backstory, however, is that AI has failed. What is now "AI" is, as you rightly pointed out, very specific domain intelligence. And so it is likely to remain.

    But the dream of the early AI researchers was true intelligence, artificial only in that it was not human. That dream is pretty definitely dead, at least within the field, regardless of what Hollywood might think. Like so many other things, reality turned out to be a lot more complex than anyone envisaged. It turned out that modeling a domain is relatively easy. With enough computational power, a mathematically describable system - chess, perhaps even Go - is eventually conquerable.

    But building a model that will in turn model anything in the physical universe is hard. In fact, no one has the slightest clue how to do it. Nature did it, although it took billions of years and many iterations, and the end product tends to be very bad at many things. Including, curiously enough, mathematical models. In fact the majority of people on this planet appear to be deficient in simple arithmetic. But then again, that may not have anything to do with intelligence.

    Makes you wonder: perhaps we should question our definition of intelligence. Is the machine model of mind really accurate? We find logic and precise computation hard, so we think it equates to intelligence. But is that really true? It takes more computational firepower to catch a ball thrown at us than to work through a hundred problems in axiomatic logic. And that we can do instantly. Is intelligence about processing inputs and computations, state machines?

    Or are we missing something?

    ReplyDelete
  5. Sunil:
    The technological Singularity is the idea that AI will arrive with a huge bang and change society literally overnight.
    I am suggesting here that instead, it might just creep up all around us, while we think it's all "life as usual".
    -----------------

    Rajeev:
    How do AI practitioners draw the line?
    Isn't natural language processing just a lot more computation than chess?

    ReplyDelete
  6. Sean:
    Lots of food for thought.

    Let's see:

    First off, the laws of physics are algorithmic at the level of biological molecules.
    So, we could potentially model a human brain (or nervous system) cell by cell with a very computationally expensive algorithm.

    Unless one believes that there's some "non-physical" component to the workings of the brain, this system should be able to do any processing the brain does.
    So, this would be a (very inefficient) algorithm that replicates human intelligence.

    Now regarding another point of yours:
    Is the brain really "a model that can model anything in the physical universe"?
    Or just a very large group of domain-specific models with some interlinking and feedbacks?

    From the little I know of cognitive neuroscience, our intelligence is proving to be more a large bunch of "cognitive modules" than a "Universal Turing Machine" (or more).

    In which case, AI has a chance.

    The original promises of the community proved to be overly grandiose.
    But hopefully, the lesson in humility has been learned, and the current bottom-up approach will yield some dividends.

    ReplyDelete
  7. Anindya:

    Trying to define AI is a quagmire. No simple definition is possible. But we can identify components. For example, one rather arbitrary element is this: you could say that if it's possible for a single human to program an algorithm without access to real-world data, it's not AI.

    As you say, NLP is more complicated than Chess. (That's not the only important difference -- NLP is not a well-defined problem, Chess is. Also, it depends on the type of Chess program. Self-teaching chess programs can get more complicated than some NLP programs.)

    But you could equally well argue that natural language is just a lot more computation than counting the number of zeroes in an array of integers. Would you call that AI then?

    That's why I think it's a matter of agreeing on a cut-off. Not a formal, well-defined cut-off, but an "I'll recognize AI when I see it" cut-off.

    ReplyDelete
  8. Nice post with interesting logic. But someone is really deprived of due credit!

    ReplyDelete
  9. Anindya,

    Good talking to you after all these years :) reminds me of conversations when we were kids.

    I agree with both your points, good analysis.

    with respect to the first (laws of physics are algorithmic at the molecular level) and also perhaps to the second (interconnections v. TM): there may be some additional things to consider.

    (1) we don't appear to have a science that can effectively analyze emergent properties of systems. we can barely analyze emergent properties of simple systems (such as conway's game of life - strange things happen when interactions persist over time, but the only perfect way to compute is to actually iterate - there is no analytical "shortcut"). that raises the question of whether it's even possible for complex systems. are there "laws" at a higher level than molecules but at a lower level than biologically describable units? is "intelligence" an emergent property? is thinking in terms of molecules thinking at the right level for intelligence?

    (2) And that also relates to the point about interconnections among "cognitive modules". perhaps the interconnections are more important than the modules? perhaps intelligence lies in the interactions (which may or may not be subject to precise laws or fuzzy predictive analysis - but perhaps, as you pointed out, we can build a replica and actually compute a result) (but but: doing the computation isn't really adding to the sum of human knowledge - we wouldn't *understand* much more about the nature of intelligence by doing that than by simply having a baby, a more natural way of creating the same intelligence than modeling neurons with circuitry)

    which leads to another observation about machine "intelligence" v. human intelligence. in machine intelligence as we see it, in even the most sophisticated AI programs, all of the interactions that take place - all the way from the electrons in the circuits through computer code that executes - are ultimately comprehensible and explainable in a vocabulary people understand. but if we implemented human intelligence in a silicon model, in some giant computation, we still wouldn't "understand" how the intelligence observed emerged - and the intelligence would perhaps not either (being bound to the limitations of the human mind it is perfectly modeling)?

    ReplyDelete
  10. Hi Sean, Rajeev,
    Looks like this post has attracted the attention of my two biggest computer-buff friends from school and college days respectively. :)

    Rajeev:
    That's why I think it's a matter of agreeing on a cut-off. Not a formal, well-defined cut-off, but an "I'll recognize AI when I see it" cut-off
    -------------------------

    Yep, and that's why I get frustrated when the goalpost for "true AI" keeps getting shifted.

    I wonder to what extent we confuse "intelligent" with "human-like".
    Perhaps a chess-playing computer that responds to human speech will be seen as more intelligent than one which actually composes music but does not process language.

    It'll be interesting to see how perceptions about AI change when robots that "show emotion" - like Kismet - are perfected.

    ReplyDelete
  11. Sean:
    ----------------------------
    we don't appear to have a science that can effectively analyze emergent properties of systems. we can barely analyze emergent properties of simple systems
    ----------------------------

    Sometimes I think this may just be a limitation of human intelligence.

    We say we understand a system when we can analyse it in terms of a "small number of variables", so to speak.
    So far we have been lucky - physics would never have gotten off the ground if you had to take into account the motion of every star in the galaxy to describe the fall of an apple!
    But who knows - maybe in some cases you just *have* to use a zillion interacting parts and see things "just happen".

    The same issue is showing up in math as well.
    The Four Colour theorem was finally proved some decades ago. But people hated the proof because a large part of it was checking several million cases by computer, and many still believe that there must be a "simpler principle" involved.

    Maybe, but it is plausible that the simplest proof of some result could really involve checking a billion cases.
    And if our brains worked at supercomputer speed, that would be perfectly fine!

    ReplyDelete