
The Myth of Artificial Intelligence


Authors: Frederick E. Allen

Historic Era: Era 10: Contemporary United States (1968 to the present)


February/March 2001 | Volume 52, Issue 1

 

Marvin Minsky, the head of the artificial intelligence laboratory at MIT, proclaimed in 1967 that “within a generation the problem of creating ‘artificial intelligence’ will be substantially solved.” He was cocky enough to add, “Within 10 years computers won’t even keep us as pets.” Around the same time, Herbert Simon, another prominent computer scientist, promised that by 1985 “machines will be capable of doing any work that a man can do.”

That’s hardly what they’re saying nowadays. By 1982, Minsky was admitting, “The AI problem is one of the hardest science has ever undertaken.” And a recent roundtable of leading figures in the field produced remarks like, “AI as science moves very slowly, revealing what the problems are and why all the plausible mechanisms are inadequate,” and “Today, it is hard to see how we would have missed the vast complexities.” How did we come—or retreat—so far?

It all began in 1950, when the British mathematician Alan Turing wrote a paper in the journal Mind arguing that to ask whether a computer could think was “too meaningless to deserve discussion,” but proposing an alternative: a test to see if a computer could maintain a dialogue in which it convincingly passed for human. He predicted that “in about fifty years’ time it will be possible … to make [computers] play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.” It was a time when computers were new and magical and seemed to have limitless possibilities. Turing ended his paper with “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

Plenty of people were ready to do it too. In 1955, Allen Newell and Herbert Simon, at the RAND Corporation, showed that computers could manipulate not just numbers but symbols for anything, such as features of the real world, and therefore could handle any kind of problem that could be reduced to calculation. They then went to work on a General Problem Solver that could resolve any kind of difficulty susceptible to rules of thumb such as humans were generally believed to use. They gave that up as overambitious in 1967, but before then their work had helped inspire a host of other undertakings, the main ones at the lab at MIT under Minsky, where, in 1965, a researcher named Terry Winograd developed a program that could move images of colored blocks on a computer screen in response to English-language commands. People also worked on programs to hold ordinary conversations, as Turing had suggested, and they saw many early signs of promise.

By the 1970s, the young field was running into trouble. Nobody could come close to making a computer understand the sentences in a simple children’s story with the comprehension of a four-year-old. As researchers reached dead ends, they began to