Thursday, August 18, 2016

The coming AI phase change

How long will it be until we have real honest-to-goodness general human-level AI? This question is of at least some interest to a futurist (if for no other reason than that smart AIs might put futurists out of a job :-) ).

There are several polls of experts out there, summarized here, which can be summed up by saying the median guess is in the 2040s. (One quickie poll most current commentators missed was Wired's Reality Check in 1996, which predicted a C-3PO-like robot in 2047. The Macmillan Atlas of the Future (1998) calls for 2035.)

On the other hand, some people, e.g. Robin Hanson, think it will be much longer. Hanson informally surveyed various AI researchers and came to the conclusion that we still have a couple of centuries more work to do at the least.

At the other extreme, Ray Kurzweil famously claims (with tongue at least a little in cheek, one presumes) that it will be exactly 2029. He points out that progress in a field like this is not linear, and gives the Human Genome Project as a recent, well-documented example of dramatic acceleration in terms of a naive figure of merit.
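
As a back-of-the-envelope illustration of the kind of arithmetic Kurzweil has in mind (the numbers below are invented for illustration, not actual Human Genome Project figures): a project that is only 1% complete after seven years looks hopeless by linear extrapolation, but if its annual output doubles every year, it finishes in about seven more.

    # Toy model of exponentially accelerating progress. The starting fraction
    # and the doubling rate are illustrative assumptions, not real HGP data.
    done = 0.01       # fraction of the goal complete at the seven-year mark
    years_more = 0
    while done < 1.0:
        done *= 2     # annual output doubles each year
        years_more += 1
    print("finished after", years_more, "more years")   # -> 7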

What is the chance of a dramatic acceleration in AI development? Do we even have a usable figure of merit for AI? How about IQ? Unfortunately, IQ was invented as a tool for tracking human intellectual development--the Q originally meant the quotient of mental to chronological age--so it seems likely that by the time we have a machine for which it makes any sense at all to say it has an IQ, we will basically have already succeeded at the AI task.
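
For reference, the original quotient definition was simply

    \mathrm{IQ} = 100 \times \frac{\text{mental age}}{\text{chronological age}}

a ratio that only means anything against a normal human developmental timetable--exactly the thing a machine doesn't have.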

I think that to some extent AI is more like the great conceptual breakthroughs, such as building a flying machine. The hard part of developing working airplanes was not the solution of any of the various technical problems. It was understanding what the problems were. Everybody and his dog Muttley knew how to build a machine with lift which would get off the ground. It took the Wright brothers, with the point of view of bicycle makers, to realize that the problem was balance and attitude control. Once they succeeded, virtually nobody copied their technical solution (wing warping); ailerons quickly became the standard method. But beforehand, control was an "unknown unknown."

A decade ago in Beyond AI, I took up cudgels against those who were predicting a major takeoff in artificial intelligence by virtue of a self-improving super-AI. Long before that happens, I said, we will see something a bit more mundane but perfectly effective: AI will start to work, and people will realize it, and lots of money, talent, and resources will pour into the field. “... it might affect AI like the Wright brothers' Paris demonstrations of their flying machine did a century ago. After ignoring their successful first flight for years, the scientific community finally acknowledged it; and aviation went from a screwball hobby to the rage of the age, and kept that cachet for decades. In particular, the amount of development effort took off enormously.” That will produce an acceleration of results, which will attract more money, and there's your feedback loop. The amount of money going into aviation before 1910 was essentially nil (Langley's grants to the contrary notwithstanding). Once people caught on that airplanes really worked, though, there was a sensation and a boom. By the end of the 1920s, Pan American was flying scheduled international flights in the 8-passenger Ford Tri-motor. The ensuing exponential growth in capabilities continued unabated right up to the Sixties.
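
A minimal toy model of that money-and-results feedback loop (every parameter below is made up purely to show the shape of the curve, not to predict anything) quickly takes on the familiar accelerating shape:

    # Toy model of the funding <-> results feedback loop sketched above.
    # The starting values, the 0.3 results-per-unit-funding factor, and the
    # 0.5 reinvestment factor are invented for illustration only.
    funding = 1.0        # arbitrary units of annual investment
    capability = 1.0     # arbitrary figure of merit (the unlabeled Y-axis)
    for year in range(1, 11):
        results = 0.3 * funding           # visible results scale with spending
        capability += results
        funding *= 1 + 0.5 * results      # results attract proportionally more money
        print(year, round(capability, 2), round(funding, 2))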

Hanson points out that pouring more money into the field might not accomplish much:
Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.
This is quite true, but there is another reason to think that we might be on the cusp of a phase change into a higher-growth mode in AI research: it's moving out of academia and into industry. There's more emphasis on getting results and less on looking clever. I've been in both, and I can personally attest: academia is a lot more fun, but in industry you get a lot more done that is useful.
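
Hanson's point can be made concrete with the textbook diminishing-returns assumption that useful progress scales as the number of researchers raised to some power less than one (the exponent of 0.5 below is an arbitrary illustrative choice, not an empirical estimate):

    # Diminishing returns to research labor: progress ~ N**alpha with alpha < 1.
    # alpha = 0.5 is purely illustrative.
    alpha = 0.5
    for n in (100, 200, 400):
        print(n, "researchers ->", round(n ** alpha, 1), "units of progress")
    # Doubling from 100 to 200 researchers buys only about 1.4x the progress, not 2x.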

In November 2005, attending an AAAI symposium, I found myself in casual conversation with one of the other artificial intelligence researchers at the reception. One of the hot topics of conversation in AI circles just then was the DARPA Grand Challenge. The previous month, five autonomous vehicles—self-driving cars and trucks—had successfully completed a grueling 131.2-mile course in competition for a $2 million prize offered by DARPA, the defense department's research agency. This was a major advance in the state of the art, since the previous Challenge, held just a year and a half earlier, had been a complete failure, with the best vehicle only managing to go 7.3 miles.

I had remarked as much to my AAAI friend, and he demurred. The apparent advance, he insisted, consisted of nothing but new ways of combining existing sensory, control, and navigation techniques. That seems to be a fairly common ivory-tower way of seeing things.

But that, of course, is exactly what the vast majority of actual technological progress consists of. And the Grand Challenge results show graphically what kind of a difference it can make in the real world. However much a specialist may recognize all the parts and elements of a new machine from earlier efforts, what the world at large notices is whether or not it works. And in the case of self-driving cars, a major watershed was crossed between March 2004 and October 2005.

But there was no Clever New Trick that would have excited an academic, nothing that even an active AI researcher recognized as an advance.

The money makes a big difference too. I would guess that a machine that could run a human-equivalent AI in real time would cost about $1 million today. One reason that AI progress has been slow and roundabout is that academic researchers were (a) underfunded and (b) sidetracked into finding ingenious solutions that ran on way-too-skimpy hardware, but which were brittle. In the brain, many things are done by brute force and are, as a result, robust.

All of which we may now be shifting away from. Major feedbacks seem to be in place, and I surmise that we might be kicking into an exponential growth mode. The only problem is, I still don't know how to label the Y-axis.

1 comment:

  1. Hi, Josh. I have a problem with the more optimistic estimates of the arrival date of human equivalent AI which I've not seen discussed, and which I'd appreciate your thoughts on.

    In order to get anywhere near human level digital intelligence, it seems clear that this cannot simply be programmed into a computer in the old-fashioned way, but must rely on autogenous machine learning, as you described in "Beyond AI" (particularly ch.8), thus on the machine being able to form new concepts off its own bat, thus to some extent rewriting and expanding its own software.

    But this is intrinsically a non-linear process which a human engineer cannot design; he can presumably only set the machine off in the right direction and rely on its own abilities to cope with what it finds, as one would do when raising a human child.

    Therefore human level intelligence cannot be designed into a system, it must emerge from a suitably versatile and responsive system developing itself or evolving in a learning environment.

    Given the range of human responses to this situation, it seems clear that we could end up with a genius, or with a psychopathic killer, or with a schizophrenic or a sufferer from dementia. I don't see that the human designers would necessarily have much control over the outcome, except the purely evolutionary one of selecting the better variants for the next round of tests, and deleting the worse ones. This could proceed faster than biological evolution, of course, but it would be hard to say how much faster.

    In other words, I wonder whether the extreme complexity of what is being attempted, and hence the large number of potential failure modes, might place a brake on progress which becomes progressively stronger the higher the level of intelligence being aimed at. (Replacing the expected Singularity with something more like a Plateau.)

    My principal interest is in space exploration, which is often grossly over-hyped (today everybody and his dog Muttley wants to go to Mars, and claims they can do it within the next ten years). So I've become very sensitive to anything that sounds like empty hype, and very aware that progress seems to be decelerating as the landscape of the future turns out to be so much more complicated and chaotic than it looked from afar.

    I wonder how you would view this question?

    Thanks.

    Stephen
    Oxford, UK
