Thursday, September 10, 2020

AGI: What, how, and when?

 Recently, Robin Hanson posted the following predictions:

 AGI isn’t coming in the next thirty years. Neither are Moon or Mars colonies, or starships. Or immortality. Or nano-assemblers or ems. ... So if you are looking for science-fiction-level excitement re dramatic changes over this period, due to a big change we can foresee today, you’ll be disappointed.

I agree about starships and immortality, and have different probabilities for some of the other things. But I replied (on his Facebook page) as follows:

We will almost certainly have AGI in 30 years, and probably in 10. A teleworker directing a machine looks, from the outside, a lot like an AI running it. The big tech issue of the next decade or two will be working out how the two can be seamlessly integrated as the balance shifts from telehumans to pure machine. This applies as much to Rosey the Robot as to self-driving trucks.

To which he rejoined:

Care to bet on that "probably in 10" claim?

and me:

Give odds and a definition.

Robin:

AGI could take away most jobs from humans, right? So what odds do you give to >50% unemployment rate over year 15 from now?

me: 

Wrong. To displace humans, AI would have to be cheaper, and that was not part of the "probably in 10" claim. Furthermore, there isn't necessarily a fixed amount of work to be done; instead of doing the same with fewer humans, we could simply make and accomplish more things. Like colonies on the moon, etc. But again, that's a prediction not of what will be technically feasible, but of whether we collectively want to do it. And again, not part of my claim.

More directly to the point, I do not accept ">50% unemployment rate" as a definition of "AGI exists."

Robin:

... but do you really think AGI could exist yet stay expensive for a long time? I'm much less interested in when it "exists" than when it is useful.

Me:

You might want to read Matt Ridley's "How Innovation Works." AGI is going to come out of the confluence of a bunch of precursor technologies, and it will evolve, not be invented in a flash in some ivory tower. Telework is very likely one of the precursors; it will force people to learn how to break intelligent action into components, and then begin to try to automate some of the components. As this process goes forward, there will be niches where comparative advantage puts AI and others where it puts people. So AI will evolve toward being useful for people. But at the same time, people will evolve (culturally, not genetically, given the timeframe) toward being more compatible with AI.

Imagine you have a bunch of people working for you. At first, they are unskilled laborers. They save you time by mowing your lawn. But over time, they get smarter. They begin to be able to help with your research and writing. Now they are like graduate students. Pretty soon they are as smart as you are, but they still don't replace you; they still relieve you of work, and that simply allows you more scope, at a higher level of abstraction. Somewhere in there your job changes from doing X to managing robots that do X. But managers make more than workers; the robots have become more valuable, but so have you.

Tim Tyler puts in:

We were talking about 10 years from now. We will likely have some pretty sophisticated machine intelligence by then - in agreement with JoSH, but not Hanson - but machines won't be able to do all jobs more efficiently and more cheaply at that stage - and there likely won't be lots of unemployed humans knocking around either. Ricardo [a reference to the economic principle of comparative advantage] will not save the human race from obsolescence - but in 10 years, humans will still be needed.

Robin replies:

I agree AI will improve, but at the same rate we've seen for 50 years. The question is what YOU can mean by "AGI in ten years" if it isn't cashed out in what they actually do in society.

Tim Tyler:

JoSH said: "To displace humans, AI would have to be cheaper". He's talking about the existence of the machines, not their cost-competitiveness. There are other issues that may hinder adoption of intelligent machines: regulations, preferences for humans in some jobs, sensor/actuator progress, training time/cost, etc.

and Adam Ford asks:

J Storrs Hall it seems your picture of AGI doesn't require AI understanding. Is that correct?

Not sure if this question can be answered directly atm - if not, treat it as an intuition pump - how far can generality be taken without there being understanding in the agent?

Since generality is a spectrum, and not an ideal state, I'm not confident that if we did achieve generality in AI, it would quickly force humans out of most jobs.
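Since Ricardo came up in the thread, it's worth making the comparative-advantage point concrete. Here is a toy calculation in Python; the rates, the hours, and the "research"/"chores" split are made-up numbers purely for illustration, not a model of anything real. The point is just that an AI which is absolutely better than the human at everything still does best by handing the human the task where the human's disadvantage is smallest.

# Toy illustration of Ricardo's comparative advantage (made-up numbers).
# The AI is better at both tasks, but its edge is larger in research
# (10 vs 2 units per hour) than in chores (8 vs 4 units per hour).
AI_RATE    = {"research": 10, "chores": 8}
HUMAN_RATE = {"research": 2,  "chores": 4}
HOURS = 40            # hours per week available to each agent
CHORES_NEEDED = 160   # fixed weekly demand for chores

def research_output(human_task):
    """Research produced once the chores quota is met, given the human's role.
    human_task is "chores", "research", or None (human idle)."""
    human_chores   = HUMAN_RATE["chores"] * HOURS if human_task == "chores" else 0
    human_research = HUMAN_RATE["research"] * HOURS if human_task == "research" else 0
    # The AI covers whatever chores remain, then spends the rest of its time on research.
    ai_chore_hours = max(0, CHORES_NEEDED - human_chores) / AI_RATE["chores"]
    ai_research = (HOURS - ai_chore_hours) * AI_RATE["research"]
    return ai_research + human_research

print(research_output(None))        # 200.0  AI working alone
print(research_output("research"))  # 280.0  human put on the task it's relatively worse at
print(research_output("chores"))    # 400.0  human assigned by comparative advantage

With the human on chores, the AI never has to touch them and research output doubles relative to the AI working alone, even though the AI is better at chores too. That's the sense in which Ricardo keeps humans employed alongside better machines; as Tim says, it's no guarantee against eventual obsolescence, but it is why "AGI exists" and "most humans are unemployed" are different claims.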

So I thought I should compose a blog post putting together my thoughts on the subject of AI, AGI, how, what, and particularly when. I came to look at the blog and, lo and behold, I had already done a good part of that, four years ago: The Coming AI Phase Change

... which did a fairly good job predicting the kind of progress we've seen in AI in the interim. 

Robin's notion of "AI will improve, but at the same rate we've seen for 50 years" might well be summed up by the change in the state of the art between the famous Dartmouth AI workshop in 1956 and the 50th anniversary conference, also at Dartmouth, in 2006. I was there (for the second one), and so was Geoff Hinton, who gave a very nice talk on the history and future of "neural networks" and what has come to be called "deep learning" in the meantime. I can assure you that the rank and file of classical symbolic AI researchers very much thought of Hinton as a second-class citizen. But the actual achievements of AI since, ranging from AlphaGo to GPT-3, show us that Hinton had something they had missed.

I gave a paper at that conference too, which I mention because it goes to the answer of Adam's question above. AI had always meant "building a machine that could think like a human," but over the 50 years in academia it got subjected to so much grade inflation that it had come to mean essentially "the clever programming that captures some specific skill," such as playing chess or driving a car. By the 21st century various people, notably Ben Goertzel, had come to use "AGI" to try to recapture the original meaning. An AGI would be simply a human-level AI, one that could be expected to be able to, or be able to learn to, change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, and maybe even die gallantly.

More directly to the point, an AGI would be able to have a deep brandy-and-cigars discussion with you, examine its own motives, know the limits of its own knowledge, and so forth. It would not necessarily get all of this right: as Robin propounds, humans don't get it right all the time either; after all, we went for millennia with a geocentric cosmology.

So how would you go about building an A(G)I? I have plenty of ideas on this too, but unfortunately the margin is too small to contain them.

 

