Friday, September 11, 2020

Eleven days in May, revisited

 Well, it's 9/11/2020, so a young man's thoughts flit lightly towards major disasters. A few months (!) back I wrote a post attempting to put the mortality of COVID in perspective, and here we sit; perhaps a longer perspective is now possible.

In the meantime the CDC has released data indicating that the vast majority of COVID deaths had major co-morbidity factors, which bolsters the interpretation that it is best looked at as, essentially, a hastening of the end of your remaining days in this vale of tears.

Rather than try to guess how many people actually died of, rather than with, COVID, I simply took the number of people who died of any cause and compared that to the average from pre-COVID years. You can find these stats at the CDC website here.
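For the curious, here is a minimal sketch of that comparison in Python, assuming a CSV export of the CDC weekly all-cause death counts. The file and column names (weekly_deaths.csv, week_ending, all_cause_deaths) are hypothetical stand-ins for whatever the actual download uses; the weeks and weekly_excess arrays come up again in the curve-fitting sketch further down.

    import pandas as pd

    # Hypothetical file and column names; substitute the headers from the actual CDC download.
    deaths = pd.read_csv("weekly_deaths.csv", parse_dates=["week_ending"])

    # Baseline: average all-cause deaths for the same week of the year over pre-COVID years.
    # This requires the file to include several years of pre-2020 data.
    deaths["week_of_year"] = deaths["week_ending"].dt.isocalendar().week
    baseline = (deaths[deaths["week_ending"].dt.year < 2020]
                .groupby("week_of_year")["all_cause_deaths"].mean())

    # Excess deaths in 2020 = observed minus the baseline for that week of the year.
    current = deaths[deaths["week_ending"].dt.year == 2020].copy()
    current["expected"] = current["week_of_year"].map(baseline)
    current["excess"] = current["all_cause_deaths"] - current["expected"]

    weeks = current["week_of_year"].astype(float).to_numpy()
    weekly_excess = (current["excess"] / 1000).to_numpy()  # in thousands, to match the charts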

It now looks like this: 

The initial spike grew a longer tail, there was a second wave, and of course the second wave has a tail of its own that you can't see yet due to reporting lag.

Back in May I wrote that the overall effect of the pandemic had been to reduce everyone's life expectancy by 11 days. What does it look like now?

The lower curves are the number of days we were losing per week; the upper ones are cumulative. In each pair, one curve is the difference from the historical average and the other is the statistically significant excess, so the "real" number should lie somewhere between them.

So we lost eleven days in May (actually mostly April); then the rate slacked off some, picked up a bit for the second wave, and is going down again. As of right now we are losing about a day of life expectancy a week, but that rate is declining. The total amount of life expectancy we've lost so far is closing in on a month.
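For concreteness, here is one plausible way to turn an excess-death count into life-expectancy days lost per person: treat each excess death as one average lifetime lost and spread it over the whole population. This is a sketch of my reading of the conversion, not necessarily the exact method behind the chart (which might, for instance, use remaining rather than at-birth expectancy); the population and expectancy figures are round-number assumptions.

    US_POPULATION = 330e6             # assumed, roughly the 2020 US population
    LIFE_EXPECTANCY_DAYS = 79 * 365   # assumed average life expectancy at birth, in days

    def days_lost_per_person(excess_deaths):
        """Life-expectancy days lost per capita if each excess death costs one average lifetime."""
        return excess_deaths / US_POPULATION * LIFE_EXPECTANCY_DAYS

    # Illustration: 120,000 cumulative excess deaths works out to about 10.5 days per person.
    print(days_lost_per_person(120_000))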

By the way, remember that average American life expectancy has been improving throughout history. How far back would you have to go to find a life expectancy one month shorter? To about 2010, it turns out. And this decade has been one of abnormally slow life-expectancy growth; over the 20th century we typically gained 2 or 3 months a year.

Well, that's one way to look at it, but what's gone is gone. How much more life expectancy can you expect to lose?

To predict the remainder of the loss I fit a curve to the excess deaths. A lognormal curve fits the spikes quite well, as you can see:
The heavy blue curve is actual excess deaths, the orange is a lognormal fit to the first spike, and adding a second lognormal to it gives the green (total) curve. Note that these are in thousands of excess deaths in the US on a weekly basis; compare them to an average of about 55,000 deaths per week, or nearly 3 million a year.
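A minimal sketch of that fit, using the weeks and weekly_excess arrays from the earlier snippet: the model is a sum of two scaled lognormal densities, and the starting guesses below are placeholders, not the parameters behind the chart.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import lognorm

    def spike(week, amplitude, shape, start, scale):
        # One wave of the epidemic, modeled as a scaled lognormal density beginning near `start`.
        return amplitude * lognorm.pdf(week, shape, loc=start, scale=scale)

    def two_spikes(week, a1, s1, t1, c1, a2, s2, t2, c2):
        # First wave plus second wave.
        return spike(week, a1, s1, t1, c1) + spike(week, a2, s2, t2, c2)

    # Placeholder starting guesses: a spring spike peaking in April and a smaller summer spike.
    p0 = [120, 0.6, 10, 5,
           80, 0.6, 24, 7]
    params, _ = curve_fit(two_spikes, weeks, weekly_excess, p0=p0, maxfev=20000)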

The total area under the green curve from the current date onward is just 14,000 deaths, which works out to about 43 hours of life expectancy loss.
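The remaining loss is just the area under the fitted curve from now on. Reusing the hypothetical two_spikes fit and days_lost_per_person helper from the sketches above, that step looks roughly like this; the week number and, of course, the fitted parameters are assumptions, so the output is illustrative rather than a reproduction of the 43-hour figure.

    from scipy.integrate import quad
    import numpy as np

    current_week = 37  # roughly mid-September 2020

    # Area under the fitted two-spike curve from now on, in thousands of weekly excess deaths.
    remaining_thousands, _ = quad(lambda w: two_spikes(w, *params), current_week, np.inf)
    remaining_deaths = remaining_thousands * 1000

    # Convert to per-capita life expectancy, in hours.
    hours_lost = days_lost_per_person(remaining_deaths) * 24
    print(f"{remaining_deaths:,.0f} further excess deaths ≈ {hours_lost:.0f} hours per person")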

 You can all relax now.


Thursday, September 10, 2020

AGI: What, how, and when?

 Recently, Robin Hanson posted the following predictions:

 AGI isn’t coming in the next thirty years. Neither are Moon or Mars colonies, or starships. Or immortality. Or nano-assemblers or ems. ... So if you are looking for science-fiction-level excitement re dramatic changes over this period, due to a big change we can foresee today, you’ll be disappointed.

I agree about starships and immortality, and have different probabilities for some of the other things. But I replied (on his Facebook page) as follows:

We will almost certainly have AGI in 30 years, and probably in 10. A teleworker directing a machine looks, from the outside, a lot like an AI running it. The big tech issue of the next decade or two will be working out how the two can be seamlessly integrated as the balance shifts from telehumans to pure machine. This applies as much to Rosey the Robot as to self-driving trucks.

To which he rejoined

Care to bet on that "probably in 10" claim?

and me:

Give odds and a definition.

Robin:

AGI could take away most jobs from humans, right? So what odds do you give to >50% unemployment rate in over year 15 from now?

me: 

Wrong. To displace humans, AI would have to be cheaper, and that was not part of the "probably in 10" claim. Furthermore, it's not necessary that there is a fixed amount of work to be done; instead of doing the same with fewer humans, we could simply make and accomplish more things. Like colonies on the moon, etc. But again that's a prediction not of what will be technically feasible, but of whether we collectively want to do it. And again, not part of my claim.

More directly to the point, I do not accept ">50% unemployment rate" as a definition of "AGI exists."

Robin:

... but do you really think AGI could exist yet stay expensive for a long time? I'm much less interested in when it "exists" than when it is useful.

Me:

You might want to read Matt Ridley's "How Innovation Works." AGI is going to come out of the confluence of a bunch of precursor technologies, and it will evolve, not be invented in a flash in some ivory tower. Telework is very likely one of the precursors; it will force people to learn how to break intelligent action into components, and then to begin trying to automate some of those components. As this process goes forward, there will be niches where comparative advantage puts AI and others where it puts people. So AI will evolve toward being useful for people. But at the same time, people will evolve (culturally, not genetically, given the timeframe) toward being more compatible with AI.

Imagine you have a bunch of people working for you. At first, they are unskilled laborers. They save you time by mowing your lawn. But over time, they get smarter. They begin to be able to help with your research and writing. Now they are like graduate students. Pretty soon they are as smart as you are, but they still don't replace you; they still relieve you of work, which simply gives you more scope, working at a higher level of abstraction. Somewhere in there your job changes from doing X to managing robots that do X. But managers make more than workers; the robots have become more valuable, but so have you.

Tim Tyler puts in:

We were talking about 10 years from now. We will likely have some pretty sophisticated machine intelligence by then - in agreement with JoSH, but not Hanson - but machines won't be able to do all jobs more efficiently and more cheaply at that stage - and there likely won't be lots of unemployed humans knocking around either. Ricardo [a reference to the economic principle of comparative advantage] will not save the human race from obsolescence - but in 10 years, humans will still be needed.

Robin replies

I agree AI will improve, but at the same rate we've seen for 50 years. The question is what YOU can mean by "AGI in ten years" if it isn't cashed out in what they actually do in society.

Tim Tyler:

JoSH said: "To displace humans, AI would have to be cheaper". He's talking about the existence of the machines, not their cost-competitiveness. There are other issues that may hinder adoption of intelligent machines: regulations, preferences for humans in some jobs, sensor/actuator progress, training time/cost, etc.

and Adam Ford asks:

J Storrs Hall it seems your picture of AGI doesn't require AI understanding. Is that correct?

Not sure if this question can be answered directly atm - if not treat it as an intuition pump -  how far can generality be taken without there being understanding in the agent?

Since generality is a spectrum, and not an ideal state, I'm not confident that if we did achieve generality in AI, it would quickly force humans out of most jobs.

So I thought I should compose a blog post putting together my thoughts on the subject of AI, AGI, how, what, and particularly when. But coming back to look at the blog, lo and behold, I had already done a good part of that four years ago: The Coming AI Phase Change

... which did a fairly good job predicting the kind of progress we've seen in AI in the interim. 

Robin's notion of "AI will improve, but at the same rate we've seen for 50 years" might well be summed up by the change in the state of the art between the famous Dartmouth AI workshop in 1956 and the 50th-anniversary conference, also at Dartmouth, in 2006. I was there (for the second one), and so was Geoff Hinton, who gave a very nice talk on the history and future of "neural networks" and what has in the meantime come to be called "deep learning." I can assure you that the rank and file of classical symbolic AI researchers very much thought of Hinton as a second-class citizen. But the actual achievements of AI since, ranging from AlphaGo to GPT-3, show us that Hinton had something they had missed.

I gave a paper at that conference too, which I mention because it goes to the answer of Adam's question above. AI had always meant "building a machine that could think like a human," but over its 50 years in academia the term was subjected to so much grade inflation that it had come to mean essentially "the clever programming that captures some specific skill," such as playing chess or driving a car. By the 21st century various people, notably Ben Goertzel, had come to use "AGI" to try to recapture the original meaning. An AGI would be simply a human-level AI, one that could be expected to be able to, or be able to learn to, change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, and maybe even die gallantly.

More directly to the point, such an AI would be able to have a deep brandy-and-cigars discussion with you, examine its own motives, know the limits of its own knowledge, and so forth. It would not necessarily get all this right: as Robin propounds, humans don't get it right all the time either; after all, we went for millennia with a geocentric cosmology.

So how would you go about building an A(G)I? I have plenty of ideas on this too, but unfortunately the margin is too small to contain them.