Thursday, August 18, 2016

The coming AI phase change

How long will it be until we have real honest-to-goodness general human-level AI? This question is of at least some interest to a futurist (if for no other reason than that smart AIs might put futurists out of a job :-) ).

There are several polls of experts out there, summarized here, which can be summed up by saying the median guess is in the 2040s. (One quickie poll most current commentators missed was Wired's Reality Check in 1996, which predicted a C-3PO-like robot in 2047. The Macmillan Atlas of the Future (1998) calls for 2035.)

On the other hand, some people, e.g. Robin Hanson, think it will be much longer. Hanson informally surveyed various AI researchers and came to the conclusion that we still have a couple of centuries more work to do at the least.

At the other extreme, Ray Kurzweil famously claims (with tongue at least a little in cheek, one presumes) it will be exactly 2029. He points out that the progress in a field like this is not linear, and gives the Human Genome Project as a recent, well-documented example of dramatic acceleration in terms of a naive figure of merit.

What is the chance for a dramatic acceleration of AI development? Do we even have a usable figure of merit for AI? How about IQ? Unfortunately, IQ was invented as a tool for tracking human intellectual development--the Q originally meant the quotient of mental to physical age--so it seems likely that by the time we have a machine for which it makes any sense to say it has an IQ at all, we will basically have already succeeded at the AI task.

I think that to some extent AI is more like a major conceptual breakthrough, such as building a flying machine. The hard part of developing working airplanes was not the solution of any of the various technical problems. It was understanding what the problems were. Everybody and his dog Muttley knew how to build a machine with lift which would get off the ground. It took the Wright brothers, with the point of view of bicycle makers, to realize that the problem was balance and attitude control. Once they succeeded, virtually nobody copied their technical solution (wing warping); ailerons quickly became the standard method. But beforehand, control was an "unknown unknown."

A decade ago in Beyond AI, I took up cudgels against those who were predicting a major takeoff in artificial intelligence by virtue of a self-improving super-AI. Long before that happens, I said, we will see something a bit more mundane but perfectly effective: AI will start to work, and people will realize it, and lots of money, talent, and resources will pour into the field. “... it might affect AI like the Wright brothers' Paris demonstrations of their flying machine did a century ago. After ignoring their successful first flight for years, the scientific community finally acknowledged it; and aviation went from a screwball hobby to the rage of the age, and kept that cachet for decades. In particular, the amount of development effort took off enormously.” That will produce an acceleration of results, which will attract more money, and there's your feedback loop. The amount of money going into aviation before 1910 was essentially nil (Langley's grants to the contrary notwithstanding). Once people caught on that airplanes really worked, though, there was a sensation and a boom. By the end of the 1920s, Pan American was flying scheduled international flights in the 8-passenger Ford Tri-motor. The ensuing exponential growth in capabilities continued unabated right up to the Sixties.

Hanson points out that pouring more money into the field might not accomplish much:
Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.
This is quite true, but there is another reason to think that we might be on the cusp of a phase change into a higher growth mode in AI research: it's moving out of academia and into industry. There's more emphasis on getting results and less on looking clever. I've been in both, and I can personally attest: academia is a lot more fun; in industry you get a lot more done that is useful.

In November 2005, attending an AAAI symposium, I found myself in casual conversation with one of the other artificial intelligence researchers at the reception. One of the hot topics of conversation in AI circles just then was the DARPA Grand Challenge. The previous month, five autonomous vehicles—self-driving cars and trucks—had successfully completed a grueling 131.2-mile course in competition for a $2 million prize offered by DARPA, the defense department's research agency. This was a major advance in the state of the art, since the previous Challenge, held just a year and a half earlier, had been a complete failure, with the best vehicle only managing to go 7.3 miles.

I had remarked as much to my AAAI friend, and he demurred. The apparent advance, he insisted, consisted of nothing but new ways of combining existing sensory, control, and navigation techniques. That seems to be a fairly common ivory-tower way of seeing things.

But that, of course, is exactly what the vast majority of actual technological progress consists of. And the Grand Challenge results show graphically what kind of a difference it can make in the real world. However much a specialist may recognize all the parts and elements of a new machine from earlier efforts, what the world at large notices is whether or not it works. And in the case of self-driving cars, a major watershed was crossed between March 2004 and October 2005.

But there was no Clever New Trick that would have excited an academic, nothing even that an active AI researcher recognized as an advance.

The money makes a big difference too. I would guess that a machine that could run a human-equivalent AI in real time would cost about $1 million today. One reason that AI progress has been slow and roundabout is that academic researchers were (a) underfunded and (b) sidetracked by finding ingenious solutions that ran on way-too-skimpy hardware, but which were brittle. In the brain, many things are done by brute force and are as a result robust.

All of which we may be in the process of shifting gears away from. Major feedbacks seem to be in place, and I surmise that we might be kicking into an exponential growth mode. The only problem is, I still don't know how to label the Y-axis.

Tuesday, August 16, 2016

How is a great nation like a flagellar bacterium?

Consider the humble E. coli. It is worth studying for several reasons: it has atomically-precise electric motors to turn its flagella, for example. The flagella themselves are corkscrew-like hairs that act as propellers when turned one way, and induce random motions when turned the other. But the point of interest is what the flagella are used for.
The E. coli can also sense the density of dissolved nutrients in the water it is swimming in. It swims along and notices that the level is climbing; it is swimming into a region of higher concentration; it's happy. So it keeps swimming.
On the other hand, if it swims through the region, it notices the concentration declining. So it reverses its flagella and tumbles. It winds up facing in a random direction, and sets out again. This may or may not get it on a course that takes it back into the food, but at least it has a chance. The result of the whole algorithm is a biased random walk that tends to keep it in the higher-concentration areas.
It turns out that an ordinary housefly does much the same thing. Its maddening buzzing around is a biased random walk through the concentration of food-like smells in the air. This causes a concentration of flies in the vicinity of things like garbage cans.
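For concreteness, here's a minimal sketch of that run-and-tumble logic in Python. The Gaussian "food blob," the step size, and the tumble rule are illustrative assumptions of mine, not anything measured from a real bacterium; the point is just that "keep going when things improve, tumble when they don't" yields a biased random walk.

```python
import math
import random

def nutrient(x, y):
    # Toy concentration field: a single Gaussian "food blob" centered at the origin.
    return math.exp(-(x * x + y * y) / 100.0)

def run_and_tumble(steps=2000, speed=0.5):
    x, y = 20.0, 0.0                              # start well away from the food
    heading = random.uniform(0.0, 2.0 * math.pi)
    last = nutrient(x, y)
    for _ in range(steps):
        # "Run": keep swimming straight along the current heading.
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        now = nutrient(x, y)
        # "Tumble": if the concentration is dropping, face a random new direction.
        if now < last:
            heading = random.uniform(0.0, 2.0 * math.pi)
        last = now
    return x, y

print(run_and_tumble())    # tends to finish near the origin, where the food is
```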

Like the bacterium or the fly, the population of a great nation cannot really see where it is going. (Needless to say, neither can the government.) All they can do, in general, is to tell if things are getting better or worse. And if things are getting worse, the people will call for Change. In a democracy, they have a mechanism to do this more or less peacefully, but they will do it the hard way if they must. Think of Romania; think of the Soviet Union.
And when the people call for Change, what they get is a tumble in the highly multidimensional space of policy options.
This makes it a bit hard for a futurist.

Sunday, August 7, 2016

Gosper's hierarchy of needs

In yesterday's post I tried to point out that the intuitions about whether a machine implementation of our minds was really conscious (etc) seemed to depend on how much its internal mechanism resembled our own. In particular, a Chinese Room implemented as a lookup table seemed particularly resistant to the notion that there's "somebody home."

But that left unexamined the question of how the lookup table got filled in. In the case of HashLife, the answer is straightforward: take the patch of cellular automata space you are trying to skip forward, run the Life algorithm on every possible configuration, and fill in your lookup table. But equally obviously, you don't actually have to do this ahead of time: you run your Life simulation as usual, looking for speedups in your table, and every time you see a situation you don't have listed, run it normally, a step at a time, and then insert the results in the table. That's why it's a hash table, sparsely populated in the address space of starting patches.
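A toy sketch of that lazy fill-in, in Python. Real HashLife memoizes quadtree "macrocells" and skips many generations at a time; this just caches the one-step future of the interior of each 4x4 patch, which is my own simplification to show the flavor of the scheme.

```python
# Toy memoized Life: cache the one-step future of each 4x4 patch's 2x2 interior.
cache = {}   # maps a 4x4 patch (tuple of 16 cells, row-major) -> its 2x2 interior, one step later

def life_rule(alive, neighbors):
    # Conway's rule: a cell is alive next step with 3 neighbors, or 2 if already alive.
    return 1 if neighbors == 3 or (alive and neighbors == 2) else 0

def step_interior(patch):
    if patch in cache:                           # the "speedup" lookup
        return cache[patch]
    g = [patch[4 * r : 4 * r + 4] for r in range(4)]
    out = []
    for r in (1, 2):                             # only the interior cells are fully
        for c in (1, 2):                         # determined by the patch alone
            n = sum(g[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)) - g[r][c]
            out.append(life_rule(g[r][c], n))
    cache[patch] = tuple(out)                    # fill the table lazily, as described above
    return cache[patch]

# Example: a 4x4 patch containing part of a blinker.
print(step_interior((0, 0, 0, 0,
                     0, 1, 1, 1,
                     0, 0, 0, 0,
                     0, 0, 0, 0)))               # -> (0, 1, 0, 1)
```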

In practice, the big systems in Life that the experimenters were trying to run were highly stylized, with glider guns and sinks and mirrors and similar gadgetry, to construct circuitry and Turing machines or even to emulate (!) more complex cellular automata. In such a case HashLife essentially creates a direct table-driven implementation of the higher-level machine.

How would we apply this scheme to running a human mind? We don't have hash tables in our heads, and what's more, the address space a human experiences is so vast and finely divided that we never experience exactly the same input or situation.

We don't have hash tables in our heads, but we do have circuitry that looks suspiciously like associative memory, a point I first ran across in Pentti Kanerva's thesis. What's more, as we know from our experience with neural networks, it is reasonably straightforward to arrange such circuitry so that it will find the nearest stored memory to the requested address. With a bit more work, you can make a memory that will interpolate, or extrapolate, two or more stored memories near a requested address.
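Here is a crude sketch in Python of a best-match memory that can also blend its two nearest entries. It is not a model of Kanerva's actual sparse distributed memory design, just an illustration of "nearest stored memory to the requested address" plus a simple form of interpolation.

```python
import math

class AssociativeMemory:
    """Toy best-match memory: store (address, value) pairs of numeric vectors,
    recall by nearest stored address, optionally blending the two nearest
    entries as a crude stand-in for interpolation."""

    def __init__(self):
        self.entries = []                          # list of (address, value) pairs

    def store(self, address, value):
        self.entries.append((list(address), list(value)))

    def recall(self, address, blend=False):
        by_distance = sorted(self.entries,
                             key=lambda e: math.dist(address, e[0]))
        if not blend or len(by_distance) < 2:
            return by_distance[0][1]               # plain nearest-neighbor recall
        (a1, v1), (a2, v2) = by_distance[0], by_distance[1]
        d1, d2 = math.dist(address, a1), math.dist(address, a2)
        w = d2 / (d1 + d2) if d1 + d2 > 0 else 1.0  # closer entry weighted more
        return [w * p + (1 - w) * q for p, q in zip(v1, v2)]

mem = AssociativeMemory()
mem.store([0, 0], [1.0])
mem.store([10, 0], [3.0])
print(mem.recall([2, 1]))               # nearest stored value: [1.0]
print(mem.recall([5, 0], blend=True))   # halfway between the two: [2.0]
```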

Do you remember learning to walk, or tie your shoes, or tell time from an analog clock dial, or read and write? These were all significant cognitive challenges, and at one time you were heavily concerned with the low-level details of them.  But now you do them unconsciously, having essentially hashed them out to operate at a higher level.

Thus it seems not unreasonable to claim that the authentic human experience includes the HashLife-like phenomenon of losing direct consciousness of lower-level details in the process of becoming concerned with higher ones. Indeed I would claim that you cannot have the authentic human experience without it.

The remaining question is, how high up can we go?

Saturday, August 6, 2016

HashLife and Permutation City

I've been thinking a bit about the Hanson/Caplan disconnect on ems. To recap and oversimplify, Hanson makes the assumption that an emulation or upload should count as a person; in a world with many times as many ems as biological humans, the happiness of the ems matters as much or more than that of the meat. Caplan thinks the opposite: only we wild-type folks are real people, and the ems are just computer programs, with the straightforward result that they can't really be happy, or presumably even conscious of any real emotion at all.

Where you fall on the spectrum between these two beliefs would seem to depend very strongly on your intuition of what you really are. As a lifelong AI researcher, for example, I have always seen myself (and of course everybody else) as a computational process that just happens to be running on an evolved biological substrate, albeit one of phenomenal sophistication and computational power. On this view, running the same computation on another substrate would not make any difference. It would not only still be conscious, and still be human; it would still be me. Hanson explicitly endorses this view by using the term "emulation" for what would otherwise be called an "upload."

What if, however, instead of simulating the brain on a neuron-by-neuron level, you started working out the functionality of various pieces of it, as we have begun to do for the pre-processing in the retina, various pathways in visual and auditory cortex, and so forth. Many of these are perfectly understandable data-processing tasks reducible to algorithms, and others might be modeled to an acceptable precision by, e.g., neural nets trained on traces from real brains.

One can take this process further, reducing larger and larger parts of the implementation of "me" to algorithmic black boxes, and losing more and more of the information processing structure that is explicitly parallel to my brain. Let us suppose that we can do this in such a way that the result continues to act just like me from the outside, doing and knowing the same things, having all my memories, quirks, personality, and so forth.

The obvious endpoint of this is that we get to the point that the whole mind is one algorithmic black box, only related to the original person by input/output correspondence. Is the resulting program conscious, human, me?

The problem is that this is not the real endpoint. The conceptually simplest way to implement that i/o black box is not a mysterious machine that might intuitively be comparable to the mysterious machinery of a human brain, but a lookup table. We input one large number, say every tenth of a second, that encodes every sensory input, combine it with another large number that represents your memory, and look up the corresponding line in the table, where we find two more large numbers, one of which encodes all the nerve signals to your muscles, while the other is the new memory. That's all the mechanism; the rest is just the table of numbers.
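Structurally, the whole scheme is just a transition table. A sketch, with tiny placeholder numbers standing in for the astronomically large ones described above:

```python
# The entire "mechanism" is one table from (sensory input, memory state) to
# (motor output, new memory state). The keys and values here are tiny
# placeholders; in the scheme above they would be enormous integers
# encoding every nerve signal and the whole memory.
table = {
    (0, 0): (1, 1),
    (1, 1): (0, 2),
    (0, 2): (1, 0),
    # ... one row for every possible (input, memory) pair
}

def tick(sensory_input, memory):
    # One tenth-of-a-second step of the table-driven "person".
    return table[(sensory_input, memory)]

memory = 0
for sensory_input in (0, 1, 0):
    output, memory = tick(sensory_input, memory)
    print(output, memory)
```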

Somewhere in the depths of cyberspace, John Searle is smiling.

But it gets worse. I first heard this scheme from Hans Moravec at a conference, but it also got folded into one of the weirder and more thought-provoking SF books, Greg Egan's Permutation City. It begins by considering one of the more sophisticated implementations of Conway's Game of Life cellular automaton universe. In an ordinary implementation, you compute the contents of each cell at each generation by a lookup table that encodes its state and interactions with its neighbors. But in HashLife, you don't have to do that on such a limited, atomic scale in either time or space. You have a giant hash table that stores the mapping from chunks of space to their configuration several steps of time later. The bigger and more complete your hash table, the wider the areas and more steps at a time you can skip.

Let us now return to Em City, and imagine what HashEms would be like, given precisely the strong economic pressures to efficiency that Robin depends on for the central prediction of the book. Not only do HashEms begin to skip internal steps: predictable interactions between multiple ems get folded into the table and optimized away.  Ultimately, no human-level day-to-day interaction (or recreation) is explicitly computed; only the I/O behavior of Em City as a whole emerges from the table-driven black box.

Are the ems still conscious? Human? Me?

Wednesday, June 29, 2016

This Island Earth

I finally finished Neal Stephenson's Seveneves, a monumental end-of-the-world novel. I tend to "read" this kind of book as an audiobook while driving, which means it takes a while to get through one of this length.
Seveneves begins with the moon blowing up, although as many reviewers have pointed out, this isn't a spoiler since it is the first sentence in the book. On the other hand, there will be some spoilers here, so I urge you to read the book first.

As I started to listen to the story, once it became apparent that the moon was going to continue to break up and ultimately produce a "hard rain" of meteors that deposit enough energy in the atmosphere (and surface) that it would glow red hot--and it does glow red hot in the story--I realized I had seen it before. One of the more visually spectacular (if scientifically dodgy) SF movies of the 50s, This Island Earth, has the planet Metaluna being attacked by forces which use meteors as weapons, and in the movie, you see the attack succeed, the rain of meteors overwhelm the defenders' forcefields, and ... yep, the planet glowing red hot as the aliens finish it off.

(Oh, and as long as we're on 50s SF, the parallel to When Worlds Collide, where there is a frantic rush to build rockets to escape the doomed Earth with a remnant of humanity, is too obvious to mention.  So I won't.)

This is not a review of Seveneves -- that would take a year to write properly -- but just an investigation of one point. Namely, does that really work? Could a meteor bombardment make a planet glow red hot, and if so for how long? In the story, it is some appreciable fraction of 5000 years.

We are used to vast energies in our weather system dwarfing what mere puny humans can do with our little machines. But all of that energy is just side effects of the incoming sunlight on its way to being re-emitted as IR. For the planet as a whole, that energy flux works out to about 300 watts per square meter (and energy in equals energy out to within roughly a tenth of a percent).

For the entire planet to glow red hot, it would have to be at about 1000 kelvin, and it would radiate a LOT more energy into space. Doing the calculation right gets complicated in lots of ways, so what I'm about to do is an extreme simplification. But for a back of the envelope, the average temperature of the earth now is about 288 K. The Stefan-Boltzmann law says black-body radiated power scales as the fourth power of the temperature, so the red-hot Earth would be radiating about 145 times as much as now, i.e. roughly 45 kilowatts per square meter. Earth's surface is about 5e14 m^2, so it would be putting out a bit over 2e19 watts.

One kilogram at the height of the moon's orbit has a potential energy relative to the surface of a little over 60 megajoules (the kinetic energy of its 1 km/sec orbital velocity is just half a megajoule). Thus you need to drop roughly 3.5e11 kg of moon per second on the earth to keep it red hot. The moon masses 7e22 kg, so dropping the whole thing buys you about 2e11 seconds of bombardment.

2e11 seconds is a bit over 6,000 years; so yes, it works, though it takes most of the moon to keep the Earth red hot for an appreciable fraction of 5000 years.
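Here is that back-of-the-envelope as a short script, using the same round numbers as above (1000 K for "red hot," 300 W/m^2 for the current flux, 60 MJ/kg for rock falling from lunar distance):

```python
# Back-of-the-envelope: how long can falling moon-rock keep the Earth red hot?
T_NOW, T_RED = 288.0, 1000.0     # current and "red hot" surface temperatures, K
EARTH_AREA = 5e14                # m^2
FLUX_NOW = 300.0                 # W/m^2, roughly the current in/out energy flux
E_PER_KG = 6e7                   # J/kg released by rock falling from lunar distance
MOON_MASS = 7e22                 # kg

ratio = (T_RED / T_NOW) ** 4     # ~145x: radiated power scales as T^4
flux_red = FLUX_NOW * ratio      # ~45 kW/m^2
power = flux_red * EARTH_AREA    # ~2e19 W for the whole planet
rock_rate = power / E_PER_KG     # ~3.5e11 kg/s of infalling moon
seconds = MOON_MASS / rock_rate  # ~2e11 s if you drop the whole moon
print(f"{flux_red:.0f} W/m^2, {power:.1e} W, {rock_rate:.1e} kg/s, "
      f"{seconds / 3.15e7:.0f} years")
```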

Tuesday, June 28, 2016

The Age of BiV

Robin Hanson and Bryan Caplan are having a cross-blog debate, or at least discussion, about Robin's book Age of Em. Without going into the specifics, I get the feeling that there is a certain disconnect right from the start.
Robin thinks that the em world will develop so quickly, and contain so many people compared to the outside meatspace world that we can treat the latter as static and uninteresting; and besides, the em world is what the book is about!
Bryan on the other hand finds it hard to think of the ems as people at all. After all, they are just computer programs interacting through software; the whole em world is just one big video game. The real people are the ones on the outside.

Allow me to propose a scenario which I think has at least a chance of engaging both points of view. Let's assume a level of technology roughly equivalent to that in Age of Em, i.e. the ability to scan a brain down to a level where you have captured all the essential detail. Below this level we will assume that we have studied the structure of neurons etc in general all the way down to the molecular level. But we will also assume that we have a fabrication ability to match. So when we have scanned a brain, we will simply rebuild it, molecule by molecule.
What do we do with such a brain? Why, we put it in a vat, of course, and let it think. We put the vat (it's only 1.5 liters) in a giant factory-like building along with lots of pipes and pumps and tanks of nutrient fluid, and enough computing hardware to run a virtual world simulation for each brain. Note that the level of understanding of our neural circuitry (and amount of computation) necessary is just the same as in Robin's em world.
Now are these BiVs ("Brains in Vats") really people? After all, we could, instead of copying them molecule for molecule, simply have taken actual human brains and put them in the same BiViac (sorry). Furthermore, we now have a situation on which philosophers from Descartes to Bishop Berkeley have weighed in for centuries. BiVs think; therefore they are.

The BiV world has many, but not all, of the features of Robin's em one. A reasonable level of nanotech lets you build a brain in well under an hour. One obvious place to get the material is deconstructing a previous brain, copying off a record of its fine structure if you care to. Everybody gets to live in a fantastically splendiferous virtual world, shared or not at their whim. You can teleport, you can fly, you can have a city with a million flying cars and no traffic worries, because the cars simply go through each other.
The big difference would be the lack of a brain-speed control knob.

How are we to measure the income and wealth of a BiV? Each one can have, to all apparent physical purposes, anything he wants. In the real physical world, all he needs is a fixed supply of "juice" (under which rubric we will include both nutrients and electricity for the VR system, pumps, etc). "Fixed" is the operative word; in some sense it is not possible to make him any better off, physically, than he is at a "subsistence" level.

What is subsistence level? One easy way to guess is the energy input, some of which is direct power and some of which gets backed out into the manufacturing process for the nutrients. Grand total for a full human body is 100 watts; nanotech manufacturing brings the capital cost down to a level we can ignore for the moment. So the BiV at current typical power prices needs 24 cents per day, or less than $100 per year.
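The arithmetic, as a quick sketch; the 10-cents-per-kWh power price and the rough $50,000-per-person annual output figure are my own assumed round numbers, not anything more precise:

```python
# Subsistence for one BiV at 100 W, assuming roughly $0.10 per kWh.
watts = 100
price_per_kwh = 0.10                           # assumed "current typical" power price
kwh_per_day = watts * 24 / 1000                # 2.4 kWh per day
cost_per_day = kwh_per_day * price_per_kwh     # about 24 cents per day
cost_per_year = cost_per_day * 365             # about $88, i.e. "less than $100"

output_per_person = 50_000                     # rough dollars of annual output per American
print(cost_per_day, cost_per_year,
      round(output_per_person / cost_per_year))  # a ratio on the order of 500x (see below)
```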

So that's subsistence, in a virtual world of limitless luxury. The main reasons a BiV would want more money would be prestige, status, having things others didn't that were valuable only because they were rare, and things in the real world. So he might work hard for that, if he were ambitious. But given that people need so little for the baseline, it's hard to imagine how they could be so unproductive as to have to work more than a few days a year if they didn't want to.

The average American (including children, the elderly, etc) produces nearly 500 times as much in economic value as the BiV subsistence level. If we start with a selection of motivated, intelligent people, one can guess that they would be at least 5 times as productive on average. A factor of 2500 is a long way to fall in productivity, even if most BiVs spent most of their time sending pictures of cats to each other.

On the other hand, one can sink a virtually infinite amount of time into software. A large part of the internal effort in a BiV (or em, for that matter) world would be spiffying and decorating the virtual worlds, which would do nothing as seen from the outside in the real world. I think that the real question in either formulation would be how much of the effort went into really useful productivity enhancement tools, and thus tended to counteract the diminishing returns to adding more brains.

Friday, June 10, 2016

Back to School

Everybody and his dog Astro in the futurist world seems to be writing a review of Robin Hanson's book Age of Em these days, so I thought another one might be an appropriate opening post.
As I assume most of the readers who find their way here already know, the book is an analysis using standard social science results of a very carefully selected possible part of a possible future, namely a city full of uploaded human minds interacting through a simulated virtual reality.
My own thoughts on such a scenario, such as they were, appeared in Nanofuture about ten years ago:
 Uploading offers another way into a bigger world. As wide-open as the physical possibilities are with nanotechnology, they are wider still uploaded. The current-day philosopher asks, "What is it like to be a bat?" but the upload could know. We could have new senses, not merely mapped onto our current set, and new forms of intuition, maybe even new emotions, more appropriate to the world we live in.  I mentioned before how our present artificial environment has outstripped our native equipment evolved on the African savanna; how much more will the world of tomorrow?
You must not think of such a world as over-complex and confusing. It would be to us, but so would our world be confusing to Homo Erectus. In fact, our descendants (and with a little luck, maybe even ourselves) will be more naturally comfortable, and understand their environment more intuitively, than we do ours today. That's because we've jacked up the complexity of our current world, but not the equipment we use to understand it; they will be able to do both.
Where does personal responsibility and independence go when people are programs running on the same ultra-megacomputer? Perhaps surprisingly, the range of options is the same, or perhaps even wider, than in the physical world. Let's consider a few cases, as widely scattered signposts to the vast terrain of possibilities.
There could be the equivalent of a processor per person, with communications channels between them, and one or more complex environment simulations for them to interact in. This would correspond to people with separate brains in the real world. This level of integration would interact well with real humans and people running on physically separate robot processors. The assumption here is that your thoughts are entirely yours, and that you could own a part of the physical world or simulated environment over which you would exert more or less exclusive control.
In a software world, it will be possible to create the equivalent of germs, fleas, lice, and ticks--the descendants of computer viruses--simply by thinking about them. The temptation will be great for the community to want to control your thoughts in fear of such things, even though people who did that would be as rare as people who deliberately spread disease today.
This is a significant concern because lowering the firewalls between people will have so many advantages in other ways. Exactly the same kinds of thing happened to people when they began living in cities: disease was a scourge, and epidemics like the Black Death could wipe out a third of the population. Yet people crowded into cities because it greatly facilitated communication and trade, the building of common infrastructure, and other economies of scale. And yes, there were plagues; but the advantages (usually) outweighed them. Indeed, living in such cities clearly made people stronger and more effective in the long run.
In the physical world, technology has helped finesse the issue, with transportation, sanitation, medicine, and so forth. Nanotechnology can carry that further, for example with skinsuits acting as biological firewalls but allowing direct personal contact. In the software world, the choices are harder. Uploading will allow things like direct transfer of thoughts and emotions, joint experience, and many modes of interaction as yet unthought-of. It will also allow not only direct monitoring of people's thoughts, but legislated changes in the structure of their minds. Given the track record of bureaucracies in the real world, the clear and present danger is that communities of uploads would quickly evolve into soulless monstrosities.
Luckily, soulless monstrosities won't win in the long run. They just can't seem to "play nice" with other soulless monstrosities. Evolution could have taken us that way, like ants, but didn't. There's too much value in the adaptable flexibility of the semiautonomous intelligence that we are. One of the challenges awaiting us as we move forward is to understand this well enough to avoid some unfortunate experiments.
With a properly defined Bill of Mental Rights, however, an upload community could be a truly marvelous place. It would be like the concentration of talent of a Hollywood or Silicon Valley--centers of great creativity and an enormous value to humanity as a whole.
Robin's scenario precludes some of these concerns by being very specific to a single possibility: we have the technology to copy off any single particular human brain, but we don't understand brains well enough to modify them arbitrarily. Thus the copies have to be operated in a virtual reality that is reasonably close to a simulated physical world.
There is a good reason for doing it this way, of course: that's the only uploading scenario in which all the social science studies and papers and results and so forth can be assumed to still apply. Any other scenario, and you'd have to examine a lot of assumptions on a case-by-case basis.
But it also allows a different kind of analysis, which I don't remember Robin doing in quite this way in the book: We can examine the question "What is it like to be an Em?", in the same spirit as the philosopher Nagel and his bat. And, again because of the assumptions, you can do this by pretending that you, the Em, are living in an actual, comprehensible, physical world.
There was a Star Trek episode ("A Taste of Armageddon") in which there was a planet whose highly advanced civilization featured disintegration chambers -- people walked in, and nobody walked out. The Em world would have these, but with a second control button: someone walks in, and identical twins walk out.
Furthermore, the process is, again by Robin's explicit assumption, inexpensive. Now I estimate that with current computing technology the hardware it would take to run a human-level AI would cost about a million dollars. But with the assumptions that we don't understand how it works and thus can't optimize it, and so have to run a direct copy simulation at a fairly low level, you might reasonably be talking about orders of magnitude more computer to run a human mind.
Yes, we'll probably get there, with Moore's Law. But the take-away implication is that in the Em world, processing power will be very, very cheap.
What that means is to an Em, things are very cheap. Everything in the Em world is part of a simulated VR; any object is just a piece of software. An Em can create or copy giant machines, tropical islands, great cities (minus the people), and probably Mars-like planets with the wave of a hand. "What it would be like" is like living in a Utility Fog world. The hardest part of creating virtually anything would be deciding and describing what you wanted.
Another salient aspect to Robin's Em world is his original economic insight that if people are cheap to copy (and the process is fast), they would induce a Malthusian dynamic and wages would tend down toward subsistence level. Furthermore, the people selected for uploading will be highly intelligent and motivated.
Now where have you ever lived where there were lots of creative, intelligent people, many just like you, doing remarkable wonderful things with big expensive toys (some of which you designed!), but you only got, personally, subsistence wages for it?
Yep, you got it. Graduate school. What it is like to be an Em is ... a graduate student.