Friday, June 10, 2016

Back to School

Everybody and his dog Astro in the futurist world seems to be writing a review of Robin Hanson's book Age of Em these days, so I thought another one might be an appropriate opening post.
As I assume most of the readers who find their way here already know, the book is an analysis, using standard social science results, of a very carefully selected possible part of a possible future: a city full of uploaded human minds interacting through a simulated virtual reality.
My own thoughts on such a scenario, such as they were, appeared in Nanofuture about ten years ago:
 Uploading offers another way into a bigger world. As wide-open as the physical possibilities are with nanotechnology, they are wider still uploaded. The current-day philosopher asks, "What is it like to be a bat?" but the upload could know. We could have new senses, not merely mapped onto our current set, and new forms of intuition, maybe even new emotions, more appropriate to the world we live in.  I mentioned before how our present artificial environment has outstripped our native equipment evolved on the African savanna; how much more will the world of tomorrow?
You must not think of such a world as over-complex and confusing. It would be to us, but so would our world be confusing to Homo erectus. In fact, our descendants (and with a little luck, maybe even ourselves) will be more naturally comfortable, and understand their environment more intuitively, than we do ours today. That's because we've jacked up the complexity of our current world, but not the equipment we use to understand it; they will be able to do both.
Where do personal responsibility and independence go when people are programs running on the same ultra-megacomputer? Perhaps surprisingly, the range of options is as wide as, or perhaps even wider than, in the physical world. Let's consider a few cases, as widely scattered signposts to the vast terrain of possibilities.
There could be the equivalent of a processor per person, with communications channels between them, and one or more complex environment simulations for them to interact in. This would correspond to people with separate brains in the real world. This level of integration would interact well with real humans and people running on physically separate robot processors. The assumption here is that your thoughts are entirely yours, and that you could own a part of the physical world or simulated environment over which you would exert more or less exclusive control.
In a software world, it will be possible to create the equivalent of germs, fleas, lice, and ticks--the descendants of computer viruses--simply by thinking about them. The temptation will be great for the community to want to control your thoughts in fear of such things, even though people who did that would be as rare as people who deliberately spread disease today.
This is a significant concern because lowering the firewalls between people will have so many advantages in other ways. Exactly the same kinds of thing happened to people when they began living in cities: disease was a scourge, and epidemics like the Black Death could wipe out a third of the population. Yet people crowded into cities because it greatly facilitated communication and trade, the building of common infrastructure, and other economies of scale. And yes, there were plagues; but the advantages (usually) outweighed them. Indeed, living in such cities clearly made people stronger and more effective in the long run.
In the physical world, technology has helped finesse the issue, with transportation, sanitation, medicine, and so forth. Nanotechnology can carry that further, for example with skinsuits acting as biological firewalls but allowing direct personal contact. In the software world, the choices are harder. Uploading will allow things like direct transfer of thoughts and emotions, joint experience, and many modes of interaction as yet unthought-of. It will also allow not only direct monitoring of people's thoughts, but legislated changes in the structure of their minds. Given the track record of bureaucracies in the real world, the clear and present danger is that communities of uploads would quickly evolve into soulless monstrosities.
Luckily, soulless monstrosities won't win in the long run. They just can't seem to "play nice" with other soulless monstrosities. Evolution could have taken us that way, like ants, but didn't. There's too much value in the adaptable flexibility of the semiautonomous intelligence that we are. One of the challenges awaiting us as we move forward is to understand this well enough to avoid some unfortunate experiments.
With a properly defined Bill of Mental Rights, however, an upload community could be a truly marvelous place. It would be like the concentration of talent of a Hollywood or Silicon Valley--centers of great creativity and an enormous value to humanity as a whole.
Robin's scenario precludes some of these concerns by being very specific to a single possibility: we have the technology to copy off any single particular human brain, but we don't understand brains well enough to modify them arbitrarily. Thus the copies have to be operated in a virtual reality that is reasonably close to a simulated physical world.
There is a good reason for doing it this way, of course: that's the only uploading scenario in which all the social science studies and papers and results and so forth can be assumed to still apply. Any other scenario, and you'd have to examine a lot of assumptions on a case-by-case basis.
But it also allows a different kind of analysis, which I don't remember Robin doing in quite this way in the book: We can examine the question "What is it like to be an Em?", in the same spirit as the philosopher Nagel and his bat. And, again because of the assumptions, you can do this by pretending that you, the Em, are living in an actual, comprehensible, physical world.
There was a Star Trek episode ("A Taste of Armageddon") in which there was a planet whose highly advanced civilization featured disintegration chambers -- people walked in, and nobody walked out. The Em world would have these, but with a second control button: someone walks in, and identical twins walk out.
Furthermore, the process is, again by Robin's explicit assumption, inexpensive. Now I estimate that with current computing technology, the hardware it would take to run a human-level AI would cost a million dollars. But given the assumption that we don't understand how the brain works and thus can't optimize it, and therefore have to run a direct-copy simulation at a fairly low level, you might reasonably be talking about orders of magnitude more computing power to run a human mind.
Yes, we'll probably get there, with Moore's Law. But the take-away implication is that in the Em world, processing power will be very, very cheap.
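The argument in the two paragraphs above can be sketched as a back-of-envelope calculation. All the specific numbers here are illustrative assumptions of mine (the million-dollar starting point is from the text; the overhead factor, halving period, and "cheap" threshold are made up for the sketch), not figures from the book:

```python
# Back-of-envelope: how long until a low-level brain emulation is cheap,
# assuming Moore's-Law-style cost halving? All inputs are illustrative.

initial_cost = 1_000_000      # assumed hardware cost today for a human-level AI ($)
emulation_overhead = 1_000    # assumed penalty for unoptimized, low-level simulation
halving_period_years = 1.5    # classic Moore's Law halving period
target_cost = 10_000          # arbitrary "very cheap" threshold ($)

cost = initial_cost * emulation_overhead
years = 0.0
while cost > target_cost:
    cost /= 2                 # hardware cost halves each period
    years += halving_period_years

print(f"~{years:.0f} years until an em runs for under ${target_cost:,}")
```

With these particular numbers the threshold is crossed after a couple of decades of halvings; the point is not the exact figure but that any fixed overhead factor gets eaten by exponential cost decline, which is what makes processing power "very, very cheap" in the Em world.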
What that means is that to an Em, things are very cheap. Everything in the Em world is part of a simulated VR; any object is just a piece of software. An Em can create or copy giant machines, tropical islands, great cities (minus the people), and probably Mars-like planets with the wave of a hand. "What it would be like" is like living in a Utility Fog world. The hardest part of creating virtually anything would be deciding and describing what you wanted.
Another salient aspect of Robin's Em world is his original economic insight that if people are cheap to copy (and the process is fast), this would induce a Malthusian dynamic and wages would tend down toward subsistence level. Furthermore, the people selected for uploading will be highly intelligent and motivated.
Now where have you ever lived where there were lots of creative, intelligent people, many just like you, doing remarkable wonderful things with big expensive toys (some of which you designed!), but you only got, personally, subsistence wages for it?
Yep, you got it. Graduate school. What it is like to be an Em is ... a graduate student.