I've been thinking a bit about the Hanson/Caplan disconnect on ems. To recap and oversimplify, Hanson makes the assumption that an emulation or upload should count as a person; in a world with many times as many ems as biological humans, the happiness of the ems matters as much or more than that of the meat. Caplan thinks the opposite: only we wild-type folks are real people, and the ems are just computer programs, with the straightforward result that they can't really be happy, or presumably even conscious of any real emotion at all.
Where you fall on the spectrum between these two beliefs would seem to depend very strongly on your intuition of what you really are. As a lifelong AI researcher, for example, I have always seen myself (and of course everybody else) as a computational process that just happens to be running on an evolved biological substrate, albeit one of phenomenal sophistication and computational power. On this view, running the same computation on another substrate would not make any difference. It would not only still be conscious, and still be human; it would still be me. Hanson explicitly endorses this view by using the term "emulation" for what would otherwise be called an "upload."
What if, however, instead of simulating the brain on a neuron-by-neuron level, you started working out the functionality of various pieces of it, as we have begun to do for the pre-processing in the retina, various pathways in visual and auditory cortex, and so forth? Many of these are perfectly understandable data-processing tasks reducible to algorithms, and others might be modeled to an acceptable precision by, e.g., neural nets trained on traces from real brains.
One can take this process further, reducing larger and larger parts of the implementation of "me" to algorithmic black boxes, and losing more and more of the information processing structure that is explicitly parallel to my brain. Let us suppose that we can do this in such a way that the result continues to act just like me from the outside, doing and knowing the same things, having all my memories, quirks, personality, and so forth.
The obvious endpoint is that the whole mind becomes one algorithmic black box, related to the original person only by input/output correspondence. Is the resulting program conscious, human, me?
The problem is that this is not the real endpoint. The conceptually simplest way to implement that I/O black box is not as a mysterious machine that might intuitively be comparable to the mysterious machinery of a human brain, but as a lookup table. We input one large number, say every tenth of a second, that encodes every sensory input, and combine it with another large number that represents your memory. We look up the corresponding line in the table, where we find two more large numbers: one encodes all the nerve signals to your muscles, and the other is the new memory. That's all the mechanism; the rest is just the table of numbers.
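The mechanism really is that sparse. Here is a toy sketch of it, with a deliberately tiny state space; the table entries and the encoding of "sense" and "memory" as small integers are illustrative inventions, not anything from the thought experiment itself:

```python
# The entire "mind" is one table mapping (sensory_input, memory) pairs
# to (motor_output, new_memory) pairs. All behavior lives in the data;
# the "mechanism" below is nothing but a dictionary lookup.
TABLE = {
    # (sensory_input, memory): (motor_output, new_memory)
    (0, 0): (1, 1),
    (0, 1): (0, 2),
    (1, 1): (1, 0),
    (1, 2): (0, 0),
}

def step(sensory_input, memory):
    """One tick (say, a tenth of a second): a pure lookup, no computation."""
    return TABLE[(sensory_input, memory)]

# Run the "mind" for a few ticks on a stream of sensory inputs.
memory = 0
outputs = []
for sense in [0, 1, 0]:
    action, memory = step(sense, memory)
    outputs.append(action)
print(outputs)  # the complete observable behavior of this mind
```

In a real version each "number" would be astronomically large, encoding a full sensory frame or brain state, and the table correspondingly vast; but nothing in the mechanism changes with scale.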
Somewhere in the depths of cyberspace, John Searle is smiling.
But it gets worse. I first heard this scheme from Hans Moravec at a conference, but it also got folded into one of the weirder and more thought-provoking SF books, Greg Egan's Permutation City. Begin by considering one of the more sophisticated implementations of Conway's Game of Life cellular automaton universe. In an ordinary implementation, you compute the contents of each cell at each generation by a lookup table that encodes its state and interactions with its neighbors. But in HashLife, you don't have to do that on such a limited, atomic scale in either time or space. You have a giant hash table that stores the mapping from chunks of space to their configuration several steps of time later. The bigger and more complete your hash table, the wider the areas and more steps at a time you can skip.
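The principle can be shown in miniature. Real HashLife uses a quadtree and can skip exponentially many generations over huge regions; the sketch below only memoizes whole-grid steps, but it makes the point: once a configuration has been seen, its future comes straight out of a table rather than being computed. The function names and the frozenset encoding are my own choices, not taken from any particular implementation:

```python
from functools import lru_cache

def neighbors(cell):
    """The eight cells adjacent to a given (x, y) cell."""
    x, y = cell
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

@lru_cache(maxsize=None)
def step(live):
    """One Life generation; `live` is a frozenset of live cells.
    The cache turns every previously seen configuration into a lookup."""
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return frozenset(c for c, k in counts.items()
                     if k == 3 or (k == 2 and c in live))

# A blinker oscillates with period 2, so after the first two generations
# every further step is served from the cache without any computation.
grid = frozenset({(0, 0), (1, 0), (2, 0)})
for _ in range(10):
    grid = step(grid)
print(sorted(grid))
print(step.cache_info().hits)  # 8 of the 10 steps were pure lookups
```

Replace the grid of cells with the state of an emulated brain and the analogy to the lookup-table mind is exact: the more history you cache, the less you ever actually compute.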
Let us now return to Em City, and imagine what HashEms would be like, given precisely the strong economic pressures to efficiency that Robin depends on for the central prediction of the book. Not only do HashEms begin to skip internal steps: predictable interactions between multiple ems get folded into the table and optimized away. Ultimately, no human-level day-to-day interaction (or recreation) is explicitly computed; only the I/O behavior of Em City as a whole emerges from the table-driven black box.
Are the ems still conscious? Human? Me?