One of the more common posthumanist themes is that we can upload ourselves into a computer matrix of some kind (and indeed that we may already be in such a matrix) and live forever (or until the matrix dies). As a thought experiment, this serves to challenge our intuitions and raise interesting questions, which is what thought experiments are supposed to do. The trouble begins when we take it seriously as a real option.
The brain, we can safely say, is not really a computer. But then, neither is my laptop. This seems wrong – if my Apple Mac isn’t a computer, then what is? The answer is due to one of the most important thinkers of the twentieth century: Alan Turing. Turing was a mathematician working on the Entscheidungsproblem (German for “decision problem”), posed by the German mathematician David Hilbert in 1928. Without too much detail, this is the question of whether an algorithm can always work through all the implications of some axioms (presumed true statements) and decide whether any given conclusion follows from them. Turing recast it in terms of what we now call the Halting Problem: will a program, as we now know algorithms, ever stop? The result he came to in a 1936 paper was that there is no general method for showing, for all algorithms, whether they reach a decision (in German, Entscheidung). But the point for us here is how he did this.
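Before we look at how, here is the flavour of that 1936 result, as a minimal sketch in Python. The names (`halts`, `paradox`) are my own illustrative inventions, not anything from Turing's paper:

```python
# Suppose, for contradiction, we had a perfect oracle that could tell us
# whether any program halts on a given input:
#
#   halts(program, data) -> True if program(data) eventually stops
#
# Then we could write the following perverse program:

def paradox(program):
    if halts(program, program):   # "if you would halt when run on yourself..."
        while True:               # ...then loop forever
            pass
    # ...otherwise, halt immediately

# Now ask: does paradox(paradox) halt? If the oracle says yes, it loops
# forever; if the oracle says no, it halts at once. Either way the oracle
# is wrong, so no such oracle can be written.
```

That, in caricature, is why there can be no general decision procedure: some questions about what programs do cannot themselves be answered by a program. Turing's actual route to this conclusion went through his machine.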
Turing came up with the idea of a machine with an infinitely long tape that gets written to, read from, or erased at a location by a “head”. The tape represents the working space for the machine, and the symbol read at each step, together with the machine’s current state, determines what the head (which can move along the tape or stay in one spot) does next. So will a Turing Machine get stuck, or work forever, or come to a conclusion? This is a rather important question not only for mathematics or philosophy, but also for computer designers and programmers. If an algorithm can reach a solution in a finite number of steps, then what it computes is known as computable.
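For the concretely minded, a Turing Machine is simple enough to sketch in a few lines of Python. This is my own toy version, nothing canonical; note that the simulator has to give up after a fixed number of steps, because, per the Halting Problem, it cannot know in general whether the machine it is running will ever stop:

```python
# A toy Turing Machine simulator -- an illustrative sketch only.
from collections import defaultdict

BLANK = ' '

def run(rules, tape, state='start', max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right) or 0 (stay put)."""
    cells = defaultdict(lambda: BLANK, enumerate(tape))  # the infinite tape
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            return ''.join(cells[i] for i in sorted(cells))
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return None  # gave up; whether it would EVER halt is the Halting Problem

# Example machine: flip every bit, then halt at the first blank square.
flip = {
    ('start', '0'): ('start', '1', +1),
    ('start', '1'): ('start', '0', +1),
    ('start', BLANK): ('halt', BLANK, 0),
}
print(run(flip, '0110'))  # -> '1001' plus a trailing blank
```

The rules table is the whole “program”; everything else is tape, head and bookkeeping. The max_steps cut-off is not part of Turing’s machine but a concession the simulator must make, for exactly the reason just given.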
So the point I was making is that although my Mac closely resembles a Turing Machine, in fact it isn’t.1 Turing Machines do not suffer from power outages, limited memory, failures of components, electromagnetic pulses and the like. But, for our purposes, it’s close enough. On the other hand, something that is not algorithm-driven, like a pair of dice, is not any kind of Turing Machine for our purposes. The question is whether the mind is a Turing Machine, or a close instance of something like one, and if you simulate it on a Universal Turing Machine (UTM), is the simulation “the same” as the mind it simulates? We are back at the ontological question. Is the IT a BIT (and vice versa)?
This is a bit more complex than our previous rumination about information, but it resolves to the same issue: the reality of abstractions. Just as my Mac is a physical object that is near enough to a Turing Machine for it to help us understand what the Mac does, the brain apparently does some things that are near enough to a TM (those ANNs – artificial neural networks – in machine learning, for example). But that only pertains to a part of what the brain does. The central nervous system is analogue, so digital models are at best approximations.
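To see in miniature what “at best approximations” means, here is a sketch of digitising one analogue process: a quantity decaying continuously, as in a leaky integrator, simulated in discrete time steps. The model and parameter names are illustrative only, not a claim about real neurons:

```python
import math

def euler_decay(v0, tau, t_end, dt):
    """Digital (discrete-step) simulation of the analogue law dV/dt = -V/tau."""
    v = v0
    for _ in range(int(round(t_end / dt))):
        v += dt * (-v / tau)   # each finite step is an approximation
    return v

exact = math.exp(-1.0)   # the analogue answer: V(1) for V0 = 1, tau = 1
for dt in (0.1, 0.01, 0.001):
    err = abs(euler_decay(1.0, 1.0, 1.0, dt) - exact)
    print(f"dt = {dt}: error = {err:.2e}")  # smaller steps, smaller error, never zero
```

Each discrete step introduces a small error; shrinking the step shrinks the error but never eliminates it, and a brain-scale model multiplies such approximations by billions of units.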
Every formal model is two things: it selects some properties but not others to model (for tractability if nothing else), and it is a representation of what is being modelled. In order to represent the world, we need to interpret the model, which is in effect to map variables to types of objects or processes. For example, E = mc² needs a thesaurus in which the symbols represent energy, mass and the speed of light in a vacuum.
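A model with its thesaurus attached looks, in sketch form, like this; the values are standard physical constants, but the point is that the symbols mean nothing until the mapping is supplied:

```python
# The "thesaurus": an interpretation mapping symbols to the world.
c = 299_792_458    # c -> the speed of light in a vacuum, in metres/second
m = 1.0            # m -> a mass, in kilograms
E = m * c**2       # E -> an energy, in joules

print(E)  # ~8.99e16 J. Without the mapping, E = m * c**2 is just syntax.
```

Strip the comments away and the same arithmetic could be about anything at all; the interpretation is doing the representational work.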
Now suppose we could non-destructively map a brain down to, say, the cellular level, and build a virtual model of it. Would it be “the same” thing as the brain? Would it even respond to stimuli in the same way? Would it be that person or animal? More importantly, what would it leave out? In short, is the Matrix even capable of simulating a mind?
One thing that is missing in such a simulation is a body and a world that body inhabits. Again, we can simulate these, but again, what are we leaving out?
The reasons these questions are important are perhaps obvious, but for the gappists, what is being left out is the feels – the phenomena of being conscious. So we must ask ourselves: can we specify these? In what terms?
By definition, a physical description is a model of something in the world. Given our limitations as cognitive systems, any physical description we may give of anything will leave things out. Science, we may say, is the art of saying as much, as accurately as we can, in as few terms (or variables) as possible, so that we can handle and understand the world. So, any physical description (or encoding) will leave out something that is real, and that means the uploaded mind will be something other than the physical mind.
If the mind relies upon what some call embodiment (but note the nounification here!), then an AI will also need embodiment of some kind. There is an effective infinity of logically possible worlds, and so a real AI will need to be able to perceive and manipulate the actual world to stay grounded and not end up in some fantasy or hallucination of its own. And no matter in what state an uploaded mind begins, it is an AI after all, and has the same limitations as a constructed AI. The Matrix has in-built flaws relative to the physical world, even if we think the universe is something like a hologram on a boundary surface of a quantum space.2
So, I think it beyond our capacities, both in practice and in principle, to create a neural model of our selves and upload it to a computer. In short, anything we can upload to a computer is going to be a gross simplification, and anything that is not a simplification is going to need to be a physical object, like a human brain in a human body. And as for the promise of immortality, well, that just runs up against the laws of thermodynamics. Little errors will creep in, stars will die, the entropy of the universe will reach its maximum. And computers will eventually run out of replacement parts and operators. Even Apple computers.
One caveat here, though, is that physical analogue computers (not digital nor quantum) could conceivably be made. After all, that is effectively what happens in the brain. This won’t solve the simplification issue, but it may make computation happen more like a brain. Only computation is an abstraction, and...
All this notwithstanding, models are a necessary part of knowledge. They idealise complex aspects of the observed universe, predict things not yet seen, and suggest causal relations for those aspects. In short, a model is a guide to explanation, and nobody can doubt that a working model of human minds would be of great benefit to our understanding.
But each model is a mapping of processes, not the processes themselves. Nothing less than a mapping of the entirety of the brain, its body, its environment, and all the other brains, bodies and environments it interacts with would exhaust the information required for a one-for-one map of a single person’s brain. At best we can produce an abstraction of a mind, one which may in some salient ways resemble the original mind, but we cannot upload ourselves into immortality.
1. That is, it isn’t a Universal Turing Machine. It may be a more limited TM with the limitations of a physical computer programmed in. UTMs can simulate any TM.
2. Which I do not. Again with the misplaced concreteness.
"...even if we think the universe is something like a hologram on a boundary surface of a quantum space. 2...Which I do not. Again with the misplaced concreteness."
This is interesting and I don't fully follow you... Let's see, the concreteness you reject there is that "boundary surface"? Because it's a reification of a concept? Are we in 'there are no objects' territory, but processes on processes on processes, which is physics, therefore physicalism?