So, what is the explanatory target with the mind’s consciousness? What is it we physicalists cannot explain that needs the feels to account for? What, exactly, is missing from any possible physical account?
In order for there to be a philosophical problem, it needs to be clearly stated what is in need of reconciliation. As we have noted, the issue for most is the gap between the physical description of our experiences (the “hard” problem) and the nature of the feelings we have when experiencing them (the “Hard” problem). This gap was inspired, in part, by a tradition of materialism (as it was then called) championed by Australian or Australian-resident philosophers from the 1950s to the 1970s. The individuals concerned included Ullin T. Place (an Englishman), Jack Smart (also English-born), David Armstrong, John Mackie and others in various Australian universities.
Also known as Australian realism, because of its commitment to the reality of the material world, this view triggered reactions, most notably from Colin McGinn, at Oxford, in 1989, in a paper entitled “Can We Solve the Mind–Body Problem?”. The problem came to be known as the explanatory gap. As Mark Rowlands summarised the matter in 2001:
We know that the brain does it [creates consciousness], hence no ontological gap, but we do not know how, hence the explanatory one.1
There are several issues to take up here. One is whether or not there is an unavoidable explanatory gap. The consensus of gappists, if I may so label them, is that there is no way we can understand how the brain creates consciousness, but that we know it does. This idea underlies all attempts to create conscious computers (misleadingly called Artificial Intelligences). If only we could simulate what the brain does in sufficient complexity, the thinking goes, we would “get” a conscious computer. And this problem is real enough. Those who work with Artificial Neural Nets (ANNs) to find patterns in data and classify them know that an ANN of any complexity is opaque to our understanding, because the number of connections is astronomical. And this is using only a simulation of a network of McCulloch-Pitts neurons. Recently, over 3000 types of cells have been identified in human brains, of which neurons are only 50% or so. If these other cells play a role in cognition (and I find it hard to think they do not), then our brains are many orders of magnitude more complex than any machine ANN. But this is only a hard problem, not a Hard Problem. It is not impossible, at least in principle, to explain how brains create consciousness if they do this physically, although we may never be able to do so in fine detail.
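The McCulloch-Pitts neuron just mentioned is simple enough to sketch in a few lines of Python: binary inputs, fixed weights and a hard threshold (this is a toy illustration of the 1943 abstraction, not anyone’s working system). The opacity the gappists point to arises only when vast numbers of such units are wired together and the weights are learned rather than set by hand.

```python
def mcp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fire (1) iff the weighted sum of
    binary inputs meets or exceeds the threshold, else stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With hand-set weights, single units compute simple logic gates.
# AND: both inputs must be on to reach a threshold of 2.
print(mcp_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mcp_neuron([1, 0], [1, 1], threshold=2))  # 0

# OR: either input alone reaches a threshold of 1.
print(mcp_neuron([0, 1], [1, 1], threshold=1))  # 1
```

Each unit on its own is perfectly transparent; the intractability is a property of the network, not the neuron.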
The other issue is that it is unclear how there is a something-it-is-like to explain at all. This is a somewhat larger topic. Let us start small. A thermometer reacts to its environment and produces a reading that indicates to a viewer that the temperature is hot. Well, it is actually only hot to some viewers. To return to the DC Comics universe, it ain’t hot to Superman, unless it is as hot as the cosmic egg was before expansion.
Now let’s consider something more sophisticated: a camera. Film-emulsion cameras “represented” the world by a chemical reaction caused by light. Any viewer (who could see in the right spectrum) would see what light was hitting the lens at that moment. But neither the thermometer nor the film camera actually had a representation of the world. They mediated the world to observers. To properly represent the world themselves, they would need a persistent internal model of the things seen, and even a modern digital camera doesn’t have that. There is no explanatory gap for a digital camera. What about a digital camera with a machine learning system, representing the image as a model? Still no gap. Despite its complexity, and its intractability to reduction, we do not think there is anything other than a physical process here. By induction, at no point is there a threshold where a physical system suddenly acquires phenomenal experience. What is missing, then? Why can’t we say that we, alone of all physical systems, have the feels (where “we” includes sufficiently sophisticated animals)? The answer, I believe, is that we were told so.
The explanatory gap relies upon the explanatory stubbornness of experience. But should we ask what that stubbornness applies to, we get ill-formed, suggestive and often illusory explanatory targets. What-it-is-likeness is nothing more than traffic direction – a handwave in a particular direction. Philosophical questions like “does person A see blue the same way person B does?” and “if someone had never seen red but became the leading physical expert on red-sight, would they learn anything new upon seeing it?”2 are, like all thought experiments, designed to test ideas at the limit, but they do not tell us anything about the world. If Mary knows more upon seeing red, it means she experiences red-sight. But this is a case where the properties of a physical process cause an effect. Knowing all about red (or vision, etc.) is distinct from experiencing red, both causally and in terms of knowledge. To return to bats, what it is like to be a bat is just to be a bat. All other experiential knowledge is analogy with our own.3
If we cannot define, or at least accurately specify, what the feels are, then I would say we have no explanatory gap to bridge. That we cannot do this in physical terms now is no argument that such explanations are impossible. In fact, we do not have explanations for an indefinitely large number of things, and may never have them, but that gives us no reason to think they are inexplicable. This is misplaced concreteness applied to artefacts of language.
So what explains the uniqueness of my experience versus the experience of anyone else? I believe it is a matter of perspective. Let’s return to the digital camera. What it “experiences” (processes) is determined not only by the physical structure of its systems, but by its location in time and space. If I drop my camera in a corner of my living room and it snaps a picture, it is experiencing my room in ways I am very unlikely to. However, it will recall that experience far more accurately with electronic hardware than I can with wetware. And this is a pointer to something very often left out of this analytic discussion: we are bodies as well as brains, and we exist not only in a physical and biological environment, but also in a social and cultural one, and so our experiences are determined (or constrained; I do not want to get into determinism yet) by a complex environment of salience. And since there are so many variables in these environments, it is vanishingly unlikely that any two individuals will have the same physical experience. It is, in other words, all about the perspective we have.
The impact of this way of thinking about thinking4 is seen in a particular case: that of Alfred Russel Wallace’s spiritualism. Wallace was the coauthor, with Darwin, of the selection theory of evolution, but in his later years he became convinced that the capacities of the human brain were too much for evolution (by which he meant selection) to account for. As he said, a gorilla survives with a much smaller brain than a human’s, and so our cognitive capacities were not survival-based. He proposed a dualist view of Spirit and Selection. Spirit was the non-physical driver of some aspects of evolution.
Now when I first read Wallace’s Darwinism (1890) I came across this passage among many others:
It must be remembered we are here dealing solely with the capability of the Darwinian theory to account for the origin of the mind, as well as it accounts for the origin of the body of man, and we must, therefore recall the essential features of that theory. These are, the preservation of useful variations in the struggle for life; that no creature can be improved beyond its necessities for the time being; that the law acts by life and death, and by the survival of the fittest. We have to ask, therefore, what relation the successive stages of improvement of the mathematical faculty had to the life or death of its possessors; to the struggles of tribe with tribe, or nation with nation; or to the ultimate survival of one race and the extinction of another. If it cannot possibly have had any such effects, then it cannot have been produced by natural selection. (p466)
Unlike Darwin and many evolutionary theoreticians of today, Wallace thought that selection was like animal breeding – an all-or-nothing affair. The Darwinian view was that selection is about rates of reproduction, and this was something they never agreed upon. In any case, Wallace proposed that something else was at work:
The special faculties we have been discussing clearly point to the existence in man of something which he has not derived from his animal progenitors – something which we may best refer to as being of a spiritual essence or nature, capable of progressive development under favourable conditions. (p474)
This spiritual element led Wallace to engage in the then-fashionable séance culture (much as the parapsychology of J. B. Rhine at Duke University was fashionable in the 1930s). However, when I read Wallace’s argument, every time he wrote “spirit” I read it as “society”. This was because one of the better theories of human brain growth, or “encephalisation”,5 was the so-called Machiavellian hypothesis (or Machiavellian intelligence hypothesis), in which the cognitive evolution of social primates, including us, was driven by adaptation to (and hence selection for the ability to engage in) social interactions. It takes more brain space to track our interactions with others, and as encephalisation proceeded, runaway selection led to the very large brains of hominids like Homo neanderthalensis and, of course, us. Had Wallace abandoned his “survival-first” version of selection (and allowed for mate-preference, or sexual, selection), he would not have needed to posit a dualist ontology. Today, following philosopher Kim Sterelny, this is known as social scaffolding. No magic is required.
The explanatory gap is thus not a mechanistic failure of physicalism, but a failure to treat intelligence as something that must be in its world, interactively and mimetically. It must be learned, not programmed. I’ll come back to this in the chapter Unloaded.
Rowlands, Mark. 2001. “The Explanatory Gap.” In The Nature of Consciousness, 51–74. Cambridge: Cambridge University Press.
This is called the Mary argument after Frank Jackson’s presentation in 1982. It is known more generally as the Knowledge Argument.
The same is true of what it is like to be anybody other than yourself, whether a figure from history or anyone else.
The ten-year-old son of a friend once defined philosophy as thinking about thinking, which is a pretty good first approximation.
My short introduction to classical languages at college happened after the pronunciation change, and so I believe the “c” here, and elsewhere in Greek-derived terms, should be pronounced as a hard “k” sound. I do this to annoy both classicists and biologists.