Every era conceives of the hard things in terms of something, usually a technology, that is more or less understood. And in our (western) history, little is harder to understand than the mind and its consciousness. Consequently, the latest technologies have always been employed as analogies for mental processes and capacities.
With the rise of mechanical engineering in Europe in a big way, around the end of the sixteenth century, the notion of a mechanism became widely appreciated. This is the era of Cudworth denying that human souls are mechanisms, and of Descartes asserting that, while humans were not mechanisms, animals certainly were automata without the “feels” (my word) of sentience.
Leibniz, who was a contemporary and occasional competitor of Newton’s, used a mill analogy:
It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine. (Monadology, sect. 17)
In other words, there is nothing material and mechanical about the mind, only a non-material substance. I think there’s a fallacy involved here, but I’ll return to that. Since it was assumed by all that humans had the feels, in this case perception, it followed that the substance must be real. The philosopher Robert Cummins dubbed this “Leibniz’ [Explanatory] Gap”, and it means that no material account of feeling is possible, which is, after all, what modern attacks on monism also argue for.
Very often this is cast as an explanatory issue: the argument is about explaining and predicting how a (mechanical) system will behave. Other times it is a question of ontology: of what actually exists. And these must be held distinct. As I noted before, the argument from the abstract to the concrete is a bad argument indeed, although not strictly a fallacy, just a general philosophical mistake. But consider for a moment why it is so common a mistake.
Humans evolved, and what they evolved to do, as opposed to what they inherited from their ancestral lineage, is to interpret their world. That is to say, they use their brains and bodies to interact with their environment. This is true of all organisms with a nervous system, but we do something other species generally do not: we talk about it. Now, many species communicate, sometimes in quite sophisticated ways. But they tend not to communicate with symbols, even if they use signs. That is, when a bird signals a threat or a bee the location of pollen, there is nothing much symbolic about it. There is a literal threat, or literal pollen, and they exist in the immediacy of the situation. Humans do something other species can only do rarely: they communicate with what primatologists and anthropologists call displaced reference. This is talking or communicating with signs about something not in the immediate context, like home, or ancestors, or resources that won’t be available until the next season.
There is a reason why humans do this, I believe. While we did not adapt to a single aspect of our world (or even to all of them), one feature of our environment created continual and intense selection pressure: other humans. Understanding them, predicting their actions, teaching non-present skills, finding ways to use the environment and passing them on (like making fire), and so on, made symbolic communication something our species alone developed.1 Now, with this comes a certain anthropocentrism. We can reason about the motives and intentions of our fellow humans, and to an extent about the other animals (although not very well – read something in the bestiary tradition from the Middle Ages about the character and plans of animals), and so we think we have a handle on the world. And so “man is the measure of all things”, as Protagoras is said to have claimed (he was before Plato, who reports it). Far from being a kind of relativism, in this context it means that humans think that they know what is and isn’t. They do not, usually, and when they do, they have to work hard to get that knowledge.
And this leads us to the topic of this section (it took some time!). After Leibniz’ mill analogy, the mind (or rational soul if you have a more classical education; in Plato’s works this is noûs) was analogised (or metaphored) to steam engines, telegraph networks, telephone exchanges, electrical devices, electronic devices, and in a final abstraction, a mathematical model of information. If ever there was an issue with abstraction, thinking the mind is made of mathematics is perhaps the ultimate sin. To make this out, let me discuss something else: spear throwing.
In the 1980s, William H. Calvin was a neurosurgeon who argued that the reason for our big brain was that we needed better “circuitry” (the electric/electronic metaphor) for making the calculations to throw stones and spears precisely. But those who throw do not make calculations (not explicitly, and from what we know about neurobiology, not implicitly either). Instead they (their coarse and fine motor systems) learn by experience how to throw things. It is trial and error, or as we usually call it, practice. Calvin used mathematics to model what was going on in an entire embodied system, not just in the motor-control part of the nervous system. Similar concerns apply when we say that we “are” Bayesian, or that the neurone is a signalling system in a literal sense. It is just that we use mathematics both to describe and to explore our models of things. But the models are not the things being modelled.
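To put the contrast crudely in code: the routine below never solves a ballistics equation; it just nudges the throwing angle after each miss, the way practice does. It is a toy sketch only – the world function, the numbers and the adjustment rule are all invented for illustration, not a claim about motor neurobiology.

```python
import math

def world(angle_deg, speed=20.0, g=9.8):
    """The environment: where a throw actually lands. The learner never sees
    this formula; it only sees the outcome, the way a thrower only sees
    where the spear fell."""
    return speed ** 2 * math.sin(2 * math.radians(angle_deg)) / g

def practice(target, throws=50, angle=10.0, step=2.0):
    """Trial and error: nudge the angle after each miss. No ballistics are computed."""
    for _ in range(throws):
        miss = world(angle) - target
        if abs(miss) < 0.1:
            break                              # close enough: stop practising
        angle -= step if miss > 0 else -step   # overshot: lower the angle; undershot: raise it
        step *= 0.9                            # adjustments get finer with practice
    return angle

print(practice(target=30.0))  # settles on a workable angle without solving any equations
```

The mathematics here lives in our description of the environment and of the learning rule, not in the thrower.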
In 1989, the physicist John Wheeler made the claim that all “its” are just bits:
It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin… (Information, Physics, Quantum: The Search for Links, p. 311)
Similarly, the physicist I mentioned before but did not name (Max Tegmark) holds that the only properties subatomic particles have are mathematical. But if mathematics is not the language of the universe itself, and is instead our language for investigating the universe, as I think, then this is a classic case of reification. It seems physicists, or at least theoretical physicists, are prone to this tendency.
But as a founder of modern information theory noted:
Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. (Wiener 1948: 132)
Information theory was a mathematical model of the engineering needed to transmit messages; whether those messages were meaningful or not was beside the point. It was an abstraction of transmission, and Claude Shannon, who first described it in 1948, called his theory a “theory of communication”. However, information became all the rage from that era on, and it did not take long for it to find its way to the question of mind. Information became a quasi-physical property of quasi-physical things.
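For a sense of just how abstract the model is, here is a small sketch of Shannon’s measure of information, the average number of bits per symbol a source produces. The four-symbol source and its probabilities are invented for illustration.

```python
import math

def shannon_entropy(probs):
    """Average information of a source in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A toy four-symbol source; the probabilities are made up for illustration.
source = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}
print(shannon_entropy(source.values()))  # 1.75 bits per symbol
```

Nothing in the calculation cares what “A” or “B” mean; meaning is, exactly as Shannon intended, beside the point.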
At first, researchers were mainly concerned with the question of how physical nerve cells processed information to give outputs, and how they “learned” (what we today call Machine Learning). Warren McCulloch and Walter Pitts in 1943 had abstracted what was known of the architecture of neurons to propose what is now known as the McCulloch-Pitts neuron. Later, Frank Rosenblatt developed the notion of a Perceptron. This was a program that could revise the weights governing its binary output so as to identify the nature of its inputs more accurately. This is now called a “classifier”, and the overall project was to create “threshold logic” systems that mimicked the human neuron. Instead of the simple on-off signal of Shannon information, where the term “bit” was first coined, these simulated neurons would “fire” (send a signal to the next attached neurons) only when a threshold of weighted input signals had been reached, and the more often that threshold was reached, the more likely the unit became to fire.
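A minimal sketch of the kind of threshold unit being described might look like this. The weights, threshold and learning rate are hand-picked for illustration; Rosenblatt’s actual perceptron rule and training regime were more elaborate.

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts style unit: output 1 ('fire') only if the weighted sum
    of inputs reaches the threshold, otherwise output 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these hand-picked weights and threshold the unit computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_unit((a, b), weights=(1, 1), threshold=2))

def perceptron_update(inputs, weights, threshold, target, rate=0.1):
    """Rosenblatt-style learning step: nudge each weight when the output is wrong."""
    error = target - threshold_unit(inputs, weights, threshold)
    return [w + rate * error * x for w, x in zip(weights, inputs)]
```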
But information was not the probabilities of firing, nor the signal (or rather, the cell-to-cell chemoelectric triggering). It was, instead, a general abstract model of what was going on in the brain, and even then it was a gross oversimplification of the biology. It left out (as being uninteresting) the glial cells, which both support the neurons and supply them with the rather large amount of energy they require to function. Real neurons have a period of rest, known as the “refractory period”, before they can fire again. Perceptrons and the like do not. What gets left out of a model is as significant as what is included, and it has implications both for the dynamics of the model and for the longer inferences we can make about actual brains.
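To see how much even one omitted detail matters, here is a toy neuron with a refractory period bolted on. Every number in it is invented; it is a caricature for illustration, not a biophysical model.

```python
class ToyNeuron:
    """Integrates input, fires at a threshold, then stays silent for a fixed
    refractory period. All parameters are invented for illustration."""
    def __init__(self, threshold=1.0, refractory_steps=3):
        self.threshold = threshold
        self.refractory_steps = refractory_steps
        self.potential = 0.0
        self.rest_timer = 0

    def step(self, signal):
        if self.rest_timer > 0:       # still resting: cannot fire, whatever the input
            self.rest_timer -= 1
            return 0
        self.potential += signal
        if self.potential >= self.threshold:
            self.potential = 0.0
            self.rest_timer = self.refractory_steps
            return 1                  # fire
        return 0

neuron = ToyNeuron()
print([neuron.step(0.6) for _ in range(10)])  # [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
```

A bare threshold unit fed the same inputs would fire every time its threshold was met; the refractory period alone caps the firing rate and changes the temporal dynamics, which is precisely the kind of difference that gets abstracted away.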
A newish kind of information model is IIT: Integrated Information Theory. Proposed by Giulio Tononi and his colleagues, it is one version that does not commit the fallacy of misplaced concreteness. Its proposals are based on axioms and their implications, but it does distinguish between the formalisation of integrated information and the physical substrate, and it has the virtue, if it succeeds, of not restricting its claims about consciousness to neurotypical adult humans, or even to humans exclusively. However, there are physicalists who reject it as a scientific theory, including the philosopher who has most strongly argued for a physical brain account of mind, Pat Churchland. IIT proposes that the metric of consciousness is a value of integration (ɸ) of informational networks. Now, on the one hand I applaud the argument for its notion that consciousness is (an abstract discussion of) a physical state, but on the other, IIT (and the other competing or complementary hypotheses) accept Leibniz’ Gap. I am not a neurobiologist (nor a neurobiologist’s bootblack), but it behooves a philosopher to stick to philosophical tasks. I think, therefore, that the Gap is nonexistent, as I will discuss in the next post. In short, so far as I am concerned, there is no Hard Problem.
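IIT’s actual ɸ calculation is far more involved than anything I can show here, but the bare intuition – that an integrated system carries information over and above what its parts carry separately – can be caricatured in a few lines. The toy “systems” and the mutual-information measure below are my own illustration, not Tononi’s formalism.

```python
import math
from collections import Counter

def entropy(states):
    """Shannon entropy (in bits) of an observed sequence of states."""
    counts = Counter(states)
    n = len(states)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def toy_integration(joint_states):
    """Caricature of 'integration': the mutual information between two units,
    i.e. what the whole carries beyond its parts taken separately. NOT IIT's phi."""
    a = [x for x, _ in joint_states]
    b = [y for _, y in joint_states]
    return entropy(a) + entropy(b) - entropy(joint_states)

independent = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the two units ignore each other
coupled     = [(0, 0), (1, 1), (0, 0), (1, 1)]   # the two units always agree
print(toy_integration(independent))  # 0.0 bits: no integration
print(toy_integration(coupled))      # 1.0 bit: the parts are informationally bound together
```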
The hard/Hard distinction is one made in the work of Chalmers, and I think it smuggles into what needs to be explained an unsupported set of premises: the reality of qualia. According to him, there are some mental processes, such as perceiving, classifying, judging and deciding, which are hard to model and so on, but which are amenable to physical explanation. But the feels of being aware, seeing, deciding and so forth have an irreducible aspect, and they are not, as Leibniz said, found in the physical mill. This is the capital-H Hard problem. IIT folk think it is explained; others punt for Higher Order Theories (HOT), and some like Global Neuronal Workspace Theory (GNWT). I have no opinion about the merits or failures of these as scientific theories. Philosophers supervene upon science; they cannot direct it. As far as I can tell, though, everyone concedes there is a Hard problem, and that is philosophical.
1. When I say “our species” I mean our species is the last one standing of a group of species that might have also developed this ability. I think it likely this occurred with Homo ergaster about 1.5 million years back. I also do not think that much rests on the notion of species.