Advanced Philosophy

Philosophical Zombies and Us

What does it mean to be a conscious being?

Stefan Schroedl
Published in The Apeiron Blog · Nov 19, 2020



A philosophical zombie is a hypothetical human being who walks, talks, and behaves exactly like a normal person; we could subject it to brain scans or any imaginable examination and find nothing unusual. However, even if it claims otherwise, what makes it a zombie is that it lacks consciousness (or, as philosophers say, qualia): a subjective, inner world of experiences.

Our personal introspection strongly convinces us that we are not zombies ourselves; we know first hand “what it is like” to be us. So could we spot a philosophical zombie if it crossed our path? Or, put the other way around: as our technology perfects the art of building ever more convincing humanoids, how sure are we that something resembling consciousness won’t emerge along the way?

These questions seem impossibly difficult and elusive. Our experiences appear utterly disconnected from the rest of the physical world, immediate, unique, indivisible … How could we ever imagine accepting any physical explanation at all? But maybe the answers are just too close for us to see clearly …

In the rest of the article, I will try to make the case that when we have heated debates about mental capacities such as consciousness and intelligence in other beings, animals, or machines — all we are really debating is the use of words, not “reality.” Ultimately, language is a tool built for the practical purpose of survival rather than primarily for finding the ultimate truth. But it is easy to forget that even such a powerful tool has its limits.

If you agree with this thesis — and it might not be such a big surprise after all — feel free to stop reading here. Otherwise, please bear with me while I widen the scope to make my case. Let’s start by considering, in a little more detail, the problem we are concerned with:

The “Hard Problem” of Consciousness

This question revolves around explaining why and how we subjectively sense, think, and feel the way we do, as opposed to the “easy” problem of explaining cognitive functions from a purely third-person, behavioral perspective.

The underlying assumption is that our mental processes are part of the physical world and can thus, at least in principle, be observed and described as such. The Western tradition of philosophical thinking has long been shaped by an explicit or implicit dualism between a physical body and an (immortal) soul residing in a different metaphysical space. But how could something interact with the brain according to the laws of physics while not being a physical entity itself? Alternatively, a non-material consciousness could run in parallel with physical processes without affecting them in return (a position known as epiphenomenalism); that would greatly diminish its significance, though.

Neuroscience has shed a lot of light on how the brain works; to mention just two striking examples: We have gathered more detailed data on how unconscious brain activity anticipates our perceived free decisions; and we can guess our dreams by reading and interpreting electrical brain waves during sleep. It looks increasingly unlikely that these correlations between brain activity and conscious experiences are coincidental.

We also learn about mental processes, including but not limited to consciousness, by trying to build them. Long gone are the days when we considered playing chess a hallmark of human-only ingenuity. “Artificial” intelligence now routinely beats grandmasters at almost any type of game; it also diagnoses X-ray images, answers questions in dialog form, translates between languages, and drives cars.


Our self-image as the crown of creation stands deeply challenged, and it is only human that we fight for it tooth and nail. We keep moving the goalposts. We dismiss problems once they are solved: according to Tesler’s Theorem, “Artificial intelligence is whatever hasn’t been done yet.” A similar tendency has played out in neuroscience. When we started to discover the specialization of brain regions some 200 years ago, our first maps of the brain featured countries of virtue and character. When we later learned that most brain areas are actually concerned with “lower-level” functions such as the regulation of equilibrium and visual processing, we retreated our ideas of the seat of “core humanness” to the frontal lobe and the amygdala. We imagined an inner homunculus watching our perceptual movies in a Cartesian Theater and giving orders for our actions. But what would that homunculus’ brain look like? It is an untenable onion argument: each explanation contains another little observer who needs explaining in turn.


Another frequent objection requires a solution to share the brain’s physical (i.e., neural) architecture — an argument countered by Drew McDermott: “Saying [the chess computer] Deep Blue doesn’t really think about chess is like saying an airplane doesn’t really fly because it doesn’t flap its wings.” So, alas, this story seems to be just another chapter in the book of our expulsion from the center of the universe, following the demise of the geocentric world view, the doctrine of the life spark, and the discovery of evolution.

Perception is Indirect Interaction

We can only be aware of our environment by means of our five senses. Take my perception of a buzzing fly: it correlates with a certain conscious brain process in my frontal lobe, initially set into motion by a hair cell in my cochlea. That neuron was triggered by a fluid wave, in turn caused by a column of air rhythmically pressing on my eardrum. From the signal delay between my ears, I estimate the location of the source, and, consulting my experience, I become aware of, and bothered by, the insect — a mental model I can name. Obviously, I have no conscious insight into any of these processing steps; only the final mental model seems to occur, instantly and effortlessly.
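The localization step can be made concrete. Here is a minimal sketch of the underlying geometry, assuming an idealized head and round-number constants (the function name and parameter values are illustrative, not from the article; real auditory localization also uses loudness differences and spectral cues):

```python
import math

def source_azimuth(itd_seconds, ear_distance=0.21, speed_of_sound=343.0):
    """Estimate the horizontal angle (degrees) of a sound source from the
    interaural time difference (ITD): the delay between the sound arriving
    at one ear and at the other.

    Simplified geometric model: itd = d * sin(theta) / c,
    hence theta = arcsin(itd * c / d).
    """
    ratio = itd_seconds * speed_of_sound / ear_distance
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))

# A fly buzzing straight ahead produces no delay between the ears ...
print(source_azimuth(0.0))                 # 0.0
# ... while a delay of 0.3 ms places it roughly 29 degrees off to one side.
print(round(source_azimuth(0.0003), 1))    # 29.3
```

The brain performs an equivalent computation unconsciously; we only ever experience its output, the located fly.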

Any perception is mediated through physical interaction. We have a biased and subjective picture, but there is no alternative way to obtain information. “Real things” do not exist; they are projections of our sensing “in here”, in our body, onto space “out there” in the world. What sounds like a constant-pitch tone is really a periodic movement of air molecules. The literal “truth” of our mental image is secondary as long as it helps us classify, predict, and act in the world in a way that favors our survival. Inner models work by correctly reflecting individual aspects, through something we could call analogy, simulation, or, mathematically, a homomorphism.
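To make “homomorphism” concrete, a textbook example (my illustration, not the article’s): the logarithm maps multiplication onto addition while preserving the structure of the operation. A slide rule exploits exactly this — it models multiplication of numbers by addition of lengths, even though lengths are not numbers. Inner models mirror the world in the same structure-preserving, not literal, sense.

```python
import math

# A homomorphism preserves structure: log(a * b) == log(a) + log(b).
# The "model" (sums of logs) faithfully tracks the "world" (products),
# without the model being the thing it models.
a, b = 3.0, 7.0
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
print(a * b, math.exp(math.log(a) + math.log(b)))  # both ~21.0
```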


In addition to an inner model of our surroundings, it is useful for us to have a model of ourselves. The sensorimotor cortex is a map of our body, distorted but roughly topologically correct. At any point in time, we have a rough sense of the position of our body; this helps us plan the motion for picking up a cup of tea from the table. Of course, this self-image is highly simplified — we couldn’t possibly distinguish the dozens of muscles that orchestrate the motion.

Consciousness

Consciousness is fleeting, constantly shifting. Notice your breath! Now that you have read this sentence, you are aware of your breathing, but a second ago you were not.

At every moment, a flood of information is streaming through our brain, but we are oblivious to most of it. There is evidence that subunits of our brain act much more independently than we think. The brain has spatial extension; processing and transmission between areas take non-negligible time.

According to Marvin Minsky, consciousness is a big suitcase of disparate and complex skills; our brain works like a society of loosely cooperating, independent agents. Oliver Sacks describes, in a very observant and empathetic way, the stunning inner world of people for whom this cooperation has broken down. Consider blindsight: people who have been clinically shown to be unable to see, but who still somehow manage to avoid walking into obstacles. Some motion-planning part of the brain must be disconnected from the language and memory parts.

Attention is a spotlight that picks out what is important right now from a chaotic sea of sensations and impulses. It gives us the impression of a single vantage point. Attention-schema theory posits a mental model of our own attention, analogous to our internal body model. Like the body model, it omits a lot of detail, and is therefore not really able to explain why our thoughts shift from one thing to another at any given moment.

That is why attention seems to have a detached, free, and mysterious quality — and voilà: consciousness. A major evolutionary advantage of this inner model is that it allows us to predict the behavior of our fellow humans. Our theory of mind models why others act as they do, what they might think, and what they know and don’t know. It enables chimpanzees to understand cheating. Neuroscience has discovered mirror neurons that appear to serve this purpose.

Language

Language is doubtlessly one of the most distinctive hallmarks of our species. It has exponentially scaled our social lives and our accumulation of knowledge; it provides horizontal inheritance in addition to the much slower vertical (genetic) inheritance. We compose words into sentences according to grammatical rules; these express our thoughts, experiences, and feelings. The phonetic sound of a particular word is a mere convention; its meaning derives entirely from its typical usage pattern and context. Thus, language relates to our internal mental processes in exactly the same way as our perceptions relate to sensed objects in the physical world: as an imperfect but useful model.

The fact that we seem to understand each other, at least to some degree, points to the similarity of our internal processes. But what about the notorious cocktail-party question: could your experience of the color “red” feel like my “green”, and vice versa? The answer is that the question is meaningless because of what it implicitly supposes. No question is imaginable whose answer would reveal that difference. The fallacy is exactly parallel to what I said above about our senses: all we can ever go by are analogies and mental models of observations.

And here we project our linguistic concepts of “greenness” and “redness” into an independent, fictitious space of experiences. Sometimes we just take our words and ideas way too seriously — e.g., when reasoning about essential properties (a thing’s supposed true and characteristic nature) versus merely accidental ones.

… And Finally, the Philosophical Zombies are Back

I hope I haven’t lost you on this admittedly circuitous route, but fortunately we are now back at the concept of the philosophical zombie — an unconscious person who nevertheless acts like a human in every discernible way. So, is that idea really conceivable? Is it possible? Let’s recap what we talked about above: we can only ever learn anything about “reality”, including mental states and processes, by indirect interaction, which we also call observation. Our observations are framed and communicated in words; words derive their meaning solely through their patterns and contexts of usage, i.e., their correlation with observations. They represent useful concepts and ideas, but there are no “real things” beyond what we ascribe to them.

The cognitive capacities of others can never be perceived first hand, only inferred from observing their (broadly defined) behavior and comparing it with our own. Attempts at defining lists of criteria mostly seem arbitrary and stilted. How could we really know other creatures are sentient, intelligent, conscious, emotional beings? We rarely dwell on it, but if we think about it thoroughly, we can’t even tell with certainty about our fellow human beings. We simply grant others these qualities by analogy with our own world of experience. As Descartes stated, the only thing we can be sure of is ourselves: “cogito ergo sum” — I think, therefore I am.

The story gets murkier with the distance in experiences — but these are shades of the same question; there is no sudden new quality. Our pets appear to have their own little personalities; they seem to love and adore us. Who are we to decree that they have no soul? Moving on from there, the philosopher Thomas Nagel famously pondered “what it is like” to be a bat. We have learned that octopuses show signs of extraordinary intelligence, even while “thinking” with their arms. With typical human hubris, we graciously concede other species some fuzzy, inferior level of consciousness. Our empathy falls off sharply with the difference in experience, and, given our inherently tribal attitudes, this even applies to other people.

In the nineteenth century, chemists discovered that organic compounds such as urea, basic building blocks of living cells, could be created by ordinary chemical reactions. But due to the enormous gap in complexity between such molecules and actual living organisms, people couldn’t fathom ever being able to explain life. So they resorted to the theory of an invisible life spark (vitalism) that made living creatures fundamentally different from non-living objects. Very few people believe that today, although technically it hasn’t been, and in fact cannot be, refuted. You are free to cling to the notion, even though it offers no explanatory value. In other words: “life spark” is just a word.

Today we are going through a similar shift again. We should not confuse our words, such as “intelligence” or “consciousness”, with the observable phenomena themselves. In short, there is no observable (and hence what we falsely but succinctly call “real”) difference between a philosophical zombie and us. The “hard problem” is not so hard after all.

The Apeiron Blog — Big Questions, Made Simple.

We know that philosophy can seem complicated at times. To make things simple, we compile the best articles, news, reading lists, and other free resources to guide you on your journey. To continue with us, follow us on Medium and sign up to our free mailing list.


Head of Machine Learning @ Atomwise — Deep Learning for Better Medicines, Faster. Formerly Amazon, Yahoo, DaimlerChrysler.