Out Of Our Heads: Why you are not your brain, and other lessons from the biology of consciousness by Alva Noe, PhD (2009, Hill and Wang, New York)
I routinely teach a course formerly called The Psychobiology of Consciousness and currently called The Mind-Body Problem. Although I am not a consciousness researcher per se, I was drawn into the field of physiological psychology because of my fascination with this topic. Like many introspective people, I “discovered” John Locke’s inverted spectrum problem long before I’d ever heard of John Locke: if you and I are both looking at a red apple, how do I know that your experience of red is the same as mine? You might see it the way I see the blue sky, or a yellow dandelion; yet having learned the term “red” for that experience – the experience of looking at such an apple – you call it red, and beyond that verbal agreement, neither of us has direct access to the other’s subjective phenomenology. Later, as a graduate student, I learned that there was such a thing as blindsight – a neuropsychological syndrome, usually caused by damage to primary visual cortex, in which a person becomes blind yet can paradoxically recognize objects by sight if forced to guess at their identity.
These examples convinced me that the best way to understand the mind-body problem – the question of how a physical brain can create ineffable subjective experiences (“red”, “cold”, “sourness”) – would be to become a sensory neurobiologist. Accordingly, I began to study the taste system, because of all the sensory systems, it was the one that seemed to have the most circumscribed phenomenological experiences. Tastes were sweet, or sour, or bitter, or salty, and that was about it. (Yes strong and weak, and yes umami or oleogustic, but nonetheless, a more manageable range than millions of colors or thousands of auditory pitches.) I styled myself as a researcher in “taste quality coding”, which is to say, I was interested in understanding the patterns of neural activity correlated with those particular experiences. In that respect my work was in the tradition of Francis Crick and Christof Koch’s suggestion that people interested in consciousness should begin to search for the neurobiological correlates of consciousness – brain activity associated with a particular feature of conscious experience.
Even at the beginning, though, I think I knew there was something wrong with this approach. There’s a danger in taking the word “coding” too seriously. When we taste something, some of our taste buds detect the molecules of our food, and cause electrical signals to stream towards our brains. Eating a sweet apple and eating a salty pretzel both cause this electrical activity, but presumably the activity is different in some way for the apple than it is for the pretzel – hence we can tell the difference, and hence we experience sweetness in one case and saltiness in another. Whatever that difference is, we might call it the code for taste quality. Like a code, the meaning (“sweetness”) is in a different “language” (a barrage of electrical impulses). However, a code implies decoding – someone or something will translate the message and experience the sweetness as a result. But is this really what happens? There’s no little guy inside of our brains that decodes the message. Our brains operate on the language of electrical impulses: there’s no need for a decoding at all. This was a thought illusion one of my scientific heroes, Robert Erickson, tried (mostly in vain) to disabuse his colleagues of. One colleague who was sympathetic was Bruce Halpern, whose article “Sensory coding, decoding, and representations: Unnecessary and troublesome constructs?” must have pleased Erickson when it was presented at a festschrift in his honor.
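To make the metaphor concrete, here is a toy sketch of what “taste quality coding” as an across-fiber pattern implies. All the firing rates below are invented for illustration; the point is that a “decoder” here is an outside observer comparing a novel pattern to stored templates – exactly the homunculus-like step that Erickson argued the brain does not need.

```python
import math

# Hypothetical across-fiber firing-rate patterns (spikes/s) for four
# taste qualities, from five imaginary neurons. Numbers are invented.
PATTERNS = {
    "sweet":  [40, 5, 10, 2, 8],
    "salty":  [6, 45, 12, 30, 4],
    "sour":   [8, 20, 50, 10, 6],
    "bitter": [3, 4, 9, 5, 42],
}

def correlate(a, b):
    """Pearson correlation between two firing-rate patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def classify(pattern):
    """'Decode' a novel pattern as the most similar stored quality."""
    return max(PATTERNS, key=lambda q: correlate(pattern, PATTERNS[q]))

# A noisy sweet-like response still lands nearest the sweet template.
print(classify([38, 7, 12, 3, 9]))  # sweet
```

The `classify` step is where the trouble lies: nothing in the brain stands apart from the electrical activity to run this comparison, which is Erickson and Halpern’s objection in miniature.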
Regardless of concerns about decoding, there is still the question of where our subjective experiences come from. The working assumption of Crick and Koch, obviously, is that they come from brain activity. Most people believe that only organisms with brains are conscious – I am conscious, the rock is not. My dog is conscious, my tomato plant is not. But if this is right (and when I get around to talking about Alva Noe, I will point out that he does not think this is right – or rather, that this is not the whole story) – then there is an interesting problem. Our brains are made up of roughly 80 billion neurons (and a comparable number of glial cells) which are not in physical contact with one another, yet we seem to have only one unified consciousness. How is such a thing possible? (And Noe would chime in here: and why is the skull a magical barrier?)
Imagine we were to remove one of these 80 billion neurons. Or a million. Or a billion. Such things happen all the time of course, as a result of aging, neurodegenerative disorders, strokes, head injuries. These events may change someone’s behavior, but they do not eliminate consciousness. But how far could we go? How many could we eliminate? (One could ask the reverse question: when does consciousness emerge in embryological development?) There’s really no principled way to answer this question. I think, in fact, that it was because of this problem that the renowned philosopher David Chalmers proposed a radical solution. Unable to draw a line, Chalmers proposes that no, we’re wrong, the tomato plant is conscious too. And so is the rock. He proposes that consciousness is a fundamental property of the universe, like mass, and that (somehow) the magnitude of the consciousness is proportional to the amount of information involved. If this sounds loopy to you, it does to me too. If I get around to reviewing one of his books, I’ll say more.
Alva Noe (remember him? This essay is about him!) has a very different answer to this conundrum. Noe believes the mistake is to start with the premise that consciousness occurs inside of us, inside of our brains. He doesn’t believe the neurobiological correlates of consciousness will reveal anything about the mind-body problem. Instead of going inward, more and more restrictively (as Penrose and Hameroff do, with their idea of consciousness as a product of the quantum states of microtubules – an idea even loopier than Chalmers’ panpsychism), Noe goes in the other direction, becoming more expansive. Noe suggests that consciousness is not something in us but something we do – and that it encompasses (is encompassed by?) our interactions with the world (including all that we are perceiving at the moment and all that we are acting upon). We should be looking not for consciousness in our brains, or even worse, in some small part of our brains (the microtubules of Penrose or the dynamic core of Gerald Edelman and Giulio Tononi), but rather in the dynamic interactions of a situated agent in its locally-accessible environment.
This may also sound like a loopy idea, but I don’t think so. Consider the following exercise I have my students try in the first week of class. Take a pencil and close your eyes. Now draw a tree on a piece of paper. As you move the pencil, ask yourself the following question: as you are guiding the pencil, do you in some sense “feel” the paper through the tip of your pencil? Most people do. (And the golfer “feels” the ball hitting the club, the blind man “feels” the grass with his cane, the gardener “feels” the roots of the bush with her rake.) Of course what’s really happening is the pencil, or the golf club, or the cane, or the rake, is vibrating against our hand and fingers in a way that we’ve learned to ascribe to that other feeling. Except that’s not quite right either, since if we are our brains, what’s really really happening is that the vibrations against our hands and fingers are causing neural activity in the hand region of primary somatosensory cortex (or somewhere “beyond” that in the neural circuitry). Or maybe the first description is right after all. Noe would argue for that more expansive view of our bodies as extended. The voice from across the room is experienced as being across the room, not in our auditory cortices and not in our eardrums.
These kinds of examples are discussed in Noe’s Chapter 4 (Wide Minds) where he also reviews some of my favorite studies from my class. There is the rubber hand illusion, in which an experimenter touches a fake hand which is visible to the subject while simultaneously touching (in the same relative location) the subject’s actual hand (hidden from view). Over time, the subject experiences that rubber hand as part of his or her own body, and feels the touch as coming from the rubber hand itself. (If you watch the video linked here, be warned that the explanation provided for the effect falls into the usual trap that Noe objects to in his book.)
Noe addresses related experiments in his Chapter 3 (The Dynamics of Consciousness) which is the chapter where his book really begins to gather momentum. Here, he addresses the rewired ferret experiments of Mriganka Sur. These technically arduous and brilliant experiments (with one outstanding flaw, in my opinion, which maybe I will write about another time) essentially produced ferrets in which information from the eyes was redirected to primary auditory cortex. These ferrets behaved as though they still experienced vision despite this redirection, and features of the auditory cortex developed a visual-cortex-like character. In the battle, in other words, between the brain (I’m auditory cortex, therefore you shall hear) and the dynamic interactions of a situated agent in its locally-accessible environment (to coin a phrase), the latter wins. The sensory-motor contingencies were visual, so the experience was visual, despite the identity of the brain region.
Related, Noe also describes another favorite of my Mind-Body class: sensory substitution, especially the work of Paul Bach-y-Rita, who was interested in developing a technology that might help the visually impaired. In the original incarnation, blind subjects were seated in front of a large TV camera, which they could direct at an object. The camera’s view would then be translated as little electrical tingles on the subject’s back, isomorphic to the scene. So if the camera was pointing at the letter X drawn on a chalkboard, the subject would feel an X-shaped set of tingles on his or her back. The technology improved over time, so that now the camera can be placed in a pair of sunglasses, and the electrode array is placed on a small pad worn on the tongue. Although Noe oversells the phenomenon a bit in his description, Bach-y-Rita describes the experience as visual or quasi-visual – at least, it is unlike touch. This phenomenology emerges once the subjects have some experience with the system, and is much more powerful when the subjects are in control of the camera. That is, pointing the camera at a stationary X is much less useful than panning the camera (now, by moving the head back and forth) – a behavior that is also very visual in nature. Even more exciting, users can duck to avoid objects or, alternatively, catch them. When visual objects approach us, they “loom” – they grow bigger. This does not occur (in nearly the same way) with somatosensory stimuli – so experienced users of this system immediately equate a spreading of the electrical tingles with an approaching object. They also quickly learn how to move their heads to get more information about an object, again, not a natural somatosensory behavior. Again, we have a case where the sensory-motor contingencies seem to specify the conscious experience rather than the brain area activated (here, the tongue region of somatosensory cortex).
The dynamic interactions of a situated agent in its locally-accessible environment, once again, are explanatory. (See also a recent exciting paper by Jamie Ward and Peter Meijer.)
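The camera-to-tingle mapping described above can be sketched in a few lines. This is a minimal toy version, not Bach-y-Rita’s actual system: the grid size, threshold, and test frames are all invented, but it captures the two ingredients the text emphasizes – an isomorphic downsampling of the scene onto a tactile array, and “looming” read off as a spreading of active tingles between frames.

```python
def to_tactile(frame, rows=4, cols=4):
    """Average-pool a 2D grayscale frame into a rows x cols tingle grid."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // rows, w // cols
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [frame[i][j]
                     for i in range(r * bh, (r + 1) * bh)
                     for j in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

def active_area(grid, threshold=0.5):
    """Count tingles above threshold -- a crude measure of object size."""
    return sum(1 for row in grid for v in row if v > threshold)

def is_looming(prev_grid, curr_grid):
    """An approaching object recruits more tingles from frame to frame."""
    return active_area(curr_grid) > active_area(prev_grid)

def square_frame(size, n=8):
    """An n x n frame containing a centered bright square (a toy 'object')."""
    lo = (n - size) // 2
    return [[1.0 if lo <= i < lo + size and lo <= j < lo + size else 0.0
             for j in range(n)] for i in range(n)]

# A bright square that grows between two frames reads as looming.
far, near = to_tactile(square_frame(2)), to_tactile(square_frame(4))
print(is_looming(far, near))  # True
```

Of course, the essay’s point is that the interesting part is not this mapping but what the user does with it – panning, ducking, probing – which no static translation step captures.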
There are problems with Noe’s ideas too. Phantom limb pain is a difficult condition faced by many amputees in which they continue to feel their non-existent limb – and often it feels excruciating. The usual explanation is that the lack of neural inputs from the hand to the somatosensory cortex produces a change in the brain so that this area is now dominated by inputs from other places – such as the face. Touch to the face is then felt in the hand (a case of the brain region winning over sensorimotor contingencies). There are also dreams and hallucinations – where sensorimotor contingencies would seem to have no explanatory power for a phenomenological experience, and where the only thing that seems to be happening (correlated with the experience) is neural activity. To his credit, Noe takes on these situations. In some cases I found his explanations compelling (as with dreams) but in other cases less so (as with phantom limbs).
Noe’s Chapter 6 is titled The Grand Illusion, which is how I first came to know of Noe’s thinking (he authored a paper called “Is The Visual World A Grand Illusion?” which I have used for many years in class). His answer to this question, by the way, is essentially “no”. Since this is probably most people’s answer to the question, one might wonder why such a paper needed to be written – which means I must do some explaining. Consider the examples I gave earlier in this essay. When we hear the sound of a distant voice, we experience the voice as coming from far away. In a sense, this is an illusion: the only reason we can detect the voice is that air molecules (set in motion by the speaker’s vocal cords) cause our ear drums to vibrate. Nothing about the way they vibrate indicates the origin of the voice that set them in motion. Likewise, we see the world in three dimensions: I see my coffee cup as being at arm’s length, my door as several feet away. But this too is something of an illusion. The only way I see these objects at all is that the reflection of light from them falls on my 2-dimensional retina. The brain, it would seem, creates the illusion of three dimensions. (Obviously a useful illusion, as it proves to be accurate when I reach for the coffee cup.)
But furthermore – so the story goes – we experience our visual worlds as being all in focus, and we experience ourselves as being able to easily detect changes in our environment. But a moment’s experimentation should prove that very little of the world is in focus: concentrate on any word on this screen, attempt to keep your eyes still, and notice that only that word is in focus. Also consider that magicians can easily fool audiences with sleight of hand tricks in which we fail miserably at detecting changes in our environment when we are distracted. (This is related to the psychological phenomenon of change blindness.) Noe notes that many philosophers and psychologists have made much hay of these phenomena: we have a false belief about the completeness of our perceptual worlds, and this false belief is the grand illusion. Noe argues that we do not in fact have false beliefs – or at least, that our behavior belies this. We are constantly shifting our eyes, tilting our heads. The artist does not look once at his or her portrait subject and draw from memory; the artist is constantly studying and restudying the subject throughout the sitting. We do not act as though we build up a representation of the world in our heads for constant consultation – we do not have to. The detail is not in our heads, it is in the world. Our feeling of the completeness of our perceptual experience is not, Noe would say, an illusion of the completeness of an internal representation of the world. It is rather an awareness – based on a lifetime of experience – that we have access to all that rich detail by employing the right basic skills: eye movements, head movements, body movements. Again, Noe is reinforcing the point that consciousness is not in us but rather consists of what we do – the skills that we use to interact with the world.
For the neuroscientist – and for the taste quality coding theorists of the world – this hits home. Much of the program of sensory neuroscience has been based on understanding how stimulus features are represented in neural activity. In Chapter 7 (“Voyages of Discovery”), Noe takes on the giants of my field – David Hubel and Torsten Wiesel – Nobel Prize winning neurophysiologists. (Theoretical critiques aside, Hubel & Wiesel’s contributions to neuroscience are unassailable.) Noe notes that their discovery of the responses of visual cortex neurons – in anesthetized animals – was responsible for decades of research and thinking in neuroscience focused on understanding feature representation (which reached its most Baroque form with the probably misguided work of the genius David Marr). This kind of reification of the duties of neurons or brain areas, and the eventual (also misguided) “modular brain” theories of cognitive science, are a long way from the warnings of Erickson and Halpern, cited earlier, that representations and internal models may not be necessary to explain behavior. (Mental representations or fuzzy modularity may still have some utility – but Noe would probably disagree.) Noe’s critique of Hubel and Wiesel was certainly the boldest part of the book, and for that reason, one of the most important.
In the end, then, I found Alva Noe’s book full of important ideas. He reviewed a number of key phenomena in psychology and neuroscience. He called out the hidden dualism of active programs in neuroscience. As effective and as thought-provoking as the book was, though, it still didn’t help me understand why that apple looks red, and why it tastes sweet. The how of the mind-body problem still nags, but in part thanks to Noe’s writings, I am excited that we may have a better idea of the where.