Wednesday, January 11, 2006


The unsettling fallibility of brains

This year I began my exploration of neurophysiology. It is becoming apparent to me that the mind, a thing I used to think of as pretty solid, is quite ridiculous. I’ll describe two aspects that particularly stun me:

(1) Consensus among neurophysiologists seems to hold that the mind is a patchwork of semi-autonomous programs with limited access to one another’s calculations. The imperfect wiring between them leads parts of the brain to learn to “communicate” with each other in ingenious and bizarre ways. This internal communication is easiest to see in split-brain patients; however, it happens in ordinary people too.

To me the most astounding split-brain experiment is this one: Ask the patient to feel around in a bag with their left hand, then tell you what’s in there. The right hemisphere gets the sensory stimuli from the left hand, but the left hemisphere controls language, and because the corpus callosum has been split, the left doesn’t know what signals have arrived at the right. Pain signals, however, are sent to both hemispheres. Apparently the right one is able to exploit this, because many patients eventually hit upon the same strategy: drive the pencil point into the hand. The left hemisphere realizes it’s something sharp, and begins to guess, out loud. The right reacts to those guesses with facial tics, smiling when the answers are close, until the word “pencil” emerges.

I realize that pointing to the fractured personalities of people whose hemispheres have been surgically separated is not, per se, evidence that the rest of us harbor the same internal schisms. Demonstrating that second proposition (that intact brains are fragmented too) is harder and not as exciting, but neuroscience has embraced it.

In “The Self as a Center of Narrative Gravity,” Daniel Dennett explains how the concept of “self” imposes a veneer of coherence over our neural sub-systems even though experiments can demonstrate that they routinely conflict with each other. The self is a useful concept, but one with properties we’re not very good at thinking about. For instance, some questions regarding the self literally have no answer – such as the question, “What do you really want?”

We can provide an answer, and once we do we tend to defend it. That defense may be disadvantageous from an individual’s standpoint, but from a group standpoint it’s good: If we can depend on the expressed psychological states of others to remain somewhat constant, we can base plans on them. Thus the unity that the notion of self imposes on our patchwork behavior has evolutionary value. In nature, the self needn’t live up to any higher standard. However, under scientific scrutiny, it’s an embarrassingly ugly hack.

(2) The gambler’s fallacy. We see patterns where there are none. This trait served us well in the wild, because most of the stuff that mattered to us exhibited useful patterns. However, our cultural development has outstripped our cognitive development, and we are left believing in phenomena such as lucky numbers and astrology.

We so depend on pattern-recognition to track our gains and losses that we do a lot of it subconsciously and automatically. When experimental subjects are given a gambling game to play and are explicitly told it is random, with no patterns, their brains still predict future winnings based on past ones (Gehring and Willoughby, “The Medial Frontal Cortex and the Rapid Processing of Monetary Gains and Losses”, Science 2002 – as paraphrased by Adam Gifford in his working paper, “The Role of Culture and Meaning in Rational Choice”).
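To see why treating past outcomes as informative is a fallacy in a genuinely random game, here is a minimal simulation sketch (my own illustration, not the Gehring and Willoughby task; the coin-flip game and the three-loss streak are assumptions): in a sequence of independent fair outcomes, the win rate right after a losing streak is the same as the win rate overall.

    import random

    random.seed(0)
    TRIALS = 100_000

    # Independent, fair outcomes: +1 = win, -1 = loss.
    outcomes = [random.choice([+1, -1]) for _ in range(TRIALS)]

    # Overall win rate.
    overall = sum(1 for o in outcomes if o == +1) / TRIALS

    # Win rate immediately after three losses in a row --
    # the gambler's fallacy says we should be "due" for a win here.
    after_streak = [
        outcomes[i]
        for i in range(3, TRIALS)
        if outcomes[i - 3] == outcomes[i - 2] == outcomes[i - 1] == -1
    ]
    streak_rate = sum(1 for o in after_streak if o == +1) / len(after_streak)

    print(f"Win rate overall:        {overall:.3f}")
    print(f"Win rate after 3 losses: {streak_rate:.3f}")
    # Both hover around 0.500: the recent past carries no predictive
    # information, yet our intuitions keep extracting "patterns" from it.

Both printed rates come out near 0.5; whatever pattern we read into the recent past, it buys no predictive power in a game like this.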

Adam Gifford hypothesizes that evolution favored those who perceived cause where there was none over those who failed to perceive legitimate cause. I like that explanation because it exists. (Can you think of any others?)

Not only do we attribute pattern to random sequences, we attribute agency where it doesn’t belong. From Paul Bloom’s “Is God an Accident?” (The Atlantic, December 2005):

Stewart Guthrie, an anthropologist at Fordham University, was the first modern scholar to notice the importance of this tendency as an explanation for religious thought. In his book Faces in the Clouds, Guthrie presents anecdotes and experiments showing that people attribute human characteristics to a striking range of real-world entities, including bicycles, bottles, clouds, fire, leaves, rain, volcanoes, and wind. We are hypersensitive to signs of agency — so much so that we see intention where only artifice or accident exists. As Guthrie puts it, the clothes have no emperor.

Bloom’s hypothesis is that the (widely accepted) separation between the brain subsystems that predict the behavior of physical objects and the subsystems that predict other people’s psychology makes us biologically predisposed to believe in the supernatural.

In a significant study the psychologists Jesse Bering, of the University of Arkansas, and David Bjorklund, of Florida Atlantic University, told young children a story about an alligator and a mouse, complete with a series of pictures, that ended in tragedy: "Uh oh! Mr. Alligator sees Brown Mouse and is coming to get him!" [The children were shown a picture of the alligator eating the mouse.] "Well, it looks like Brown Mouse got eaten by Mr. Alligator. Brown Mouse is not alive anymore."
The experimenters asked the children a set of questions about the mouse's biological functioning — such as "Now that the mouse is no longer alive, will he ever need to go to the bathroom? Do his ears still work? Does his brain still work?" — and about the mouse's mental functioning, such as "Now that the mouse is no longer alive, is he still hungry? Is he thinking about the alligator? Does he still want to go home?"
As predicted, when asked about biological properties, the children appreciated the effects of death: no need for bathroom breaks; the ears don't work, and neither does the brain. The mouse's body is gone. But when asked about the psychological properties, more than half the children said that these would continue: the dead mouse can feel hunger, think thoughts, and have desires. The soul survives. And children believe this more than adults do, suggesting that although we have to learn which specific afterlife people in our culture believe in (heaven, reincarnation, a spirit world, and so on), the notion that life after death is possible is not learned at all. It is a by-product of how we naturally think about the world.

I guess I can’t make their points as well as the authors I have mentioned; perhaps I should just wave my arm at their work.

Part of the reason these ideas are so unsettling to me is that they are subtle. I would have thought that a society full of patchwork selves would exhibit more inconsistent behaviors, and more problems due to them. I also would have thought that a hyper-proclivity to see pattern and agency where there is none would show up as a higher incidence of delusion, and more problems due to that.

I suppose, depending on your political allegiance, you might think that the mechanisms behind religion and behind conspiracy-theory worldviews are pretty harmful. But even if you think so, as far as worlds full of delusional schizophrenics go, ours seems pretty mild.
