Monday, January 23, 2006

Jocks vs. Brains

When I was growing up, many around me endorsed the jock / brain dichotomy: one could excel either physically or mentally, and make up for deficiencies in one area by doing well in the other.

To a certain extent, that’s true: people specialize, and we’re all better off because of it. But it’s becoming obvious to me, as I begin to wade into neurobiology, that the idea of brain “strength” and physical health as distinct options is inaccurate. A well-functioning brain requires a fantastic physical coordination of organs (including but not limited to one’s digestive and cardiovascular systems) and a delicate, finely tuned chemical bath. Whenever a stereotypically scrawny, uncoordinated genius thinks of himself as having a lousy body, he is defining “body” rather narrowly.

Anecdotes of geeks with hay fever and other allergies abound. I wonder whether there really is a statistical correlation there; if so, one might hypothesize that their brains have evolved ways of accomplishing difficult feats that require special, idiosyncratic chemical conditions.

Also contrary to the popular Western stereotype, the brain depends for its healthy development on vital engagement with the world, rather than on prolonged studious isolation. I like that; it fits better with my intuition, and its behavioral implications are more fun.

Monday, January 16, 2006

Internet advertising is whack

I used to be able to completely tune out the Yahoo ads. Weird letters flying around, percent signs dancing in front of a house in silhouette, all of it – I didn’t even have to try to ignore them, because they never registered in the first place.

Now the ads on my Yahoo account are dedicated in full to female models. I still don’t know what they’re advertising, because reading the advertiser’s logo and committing it to memory would take conscious effort. Their advertisers gain no sales from me, and I gain no useful information from them. But they have finally succeeded in distracting me, sometimes severely. Yesterday I surfed a lingerie catalog that they brought to my attention. I still don’t know whose it was.

Here’s another bizarre business arrangement: some British kid’s idea for putting himself through college. He sold the individual pixels of a webpage for a dollar apiece, auctioning off the final thousand on eBay. He called it the Million Dollar Homepage, and advertisers on it have in fact made him a millionaire.

I am dying to know what sort of media formats we will “end up with” in the next century. Will advertisers yearn for the days of a more captive audience? Will we the hyper-free audience yearn for the days of free stuff, paid for by advertisers too stupid to realize they weren’t reaching us? As consumers gain the capacity to navigate the set of commercial goods and services faster, more accurately and more thoroughly, will advertising become less relevant, because we already know exactly where to get what we want? Will advertisers react by targeting younger, less search-savvy and more easily suggestible audiences? Can they do that any more than they already do?

[Hat tip to The Agitator re. the Million Dollar Homepage.]

Friday, January 13, 2006

Accounting for mathematicians

For the “things I wish someone showed me in high school” department:

I just found out how to do long math problems without getting lost! I’m elated. This discovery hasn’t just made things easier; it has made possible things that I was previously incapable of.

It’s just an accounting system. First, label every initial proposition with a number, in order:

X > 3 __________________ (1)
Y > 3 X ________________ (2)
Z > 3 Y ________________ (3)
etc.

Then start deducing stuff, labeling each new deduction and indicating how you got there:

Y > 9 by (1) and (2) ___________ (4)
Z > 27 by (3) and (4) __________ (5)
etc.

That’s right, my discovery is: write stuff out just like it would be in a math book. I knew that such a format was a good way to present an answer, but I didn’t realize it is also useful before the answer is in hand.

To make it obvious what I’m talking about, I’ve had to choose an example that’s easier than the sort of problem for which you would actually need to do any accounting. But for complicated problems, the difference that a labeling system makes is just ungodly huge. For one thing, when something goes wrong, it’s much easier to find out where the error lies. For another, you can see what you’ve already tried – a benefit without which I, and billions (just a rough guess) of others, are prone to going in circles.
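
Incidentally, this labeling discipline is exactly what machine-checked proof enforces. Here’s a minimal sketch of the same toy derivation in the Lean proof assistant (using its Mathlib library; the hypothesis names h1 through h4 are just my labels, nothing standard):

import Mathlib.Tactic

-- Each initial proposition becomes a named hypothesis, and each
-- deduction cites the labels it was derived from.
example (X Y Z : ℝ)
    (h1 : X > 3)        -- (1)
    (h2 : Y > 3 * X)    -- (2)
    (h3 : Z > 3 * Y)    -- (3)
    : Z > 27 := by
  have h4 : Y > 9 := by linarith [h1, h2]  -- (4) by (1) and (2)
  linarith [h3, h4]                        -- (5) by (3) and (4)

If a step doesn’t actually follow, the checker points at the exact labeled line – the same “find where the error lies” benefit, only automated.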

I feel like I’ve thought I was blind for years, and I just discovered that I’d been trying to see with the lights off. My inability to do long math problems has shaped to a large extent the areas of math I’ve explored – I took on topology, abstract algebra, combinatorics, etc., and stayed away from differential geometry, optimization and whatnot, because the proofs in the first set of things I listed were short, and the proofs in the second set were long. Whole new career options may have opened up for me today.

Wednesday, January 11, 2006

The unsettling fallibility of brains

This year I began my exploration of neurophysiology. It is becoming apparent to me that the mind, a thing I used to think of as pretty solid, is quite ridiculous. I’ll describe two aspects that particularly stun me:

(1) Consensus among neurophysiologists seems to hold that the mind is a patchwork of semi-autonomous programs with limited access to one another’s calculations. The imperfect wiring between them leads parts of the brain to learn to “communicate” with one another in ingenious and bizarre ways. This communication between subsystems is easiest to see in split-brain patients; however, it happens in ordinary people too.

To me the most astounding split-brain experiment is this one: ask someone to feel around in a bag with their left hand, then tell you what’s in there. The right hemisphere gets the sensory stimuli from the left hand, but the left hemisphere controls language, and because the corpus callosum has been split, the left doesn’t know what signals have arrived at the right. Pain signals, however, are sent to both hemispheres. Apparently the right one is able to figure this out, because many patients eventually hit upon this strategy: drive the point of the object into the hand. The left hemisphere realizes it’s something sharp, and begins to guess out loud. The right reacts to those guesses with facial tics, smiling when the answers are close, until the word “pencil” emerges.

I realize that pointing to the fractured personalities of people whose brains have been severed is not per se evidence that the rest of us suffer schisms between our own brain subsystems. Demonstrating that latter claim is harder and not as exciting, but neuroscience has embraced it.

In “The Self as a Center of Narrative Gravity,” Daniel Dennett explains how the concept of “self” imposes a veneer of coherence over our neural sub-systems even though experiments can demonstrate that they routinely conflict with each other. The self is a useful concept, but one with properties we’re not very good at thinking about. For instance, some questions regarding the self literally have no answer – such as the question, “What do you really want?”

We can provide an answer, and once we do we tend to defend it. That defense may be disadvantageous from an individual’s standpoint, but from a group standpoint it’s good: If we can depend on the expressed psychological states of others to remain somewhat constant, we can base plans on them. Thus the unity that the notion of self imposes on our patchwork behavior has evolutionary value. In nature, the self needn’t live up to any higher standard. However, under scientific scrutiny, it’s an embarrassingly ugly hack.

(2) The gambler’s fallacy. We see patterns where there are none. This trait served us well in the wild, because most of the stuff that mattered to us exhibited useful patterns. However, our cultural development has outstripped our cognitive development, and we are left believing in phenomena such as lucky numbers and astrology.

We so depend on pattern-recognition to track our gains and losses that we do a lot of it subconsciously and automatically. When experimental subjects are given a gambling game to play and are explicitly told it is random, with no patterns, their brains still predict future winnings based on past ones (Gehring and Willoughby, “The Medial Frontal Cortex and the Rapid Processing of Monetary Gains and Losses”, Science, 2002 – as paraphrased by Adam Gifford in his working paper, “The Role of Culture and Meaning in Rational Choice”).
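
To see what “random, with no patterns” means concretely, here’s a toy simulation of my own (not from either paper): in a fair, memoryless game, the win rate right after a losing streak is indistinguishable from the overall win rate, so there is genuinely nothing for the brain’s predictor to latch onto.

import random

random.seed(0)
# A million fair coin flips: True is a win, False is a loss.
flips = [random.random() < 0.5 for _ in range(1000000)]

wins_after_streak = 0
trials_after_streak = 0
for i in range(3, len(flips)):
    if not any(flips[i - 3:i]):  # the previous three flips were all losses
        trials_after_streak += 1
        wins_after_streak += flips[i]

print("overall win rate:        %.4f" % (sum(flips) / len(flips)))
print("win rate after 3 losses: %.4f" % (wins_after_streak / trials_after_streak))

Both numbers come out around 0.5; a streak of losses tells you nothing about the next flip.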

Adam Gifford hypothesizes that evolution favored those who perceived cause where there was none over those who failed to perceive legitimate cause. I like that explanation because it exists. (Can you think of any others?)

Not only do we attribute pattern to random sequences, we attribute agency where it doesn’t belong. From Paul Bloom’s “Is God an Accident?” (The Atlantic, December 2005):

Stewart Guthrie, an anthropologist at Fordham University, was the first modern scholar to notice the importance of this tendency as an explanation for religious thought. In his book Faces in the Clouds, Guthrie presents anecdotes and experiments showing that people attribute human characteristics to a striking range of real-world entities, including bicycles, bottles, clouds, fire, leaves, rain, volcanoes, and wind. We are hypersensitive to signs of agency — so much so that we see intention where only artifice or accident exists. As Guthrie puts it, the clothes have no emperor.

Bloom’s hypothesis is that the schism – well established scientifically – between the brain subsystems that predict the behavior of physical objects and the brain subsystems that predict other people’s psychology makes us biologically predisposed to believe in the supernatural.

In a significant study the psychologists Jesse Bering, of the University of Arkansas, and David Bjorklund, of Florida Atlantic University, told young children a story about an alligator and a mouse, complete with a series of pictures, that ended in tragedy: "Uh oh! Mr. Alligator sees Brown Mouse and is coming to get him!" [The children were shown a picture of the alligator eating the mouse.] "Well, it looks like Brown Mouse got eaten by Mr. Alligator. Brown Mouse is not alive anymore."
The experimenters asked the children a set of questions about the mouse's biological functioning — such as "Now that the mouse is no longer alive, will he ever need to go to the bathroom? Do his ears still work? Does his brain still work?" — and about the mouse's mental functioning, such as "Now that the mouse is no longer alive, is he still hungry? Is he thinking about the alligator? Does he still want to go home?"
As predicted, when asked about biological properties, the children appreciated the effects of death: no need for bathroom breaks; the ears don't work, and neither does the brain. The mouse's body is gone. But when asked about the psychological properties, more than half the children said that these would continue: the dead mouse can feel hunger, think thoughts, and have desires. The soul survives. And children believe this more than adults do, suggesting that although we have to learn which specific afterlife people in our culture believe in (heaven, reincarnation, a spirit world, and so on), the notion that life after death is possible is not learned at all. It is a by-product of how we naturally think about the world.

I guess I can’t make their points as well as the authors I have mentioned; perhaps I should just wave my arm at their work.

Part of the reason these ideas are so unsettling to me is that they are subtle. I would have thought that a society full of patchwork selves would exhibit more inconsistent behaviors, and more problems due to them. I also would have thought that a hyper-proclivity to see pattern and agency where there is none would show up as a higher incidence of delusion, and more problems due to that.

I suppose, depending on your political allegiance, you might think that the mechanisms behind religion and behind conspiracy-theory worldviews are pretty harmful. But even if you think so, as worlds full of delusional schizophrenics go, ours seems pretty mild.

Friday, January 06, 2006

Dangerous ideas

http://www.marginalrevolution.com/marginalrevolution/2006/01/the_end_of_insi.html

I can't believe I have the pleasure of reading stuff like this.

The link I’ve provided gets you to Tyler Cowen at Marginal Revolution, who provides a nice introduction. From there you can link to the whole big list. (Just in case his page goes offline, though, I’ll also stick a direct link to the list of dangerous ideas here.)


Tuesday, January 03, 2006

Mixing video games and politics

When I was a dumb kid, I played a lot of video games, in which bizarre cartoon-like creatures conspired and fought.

I think they did me some good. Children play stuff because it’s fun, and our perception of fun is, in a Darwinian sense, “meant” to induce the developing brain to seek activities that improve the brain, thereby improving the chances that the brain’s owner will reproduce. Video games are complicated, and fast, and the good ones require one to anticipate the behavior of other humans. I believe those qualities are good for you. It’s a testament to the strength of biological imperatives that when parents don’t want their kids playing those games, but the kids do, the kids usually win.

While the aspects I have described seem to me bound to have positive effects, I have not addressed the question of how video games affect a player’s attitude toward violence. I don’t know. This guy and these guys sure think they’re harmful – in fact, so, it seems, does every study that Google returns. It may be that Google returns the most popular things first, and that for any topic X the position that X is bad is more popular than the position that X is good – but I tried including phrases like “no effect” and “harmless”, and still got none of the other side of the argument, if it exists.

What I can say, though, is that video games have meaning now that they did not when I played them. I left right around the time that games started to be able to portray bleeding three-dimensional humanoids. Perhaps that novelty was enough to allow the storylines to be stupid, because they were: “creatures from hell invade Earth”, or “men fight creatures from hell on a moon base”. Even if they affected my attitudes toward violence, they could not have affected my politics.

Today, most of the storylines go like this: “There’s a terrorist threat [over there]. You must neutralize it.” The player is not given the opportunity to deliberate the morality of this imperative. He obeys it. He has fun doing so. His simulated fellow citizens treat him like a hero when he succeeds. (Here’s one description of what video games are like now.)

I would whine at this point that video games ought to be apolitical, to keep impressionable youth from forming baseless opinions about who’s right and about what sort of war actions are legitimate. But I don’t see any way around it – the realistic games sell the best, and the one-sided, “you’re a hero” games sell the best. Telling firms not to make those games seems like a violation of the First Amendment.

It’s all ugly and complicated.
