Wednesday, August 1, 2007

Reading makes you stronger

The first Alzheimer’s-diseased brain I ever touched looked horrific. The cortex was shriveled, the ventricles were large, cavernous voids, and when I stained the sample I saw a galaxy of proteinaceous tangles and masses. The brain had clearly been degenerating steadily for over a decade, and it was difficult to imagine how the patient could have functioned. I was shocked to discover that, according to his charts, the patient’s dementia had only been detectable for a few years. In contrast, certain brains I analyzed appeared much more intact, yet came from patients who had suffered from severe dementia for over a decade.

These patients exemplify the dramatically different ways people can respond to neurodegenerative changes. Even when confronted with the same disease and comparable severity, people vary considerably in the extent of cognitive decline. Specifically, people with higher levels of education and occupational attainment are more successful at coping with the same amount of brain damage and degeneration.

One hypothesis that accounts for this discrepancy is the concept of cognitive reserve. The cognitive reserve hypothesis posits that people who have challenged their minds for significant portions of their lives (i.e. they didn't just start playing Sudoku at the age of 60) can compensate for neural deficits by recruiting alternate brain networks as backup or “reserve.” In support of this hypothesis, functional brain imaging shows that "high-functioning" older adults activate significantly more areas of their brains than both "low-functioning" older adults and young adults when performing certain cognitive tasks. This indicates neural compensation; "high-functioning" older adults engage in alternative neural strategies in response to neural deficits or declines in cognitive abilities. Importantly, this type of compensation may be facilitated by a more flexible organization of the brain, which results from early cognitive experience.

Of course, people who did not start challenging themselves until later in life should not despair. Other requisites of compensation, such as plasticity (including the birth of new neurons and enhanced signaling between existing neurons), may be improved by cognitive experience throughout life (although the earlier the better). And in a complementary aspect of cognitive reserve, people who challenge their brains throughout life may be able to protect their existing brain networks. Intellectually stimulating activities may increase the efficiency and capacity of these networks, enabling them to withstand a greater degree of age-related change while maintaining intact functioning (again, the earlier the better).

The cognitive reserve hypothesis has recently been supported by the findings of Dr. Margit Bleecker, who studied the effects of lead exposure on cognitive function. The study involved 112 lead smelter workers in New Brunswick, who were divided into groups with high reading ability (12th grade or higher) and low reading ability (11th grade or lower). Reading ability is a recognized measure of cognitive reserve, and is perhaps a better metric than education and occupation (e.g. it distinguishes self-taught individuals who dropped out of school for economic reasons from people who graduated high school but are functionally illiterate). Importantly, although lead exposure has negative effects on many brain functions, well-ingrained functions like reading ability are resistant to its consequences.

Both groups had similar lead exposure, age, alcohol use, and depression levels, but those with high cognitive reserve performed 2.5 times better on cognitive tests than those with low cognitive reserve. In contrast, cognitive reserve did not protect motor speed and dexterity from the toxic effects of lead, indicating that other parts of the workers’ nervous systems were still vulnerable. These findings, published in Neurology, demonstrate that cognitive reserve protects against the cognitive effects of chronic lead exposure.

The key to cognitive reserve is not to wait until you’re in your 60s (or even 50s, 40s, 30s, or 20s, for that matter), but to challenge yourself intellectually as early and often as possible. So read, play "brain games," play soccer, and do all the other wonderfully fun and exciting things that are good for you.

UPDATE: For more information on cognitive reserve, see posts by Michael Merzenich of Posit Science and Alvaro of SharpBrains.

Tuesday, July 17, 2007

My brain and my ACL

My life, though generally fortunate, has been peppered by a small number of somewhat serious injuries: broken hand, broken collarbone, broken wrist, torn meniscus, ruptured anterior cruciate ligament (ACL). The hand and collarbone were consequences of my big sister tripping and crushing me, respectively, but the rest I managed to accrue on my own. (Even though I sustained these latter injuries on the soccer field, I was unaccompanied by the touch of another player).

My mom always insisted that my proclivity for injury was due to the intrinsic grace of my bones and joints—"elegant and delicate, like a bird!"—but a recent study in the June edition of the American Journal of Sports Medicine suggests something different is to blame: my brain.
A torn anterior cruciate ligament (ACL) is among an athlete's most-dreaded injuries, often requiring surgery and months of rehab, as has been the case with Philadelphia Eagles quarterback Donovan McNabb. While being tackled in football or hurtling into an embankment on an icy ski course can tear this major knee ligament, most athletes actually “do themselves in”--they don't collide with a person or object; they injure themselves when they land off-balance during a jump or run.

“We had some data from previous research which suggested that these noncontact knee injuries occur when a person gets distracted or is 'caught off guard,'” said Charles Buz Swanik, the University of Delaware assistant professor of health sciences who led the study. These awkward movements have the biomechanical appearance of a knee buckling, but can be reproduced safely in the lab to study how people mentally prepare and react to unanticipated events.
Based on my personal experiences, the connection between sport-induced injury and distraction is not surprising. Both my wrist and my ACL/meniscus injuries (the latter being a simultaneous double-whammy) occurred while I wasn't particularly focused on the games. In both situations, I was slightly anxious because these were the only two soccer games my dad had attended since I was 10 (one in high school (wrist), one in college (knee)). Further, I was not involved in the plays that immediately preceded my injuries; it was precisely when the ball unexpectedly approached, summoning my mildly reluctant participation, that my "delicate" limbs met with disaster.

But Swanik believes these momentary lapses in attention are indicative of more extensive deficiencies.
“This made me wonder if we could measure whether these individuals had different mental characteristics that made them injury-prone,” Swanik said.

To identify subjects for their study, the researchers administered neurocognitive tests to nearly 1,500 athletes at 18 universities during the preseason. This testing also provided baseline data for athletes who might sustain a concussion after the season started, Swanik said.

Visual memory, verbal memory, processing speed, and reaction time all were assessed.

...

In analyzing the data, the scientists found that the athletes who ended up with noncontact ACL injuries demonstrated significantly slower reaction time and processing speed and performed worse on visual and verbal memory tests when compared to the control group.
As Swanik writes in his report, "physical activity requires situational awareness of a broad attentional field to continuously monitor the surrounding environment, filter irrelevant information, and simultaneously execute complex motor programs. Increased arousal or anxiety changes an athlete's concentration, narrows their attentional field, and alters muscle activity, which has been associated with poor coordination and inferior performance."

These conclusions remind me of one of David Foster Wallace's essays in Consider the Lobster, "How Tracy Austin Broke My Heart." Wallace devotes this essay to the devastating contrast between Tracy Austin's brilliance on the tennis court (both physical and mental) and her abominable ability to intellectualize her experiences in her memoir. He ultimately concludes that the vacuity and lack of insight of sports memoirs, such as hers, are inextricably linked to the qualities that make great athletes in the first place.

Their ability to maintain exceptional focus under the scrutiny of thousands of viewers (including their parents) makes them incapable of appreciating their athletic genius, and thus of gaining significant insight into its nature. During games that are crucial to the careers to which they have been devoted since childhood, they manage to "invoke for themselves a cliché as trite as 'One ball at a time' or 'Gotta concentrate here,' and mean it and then do it." Meanwhile, the rest of us, under such circumstances, would founder and crumple and fail precisely because we think too much about matters that have nothing to do with the direction and velocity of the ball, or the appropriate bending of the knee during complicated, high-velocity movements.

As Wallace writes, "those who receive and act out the gift of athletic genius must, perforce, be blind and dumb about it—and not because blindness and dumbness are the price of the gift, but because they are its essence." (Their vastly superior speed, strength, and visual acuity probably aren't trivial either).

Another major risk factor for ACL injuries is gender. Female athletes are four to eight times more likely to tear or rupture their ACLs than males, with female soccer and basketball players at the highest risk. There are a few theories as to why this gender difference exists: 1) the bone alignment of the pelvis/femur/tibia creates excess stress on the ACL; 2) female hormones relax ligaments, muscles and joints, making joints more flexible and prone to injury; 3) as girls pass through adolescence, their muscular control of the knee may not keep up with their skeletal growth.

Tying Swanik's conclusion to the gender discrepancy: female hormones can also produce concentration deficits, which may leave an athlete in a sub-optimal state of arousal in a given situation. In any case, this study has interesting implications for injury prevention. Not only should female athletes stop running like girls, but perhaps cognitive exercises that train processing speed and reaction time may also benefit the accident prone.

Monday, July 16, 2007

Humans to the rescue

Images like the one above are familiar to any of us who have ever used webmail, Ticketmaster, or any other web service that wants to prevent automated spammers and scalpers from exploiting their systems. The distorted, fuzzy letters don't provide a challenge to humans, but are indecipherable to the most sophisticated computer algorithms. Our genius is facilitated by our "invariant" perceptual abilities; that is, we can recognize objects, faces, and letters independent of rotation, translation, and scale.

However, these CAPTCHAs (Completely Automated Public Turing Test to Tell Computers and Humans Apart) were designed with a wonderfully clever ulterior motive. In addition to preventing rogue bots from devastating our virtual lives, CAPTCHAs like the one above are actually exploiting you, the human, and the invariance of your human perception, to help digitize the world. The words presented in these CAPTCHAs are pulled from the book-scanning project of the Internet Archive, which aims to scan millions of public-domain books and put them online for free. One word of the CAPTCHA is known to the computer, and is used to verify your humanness, while the other was indecipherable to the Archive's scanners. When you type in that word, you're actually translating the image into text for the Archive.
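The verification-plus-harvesting logic is simple enough to sketch in a few lines of Python. This is just an illustrative toy under my own assumptions (the function names, vote threshold, and data structures are mine, not reCAPTCHA's): the known word gates humanness, while answers for the unknown word accumulate until enough people agree.

```python
# Toy sketch of the two-word CAPTCHA idea described above.
# All names and thresholds are hypothetical -- this is not the actual reCAPTCHA code.
from collections import Counter

votes = {}  # votes[word_image_id] collects human answers for words the OCR couldn't read

def check_captcha(control_answer, known_answer, unknown_word_id, unknown_answer):
    """Verify humanness with the known word; harvest the answer for the unknown word."""
    if control_answer.strip().lower() != known_answer.strip().lower():
        return False  # failed the known word: probably a bot, so discard everything
    # The user passed the test, so their reading of the scanned word is worth recording.
    votes.setdefault(unknown_word_id, Counter())[unknown_answer.strip().lower()] += 1
    return True

def transcription(unknown_word_id, min_votes=3):
    """Accept a transcription once enough independent humans agree on it."""
    counts = votes.get(unknown_word_id, Counter())
    if not counts:
        return None
    word, n = counts.most_common(1)[0]
    return word if n >= min_votes else None
```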

There's a fantastic article in Wired Magazine about this type of "human computation," "the art of using massive groups of networked human minds to solve problems that computers cannot." The article profiles the work of Luis von Ahn, who designs clever ways to harness the powerful brains of bored web surfers to solve computing problems (e.g. judging random pictures as "pretty," tagging images and audioclips, etc.).

From Wired Magazine:
If people could so easily recognize pictures of letters and numbers, could [they] use this ability to identify and label the vast number of images on the Web?

...

The way to do it, he realized, was as a game. It would pull images off the Web, then randomly pair two players from around the world. They would be shown the same images, then each would type in as many words as they could to describe those images, hoping to hit upon the same ones as their anonymous partner. They'd get 50 points for each match, and two and a half minutes to earn as many points as possible. Von Ahn suspected that whenever the players agreed on a word — "meadow" to describe a tree-lined clearing, for example — they would be choosing a highly accurate label for the picture.

Von Ahn cobbled the game together in a week — "crappy, totally terrible code," he admits — and threw it online. He dubbed it The ESP Game and emailed the URL to a few friends. Within days it was Slashdotted, whereupon his server nearly crashed under the load of new players. Astonished, von Ahn watched for the next four months as 13,000 players produced 1.3 million labels for some 300,000 images — with a few hardcore fans clocking more than 50 hours of play. "It's like crack," as one player complained in an email to von Ahn. The labels his players generated were far more accurate than what other image-search technologies produced. Most search engines are limited to sniffing out words associated with a picture, such as the name given to the image, words in the page around it, or links pointing to it. That's inherently imprecise: When von Ahn recently searched for "dog" on Google, a third of the pictures showed no dogs at all. When he queried the ESP database, almost all the results contained canines. Better yet, players often generated labels that were subtle and nuanced. A search for "funny" found a picture of Ronald McDonald being hauled away by police and one of Queen Elizabeth picking her nose.
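The ESP Game's core rule is just as compact. The snippet below is a hypothetical sketch of the agreement logic described above (the function and variable names are mine, not von Ahn's): a word becomes a label only when both players independently type it, and each match is worth 50 points.

```python
# Minimal sketch of the ESP Game's agreement rule -- illustrative only, not von Ahn's code.

def match_round(image_id, words_player1, words_player2, labels, points):
    """Record a label and award points whenever both players typed the same word."""
    agreed = {w.lower() for w in words_player1} & {w.lower() for w in words_player2}
    for word in agreed:
        labels.setdefault(image_id, set()).add(word)  # agreed-upon words become image labels
        points["player1"] += 50                       # 50 points per match, per the article
        points["player2"] += 50
    return agreed

labels, points = {}, {"player1": 0, "player2": 0}
match_round("img_0042", ["meadow", "tree", "grass"], ["meadow", "field"], labels, points)
# labels == {"img_0042": {"meadow"}}; each player earns 50 points for the match
```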
Even the DHS wants to employ your brainpower as you procrastinate on the web:
This spring, von Ahn got a call from the Department of Homeland Security. He went to Washington to meet with DHS officials, and together they devised a game in which people are challenged to find dangerous objects in images of x-rayed baggage. The pictures would be fed from airport scanners, and players would act as a second set of eyes for overtaxed security employees. If enough players noticed something amiss, an alert would be triggered.
Von Ahn's other games that capitalize on human superiority (supposedly available at Games with a Purpose "in July," but as of now the site isn't running yet) include:
1) Matchin': Players are shown the same pair of images, then each tries to pick the one they'll both agree is more attractive. Creates a database of images searchable by aesthetic value, a task no algorithm can perform.

2) Babble: Two English-speaking players are shown a sentence in a foreign language that neither of them speaks. A list of possible English meanings appears below each word. Players try to agree upon a set of English words that forms the most coherent sentence. Translates foreign text into English without requiring anyone fluent in both languages.

3) InTune: Players listen to the same audioclip and then try to come up with the same phrase to characterize it. Tags sounds with searchable descriptive text.

4) Squigl: Two players are shown the same picture and a word describing an element within the image (e.g., a picture of a dog and the word "leash"). They each draw a border around the element. Produces a set of pictures with their internal components tagged — terrific for very specific image searches.

5) Verbosity: One player is given a word, and the other tries to guess that word by completing phrases such as "It is near a ____" or "It is a type of ____." The first player answers "true" or "false" but can't use the word itself. Creates a database of commonsense knowledge describing the objects.

Link to the full Wired article.

Friday, July 13, 2007

New and improved robot CPGs

In my first post ever, I discussed how specialized circuits in the spinal cord (called "central pattern generators," or CPGs) coordinate the intricate motions and muscle patterns involved in running and walking, without significant input from the brain. The autonomy of these circuits allows animals to run and walk while their mental efforts are otherwise engaged; for example, we can talk on the phone while walking to dinner, and decapitated chickens can still run away.

One of the most important features of CPGs is their adaptability. Whether running through a forest, walking on an oily surface, or dribbling a soccer ball, we need to continuously modify our movements. Thus, as opposed to generating rigid action patterns, CPGs provide a flexible template for coordinating our muscles and various joints. This template interacts with sensory information, allowing us to elegantly adapt to our unpredictable world. Flexibility, however, poses a challenging computational problem; not only must we decipher how circuits of neurons coordinate hundreds of muscles, but also how their output can be refined by incoming sensory information.
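To make the idea of a "flexible template" concrete, here is a minimal sketch of a CPG as two mutually coupled oscillators whose rhythm is nudged, rather than dictated, by sensory feedback. This is my own toy model with made-up parameters, not one taken from the literature.

```python
# Toy CPG: two coupled phase oscillators standing in for an alternating flexor/extensor
# (or left/right leg) pair. Illustrative only; not the model from any particular paper.
import math

def step_cpg(phases, dt=0.01, base_freq=1.0, coupling=0.5, feedback=(0.0, 0.0)):
    """Advance both oscillators one time step; feedback gently shifts the ongoing rhythm."""
    new_phases = []
    for i, phi in enumerate(phases):
        other = phases[1 - i]
        # Each unit keeps its own rhythm, is pulled toward anti-phase with its partner,
        # and is nudged by sensory feedback (e.g. ground contact) rather than overridden by it.
        dphi = (2 * math.pi * base_freq
                + coupling * math.sin(other - phi - math.pi)
                + feedback[i])
        new_phases.append((phi + dphi * dt) % (2 * math.pi))
    return new_phases

phases = [0.0, math.pi]  # start the two units in anti-phase, like alternating legs
for _ in range(1000):
    phases = step_cpg(phases)
muscle_drive = [max(0.0, math.sin(p)) for p in phases]  # rectified output drives each "muscle"
```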

Without understanding these fundamental issues, it is difficult to produce machines that can move as intelligently as we do. Honda's ASIMO, "The World's Most Advanced Humanoid Robot," is capable of executing an astounding range of human-like movements (running, walking smoothly, reaching for objects), but has previously stumbled and fallen down stairs. A recent article in PLoS Computational Biology describes a new and improved droid named RunBot, which is capable of adapting to unfamiliar terrain in an animal-like way.

Although not nearly as cute as ASIMO, RunBot's motor circuitry is more "intelligent" (i.e. more human). As I mentioned in my earlier post, the motor system is arranged in a hierarchy: the "higher" control centers give the signal to initiate a movement, recruiting the "lower" CPGs to take care of the details. These lower circuits respond to the environment reflexively, incorporating localized feedback to generate intricate adjustments in muscle tone. This responsiveness allows us to immediately compensate for small perturbations, such as unnoticed rocks on a trail. When we need to significantly modify our gait, however, such as stepping over a baby, we must enlist the higher centers, which will generate more dramatic modifications to the CPGs.

ASIMO lacks this hierarchy, requiring it to continuously calculate the position and motion of every joint. RunBot, however, has been engineered with several levels of control, allowing it to adapt to changes in terrain in a more computationally efficient manner. RunBot interprets the environment with an infrared sensor, which communicates with the lower circuits to regulate their activity. Thus, when RunBot encounters an alteration to its terrain and becomes unbalanced, this sensor modifies the pattern of the lower circuits, allowing the bot to change its gait.

However, like humans, RunBot must learn how to modify its movements with respect to sensory input. When we learn how to walk, our brains "train" our CPGs until they can execute the movement relatively independently. These same mechanisms come into play when a runner learns to hurdle or a soccer player learns a new move; these behaviors initially require significant concentration, but with practice can be executed with little mental effort. To replicate this learning process, RunBot's circuitry includes, according to the authors, "online learning mechanisms based on simulated synaptic plasticity." Thus, when RunBot first attempts to climb a slope, it falls over like poor ASIMO. With trial and error, however, its circuitry learns to properly compensate for the relevant sensory input, shortening and slowing its steps just like a human.
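The authors' "online learning mechanisms based on simulated synaptic plasticity" can be caricatured as a simple error-driven gain adjustment: each stumble strengthens the coupling between the slope sensor and the step controller. The sketch below is my own simplification, with invented variable names and an invented learning rule, not RunBot's actual equations.

```python
# Caricature of error-driven gait learning: every fall signals that the coupling from the
# slope sensor to the step controller was too weak, so the gain is increased.
# My own simplification -- not the actual RunBot learning rule.

def train_on_slope(trials=50, learning_rate=0.2, required_gain=1.0):
    gain = 0.0   # how strongly the infrared sensor shortens and slows the step
    history = []
    for trial in range(trials):
        fell_over = gain < required_gain   # insufficient compensation -> the bot falls
        if fell_over:
            gain += learning_rate * (required_gain - gain)  # strengthen the sensor-to-CPG "synapse"
        history.append((trial, round(gain, 3), fell_over))
    return history

for trial, gain, fell in train_on_slope()[:8]:
    print(f"trial {trial}: gain={gain}, fell={fell}")
```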

Tuesday, July 10, 2007

Why are blondes more attractive than brunettes?

As a young brown-eyed, brown-haired girl growing up in Orange County, CA, I found this "stereotype" repeatedly, bewilderingly, validated. Although I defended myself with Van Morrison and a sizeable artillery of blonde-jokes, behind my façade of self-assurance I continued to wonder: why are blonde hair and blue eyes "prettier"? Now, as a slightly more mature brunette with a more comprehensive understanding of natural selection, I still find the question intriguing. Why did Europeans evolve to prefer blonde hair and blue eyes? What do these features indicate about health and fertility?

Psychology Today has an excerpt from the book Why Beautiful People Have More Daughters, by Alan S. Miller and Satoshi Kanazawa, which explores "Ten Politically Incorrect Truths About Human Nature," including the mystery of the "blonde bombshell":
Long before TV—in 15th- and 16th-century Italy, and possibly two millennia ago—women were dyeing their hair blond. Women's desire to look like Barbie—young with small waist, large breasts, long blond hair, and blue eyes—is a direct, realistic, and sensible response to the desire of men to mate with women who look like her. There is evolutionary logic behind each of these features.

Blond hair is unique in that it changes dramatically with age. Typically, young girls with light blond hair become women with brown hair. Thus, men who prefer to mate with blond women are unconsciously attempting to mate with younger (and hence, on average, healthier and more fecund) women. It is no coincidence that blond hair evolved in Scandinavia and northern Europe, probably as an alternative means for women to advertise their youth, as their bodies were concealed under heavy clothing.

Women with blue eyes should not be any different from those with green or brown eyes. Yet preference for blue eyes seems both universal and undeniable—in males as well as females. One explanation is that the human pupil dilates when an individual is exposed to something that she likes. For instance, the pupils of women and infants (but not men) spontaneously dilate when they see babies. Pupil dilation is an honest indicator of interest and attraction. And the size of the pupil is easiest to determine in blue eyes. Blue-eyed people are considered attractive as potential mates because it is easiest to determine whether they are interested in us or not.

The irony is that none of the above is true any longer. Through face-lifts, wigs, liposuction, surgical breast augmentation, hair dye, and color contact lenses, any woman, regardless of age, can have many of the key features that define ideal female beauty. And men fall for them. Men can cognitively understand that many blond women with firm, large breasts are not actually 15 years old, but they still find them attractive because their evolved psychological mechanisms are fooled by modern inventions that did not exist in the ancestral environment.
The article also explains why men prefer women with small waists and large breasts (both are correlated with levels of estrogen and progesterone, indicating greater fecundity), why beautiful people have more daughters (physical attractiveness is more important for girls than boys, and parents can bias the sex ratio depending on the traits they can offer), why men sexually harass women (it's about respect), and why most suicide bombers are Muslim (the 72 virgins waiting patiently in heaven aren't trivial). Some of the hypotheses are a little dubious to me, but it's an interesting read nonetheless.

Link to the article.

Monday, July 9, 2007

Williams Syndrome and human sociality

There's a great article by David Dobbs in the New York Times Magazine about Williams Syndrome (WMS), a condition with a diverse and remarkable array of cognitive symptoms. I have vivid memories of my first exposure to WMS--watching a documentary hosted by Oliver Sacks for my "Psychology of Music" class. In one of the first scenes, Dr. Sacks introduces himself to a 6-year-old girl with WMS, who eagerly and cheerfully insists "Don't be shy, Mr. Sacks." In another scene he takes her to a sandwich shop, where she enthusiastically engages the employees and fellow customers in conversation, offering hugs to all within reach. This behavior exemplifies one of the most remarkable endowments of children with WMS--endearing, socially fearless personalities, marked by extreme gregariousness and emotional empathy.

This charm is facilitated by a peculiarly rich vocabulary and proficiency with language; for example, when asked to name some animals, a WMS child responded "Brontosaurus, tyranadon, brontasaurus rex, dinosaurs, elephant, dog, cat, lion, baby hippopotamus, ibex, whale, bull, yak, zebra, puppy, kitten, tiger, koala, dragon..." quickly and fluidly naming exotic (though occasionally non-existent) animals as if reading them off a list. When striking up conversations with strangers, they are extremely loquacious, to the point where they appear to burden the listener with verbosity.

Adding to the list of aptitudes of WMS people is a great affinity for music (hence learning about the condition in my "Psychology of Music" class). People with WMS can have savantlike musical skills, and those without notable musical gifts nevertheless feel "drawn" to music, an inclination likely aided by an acute sensitivity to sound. One scene of the documentary featured WMS children walking through the woods, commenting on how loud the bees and rustling leaves were (sounds which were more or less unnoticed by Sacks).

These remarkable virtuosities with language, social interaction, and music are accompanied, however, by profound cognitive impairments. The average IQ of a person with WMS is in the 60's, and the vast majority cannot live independently. Despite their seeming fluency with verbal communication, people with WMS have poor language comprehension and are incapable of understanding the underlying meaning of most conversations. Their communication, though voluminous, lacks depth and subtlety, and rarely goes beyond "small talk."

This intriguing disconnect pervades social interactions beyond spoken language; in spite of their gregariousness, people with WMS often fail to grasp social cues, including facial expression and body language. Moreover, the extreme geniality of WMS people is indicative of an underlying problem: a complete lack of social fear. According to the article, "functional brain scans have shown that the brain’s main fear processor, the amygdala, which in most of us shows heightened activity when we see angry or worried faces, shows no reaction when a person with Williams views such faces. It’s as if they see all faces as friendly."

Children with WMS also have significant deficiencies in spatial processing and dealing with numbers. In another memorable scene from the documentary, Sacks presents the child with a plate of muffins and asks her how many she thinks are on the plate. "3," she immediately and eagerly replies. There are clearly over 10. When then asked to make a + shape out of four rectangular pieces, she arranges them haphazardly, seemingly at random, at which point she cheerfully announces "Done!"

WMS thus provides a captivating mélange of cognitive strengths and weaknesses. Unlike most forms of mental retardation, in which most or all cognitive abilities are concurrently impaired, the distinct peaks and valleys of aptitudes in WMS allow a dissociation between specific abilities and "general intelligence." Further, the genetic basis of WMS is known: it arises from a deletion of ~28 known genes from chromosome 7. Thus, WMS offers a tantalizing opportunity to understand the genetic influences on complex brain functions, which I plan to explore in a future post.

This post, however, was inspired by a separate, equally captivating story woven by the WMS condition: the implications for human social behavior. Why, despite their affability and charm, do WMS people find it hopelessly difficult to make friends? According to Dobbs, this paradox "makes clear that while we are innately driven to connect with others, this affiliative drive alone will not win this connection. To bond with others we must show not just charm but sophisticated cognitive skills."

So why is it that all our relationships, even casual friendships, demand intelligence? Why was it so difficult to believe that Jenny would marry Forrest Gump? The article broaches two related and overlapping evolutionary theories, the "social brain" theory and the "Machiavellian-intelligence" theory. These theories propose, respectively, that humans evolved large brains to generate complex social relationships, and that deception and manipulation (and the ability to identify these two behaviors) are necessary to successfully compete amongst other members of society. Thus, as Steven Pinker suggests in The Language Instinct, "human evolution was propelled more by a cognitive arms race among social competitors than by mastery of technology and the physical environment."

Social life presents a convoluted tension, involving (as stated by Ralph Adolphs and quoted by Dobbs), a “complex and dynamic interplay between two opposing factors: on the one hand, groups can provide better security from predators, better mate choice and more reliable food; on the other hand, mates and food are available also to competitors from within the group.” Thus, our survival is contingent on a delicate balance between getting along with others and outperforming them. Requisite for maintaining this balance is a comprehensive understanding of subtle and complicated social dynamics, enabling both manipulation and the detection of manipulation by others. Dobbs writes that:

"People with Williams, however, don’t do this so well. Generating and detecting deception and veiled meaning requires not just the recognition that people can be bad but a certain level of cognitive power that people with Williams typically lack. In particular it requires what psychologists call “theory of mind,” which is a clear concept of what another person is thinking and the recognition that the other person a) may see the world differently than you do and b) may actually be thinking something different from what he’s saying.

...it’s clear that Williamses do not generally sniff out the sorts of hidden meanings and intentions that lie behind so much human behavior."
The article concludes with a fascinating question about being human: is our social behavior driven more by the urge to connect or the urge to manipulate the connection? Are we trying to make friends, or do we only care about being genetically more successful than our peers?
"We dominate the planet because we can think abstractly, accumulate and relay knowledge and manipulate the environment and one another. By this light our social behavior rises more from big brains than from big hearts.

...

The disassociation of so many elements in Williams — the cognitive from the connective, social fear from nonsocial fear, the tension between the drive to affiliate and the drive to manipulate — highlights how vital these elements are and, in most of us, how delicately, critically entwined. Yet these splits in Williams also clarify which, of caring and comprehension, offers the more vital contribution. For if Williams confers disadvantage by granting more care than comprehension, reversing this imbalance creates a far more problematic phenotype.

As Robert Sapolsky of the Stanford School of Medicine puts it: “Williams have great interest but little competence. But what about a person who has competence but no warmth, desire or empathy? That’s a sociopath. Sociopaths have great theory of mind. But they couldn’t care less.”"

Link to the NYTM article.

Thursday, July 5, 2007

Babies: cheating bastards

From The Globe and Mail:

Babies aren't as innocent as they look, according to new research out of the United Kingdom.

Sweet little infants actually learn to deceive before they can talk, says University of Portsmouth psychology department head Vasudevi Reddy in a study that challenges traditional notions of innocence while confirming many parents' suspicions about their sneaky babies.

Most psychologists have believed that children cannot really lie until about four years of age. But after dozens of interviews with parents, and years spent observing children, Dr. Reddy has determined that infants as young as seven months are quite skilled at pulling the wool over their parents' eyes.


Fake crying and laughing are the earliest and most common forms of deception, but as babies continue to develop their skills of subterfuge, they become far more calculating.

There was the 11-month-old who, caught in the act of reaching for the forbidden soil of a house plant, quickly turned his outstretched hand into a wave, his mother reported to Dr. Reddy, "as though he was saying, 'Oh, I wasn't really going to touch the soil, Mom, I was waving at you.' "

Babies also seem to think they are masters of the Jedi mind trick, using steady eye contact as a distraction technique. Another 11-month-old, upon being presented with toast she didn't want to eat, would hold eye contact with her mother while discreetly chucking the toast onto the floor.

"She's very sneaky," the mother told Dr. Reddy, "she thinks you can't see it."



Via OmniBrain.

Wednesday, July 4, 2007

Sexy neurogenesis

Animal communication is wonderfully diverse, ranging from the dance of a bee, to written language, to a dog urinating on a tree. For many (all?) animal species, the majority of communication is strategically targeted with one goal in mind: sex. Most species lack our oratory competence, yet seem to be procreating rather successfully, able to wordlessly identify and attract mates with desirable genetic backgrounds.

Although sight and sound dominate human communication, many animals use smell to exchange information, able to convey age, social status, sexual receptivity, gender, and health with the chemicals released by their bodies. In fact, many species can recognize individuals by their olfactory "signature" alone, allowing, for example, a mother to recognize her young, and preventing siblings from mating with each other.

This type of communication is mediated, in part, by poorly understood chemicals called pheromones. Mammalian pheromones can elicit immediate behavioral responses, such as aggression (when a male mouse detects the urine of another male mouse) or sexual behavior (when a female mouse detects the same). Of course, the behavioral effects of pheromones are context-dependent; in fact, the fiercest, most aggressive mouse fights I’ve ever witnessed arise when a lactating female catches a whiff of a novel male mouse, upon which she unleashes a bloody, ferocious attack on his genitals.

Mammalian pheromones can also elicit long-lasting effects that alter the physiological state of the animal. For example, the detection of male pheromones by a juvenile female mouse may result in an advance in the onset of puberty. If, however, a pregnant female mouse detects the pheromones of a novel male mouse (e.g. one who has dueled and defeated her current suitor and the “father” of her embryos), she will terminate her pregnancy. The latter is an act of mercy--if she did not abort her pups, the male mouse would have killed them upon birth, ensuring that his chosen mate devotes her time and efforts solely to his genetic material.

Crucial to these pheromone-elicited behaviors is the ability to recognize and discriminate between pheromones. Such social recognition thus requires olfactory memories; just as the evanescent taste of a madeleine cookie evokes the Belle Époque world of Proust’s childhood, a female mouse can associate the scent and taste of a “special” male mouse’s urine with the protection he offers her and her pups. Such olfactory memories may require not only the olfactory bulb (the neural structure involved in perceiving odors), but also the hippocampus (a structure crucial for certain types of memory formation).

These structures happen to be the two primary locations where new neurons continue to be born into adulthood (a process called adult neurogenesis). Since I began research on adult neurogenesis, I have been captivated by the myriad of factors (e.g. running, stress, pregnancy, cognitive stimulation, a multitude of drugs…) that affect the birth, survival, and functionality of new neurons. Given that such modulation must be functionally advantageous, this plasticity has fascinating implications for the evolution of adult neurogenesis, as well as the impact these neurons may have on neural circuits and behaviors.

One matter that has always intrigued me is that the modulators of neurogenesis affect either hippocampal or olfactory bulb neurogenesis, but not both. Thus, I was excited to see an advance online publication in Nature Neuroscience that sought to link neurogenesis in both structures to mating behavior. The research, performed by Sam Weiss at the University of Calgary, focused on female mice, and the olfactory memories endowing them with the ability to identify and select prospective mates.

The researchers found that week-long exposure to male mouse urine simultaneously increased the birth of new neurons in the hippocampus and a region called the subventricular zone (SVZ, the birthplace of neurons destined for the olfactory bulb). Congruent with female preference for powerful men, this response was specific for the urine of dominant males; exposure to the urine of subordinate males did not result in enhanced neurogenesis.

Two weeks after exposure to either dominant or subordinate male urine, the females were placed in a test cage, in which they could smell and see, but not contact, both the dominant and subordinate male. Females primed with the dominant male pheromones had a preference for the dominant male (determined by quantification of “sniffing time”), whereas females exposed to the subordinate male did not show a preference. When neurogenesis was inhibited by a chemical treatment, however, the females did not show a preference regardless of the male with which they had been "primed," indicating that male pheromone-induced neurogenesis was necessary for olfactory recognition and/or discrimination.

The results imply that the exposure to a dominant male may provide the impetus for a female to form a new olfactory memory, mediated by the birth of new neurons. Her olfactory system, constantly barraged with olfactory signals, lies in wait for a whiff of something enchanting and unique, which calls it to attention and prompts it to take action. These specialized olfactory memories allow her to distinguish the males with the greatest genetic gifts from the undesirables.

Thursday, June 28, 2007

Will a raw vegetarian diet make you dumber?

Well, no, but according to a recent news article in Science, the addition of meat and cooked foods to the Homo erectus diet may have led to the dramatic expansion of our ancestors' brains and cognitive abilities.

Between 1.9 million and 200,000 years ago, the brains of our ancestors tripled in size (from 500 cc in Australopithecus to about 1500 cc in Neanderthals), a feat that required a massive increase in energy supply. Brains are rather greedy structures, utilizing 60% of a newborn baby's energy expenditure, and 25% of a resting adult's. In contrast, the average ape brain uses only 8% of the animal's total energy expenditure, despite similar basal metabolic rates. So what led to this glorious caloric upsurge?

One well-supported theory proposes that calorie-dense meat provided the necessary fuel. The high caloric return (not necessarily the high protein content) of meat made it a far more efficient fuel, capable of supporting a 35-55% increase in caloric needs. Moreover, a diet with a greater proportion of meat permits a smaller gut, allowing the allocation of energy saved from digestion and tissue maintenance to feeding the voracious brain. One line of evidence that supports this hypothesis stems from correlational primate studies: capuchin monkeys, which eat an omnivorous diet and have small guts, are considered the most intelligent New World monkeys; in contrast, howler monkeys, while bereft of significant brainpower, have large guts to accompany their vegetarian diets.

According to Harvard primatologist Richard Wrangham, "a diet of wildebeest tartare and antelope sashimi alone isn't enough." By breaking down collagen and starches, cooking is a form of pre-digestion, thus lightening the load for the GI tract and allowing greater energy expenditure elsewhere. In one study, pythons fed cooked, ground meat spent 23.4% less energy digesting relative to those which ate raw meat; in another, mice raised on cooked meat gained 29% more weight than mice fed raw meat.

Theoretically, cooking and meat could have provided a great enough surge in calories to fuel the major expansion of our ancestors' brains and cognitive abilities, but the idea is still controversial. Back then, cooking required fire, and evidence for the earliest controlled fires is a bit ambiguous. The earliest such evidence is from about 800,000 years ago, while the earliest solid evidence for cooking (e.g. hearths) dates to about 250,000 years ago, with more questionable evidence reaching back 300,000 to 500,000 years.

Nevertheless, it's an intriguing explanation for this feature of our evolutionary history. Of course, as we are no longer subjected to the same evolutionary pressures, it's not exactly a recipe for intelligence in modern society. It's possible, according to Wrangham, that "Western food is now so highly processed and easy to digest that...food labels may underestimate net calorie counts and may be another cause of obesity." That said, I love a good barbecue, and in the land of "raw foodies" and "fake stake [sic]," it's refreshing to see meat and cooking receive some due recognition for their delicious role in our natural history.

*For those without a Science subscription, Jake at Pure Pedantry has some key excerpts from the article

Wednesday, June 27, 2007

Estrogen and the aging brain

As women advance in age, pregnancy and childbirth become increasingly dangerous and destructive. Perhaps to protect us, we women have evolved to be infertile later in life: our ovaries stop producing estrogen, causing our reproductive systems to gradually cease operations. Thus rendered barren, we can devote our maternal resources to mentoring and supporting our children and grandchildren. The rosy "grandmother hypothesis" is, however, not the only theory for the evolutionary origin of menopause.

The cessation of estrogen production also results in a number of debilitating symptoms, such as hot flashes, loss of short-term memory, and declining abilities to concentrate and learn new tasks, which would have put older women at greater risk for predation. Accordingly, some have hypothesized that menopause evolved as a way to "thin the herd," eliminating non-reproductive members of society and leaving food and other resources for the young. (Love you, Gumma!)

[This "culling agent" theory receives little support; the predominant theory as to why cognitive abilities decline is that conditions manifesting later in life (especially after reproductive age) are simply not subjected to the pressures of natural selection.]

Regardless of prehistorical reality, humans have evolved the propensity to thwart nature, creating the pharmaceutical industry and one of its many gifts: hormone replacement therapy (HRT). HRT does not rescue infertility, but is intended to mitigate the other lamentable effects of menopause, such as those impacting cognitive function.

The aging brain, while not suffering from notable cell death (except in conditions like Alzheimer's and Parkinson's Disease), is afflicted by significant changes in the connections (synapses) between neurons, within otherwise intact neural circuits. Certain molecules with essential roles in synaptic communication (e.g. glutamate receptors) change in quantity and location. These molecular changes are accompanied by significant structural alterations to the synapses themselves. Two regions display the greatest vulnerability to these changes: the prefrontal cortex (PFC), involved in attention and working memory, and the hippocampus, involved in many types of memory formation. Although these changes are inevitable concomitants of brain aging, they are exacerbated by the drop in estrogen levels experienced by women undergoing menopause, particularly in the PFC.

Estrogen, like other steroid hormones, acts by traveling through the membrane of a cell to the nucleus, where it switches certain genes on or off, thereby regulating protein production. Among the many genes under the direct control of estrogen are those encoding the NMDA receptor (a key molecule for synaptic communication, in particular synaptic plasticity), elements of the cholinergic system (involved in attention and working memory), and genes that influence neuronal survival and structure. In particular, estrogen is known to enhance the number and strength of connections in the PFC of female rhesus monkeys which have had their ovaries removed ("ovariectomized," or OVX). The relevance to the human menopausal situation, however, involving both age and estrogen loss, was heretofore unknown.

A new study by John Morrison at Mt. Sinai School of Medicine investigated this issue by OVXing old and young rhesus monkeys, and treating half of each group with estrogen. The group then tested the monkeys on a task of short-term memory (STM), a component of working memory, in which the monkeys had to remember the location of an object after an increasing delay. They found that aged OVX monkeys which had not received estrogen treatment performed significantly worse than any of the other three groups (aged OVX + estrogen (E), young OVX + E, young OVX), indicative of significant cognitive decline. Moreover, the two groups of young animals performed equivalently, regardless of whether they received estrogen treatment, and the aged OVX + E group performed as well as the former two. This surprising finding indicates that the estrogen treatment in the aged monkeys was sufficient to improve their cognitive function to levels comparable to their younger peers.

After cognitive testing, the researchers analyzed the brains of all monkeys, discovering that, in the PFC, estrogen increased synaptic density in both young and old OVX monkeys. Highest synaptic density was observed in young OVX + E monkeys, followed by comparable levels between young OVX and aged OVX + E, and lowest density in aged OVX without E. Moreover, estrogen treatment resulted in a significant increase in a particular subpopulation of synapses, which exhibit high dynamism and plasticity.

These findings indicate a complex interplay between estrogen and age, by which "young monkeys without [estrogen] can sustain excellent cognitive function against a background of dynamic spine plasticity." The one-two punch of age and estrogen loss, however, may be sufficiently destructive to impair an animal's cognitive function. By promoting the growth of new, dynamic synapses, estrogen may partially compensate for the effects of aging.

The implication with respect to HRT is that the timing of treatment is crucial. It may be important to begin treatment when ovarian hormone levels just begin to fall, at perimenopause, while synaptic plasticity mechanisms are still robust and resilient. Thus, this study contributes to the enormous body of HRT research (which currently consists of heaps of conflicting information). It has been suggested that the timing of hormonal intervention may underlie many of these contradictory data, and this study may lend some credence to this hypothesis and clear these cloudy waters.

Reference: Hao J, et al. Interactive effects of age and estrogen on cognition and pyramidal neurons in monkey prefrontal cortex. PNAS 2007 Jun 25 [Epub ahead of print].

Saturday, June 23, 2007

Free Scientific American!

Scientific American, the oldest continuously published magazine in the United States, is unveiling a new, "appealingly bright, colorful design" and giving away the July issue for free (until June 30). Among the highlights of this issue: neuronal codes and memory formation, gravitational waves, and a debate between Richard Dawkins and Lawrence Krauss on the coexistence of faith and science.

Download your free issue of SciAm here.

Sibling rivalry

A new study in Science reports that the eldest children in families tend to have slightly higher IQs than their younger siblings. The report (brought to my attention by, not surprisingly, my older sister) concluded that the small but significant difference (2.3 IQ points) was not a result of biology, but rather social upbringing.

From The New York Times:
Norwegian epidemiologists analyzed data on birth order, health status and I.Q. scores of 241,310 18- and 19-year-old men born from 1967 to 1976, using military records. After correcting for factors known to affect scores, including parents’ education level, birth weight and family size, the researchers found that eldest children scored an average of 103.2, about 3 percent higher than second children and 4 percent higher than the third-born children. The scientists then looked at I.Q. scores in 63,951 pairs of brothers and found the same results. Differences in household environments did not explain elder siblings’ higher scores.

To test whether the difference could be caused by biological factors, the researchers examined the scores of young men who had become the eldest in the household after an older sibling had died. Their scores came out the same, on average, as those of biological first-borns.

Blame your parents:

Social scientists have proposed several theories to explain how birth order might affect I.Q. scores. First-borns have their parents’ undivided attention as infants, and even if that attention is later divided evenly with a sibling or more, it means that over time they will have more cumulative adult attention, in theory enriching their vocabulary and reasoning abilities.
...

Older siblings [also] consolidate and organize their knowledge in their natural roles as tutors to junior. These lessons, in short, could benefit the teacher more than the student.

Another potential explanation concerns how individual siblings find a niche in the family. Some studies find that both the older and younger siblings tend to describe the first-born as more disciplined, responsible, a better student. Studies suggest — and parents know from experience — that to distinguish themselves, younger siblings often develop other skills, like social charm, a good curveball, mastery of the electric bass, acting skills.
I have failed to develop any such skills (although my Wii-curveball is improving), but all is not lost, little ones! There is a glistening, titillating silver lining to this cloud of inferiority:
Younger siblings often live more adventurous lives than eldest siblings. They are more likely to participate in dangerous sports than eldest children and more likely to travel to exotic places, studies find. They tend to be less conventional in general than first-borns, and some of the most provocative and influential figures in science spent their childhoods in the shadow of an older brother or sister (or two or three or four).

Charles Darwin, author of the revolutionary “Origin of Species,” was the fifth of six children. Nicolaus Copernicus, the Polish astronomer who determined that the Sun, not the Earth, was the center of the planetary system, grew up the youngest of four. René Descartes, the youngest of three, was a key figure in the scientific revolution of the 17th century.

First-borns have won more Nobel Prizes in science than younger siblings, but often by advancing current understanding, rather than overturning it, Dr. Sulloway argued. “It’s the difference between every-year or every-decade creativity and every-century creativity,” he said, “between creativity and radical innovation.”

Link to the NYT article.

Wednesday, June 20, 2007

Working memory and neuronal calculus

The world offers an awesome, indescribably magnificent profusion of sensory riches. For our meager mortal brains, however, trying to process this deluge of information is akin to taking a drink from Iguaçu Falls: it's tremendously inefficient, and you will likely be violently ripped from your precipice and vanish in a ferocious torrent of natural wonder.

Because the world is too rich for our brains to process at once (or even in a lifetime), we are equipped with mechanisms that restrict the avalanche of information to a manageable trickle. At the level of the brain, this restrictive bottleneck is referred to as attention; when we attend to a certain stimulus, we select it for more comprehensive processing, while relegating the rest to a relatively superficial survey. Importantly, attention imposes a capacity limitation on our brains, not our sensory organs, which detect a remarkable embarrassment of sensory detail. For example, the sensory neurons on the bottom of your feet are well aware of the pressure exerted by the floor, but you were probably not actively thinking about it until this sentence directed your attention to the sensation.

If our processing ended with attention, we would conduct our lives strictly from information received at the present instant, without any internal state of the mind or abstract thought. But instead of flitting whimsically in and out of our brain, information selected from the world by mechanisms of attention gains access to our working memory, which temporarily holds onto this information for detailed evaluation. For example, when ordering a pizza for delivery, you read from the menu "4-1-5, 6-9-5, 1-6-1-5," hold the sequence in your head, and punch it into your phone. In the interim between reading and dialing, the digits were stored in your working memory, and were likely quickly forgotten once you heard the first ring and the number was no longer relevant. In more complex situations, the information in our working memory is the basis for decisions and planning of elaborate behavior, and is thus a critical component in many cognitive processes associated with human "intelligence," such as language.

So what is the neural manifestation of working memory? What happens in your brain between reading and dialing the pizza delivery number? Working memory is dependent on the prefrontal cortex (PFC), which is the region at the very front of the brain, directly behind the forehead. In the monkey PFC (and presumably in that of humans), there are neurons that seem to exhibit many properties of working memory; that is, they are activated by a specific stimulus, and if the stimulus will soon be relevant, they temporarily remain activated even after the stimulus disappears. For example, if a monkey must remember the location of a flash of light for a period of 4 seconds, a certain population of neurons will experience a surge in action potentials in response to the light, and proceed to fire at this elevated rate through the 4-second delay period. When the animal reports the stimulus location, the latter information is no longer relevant, and the population of neurons shuts down accordingly. Such neurons are said to exhibit persistent activity (also called "delay period" activity). Persistent neural activity is thought to represent information about a stimulus even after it is gone, thus reflecting the temporary storage of information, i.e. our working memory.

Persistent neural activity presents an interesting computational complication: action potentials are brief electrical pulses, so how does the system interpret a tonic, persisting pattern of neural activity? In a process called temporal integration (theoretically similar to mathematical integral calculations), the system accumulates information over a certain window of time and "remembers" the sum as a pattern of neural activity. That is, the network dynamics of the circuit can integrate a flurry of brief electrical pulses, and translate the sum into a persistent change in activity. One fundamental question is how the circuitry of neural integrators accomplishes this computational feat, which appears to be so fundamental to working memory.
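A rate-model caricature makes temporal integration concrete. In the toy below (my own illustration, not a model from the work discussed here), a population's firing rate leaks away on its own, but recurrent excitation with a feedback weight near 1 cancels the leak, so brief input pulses are summed and then held as persistent activity.

```python
# Toy neural integrator: the rate r decays with time constant tau, but recurrent feedback
# w * r replaces what leaks away. With w = 1 the circuit integrates its input and holds it.
# Illustrative only -- not the circuit model from the studies described here.

def simulate(inputs, w=1.0, tau=0.1, dt=0.001):
    r = 0.0
    rates = []
    for inp in inputs:
        drdt = (-r + w * r + inp) / tau   # leak, recurrent excitation, external input
        r += drdt * dt
        rates.append(r)
    return rates

# A brief pulse of input (the flash of light), then silence for a "delay period".
inputs = [1.0] * 100 + [0.0] * 900
print(simulate(inputs)[-1])         # with w = 1.0, the rate reached during the pulse persists
print(simulate(inputs, w=0.8)[-1])  # with w < 1, feedback no longer cancels the leak and the memory fades
```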

Emre Aksay of Weill Cornell Medical College, in collaboration with David Tank at Princeton University, recently published a study in Nature Neuroscience that investigated this issue in the neural integrator that controls eye movements in goldfish (the goldfish "oculomotor integrator"). Although goldfish eye movement is not the most intuitive place to study working memory, the goldfish oculomotor integrator is a particularly tractable neural integrator, and may thus provide a framework for understanding similar mechanisms in, for example, our PFC.

Like most animals, goldfish spontaneously move their eyes around, fixating on items of interest (e.g. my finger on the glass of their tank). In order to keep the eyes in that stable, fixed position, the animal must have a sustained neural representation (i.e. a memory) of its eye position, which guides and maintains the activation of the appropriate eye muscles (even if my finger is briefly removed). The oculomotor integrator generates this internal representation by integrating the action potentials of neurons which signal changes in eye position.

When a goldfish is looking to the right, the neurons on the right side of the integrator increase their firing rates (behavior characteristic of a positive feedback system), while those on the left decrease their firing rates. Presumably, the positive feedback occurring on the right is critical for generating persistent firing, thereby enabling integration. However, the connective logic of the circuit that mediates this positive feedback is unknown.

It is known that the oculomotor integrator is a bilateral circuit, with two populations of excitatory neurons (one on each side of the brain); these populations are connected primarily by inhibitory neurons. In light of this neuroanatomy, there are two feasible mechanisms that may mediate positive feedback: a) disinhibition from (in this example) the left side of the integrator or b) excitatory connections between cells on the right side of the integrator.

By using drugs that targeted either excitatory or inhibitory neurons, Aksay and Tank sought to dissect the circuitry of the integrator and solve this dilemma. They found that the persistent activity of the integrator that underlies eye fixation did not require inhibitory neurons, but did require the excitatory connections. However, the inhibitory connections between the right and left sides of the integrator appeared to be important for coordinating the two sides, ensuring that only one has persistent activity at any one time (and thus that the eye only moves in one direction at a time).
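As a cartoon of that result, here is a toy two-population rate model (all parameters invented for illustration, not drawn from the study): recurrent excitation within one side exactly balances its leak, so a brief input leaves behind persistent firing, while inhibition from the active side keeps the opposite side silent.

# Toy two-population model of a bilateral integrator (illustrative numbers only):
# self-excitation within a side sustains persistent activity after a brief input,
# while cross-inhibition keeps the opposite side quiet.

w_self = 1.0      # recurrent excitation within a side; exactly cancels the leak
w_cross = 0.6     # inhibition exerted by the opposite side
dt = 0.01

right, left = 0.0, 0.0
for step in range(1000):
    # Brief excitatory drive to the right side ("look right"), then nothing.
    input_right = 5.0 if 100 <= step < 150 else 0.0

    d_right = (-right + w_self * right - w_cross * left + input_right) * dt
    d_left = (-left + w_self * left - w_cross * right) * dt
    right = max(0.0, right + d_right)   # firing rates cannot go negative
    left = max(0.0, left + d_left)

# right holds a persistent, elevated rate (the stored eye position);
# left remains silenced by the cross-inhibition.
print("long after the input: right = %.2f, left = %.2f" % (right, left))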

And now for the tantalizing extrapolations to which neuroscience lends itself so wonderfully: although the persistent neural activity of a discrete neural integrator (holding a specific set of information in your working memory) does not require inhibitory pathways, the coordination between different integrator circuits (i.e. representations of different sets of information) does. The cornucopia of information presented by our surroundings may require these inhibitory connections to keep our working memory locally coordinated, lest the mayhem and welter of the world prevail.

Thursday, June 14, 2007

Neuroscience topics explained in 120 seconds

If you're looking to mix some education into your procrastination, the Society for Neuroscience website has a series of free online newsletters "explaining how basic neuroscience discoveries lead to clinical applications." The articles are brief and quite accessible, and include a wide variety of interesting topics, like narcolepsy, phobias, memory enhancers, pheromones, and artificial vision.

Friday, June 8, 2007

Come here often?

Imagine being home on a moonless night when the power unexpectedly goes out. You are shrouded by silent darkness, instantly blind to your surroundings. Yet despite this sensory deprivation, you can navigate almost effortlessly around the futon, through the doorway of the kitchen, and across to the middle drawer where your lighter is stored, avoiding walls, furniture, and other familiar obstacles along the way. How, without vision or echolocation, did you remember where everything was in relation to you and to everything else?

The brain's "spatial memory," as this ability is called, relies on the operation of neural "maps." Critical to these maps are specialized neurons known as "place cells," which are located in the hippocampal formation. These cells show place-specific firing patters; that is, a given place cell will become highly activated only when an animal is at a specific location within a particular environment. Theoretically, networks of place cells, each activated in a distinct but partially overlapping spatial region, form maps of every environment encountered. If an environment is experienced repeatedly, the map will be committed to long-term memory; the brain can then deduce its animal's location by interpreting the activation of place cells along the relatively stable map.

Importantly, place cell activation patterns are based on spatial cues. In the introductory example, you could navigate in darkness only because you knew your relative position at the time of the power outage. If, however, you were to close your eyes and twirl around on your toes, and open them immediately after the outage began, your internal map (and thus you) would be spatially bewildered. Yet if you were to grope and fumble until you found the futon, your map would reorient, allowing you to immediately intuit the rest of your spatial world.

What about when two environments have similar spatial cues? For example, imagine two parallel streets in San Francisco, each lined by eminent Victorians, peppered with sushi restaurants, cafes, and liquor stores, a MUNI rail cutting a rugged metallic swath down the middle of each street. The spatial cues of these two environments would activate a somewhat overlapping pattern of place cells, yet the subtle differences on each street (an Indian-Pakistani restaurant on the north side of one, a pirate store on the south side of the other) would allow you to recognize the differences and navigate each uniquely. How does your brain recognize such relatively small differences to construct the distinct maps the environments deserve?

Researchers at the University of Bristol and MIT published a report in the early online edition of Science on June 7 that explored this question. The group focused on the role of a particular region of the hippocampal formation, the dentate gyrus, and found it to be crucial for distinguishing between similar locations. The dentate gyrus does not contain place cells, but it does serve as an interface between the hippocampus (where the place cells are located) and the rest of the brain (which would provide the sensory information, the spatial cues). Thus, it may provide the neural input necessary for "map" construction.

The group removed the NMDA receptor, a protein crucial for synaptic plasticity (the process by which the connection between two neurons adapts to become stronger or weaker, thus enabling learning and memory), specifically from the dentate gyrus. Although these mice performed normally in several learning and memory tasks, they had trouble discriminating between similar yet distinct environments. At the neuronal level, their place cells showed decreased spatial specificity, becoming activated over a significantly broader range of locations.
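The consequence of that broadened tuning can be illustrated with the same sort of toy place cells: sharply tuned fields give two nearby locations distinct population patterns, while broad fields make the patterns nearly interchangeable (again, all numbers are invented for illustration).

# Toy illustration (invented numbers) of pattern separation: with sharply
# tuned place cells, two nearby positions evoke distinct population patterns;
# with broadened tuning, the patterns overlap and the two places become
# harder to tell apart.

import numpy as np

centers = np.linspace(5, 95, 10)          # place-field centers along a 1-D track (cm)

def population_pattern(position, width):
    return np.exp(-((position - centers) ** 2) / (2 * width ** 2))

def similarity(width):
    """Correlation between the patterns evoked by two nearby positions."""
    a = population_pattern(40.0, width)
    b = population_pattern(55.0, width)
    return float(np.corrcoef(a, b)[0, 1])

print("pattern similarity with sharp fields (width 5 cm): %.2f" % similarity(5.0))
print("pattern similarity with broad fields (width 25 cm): %.2f" % similarity(25.0))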

This type of deficit is similar to what has been previously observed in aged animals; these results may thus help explain the disorientation experienced by some older people, who often struggle to adapt to new spatial locations. Perhaps a major component of their impairment is an age-related dysfunction in the dentate gyrus, which makes it difficult to encode subtle differences and form unique place cell maps for similar yet distinct places. Such individuals would also lose their bearings as a result of changes to familiar environments; e.g., a few years ago I moved some of my grandmother's icons around on her computer's desktop, and she was completely bewildered until I dragged them all back to their original, recognizable locations.

Reference:
McHugh TJ, et al. "Dentate gyrus NMDA receptors mediate rapid pattern separation in the hippocampal network." Science. Published online June 7, 2007. DOI: 10.1126/science.1140263.

Thursday, June 7, 2007

Monkey see, monkey do mathematical calculations

Humans are constantly making decisions with uncertain outcomes—betting on a poker hand, predicting the weather, and selecting a lane of traffic, for example. Because the consequences of such decisions are not guaranteed, we must base our decisions on clues from the environment, determining the probabilities of potential outcomes before deciding on a rational course of action.

How does the brain perform these calculations? During the formation of a decision, what happens between sensation (our interpretation of the outside world) and behavior (the manifestation of our decision)?

To answer these questions, Tianming Yang and Michael Shadlen from the University of Washington trained rhesus monkeys to perform "simple" statistical calculations, and measured the activity of particular neurons during the decision-making process. The results were published on June 3 in an advance online publication in Nature.

In the task, the monkeys were presented with a random series of four abstract shapes on a video screen. They then directed their gaze toward either a red or a green target light, only one of which would be associated with a juice reward. The rewarding light was not fixed from trial to trial, but the probability that each target would pay off could be calculated from the shapes.

Each of the 10 shapes carried a weight indicating how strongly it favored the red or the green target as rewarding. For example, a square strongly favored the red target (weighted 0.9 toward red), while a triangle strongly favored green (0.9 in the opposite direction); a cone more moderately favored red (0.5 toward red), and a pac-man weakly favored green (0.3). Thus, the evidence that the monkey will be rewarded by looking at a particular target is the sum of the weights of the four shapes shown.

With 10 shapes, there are 715 unique combinations (and 10^4 ordered sequences), thus precluding memorization of specific four-shape patterns and encouraging the monkeys to learn the shapes and calculate the reward probability of each target. This is a far-from-trivial demand to make of a monkey, but eventually (after two months and over 130,000 trials), they chose the correct target 75% of the time, indicating that they had learned to base their decisions on the combined evidence for reward. This capacity of monkeys to make such subtle probabilistic deductions is quite impressive, but it is only the first half of the story.
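Here is a rough sketch of the bookkeeping. Only the square, triangle, cone, and pac-man weights come from the description above; the other shape names and weights, and the logistic conversion from summed evidence to probability, are my own guesses rather than the paper's exact scheme.

# Rough sketch of the task's arithmetic (most weights and the probability
# conversion are illustrative guesses, not the paper's values).

from itertools import combinations_with_replacement

# Positive weights favor the red target, negative weights favor green.
shape_weights = {
    "square": 0.9, "cone": 0.5, "star": 0.3, "circle": 0.1, "cross": 0.05,
    "triangle": -0.9, "diamond": -0.5, "pacman": -0.3, "arrow": -0.1, "ring": -0.05,
}

def prob_red_rewarded(shapes):
    """Sum the four weights and squash the total into a probability."""
    evidence = sum(shape_weights[s] for s in shapes)
    return 1.0 / (1.0 + 10 ** (-evidence))

trial = ["square", "cone", "pacman", "ring"]          # one four-shape trial
print("summed evidence for red: %.2f" % sum(shape_weights[s] for s in trial))
print("P(red target rewarded): %.2f" % prob_red_rewarded(trial))

# Sanity check on the combinatorics: 10 shapes taken 4 at a time, order
# ignored and repeats allowed, gives 715 combinations (vs 10^4 ordered sequences).
n_combinations = sum(1 for _ in combinations_with_replacement(range(10), 4))
print(n_combinations, "combinations;", 10 ** 4, "ordered sequences")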

After thus establishing a complex reasoning task, the researchers could begin exploring the neural basis for these types of decisions. They measured the activity of neurons in a particular area of the brain, called the lateral intraparietal area (LIP). This area lies between the visual input (the abstract shapes) and the behavioral output (the appropriate eye movement), and is thought to carry information involved in transforming visual signals into commands to move the eyes; i.e. in making decisions that result in eye movements.

What they found was awesome. When the monkeys saw a shape, the activity of their LIP neurons was proportional to the probability associated with that shape. With each sequential shape, the neurons altered their firing rates to match the updated probability. Although it is unknown how their brains converted information from each shape to their respective probabilities, the activity of these neurons indicates that they either play a role in the transformation, or represent the outcome during the decision-making process.

Apart from showing that monkeys are closer to furry calculators than previously thought, the study has grander implications. As the authors conclude, “the present study exposes the brain’s capacity to extract probabilistic information from a set of symbols and to combine this information over time.” A similar neural process may underlie our abilities to reason about alternatives, and make decisions based on subtle probabilistic differences.

Reference: Yang T & Shadlen MN (2007) Probabilistic reasoning by neurons. Nature (doi:10.103/nature05852)

Friday, June 1, 2007

Of Molecules and Memory, Pt. I

I've posted on memory a few different times, but thus far I've shied away from going into great molecular detail; in fact, I've pretty much avoided molecular and cellular neuroscience altogether on this "blog." This sidestepping results, to be honest, from laziness; it is easier to make gambling and ventriloquism widely appealing than it is to spice up intracellular mechanisms like gene regulation and protein folding, although I believe the latter two are actually quite intriguing and wholly relevant to understanding the mind.

Glossing over molecular details is actually somewhat at odds with my attitude towards neuroscience, a field which has appealed to me since middle school because it links causal, physical mechanisms with delightfully wondrous things like memorizing pieces of music (the actual moment of inspiration occurred while I was playing the piano). Since then, I have been fascinated by the idea that the biology--proteins, molecules, genes etc--of individual cells is directly related to the complexities of human thought, from kicking a soccer ball and catching a dodgeball to learning a language and dreaming.

In the days since middle school, however, I've come to realize that by attempting to bridge molecules to behavior, neuroscience is both marvelously exciting and incredibly problematic. In between these two levels are, in increasing levels of organization: the cellular, the intercellular (synaptic), the circuit (networks/pathways), the regional (e.g. fMRI studies), and the systems (e.g. motor systems), and a wide range of inter-level hierarchies upon which I won't begin to touch. Because of the enormous distance one must travel from specific molecules to the human mind, many cognitive neuroscientists dismiss "reductionism" as analyzing mechanisms which are too far removed from behavior to be directly relevant; they believe each level must be bridged before making any larger connections.

I agree that the mind cannot be understood by looking solely at the simplest biological components, but I also feel that knowledge of neural networks, etc., is meaningless unless we understand the biological basis. In other words, the cellular approach is necessary, but not sufficient, for understanding the brain. Most cognitive processes, in particular memory formation, have much to gain from molecular and cellular analyses.

Memory formation is endlessly fascinating on all levels. Conceptually, memory is (to quote Eric Kandel) "a form of mental time travel [which] frees us from the constraints of time and space"; mechanistically, it is a result of the brain's ability to embody, retain, and modify information in neural circuits. To further define 'memory' using neuronally (i.e. biologically) relevant vocabulary, it is helpful to distinguish it from the closely related 'knowledge' and 'learning.'

'Knowledge,' in neuronal terms, is the perceived world converted into a neuronal form; it exists as "internal representations." These representations issue from the activity and connectivity of neurons (forming an assembly of neurons: a 'neural circuit'), and are thus inextricably linked to the biological properties of those neurons, particularly of their functional interconnections (i.e. synapses).

'Learning' is then the experience-dependent creation or modification of these internal representations; i.e. changes in the way the neurons are connected to each other in specific circuits, particularly the strength of their synapses ('synaptic plasticity'). 'Memory' is thus the retention of the aforementioned experience-dependent modifications. The salient idea is that specific biological properties must be altered (in particular, those of the synapse) in order for memory to be established; moreover, these properties are products of universal cellular and molecular mechanisms that are employed throughout the body and the living world.

(So what are these cellular and molecular mechanisms? For those in need of some scientific background information on neuronal communication, this site is quite clear and comprehensive, or for a briefer version I've given a summary here.)

And now, leaping and bounding back through the conceptual hierarchy of neuroscience, these biological processes are linked to functional changes occurring within neural circuits, which ultimately guide behavior. Thus, although reductionist techniques attempt to experimentally link molecules directly to behavior, the overarching theoretical goal involves bridges between and amongst all levels. Modern molecular techniques are all the more powerful when combined with other levels of analysis: after intervening at the cellular or molecular level, well-accepted psychological and behavioral paradigms can be employed to determine whether a given biological process is correlated with (or necessary, or perhaps even sufficient for) the occurrence of a behavioral phenomenon.

One elegant example of the power of reductionism (and the motivation for this post, in particular its somewhat lyrical introduction) was just published online in Nature Neuroscience. The study, carried out by a group from UT Southwestern, manipulated a neuronal protein to assess its role in learning and memory, thus attempting to bridge the behavioral and molecular levels directly. I will go into more detail on the paper, and the neurobiology of memory, in the following post.

Of Molecules and Memory, Pt. II

This is Part II of a two-part series; click here for Part I.

Now for some neurobiological background on memory, on the biological changes that occur at synapses when "internal representations" are modified. A key experimental paradigm to understand is long-term potentiation (LTP), which is thought to simulate what happens in the brain during learning. Basically, experimenters take a slice of the hippocampus (a structure with a critical role in declarative learning and memory), and use an electrode to induce strong activity (i.e. a high frequency of action potentials) in a group of neurons located in a specific area of the hippocampus (called CA3). These CA3 neurons project to neurons in another region (CA1), and the connections between the two regions are believed to be involved in learning and memory. Moreover, the experimental stimulation is thought to be similar to the kind of stimulation neurons receive during intense activity (e.g. learning), and results in the "potentiation," or strengthening, of the synapses between CA3 and CA1 neurons. In other words, the CA3 neurons become more effective at stimulating the post-synaptic CA1 neurons.

The hippocampal synapses at which LTP is thought to occur are excitatory (meaning their activation makes it more likely for the post-synaptic cell to fire an action potential), and use a small neurotransmitter called glutamate. Glutamate is by far the most prevalent excitatory neurotransmitter in the brain, and in most cases activates a mixture of NMDA and AMPA receptors on the surface of the post-synaptic cell. Now, I'm going to try to delve deep into the biology of NMDA receptors (with some hyperlinked help), because they have some quite unique features that are critical for synaptic plasticity (and by extension, learning, knowledge, and humanity).

NMDA receptors are ion channels (proteins that span the membrane and conduct specific charged particles into or out of a cell). Because the NMDA receptor sits at excitatory synapses, it allows positively charged ions (like sodium and potassium) to flow into the neuron. One of the special features of NMDA receptors is that they will only conduct these ions under very specific circumstances: 1) glutamate must be present (indicating the activation of an incoming neuron which has released glutamate) and 2) the neuron must already be somewhat "depolarized" (indicating the activation of other synapses from nearby cells; remember that each neuron receives thousands of inputs). Thus, NMDA channels at synapse A will only open if 1) synapse A's presynaptic neuron is activated and 2) the post-synaptic cell is already somewhat activated by activity at synapses B, C, and D. This specificity confers on the receptor the capacity to act as a molecular coincidence detector, only opening when the pre- and post-synaptic cells are activated in unison, e.g. when the synapse is highly active.
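In code, the coincidence-detection rule boils down to a logical AND (the threshold below is illustrative; in the real receptor, depolarization works by expelling a magnesium ion that otherwise blocks the pore):

# Bare-bones sketch of the coincidence-detection rule described above
# (threshold and current values are illustrative): the NMDA receptor conducts
# only when glutamate is bound AND the post-synaptic membrane is already
# depolarized enough to relieve the Mg++ block.

def nmda_current(glutamate_bound, membrane_potential_mv, g_max=1.0):
    """Return the (arbitrary-unit) current through an NMDA receptor."""
    depolarized = membrane_potential_mv > -50.0   # illustrative threshold
    if glutamate_bound and depolarized:
        return g_max          # both conditions met: channel opens, Ca++ can enter
    return 0.0                # either condition missing: no current

# Presynaptic input alone, post-synaptic depolarization alone, and both together:
print(nmda_current(True, -70.0))    # glutamate but resting potential -> 0.0
print(nmda_current(False, -40.0))   # depolarized but no glutamate    -> 0.0
print(nmda_current(True, -40.0))    # coincidence of the two          -> 1.0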

When the NMDA channel does open, it allows not only the entrance of sodium (Na+) and potassium (K+), which depolarize the cell, but also of calcium (Ca++). If the NMDA receptors are induced to open repeatedly in a short period of time, the levels of Ca++ in the cell will become high enough that they activate specific biochemical pathways. First, in the "early phase," these pathways lead to an increase of functional AMPA receptors (the other major kind of glutamate receptor, which activate the post-synaptic cell but do not conduct calcium, nor act as coincidence detectors) on the post-synaptic cell, which means that when a certain amount of glutamate is released into the synapse, it will have a stronger effect because there are more receptors for it to act upon. However, this potentiation is short-lived unless other changes take place.

Persistently high calcium levels will eventually lead to "late phase" LTP, which involves changes in the expression of certain genes (i.e. the rate at which certain proteins are produced). This results in enduring changes such as reshaping the architecture of the dendrite, changing the number of functional receptor proteins, and even building new synapses. A structural change has now ensued, allowing synaptic potentiation to last for days, weeks, months, or even longer.
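Putting the two phases together, the logic reads roughly like this (the thresholds and time scales are invented for illustration, not measured values):

# Cartoon of the two-phase logic above (all numbers invented): brief calcium
# entry adds functional AMPA receptors (early LTP); calcium that stays
# elevated long enough also triggers gene expression and lasting structural
# change (late LTP).

def ltp_phase(peak_calcium, minutes_elevated,
              ca_threshold=1.0, late_minutes=30):
    if peak_calcium < ca_threshold:
        return "no potentiation"
    if minutes_elevated < late_minutes:
        return "early LTP: more functional AMPA receptors (short-lived)"
    return "late LTP: gene expression, structural change (long-lasting)"

print(ltp_phase(peak_calcium=0.5, minutes_elevated=5))
print(ltp_phase(peak_calcium=2.0, minutes_elevated=5))
print(ltp_phase(peak_calcium=2.0, minutes_elevated=60))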

Thus, the NMDA receptor allows certain synapses--those which are frequently activated--to become more effective. Theoretically, when these changes occur at multiple synapses in a neural circuit, the activity and connectivity of the circuit is modified, thus changing the "internal representation" which the circuit underlies, and generating a "memory." But are these truly the molecular mechanisms of memory, particularly forms of memory relevant to mammalian behavior?

This brings us back to the paper, which intervenes directly in these molecular pathways and then measures the effects on memory. The focus of the study was a protein called cyclin-dependent kinase 5 (Cdk5); a "kinase" is a protein that attaches a phosphate group (PO4) to a molecule (phosphorylation), a process which significantly alters the molecule's ability to interact with other molecules.

After using sophisticated genetic tools to remove the gene for Cdk5 in adult mice, the experimenters subjected both normal and "mutant" (those lacking Cdk5) mice to a number of well-established memory tests. In the first set of tests, the mice are trained to learn that an aversive stimulus (usually an electrical shock to the feet) is associated with a particular context (e.g. a room) or cue (e.g. a light or tone); these tests are called contextual and cued fear-conditioning, respectively. Once the association is learned, the neutral stimulus alone (the room or the light) is sufficient to elicit a state of fear (usually determined by observing whether the mouse becomes immobile or "freezes").

In another test, the "Morris water maze," a mouse is placed in a circular pool of opaque water (about 4-5 ft in diameter, typically clouded with milk powder or white paint) that contains a platform hidden about 1 cm below the surface. As rodents are highly averse to swimming, they desperately swim around in search of an exit until they find the platform and can "escape." A series of static visual cues are placed around the edge of the pool, which the rodent uses to determine and, after repeated trials, learn the spatial location of the platform. During the course of training, rodents should require progressively less time to find the platform; once learned, the spatial memory should endure after the training has been completed. This ability to remember the location of the platform depends on the hippocampus; if the hippocampus is damaged, the animals never learn the task.
These tests always seem much more brutal when I explain them like this, although at least they're not as cruel as testing the LD50 of LSD for elephants.
Anyways, the experimenters found that mice lacking Cdk5 performed significantly better in both sets of tests, indicating improved hippocampal learning abilities. The group then explored the mechanisms underlying these behavioral changes, and found that LTP was enhanced in the absence of Cdk5. Moreover, the mice had higher numbers of a subset of NMDA receptors--those containing a subunit called NR2B (NMDA Receptor 2B).

After a bit more probing, the group found evidence that Cdk5, by phosphorylating NR2B-containing NMDA receptors, promotes the degradation of those receptors. Consequently, in the absence of Cdk5, levels of this specific class of NMDA receptors were increased, thus significantly affecting learning behavior, possibly through effects on synaptic plasticity.

And thus, by intervening in a molecular pathway, and tracking the effects using well-established memory tasks, this group linked a molecule to memory. Together with the anatomical circuits in which the neurons are embedded, these molecular pathways directly explain the behavior.

One of the main reasons I wanted to devote this post to reductionism is that I realized that in most of my posts on cognitive neuroscience, I more or less treat the brain like a "black box," which I'm worried may mislead people into thinking that the field of neuroscience doesn't know much about how the brain works. While there is an unimaginable amount of information that is yet to be revealed and understood, there is an amazing amount which we do know--so much that the wealth of available knowledge tends to intimidate me from attempting to explain it in a blog (which is why this post is so monstrously long; congratulations to those who have made it this far). As Eric Kandel, James Schwartz, and Thomas Jessell announce in their introduction to Principles of Neural Science,
"Neural science is attempting to link molecules to mind--how proteins responsible for the activities of individual nerve cells are related to the complexities of neural processes. Today it is possible to link the molecular dynamics of individual nerve cells to representations of perceptual and motor acts in the brain and to relate these internal mechanisms to observable behavior."
Again, I do not believe that molecular mechanisms can alone explain cognition. I would certainly never be satisfied by saying that memory arises from the activity of NMDA receptors and calcium signaling, but these molecular processes are essential for understanding the larger phenomenon, and I thought it was time I showed them their due respect.


Reference:
Hawasli AH, et al. "Cyclin-dependent kinase 5 governs learning and synaptic plasticity via control of NMDAR degradation." Nature Neuroscience. Published online 27 May 2007.