Sunday, February 25, 2007

The evolution of language acquisition

The animal kingdom is full of noisy beasts. Screeching monkeys, barking dogs, squawking chickens…animals use sounds to communicate with each other, both friends and foes. These sounds are instinctual, and although the animals may learn, through observation, when a particular sound is appropriate, the quality of the sounds is not learned; the vocalizations are acoustically innate. What about human vocalizations? Excepting expressions of intense emotions (crying, screaming, laughing), most forms of human vocalizations, particularly speech, are acquired through a learning process. Babies learn to speak by perceiving sounds, imitating them, and then modifying their own vocal output in a process called “vocal learning.”

Vocal learning, or acquiring vocalizations through imitation rather than instinct, is not unique to humans, but is extremely rare in the animal kingdom. In fact, only three other distantly related groups of mammals (elephants, bats, and cetaceans (whales and dolphins)) and three distantly related groups of birds (hummingbirds, parrots, and songbirds) are capable of vocal learning.

The group that has received the most attention from neuroscientists with respect to vocal learning is the songbirds. A growing body of research focuses on the brain structures that allow birds to perceive, learn, and generate songs, and the hope is that this research may contribute to understanding the neural mechanisms underlying human language acquisition. These structures, which include seven “vocal brain nuclei,” are involved in networks that mediate both the perception and production of sounds, allowing birds to hear themselves, hear others, and control the acoustic structure of their vocal output.

By looking at genes that are up- or down-regulated in response to singing behavior, researchers (notably Dr. Erich Jarvis at Duke University) have found that these seven nuclei are strikingly similar in location, connectivity, and behaviorally driven gene expression across all three bird groups. Jarvis speculates that similar brain structures are at play in other vocal learners.

This image shows comparable vocal and auditory brain areas among vocal learning birds and humans. Left hemispheres are shown, as that is the dominant side for human language. From Jarvis, Ann. N.Y. Acad. Sci 1026: 749-777 (2004).


Because lesion and brain-imaging studies in humans do not allow for high-resolution analysis, the neuroanatomy of comparable vocal nuclei in humans has not been demonstrated. It is interesting to note, however, that like birds, humans have brain regions in the cerebrum that control the acoustic structure of their vocal behavior. For example, Broca’s area plays a selective role in speech production; humans with damage to this region have difficulty speaking, but have little or no deficit in comprehension. Somewhat surprisingly, the neuroanatomy of similar structures in other vocal learning mammals (elephants, bats, cetaceans) has not been examined.

Given that parrots and hummingbirds diverged from one another about 65 million years ago, and that birds evolved 50-100 million years after mammals, the evolution of vocal learning is perplexing. Not only did this complex behavior evolve in phylogenetically disparate groups, but also, in every species that has been examined, it is governed by similar neural structures. How did these striking similarities evolve?

Dr. Jarvis suggests three hypotheses. The first is convergent evolution; that is, similar structures evolved independently in all vocal learners. This implies significant constraints on how these structures can evolve to mediate this complex behavior. (For comparison, the “eye” is believed to have evolved independently eleven times, but each time has produced dramatically different structures…compare a human eye to those of a spider).

The second hypothesis is that all vocal learners came from a common ancestor, and that vocal learning was thus lost in all non-learners. Given that these animals diverged hundreds of millions of years ago, this would imply that the capacity for vocal learning was lost independently by every other animal; non-human primates would have lost this capacity multiple times before humans evolved with the trait intact. The third hypothesis modifies the first, positing that all animals have rudimentary neural structures for vocal learning, but that these structures have been independently amplified in vocal learners.

All three of these hypotheses are possible alternatives, and all are constrained by the rarity of vocal learning in the animal kingdom. Vocal learning has clear benefits: by permitting the modification of sounds, it allows for innovative and flexible communication. This system may be the foundation upon which spoken language in humans was established, and certainly contributes to reproductive success in songbirds. Further, these attributes may allow animals to maximize sound propagation in novel environments (e.g. if an animal must adjust from living in an open savannah to living on a heavily forested mountain). If such a useful behavior could evolve seven independent times, why didn’t it evolve more often? Alternatively, if vocal learning was present early on, why would most animals have lost the capacity?

Jarvis’s answer: predation. The ability to make novel, varied sounds, and to maximize their propagation, is an excellent way to advertise one’s presence to potential predators. Thus, for the majority of animals, the benefits of vocal learning are far outweighed by the hazards it brings.

Only seven known groups of distantly related animals are capable of vocal learning--this in itself is fascinating. The fact that seven similar brain structures have evolved in three of these learners only adds to the excitement. Have the brains of mammalian vocal learners evolved similar mechanisms for perceiving and producing sounds? More importantly, have humans? Did this ability to imitate and improvise sounds lead to the evolution of human language?

Friday, February 23, 2007

Wanna bet?

Imagine a coin toss in which you could win $50 for heads, but would lose $50 for tails. Would you take that bet?

What about winning $1,000,000 for heads, or losing $50 for tails?

Winning $75 or losing $50?

Most of us would not accept the first bet, but would certainly accept the second. The third option is a bit more ambiguous. Even though the potential gain is 50% greater than the potential loss, and the probability of each outcome is equal, most people would not take that third bet. Humans are peculiarly averse to risk; that is, we are more sensitive to potential losses than to potential gains. In fact, for the average person, losses are about twice as psychologically powerful as gains. Behavioral risk aversion (BRA) is the lowest reward/loss ratio an individual will accept. Since the average person will not take a 50/50 bet (such as a coin toss) unless the potential gain is at least twice as high as the potential loss, their BRA is about 2.

If you think about this logically, anyone with a BRA value greater than or less than 1.0 is behaving irrationally (though not quite as irrationally as this man). If there is a 50/50 chance of gaining or losing money, one should feel neutral about accepting a bet for +$50/-$50, and accept all bets that have a greater potential pay-off than potential loss, including +$51/-$49. Accepting a +$51/-$49 gamble may seem foolhardy, but if the game is fair, and enough coins are tossed, you will come out with a profit.
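Here's a quick simulation sketch of that claim (the numbers come from the example above; the code itself is just my illustration):

```python
# A fair coin toss that pays +$51 for heads and -$49 for tails has an expected
# value of +$1 per toss, so over enough tosses you come out ahead.
import random

random.seed(1)
n_tosses = 10_000
winnings = sum(51 if random.random() < 0.5 else -49 for _ in range(n_tosses))

print(f"net after {n_tosses:,} tosses: ${winnings:,}")
# expectation: +$1 per toss, so roughly +$10,000 (give or take a few thousand)
```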

Of course, there are many instances in which our "irrational" aversion to loss is actually quite rational. For example, if a person only has $50, winning $50 would merely double their wealth. Although this outcome is desirable, it is not valuable enough to justify risking complete bankruptcy. In this situation, a "rational" person should reject the gamble, for the sake of his or her survival. Thus, although our aversion to loss results from a distorted perception of reality (assigning a greater subjective value to a loss than to an equivalent gain), it can protect us from getting ourselves into dangerous situations.
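One way to formalize this intuition, and this is my own assumption rather than anything in the study, is to imagine a bettor who values wealth logarithmically, so that proportional changes are what matter. Under that assumption, the very same fair-odds gamble can be worth rejecting when it risks ruin and worth accepting when there is a comfortable cushion:

```python
import math

def expected_log_utility_change(wealth, gain, loss, p_win=0.5):
    """Expected change in log-wealth for a coin-toss gamble (win +gain / lose -loss)."""
    if wealth - loss <= 0:
        return float("-inf")    # bankruptcy is infinitely bad under log utility
    return (p_win * math.log(wealth + gain)
            + (1 - p_win) * math.log(wealth - loss)
            - math.log(wealth))

print(expected_log_utility_change(50, 50, 50))        # -inf        -> reject (risk of ruin)
print(expected_log_utility_change(100, 51, 49))       # about -0.13 -> reject
print(expected_log_utility_change(10_000, 51, 49))    # just above 0 -> accept
```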

Let's go back to the +$51/-$49 bet with 50/50 odds, and assume you have a bit more of a buffer in your bank account than in the previous example. Although the most logical decision would be to accept the bet, most people would not do so. Clearly, humans do not use their powers of reason alone. Our emotions play a powerful role in our assessment of risk, eliciting instinctive responses that are a product of millions of years of evolution. Indeed, loss aversion has been observed in capuchin monkeys and children as young as five, suggesting that it may be a fundamental adaptation of the primate brain.

How does our intuitive biology make the subjective impact of losses so much greater than that of gains? What happens in the brain? Are there specific circuits that deal with reason, contending with those that deal with emotion? Are there circuits that are triggered by potential loss, communicating with those triggered by potential gain? Or does risk evaluation involve a single neural system that assigns subjective value to both potential gains and losses? A team of researchers led by Russell Poldrack of UCLA explored these questions and claims to have found a link between certain brain areas and the innate aversion to risk, publishing their intriguing results in Science.

The researchers presented 16 college-aged subjects with 256 different combinations of potential gains and losses (e.g. +$36/-$20); all gambles were coin tosses bearing a 50% chance of either outcome. While the subjects decided whether or not to accept the bets, the researchers used functional magnetic resonance imaging (fMRI) to determine which areas of the brain were active. Analysis of the fMRI results revealed regions that became more active as the potential rewards grew and the bets became more attractive. These areas included the "reward centers," such as the prefrontal cortex and ventral striatum, which are also activated when eating chocolate, hearing pleasing music, and taking cocaine.

What about when the potential loss increased? Surprisingly, when the subjects evaluated the possibility of losing money, the areas associated with negative emotions, such as fear and anxiety, were not activated. In fact, there were no areas that became more activated in response to increased potential loss. Instead, such scenarios silenced the areas that had been activated in response to potential gain. Notably, these areas were turned down in response to potential loss more strongly than they were turned up by potential gain. In other words, the neural response to potential loss was stronger than the neural response to potential gain; the activity in these neural circuits thus mirrored the subjects' behavioral aversion to risk.

The researchers then looked at individual differences between subjects to determine the extent to which a person's brain activity could predict their aversion to loss. For each subject, the researchers analyzed the data from all 256 evaluations to determine their BRA. Across all subjects, the median BRA value was 1.93, with individual values ranging from 0.99 to 6.75. When they focused on the brain activity of subjects who were least averse to risk (low BRAs), they found that these brains had the weakest responses to both potential losses and potential gains. These results indicate that relative to people who are risk averse, risk takers have an overall diminished response to both gains and losses. To take this result beyond a coin toss, this may provide a neural basis for why certain individuals are more likely to be involved in risky behaviors such as base jumping and stock trading: they seek greater gains regardless of the increasing potential loss because their brains are less sensitive to both.
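For the curious, here is a minimal sketch of how such a coefficient might be estimated from a pile of accept/reject decisions. The simulated subject, the variable names, and the logistic choice model are my own illustrative assumptions rather than the authors' actual analysis, though fitting gains and losses in a logistic regression and taking the ratio of the two coefficients is a standard way to quantify loss aversion:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 256
gains = rng.integers(10, 41, n_trials)     # potential gain on heads ($)
losses = rng.integers(5, 21, n_trials)     # potential loss on tails ($, magnitude)

true_lambda = 2.0                          # simulate a typically loss-averse subject
utility = gains - true_lambda * losses
p_accept = 1 / (1 + np.exp(-0.2 * utility))
accepted = (rng.random(n_trials) < p_accept).astype(int)

# Fit accept/reject choices as a function of the gain and loss on each trial.
X = np.column_stack([gains, losses])
fit = LogisticRegression().fit(X, accepted)
beta_gain, beta_loss = fit.coef_[0]

# How much more a dollar of potential loss weighs than a dollar of potential gain.
print(f"estimated loss aversion: {-beta_loss / beta_gain:.2f}")
```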

Human irrationality and our inability to logically assess risk are fascinating phenomena, and this study is an exciting demonstration of a neural basis for this behavior.


Reference: Tom SM et al (2007). The neural basis of loss aversion in decision-making under risk. Science.

Friday, February 16, 2007

Mothers aren't always right

When I was a child, my mother told me never to head the ball during a soccer game. She believed the impact would kill brain cells, leading to an irreversible drop in my neuron count because the brain is incapable of forming new neurons after birth. Luckily, she was wrong. Like my overprotective mother, many people are under the impression that we are born with all of the neurons we will ever have; in fact, until about a decade ago, most scientists were under the same impression. This was a logical assumption, as mature neurons do not divide and are thus incapable of generating new neurons, and anatomical studies had shown that the size and structure of the brain remain largely constant from soon after birth. The assumption that new neurons are not added to the adult brain became one of the central tenets of neuroscience.

The story of how this dogma was overturned is a fascinating one, filled with drama and subterfuge, but I will investigate that saga in a later post. What you need to know is that since the mid-90s, it has been accepted that adult neurogenesis (the process by which new, functional neurons are generated) exists in restricted regions of the mammalian brain. These regions harbor "adult neural stem cells," which appear to be cells that, unlike mature neurons, continue to divide, producing cells that mature into new neurons in adults. [Note: These cells are vastly different from the embryonic stem (ES) cells that are the subject of political controversy; in any significant number, they can probably generate only neurons, while ES cells can generate many different cell types. They also appear to be capable of only a limited number of divisions, while ES cells are quite proliferative. Also, just to clarify, the name's a bit misleading because they're not only present in adults, but in children and fetuses too.]

The majority of studies of mammalian neurogenesis have been conducted in mice and rats, in which at least two regions of the brain harbor these stem cells and thus retain a robust capacity for adult neurogenesis (although the rate of neurogenesis drops sharply as the animals age). These regions are the hippocampus, which is primarily associated with memory formation, and the "subventricular zone" (SVZ), which lines the ventricles (the fluid-filled chambers deep in the brain).

The primary destination of the new cells born in the SVZ, it is believed, is the olfactory bulb, which receives input from the sensory neurons of the nose and is thought to be involved in discriminating odors. This migration from the SVZ (deep inside the brain) to the olfactory bulb (at the front of the brain) is not trivial; it is possibly the longest and most complex migratory route in the nervous system. This route is called the rostral migratory stream (RMS), and contains "chains" of cells destined to become neurons (called neuroblasts) moving unidirectionally towards the olfactory bulb. So, "stem cells" in the SVZ give rise to neuroblasts that migrate along the RMS to the olfactory bulb, where they become neurons.

This image shows the RMS of a rodent, with stem cells and neuroblasts (1, 2) migrating along the RMS (3) to the olfactory bulb (4). From Ming and Song, Annu. Rev. Neurosci. 28: 223-250 (2005).


As with most biological phenomena, our knowledge of adult neurogenesis in humans is far less complete. In 1998, compelling evidence for adult neurogenesis was finally found in humans, although the phenomenon appeared to be limited to the hippocampus. For a long time, evidence for new neurons added to the adult human olfactory bulb was lacking. Many rationalized this with the assumption that rodents are more dependent on their sense of smell than we humans are, and thus require the birth of new olfactory bulb neurons throughout life.

Unfortunately, the techniques for studying neurogenesis in humans are limited and hotly debated. Each group of researchers has its own favored techniques, and is often highly critical of those used by others. In brief, these methods involve determining whether a neuron is "new" by using different ways to figure out when it last divided; in other words, when it was "born." Each technique has its drawbacks and must be complemented with other techniques (for example, proving that the recently divided cell is, in fact, a neuron). So although a number of groups have found evidence for new neurons in the human olfactory bulb, their results were called into question and have not been widely accepted. Basically, evidence has been found for the birth of cells in the SVZ, but it remains a matter of debate whether these cells can form new neurons and integrate into a functional neural network. There has been some controversial evidence for the existence of "new" neurons in the olfactory bulb, but no one had been able to show a way for the cells to travel from the SVZ to the olfactory bulb. In short, no one had found the equivalent of the rodent RMS.

A study released yesterday in Science claims to have identified the human rostral migratory stream (RMS) and corroborated the evidence for new neurons in the olfactory bulb (although, in poor form, the authors failed to cite the paper that first published this evidence). The group injected terminally ill cancer patients (30 of them, ranging in age from 20 to 80) with a chemical called BrdU, which marks all dividing cells (unfortunately, it can also mark cells that are damaged and/or dying, so it's not perfect). After the patients died, the researchers analyzed their brains and were able to locate cells containing BrdU in the olfactory bulb. This demonstrated that after the BrdU injection, a new cell was born and found its way to the olfactory bulb (thus distinguishing it from neurons that had been around for the life of the person). As I mentioned, this had been shown before, but no one knew how such cells managed to migrate to the olfactory bulb.

The really significant finding came when the group sectioned the brains sagittally, exposing the plane we would see from the side. They stained the tissue for a protein that marks neuroblasts, and found the cells distributed along a path that began at the SVZ and, in an intriguingly circuitous route, ended at the olfactory bulb, as in the picture on the right (adapted from Swaminathan, Sci Am 2007). These cells were at various stages of development (as in rodents, where the neuroblasts are thought to mature as they travel along the RMS), and had the appearance of migrating cells. The researchers then used magnetic resonance imaging (MRI) in six living patients to locate a "tube" that they believe ensheaths the RMS.

So there's an RMS; so what? If this finding holds up, it has many therapeutic implications. Because these cells are naturally found in the adult brain, they may prove to be an ideal source for cell replacement in neurodegenerative disease and the injured brain (as opposed to grafting in embryonic stem cells from another human). Ideally, if we can identify the signals that guide the new cells to the olfactory bulb, we might be able to direct their migration to an area that has been damaged by stroke.

Another intriguing question, and a matter of intense debate in the field of adult neurogenesis, is why we need new neurons in our olfactory bulbs. Why here (and the hippocampus), and not other regions of the brain? One of the initial intellectual barriers to accepting that neurogenesis occurs in adults was that neuroscientists could not conceive of why or how one would add a new cell to a functional neuronal circuit. Hippocampal neurogenesis has now been linked to the action of antidepressants and, in theory, to certain aspects of memory formation, but the reasons for olfactory bulb neurogenesis are far more elusive. Perhaps the addition of new neurons helps discriminate new odors from those we have perceived before? Maybe it's involved in associating certain smells with certain memories? Pregnant rodents show an increased addition of neurons to their olfactory bulbs; it would be interesting to know whether a similar phenomenon occurs in pregnant women, and whether it is associated with innate feelings of compassion for the new babe.


Reference: Curtis MA et al. "Human neuroblasts migrate to the olfactory bulb via a lateral ventricular extension." Science 2007 (DOI: 10.1126/science.1136281)

Wednesday, February 7, 2007

How a chicken can run around with its head cut off

When's the last time you walked? Most of us, excepting those with disabilities, probably walked relatively recently. Walking is a routine behavior, learned at an early age and performed without much mental effort. So here's a more difficult question: when's the last time you thought about what our bodies do when we walk? If you take the time to consider it, walking is a tremendously complex task.

When we walk, we activate hundreds of muscles in an extremely precise, sequential manner. We need to alternate our legs, alternate our flexors (the muscles that bend a joint, e.g. quads) with our extensors (muscles that straighten a joint, e.g. hamstrings), bend our hip/knee/ankle/foot at the appropriate times, and do all of this with relative fluidity. When neuroscientists initially sought to understand the neural mechanisms underlying walking behavior, one of the major issues was whether it required conscious control.

To explore this issue, there was a pretty easy (if a bit blunt) solution: remove the cortex. So in the early 1900s, there were a number of studies performed in which the cortices of neonatal cats were removed. These "decorticate" cats matured into adults, and were able to stand and walk around. Thus, the researchers concluded that walking does not require descending input from the cortex.
[Importantly, certain other brain structures, such as the basal ganglia, were left intact. Damage to the basal ganglia can result in a phenomenon called "obstinate progression," which is an amusingly fitting name for the behavior. A cat with severe obstinate progression will walk, and walk, and walk...and walk....even if it walks into a wall, its legs will continue to make walking movements!]
How does this work? There's been a fair bit of progress since the early days of decorticate cats. We now understand that the motor system is arranged in a hierarchy, as illustrated in the figure below. First, we make the decision to start walking (this requires the brain). The brain then gives the command ("start walking") to a different part of the nervous system: a circuit of neurons located in the spinal cord called the "central pattern generator," or CPG. Once activated, the CPG activates the relevant muscles and essentially takes care of all of the details. So that is the general strategy for locomotion: when we decide to walk, our brains "recruit" the appropriate CPG, and this CPG is responsible for activating the appropriate muscles at the appropriate time. And thus, although we consciously decide to start and stop walking, we don't need to think about it in between. Once initiated, the motion persists without cortical input.
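Here's a toy sketch of that hierarchy (the class and method names are purely illustrative, not a model from the literature): the brain's only job is to issue the high-level command, and the spinal circuit then generates the rhythm on its own.

```python
class CentralPatternGenerator:
    """A spinal circuit that, once activated, cycles through a rhythm on its own."""

    def __init__(self, rhythm):
        self.rhythm = rhythm    # ordered list of muscle commands
        self.active = False
        self.phase = 0

    def start(self):
        self.active = True      # this single trigger is all the brain provides

    def tick(self):
        """One timestep of the rhythm, generated without any cortical input."""
        if not self.active:
            return "standing still"
        command = self.rhythm[self.phase % len(self.rhythm)]
        self.phase += 1
        return command

# The "brain" decides to walk, recruits the CPG, and then moves on to other things.
walking_cpg = CentralPatternGenerator(["swing left leg", "swing right leg"])
walking_cpg.start()
for _ in range(4):
    print(walking_cpg.tick())   # left, right, left, right...
```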



[Side note: Precise patterns of muscle contractions aren't just involved in walking, but in all coordinated, rhythmic movements, including swimming, flying, breathing, chewing, even sneezing. (So someone unable to walk and chew gum at the same time has a defective spinal cord, not poor intellectual capacity).]

So the brain's (largely) out of the equation...how does the CPG handle locomotion? As I said, the CPG is a "neural circuit"...just like a computer circuit, a neural circuit has a number of units that communicate with each other to modulate the output (with walking, the output activates specific leg muscles). To simplify things, we can ignore the majority of muscles, such as those controlling the bending of the knee, ankle, and foot (and don't forget about our arms, which are coordinated with our legs when we walk), and think of four targets of the CPG output: the right and left hamstrings, and the right and left quadriceps. The important elements of walking are alternating left and right, and alternating flexor and extensor.

Consider a stride in which your right foot is on the ground and your left hip is bending to make the next step. In this case, the left quad is activated, while the right quad is not, nor is the left hammy. However, the hammy of our "stationary" leg is activated, to help straighten the right hip and propel us forward. So when thinking about the neuronal components underlying these properties, one can imagine that when the motor neuron (a neuron that innervates, or directly communicates with, a muscle) controlling the left quad is active, the CPG ensures that the motor neurons innervating the right quad and left hammy are inhibited, while the motor neuron activating the right hammy is active.

The walking CPG can be translated to other activities: what about hopping? When we decide to hop, our brain activates the locomotor CPG accordingly. Since we want to move both legs together, the CPG coordinates the output such that the quads are activated together, and the hamstrings are activated together, but neither hamstring is ever activated when a quad is activated. So, you can see that a single circuit that controls the quads and hamstrings is flexible and adaptable.
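Here is a sketch of the coordination just described, using the four simplified muscle targets from above (the representation is my own illustration, not a published model):

```python
# 1 = motor neuron active, 0 = inhibited.

WALKING = [
    # left hip flexes to swing the left leg; right hamstring extends the right
    # hip to propel us forward
    {"left_quad": 1, "left_ham": 0, "right_quad": 0, "right_ham": 1},
    # mirror image for the other half of the stride
    {"left_quad": 0, "left_ham": 1, "right_quad": 1, "right_ham": 0},
]

HOPPING = [
    # same four targets, different coordination: both legs move together
    {"left_quad": 1, "left_ham": 0, "right_quad": 1, "right_ham": 0},
    {"left_quad": 0, "left_ham": 1, "right_quad": 0, "right_ham": 1},
]

def run_cpg(gait, n_phases):
    """Once the brain selects a gait, the circuit simply cycles through it."""
    for t in range(n_phases):
        phase = gait[t % len(gait)]
        # the key constraint: a quad and hamstring on the same leg are never
        # active at the same time
        assert not (phase["left_quad"] and phase["left_ham"])
        assert not (phase["right_quad"] and phase["right_ham"])
        active = [muscle for muscle, on in phase.items() if on]
        print(f"phase {t}: activate {', '.join(active)}")

run_cpg(WALKING, 4)
run_cpg(HOPPING, 2)
```

Switching from the walking pattern to the hopping pattern changes the coordination without changing the hardware, which is exactly the flexibility described above.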

[In a follow-up post, I'll go into the network logic of the CPG, and speculate how the CPG is able to coordinate the motor neurons with such precision...the knowledge in mammals is far from complete, but there have been some interesting recent studies!]

So, the infrastructure of the motor system is arranged such that the brain doesn't have too much responsibility when it comes to routine, rhythmic behaviors. Once it activates the appropriate spinal cord circuit to take care of the important details, it can move on to more interesting things. And thus, people can drink coffee and read the paper while walking to work, watch Lost and Top Chef while working out on the elliptical trainer, and chickens can run around with their heads cut off.