Wednesday, March 14, 2007

What do dodgeball and ventriloquism have in common?

Imagine a game of dodgeball, in which your attention is consumed by whizzing balls and the warlike cries of aggressive athletes, all reverberating off the gymnasium walls. Suddenly, you hear someone on the opposing side yell to his teammates that his next shot will be aimed at you. At that instant, you look over at his side of the court and see three hostile-looking boys holding dodgeballs, approaching the halfline, mouths moving as they boldly communicate strategy and support. Which one identified you as his target? Luckily, the same voice continues its warmongering, designating targets for his other teammates, and you are now able to match the movement of the mouth of one of the boys with the relevant sounds. Sound and sight have come together, and you are prepared for the appropriate ball.

Amidst that bombardment of sensory information, how did your brain find the appropriate auditory-visual correspondence to determine the origin of the battle cry? At every moment, in dodgeball and in life, our appreciation of the external world is due to a combination of sights, smells, sounds, touches, and tastes. Our brains must integrate this deluge of information to generate a coherent, seamless picture of the environment; this process is called multisensory integration. When integrated properly, the simultaneous acquisition of information from different sources helps us refine our percept of the world. However, our incoming sensory information is often fraught with uncertainty.

To explore this concept of sensory uncertainty, it is useful to focus on one sensory modality, such as vision. Getting back to the game: we know where the ball is coming from, but we still need to dodge (or catch!) it. Your assailant winds his bulging arm back like a catapult and mechanically releases the projectile; as it careens towards you, you attempt to calculate its speed and trajectory. Your eyes convey imperfect information about the ball's velocity, so your brain can only estimate it. Combining this information with your memory of his previous throws reduces the error in the estimate: not all velocities are equally probable, and over the course of the game you accumulate a probability distribution of the velocities he tends to produce. Your best estimate, and your ability to dodge the ball, results from combining this prior distribution of velocities with evidence from sensory (visual) feedback. This way of combining prior knowledge with uncertain evidence is called "Bayesian inference," and various studies have shown that the human brain performs Bayesian inference at a nearly optimal level.
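
To make this concrete, here is a toy sketch in Python (the throwing history, the visual reading, and the noise level are all invented; the point is only to show how a prior and a noisy observation combine):

```python
import numpy as np

# Toy Bayesian estimate of the ball's speed: prior from remembered throws,
# likelihood from a noisy visual reading. All numbers are invented.

speeds = np.linspace(5, 30, 251)                    # candidate speeds (m/s)

past_throws = [14, 15, 15, 16, 17, 18, 15, 16]      # what you remember seeing
prior = np.exp(-0.5 * ((speeds - np.mean(past_throws)) / np.std(past_throws)) ** 2)

visual_reading, visual_noise = 20.0, 3.0            # the eyes say ~20 m/s, noisily
likelihood = np.exp(-0.5 * ((speeds - visual_reading) / visual_noise) ** 2)

posterior = prior * likelihood                      # Bayes' rule (up to normalization)
posterior /= posterior.sum()

print(speeds[np.argmax(posterior)])   # ~16 m/s: between the prior mean (~15.8) and the reading (20)
```

The best guess lands between what memory expects and what the eyes report, pulled toward whichever source is the more reliable of the two.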

Multisensory integration becomes far more complex when we consider this uncertainty inherent to our sensory information. Nevertheless, our brains are intriguingly capable of weighing different sensory signals according to their corresponding reliabilities; that is, our brains pay more attention to "reliable" sensory information, while disregarding "unreliable" information. This ability results in an "optimal" approximation of reality: we are (nearly) perfect maximum-likelihood integrators.
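
In plainer terms, "weighing by reliability" just means averaging the cues with weights proportional to the inverse of their variances. A minimal sketch, with made-up numbers (where did that shout come from?):

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue combination.
# The estimates and variances below are invented, purely for illustration.

def combine(est_a, var_a, est_b, var_b):
    """Weight each cue by its reliability (inverse variance)."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    return w_a * est_a + (1.0 - w_a) * est_b

# Vision localizes the shouter 2 m to your left (variance 0.1); hearing says
# 3 m to your left (variance 1.0). The combined estimate hugs the visual cue.
print(combine(est_a=2.0, var_a=0.1, est_b=3.0, var_b=1.0))   # ~2.09 m
```

With the visual variance ten times smaller, the visual estimate gets roughly ten times the weight.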

The complexity of this integration process is exposed when our perceptual world does not correspond to reality. One striking example of this vulnerability is ventriloquism. A good ventriloquist will thwart our multisensory integration process by synchronizing the movements of a puppet's mouth with his or her voice, while the movements of his or her mouth are imperceptible. Thus, we perceive the voice as originating from the puppet, as opposed to the ventriloquist.

This deception is a consequence of our brain's propensity to give more weight to visual information than auditory information during the integration process; the neural circuits have adapted to the fact that the visual system is far more reliable at determining location than the auditory system. The direction of a light source is directly determined by the position stimulated on the retina, whereas the direction of a sound is calculated by the differences in timing and intensity of stimulation in one ear relative to the other. Thus, our brain "trusts" the visual system more than the auditory system, and rightly so. If there is a discrepancy between the two, the visual information is favored in the generation of a unified percept of reality, and the puppet "speaks."

So, the big question is, as always: what are the neural mechanisms underlying this process? How does the brain weigh different signals according to their corresponding reliabilities when generating the most realistic percept? How is uncertainty represented at the neural level?

A group in Alex Pouget's laboratory recently published theoretical answers to these questions in Nature Neuroscience. The premise for their exploration was the fact that neurons in the cortex respond to identical stimuli with high variability. For example, think of a neuron in the visual cortex that responds to an object moving from left to right. When exposed to such a stimulus, it will not respond exactly the same way each time: the same neuron may respond by firing 9 times, or 14 times, or not at all. Although this particular cell is, on average, activated by left-to-right motion, its response to this stimulus may change dramatically from one presentation to the next.

Pouget's group hypothesized that this variability may represent sensory uncertainty. Let's return our focus to the visual system and dodgeball. Your uncertainty about the speed at which the dodgeball is moving is related to the fact that neurons in your visual cortex do not fire in exactly the same way every time you see a ball moving towards you. Balls flying at your head can look different depending on your vantage point (and other factors, such as the physical properties of the ball itself), and thus give rise to different responses in your visual cortex each time. If an approaching dodgeball always elicited exactly the same neural response, you would be able to determine its speed with certainty, simply by recalling its speed on past occasions.

The researchers showed mathematically that this variability could represent probability distributions for an object's location. Greater uncertainty (i.e. wider probability distributions) would thus be represented by higher variability in responses of neurons in the auditory cortex relative to those in the visual cortex. This internal representation of sensory uncertainty allows the brain a relatively straightforward (linear) way to combine neural activities: the Bayesian “decoder” of the brain can simply pool the probability functions of multiple neurons (which can represent multiple sensory channels) together to generate an optimal inference of an object's location.
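
Here is a toy, population-level sketch of that idea (my own simplification with invented parameters, not the model from the paper): two populations of Poisson-spiking neurons report the location of an event, one "visual" and reliable (high firing rates), one "auditory" and unreliable (low firing rates), and the decoder combines them by adding their log-likelihoods, which for Poisson-like variability roughly amounts to pooling their spike counts.

```python
import numpy as np

# Toy probabilistic population code (invented parameters, not the paper's model).
rng = np.random.default_rng(1)

centers = np.linspace(-60, 60, 121)      # each neuron's preferred location
s_grid  = np.linspace(-40, 40, 801)      # candidate locations for the decoder

def mean_rates(s, gain, width=5.0):
    """Mean firing rate of every neuron for an event at location s."""
    return gain * np.exp(-0.5 * ((centers - s) / width) ** 2) + 0.1

# Visual input: the puppet's mouth at +10. Auditory input: the real voice at -10.
r_vis = rng.poisson(mean_rates(+10.0, gain=40.0))   # many spikes -> low relative variability
r_aud = rng.poisson(mean_rates(-10.0, gain=2.0))    # few spikes  -> high relative variability

def log_likelihood(r, gain, width=5.0):
    """log P(spike counts | location) for independent Poisson neurons,
    dropping terms that are roughly constant across candidate locations."""
    f = gain * np.exp(-0.5 * ((centers[:, None] - s_grid[None, :]) / width) ** 2) + 0.1
    return r @ np.log(f)

combined = log_likelihood(r_vis, gain=40.0) + log_likelihood(r_aud, gain=2.0)
print(s_grid[np.argmax(combined)])   # lands close to +10: the reliable visual signal wins
```

Even though the two populations are fed deliberately conflicting locations (as in ventriloquism), the pooled estimate sits right next to the visual one, because the visual population's sharper, taller likelihood dominates the sum.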

So when watching a ventriloquist act, our visual system detects the movement of a puppet’s mouth, which neurons in our visual cortex register with low variability (high certainty and a narrower, more mathematically dominant probability function), while the neurons in our auditory cortex represent sound originating from the mouth of the ventriloquist with high variability (low certainty and a wider, less influential probability function). When the brain combines these functions together with its Bayesian decoder, the visual system "wins" and we think the sound came from the puppet.

Friday, March 9, 2007

These spinal networks were made for walking

The invasion of the land by animals was an astonishing evolutionary feat, necessitating a number of substantial changes to the body: limbs with digits, structures for obtaining oxygen from the air, a relatively waterproof covering to prevent dehydration, and sturdy structures to support the body in a medium much less buoyant than water, to name a few. When these pilgrims first bridged the immense gulf between land and water, almost every system in the vertebrate body underwent substantial modifications, but what about the nervous system?

The different optical and acoustic properties of water and air required significant adaptations of our visual and auditory systems, but these adaptations were largely peripheral: the lens changed shape to cope with the different refractive indices, and the bones of the middle ear evolved from bones that once supported the jaw. Perhaps the most significant behavioral modification (which would thus require notable rewiring of the neural circuitry) was the transition from swimming to walking. Did animals need to invent completely new pathways to support a wider repertoire of locomotion?

First, it's necessary to have a general understanding of the neural basis of locomotion: central pattern generators (CPGs). I posted on CPGs a little while ago; the basic idea is that they are networks of neurons, located in the spinal cord, that can coordinate all of the muscles involved in locomotion without requiring ongoing input from the brain. I focused on bipedal motion, but CPGs control rhythmic locomotory movements in all vertebrates, including those that swim and fly. Thus a more focused question is: when animals transitioned from swimming to walking, could the same CPGs that controlled aquatic locomotion handle the different coordination needed between a body and its limbs for walking?

An excellent paper was published today in Science that explored this question using a robotic salamander (named Salamandra robotica because scientists are pretty bad at naming things). Among living vertebrates, salamanders are considered the most similar to the first terrestrial vertebrates, and are thus often used as a model system for studying the evolution of new anatomical structures for terrestrial (vs aquatic) locomotion.

Salamanders can rapidly switch from swimming (using undulations similar to those of primitive fish) to walking (moving diagonally opposed limbs together while the body forms an S-shape, like an alligator). Viewed from above, the body traces a traveling wave during swimming and a standing wave during walking, and neural activity along the spinal cord is likely to mirror this difference.
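
If the distinction between the two wave types isn't intuitive, here's a toy illustration (made-up numbers, unrelated to the paper): in a traveling wave the crest sweeps from head to tail, whereas in a standing wave the body holds a fixed S-shape whose bend simply rocks back and forth.

```python
import numpy as np

# Toy snapshots of body curvature along the spine for the two wave types.
# x runs from head (0) to tail (1); t is time in fractions of a cycle.
x = np.linspace(0.0, 1.0, 9)

def traveling(x, t):                       # swimming: the crest moves tailward
    return np.sin(2 * np.pi * (t - x))

def standing(x, t):                        # walking: a fixed S-shape that rocks in place
    return np.sin(2 * np.pi * t) * np.sin(2 * np.pi * x)

for t in (0.0, 0.25, 0.5):
    print(f"t={t:.2f}  traveling: {np.round(traveling(x, t), 2)}")
    print(f"         standing:  {np.round(standing(x, t), 2)}")
```

At each time step the traveling wave's zero crossings march down the body, while the standing wave's nodes stay put at the head, midpoint, and tail.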

The group, led by physicist Auke Jan Ijspeert and neurobiologist Jean-Marie Cabelguen, designed Salamandra robotica with an electronic "spinal cord" to determine whether the same spinal network could produce both swimming and stepping patterns, and how it might transition between the two.

The robot's spinal cord was controlled by an algorithm that incorporated essential known or speculated attributes of salamander locomotion. First, the group knew (from a study they did in 2003) that the transition between standing and traveling waveforms can be elicited simply by changing the strength of the excitatory drive from a specific region of the brainstem: a weak drive induces the slow, standing wave of the walking gait, while a stronger drive induces the traveling wave of the swimming motion. Second, the authors reasoned that there are two fundamental sets of CPGs controlling salamander locomotion: a body CPG distributed along the spinal cord, and limb CPGs, one at each of the limbs.

With these parameters in place, Salamandra robotica set forth on her quest to traverse land and sea, to test whether her "primitive" swimming circuit (the body CPG) would be able to coordinate with the "newer" circuits of her phylogenetically recent limbs to produce the waddling gait of her sentient inspiration. Watch the results:



So mechanistically, how does she do it? The group found that at low drive levels (and thus low frequencies), both sets of CPGs are active; the limbs alternate appropriately and are coordinated with the movements of the body, producing the walking gait. At higher drive levels, the limb CPGs saturate and fall silent, so the limbs tuck in against the body while the body CPG takes over and produces the traveling wave of swimming.
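
To make the switch concrete, here's a drastically simplified toy sketch of that drive mechanism (my own invented parameters, not the published controller): each oscillator's frequency rises with the brainstem drive within an operating range and is silent outside it, and the limb oscillators are given a lower saturation point than the body oscillators.

```python
# Toy sketch of drive-dependent gait switching (invented parameters, not the
# published model). Frequency rises linearly with drive inside an operating
# range; outside the range the oscillator is silent. Limb oscillators saturate
# at a lower drive than body oscillators, so a strong drive silences the limbs.

def frequency(drive, d_low, d_high, slope, offset):
    """Intrinsic oscillator frequency (Hz) as a function of brainstem drive."""
    if d_low <= drive <= d_high:
        return slope * drive + offset
    return 0.0

BODY = dict(d_low=1.0, d_high=5.0, slope=0.2, offset=0.3)   # keeps going at high drive
LIMB = dict(d_low=1.0, d_high=3.0, slope=0.2, offset=0.0)   # saturates earlier

for drive in (0.5, 2.0, 4.0):
    f_body, f_limb = frequency(drive, **BODY), frequency(drive, **LIMB)
    gait = "walking" if f_limb > 0 else ("swimming" if f_body > 0 else "at rest")
    print(f"drive={drive}: body {f_body:.2f} Hz, limbs {f_limb:.2f} Hz -> {gait}")
```

Sweeping a single drive signal upward thus walks the controller through rest, walking, and swimming, which is exactly the kind of one-knob gait control the brainstem stimulation experiments suggested.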

One interpretation here is that the group is good at building robots, so Salamandra robotica did exactly what they wanted her to do. Another interpretation, the one that got this study into Science and into my blog (quite selective, really), is that the spinal locomotor network controlling trunk movements has remained essentially unchanged during the evolutionary transition from aquatic to terrestrial locomotion.

I think this experiment was highly innovative. As you might imagine, it is quite difficult to study evolution in a controlled laboratory setting (global warming suffers from similar drawbacks, but that's a whole 'nother post), so using robotics as an experimental model is quite promising. The core finding, however, was not surprising to me.

The transition from water to land necessitated a daunting number of anatomical changes and, looking back 370 million years, it seems an insurmountable divide. But the success of evolution hinges on the fact that it occurs gradually, and rarely involves unusual or extraordinary biological processes. It is thus logical that a common, underlying neural mechanism for propulsion can produce a variety of movements; we see this in modern humans, as well. As I pointed out in my earlier post, the same CPGs--in fact, the same neurons in the same CPG--are used for walking, running, hopping, and skipping. The fact that locomotion is largely independent of conscious control strengthens this rationale; it is much more straightforward to make small adjustments to the system by tinkering "downstream." Thus, when animals adapted to terrestrial locomotion, they used the most efficient (and thus most likely to be successful) strategy: recruiting the same neural circuits used for aquatic locomotion.

Which is not to say that this finding is any less wonderful. It is simply a powerful reminder that, as evolutionary biologist Neil Shubin wrote, "the ancient world was transformed by ordinary mechanisms of evolution, with genes and biological processes that are still at work, both around us and inside our bodies." This is, in his words, "something sublime."

For more information, The Neurophilosopher has an excellent, more detailed post on this paper.

Wednesday, March 7, 2007

Sleep is so last year

I remember when my body was so chemically-inexperienced that half a cup of coffee would transform me into a screaming ball of energy, bouncing indefatigably off the walls. Age, college finals, and early morning soccer games have dramatically altered my requirements for sufficient stimulation, but luckily, the beverage and drug industries seem to have co-evolved to meet my ever-increasing needs.

As manufacturers continue to force increasing concentrations of caffeine (sometimes marketed as yerba mate or guarana, which are simply the names of other plants that contain caffeine) into their beverages, we now have access to the most absurdly caffeinated beverages human societies have ever seen.

But some people want more.

If you're seeking a refreshing departure from run-of-the-mill stimulants like caffeine and amphetamines, you might be ready for the new generation of wakefulness-promoting drugs, eugeroics (Greek for "good arousal"). These drugs (only two at present, modafinil and adrafinil, both manufactured by the same company) are used to treat narcolepsy, and are touted as "the ultimate stimulants": they increase alertness, they supposedly lack the usual side effects (e.g. "jitters" and post-euphoric crashes), and they are not considered addictive.

Seems like a convenient, effective way to stay alert through seminars and long drives, right? While searching around for more information on eugeroics (the Wikipedia article is tragically short), it became clear that the strongest proponents of the drugs have their sights set much higher than my trivial visions of enhanced student life. These drugs appear capable of usurping one of the deepest needs of human nature: sleep.

From an article that came out about a year ago in New Scientist:

"The more we understand about the body's 24-hour clock the more we will be able to override it," says Russell Foster, a circadian biologist at Imperial College London. "In 10 to 20 years we'll be able to pharmacologically turn sleep off. Mimicking sleep will take longer, but I can see it happening." Foster envisages a world where it's possible, or even routine, for people to be active for 22 hours a day and sleep for two. It is not a world that everyone likes the sound of. "I think that would be the most hideous thing to happen to society," says Neil Stanley, head of sleep research at the Human Psychopharmacology Research Unit in the University of Surrey, UK. But most sleep researchers agree that it is inevitable.
...
Modafinil has made it possible to have 48 hours of continuous wakefulness with few, if any, ill effects. It delivers natural-feeling alertness and wakefulness without the powerful physical and mental jolt that earlier stimulants delivered. In fact, its effects are so subtle that many users say they don't notice anything at all - until they need to.
...
Perhaps the most remarkable thing about modafinil is that users don't seem to have to pay back any "sleep debt". Normally, if you stayed awake for 48 hours straight you would have to sleep for about 16 hours to catch up. Modafinil somehow allows you to catch up with only 8 hours or so.


The implications here are intriguing. What if the drug allowed you to cut out 4 hours of sleep every night? You'd be awake for an extra 1460 hours (60 days) a year! Gain a year every 6 years! But before you go out and fake narcolepsy to your doctor, there's an important detail to take into consideration: no one knows how these drugs work.

Perhaps a more important consideration: how/when/why did sleep become so unfashionable? "...Perchance to dream"?!

More info:
How caffeine works
Peer-reviewed (but not FDA-approved) eugeroic
Eugeroics
How sleep works

Tuesday, March 6, 2007

Honeybees!


This post is devoted to my favorite invertebrate, Apis mellifera, the Western honeybee. The honeybee genome was fully sequenced only recently (published in October 2006), and I have heard many demean this development as pointless and superfluous. I feel otherwise; as eusocial insects, these little critters lead fascinating lives, which may help us understand the evolution of social behavior. (They're not bad dancers, either.) I've been meaning to post on honeybees for a while, and was finally inspired to do so by an article published today in PLoS Biology.

Eusocial insects, including termites, ants, and many bees and wasps, exist in groups numbering up to hundreds of thousands of individuals, yet these colonies seem to function as single organisms. Indeed, no individual is capable of nourishing, protecting, and reproducing entirely on its own. As a collection of thousands of specialized individuals, each dedicated to a single task, however, such a "superorganism" can perform every task simultaneously.

Although workers are considerably specialized, this division of labor is dynamic; insect colonies respond to changing environmental conditions by adjusting the number of workers performing a given task. The western honeybee offers a striking example of this adaptability: a young bee times her maturation into a forager in response to the colony's needs. For the first 2-3 weeks of her life (yes, "her": the females do all of the work in the hive, while males are haploid drones who have no role beyond insemination), a worker bee will work solely in the hive, performing activities such as caring for larvae as a nurse. At approximately 3 weeks, she will mature into a forager, leaving the hive to collect pollen and nectar for the remainder of her 4-6 week lifespan.

This stereotyped program of behavioral maturation is highly dependent on the social context; the colony allocates its 40,000 to 80,000 workers to different tasks depending on its needs. If a large number of foragers fall victim to predation, the remaining workers in the colony react by expediting their own development, and may become foragers as early as 5 days of age. Correspondingly, if a brood disease kills many nurses in the hive, behavioral maturation is suppressed and the age at which a bee becomes a forager is delayed; some bees already occupied as foragers may even revert back to nursing to make up for the deficiency. These considerable lifestyle changes require entirely new behaviors, and are accompanied by dramatic physiological changes, including alterations in exocrine gland activity, hormone and neurotransmitter levels, brain structure, responsiveness to certain stimuli, and gene expression levels.

A fascinating question is how a bee can sense when the colony requires more foragers, and regulate her development accordingly. This study, coming out of Gro Amdam's lab at Arizona State University, looked at one particular protein, vitellogenin, that may be instrumental in regulating this behavioral maturation.

To determine the role of vitellogenin in coordinating social development, the authors used a technique called RNA interference (RNAi), which basically prevents specific proteins (vitellogenin, in this case) from being synthesized; thus, it "silences" specific genes. This is an incredibly useful genetic technique, and this paper is among the first to apply it to honeybees.

The researchers used their vitellogenin RNAi tool to silence the gene in a group of bees, which were then housed in observation hives with about 5,000 other adult bees of various ages and social statuses (the same was done with "normal" bees, injected with a green dye instead of the RNAi tool, as a control). (One of the cool things about honeybee research is that it looks at natural behaviors in natural environments, as opposed to, say, testing rodent memory by forcing animals to swim around in a giant pool of milk.)

The researchers found that bees in which vitellogenin had been silenced initiated foraging behavior significantly earlier than normal bees. This suggests that vitellogenin inhibits the onset of foraging, implying that it may be a molecular mediator for environmental cues that inhibit foraging behavior. The paper goes further to associate behavioral/social maturation with actual lifespan. Remember that the transition from nursing to foraging is a "maturation" process, and that the inhibition of foraging behavior is coupled with the perpetuation of nursing behavior. In a fascinating corroboration of this connection, the authors found that bees without vitellogenin had shorter lifespans than "normal" bees.

I tend to be excited about anything that draws attention to honeybee social behavior, but was particularly happy with this paper. Honeybees offer the unique opportunity to identify genes that influence social behavior and may be involved in social evolution, and the use of RNAi by Amdam is a tremendous step in furthering this fascinating field.

Whither new neurons?

The adult brain has long been seen as a stable circuit, often compared with computer hardware in that the components (neurons) are connected in an elaborate architecture that is not amenable to structural change. It was thought that learning occurred by altering the way the neurons communicated with each other, without any need to modify the composition of the system. As I discussed in an earlier post, this dogma was overturned just about a decade ago; it is now accepted that throughout life, new neurons are continually added to two areas of the brain: the olfactory bulb (see earlier post), and the hippocampus, which is essential for learning and memory and the focus of this post. Even more recently, it was demonstrated that these new neurons actually participate in preexisting networks.

Although the number of new neurons added per day (thousands!) is a minuscule fraction of the total number of neurons in the brain (about 100 billion), it is certainly high enough to significantly affect the functioning of the brain. However, due to technical limitations (reflecting the nascency of the field), it is unclear what these new neurons actually contribute to mature circuits, and how important they really are.

Speculation, of course, is rampant. One of the interesting things about young neurons is that, until a few months after they're born, they are functionally different from older, more mature neurons. New neurons are, in comparison with their elders, more easily excited, and more likely to strengthen their connections with other neurons (a property related to long-term potentiation, or LTP, a phenomenon believed to underlie learning and memory). These features indicate that new neurons may have a special role in the brain, beyond simply replacing old, dying neurons; specifically, the flexibility of their connections suggests that they may participate in learning and memory.

A recent paper, published in Nature Neuroscience, explored how these neurons are recruited into memory circuits when animals are learning new information. The researchers, led by Paul Frankland at the University of Toronto, focused on the hippocampus's role in the formation of spatial memories, which, as you may have guessed, are memories that are concerned with spatial locations (the Neurophilosopher recently had a fantastic post on different mammalian spatial memory systems).

The group used a cognitive test called a Morris Water Maze, which is the most common paradigm for assessing spatial memory in rodents. Basically, a rodent is placed in a circular pool of opaque water (about 4-5 ft in diameter, typically clouded with milk powder or white paint) that contains a platform hidden about 1 cm below the surface. As rodents are highly averse to swimming, they desperately swim around in search of an exit until they find the platform and can "escape." A series of static visual cues are placed around the edge of the pool, which the rodent uses to determine and, after repeated trials, learn the spatial location of the platform. During the course of training, rodents should require progressively less time to find the platform; once learned, the spatial memory should endure after the training has been completed. This ability to remember the location of the platform depends on the hippocampus; if the hippocampus is damaged, the animals never learn the task. Further, if the hippocampus is damaged after the location is learned, the animal will be unable to retrieve the memory.

In this study, the group looked at the brains of mice that were training on the Morris Water Maze, and were thus actively storing the relevant spatial information. To see which neurons were actively participating in this newly forming spatial memory network, they analyzed the expression patterns of genes called immediate early genes (IEGs), which are turned on when a neuron is activated. They combined this technique with BrdU labeling, a common strategy for marking when a cell was born, and were thus able to determine the age of the cells that had been activated and incorporated into the network.
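
In practice, that combination boils down to a double-labelling tally: for each population of cells, count how many expressed an IEG during the task. Here's a hypothetical sketch of the bookkeeping (the counts are invented, not the paper's data):

```python
# Hypothetical double-labelling tally (invented counts, not the paper's data):
# what fraction of each cell population expressed an immediate early gene (IEG)
# while the animal performed the water-maze task?

counts = {
    # population: (IEG-positive cells, total cells counted)
    "BrdU+ (adult-born around training)":  (30, 200),
    "BrdU- (older, pre-existing neurons)": (12, 200),
}

for population, (ieg_pos, total) in counts.items():
    print(f"{population}: {ieg_pos / total:.1%} IEG-positive")

# Comparing the two fractions (here 15% vs 6%) is how preferential recruitment
# of the adult-born cells would show up in this kind of analysis.
```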

They found that new neurons were, indeed, incorporated into these spatial memory circuits. In fact, new neurons between the ages of 2 and 8 weeks were 2-3 times more likely to be activated during the task than their elders, which is a striking display of preference for these particular cells.

What about when neither the cells nor the memory is new? This was the most tantalizing part of the story. A month and a half after training (during which the mice did not perform the task), the researchers put the same mice back into the pool. They found that the cells born during the training period were still preferentially activated, indicating that they had become stable components of a network encoding the spatial location of the platform, presumably to endure for the lifetime of the animal.

The key here is that spatial memory formation and retrieval preferentially activate new neurons, which implies that such cells make a unique contribution to memory processing. Although the results do not answer the questions "Why new neurons? What are they good for?", they certainly fuel more informed speculation. Perhaps mature neurons remain most sensitive to features they learned when they were young; thus, we may need to add new neurons to the network in order to form distinct spatial memories, e.g. when adapting to a new environment.