Thursday, June 28, 2007

Will a raw vegetarian diet make you dumber?

Well, no, but according to a recent news article in Science, the addition of meat and cooked foods to the Homo erectus diet may have led to the dramatic expansion of our ancestors' brains and cognitive abilities.

Between 1.9 million and 200,000 years ago, the brains of our ancestors tripled in size (from 500 cc in Australopithecus to about 1500 cc in Neanderthals), a feat that required a massive increase in energy supply. Brains are rather greedy structures, utilizing 60% of a newborn baby's energy expenditure, and 25% of a resting adult's. In contrast, the average ape brain uses only 8% of the animal's total energy expenditure, despite similar basal metabolic rates. So what led to this glorious caloric upsurge?

One well-supported theory proposes that calorie-dense meat provided the necessary fuel. The high caloric return (not necessarily the high protein content) of meat made it a far more efficient fuel, capable of supporting a 35-55% increase in caloric needs. Moreover, a diet with a greater proportion of meat permits a smaller gut, allowing the allocation of energy saved from digestion and tissue maintenance to feeding the voracious brain. One line of evidence that supports this hypothesis stems from correlational primate studies: capuchin monkeys, which eat an omnivorous diet and have small guts, are considered the most intelligent New World monkeys; in contrast, howler monkeys, while bereft of significant brainpower, have large guts to accompany their vegetarian diets.

According to Harvard primatologist Richard Wrangham, "a diet of wildebeest tartare and antelope sashimi alone isn't enough." By breaking down collagen and starches, cooking is a form of pre-digestion, thus lightening the load for the GI tract and allowing greater energy expenditure elsewhere. In one study, pythons fed cooked, ground meat spent 23.4% less energy digesting relative to those which ate raw meat; in another, mice raised on cooked meat gained 29% more weight than mice fed raw meat.

Theoretically, cooking and meat could have provided a great enough surge in calories to fuel the major expansion of our ancestors' brains and cognitive abilities, but the idea is still controversial. Back then, cooking required fire, and evidence for the earliest controlled fires is a bit ambiguous. The earliest such evidence is from about 800,000 years ago, and the earliest evidence for cooking (e.g. hearths) is from no earlier than 250,000 years ago, with questionable evidence dating to 300,000 to 500,000 years ago.

Nevertheless, it's an intriguing explanation for this feature of our evolutionary history. Of course, as we are no longer subjected to the same evolutionary pressures, it's not exactly a recipe for intelligence in modern society. It's possible, according to Wrangham, that "Western food is now so highly processed and easy to digest [that] labels may underestimate net calorie counts and may be another cause of obesity." That said, I love a good barbecue, and in the land of "raw foodies" and "fake stake [sic]," it's refreshing to see meat and cooking receive some due recognition for their delicious role in our natural history.

*For those without a Science subscription, Jake at Pure Pedantry has some key excerpts from the article.

Wednesday, June 27, 2007

Estrogen and the aging brain

As women advance in age, pregnancy and childbirth become increasingly dangerous and destructive. Perhaps to protect us, we women have evolved to be infertile later in life: our ovaries stop producing estrogen, causing our reproductive systems to gradually cease operations. Thus rendered barren, we can devote our maternal resources to mentoring and supporting our children and grandchildren. The rosy "grandmother hypothesis" is, however, not the only theory for the evolutionary origin of menopause.

The cessation of estrogen production also results in a number of debilitating symptoms, such as hot flashes, loss of short-term memory, and declining abilities to concentrate and learn new tasks, which would have put older women at greater risk for predation. Accordingly, some have hypothesized that menopause evolved as a way to "thin the herd," eliminating non-reproductive members of society and leaving food and other resources for the young. (Love you, Gumma!)

[This "culling agent" theory receives little support; the predominant theory as to why cognitive abilities decline is that conditions manifesting later in life (especially after reproductive age) are simply not subjected to the pressures of natural selection.]

Regardless of prehistorical reality, humans have evolved the propensity to thwart nature, creating the pharmaceutical industry and one of its many gifts: hormone replacement therapy (HRT). HRT does not restore fertility, but is intended to mitigate the other lamentable effects of menopause, such as those impacting cognitive function.

The aging brain, while not suffering from notable cell death (except in conditions like Alzheimer's and Parkinson's Disease), is afflicted by significant changes in the connections (synapses) between neurons, within otherwise intact neural circuits. Certain molecules with essential roles in synaptic communication (e.g. glutamate receptors) change in quantity and location. These molecular changes are accompanied by significant structural alterations to the synapses themselves. Two regions display the greatest vulnerability to these changes: the prefrontal cortex (PFC), involved in attention and working memory, and the hippocampus, involved in many types of memory formation. Although these changes are inevitable concomitants of brain aging, they are exacerbated by the drop in estrogen levels experienced by women undergoing menopause, particularly in the PFC.

Estrogen, like other steroid hormones, acts by traveling through the membrane of a cell to the nucleus, where it switches certain genes on or off, thereby regulating protein production. Among the many genes under the direct control of estrogen are those for the NMDA receptor (a key molecule for synaptic communication, in particular synaptic plasticity), elements of the cholinergic system (involved in attention and working memory), and genes that influence neuronal survival and structure. In particular, estrogen is known to enhance the number and strength of connections in the PFC of female rhesus monkeys which have had their ovaries removed ("ovariectomized," or OVX). The relevance to the human menopausal situation, however, involving both age and estrogen loss, was heretofore unknown.

A new study by John Morrison at Mt. Sinai School of Medicine investigated this issue by OVXing old and young rhesus monkeys, and treating half of each group with estrogen. The group then tested the monkeys on a task of short-term memory (STM), a component of working memory, in which the monkeys had to remember the location of an object after an increasing delay. They found that aged OVX monkeys which had not received estrogen treatment performed significantly worse than any of the other three groups (aged OVX + estrogen (E), young OVX + E, young OVX), indicative of significant cognitive decline. Moreover, the two groups of young animals performed equivalently, regardless of whether they received estrogen treatment, and the aged OVX + E group performed as well as the former two. This surprising finding indicates that the estrogen treatment in the aged monkeys was sufficient to improve their cognitive function to levels comparable to their younger peers.

After cognitive testing, the researchers analyzed the brains of all monkeys, discovering that, in the PFC, estrogen increased synaptic density in both young and old OVX monkeys. Highest synaptic density was observed in young OVX + E monkeys, followed by comparable levels between young OVX and aged OVX + E, and lowest density in aged OVX without E. Moreover, estrogen treatment resulted in a significant increase in a particular subpopulation of synapses, which exhibit high dynamism and plasticity.

These findings indicate a complex interplay between estrogen and age, by which "young monkeys without [estrogen] can sustain excellent cognitive function against a background of dynamic spine plasticity." The one-two punch of age and estrogen loss, however, may be sufficiently destructive to impair an animal's cognitive function. By promoting the growth of new, dynamic synapses, estrogen may partially compensate for the effects of aging.

The implication with respect to HRT is that the timing of treatment is crucial. It may be important to begin treatment when ovarian hormone levels just begin to fall, at perimenopause, while synaptic plasticity mechanisms are still robust and resilient. Thus, this study contributes to the enormous body of HRT research (which currently consists of heaps of conflicting information). It has been suggested that the timing of hormonal intervention may underlie many of these contradictory data, and this study may lend some credence to this hypothesis and clear these cloudy waters.

Reference: Hao J et al. Interactive effects of age and estrogen on cognition and pyramidal neurons in monkey prefrontal cortex. PNAS 2007 Jun 25 [Epub ahead of print].

Saturday, June 23, 2007

Free Scientific American!

Scientific American, the oldest continuously published magazine in the United States, is unveiling a new, "appealingly bright, colorful design" and giving away the July issue for free (until June 30). Among the highlights of this issue: neuronal codes and memory formation, gravitational waves, and a debate between Richard Dawkins and Lawrence Krauss on the coexistence of faith and science.

Download your free issue of SciAm here.

Sibling rivalry

A new study in Science reports that the eldest children in families tend to have slightly higher IQs than their younger siblings. The report (brought to my attention by, not surprisingly, my older sister) concluded that the small but significant difference (2.3 IQ points) was not a result of biology, but rather social upbringing.

From The New York Times:
Norwegian epidemiologists analyzed data on birth order, health status and I.Q. scores of 241,310 18- and 19-year-old men born from 1967 to 1976, using military records. After correcting for factors known to affect scores, including parents’ education level, birth weight and family size, the researchers found that eldest children scored an average of 103.2, about 3 percent higher than second children and 4 percent higher than the third-born children. The scientists then looked at I.Q. scores in 63,951 pairs of brothers and found the same results. Differences in household environments did not explain elder siblings’ higher scores.

To test whether the difference could be caused by biological factors, the researchers examined the scores of young men who had become the eldest in the household after an older sibling had died. Their scores came out the same, on average, as those of biological first-borns.

Blame your parents:

Social scientists have proposed several theories to explain how birth order might affect I.Q. scores. First-borns have their parents’ undivided attention as infants, and even if that attention is later divided evenly with a sibling or more, it means that over time they will have more cumulative adult attention, in theory enriching their vocabulary and reasoning abilities.

Older siblings [also] consolidate and organize their knowledge in their natural roles as tutors to junior. These lessons, in short, could benefit the teacher more than the student.

Another potential explanation concerns how individual siblings find a niche in the family. Some studies find that both the older and younger siblings tend to describe the first-born as more disciplined, responsible, a better student. Studies suggest — and parents know from experience — that to distinguish themselves, younger siblings often develop other skills, like social charm, a good curveball, mastery of the electric bass, acting skills.
I have failed to develop any such skills (although my Wii-curveball is improving), but all is not lost, little ones! There is a glistening, titillating silver lining to this cloud of inferiority:
Younger siblings often live more adventurous lives than eldest siblings. They are more likely to participate in dangerous sports than eldest children and more likely to travel to exotic places, studies find. They tend to be less conventional in general than first-borns, and some of the most provocative and influential figures in science spent their childhoods in the shadow of an older brother or sister (or two or three or four).

Charles Darwin, author of the revolutionary “Origin of Species,” was the fifth of six children. Nicolaus Copernicus, the Polish astronomer who determined that the Sun, not the Earth, was the center of the planetary system, grew up the youngest of four. René Descartes, the youngest of three, was a key figure in the scientific revolution of the 17th century.

First-borns have won more Nobel Prizes in science than younger siblings, but often by advancing current understanding, rather than overturning it, Dr. Sulloway argued. “It’s the difference between every-year or every-decade creativity and every-century creativity,” he said, “between creativity and radical innovation.”

Link to the NYT article.

Wednesday, June 20, 2007

Working memory and neuronal calculus

The world offers an awesome, indescribably magnificent profusion of sensory riches. For our meager mortal brains, however, trying to process this deluge of information is akin to taking a drink from Iguaçu Falls: it's tremendously inefficient, and you will likely be violently ripped from your precipice and vanish in a ferocious torrent of natural wonder.

Because the world is too rich for our brains to process at once (or even in a lifetime), we are equipped with mechanisms that restrict the avalanche of information to a manageable trickle. At the level of the brain, this restrictive bottleneck is referred to as attention; when we attend to a certain stimulus, we select it for more comprehensive processing, while relegating the rest to a relatively superficial survey. Importantly, attention imposes a capacity limitation on our brains, not our sensory organs, which detect a remarkable embarrassment of sensory detail. For example, the sensory neurons on the bottom of your feet are well aware of the pressure exerted by the floor, but you were probably not actively thinking about it until this sentence directed your attention to the sensation.

If our processing ended with attention, we would conduct our lives strictly from information received at the present instant, without any internal state of the mind or abstract thought. But instead of flitting whimsically in and out of our brain, information selected from the world by mechanisms of attention gains access to our working memory, which temporarily holds onto this information for detailed evaluation. For example, when ordering a pizza for delivery, you read from the menu "4-1-5, 6-9-5, 1-6-1-5," hold the sequence in your head, and punch it into your phone. In the interim between reading and dialing, the digits were stored in your working memory, and likely quickly forgotten once you heard the first ring and the number was no longer relevant. In more complex situations, the information in our working memory is the basis for decisions and planning of elaborate behavior, and is thus a critical component in many cognitive processes associated with human "intelligence," such as language.

So what is the neural manifestation of working memory? What happens in your brain between reading and dialing the pizza delivery number? Working memory is dependent on the prefrontal cortex (PFC), which is the region at the very front of the brain, directly behind the forehead. In the monkey PFC (and presumably in that of humans), there are neurons that seem to exhibit many properties of working memory; that is, they are activated by a specific stimulus, and if the stimulus will soon be relevant, they temporarily remain activated even after the stimulus disappears. For example, if a monkey must remember the location of a flash of light for a period of 4 seconds, a certain population of neurons will experience a surge in action potentials in response to the light, and proceed to fire at this elevated rate through the 4-second delay period. When the animal reports the stimulus location, the latter information is no longer relevant, and the population of neurons shuts down accordingly. Such neurons are said to exhibit persistent activity (also called "delay period" activity). Persistent neural activity is thought to represent information about a stimulus even after it is gone, thus reflecting the temporary storage of information, i.e. our working memory.

Persistent neural activity presents an interesting computational complication: action potentials are brief electrical pulses, so how does the system interpret a tonic, persisting pattern of neural activity? In a process called temporal integration (theoretically similar to mathematical integral calculations), the system accumulates information over a certain window of time and "remembers" the sum as a pattern of neural activity. That is, the network dynamics of the circuit can integrate a flurry of brief electrical pulses, and translate the sum into a persistent change in activity. One fundamental question is how the circuitry of neural integrators accomplishes this computational feat, which appears to be so fundamental to working memory.
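The logic of a neural integrator can be sketched in a few lines of code. Below is a deliberately minimal "perfect integrator" with made-up numbers, not a model of any actual circuit: each brief input pulse bumps the firing rate up, and perfect positive feedback holds the new rate even after the pulses stop, which is the signature of delay-period activity.

```python
# Toy "perfect integrator": persistent firing rate as the running
# sum of brief input pulses. All numbers are illustrative, not taken
# from the studies discussed here.

def integrate(pulses, gain=1.0):
    """Accumulate brief input pulses into a persistent rate.

    With perfect positive feedback, the rate holds its value between
    pulses instead of decaying back to zero.
    """
    rate = 0.0
    trace = []
    for p in pulses:
        rate += gain * p  # feedback exactly cancels any decay
        trace.append(rate)
    return trace

# A burst of pulses, then silence: the rate steps up, then persists.
trace = integrate([1, 1, 1, 0, 0, 0, 0])
print(trace)  # [1.0, 2.0, 3.0, 3.0, 3.0, 3.0, 3.0]
```

In real circuits the feedback is never exactly perfect, so the stored rate slowly drifts or leaks away; tuning the feedback to cancel that leak is precisely the computational feat referred to above.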

Emre Aksay of Weill Cornell Medical College, in collaboration with David Tank at Princeton University, recently published a study in Nature Neuroscience that investigated this issue in the neural integrator that controls eye movements in goldfish (the goldfish "oculomotor integrator"). Although goldfish eye movement is not the most intuitive place to study working memory, the goldfish oculomotor integrator is a particularly tractable neural integrator, and may thus provide a framework for understanding similar mechanisms in, for example, our PFC.

Like most animals, goldfish spontaneously move their eyes around, fixating on items of interest (e.g. my finger on the glass of their tank). In order to keep the eyes in that stable, fixed position, the animal must have a sustained neural representation (i.e. a memory) of its eye position, which guides and maintains the activation of the appropriate eye muscles (even if my finger is briefly removed). The oculomotor integrator generates this internal representation by integrating the action potentials of neurons which signal changes in eye position.

When a goldfish is looking to the right, the neurons on the right side of the integrator increase their firing rates (behavior characteristic of a positive feedback system), while those on the left decrease their firing rates. Presumably, the positive feedback occurring on the right is critical for generating persistent firing, thereby enabling integration. However, the connective logic of the circuit that mediates this positive feedback is unknown.

It is known that the oculomotor integrator is a bilateral circuit, with two populations of excitatory neurons (one on each side of the brain); these populations are connected primarily by inhibitory neurons. In light of this neuroanatomy, there are two feasible mechanisms that may mediate positive feedback: a) disinhibition from (in this example) the left side of the integrator or b) excitatory connections between cells on the right side of the integrator.

By using drugs that targeted either excitatory or inhibitory neurons, Aksay and Tank sought to dissect the circuitry of the integrator and solve this dilemma. They found that the persistent activity of the integrator that underlies eye fixation did not require inhibitory neurons, but did require the excitatory connections. However, the inhibitory connections between the right and left sides of the integrator appeared to be important for coordinating the two sides, ensuring that only one has persistent activity at any one time (and thus that the eye only moves in one direction at a time).
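The division of labor described above (recurrent excitation for persistence, cross-inhibition for coordination) can be caricatured in a toy rate model. The update rule and parameters below are invented for illustration and are not fitted to the goldfish data:

```python
# Toy bilateral integrator: each side sustains its own activity via
# recurrent excitation, while mutual inhibition ensures that only one
# side remains persistently active. Parameters are illustrative only.

def step(right, left, w_exc=1.0, w_inh=0.5):
    """One update of the two populations (rates clipped at zero)."""
    new_right = max(0.0, w_exc * right - w_inh * left)
    new_left = max(0.0, w_exc * left - w_inh * right)
    return new_right, new_left

# Start with the right side slightly more active.
r, l = 1.0, 0.4
for _ in range(10):
    r, l = step(r, l)

print(round(r, 3), round(l, 3))  # 0.8 0.0
```

With `w_inh` set to zero, both sides would persist independently; the inhibition is what guarantees a single winner, mirroring the finding that the inhibitory pathway coordinates the two sides rather than generating persistence itself.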

And now for the tantalizing extrapolations to which neuroscience lends itself so wonderfully: although the persistent neural activity of discrete neural integrators (holding a specific set of information in your working memory) does not require inhibitory pathways, the coordination between different integrator circuits (i.e. representations of different sets of information) does. The cornucopia of information presented by our surroundings may require these inhibitory connections to help liaise our working memory at a local level, lest the mayhem of our weltering world prevail.

Thursday, June 14, 2007

Neuroscience topics explained in 120 seconds

If you're looking to mix some education into your procrastination, the Society for Neuroscience website has a series of free online newsletters "explaining how basic neuroscience discoveries lead to clinical applications." The articles are brief and quite accessible, and include a wide variety of interesting topics, like narcolepsy, phobias, memory enhancers, pheromones, and artificial vision.

Friday, June 8, 2007

Come here often?

Imagine being home on a moonless night when the power unexpectedly goes out. You are shrouded by silent darkness, instantly blind to your surroundings. Yet despite this sensory deprivation, you can navigate somewhat effortlessly around the futon, through the doorway of the kitchen, and across to the middle drawer where your lighter is stored, avoiding walls, furniture, and other familiar obstacles along the way. How, without vision or echolocation, did you remember where everything was in relation to you and to everything else?

The brain's "spatial memory," as this ability is called, relies on the operation of neural "maps." Critical to these maps are specialized neurons known as "place cells," which are located in the hippocampal formation. These cells show place-specific firing patterns; that is, a given place cell will become highly activated only when an animal is at a specific location within a particular environment. Theoretically, networks of place cells, each activated in a distinct but partially overlapping spatial region, form maps of every environment encountered. If an environment is experienced repeatedly, the map will be committed to long-term memory; the brain can then deduce the animal's location by interpreting the activation of place cells along the relatively stable map.
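As a cartoon of how such a map might be read out, here is a sketch (with invented field centers and widths) in which each place cell has a Gaussian "place field" and position is decoded as the center of the most active cell:

```python
# Minimal place-cell decoding sketch. Each cell fires most strongly
# near its preferred location; the animal's position is read out as
# the field center of the most active cell. Centers and widths are
# invented for illustration.
import math

centers = [0.0, 1.0, 2.0, 3.0, 4.0]  # preferred locations (arbitrary units)

def activity(position, center, width=0.8):
    """Gaussian place field: peak firing at the field center."""
    return math.exp(-((position - center) ** 2) / (2 * width ** 2))

def decode(position):
    """Return the field center of the most active place cell."""
    rates = [activity(position, c) for c in centers]
    return centers[rates.index(max(rates))]

print(decode(2.2))  # 2.0 -- the nearest field center wins
```

Real readout almost certainly pools across the whole overlapping population rather than picking a single winner, but the principle (position inferred from which cells are active) is the same.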

Importantly, place cell activation patterns are based on spatial cues. In the introductory example, you could navigate in darkness only because you knew your relative position at the time of the power outage. If, however, you were to close your eyes and twirl around on your toes, opening them only after the outage began, your internal map (and thus you) would be spatially bewildered. Yet if you were to grope and fumble until you found the futon, your map would reorient, allowing you to immediately intuit the rest of your spatial world.

What about when two environments have similar spatial cues? For example, imagine two parallel streets in San Francisco, each lined by eminent Victorians, peppered with sushi restaurants, cafes, and liquor stores, a MUNI rail cutting a rugged metallic swath down the middle of each street. The spatial cues of these two environments would activate a somewhat overlapping pattern of place cells, yet the subtle differences on each street (an Indian-Pakistani restaurant on the north side of one, a pirate store on the south side of the other) would allow you to recognize the differences and navigate each uniquely. How does your brain recognize such relatively small differences to construct the distinct maps the environments deserve?

Researchers at the University of Bristol and MIT published a report in the early online edition of Science on June 7 that explored this question. The group focused on the role of a particular region of the hippocampal formation, the dentate gyrus, and found it to be crucial for distinguishing between similar locations. The dentate gyrus does not contain place cells, but it does serve as an interface between the hippocampus (where the place cells are located) and the rest of the brain (which would provide the sensory information, the spatial cues). Thus, it may provide the neural input necessary for "map" construction.

The group removed the NMDA receptor, a protein crucial for synaptic plasticity (the process by which the connection between two neurons adapts to become stronger or weaker, thus enabling learning and memory), specifically from the dentate gyrus. Although these mice performed normally in several learning and memory tasks, they had trouble discriminating between similar yet distinct environments. At the neuronal level, their place cells showed decreased spatial specificity, becoming activated over a significantly broader range of locations.
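The paper's title invokes "pattern separation," the transformation by which similar inputs are made more distinct. A cartoon version, with invented activity patterns and a winner-take-all threshold standing in for dentate processing, looks like this:

```python
# Toy illustration of pattern separation: two similar input patterns
# become less overlapping after a sparsifying threshold. The patterns
# and the threshold rule are invented for illustration, not a model
# of the actual dentate circuitry.

def sparsify(pattern, k=2):
    """Keep only the k most active units (winner-take-all)."""
    cutoff = sorted(pattern, reverse=True)[k - 1]
    return [1 if x >= cutoff else 0 for x in pattern]

def overlap(a, b):
    """Count units active in both binary patterns."""
    return sum(1 for x, y in zip(a, b) if x == 1 and y == 1)

env_a = [0.9, 0.8, 0.7, 0.6, 0.1]  # "street A" input
env_b = [0.1, 0.6, 0.7, 0.8, 0.9]  # similar "street B" input

# Raw inputs share many moderately active units...
print(overlap([round(x) for x in env_a], [round(x) for x in env_b]))  # 3
# ...but after sparsification the two representations no longer overlap.
print(overlap(sparsify(env_a), sparsify(env_b)))  # 0
```

Losing this sharpening step would leave downstream place cells with blurrier, more overlapping inputs, which is consistent with the broader firing fields observed in the mutant mice.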

This type of deficit is similar to what has been previously observed in aged animals; these results may thus help explain the disorientation experienced by some older people, who often struggle to adapt to new spatial locations. Perhaps a major component of their impairment is an age-related dysfunction in the dentate gyrus, which makes it difficult to encode subtle differences and form unique place cell maps for similar yet distinct places. Such individuals would also lose their bearings as a result of changes to familiar environments; e.g., a few years ago I moved some of my grandmother's icons around on her computer's desktop, and she was completely bewildered until I dragged them all back to their original, recognizable locations.

McHugh TJ et al. "Dentate gyrus NMDA receptors mediate rapid pattern separation in the hippocampal network" Science. [Published online June 7 2007, DOI: 10.1126/science.1140263]

Thursday, June 7, 2007

Monkey see, monkey do mathematical calculations

Humans are constantly making decisions with uncertain outcomes—betting on a poker hand, predicting the weather, and selecting a lane of traffic, for example. Because the consequences of such decisions are not guaranteed, we must base our decisions on clues from the environment, determining the probabilities of potential outcomes before deciding on a rational course of action.

How does the brain perform these calculations? During the formation of a decision, what happens between sensation (our interpretation of the outside world) and behavior (the manifestation of our decision)?

To answer these questions, Tianming Yang and Michael Shadlen from the University of Washington trained rhesus monkeys to perform "simple" statistical calculations, and measured the activity of particular neurons during the decision-making process. The results were published on June 3 in an advance online publication in Nature.

In the task, the monkeys were presented with a random series of four abstract shapes on a video screen. They then directed their gaze toward either a red or a green target light, only one of which would be associated with a juice reward. The light that would give the reward was not fixed, but could be calculated probabilistically.

Each shape (there were a total of 10) carried evidence about whether the rewarding target was red or green. For example, a square strongly favors the red target as rewarding (weighted 0.9), while a triangle indicates that green will be rewarding (0.9 in the opposite direction). A cone weakly indicates that red will be rewarding (0.5 towards red), and a pac-man weakly indicates green (0.3). Thus, the likelihood that the monkey will be rewarded for looking at a particular target is determined by summing the evidence contributed by each of the four shapes.

With 10 shapes, there are 715 unique combinations (and 10^4 permutations), thus precluding memorization of specific four-shape patterns and encouraging the monkeys to learn the shapes and calculate the reward probability of each target. This is far from a trivial demand of a monkey, but eventually (after two months and over 130,000 trials), they chose the correct target 75% of the time, indicating that they had learned to base their decisions on the combined probabilities for reward. This capacity of monkeys to make such subtle probabilistic deductions is quite impressive, but it is only the first half of the story.
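The counting here checks out: with 10 symbols and 4 slots, there are C(13, 4) = 715 unordered combinations with repetition and 10^4 ordered sequences. The sketch below verifies that arithmetic and shows a running evidence total for one hypothetical trial; the shape weights are placeholders loosely based on the description above, not the paper's actual values.

```python
# Counting the stimulus space of the task, plus a toy running total
# of the evidence from one four-shape sequence. The per-shape weights
# are invented placeholders, not the published values.
from math import comb

n_slots = 4     # shapes shown per trial
n_symbols = 10  # distinct shapes

# Unordered selections with repetition: C(10 + 4 - 1, 4) = 715
combinations = comb(n_symbols + n_slots - 1, n_slots)
permutations = n_symbols ** n_slots  # ordered sequences: 10^4

print(combinations, permutations)  # 715 10000

# Hypothetical weights toward the red target (negative favors green);
# the decision tracks the running sum as each shape appears.
weights = {"square": 0.9, "triangle": -0.9, "cone": 0.5, "pacman": -0.3}
trial = ["square", "cone", "pacman", "triangle"]
evidence = 0.0
for shape in trial:
    evidence += weights[shape]
print(round(evidence, 1))  # 0.2 -- a slight lean toward red
```

A monkey (or neuron) maintaining such a running sum never needs to memorize whole sequences; it only needs the per-shape weights and an accumulator, which is exactly what the LIP recordings described next appear to reflect.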

After thus establishing a complex reasoning task, the researchers could begin exploring the neural basis for these types of decisions. They measured the activity of neurons in a particular area of the brain, called the lateral intraparietal area (LIP). This area lies intermediate between the visual input (the abstract shapes) and the behavioral output (the appropriate eye movement), and is thought to carry information involved in transforming visual signals into commands to move the eyes; i.e. in making decisions that result in eye movements.

What they found was awesome. When the monkeys saw a shape, the activity of their LIP neurons was proportional to the probability associated with that shape. With each sequential shape, the neurons altered their firing rates to match the updated probability. Although it is unknown how their brains converted information from each shape to their respective probabilities, the activity of these neurons indicates that they either play a role in the transformation, or represent the outcome during the decision-making process.

Apart from showing that monkeys are closer to furry calculators than previously thought, the study has grander implications. As the authors conclude, “the present study exposes the brain’s capacity to extract probabilistic information from a set of symbols and to combine this information over time.” A similar neural process may underlie our abilities to reason about alternatives, and make decisions based on subtle probabilistic differences.

Reference: Yang T & Shadlen MN (2007) Probabilistic reasoning by neurons. Nature (doi:10.1038/nature05852)

Friday, June 1, 2007

Of Molecules and Memory, Pt. I

I've posted on memory a few different times, but thus far I've shied away from going into great molecular detail; in fact, I've pretty much avoided molecular and cellular neuroscience altogether on this "blog." This sidestepping results, to be honest, from laziness; it is easier to make gambling and ventriloquism widely appealing than it is to spice up intracellular mechanisms like gene regulation and protein folding, although I believe the latter two are actually quite intriguing and wholly relevant to understanding the mind.

Glossing over molecular details is actually somewhat at odds with my attitude towards neuroscience, a field which has appealed to me since middle school because it links causal, physical mechanisms with delightfully wondrous things like memorizing pieces of music (the actual moment of inspiration occurred while I was playing the piano). Since then, I have been fascinated by the idea that the biology--proteins, molecules, genes, etc.--of individual cells is directly related to the complexities of human thought, from kicking a soccer ball and catching a dodgeball to learning a language and dreaming.

In the days since middle school, however, I've come to realize that by attempting to bridge molecules to behavior, neuroscience is both marvelously exciting and incredibly problematic. In between these two levels are, in increasing levels of organization: the cellular, the intercellular (synaptic), the circuit (networks/pathways), the regional (e.g. fMRI studies), and the systems (e.g. motor systems), and a wide range of inter-level hierarchies upon which I won't begin to touch. Because of the enormous distance one must travel from specific molecules to the human mind, many cognitive neuroscientists dismiss "reductionism" as analyzing mechanisms which are too far removed from behavior to be directly relevant; they believe each level must be bridged before making any larger connections.

I agree that the mind cannot be understood by looking solely at the simplest biological components, but I also feel that knowledge of neural networks, etc., is meaningless unless we understand the biological basis. In other words, the cellular approach is necessary, but not sufficient, for understanding the brain. Most cognitive processes, in particular memory formation, have much to gain from molecular and cellular analyses.

Memory formation is endlessly fascinating on all levels. Conceptually, memory is (to quote Eric Kandel) "a form of mental time travel [which] frees us from the constraints of time and space"; mechanistically, it is a result of the brain's ability to embody, retain, and modify information in neural circuits. To further define 'memory' using neuronally (i.e. biologically) relevant vocabulary, it is helpful to distinguish it from the closely related concepts of 'knowledge' and 'learning.'

'Knowledge,' in neuronal terms, is the perceived world converted into a neuronal form; it exists as "internal representations." These representations issue from the activity and connectivity of neurons (forming an assembly of neurons: a 'neural circuit'), and are thus inextricably linked to the biological properties of those neurons, particularly of their functional interconnections (i.e. synapses).

'Learning' is then the experience-dependent creation or modification of these internal representations; i.e. changes in the way the neurons are connected to each other in specific circuits, particularly the strength of their synapses ('synaptic plasticity'). 'Memory' is thus the retention of the aforementioned experience-dependent modifications. The salient idea is that specific biological properties must be altered (in particular, those of the synapse) in order for a memory to be established; moreover, these properties are products of universal cellular and molecular mechanisms that are employed throughout the body and the living world.
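The abstract idea above--learning as a change in synaptic strengths, memory as the retention of those changes--can be caricatured in a few lines of code. This is a deliberately toy Hebbian sketch, not any model from the paper discussed below; every number in it is arbitrary.

```python
# Toy Hebbian learning rule: synapses whose pre-synaptic input is active
# at the same time as the post-synaptic cell get strengthened. The
# retained (changed) weights are the "memory" of the experience.

def hebbian_update(weights, pre_activity, post_activity, lr=0.25):
    """Return new synaptic strengths after one correlated-activity event."""
    return [w + lr * pre * post_activity
            for w, pre in zip(weights, pre_activity)]

weights = [0.5, 0.5, 0.5]      # initial strengths of three synapses
experience = [1.0, 0.0, 1.0]   # an "experience": inputs 1 and 3 were active
post = 1.0                     # the post-synaptic cell fired

weights = hebbian_update(weights, experience, post)
print(weights)  # -> [0.75, 0.5, 0.75]: only the co-active synapses changed
```

The point of the sketch is simply that the "internal representation" lives in the weight vector, and learning modifies it only where pre- and post-synaptic activity coincide.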

(So what are these cellular and molecular mechanisms? For those in need of some scientific background information on neuronal communication, this site is quite clear and comprehensive, or for a briefer version I've given a summary here.)

And now, leaping and bounding back through the conceptual hierarchy of neuroscience, these biological processes are linked to functional changes occurring within neural circuits, which ultimately guide behavior. Thus, although reductionist techniques attempt to experimentally link molecules directly to behavior, the overarching theoretical goal involves bridges between and amongst all levels. Modern molecular techniques are all the more powerful when combined with other levels of analysis: after intervening at the cellular or molecular level, well-accepted psychological and behavioral paradigms can be employed to determine whether a given biological process is correlated with (or necessary, or perhaps even sufficient for) the occurrence of a behavioral phenomenon.

One elegant example of the power of reductionism (which motivated this post, in particular its somewhat lyrical introduction) was just published online in Nature Neuroscience. The study, carried out by a group from UT Southwestern, manipulated a neuronal protein to assess its role in learning and memory, thus attempting to bridge the behavioral and molecular-pathway levels directly. I will go into more detail on the paper, and the neurobiology of memory, in the following post.

Of Molecules and Memory, Pt. II

This is Part II of a two-part series; click here for Part I.

Now for some neurobiological background on memory, on the biological changes that occur at synapses when "internal representations" are modified. A key experimental paradigm to understand is called long term potentiation (LTP), which is thought to simulate what happens in the brain during learning. Basically, experimenters take a slice of the hippocampus (a structure with a critical role in declarative learning and memory), and use an electrode to induce strong activity (i.e. a high frequency of action potentials) in a group of neurons located in a specific area of the hippocampus (called CA3). These neurons project to neurons in another region (CA1), and the connections between the two regions are believed to be involved in learning and memory. Moreover, the experimental stimulation is thought to be similar to the kind of stimulation neurons receive during intense activity (e.g. learning), and results in the "potentiation," or strengthening, of the synapses between CA3 and CA1 neurons. In other words, the CA3 neurons become more effective at stimulating the post-synaptic CA1 neurons.

The hippocampal synapses at which LTP is thought to occur are excitatory (meaning their activation makes it more likely for the post-synaptic cell to fire an action potential), and use a small neurotransmitter called glutamate. Glutamate is by far the most prevalent excitatory neurotransmitter in the brain, and in most cases activates a mixture of NMDA and AMPA receptors on the surface of the post-synaptic cell. Now, I'm going to try to delve deep into the biology of NMDA receptors (with some hyperlinked help), because they have some unique features that are critical for synaptic plasticity (and by extension, learning, knowledge, and humanity).

NMDA receptors are ion channels (proteins that span the membrane and conduct specific charged particles into or out of a cell). Because NMDA receptors sit at excitatory synapses, opening them allows positive charge to flow into the neuron. One of the special features of NMDA receptors is that they will only conduct under very specific circumstances: 1) glutamate must be present (indicating the activation of an incoming neuron which has released glutamate) and 2) the neuron must already be somewhat "depolarized" (indicating the activation of other synapses from nearby cells; remember that each neuron receives thousands of inputs). Thus, NMDA channels at synapse A will only open if 1) synapse A's presynaptic neuron is activated and 2) the post-synaptic cell is already somewhat activated by activity at synapses B, C, and D. This specificity confers on the receptor the capacity to act as a molecular coincidence detector, opening only when the pre- and post-synaptic cells are activated in unison, e.g. when the synapse is highly active.
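The two-condition gating logic above is simple enough to write down directly. Here is a minimal sketch of the coincidence-detector idea, not a biophysical model; the threshold value is made up for illustration (in the real receptor, the depolarization requirement comes from relief of a magnesium block).

```python
# Toy model of NMDA receptor gating as an AND of two conditions:
# 1) glutamate bound (pre-synaptic neuron fired), and
# 2) post-synaptic membrane already depolarized (other synapses active).

DEPOLARIZATION_THRESHOLD_MV = -50.0  # hypothetical threshold, for illustration only

def nmda_channel_open(glutamate_bound, membrane_potential_mv):
    """Return True only when both coincidence conditions are met."""
    presynaptic_active = glutamate_bound
    postsynaptic_active = membrane_potential_mv >= DEPOLARIZATION_THRESHOLD_MV
    return presynaptic_active and postsynaptic_active

print(nmda_channel_open(True, -70.0))   # False: glutamate alone, resting cell
print(nmda_channel_open(False, -40.0))  # False: depolarized, but no glutamate
print(nmda_channel_open(True, -40.0))   # True: both conditions met, channel conducts
```

In other words, the receptor computes a logical AND over pre- and post-synaptic activity, which is exactly what makes it useful for detecting that a synapse is being driven hard.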

When the NMDA channel does open, it conducts not only sodium (Na+) and potassium (K+), which depolarize the cell, but also calcium (Ca++). If the NMDA receptors are induced to open repeatedly in a short period of time, the levels of Ca++ in the cell will become high enough to activate specific biochemical pathways. First, in the "early phase," these pathways lead to an increase in functional AMPA receptors (the other major kind of glutamate receptor, which activates the post-synaptic cell but neither conducts calcium nor acts as a coincidence detector) on the post-synaptic cell. This means that when a certain amount of glutamate is released into the synapse, it has a stronger effect, because there are more receptors for it to act upon. However, this potentiation is short-lived unless other changes take place.

Persistently high calcium levels will eventually lead to "late phase" LTP, which involves changes in the expression of certain genes (i.e. the rate at which certain proteins are produced). This results in enduring changes such as reshaping the architecture of the dendrite, changing the number of functional receptor proteins, and even building new synapses. A structural change has now ensued, allowing synaptic potentiation to last for days, weeks, months, or even longer.
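The calcium-dependent switch described in the last two paragraphs can be caricatured as a pair of thresholds: enough Ca++ for early-phase changes, much more for the late, gene-expression-dependent phase. This is a conceptual sketch only; the thresholds and the "one unit of calcium per opening" rule are invented for illustration.

```python
# Toy model of the calcium-level switch between no change, early-phase
# LTP (more AMPA receptors), and late-phase LTP (gene expression,
# structural change). All numbers are arbitrary placeholders.

EARLY_THRESHOLD = 5.0   # hypothetical Ca++ level triggering early-phase LTP
LATE_THRESHOLD = 15.0   # hypothetical level triggering late-phase LTP

def ltp_phase(nmda_openings, ca_per_opening=1.0):
    """Classify the outcome from how often NMDA channels opened."""
    calcium = nmda_openings * ca_per_opening  # crude: each opening adds Ca++
    if calcium >= LATE_THRESHOLD:
        return "late-phase LTP (gene expression, structural change)"
    if calcium >= EARLY_THRESHOLD:
        return "early-phase LTP (more functional AMPA receptors)"
    return "no potentiation"

print(ltp_phase(2))    # sparse activity: no potentiation
print(ltp_phase(8))    # repeated openings: early-phase LTP
print(ltp_phase(20))   # persistent high calcium: late-phase LTP
```

The design point being illustrated: the same signal (calcium) produces qualitatively different, progressively longer-lasting outcomes depending on how much of it accumulates.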

Thus, the NMDA receptor allows certain synapses--those which are frequently activated--to become more effective. Theoretically, when these changes occur at multiple synapses in a neural circuit, the activity and connectivity of the circuit is modified, thus changing the "internal representation" which the circuit underlies, and generating a "memory." But are these truly the molecular mechanisms of memory, particularly forms of memory relevant to mammalian behavior?

This brings us back to the paper, which intervenes directly in these molecular pathways and then measures the effects on memory. The focus of the study was a protein called cyclin-dependent kinase 5 (Cdk5). (A "kinase" is a protein that attaches a phosphate group (PO4) to a molecule, a process called phosphorylation, which significantly alters the molecule's ability to interact with other molecules.)

After using sophisticated genetic tools to remove the gene for Cdk5 in adult mice, the experimenters subjected both normal and "mutant" (those lacking Cdk5) mice to a number of well-established memory tests. In the first set of tests, the mice learn that an aversive stimulus (usually an electrical shock to the feet) is associated with a particular context (e.g. a room) or cue (e.g. a light or tone); these tests are called contextual and cued fear-conditioning, respectively. Once the association is learned, the neutral stimulus alone (the room or the light) is sufficient to elicit a state of fear (usually determined by observing whether the mouse becomes immobile or "freezes").

In another test, the "Morris water maze," a mouse is placed in a circular pool of opaque water (about 4-5 ft in diameter, typically clouded with milk powder or white paint) that contains a platform hidden about 1 cm below the surface. As rodents are highly averse to swimming, they desperately swim around in search of an exit until they find the platform and can "escape." A series of static visual cues are placed around the edge of the pool, which the rodent uses to determine and, after repeated trials, learn the spatial location of the platform. During the course of training, rodents should require progressively less time to find the platform; once learned, the spatial memory should endure after the training has been completed. This ability to remember the location of the platform depends on the hippocampus; if the hippocampus is damaged, the animals never learn the task.

These tests always seem much more brutal when I explain them like this, although at least they're not as cruel as testing the LD50 of LSD for elephants.

Anyway, the experimenters found that mice lacking Cdk5 performed significantly better in both sets of tests, indicating improved hippocampal learning abilities. The group then explored the mechanisms underlying these behavioral changes, and found that LTP was enhanced in the absence of Cdk5. Moreover, the mutant mice had higher numbers of a subset of NMDA receptors--those containing a subunit called NR2B (NMDA Receptor 2B).

After a bit more probing, the group found evidence that Cdk5, by phosphorylating NR2B-containing NMDA receptors, leads to the degradation of those receptors. Consequently, in the absence of Cdk5, levels of this specific class of NMDA receptors were increased, significantly affecting learning behavior, possibly through effects on synaptic plasticity.

And thus, by intervening in a molecular pathway and tracking the effects with well-established memory tasks, this group linked a molecule to memory. Together with the anatomical circuits in which the neurons are embedded, these molecular pathways help explain the behavior.

One of the main reasons I wanted to devote this post to reductionism is that I realized that in most of my posts on cognitive neuroscience, I more or less treat the brain like a "black box," which I'm worried may mislead people into thinking that the field of neuroscience doesn't know much about how the brain works. While there is an unimaginable amount of information that is yet to be revealed and understood, there is an amazing amount which we do know--so much that the wealth of available knowledge tends to intimidate me from attempting to explain it in a blog (which is why this post is so monstrously long; congratulations to those who have made it this far). As Eric Kandel, James Schwartz, and Thomas Jessell announce in their introduction to Principles of Neural Science,
"Neural science is attempting to link molecules to mind--how proteins responsible for the activities of individual nerve cells are related to the complexities of neural processes. Today it is possible to link the molecular dynamics of individual nerve cells to representations of perceptual and motor acts in the brain and to relate these internal mechanisms to observable behavior."
Again, I do not believe that molecular mechanisms can alone explain cognition. I would certainly never be satisfied by saying that memory arises from the activity of NMDA receptors and calcium signaling, but these molecular processes are essential for understanding the larger phenomenon, and I thought it was time I showed them their due respect.

Hawasli AH, et al. (2007) Cyclin-dependent kinase 5 governs learning and synaptic plasticity via control of NMDAR degradation. Nature Neuroscience (published online 27 May 2007).