Friday, May 25, 2007

If I had to choose one neurological disorder to be afflicted by...

In 1954, RCA released the first color television set, transforming shades of gray into rich, dazzling hues at dinner tables around the country. Imagine experiencing that tantalizing technicolor transformation when you hear a piece of music, or look at everyday objects like numbers, letters, and days of the week. There is a rare condition in which people do experience the world in this extraordinary way; one such person was Nobel Prize-winning physicist Richard Feynman:

"When I see equations, I see the letters in colors – I don't know why. As I'm talking, I see vague pictures of Bessel functions from Jahnke and Emde's book, with light-tan j's, slightly violet-bluish n's, and dark brown x's flying around. And I wonder what the hell it must look like to the students."


This condition, in which certain sensory stimuli trigger unusual additional sensory experiences, is called "synesthesia." Grapheme-color synesthetes, like Feynman (as well as Vladimir Nabokov), look at printed black letters or numbers and see them in color, each a different hue. For example, 2 might appear dark green, 5 might be red, and 7 may be tinted orange, even though the synesthete is well aware that the numbers are black. Others see or "experience" colors when they hear certain musical tones ("sound-color synesthetes," most famously Duke Ellington and Wassily Kandinsky); in others, individual words of spoken language evoke vivid taste sensations in the mouth.

This fascinating mingling of the senses was first brought to the attention of the scientific community in 1880, when Francis Galton (cousin of Charles Darwin) described the phenomenon in Nature. He reported individuals with grapheme-color and sound-color synesthesia, and proposed that the condition was heritable (a hypothesis recently supported by work from Simon Baron-Cohen, cousin of Sacha Baron-Cohen).

Galton's work was followed by a brief period of scientific interest in synesthesia, but because the condition could not be observed by anyone but the beholder, it was soon brushed aside as a curious anomaly, presumably the product of insanity, drugs, and/or an overactive imagination. In addition to its questionable neural basis, it was doubtful whether the condition had any significant implications beyond the phenomenon itself, thus offering little to tempt the scientific community. In the 1990s, however, internal states like consciousness became respectable areas of investigation, and attention returned to synesthesia.


In order to demonstrate that synesthesia was a real phenomenon, researchers designed clever cognitive tests that would reveal synesthetes' unusual abilities. V.S. Ramachandran, for example, showed that synesthetes could perceptually group graphemes according to their synesthetic colors. He designed a task in which a triangle composed of 2's was embedded in a background of 5's. As you can see in the top box, the numbers are similar enough (they are mirror images composed of the same vertical and horizontal lines) that they blend together. In order to discern the shape, one must actively search for the 2's in the sea of 5's. But imagine that the 2's are green and the 5's are red (as in the bottom box). It is now effortless (for most of us) to segregate the two numbers by color, and normal humans instantly perceive the shape. Similarly, a grapheme-color synesthete who perceives 2's and 5's as distinct colors can look at the top array of black digits and effortlessly discern the triangle of 2's.
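
For readers who want to play with the idea, here's a minimal sketch (my own toy, not Ramachandran's actual stimulus code) that prints a field of 5's with a triangle of 2's hidden inside; in the real displays the digits are scattered rather than packed into a dense grid:

```python
# A toy version of the embedded-figure stimulus: a field of 5's
# concealing a triangle of 2's. All dimensions are arbitrary choices.
WIDTH, HEIGHT = 21, 11

def in_triangle(row, col):
    """True if (row, col) falls inside a crude upright triangle."""
    apex_row, apex_col, height = 2, WIDTH // 2, 7
    if not (apex_row <= row < apex_row + height):
        return False
    return abs(col - apex_col) <= (row - apex_row)  # widens one cell per row

for r in range(HEIGHT):
    print("".join("2" if in_triangle(r, c) else "5" for c in range(WIDTH)))
```

Squint at the output and the triangle dissolves into the noise; print the 2's in green (say, with ANSI escape codes) and it pops out instantly, which is exactly the advantage the synesthete gets for free.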

Another intriguing clinical test for synesthesia, also designed by Ramachandran, takes advantage of the visual phenomenon known as the "crowding effect." If a person is staring straight ahead, and a number (e.g. 5) is presented off to one side, it is easy to discern. However, if the 5 is flanked by other numbers ("distractors," e.g. 3), the average person finds it difficult to recognize the middle number (an effect thought to result from limits in visual attention). A synesthete will likewise fail to consciously discern the middle number, but will still be able to identify it "because it looks red [or whichever color he or she associates with 5]"! Thus, even though the individual is not consciously aware of the number, it still evokes its respective color.

These studies, along with earlier research by Baron-Cohen, have established that synesthesia is clearly a very real sensory/perceptual phenomenon. Understanding the neural basis for this curious interweaving of the senses thus has enticing potential for linking the organization of the brain to perception and sensory experience.

So what distinguishes the brain of a synesthete from my brain, which perceives black numbers and letters as their dreary black selves? What happens to the visual information in the synesthetic brain such that it is transformed in an extraordinary way? In order to begin developing theories of how grapheme-color synesthesia might work, it's important to have an understanding of how the brain processes visual information. (There are, as I mentioned, many types of synesthesia: sound-color, sound-taste, grapheme-taste, texture-taste, etc. Grapheme-color synesthetes, however, are the most common subset (representing 68% of all synesthetes), and are the easiest to study, hence this discussion, like most research, is limited to the latter condition.)

After light reflected from an object hits the cones in the back of the retina, the neural signal travels along several layers of neurons to the retinal ganglion cells. These cells send their axons out the back of the eye, through the optic nerve, to a small part of the thalamus called the lateral geniculate nucleus (LGN), which relays the stimulus directly to the primary visual cortex ("V1", aka "area 17" aka "striate cortex") in the back of the brain. If you locate that bump on the back of your head, V1 is right on the other side of the skull. (There is another major visual pathway--the retinotectal pathway--which bypasses the LGN and V1, but this pathway does not transmit information about color so I'll ignore it in this post). In V1, the visual information is partitioned into visual attributes such as color, form, and motion. Information from these categories is then communicated to the respective processing regions; for color, this is region V4, which is located in the fusiform gyrus of the inferior temporal cortex. After V4, the information is relayed to cognitively "higher" processing centers, including a region of the temporoparietal-occipital (TPO) junction, a structure on which multiple sensory pathways converge.

What about numbers and visual graphemes? Lo! Studies in humans and monkeys have shown that the shapes of numbers and letters are also processed in the fusiform gyrus, in a region adjacent to V4. Moreover, numerical concepts, such as sequence and quantity, are processed in the TPO.

The similarity and proximity of the color and grapheme processing routes has led to the hypothesis that there is some abnormal form of communication occurring between the two in the synesthetic brain. As a result, any time there is an activation of neurons representing numbers, there may be a corresponding activation of color neurons.

This insight has given rise to two neural models for synesthesia. According to one idea, synesthesia results from abnormal connections between the relevant brain areas. During development, the human fetus has dense interconnections between V4 and other inferior temporal regions, most of which are removed through a process of pruning later in development. Synesthesia may result from a partial failure of this normal pruning process, resulting in excess connections between normally isolated sensory areas. Perceptually, this would lead to a blurring of the boundaries that normally exist between the senses.

The other neural model is called the "disinhibited feedback" theory, which posits that the connections in the brain of a synesthete are no different from those in the normal human adult. Remember that the TPO is a multisensory integration center, receiving information from multiple sensory pathways. This nexus also sends reciprocal feedback connections back to the contributing sensory areas (e.g. the TPO responds to input from V4 by sending output back to V4). This feedback is inhibited when its respective area is not activated (TPO will not send feedback to V4 if V4 has not provided it with input). In synesthetes, this theory proposes, one type of sensory information (e.g. a grapheme) may induce abnormal disinhibition of the feedback pathway of a different sensory modality (e.g. color), thus propagating its information down the "wrong" pathway to a functionally distinct area. On this view, viewing a particular number may disinhibit the pathway that activates neurons representing a particular color in V4. This hypothesis is supported by accounts of synesthesia being induced by hallucinogenic drugs, implying that the experiences rely on normally existing circuitry, as opposed to the formation of new connections.
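
To make the contrast between the two models concrete, here is a deliberately crude toy calculation (my own simplification, not a published model; all weights are arbitrary). In the cross-activation version, color activity in V4 depends on leftover anatomical wiring from the grapheme area; in the disinhibited-feedback version, the wiring is normal but the gate on TPO-to-V4 feedback leaks:

```python
# Toy sketch of the two hypotheses. Units are arbitrary activation levels.

def cross_activation(grapheme_input, extra_wiring=0.0):
    """Cross-activation: unpruned axons link the grapheme area directly
    to V4; extra_wiring > 0 models the surviving fetal connections."""
    return extra_wiring * grapheme_input          # direct anatomical route

def disinhibited_feedback(grapheme_input, inhibition=1.0):
    """Disinhibited feedback: normal wiring, but the TPO->V4 feedback gate
    leaks. inhibition = 1.0 fully gates the feedback (non-synesthete)."""
    tpo = grapheme_input                          # TPO integrates the grapheme
    return (1.0 - inhibition) * tpo               # leaked reciprocal feedback

for label, v4_color in [
    ("non-synesthete (either model)", cross_activation(1.0, extra_wiring=0.0)),
    ("synesthete, cross-activation", cross_activation(1.0, extra_wiring=0.8)),
    ("synesthete, disinhibited feedback", disinhibited_feedback(1.0, inhibition=0.2)),
]:
    print(f"{label}: V4 color activation = {v4_color:.2f}")
```

Both routes yield grapheme-driven color activity in V4; they differ in whether the culprit is extra anatomy or faulty gating, which is precisely the question structural imaging can help arbitrate.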

Although the cross-activation theory seems to have a bit more support in the scientific community, there was, until now, little evidence demonstrating that one theory was more accurate than the other. This week, however, Romke Rouw and H. Steven Scholte of the University of Amsterdam made an important contribution to the field by examining the structural connectivity of the synesthetic brain. Their results, published online in Nature Neuroscience, demonstrated that the degree of structural connectivity was correlated with the existence and nature of the synesthetic experience.

An important methodological strength of this study is its acknowledgment of the heterogeneity of synesthetes (even though the study was confined to grapheme-color synesthetes). Synesthetes with the most dramatic experiences of synesthesia actually see colors projected onto letters or numbers, and are referred to as "projectors." The majority of synesthetes, however, do not experience their colors in external space; instead, they use phrases like "in my mind's eye" or "in my head." The colors are just as specific and repeatable as those perceived by projectors, but are sensed internally. Such synesthetes are called "associators."

In line with the "cross-activation" theory explained above, the researchers explored whether there were, in fact, more connections in synesthetes than non-synesthetes, focusing their analysis on the fusiform gyrus. To examine the neural connectivity, the researchers used a technique called diffusion tensor imaging (DTI), which measures the direction of movement of water molecules. In most brain tissue, water molecules diffuse chaotically, at random. Along the myelin sheaths of axons, however, water movement is restricted, thus following the path of the axon. This technique allows the visualization of bundles of axons; more (or more densely bundled) connections will yield a higher signal. Thus, the strength of the DTI signal is related to the strength of the connection.
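
The standard scalar extracted from the diffusion tensor in such studies is fractional anisotropy (FA): 0 when water diffuses equally in all directions, approaching 1 when it diffuses along a single axis, as in a coherent fiber bundle. A minimal sketch of the textbook formula (the eigenvalues below are made up for illustration):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues of the diffusion tensor:
    0 = fully isotropic diffusion, 1 = diffusion along a single axis."""
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den if den else 0.0

# Illustrative eigenvalues, in units of 1e-3 mm^2/s:
print(fractional_anisotropy(1.0, 1.0, 1.0))  # isotropic tissue: FA = 0.0
print(fractional_anisotropy(1.7, 0.3, 0.2))  # coherent tract: FA ~ 0.84
```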

Their results confirmed that the brains of synesthetes show greater connectivity than those of normal individuals, most notably in the inferior temporal cortex (near the fusiform gyrus). Moreover, subjects with the strongest connectivity at the fusiform gyrus were projectors, while associators had connections stronger than controls but weaker than projectors. This hyperconnectivity is thus tightly correlated with the synesthetic experience, offering a neural basis for this sensory fusion.

The study also found significantly higher levels of connectivity in the frontal and parietal cortices of synesthetes (regions involved in controlling spatial attention), with no difference between associators and projectors. These areas had not been previously linked to synesthesia, but the authors suggest that they may be involved in perceptual transitions, such as the fluctuations that occur when viewing bistable figures like Rubin's face-vase figure (on the right) or during binocular rivalry. This may be related to the synesthetic experience; many synesthetes see the color only transiently or as a flickering perception.

This study is the first to demonstrate that increased connectivity in specific areas of the brain is related to synesthesia. It is certainly possible that this structural phenomenon is supplemented by abnormal disinhibited feedback, or that it accounts for only a subset of synesthetic cases, and more studies are needed to test these theories. Moreover, it will be interesting to see whether similar structural abnormalities are present in cross-modal synesthesia, such as sound-color or sound-taste, whose sensory centers are more isolated than the adjacent color and grapheme perceptual centers.

References:
Feynman, Richard (1988). What Do You Care What Other People Think? New York: Norton. p. 59.
Rouw R, Scholte HS (2007). Increased structural connectivity in grapheme-color synesthesia. Nature Neuroscience [published online May 21, 2007].

Wednesday, May 23, 2007

Sensible irrationality?

I recently posted on neuroeconomics, which attempts to understand the neural basis for human decision-making. One of the primary motivations of the field is understanding the irrationality of our decisions; i.e. why they are often contrary to the logical choices that would maximize one's personal outcome. Mind Hacks has a post on a recent Scientific American article that discusses "how [some of] our decisions are often irrational in game theory terms, but can still be more beneficial than the supposed rational choice." The article is free, and offers an interesting perspective on "rationality" versus "common sense."

Jesus Shark!

From the San Francisco Chronicle:

A female hammerhead shark was mysteriously born at Omaha's Henry Doorly Zoo in December 2001, in a tank that held three adult, female hammerheads but no males.

Although the three females had been caught before they reached sexual maturity and held in captivity for more than three years, researchers initially thought one of them had stored sperm from a male shark before fertilizing an egg. But the team -- which included scientists at Nova Southeastern University in Florida, Queen's University Belfast and the Omaha zoo -- determined that the baby shark's genetic makeup perfectly matched one of the females in the tank, with no sign of a male parent.

Nova Southeastern's Guy Harvey Research Institute director Mahmood Shivji -- one of the paper's authors -- said he and his colleagues determined that a byproduct formed when sharks produce eggs, known as a sister polar body, had fused with an unfertilized egg to produce the baby shark, whose DNA had only half as much genetic variability as the mother.

"Yes, indeed this is a virgin birth," Shivji said in an interview.

...

Researchers have observed parthenogenesis [the ability to give birth without having sex] in certain species of birds, reptiles, amphibians and bony fishes, but the new finding suggests that vertebrates' ability to reproduce without sex evolved much earlier than scientists had thought.

Clever girl! Why did it take 6 years to confirm this event of parthenogenesis, even though 79% of Americans insist that certain human beings can do it?

Monday, May 21, 2007

Adult Entertainment

The biology of brain plasticity, including adult neurogenesis, synaptic plasticity, axon regrowth, and synaptic reorganization, is currently one of the most intensively studied areas of neuroscience. One burgeoning avenue of research explores how altered plasticity may account for some of the behavioral and neural changes afflicting the aging brain, and is leading to efforts to foster plasticity and thus "rejuvenate" the brain.

Perhaps the most popular products to emerge from this research are "brain games," activities designed to enhance cognitive function. These games have encountered a healthy bit of well-deserved skepticism from the scientific community; few of them have been validated by techniques even remotely "scientific" (by including, for example, controls), yet many make grand claims of improving some general notion of "intelligence" and well-being. There are, however, a few exceptions: "brain fitness" products emerging from within the scientific community, such as Posit Science, intended for people in their 60s and 70s, and Lumosity, which targets a younger population (i.e. baby boomers). Since many functions, such as processing speed, working memory, and attention, begin declining around the age of 30, it seems reasonable to start on the early side.

Lumosity is a new program, so the games are still in "beta" phase and thus free, and they are easily the most entertaining of any I've played (evoking behavior reminiscent of my childhood Tetris addiction). More importantly, the games are inspired by research on human cognition; the company's head of neuroscience research studied with Jon Cohen at Princeton and with John Gabrieli when the latter was at Stanford, among others, and there are a number of cognitive neuroscientists on the board of advisors. I recently met one of the founders at Stanford, and after discussing my research on adult hippocampal neurogenesis, I ended up joining these cognitive neuroscientists as a fellow scientific advisor. Anyway, the group at Lumos Labs performed a randomized, controlled study, which I can personally endorse, showing that Lumosity users improved on tasks of working memory and visual attention (there's an SFN poster and white paper available for your scrutiny as well).

Further scientific validation of the program's ability to improve various cognitive functions is certainly needed, and is in progress. Most importantly, of course, will be evidence that this sort of cognitive training can have long-term effects that translate to "real-world" functional improvements. In the meantime, the games are fun (with enticingly impressive high score lists) and certainly can't hurt.

Encephalon #23

Aloha and welcome to the 23rd installment of Encephalon, a blog carnival devoted to the study of the nervous system and the understanding of thought, emotion, and behavior. I received a lovely diversity of posts, and I've made a mildly successful attempt at categorizing them to help guide you in your explorations. Enjoy!

Approaches to understanding the mind
Chris at Developing Intelligence has an intriguing argument against rigorous reductionism, which dictates that the mysteries of human thought can be explained by understanding the most reduced biological mechanisms. Exploring the unique human capacity for symbol use, he stresses the importance of an integrative reconstructionist approach that uses computational analyses to understand neural networks.

The Neurocritic discusses Joaquin Fuster’s model of cognitive organization, which involves extensive, overlapping networks as opposed to functionally independent “modules.” Paul at Memoirs of a Postgrad also discusses Fuster’s work, examining the hierarchy of the neural loops that coordinate behaviors with stimuli (the “perception-action cycle.”)

Bora of A Blog Around the Clock does a great analysis of experiments that examine Drosophila chronobiology. I've always been somewhat disturbed by the ruthlessly standardized and reduced paradigms used to study fruit fly behavior; Bora reports on novel experimental conditions that attempt a more natural setting.

Johan at the Phineas Gage Fan Club writes about a powerful new way to study sleep, one of the great mysteries of evolutionary biology and neuroscience. To explore its purpose, the researchers develop a method to induce deep sleep in humans (somewhat) at will.

Memoirs of a Postgrad also links the understanding of the mind with the creation of artificial intelligence. He has a fantastic discussion of the concept of embodiment, and looks at its theoretical and methodological implications for AI.


Evolution
The Thinking Meat Project reports on a symposium focused on the evolution of the human brain, touching on paleoneurology, human paleontology, archaeology, primatology, and cognitive science.

And what discussion of evolution would be complete without bringing up the mammalian eye? One of the most puzzling features of the retina is that it is “inverted,” meaning light beams must pass through a forest of neural fibers before reaching the photoreceptors, presumably leading to unavoidable light scatter and degradation. The Neurophilosopher discusses a fascinating, unexpected solution to this enigma.


Cognitive abilities

Neurontic reviews Carved in Sand, a book that gives a first-person account of the progressive memory loss faced by older adults. The book's author presents a thorough analysis of a variety of interventions, but maybe she just needs to run harder: Neurozone discusses the positive effects of high-impact running on cognitive function.

Addressing the changes that occur on the opposite end of life, Dave at Cognitive Daily looks at how the ability to track multiple objects improves with age, discussing the possible influences of learning and attention.

Of course, losing one's mind needn't wait until the end of life. Vaughan of Mind Hacks writes about recent research exploring the relationship between cannabis and psychosis, including reports from the 2nd International Cannabis and Mental Health Conference in London. The Neurophilosopher has a couple rather chilling posts, drawing attention to the neglected neurological health of our troops. In one post, he describes the consequences of reckless destruction of nerve gas-containing weapons. In the second, he looks at traumatic brain injury, including that resulting from shockwaves.

The Neurocritic has two posts reporting on “The Cognitive Neuroscience of Prospective Thought,” a symposium of the 2007 meeting of the Cognitive Neuroscience Society in New York. He discusses neural mechanisms involved in envisioning the future, and reviews Randy Buckner’s intriguing hypothesis that default brain activity contributes to the development of Alzheimer’s disease.


Language and behavior
Neurozone discusses embodied meaning in language, exploring how mirror neurons may mediate an interaction between language and motor systems.

PsyBlog has two posts on non-verbal communication, speculating on the origins, meaning, and purposes of gesture and the influence of culture on facial expressions. Vaughan from Mind Hacks takes an interesting look at one of the most popular nonverbal behaviors, writing on the possible neuroendocrinological link between sex and trust.


That's it for now! Thanks to everyone who contributed. The twenty-fourth edition of Encephalon will be hosted by Johan of The Phineas Gage Fan Club on June 4. Happy Monday!

Friday, May 18, 2007

Old, wise, and happy

As we grow older, we experience a number of cognitive changes, such as poorer working memory, declining ability to encode new memories, and slower processing speeds. By contrast, a number of critical abilities (short-term memory, autobiographical memory, semantic knowledge) remain stable. One of the major avenues of research for the cognitive neuroscience of aging explores how these behavioral changes correlate with changes in neural structure and function. Such studies, which rely heavily on neuroimaging techniques, have revealed that older adults have lower volumes of grey matter than do younger adults, primarily as a result of decreased synaptic density (i.e. the number of connections ("synapses") between neurons). Particularly affected are the prefrontal cortex (PFC), highly involved in processing speed, attention, and working memory, and medial temporal structures such as the hippocampus, which is involved in encoding information into episodic memories.

One ability that is not believed to decline with age is emotional processing. In fact, recent behavioral studies suggest that healthy older adults may actually perform better on tasks involving the processing of emotional stimuli than younger adults, and tend to have an enhanced experience of positive emotions and/or a reduced experience of negative emotions. To date, however, little is known of the structural and functional integrity of the neural regions associated with emotional processing, such as the anterior cingulate cortex (ACC, located in the middle of the brain right behind the PFC, and thought to be important for one's conscious subjective emotional awareness), the insula (important for "bodily" experiences of emotion, e.g. heart rate, breathing), and the ventral striatum (involved in motivation and goal-directed positive emotion). How is the activation of these regions during emotional processing affected by age?

To understand the biology of such age-associated changes, Brian Knutson and Laura Carstensen at Stanford used fMRI to examine brain activity during emotional "incentive processing" tasks (i.e. anticipation of a loss or gain) in younger (19-27) and older (65-81) adults, and recently published their results online in Nature Neuroscience.

Participants viewed one of six cues, each displaying the amount of money that could be gained or lost on that trial (+$0, +$0.50, or +$5.00 for gains; the same amounts for losses). They were then presented with a target, and if they responded quickly enough they either gained or avoided losing the specified amount. Both age groups performed equivalently, earning similar amounts.
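
In pseudocode, a single trial looks roughly like this (a sketch reconstructed from the description above; the cue amounts follow the text, but the response window is my own placeholder, since tasks like this typically calibrate the deadline to each subject's speed):

```python
import random

# Schematic sketch of one incentive-processing trial.
CUES = [("gain", 0.0), ("gain", 0.5), ("gain", 5.0),
        ("loss", 0.0), ("loss", 0.5), ("loss", 5.0)]
RESPONSE_WINDOW = 0.25  # seconds (assumed placeholder)

def run_trial(reaction_time):
    kind, amount = random.choice(CUES)     # cue shown before the target
    hit = reaction_time <= RESPONSE_WINDOW
    if kind == "gain":
        earned = amount if hit else 0.0    # fast response wins the amount
    else:
        earned = 0.0 if hit else -amount   # fast response avoids the loss
    return kind, amount, hit, earned

random.seed(1)
total = sum(run_trial(random.uniform(0.15, 0.35))[3] for _ in range(20))
print(f"Total earned over 20 trials: ${total:+.2f}")
```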

According to data reported by the participants, both younger and older adults felt similarly in anticipation of gaining money, but younger adults responded more strongly to the anticipation of monetary loss. In other words, the older adults experienced less negative emotion in response to the same cues. This difference has been observed before, but it may have been due to a bias in self-reports; thus, the researchers used fMRI to look for neural correlates of these behavioral differences.

They found that during reward anticipation (after participants saw the cue, but before they responded to the target), both younger and older adults showed equal levels of activation of the ventral striatum, anterior insula, and medial caudate (part of the dorsal striatum). During loss anticipation, however, younger adults had greater activation of the medial caudate and anterior insula than their elders. Thus, both affective and neural data indicate that older adults experience diminished negative emotions in response to loss anticipation, but retain their abilities to respond positively to reward anticipation. The asymmetry with which age affects the functioning of these regions is intriguing; clearly, they are still capable of normal levels of activation, but are somehow dampened during certain negative emotions.

The authors comment that the "age-related sparing of positive emotional experience may be related to efforts to optimize emotional experiences as one approaches the end of life. One aspect of this optimization may involve reducing negative arousal during anticipation of negative events." This sounds quite rosy to me, but the authors also mention that reduced emotional reactions to anticipated loss may have negative effects. Older people may have, for example, altered abilities of risk assessment, which may lead to sub-optimal decision making. Nevertheless, the overall enhancement of well-being may be worth the risk.

Reference: Samanez-Larkin GR, Gibbs SEB, Khanna K, Nielsen L, Carstensen LL, Knutson B (2007). Anticipation of monetary gain but not loss in healthy older adults. Nature Neuroscience [Epub ahead of print, Apr 29].

Thursday, May 17, 2007

The gay science

Over the centuries since Adam Smith, economists have developed mathematical frameworks for maximizing economic success. However, despite the intellectual power of these theories and the often simple logic involved in their calculations, humans continue to amass credit card debt, default on loans, fail to save for retirement, and on the whole refuse to do what these rational, reward-maximizing equations tell them to do.

The irrationality of human decision-making attracts the fierce interest of two very different fields: neuroscience and economics. Economic theories of human decision-making are essentially based on two parameters: what something is worth and the probability of its occurrence. Neuroscientists, on the other hand, think of decision-making as a product of physical neural circuits: sensory information enters the brain, journeys through the brain where a decision is "made," and eventually exits the brain to evoke bodily responses. Economics ignores these biological, more proximal roots of behavior, whereas neuroscience ignores the economic goals that ultimately guide our decisions.

These two approaches have recently been integrated in the hybrid field of neuroeconomics. Neuroeconomics attempts to unify abstract economic variables with neuroanatomy, and thus understand the physical mechanisms by which our brains make decisions. The basic premise is that somewhere along the sensory-motor circuit are the neural substrates that represent "value" and "probability." These areas must interact and influence the flow of information along the circuit, thereby prompting a certain decision and its subsequent behavior. The most pressing questions, then, are how and where these abstract variables are combined in the brain, and what neural computations engender a "decision."

Neuroeconomics is not inherently a means to exploit the free market by, for example, scanning the brains of consumers to calculate the maximum price they will willingly pay for a good. Although such endeavors are opportune beneficiaries of this sort of research, I believe neuroeconomics has grander, more noble intentions. As a neuroscientist, I view neuroeconomics with bright, hopeful eyes, eager for the insight that economics can lend the neurobiological study of human behavior. Although the former "dismal science" is abstract and far removed from biological mechanisms, it offers one thing behavioral studies tend to lack: great mathematical beauty.

Because economists base their models on optimal behavior, they can develop a precise, unified framework for interpreting human behavior; the thesis is, essentially, that humans choose the alternatives that maximize rewards. Neuroeconomics draws upon the precision and rigor of these formal models to go beyond the sensory-motor circuit, allowing opportunities for understanding the neural basis of more abstract economic ideas, such as the values and probabilities of outcomes (a bit more challenging to study than sensory and motor systems). Thus, the principles of economics allow neuroscientists to explore the physical mechanisms underlying high-level cognitive processes.

Particularly intriguing subjects for these studies are human choices that violate simple logic; choices that are neither selfish nor generous but blatantly, unbiasedly irrational. I've previously explored irrational behavior in my post on risk aversion; another interesting example is "time inconsistency." When people make decisions about the distant future, they tend to behave as rationally as economic equations dictate. In contrast, when faced with the same decision relating to the near future, they are reckless and impulsive, unwilling to delay gratification. For example, when people are offered the choice of $20 now or $22 in a month, they often choose to receive the smaller amount immediately. However, if given the choice between $20 in a year or $22 in a year and one month, they will choose the higher, delayed amount. This is irrational; in both situations, the time delay (1 month) and financial gain ($2) are equal, so the decision should be the same (the higher amount should always be chosen).
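
The reversal falls straight out of the mathematics of discounting. A "rational" exponential discounter devalues money by a constant factor per month, so adding a year to both options never flips the ranking; a hyperbolic discounter, who devalues by value = amount / (1 + k * delay), flips exactly as described. A quick sketch with an arbitrary discount parameter k:

```python
def hyperbolic(amount, delay_months, k=0.25):
    """Hyperbolic discounting: steep devaluation at short delays, shallow
    at long ones. k = 0.25/month is an arbitrary illustrative value."""
    return amount / (1.0 + k * delay_months)

def choose(option_a, option_b):
    """Pick the option with the higher discounted value."""
    return max(option_a, option_b, key=lambda opt: hyperbolic(*opt))

# (amount in dollars, delay in months)
print(choose((20, 0), (22, 1)))    # -> (20, 0): take the $20 now
print(choose((20, 12), (22, 13)))  # -> (22, 13): the same tradeoff flips
# An exponential discounter (value = amount * d**delay) would never flip:
# adding 12 months multiplies both values by d**12, preserving the ranking.
```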

Another example of irrational impulsivity is less quantitative than the above, but involves a more flagrant demonstration of vice versus virtue. If offered the choice of a chocolate bar now or an apple now, most people demand immediate gratification and will choose the chocolate. But if offered a chocolate bar in one week or an apple in one week, people will consider the long-term effects of each and prefer the apple.

Back in 2004, Jon Cohen, director of Princeton University's Center for the Study of Brain, Mind, and Behavior, teamed up with George Loewenstein of Carnegie Mellon to take a neuroeconomic approach to this perplexing behavior. Using fMRI, they searched for changes in brain activity as subjects made decisions between small immediate rewards and larger delayed rewards, attempting to link irrational displays of time inconsistency with brain activity. The results, published in Science, suggested that decisions involving the possibility of immediate reward activated the limbic system, which is associated with emotion, while both short- and long-term decisions activated the prefrontal cortex (PFC), associated with logical, abstract reasoning.

Interestingly, when students had the choice of an immediate reward but chose the larger, delayed option, the PFC was more strongly activated than the limbic system. In contrast, when they chose the immediate reward, the activity of the two regions was similar (with a trend toward more activity in the limbic system). These data suggest that both systems are involved in the neural representation of "value," and that the decision-making process is guided by, as the authors state rather poetically, "a competition between the impetuous limbic grasshopper and the provident prefrontal ant within each of us."

Thus, by exploring the neural processes by which the brain generates economic decisions, the authors were able to gain insight into the circuit-level computations that may govern complex behaviors. The extent to which the computations of economic theory can truly be generalized to the computations performed by the brain (as well as to more complex decision tasks) is unknown, but the aims and progress of this field are promising. From the economist's point of view, neuroeconomics may be far "messier" than economics, but the theoretical analysis of what humans should do isn't, to me, nearly as fascinating as understanding what they actually do, and neuroeconomics brings us far closer to reality.

Reference:
McClure SM, Laibson DI, Loewenstein G, Cohen JD (2004). Separate neural systems value immediate and delayed monetary rewards. Science 306(5695): 503-507.

Tuesday, May 15, 2007

Encephalon #23

Aloha!

The 23rd biweekly installment of Encephalon, a blog carnival devoted to neuroscience, will be hosted here on Monday, May 21, which also marks the birthdays of Alexander Pope, Henri Rousseau, Laurence Tureaud (Mr. T), Jeffrey Dahmer, and Christopher George Latore Wallace (The Notorious B.I.G.), and the wedding anniversary of Humphrey Bogart and Lauren Bacall. To join these legendary individuals in their various celebrations, please send permalinks for up to 3 blog posts to encephalon{dot}host{at}gmail{dot}com sometime before noon on the 20th.

Thanks!

How the Mind Works

From Mind Hacks:
The Technology, Entertainment, Design conference has strayed from its original focus and now hosts a wide-ranging set of talks, including a number on 'How the Mind Works', all of which are available online as streamed video.

...

Helen Fisher talks about the psychology and biology of love, Daniel Gilbert talks about happiness and why we are so bad at understanding it, and Ray Kurzweil talks about how we're shortly all to become super-evolved, drug-enhanced semi-robots.
The other speakers include Daniel Dennett, Tony Robbins, Peter Donnelly, and many others, so I think it's definitely worth some exploration. Full post here, watch the videos here.

Sunday, May 13, 2007

Young, restless neurons may bully their elders

Neurogenesis--the birth and integration of neurons--occurs in the adult brains of all mammals, including humans. I've posted on the phenomenon of adult neurogenesis here and here, and it happens to be the focus of my thesis in grad school. One of the major issues in the field is how the generation of new neurons translates into a functional change in the brain. Birth is the first of a number of daunting challenges: new cells must then survive, make connections, and integrate into the circuitry of the mature brain. It is this last undertaking--joining existing networks without disrupting the circuit--which has proved the most conceptually challenging.

Since Alan Turing and the dawn of computer science, the prevailing model of the brain has been based on a computer analogy. The networks of neurons composing the brain were likened to the hard-wired circuits of computer hardware, cemented in place at an early age. Plasticity, which refers to the brain's ability to learn and adapt to the environment, was attributed solely to changes occurring within the operating circuitry, between its existing components (i.e. synaptic plasticity). In line with the computer analogy, synaptic plasticity is thought to resemble the process of changing a computer's operations by adding software, which operates within the context of a hard-wired network and thus does not require structural reorganization of the circuitry.

Although synaptic plasticity is a crucial mechanism underlying the brain's remarkable adaptive capabilities, it is not the only mechanism. In the 60s and 70s, theories of structural plasticity, such as axonal elongation and synaptic reorganization, began to emerge and gradually creep into the analogy with computers. However, these activities involved neural pathways being reorganized between existing neurons; new neurons continued to be excluded from the conceptual framework.

About a decade ago, adult neurogenesis finally gained widespread acceptance in the scientific community, initiating a gradual shift in the concept of brain plasticity and adaptability. These new neurons, which integrate into existing neural networks, provided a previously unrecognized, and far more dramatic, form of structural and functional plasticity. The crucial question of how new neurons impact the adult neuronal circuitry remains a challenge, and is dictated by two parameters: their physiological properties and their synaptic connectivity.

Recent work has shown that new neurons in the adult hippocampus (a structure crucial for the formation of memories, and one of the two major locations of adult neurogenesis) have unique physiological properties: they are more excitable (i.e. more likely to fire action potentials subsequent to a given stimulation) and have an enhanced potential for synaptic plasticity. But what about their connectivity? Do the existing neurons send out new axons to accommodate new neurons, or do the new neurons incorporate themselves into the existing circuits?

New findings from Rusty Gage's lab at the Salk Institute for Biological Studies, published online last week in Nature Neuroscience, support the latter. The researchers tracked the fate of newborn neurons by injecting into the hippocampus a retrovirus engineered to carry the gene for green fluorescent protein (GFP). This technique is useful because retroviruses can only infect dividing cells (with the exception of lentiviruses like HIV, which can infect nondividing cells such as mature neurons). Thus, cells that were dividing at the time of the injection, including newly born neurons, are labeled green, and can be tracked as they make their initial connections with the existing hippocampal circuitry.

They used an impressive combination of high-resolution imaging techniques to examine the fine structural details of the emerging connections pioneered by these new neurons. Their analysis allowed them to digitally reconstruct synapses during formation, and showed that initially, new neurons send out small protrusions, called filopodia, to make contact with synapses that already exist between two "older" neurons. When the filopodia mature, they become functional dendritic "spines," which are small protrusions from dendrites that actually participate in synaptic communication. Thus, the new neurons join into a preexisting synapse, forming a "multi-synapse" connection, suggesting that new neurons are making connections with established networks.

Once they've elbowed their way into an existing communication, the new neurons take their intrusion up another notch. As the new neurons aged, they became less likely to be involved in "multi-synapse" connections, and concomitantly more likely to be involved in single-synapse connections. This suggests that the formerly multi-synapse connections were transforming into single-synapse connections; in other words, new neurons were taking over the connections of older neurons, effectively replacing those neurons in a particular part of the circuit. The Gage lab has previously shown that new neurons depend on neuronal input to survive; it's possible that new neurons have to compete with older neurons for those connections, and that their unique physiological properties give them a competitive advantage.
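
The proposed sequence is easy to caricature in code. This toy simulation (my own illustration, not the paper's model; all probabilities are invented) starts with synapses owned by old neurons, lets filopodia convert a fraction into shared multi-synapses, then resolves each shared synapse with the odds tilted toward the more excitable newcomers:

```python
import random

# Toy caricature of synapse takeover by adult-born neurons.
random.seed(0)
N_SYNAPSES = 1000
CONTACT_P = 0.3    # chance a filopodium reaches a given existing synapse
RESOLVE_P = 0.5    # per-week chance a shared synapse settles on one owner
NEW_WIN_P = 0.7    # newcomer's assumed edge from its higher excitability

# Stage 1: filopodia turn some single synapses into shared "multi-synapses".
owners = ["multi" if random.random() < CONTACT_P else "old"
          for _ in range(N_SYNAPSES)]

for week in range(1, 5):
    # Stage 2: shared synapses gradually resolve to a single owner.
    owners = [("new" if random.random() < NEW_WIN_P else "old")
              if o == "multi" and random.random() < RESOLVE_P else o
              for o in owners]
    tally = {k: owners.count(k) for k in ("old", "multi", "new")}
    print(f"week {week}: {tally}")
```

Run it and the multi-synapse count decays week by week while the "new" column grows, mirroring the observation that older new neurons sit increasingly at single-synapse contacts.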

This hypothesized replacement mechanism, which must still be tested by live-cell imaging, makes the incorporation of new neurons into functional networks easier to conceptualize. Instead of forming new circuits willy-nilly, or forcing old networks to bring them in, new neurons muscle their way into the established circuitry, occasionally replacing older, less vigorous neurons. One implication is that new neurons may be able to functionally replace dead or dying neurons in neurodegenerative diseases such as Parkinson's or Alzheimer's. Further, these new neurons, with their enhanced plasticity, may continually "reinvigorate" an old hippocampus, even in healthy adults, allowing learning and memory to continue at higher levels than would otherwise be possible.

Reference:
Toni N, Teng EM, Bushong EA, Aimone JB, Zhao C, Consiglio A, van Praag H, Martone ME, Ellisman MH, Gage FH (2007). Synapse formation on neurons born in the adult hippocampus. Nature Neuroscience [advance online publication, May 7].

Friday, May 4, 2007

The evolution of the central nervous system

The first animals to evolve on Earth did not have nervous systems. They spent their days and nights sitting on the sea floor, passively filtering water for food particles, a lifestyle that continues in their extant descendants: the sponges. Their lifestyle does not require movement or the ability to sense and respond to the environment, making a nervous system a useless ornamentation.

Clearly, movement and sensation can be beneficial, and the next group of animals to evolve performed these deeds, and quite beautifully. This lineage gave rise to the jellyfish and comb jellies of today, which control their movements using a network of neurons distributed diffusely throughout their bodies (called a "nerve net"). They do not have brains; they do not even have clusters of neurons or major nerve trunks. Thus, although they can sense touch and detect chemicals, they cannot decipher where a given stimulus is. For this reason, these animals respond to stimuli with a reflexive movement of the entire body, regardless of where on the body the stimulus was detected.

Then about 590 million years ago, give or take a large margin of error, the Earth witnessed the dawn of bilateral animals; that is, animals exhibiting one, and only one, plane of symmetry, endowing them with not only a top and a bottom, but a left and a right, and a front and a back (as opposed to jellyfish, which have only a top and a bottom). "Bilateria," as this group is called, includes most extant animal species, from leeches and butterflies to sea squirts and kittens.

One of the consequences of bilateralism is cephalization, which is the evolutionary trend toward concentrating sensory structures and nervous tissue at the anterior (front) end, which is whichever end tends to lead during movement. This cluster of nervous tissue (e.g., the brain, or "ganglia" in most other species) needs to connect to the rest of the body in order to control movement and collect additional sensory information, thus necessitating some sort of nerve trunk (e.g. the vertebrate spinal cord). The net result of this concentration of nervous tissue in discrete parts of the body is a centralized nervous system (CNS).

How and when did the centralization of the nervous system occur? We know that jellyfish and comb jellies did not have centralized nervous systems, but extant Bilateria do (I'll get to the exception, hemichordates, later on). To understand the complexity of reconstructing evolutionary history, it's important to understand a little about how the Bilateria are classified. Recent phylogenetic studies have grouped Bilateria into three main branches: the deuterostomes, which consist primarily of chordates (e.g. fish, amphibians, mammals, birds, jawless fish, etc.) and echinoderms (e.g. sea stars, sand dollars, sea cucumbers, etc.), and two branches of protostomes, Ecdysozoa (insects and roundworms (e.g. nematodes)) and Lophotrochozoa (molluscs and annelid worms (e.g. earthworms and leeches)). At some point in history, these three bilateral branches all had a common ancestor, and the name of this hypothetical beast is Urbilateria (the slide on the right is from a presentation I did on this paper).

Urbilateria represents a crucial position in our evolutionary history, marking not only the origin of Bilateria, but also the last common ancestor of the deuterostomes and protostomes. Thus when trying to understand the evolution of Bilateria-specific traits, such as a CNS, it is of critical importance to paint a clear and comprehensive picture of Urbilateria.

In order to reconstruct ancestral forms such as Urbilateria, whose fossil remains are unidentified or unknown, biologists draw inferences from comparative analysis of different extant lineages. For example, a characteristic present in fruit flies, nematodes, and mice was most likely present in their last common ancestor (Urbilateria). Conversely, a characteristic present in one lineage (e.g. mice) but not the others (fruit flies and nematodes) was most likely an evolutionary innovation of the former (mouse) lineage. One crucial point is that the characteristics must bear some sort of similarity between lineages, or a common evolutionary origin is unlikely. For example, our eyes are markedly different from the compound eyes of fruit flies, so our eyes must have evolved independently from those of the fruit fly, indicating that Urbilateria did not have eyes (although it may have had some sort of light sensor/photodetector).

What about the nervous system of Urbilateria? How was it organized (if at all), and how complex was it? Which characteristics of the nervous systems of today's animals are novelties of their personal evolutionary lines, and which have been retained these hundreds of millions of years? The extant species of Bilateria have CNSs, but there is one key difference: in deuterostomes, the central nerve cord is on the dorsal (back) side, while in both branches of protostomes, the nerve cord is on the ventral (belly) side. This difference has made the reconstruction of Urbilateria's nervous system quite difficult and controversial, but there are two main hypotheses.

The first is called the inversion hypothesis, and was proposed by the zoologist Anton Dohrn in 1875. He theorized that Urbilateria was a worm-like creature with a ventral nerve cord, and that the protostomes retained this orientation, whereas an ancestor of the deuterostomes turned itself upside down and gave rise to animals with dorsal nerve cords. The other hypothesis is known as the gastroneuralia-notoneuralia hypothesis, and predicts that Urbilateria had a diffuse nervous system. After branching off from one another, the deuterostomes and protostomes independently evolved CNS's on opposite sides of the body.

For over a century, these hypotheses have been reasserted and rejected time and time again, as people find striking similarities (supporting a common origin) and crucial differences (supporting an independent origin) between the development of the CNS in vertebrates and fruit flies. So what is the significance of these similarities and differences? What does it all mean? Here we get to the main rub of evolutionary biology: in essence, it is a question of probability, and is exasperatingly subjective. The overall probability of an independent versus common origin has to be estimated, and that estimation is based on putative homologies between different branches. Thus, one must determine the degree of similarity at which two structures may be deemed the result of evolutionary conservation.

Not unexpectedly, when one is tracing hundreds of millions of years of evolutionary history, there are many question marks. One important concept to grasp is that those hundreds of millions of years have not treated all lineages equally; some animals have evolved more than others. Across Bilateria, the rates of gene loss and gene alteration are remarkably asymmetrical. By comparing the genomes of Bilateria with the genomes of animals that diverged before Bilateria (e.g. jellyfish), one can determine how much a bilaterian animal has evolved from the last common ancestor. For example, a gene present in humans and in jellyfish, but not in fruit flies, must have been present in Urbilateria but lost in the lineage that gave rise to fruit flies. By doing this sort of analysis between man, fruit flies, and nematodes, geneticists discovered that the genome of man had changed the least of the three. Both fruit flies and nematodes, then, underwent periods of rapid molecular evolution, with large gene losses, meaning that we have many genes that were lost along their lineages.
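
This outgroup logic is mechanical enough to write down. A minimal sketch with a made-up trait table (the trait names and states here are illustrative only):

```python
# Crude parsimony over a three-way comparison with an outgroup.
PRESENCE = {
    # trait:        (human, fruit_fly, jellyfish_outgroup)
    "gene_A":       (True,  False,     True),   # lost along the fly lineage
    "gene_B":       (True,  True,      False),  # shared by both branches
    "compound_eye": (False, True,      False),  # likely a fly-lineage novelty
}

def infer_urbilateria(human, fly, outgroup):
    """A trait shared by both bilaterian branches, or by one branch and the
    outgroup, is most simply explained as ancestral (present in Urbilateria)."""
    if human and fly:
        return "present in Urbilateria"
    if outgroup and (human or fly):
        return "present in Urbilateria, lost in the lineage lacking it"
    if human or fly:
        return "likely an innovation of the lineage that has it"
    return "absent from Urbilateria"

for trait, states in PRESENCE.items():
    print(f"{trait}: {infer_urbilateria(*states)}")
```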

For those familiar with research in developmental biology, something a bit disturbing is now evident. The model organisms used in this field are fruit flies, nematodes, mouse, zebrafish, African clawed frog, and sea urchins. The first two are members of Ecdysozoa, while the rest are deuterostomes, leaving the entire branch of Lophotrochozoa unexplored. Given the rapid rate of evolution of the Ecdysozoa, it's clear that piecing together evolution by merely comparing vertebrates to insects is severely limited. In fact, comparisons with pre-bilaterian animals have demonstrated that the Lophotrochozoans retain more ancestral features than both the Ecdysozoans and the deuterostomes, and have more gene conservation with vertebrates than any Ecdysozoan.

In the most recent issue of Cell, Alexandru Denes of Detlev Arendt's lab explored the CNS of a member of this hitherto ignored branch, Platynereis dumerilii, an adorable marine ragworm which has deviated little from the Urbilaterians, prompting Dr. Arendt to call it a "living Urbilateria." Platynereis lives in the same environment that Urbilateria would have (shallow marine waters), has a "prototypical" invertebrate nerve cord (arranged like a rope ladder instead of a hollow cord like ours), and undergoes an "ancient" type of cell division in its earliest stages as a blastula (spiral cleavage).

The development of all animals with any sort of tissue specification (i.e. anything but sponges) is controlled by genes that specify what a given cell in the embryo will become (e.g. skin, muscle, nerve, etc.). The developing nervous system then undergoes further specification, in which "neural specification genes" divide the primordial CNS into different regions; the progeny of one region will control muscle movement, the progeny of another will convey sensory information, and so on. The group mapped these genes using a method called in situ hybridization, and found that between vertebrates and Platynereis, the domains of eight different genes have largely identical spatial relations to each other (the picture on the left is the authors' cartoon version of the expression patterns; the bottom is the middle of the CNS, and moving up goes outward toward the side of the CNS). Moreover, similar neuron types emerge from the corresponding domains, and send axons to similar destinations! The CNS of Platynereis is actually more similar to the vertebrate CNS than the latter is to the fruit fly's, indicating that the CNSs of both vertebrates and Platynereis are likely to be more similar to the ancestral form than is that of the fly.

The striking similarities between these two animals, separated by hundreds of millions of years of evolution, implies a common evolutionary origin from an equally complex ancestral pattern. In other words, Urbilateria must have had these same sets of genes, in the same spatial orientation, patterning its nervous system, which must have likewise been centralized. Of course, this is still a matter of probability, but it seems highly unlikely that such a complex arrangement of genes could have been recruited independently to specify evolutionarily unrelated cell populations.

The question now is how this inversion happened. Flipping over is not trivial; there are many differences between up and down, and this potential burden is one of the major impediments to acceptance of the inversion hypothesis. "Down" is the sea floor, which combined with gravity becomes the source of friction; "Up" is the source of falling objects and other dangers, as well as sunlight. An inversion of the body axis would have thus involved a significant change in lifestyle, which is difficult to imagine, but not unprecedented.

Many animals swim "upside down," such as brine shrimp and upside-down catfish, both of which are more efficient feeders in this orientation. Perhaps in a few hundred million years, natural selection will reshape the other anatomical details to fit these new lifestyles, dorsal will become ventral, and zoologists will classify them as having their nerve cords on the opposite side from their closest relatives. To some extent, this process has already begun with the catfish. Most fish have light underbellies and dark backs to aid in camouflage. The upside-down catfish, on the other hand, actually has a dark "underbelly" and a light "back," to help it camouflage in its preferred orientation. Thus, for the catfish, swimming upside down began as a behavioral change, which later permitted certain genetic changes (i.e. coloration and a propensity, eventually an instinct, for swimming on its back).

It is not unreasonable, then, that the evolutionary trend toward a dorsal nerve cord began as a change in behavior, and was only later followed by genetic evolution. A similar sequence of events was likely involved in the animal invasions of land and air.