Throughout most of human history, our hunter-gatherer ancestors had to engage in physical activity to obtain food. But nowadays we can drive to the supermarket, briefly walk through its aisles, check out, then drive back home. This may seem like a luxury, but evolution hasn’t prepared us for such a drastic shift in behaviour.
A possible explanation for the “runner’s high,” a feeling of intense euphoria associated with going on a long run, is that our brains are stuck thinking that lots of exercise should be accompanied by a reward. Perhaps our ancestors who were able to achieve the runner’s high while hunting for food ran more often than those who could not achieve the high. These ‘high-achievers’ (no pun intended) would gather more food as a result of their enhanced motivation, and would be more fit to pass on their genes to the next generation.
Anecdotal reports of the runner’s high often come from endurance runners. However, there has been little scientific study of the runner’s high, so it is difficult to say anything definitive about its role or mechanism. The traditional, widely publicized explanation for the runner’s high is an “endorphin rush” that inhibits pain during vigorous exercise. However, other chemicals that potentially contribute to the high are epinephrine, serotonin, dopamine and endocannabinoids.
In a study recently published in Experimental Neurology, investigators deleted the gene for the cannabinoid receptor CB1 in mice, and examined how this change to the endocannabinoid system affects voluntary running. The mice with CB1 deletions exhibited 30-40% less running activity than mice without the deletion. The knockout mice also had reduced baseline hippocampal neurogenesis (the birth of new neurons, which is known to be induced by exercise), but they were still able to increase neurogenesis at a normal rate when they did exercise.
These findings indicate that the endocannabinoid system is somehow involved in the regulation of voluntary running activity. In particular, a reduction in CB1 levels could lead to less binding of endocannabinoids to receptors in brain circuits that drive motivation to exercise. It appears, however, that the endocannabinoid system does not play a major role in controlling exercise-induced neurogenesis.
It is easy to point to endocannabinoids as a candidate mediator of the runner’s high, since endocannabinoids are the body’s natural analogue of tetrahydrocannabinol (THC), the psychoactive ingredient of marijuana. The study described here doesn’t speak directly to this proposed parallel, but if the motivation to exercise is considered to be related to the runner’s high, then endocannabinoids may be a driving factor in achieving the runner’s high.
Physical activity has been associated with obtaining rewards throughout evolution. Today we might be left with a certain high associated with the prospect of obtaining a reward – a motivational high mediated by endocannabinoids. This ‘pre-runner’s high’ is an anticipation of the runner’s high, so the two experiences cannot necessarily be thought of as separate. That is – of course – assuming that the runner’s high occurred often enough in our past that our brains evolved to anticipate it. But even if the runner’s high was not common throughout our history, the peaceful feeling that almost everyone experiences after an exhausting run or bike ride should be adequate motivation to start moving.
Blood levels of endocannabinoids have previously been shown to increase after exercise, so it remains possible that endocannabinoids mediate the runner’s high. It is most likely, however, that many chemicals converge on the brain circuits that underlie the experience. Given the newly discovered role of endocannabinoids in motivation for exercise, it would be unsurprising if endocannabinoids played an important part in directly inducing the runner’s high.
So kids out there: don’t smoke weed if you wish to activate your CB1 receptors. Run.
References:
Dubreucq S, Koehl M, Abrous DN, Marsicano G, & Chaouloff F (2010). CB1 receptor deficiency decreases wheel-running activity: consequences on emotional behaviours and hippocampal neurogenesis. Experimental Neurology, 224 (1), 106-13. PMID: 20138171
Fuss J, & Gass P (2010). Endocannabinoids and voluntary activity in mice: runner's high and long-term consequences in emotional behaviors. Experimental Neurology, 224 (1), 103-5. PMID: 20353785
Thursday, July 29, 2010
Sunday, July 25, 2010
The neural response to emotional robots
When it comes to robotics, Japan is way ahead of the rest of the world. Reality is quickly catching up with science fiction as robots are being used and developed for increasingly complex behaviours. There are now Japanese robots that function as security guards, trainers for professional skills, and even pets and social companions.
Robots are taking over roles that were once thought to make humans unique. An archetypal example of this is the use of robots for cognitive therapy. Robots are also being used for social assistance and personal care of the elderly. But how does robot social support fare against human support? Is it really possible to empathize with and emotionally respond to a robot while simultaneously knowing that it is just a robot?
Researchers from – you guessed it – Japan recently collaborated with an international team to investigate how our brains respond to robots expressing human emotions. A humanoid robot, called WE4-RII, was specially designed to make facial expressions for emotions such as disgust, anger and joy. Study participants had their brains scanned with fMRI as they watched either WE4-RII expressing emotions or a human expressing the same emotions. The subjects were asked to attend to either the way the robot/human face was moving or to the emotion being depicted.
When subjects attended to the emotions, brain activity in areas involved in processing emotions (such as the left anterior insula for the perception of disgust) was reduced for the robot compared with the human. At the same time, however, attending to emotions rather than motions increased activity in the ‘motor resonance’ (i.e., the controversial mirror neuron) circuit for the robot.
The experimenters suggested this means we don’t emotionally resonate with robots as well as we do with humans, but we relate to robot facial motions better when we attend to their emotions.
The latter claim may seem counterintuitive, but it matches behavioural experiments that suggest a ‘motor interference effect’ for robotic movements. That is, when we attend to robotic movements, we don’t think of these movements as something that we do when we move, so we don’t resonate with them. But when we stop attending to the movements, we don’t notice the robotic nature of these movements anymore, and we therefore imitate them to a greater degree. I wonder if this would be much different for people who are really good at dancing the robot.
Still, the experiment doesn’t demonstrate that we can’t emotionally relate to robots to the same degree as to humans. The robot used was clearly mechanical compared with the human actors, which means that subjects knew the robot was a robot. If you couldn’t tell whether a face was robot or human, however, you would expect the brain to respond in the same way.
An interesting follow-up study could address this problem by using more human-looking robots and including two groups of subjects: those who are aware of the robots and those who are not. Then we’d be able to see how previous knowledge affects our innate reactions to emotional facial expressions.
And another central question remains. Now that robots are taking over traditional human roles, when will we start scanning their ‘brains’ and investigating whether they can empathize with us? I won’t be visiting any hot-shot robot psychotherapist until we have data that shows they care.
Thierry Chaminade, Massimiliano Zecca, Sarah-Jayne Blakemore, Atsuo Takanishi, Chris D. Frith, Silvestro Micera, Paolo Dario, Giacomo Rizzolatti, Vittorio Gallese, & Maria Alessandra Umilta (2010). Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures. PLoS ONE
Labels:
empathy,
fMRI,
mirror neurons,
social cognition,
virtual reality
Sunday, July 18, 2010
Inception
Inception is the latest popular film in which a central theme is the questioning of reality. As in The Matrix, dreams serve as the window into that questioning.
“Inception” is defined in the film as the act of planting an idea into someone’s subconscious mind, which can be accomplished by entering their dream. Cobb (played by Leonardo DiCaprio), the main character in the film, is put on a mission to perform inception on an enemy. As an experienced thief who is able to enter the dreams of others, Cobb attempts to attain his goal using a complex scheme to avoid being caught by the enemy. Part of his strategy involves working with associates who design dreams, dreams within dreams, and dreams within dreams within dreams. At each layer, reality is brought into question.
Questions like these come up: is the deepest level of dreaming actually reality? Is the first level that is not perceived as a dream actually a dream?
Characters in Inception struggle with these issues throughout the film. The very same questions are among the reasons to be interested in, and to study, neuroscience.
Although the concept of inception may be thought of in the film as a strategy for attack, perhaps there is another conception hidden beneath this layer of understanding, a conception that requires introspection at its core level. Inception, or the planting of an idea into the mind, could represent our basic notion of what is real. Perhaps we are born with the idea planted in our minds that reality is the world we are experiencing. Perhaps our brains develop in such a manner that we have a natural inclination to assume our senses do not deceive us.
When we start to question this form of inception, this trap of certainty, we are going against our instincts. We are no longer blindly accepting what we perceive.
Therein lies the beauty of neuroscience.
Thursday, July 8, 2010
The neural basis of synesthesia
Wikipedia has a page on the neural basis of synesthesia, but not yet described there is a new study in press by Vilayanur S. Ramachandran’s group that provides interesting insights.
Synesthesia is a neurological condition in which stimulation of one sense (e.g. hearing) evokes experiences in another (e.g. seeing colours). Ramachandran’s latest study investigated grapheme-colour synesthetes, who experience specific colours when they view specific graphemes (i.e., letters and numbers). The results demonstrate that two brain areas – for grapheme and colour representation respectively – are activated at virtually the same time in the brains of synesthetes who are viewing letters and numbers. On the other hand, normal controls viewing the same stimuli exhibit activity in the grapheme region but not the colour region.
This is the first study of synesthesia to demonstrate simultaneous activation of the two brain areas, known as the posterior temporal grapheme area (PTGA) and colour area V4 (pictured below in the brain of a representative synesthete). The finding was made possible because the researchers used a neuroimaging technique called magnetoencephalography (MEG) to measure weak magnetic fields emitted by specific areas of the brain while the subjects viewed graphemes. Compared to other neuroimaging techniques, such as fMRI and EEG, MEG offers the best combination of temporal and spatial precision in measuring brain activation.
If you read the Wikipedia page, you know that there are two main theories that attempt to explain how synesthesia occurs in the brain: the cross-activation theory and the disinhibited feedback theory. Let’s call them Theory 1 and Theory 2 for simplicity. Theory 1 posits that the grapheme and colour brain areas are ‘hyper-connected’ such that activity in the grapheme area evoked by viewing a letter or number immediately leads to activity in the colour area and conscious perception of colour. Theory 2 maintains that there are ‘executive’ brain areas that control the communication between the grapheme and colour areas, and that in synesthetes this control is disrupted. To reiterate, Theory 1 says that synesthete brains are anatomically different from normal brains, whereas Theory 2 says that the two brains are anatomically the same but act differently.
The results of Ramachandran’s group support Theory 1, the cross-activation theory, since this model predicts that the colour and grapheme areas should be activated at roughly the same time in synesthetes looking at graphemes.
This is perhaps the strongest evidence for the cross-activation theory of synesthesia to date. But to complicate things, Ramachandran’s group proposed a new theory called ‘cascaded cross-tuning model,’ which is essentially a refinement of the cross-activation model (let’s call it Theory 1.1).
According to Theory 1.1, when a synesthete views a number, a rapid cascade of activations leads to the perception of a colour. First, a subcomponent of the grapheme area responds to features of the number (e.g. the “o” that makes up the top of the number 9). This leads to activity in other subcomponents of the grapheme area representing the possible numbers that the feature could be part of (e.g. the “o” could be a component of the numbers 6, 8, or 9), as well as in the colour area V4. At this point, however, colour is not consciously perceived. Next, when the grapheme area identifies the number 9 (based on monitoring by other brain areas), activity in V4 is triggered, leading to conscious perception of the colour associated with the number 9.
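The two stages of Theory 1.1 can be caricatured in a few lines of code. This is purely an illustrative sketch: the feature label, activation values, threshold, and colour associations below are all invented for the sake of the example, not quantities from the study.

```python
# Toy sketch of the cascaded cross-tuning idea (Theory 1.1).
# Every value here is made up for illustration.

CANDIDATES = {"o-top": {"6", "8", "9"}}              # a visual feature and the graphemes it could belong to
COLOUR_OF = {"6": "red", "8": "green", "9": "blue"}  # hypothetical synesthetic associations
THRESHOLD = 1.0                                      # V4 activity needed for conscious colour perception

def view_grapheme(feature, identified):
    """Return (v4_activity, perceived_colour) after the two-stage cascade."""
    # Stage 1: the feature weakly primes V4 via all candidate graphemes,
    # but activity stays below the threshold for conscious perception.
    v4_activity = 0.3
    colour = None
    # Stage 2: once other brain areas resolve the grapheme's identity,
    # V4 is driven past threshold and the associated colour is perceived.
    if identified in CANDIDATES[feature]:
        v4_activity += 0.8
        if v4_activity >= THRESHOLD:
            colour = COLOUR_OF[identified]
    return v4_activity, colour
```

Calling `view_grapheme("o-top", "9")` models a synesthete resolving the ambiguous feature into a 9 and only then consciously perceiving its associated colour; an unresolved or non-candidate grapheme leaves V4 sub-threshold and no colour is experienced.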
Cool theory? Cool theory.
Note, however, that it only applies to ‘projector’ synesthetes who see colours in the outside world when they see numbers, but not ‘associator’ synesthetes who perceive the colours in the “mind’s eye.” Also, it doesn’t yet apply to other forms of synesthesia, such as acquired synesthesias (e.g. synesthesia for pain).
Yeah, it’s only a matter of time before Theory 1.2 takes over.
Reference:
Brang D, Hubbard EM, Coulson S, Huang M, & Ramachandran VS (2010). Magnetoencephalography reveals early activation of V4 in grapheme-color synesthesia. NeuroImage. PMID: 20547226
Monday, July 5, 2010
Coming soon: your brain on shrooms
For the first time, people under the influence of psilocybin, the psychoactive ingredient in magic mushrooms, lay down in what appeared to be an fMRI brain scanner.
However, unlike an fMRI machine, the device didn’t generate any magnetic fields. In fact, it didn’t generate an image of the brain or measure brain activity at all. The device was made out of wood.
In a study on the safety of administering psilocybin intravenously and conducting an fMRI scan, nine subjects who had previous experience with hallucinogenic drugs were injected with 2 milligrams of psilocybin and were then asked to lie down in the wooden mock-fMRI setting. The researchers determined that this dose of psilocybin should be considered tolerable and safe for conducting a brain scan.
It was important that this study be conducted before any real fMRI study on psilocybin because psychedelic drug experiences tend to be sensitive to the surrounding environment of the treated individual. Furthermore, it is difficult to get good data out of fMRI. The subject has to keep their head as still as possible for the duration of the scan, since slight movements can ruin the quality of the acquired data. The subjects in the mock-fMRI scanner were able to keep very still despite reporting that they were strongly affected by the drug.
Research on psilocybin has been gaining a respectable reputation in scientific and medical communities, as outlined in a New York Times article that I previously reviewed. Guidelines for safety in human hallucinogen research already exist, and the findings from this pilot study on mock-fMRI will build upon these guidelines. With fMRI studies, the reputation of psilocybin in research will likely improve, as will our understanding of how the drug exerts its baffling effects. There are currently two ongoing studies investigating whether psilocybin can ease psychological suffering associated with cancer. If there is an effect on mental well-being, studies of the brain could help us uncover the mechanism. And of course, news agencies will likely jump on the opportunity to describe the mystical experiences associated with psilocybin use as a simple product of neural patterns.
As in all aspects of neuroscience, however, fMRI will not tell us the whole story. The cellular and molecular level of psilocybin’s effects should be considered in conjunction with information obtained from macro-level brain activity studies.
It is also important to realize that just because psilocybin is being taken seriously in research, this does not justify irresponsible use of the drug. Whenever a research study identifies a positive effect of cannabis or another illicit substance, proponents of that drug often blow the findings out of proportion and take them out of context. Learning how psilocybin works may help us understand how best to use it, but harmful effects as well as the limitations of research studies should always be considered.
Expect to hear a lot more about psilocybin brain scans in the near future.
[Edit: see Mind Hacks for more discussion of the study]
Reference:
Carhart-Harris RL, Williams TM, Sessa B, Tyacke RJ, Rich AS, Feilding A, & Nutt DJ (2010). The administration of psilocybin to healthy, hallucinogen-experienced volunteers in a mock-functional magnetic resonance imaging environment: a preliminary investigation of tolerability. Journal of psychopharmacology (Oxford, England) PMID: 20395317