Meatless lifestyles are often chosen for ethical reasons rooted in a concern for animal rights. Because of their food choices, vegetarians and vegans are often accused of, and taunted for, loving animals more than people. But do most vegetarians care less for fellow humans than for animals, care for humans and animals equally, or care more for humans than for animals while still caring more for animals than omnivores do?
A study published yesterday in PLoS ONE has attempted to parse out differences among omnivores, vegetarians and vegans in brain responses to human and animal suffering. The three groups were first given the Empathy Quotient questionnaire, and vegans and vegetarians scored significantly higher in empathy than omnivores. Next, the subjects had their brains scanned with fMRI as they viewed images of human suffering, animal suffering and “neutral” natural landscapes. Many differences were found among the brains of those with different feeding habits.
Firstly, vegetarians and vegans showed greater engagement of “empathy-related areas,” such as the anterior cingulate cortex (ACC) and left inferior frontal gyrus (IFG), than omnivores while observing both human and animal suffering. This seems to suggest a neural basis for the greater empathy toward all living beings reported by those with meatless lifestyles.
However, when viewing animal suffering but not human suffering, meat-free subjects recruited additional empathy-related regions in prefrontal and visual cortices and showed reduced right amygdala activity. This may be interpreted as evidence that vegetarians and vegans care more about the emotions of animals than those of humans. It is important to consider how the study was conducted, though, before reaching such a conclusion.
The authors themselves note a couple of weaknesses in their design. The subjects’ brain activity while viewing human or animal suffering was compared to the baseline/control condition of “neutral” scenes that did not include living beings, faces, or suffering of any kind, all factors that should have been considered. The subjects were also simply asked to look at the images of the different conditions without being asked about their thoughts or feelings, so it is impossible to confidently attribute their brain responses to specific emotions. And even if the demonstrated brain activity did represent empathy, there is the possibility that the subjects were desensitized to images of human suffering, which appear daily on the news; a desensitized response to an image does not necessarily tell us how much empathy someone feels for fellow humans. So a claim that vegetarians/vegans love animals more than humans because they show more empathy-related neural activity while viewing suffering animals than suffering humans is unsubstantiated at this point.
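To make the baseline problem concrete, here is a minimal sketch with entirely made-up numbers (none are taken from the paper) of why the choice of control condition matters when forming a contrast:

```python
# Hypothetical mean responses in an "empathy-related" region, arbitrary units.
mean_response = {
    "human_suffering": 2.0,
    "animal_suffering": 1.8,
    "neutral_landscape": 0.5,   # the study's baseline: no faces, no living beings, no suffering
    "face_no_suffering": 1.4,   # the kind of control the critique above calls for
}

# Contrast actually available in the study: suffering vs. landscapes.
# Any response to simply seeing a face or a living being is folded into this number.
suffering_vs_landscape = mean_response["human_suffering"] - mean_response["neutral_landscape"]

# A tighter contrast against a non-suffering face would isolate the suffering-specific component.
suffering_vs_face = mean_response["human_suffering"] - mean_response["face_no_suffering"]

print(suffering_vs_landscape)  # 1.5 -- suffering + face + living-being effects combined
print(suffering_vs_face)       # 0.6 -- closer to a suffering-specific effect
```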
Another major finding of this study concerned differences between vegetarians and vegans in the neural representation of cognitive empathy. All of these subjects had chosen not to eat meat for ethical reasons, but the authors suggest that these differences in vegan and vegetarian brain responses indicate that the two groups experience empathy for suffering differently, possibly because of differences in the reasons behind their diet choices. Again, these results should be taken as preliminary because of weaknesses in the study’s design.
Overall the study is interesting, but it remains to be seen whether this will spark further research that will ultimately demonstrate findings significant enough to affect public policy and animal cruelty regulations. For now, we have a bit of a clearer picture of the brain’s representation of empathy and a lot of extra material for the never-ending ethical debate over man’s right to meat.
Filippi M, Riccitelli G, Falini A, Di Salle F, Vuilleumier P, Comi G, & Rocca MA (2010). The brain functional networks associated to human and animal suffering differ among omnivores, vegetarians and vegans. PLoS ONE
Tuesday, May 25, 2010
Mindreading by looking at the eyes: do we improve as we age?
Do you think you’re good at understanding people by looking them in the eye? This skill is not only important for making money playing poker but for social situations, relationships and everyday professional interactions.
Recently, scientific interest in mindreading by looking others in the eye has increased, mainly within the context of ‘theory of mind’ – the general capacity to understand one’s own and other people’s mental states (e.g. emotions, desires, beliefs). A test that is commonly administered is the ‘Reading the Mind in the Eyes’ test, which you can try yourself here. You may be surprised at how accurate your abilities are (I scored 26/36, which is considered within the normal range). But might it be possible that more experience can improve your score?
A new study in press in the journal Neuropsychologia employed this test to study differences in mindreading ability between younger and older individuals. Ilaria Castelli and colleagues used fMRI to study the brain responses of 21-30-year-olds versus 60-78-year-olds during performance of the Reading the Mind in the Eyes test. They found that young and old people did not differ in their ability to understand the mental states represented in the eyes, but the groups recruited different neural circuitry to complete the task. Some areas were activated only or more extensively in the younger group, and vice versa.
So the results suggest that aging doesn’t help us understand others through the eyes. However, the researchers didn’t comment on controlling for amount of experience with people, which could be a factor that would lead to improvements in mindreading abilities. Also, since there were only 12 subjects in each group, the study doesn’t necessarily provide the final verdict on whether aging/experience can improve mindreading in the eyes. Nonetheless, the study may pave the way to more research on aging and theory of mind.
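To put the sample-size concern in perspective, here is a rough simulation of my own (not part of the study), assuming groups of 12 and a made-up medium-to-large effect, showing how often a simple two-sample comparison would detect a difference that really exists:

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_group, true_effect_sd = 12, 0.8     # assume a medium-to-large effect (Cohen's d = 0.8)
critical_t = 2.074                        # two-tailed 5% critical t for df = 22

def two_sample_t(a, b):
    # Classic pooled-variance t statistic.
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var * (1 / len(a) + 1 / len(b)))

n_sims, detected = 5000, 0
for _ in range(n_sims):
    young = rng.normal(0.0, 1.0, n_per_group)
    old = rng.normal(true_effect_sd, 1.0, n_per_group)   # the groups really do differ
    if abs(two_sample_t(young, old)) > critical_t:
        detected += 1

print(detected / n_sims)   # roughly 0.45: a real, sizeable effect is missed more often than not
```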
Additionally, the study may provide insight into changes in the brain’s cognitive strategies as we age. Indeed, some of the results appear to be in line with other fMRI studies on aging. For example, older subjects relied more on the frontal cortex, which is known to be more active in other cognitive tasks in elderly individuals.
The authors also suggest that both groups used areas implicated in the (controversial) mirror neuron system devoted to empathy, but that older people also recruited areas implicated in the mirror neuron system for language understanding. It should be understood, however, that the concept, and indeed the existence, of these mirror neuron systems in humans is still a subject of debate.
Even though I think some of the results should be interpreted cautiously, I’m still a fan of this study. By applying the study of theory of mind to aging, we may be able to gain insight into the brain’s aging process and the mechanisms of cognitive decline. Theory of mind studies can teach us more than just things that are cool and interesting – practical applications could come about in the future.
Castelli I, Baglio F, Blasi V, Alberoni M, Falini A, Liverta-Sempio O, Nemni R, & Marchetti A (2010). Effects of aging on mindreading ability through the eyes: An fMRI study. Neuropsychologia PMID: 20457166
Thursday, May 20, 2010
A virtual slap in the face (isn't there an iPhone app for that?)
Researchers from the group who recently reported the illusion of owning a virtual hand have come out with a new study on the sense of body ownership that has garnered media attention.
The study, conducted by Mel Slater and colleagues, is summarized as follows at livescience.com:
Male volunteers donned virtual reality goggles and took on the view of a virtual teenage girl sitting in a living room. The virtual girl's mother appeared to stroke her shoulder at the same time a real lab assistant stroked the shoulders of the volunteers.
Suddenly, the virtual mother slapped her daughter about the face three times with accompanying sound effects. The male volunteers all experienced a strong bodily reaction measured as rapid deceleration of their heart rates in response to the sudden threat, because they reacted to the virtual slap as if it were real.

This is pretty cool, but the findings just confirm what all hardcore video gamers already know. ‘First person shooter’ games are largely successful because they make players feel like they are actually part of the warzone displayed on the screen. Gamers would probably report similar feelings of body transfer if they were administered the questionnaires given to subjects in this experiment, and if you’ve ever played a first-person perspective video game, you’ve probably experienced fluctuations in your heart rate when being threatened by enemies.
...[The researchers] showed that a first-person perspective seems to play the biggest role in helping people inhabit a virtual body. When participants had more of a third-person perspective of the girl-slap (they didn't feel like they inhabited the girl's body), they didn't show the same physiological reactions.
News reports on this study are calling it a demonstration of an out-of-body experience, but whether the findings should be considered in such a way is up for debate. An out-of-body experience is often defined as the experience in which a person who is awake sees his or her body from a location outside the physical body. A 2007 study by H. Henrik Ehrsson, published in Science, demonstrates a much more convincing induction of an out-of-body experience. In this study, subjects wore head-mounted video displays that showed them a live film recorded by video cameras behind the subjects. Subjects thus observed their own backs from the perspective of someone sitting behind, and they reported experiences of being outside their own bodies and observing themselves.
So the new study by Slater and colleagues on virtual reality and the sense of body ownership is not necessarily a vivid demonstration of an out-of-body experience. Furthermore, the ‘slap-in-the-face’ design and the transfer of male body ownership to a virtual female, which make for interesting headlines (including the title of this article), are not the significant parts of the study. If this study makes an impact on future work, it will be through its introduction of the first-person video game (or ‘immersive virtual reality’) paradigm to research on the concept of self-consciousness.
Importantly, the study demonstrates that the sense of body ownership depends more on visual perspective (first-person versus third-person) than on sensations of touch and movement. Combining this design with neuroimaging and neurophysiology techniques may prove useful in developing virtual reality technologies for therapeutic brain training in cases of neurological disorders, injuries and amputations. We’re currently far away from these sorts of applications, but at least the door to immersive virtual reality in neuroscience research is now open.
References:
Mel Slater, Bernhard Spanlang, Maria V. Sanchez-Vives, Olaf Blanke (2010). First Person Experience of Body Transfer in Virtual Reality. PLoS ONE
Ehrsson HH (2007). The experimental induction of out-of-body experiences. Science (New York, N.Y.), 317 (5841) PMID: 17717177
Monday, May 17, 2010
The neural basis of déjà vu
Reading a book for the second time can be an enlightening experience. At the same time, aspects of this experience can be confusing. On a second visit to a previously read work, I suspect that everyone plays a game, to some degree, of trying to determine which parts of the story they remember well and which parts they completely forgot. But there are also parts that lie somewhere in the middle; it is these parts that boggle our minds by leaving us uncertain of whether or not the details are familiar to us. Perhaps a nuance in the storyline that strikes you as familiar is something you actually skimmed over and ignored the first time you read the book. You have some awareness of your ignorance and begin to question your feeling of familiarity.
This clash of evaluations lies at the heart of déjà vu, the experience of recognizing a situation as familiar while simultaneously being aware that the situation may not have actually occurred in the past. Chris Moulin and Akira O’Connor, researchers who have attempted to study déjà vu in their laboratory, have recently published a paper outlining the current state and challenges of scientific research on this inherently subjective phenomenon. They discuss two broad categories of recent research: déjà vu in clinical populations (e.g. associations with epilepsy and dementia), and déjà vu in nonclinical populations.
Importantly, Moulin and O’Connor point out that these categories may be distinct, and that caution should be exercised when making comparisons between the two. A lot of research on déjà vu in clinical populations is not actually the study of déjà vu but of a slightly different experience called déjà vécu (also known as ‘recollective confabulation’) in older adults with dementia. Déjà vécu instances involve inappropriate feelings of familiarity, as in déjà vu, but the feelings are not necessarily accompanied by awareness that they are inappropriate. The validity of extending evidence from studies on déjà vécu to the casual experience of déjà vu is questionable.
However, there has been a movement in experimental cognitive psychology toward studying déjà vu by generating the phenomenon in nonclinical populations. These studies use techniques such as hypnotic suggestion and familiarity questionnaires about images that either were or were not previously shown to subjects. There are few studies using these techniques, and their applicability to déjà vu experiences in everyday life is still being questioned.
So a solid scientific theory of déjà vu is still nonexistent. But there have been some recent neuroscientific investigations that have shed light on the neural basis of déjà vu. Deep brain stimulation and brain lesion studies both implicate areas in the mesial temporal cortex in the generation of déjà vu. Moulin and O’Connor argue that this doesn’t necessarily mean we can label this region as the ‘déjà vu cortex’ of the brain; rather, if we are to make progress in understanding déjà vu in the brain, we should examine how mesial temporal structures interact with whole neural networks during instances of déjà vu. For example, the authors hypothesize that “mesial temporal structures may aberrantly indicate a sensation of familiarity despite the rest of the hippocampo-cortical network indicating the overarching nonrecognition state that ultimately presides.”
In other words, when you’re re-reading a book or article that had a minor detail added after your first read, your mesial temporal regions are telling you that the minor detail is familiar, but the rest of your brain is telling you that you never read that detail the first time. What’s happening in the rest of the brain, and why the brain decides to confuse you like this in the first place, are questions for further research. That means that in the labs of Moulin, O’Connor and déjà vu researchers alike, it will be, in the famous words of Yogi Berra, “déjà vu all over again.”
O'Connor AR, & Moulin CJ (2010). Recognition without identification, erroneous familiarity, and déjà vu. Current psychiatry reports, 12 (3), 165-73 PMID: 20425276
Saturday, May 15, 2010
Type of music influences exercise performance
If you work out, and you like to listen to music while you work out, you probably have a set of favourite songs or artists that you always listen to when exercising. You might even have a playlist on your iPod called ‘running music,’ ‘lifting tunes,’ or ‘workout beatz.’ Personally, as a heavy metal fan, music from bands like Metallica and Iron Maiden gets me focused during a workout. I have a playlist called ‘Heart attack music’ for times that I need the heaviest of the heavy screamer bands to get me pumped up enough to run that extra mile.
But no matter what type of music you prefer, perceived beneficial effects of listening to your favourite music while exercising are probably real. A new study showed that men who cycled at high-intensity while listening to preferred music were able to go a greater distance and had lower ratings of perceived exertion than when they listened to non-preferred music or no music. The study was unfortunately limited in that it only included 15 subjects and all were male, but the findings are quite compelling – subjects were able to cycle for an average of 9.8km when they listened to preferred music as opposed to 7.1km when they listened to non-preferred music (they went 7.7km when listening to no music).
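To put those group means in perspective, here is a quick back-of-the-envelope calculation using only the distances reported above (the percentages are my own arithmetic, not figures from the paper):

```python
# Reported mean cycling distances, in km.
preferred, non_preferred, no_music = 9.8, 7.1, 7.7

print(round((preferred / non_preferred - 1) * 100, 1))   # ~38.0% farther than with non-preferred music
print(round((preferred / no_music - 1) * 100, 1))        # ~27.3% farther than with no music at all
print(round((no_music / non_preferred - 1) * 100, 1))    # ~8.5% farther in silence than with disliked music
```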
The researchers didn’t report what type of music the subjects preferred, but it is mentioned that faster tempos in songs (a mean rhythm of 117 beats/min) were selected over slower tempos (95 beats/min). These faster tunes were probably chosen to match the elevated heart rates induced by high intensity cycling. However, a fair comparison between ‘preferred’ and ‘non-preferred’ music should control for the effect of tempo, which is a factor that the authors of this study failed to take into account.
Regardless, music can have profound effects on emotion and mood, so it is believable that preferred music can pump us up and encourage physical activity. The authors of this study reason that music can distract an exerciser’s attention from the perception of physical sensations, giving rise to feelings of less perceived exertion and less fatigue. Perhaps this is what keeps us going when we listen to our favourite music as opposed to music we’re uninterested in. When music is non-preferred, we try to block it out or ignore it, leading to greater attention to pain and lactic acid build-up in our muscles. When we love the song we’re listening to, and we get really into the intricacies of the beat and melody, we ignore physical pain. If it were possible to put someone in a brain scanner while they exercise and listen to their favourite music, the prediction would be that their pain centres (e.g. the insular cortex) would be less active than during exercise with boring music.
So the lesson is this: bring your iPod to the gym, because listening to Justin Bieber’s latest hit (or other cheesy top 40 songs that are played at most facilities) while exercising can literally be painful.
References:
Nakamura PM, Pereira G, Papini CB, Nakamura FY, & Kokubun E (2010). Effects of preferred and nonpreferred music on continuous cycling exercise performance. Perceptual and motor skills, 110 (1), 257-64 PMID: 20391890
Tuesday, May 11, 2010
Empathy for pain in doctors versus synesthetes
A common warning given to students interested in a career in medicine is “you better have a tough stomach.” Luckily for physician-wannabes who tense up at the sight of blood, a new study published in NeuroImage suggests that it may be possible to change your brain’s response to watching pain being inflicted on others. The findings are nicely summarized at the BPS Research Digest blog:
Decety's team used electroencephalography (EEG) to monitor the electrical activity arising from the brains of 15 doctors and 15 controls while they looked at dozens of static pictures of people being pricked in various body parts by a needle or prodded by a cotton bud.

When a person looks at someone else in pain, their EEG response typically shows two distinct characteristics: a frontal component after 110ms, which is thought to reflect an automatic burst of empathy, and a more central, parietal component after about 350ms, which reflects a conscious evaluation of what's been seen.

As expected, the control participants showed an enhanced early and later phase EEG response to the needle pictures compared with the cotton bud pictures. The doctors, by contrast, showed no difference in brain response to the two categories of picture.

This means that doctors are essentially the opposite of people who experience synesthesia for pain, a condition I previously wrote about. Synesthesia for pain, the physical experience of pain induced by simply viewing pain in another, has mainly been identified in rare ‘phantom limb’ patients who have had an amputation but still experience sensations in the absent limb. If you can recall, certain areas of the brain are active when pain is felt: some areas involving emotion/affective processing, some areas involving cognition/evaluation, and other areas involving the actual sensation of pain. It is hypothesized that pain synesthetes have overly-sensitive responses to viewing pain in brain regions that are responsible for conscious pain perception.
In contrast, Decety’s new study suggests that doctors have under-active responses in the whole pain matrix. If this under-active response can be acquired by training (e.g. assisting surgeries in medical school), perhaps pain synesthetes could make their empathetic pain experiences go away by repeatedly viewing needles being pressed into others. However, this might not be the best treatment approach as it would probably be exceptionally painful for the synesthete.
Pain synesthesia is acquired (usually after injury/amputation), unlike most other identified forms of synesthesia, in which people are supposedly born experiencing numbers as colours or sounds as tastes. Perhaps the fact that it can be acquired means it can also be cured.
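For readers curious how EEG components like the ~110ms and ~350ms responses quoted above are typically quantified, here is a generic sketch using simulated data (this is not the authors' analysis pipeline, and the window boundaries are illustrative):

```python
import numpy as np

# Simulated epoched EEG from one electrode: 30 trials, sampled at 1000 Hz,
# with time running from -100 ms (pre-stimulus) to +599 ms.
rng = np.random.default_rng(0)
times = np.arange(-100, 600)                        # one sample per millisecond
epochs = rng.normal(0, 5, size=(30, times.size))    # stand-in for real recordings

# Averaging across trials yields the event-related potential (ERP).
erp = epochs.mean(axis=0)

def mean_amplitude(erp, times, start_ms, end_ms):
    """Mean ERP amplitude within a latency window (ms relative to stimulus onset)."""
    window = (times >= start_ms) & (times <= end_ms)
    return erp[window].mean()

early_component = mean_amplitude(erp, times, 90, 130)    # around the ~110 ms "automatic" response
late_component = mean_amplitude(erp, times, 300, 400)    # around the ~350 ms evaluative response
print(early_component, late_component)
```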
Decety J, Yang CY, & Cheng Y (2010). Physicians down-regulate their pain empathy response: an event-related brain potential study. NeuroImage, 50 (4), 1676-82 PMID: 20080194
Fitzgibbon BM, Giummarra MJ, Georgiou-Karistianis N, Enticott PG, & Bradshaw JL (2010). Shared pain: from empathy to synaesthesia. Neuroscience and biobehavioral reviews, 34 (4), 500-12 PMID: 19857517
Sunday, May 9, 2010
The sense of body ownership and 3D virtual reality
The classic ‘rubber hand illusion’ gives profound insight into our brain’s ability to confuse the real and living with the inanimate. As originally reported by Matthew Botvinick and Jonathan Cohen in 1998, when people have their hand hidden from view and watch a dummy hand being stroked with a paintbrush while their hidden hand is stroked at the same time, they feel the stroking as coming from the dummy hand rather than their real hand.
This illusion demonstrates that our perception of body-part ownership is malleable, complementing evidence I mentioned in a previous post suggesting that patients with amputated limbs can still feel pain in their missing limbs. Studies of the rubber-hand illusion have helped us understand how we recognize our body parts and how that might change if we were to lose or replace a body part. Moreover, they have helped us understand the relationships among vision, somatosensation (the sense of touch) and proprioception (the sense of body-part location).
However, it turns out that the illusion doesn’t only need to be studied with rubber hands – it can also be induced with a virtual representation of hands. In this ‘virtual arm illusion,’ people who view a 3D virtual arm instead of a rubber arm, under the same conditions as Botvinick and Cohen’s illusion, confuse their real arm with the arm displayed in virtual reality. Since it is easier to experimentally manipulate virtual images and scenes than a rubber object, the virtual arm illusion could be an important tool for building on findings from rubber hand studies.
Indeed, in a new study by the researchers who originally reported the virtual arm illusion, they demonstrate that the illusion extends beyond the sense of touch. Subjects whose real hand was hidden from view wore 3D goggles and watched a virtual hand move either in synchrony or asynchrony with their actual hand movement. The subjects felt a sense of ownership over the virtual hand when it was moving in synchrony with their real hand movement, but they felt that they had significantly less control over the virtual hand when their real hand was moving differently.
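As a rough illustration of how such a synchrony manipulation could be implemented in software (this is my own sketch, not the authors' setup, and all names and values are made up), the virtual hand can simply replay the tracked hand positions with or without a lag:

```python
from collections import deque

class DelayedHand:
    """Replays tracked hand positions with a fixed lag, measured in frames.

    lag_frames = 0 reproduces a synchronous condition (the virtual hand mirrors
    the real one); a large lag yields an asynchronous condition.
    """
    def __init__(self, lag_frames):
        self.history = deque(maxlen=lag_frames + 1)

    def update(self, tracked_position):
        self.history.append(tracked_position)
        # Oldest buffered sample = the position from lag_frames frames ago
        # (or the earliest available one while the buffer is still filling).
        return self.history[0]

sync_hand = DelayedHand(lag_frames=0)     # render exactly what the tracker reports
async_hand = DelayedHand(lag_frames=30)   # ~0.5 s behind at 60 frames per second

for position in [(0.00, 0.00), (0.05, 0.01), (0.10, 0.02)]:
    print(sync_hand.update(position), async_hand.update(position))
```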
The authors suggest that their findings could provide insight into the use of virtual bodies in therapies and rehabilitation. But what about this (slightly more commercially-driven) application: 3D movies where you feel like you are literally part of the film, moving around a scene, feeling yourself touching supposedly virtual objects and supposedly virtual people?
Toying with our sense of body ownership using virtual manipulations could further bridge the fading gap between science-fiction and reality. And help us understand the living electricity encased in our skulls along the way.
Sanchez-Vives MV, Spanlang B, Frisoli A, Bergamasco M, & Slater M (2010). Virtual hand illusion induced by visuomotor correlations. PLoS ONE, 5 (4) PMID: 20454463
Thursday, May 6, 2010
fMRI lie-detection evidence rejected in court
Conveniently in keeping with the theme of my last post on fMRI for lie-detection in law, a story appeared on Wired explaining that a lie-detection brain scan could be used as evidence in court for the first time.
A Brooklyn attorney hopes to break new ground this week when he offers a brain scan as evidence that a key witness in a civil trial is telling the truth, Wired.com has learned.

If the fMRI scan is admitted, it would be a legal first in the United States and could have major consequences for the future of neuroscience in court.

The lawyer, David Levin, wants to use that evidence to break a he-said/she-said stalemate in an employer-retaliation case. He’s representing Cynette Wilson, a woman who claims that after she complained to temp agency CoreStaff Services about sexual harassment at a job site, she no longer received good assignments. Another worker at CoreStaff claims he heard her supervisor say that she should not be placed on jobs because of her complaint. The supervisor denies that he said anything of the sort.

So, Levin had the coworker undergo an fMRI brain scan by the company Cephos, which claims to provide “independent, scientific validation that someone is telling the truth.”

This attempt was rejected by the judge on the grounds that assessing credibility is the role of the jury, not an expert witness. The judge did not even seek to determine how reliable fMRI lie-detection is, and its reliability was not compared to that of traditional methods (e.g. letting the jury decide).
Thus this case did not even reach the point that Frederick Schauer discusses when he argues that whether fMRI has a place in the courtroom should be determined by legal rather than scientific standards. It is well known that the lie-detection abilities of a jury are weak and unreliable. If the fMRI evidence increased the objectivity at all in judging the witness in this case, it might have made sense to allow the evidence.
However, it looks like we’re still stuck clinging to traditional lie-detection methods, not even willing to question if a new process makes more sense. Still I wouldn't dare suggest that fMRI has a place in the courtroom in the absence of comprehensive peer-reviewed research examining this application. The fMRI evidence was probably garbage in this case – but that doesn’t mean it didn’t deserve questioning, and it doesn’t mean fMRI evidence will be garbage in every future case. Brain scans performed by companies with a financial stake in the outcome should especially be taken with a grain of salt, but we’re likely to see more detailed assessments of this type of evidence in future court cases.
Tuesday, May 4, 2010
Lie-detection and neurolaw: do brain scans have a place in the courtroom?
The legal application of neuroimaging for lie-detection in courtrooms has been criticized by scientists on a number of grounds. Firstly, results of fMRI studies on experimental subjects such as undergraduate students cannot necessarily be applied to people offering evidence in a court. Secondly, lie-detection neuroimaging studies are designed such that lies are ‘instructed,’ meaning that a lie in the laboratory is not actually a real lie. Thirdly, much of the research on lie-detection with fMRI has been conducted by private companies, such as No Lie MRI, who do not publish their findings in peer-reviewed journals and who hire scientists with vested interests in study outcomes.
Despite these criticisms, an article by law professor Frederick Schauer recently appeared in Trends in Cognitive Sciences, arguing that the suitability of neuroimaging as a tool for the courtroom should be determined according to legal and not scientific standards. Principal among his arguments is the claim that current legal methods of lie-detection are not scientifically valid in any sense, and that if neuroimaging provides even slightly higher validity, it should be used in legal cases. Schauer points out that “[r]esearch shows that ordinary people’s ability to distinguish truth from lies rarely rises above random, and juries are unlikely to do better.” He follows this up by stating that “...the admissibility of neural lie-detection evidence must be based on an evaluation of the realistic alternatives within the legal system and not on a non-comparative assessment of whether neural lie-detection meets the standards that scientists use for scientific purposes.”
The argument is certainly interesting, and scientists should be able to appreciate it. Scientists are trained to be cautious and skeptical, accepting only findings that have a 5% or lower probability of being attributable to chance. These standards are often even stricter for fMRI studies. However, if a jury can correctly detect lies only 50% of the time, brain scans that are right 60% of the time are simply more reliable.
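To make that comparison concrete, here is a back-of-the-envelope sketch of my own (the base rate of lying is assumed, and a single accuracy figure is treated, for simplicity, as both sensitivity and specificity) showing how much a modest accuracy gain actually buys:

```python
def prob_flagged_statement_is_a_lie(accuracy, base_rate_of_lying):
    # P(lie | flagged as a lie) via Bayes' rule, treating one accuracy figure
    # as both sensitivity and specificity for simplicity.
    true_positives = accuracy * base_rate_of_lying
    false_positives = (1 - accuracy) * (1 - base_rate_of_lying)
    return true_positives / (true_positives + false_positives)

base_rate = 0.30   # assume, hypothetically, that 30% of tested statements are lies

print(prob_flagged_statement_is_a_lie(0.50, base_rate))  # jury at chance: ~0.30 (no information gained)
print(prob_flagged_statement_is_a_lie(0.60, base_rate))  # scanner at 60%: ~0.39 (a modest but real gain)
```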
Yes, there may be differences between experimental subjects telling instructed lies and real-life defendants being put to the test. But Schauer argues that “...if the ease of telling an instructed lie in the laboratory correlates with the ease of telling a real lie outside the laboratory, research on instructed lies is no longer irrelevant to detecting real lies.”
This has not yet been demonstrated, and Schauer admits that the use of fMRI for lie-detection is probably unwarranted at this point. Several obstacles first need to be overcome, but fMRI in law seems to hold newfound promise in the face of scientific criticism when Schauer’s idea of comparing neuroimaging to current, perhaps less effective, methods of lie-detection is taken into consideration.
Still, scientists who understand the proper use of fMRI need to develop methods to ensure that the lie-detection application is practical and effective. Many neuroimaging studies contain ‘outlier’ subjects whose results do not conform to what is found in other subjects. Sometimes the outlier did not perform the task correctly, resulting in brain activity that reflects attention to the wrong stimuli or a lack of attention to the task altogether. A smart liar might be able to play with his or her attention to the lie-detection task at hand, resulting in skewed data. And since fMRI studies typically require responses via button presses rather than speech, it may be relatively easy for a subject to engage in intentionally distracting thoughts.
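As an aside, a common (and entirely generic, not study-specific) way of flagging such outlier subjects is to z-score a per-subject summary measure and exclude anyone far from the group mean; the numbers below are illustrative:

```python
import numpy as np

def flag_outliers(values, z_threshold=2.0):
    """Return indices of subjects whose summary measure lies beyond +/- z_threshold SD."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std(ddof=1)
    return np.where(np.abs(z) > z_threshold)[0]

# Made-up per-subject task accuracies: one subject barely attended to the task.
task_accuracy = [0.92, 0.88, 0.95, 0.90, 0.91, 0.35, 0.93, 0.89]
print(flag_outliers(task_accuracy))   # -> [5], the inattentive subject
```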
More fMRI studies on real-life-style lie-detection need to be conducted before a courtroom introduction is warranted. But over-skepticism is unwarranted when lie-detection systems of today’s courts are flawed and unreliable.
Schauer F (2010). Neuroscience, lie-detection, and the law: contrary to the prevailing view, the suitability of brain-based lie-detection for courtroom or forensic use should be determined according to legal and not scientific standards. Trends in cognitive sciences, 14 (3), 101-3 PMID: 20060772
Monday, May 3, 2010
Should students get "dope-tested" for cognitive enhancers?
Student use of “smart drugs” to increase academic performance is quite a common practice these days. According to a recent article in the Guardian and a publication in Nature, up to 25% of students at some universities are using drugs such as Ritalin and modafinil as cognitive enhancers. These agents are relatively easy to acquire, and proponents of their use claim that they cause few adverse side effects.
Those who are for the use of neuroenhancers also argue that taking a pill is no different than having a cup of coffee. Both caffeine and modafinil are drugs (although they have different mechanisms of action) that help students stay up and do last-minute studying, but one substance is legal while the other isn’t.
If this argument is taken into consideration, however, an important follow-up question is: where do you draw the line between “fair” and unfair cognitive enhancement? Would a student who supports modafinil use to improve academic performance also support, say, special brain surgeries for students to help them increase cognition and exam scores?
While this might seem like a ridiculous scenario on the surface, we could be facing related questions in the future, especially if no measures are taken soon to prevent smart drug use among students. In a 2008 case study of a single patient, Andres Lozano and colleagues at the University of Toronto found that deep brain stimulation (applying highly localized electric current to neurons) in a specific brain area dramatically enhanced certain cognitive abilities. After the patient was stimulated in the fornix, a major output of the hippocampus, he demonstrated significantly increased scores on measures of verbal memory and spatial learning, and his IQ increased by 9 points. Lozano is currently testing deep brain stimulation in Alzheimer’s patients to see if their cognitive deficits can be reversed or alleviated by the technique.
The potential of deep brain stimulation to positively affect cognition is promising for a few reasons. Lozano wasn’t trying to enhance his patient’s cognitive ability when he stumbled upon this effect; the patient was stimulated near the fornix in an attempt to suppress his appetite in a case of treatment-resistant obesity. Thus, even though the observed cognitive improvements were dramatic, the fornix might not even be the best place to stimulate in order to enhance cognitive function. Additionally, studies of Parkinson’s patients as well as animal models have revealed supportive effects of deep brain stimulation on cognition.
Now imagine a future in which cognitive-enhancing electrodes can be inserted into the brain using a relatively non-invasive method, and this treatment is commonly used in the management of cognitive decline and Alzheimer’s disease. Just as students now seek out drugs that are meant for treating ADHD and insomnia, would students of the future also jump at the opportunity to try this cognitive therapy meant for Alzheimer’s? Is this still the same thing as drinking a cup of coffee, or does everything change when the treatment involved extends beyond the realm of drug therapy?
In an environment so competitive that students longing for straight A’s are willing to sacrifice sleep and neglect basic self-care, it is reasonable to assume that some students are willing to do literally whatever it takes to achieve their goals. In the absence of defined limits and regulations, cognitive enhancement to increase academic performance can and will become the rule rather than the exception.