Monday, May 17, 2010

The neural basis of déjà vu

Reading a book for the second time can be an enlightening experience. At the same time, aspects of this experience can be confusing. I suspect that, to some degree, everyone revisiting a previously read work plays a game of trying to determine which parts of the story they remember well and which parts they completely forgot. But there are also parts that lie somewhere in the middle; it is these parts that boggle our minds by leaving us uncertain of whether or not the details are familiar. Perhaps a nuance in the storyline that strikes you as familiar is something you actually skimmed over and ignored the first time you read the book. You have some awareness of your ignorance and begin to question your feeling of familiarity.

This clash of evaluations lies at the heart of déjà vu, the experience of recognizing a situation as familiar while simultaneously being aware that the situation may not have actually occurred in the past. Chris Moulin and Akira O’Connor, researchers who have attempted to study déjà vu in their laboratory, have recently published a paper outlining the current state and challenges of scientific research on this inherently subjective phenomenon. They discuss two broad categories of recent research: déjà vu in clinical populations (e.g. associations with epilepsy and dementia), and déjà vu in nonclinical populations.

Importantly, Moulin and O’Connor point out that these categories may be distinct, and that caution should be exercised when making comparisons between the two. Much of the research on déjà vu in clinical populations does not actually study déjà vu, but a slightly different experience called déjà vécu (also known as ‘recollective confabulation’) in older adults with dementia. Instances of déjà vécu involve inappropriate feelings of familiarity, as in déjà vu, but those feelings are not necessarily accompanied by an awareness that the familiarity is inappropriate. The validity of extending evidence from studies of déjà vécu to the everyday experience of déjà vu is questionable.

However, there has been a movement in experimental cognitive psychology toward studying déjà vu by generating the phenomenon in nonclinical populations. These studies use techniques such as hypnotic suggestion and familiarity questionnaires about images that were or were not previously shown to subjects. There are few studies using these techniques, and their applicability to déjà vu experiences in everyday life is still being questioned.

So a solid scientific theory of déjà vu is still nonexistent. But there have been some recent neuroscientific investigations that have shed light on the neural basis of déjà vu. Deep brain stimulation and brain lesion studies both implicate areas in the mesial temporal cortex in the generation of déjà vu. Moulin and O’Connor argue that this doesn’t necessarily mean we can label this region the ‘déjà vu cortex’ of the brain; rather, if we are to make progress in understanding déjà vu in the brain, we should examine how mesial temporal structures interact with whole neural networks during instances of déjà vu. For example, the authors hypothesize that “mesial temporal structures may aberrantly indicate a sensation of familiarity despite the rest of the hippocampo-cortical network indicating the overarching nonrecognition state that ultimately presides.”

In other words, when you’re re-reading a book or article that had a minor detail edited in after you first read it, your mesial temporal regions are telling you that the minor detail is familiar, but the rest of your brain is telling you that you never read it the first time. What’s happening in the rest of the brain, and why the brain decides to confuse you like this in the first place, are questions for further research. That means that in the labs of Moulin, O’Connor and other déjà vu researchers, it will be, in the famous words of Yogi Berra, “déjà vu all over again.”



O'Connor AR, & Moulin CJ (2010). Recognition without identification, erroneous familiarity, and déjà vu. Current Psychiatry Reports, 12(3), 165-73. PMID: 20425276

Saturday, May 15, 2010

Type of music influences exercise performance

If you work out, and you like to listen to music while you work out, you probably have a set of favourite songs or artists that you always listen to when exercising. You might even have a playlist on your iPod called ‘running music,’ ‘lifting tunes,’ or ‘workout beatz.’ Personally, as a heavy metal fan, I find that music from bands like Metallica and Iron Maiden gets me focused during a workout. I have a playlist called ‘Heart attack music’ for times that I need the heaviest of the heavy screamer bands to get me pumped up enough to run that extra mile.

But no matter what type of music you prefer, the perceived beneficial effects of listening to your favourite music while exercising are probably real. A new study showed that men who cycled at high intensity while listening to preferred music were able to go a greater distance and had lower ratings of perceived exertion than when they listened to non-preferred music or no music. The study was unfortunately limited in that it included only 15 subjects, all male, but the findings are quite compelling – subjects cycled an average of 9.8 km when they listened to preferred music as opposed to 7.1 km when they listened to non-preferred music (they went 7.7 km when listening to no music).
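To put those distances in perspective, here is a quick back-of-the-envelope calculation. It is a minimal sketch using only the group-mean distances reported above, not a reanalysis of the study's data:

```python
# Relative improvement implied by the reported group-mean distances:
# 9.8 km (preferred music), 7.1 km (non-preferred), 7.7 km (no music).
preferred, non_preferred, no_music = 9.8, 7.1, 7.7

def pct_further(a: float, b: float) -> float:
    """Percent further cycled under condition a relative to condition b."""
    return 100 * (a - b) / b

print(f"Preferred vs non-preferred: {pct_further(preferred, non_preferred):.0f}% further")
print(f"Preferred vs no music:      {pct_further(preferred, no_music):.0f}% further")
# Roughly 38% further than with non-preferred music,
# and about 27% further than with no music at all.
```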

The researchers didn’t report what type of music the subjects preferred, but they do mention that faster tempos (a mean of 117 beats/min) were selected over slower ones (95 beats/min). These faster tunes were probably chosen to match the elevated heart rates induced by high-intensity cycling. However, a fair comparison between ‘preferred’ and ‘non-preferred’ music should control for the effect of tempo, a factor the authors of this study failed to take into account.

Regardless, music can have profound effects on emotion and mood, so it is believable that preferred music can pump us up and encourage physical activity. The authors of this study reason that music can distract an exerciser’s attention from the perception of physical sensations, giving rise to feelings of less perceived exertion and less fatigue. Perhaps this is what keeps us going when listening to our favourite music as opposed to music we’re uninterested in. When music is non-preferred, we try to block it out or ignore it, leading to greater attention to pain and to lactic acid build-up in our muscles. When we love the song we’re listening to, and we get really into the intricacies of the beat and melody, we ignore physical pain. If it were possible to put someone in a brain scanner while they exercised and listened to their favourite music, the prediction would be that their pain centres (e.g. the insular cortex) would be less active than during exercise with boring music.

So the lesson is this: bring your iPod to the gym, because listening to Justin Bieber’s latest hit (or other cheesy top 40 songs that are played at most facilities) while exercising can literally be painful.

References:
Nakamura PM, Pereira G, Papini CB, Nakamura FY, & Kokubun E (2010). Effects of preferred and nonpreferred music on continuous cycling exercise performance. Perceptual and Motor Skills, 110(1), 257-64. PMID: 20391890

Tuesday, May 11, 2010

Empathy for pain in doctors versus synesthetes

A common warning given to students interested in a career in medicine is “you better have a tough stomach.” Luckily for physician-wannabes who tense up at the sight of blood, a new study published in NeuroImage suggests that it may be possible to change your brain’s response to watching pain being inflicted on others. The findings are nicely summarized at the BPS Research Digest blog:
Decety's team used electroencephalography (EEG) to monitor the electrical activity arising from the brains of 15 doctors and 15 controls while they looked at dozens of static pictures of people being pricked in various body parts by a needle or prodded by a cotton bud.

When a person looks at someone else in pain, their EEG response typically shows two distinct characteristics: a frontal component after 110ms, which is thought to reflect an automatic burst of empathy, and a more central, parietal component after about 350ms, which reflects a conscious evaluation of what's been seen.

As expected, the control participants showed an enhanced early and later phase EEG response to the needle pictures compared with the cotton bud pictures. The doctors, by contrast, showed no difference in brain response to the two categories of picture.
This means that doctors are essentially the opposite of people who experience synesthesia for pain, a condition I previously wrote about. Synesthesia for pain, the physical experience of pain induced simply by viewing pain in another, has mainly been identified in rare ‘phantom limb’ patients who have had an amputation but still experience sensations in the absent limb. As you may recall, certain areas of the brain are active when pain is felt: some involved in emotional/affective processing, some in cognition/evaluation, and others in the actual sensation of pain. It is hypothesized that pain synesthetes have overly sensitive responses to viewing pain in the brain regions responsible for conscious pain perception.

In contrast, Decety’s new study suggests that doctors have under-active responses in the whole pain matrix. If this under-active response can be acquired by training (e.g. assisting surgeries in medical school), perhaps pain synesthetes could make their empathetic pain experiences go away by repeatedly viewing needles being pressed into others. However, this might not be the best treatment approach as it would probably be exceptionally painful for the synesthete.

Pain synesthesia is acquired (usually after injury or amputation), unlike most other identified forms of synesthesia, in which people are supposedly born experiencing numbers as colours or sounds as tastes. Perhaps, because it is acquired, pain synesthesia can also be cured.

Decety J, Yang CY, & Cheng Y (2010). Physicians down-regulate their pain empathy response: an event-related brain potential study. NeuroImage, 50(4), 1676-82. PMID: 20080194

Fitzgibbon BM, Giummarra MJ, Georgiou-Karistianis N, Enticott PG, & Bradshaw JL (2010). Shared pain: from empathy to synaesthesia. Neuroscience and Biobehavioral Reviews, 34(4), 500-12. PMID: 19857517

Sunday, May 9, 2010

The sense of body ownership and 3D virtual reality

The classic ‘rubber hand illusion’ gives profound insight into our brain’s ability to confuse the real and living with the inanimate. As originally reported by Matthew Botvinick and Jonathan Cohen in 1998, when people have their hand hidden from view and watch a dummy hand being stroked with a paintbrush while their hidden hand is stroked at the same time, they feel the stroking as coming from the dummy hand rather than their real hand.

This illusion demonstrates that our perception of body-part ownership is malleable, complementing evidence I mentioned in a previous post suggesting that patients with amputated limbs can still feel pain in their missing limbs. Studies of the rubber-hand illusion have helped us understand how we recognize our body parts and how that might change if we were to lose or replace a body part. Moreover, they have helped us understand the relationships among vision, somatosensation (the sense of touch) and proprioception (the sense of body-part location).

However, it turns out that the illusion need not be studied only with rubber hands – it can also be induced with a virtual representation of hands. In this ‘virtual arm illusion,’ people who view a 3D virtual arm instead of a rubber arm, under the same conditions as Botvinick and Cohen’s illusion, confuse their real arm with the arm displayed in virtual reality. Since it is easier to experimentally manipulate virtual images and scenes than a rubber object, the virtual arm illusion could be an important tool for building on findings from rubber hand studies.

Indeed, in a new study, the researchers who originally reported the virtual arm illusion demonstrate that the illusion extends beyond the sense of touch. Subjects who had their hand hidden from view wore 3D goggles and watched a virtual hand move either in or out of synchrony with their actual hand movements. The subjects felt a sense of ownership over the virtual hand when it was moving in synchrony with their real hand, but they felt that they had significantly less control over the virtual hand when their real hand was moving differently.

The authors suggest that their findings could provide insight into the use of virtual bodies in therapies and rehabilitation. But what about this (slightly more commercially driven) application: 3D movies where you feel like you are literally part of the film, moving around a scene, feeling yourself touching supposedly virtual objects and supposedly virtual people?

Toying with our sense of body ownership using virtual manipulations could further bridge the fading gap between science-fiction and reality. And help us understand the living electricity encased in our skulls along the way.
Sanchez-Vives MV, Spanlang B, Frisoli A, Bergamasco M, & Slater M (2010). Virtual hand illusion induced by visuomotor correlations. PLoS ONE, 5(4). PMID: 20454463

Thursday, May 6, 2010

fMRI lie-detection evidence rejected in court

Conveniently in keeping with the theme of my last post on fMRI for lie-detection in law, a story appeared on Wired explaining that a lie-detection brain scan could be used as evidence in court for the first time.
A Brooklyn attorney hopes to break new ground this week when he offers a brain scan as evidence that a key witness in a civil trial is telling the truth, Wired.com has learned.
If the fMRI scan is admitted, it would be a legal first in the United States and could have major consequences for the future of neuroscience in court.

The lawyer, David Levin, wants to use that evidence to break a he-said/she-said stalemate in an employer-retaliation case. He’s representing Cynette Wilson, a woman who claims that after she complained to temp agency CoreStaff Services about sexual harassment at a job site, she no longer received good assignments. Another worker at CoreStaff claims he heard her supervisor say that she should not be placed on jobs because of her complaint. The supervisor denies that he said anything of the sort.

So, Levin had the coworker undergo an fMRI brain scan by the company Cephos, which claims to provide “independent, scientific validation that someone is telling the truth.”
This attempt was rejected by the judge, on the grounds that assessing credibility is the role of the jury, not an expert witness. The judge did not even seek to determine how reliable fMRI lie-detection is, and its reliability was not compared to that of traditional methods (e.g. letting the jury decide).

Thus this case did not even reach the point that Frederick Schauer discusses when he argues that whether fMRI has a place in the courtroom should be determined by legal rather than scientific standards. It is well known that the lie-detection abilities of a jury are weak and unreliable. If the fMRI evidence had increased objectivity at all in judging the witness in this case, it might have made sense to allow it.

However, it looks like we’re still stuck clinging to traditional lie-detection methods, not even willing to question whether a new approach makes more sense. Still, I wouldn't dare suggest that fMRI has a place in the courtroom in the absence of comprehensive peer-reviewed research examining this application. The fMRI evidence was probably garbage in this case – but that doesn’t mean it didn’t deserve questioning, and it doesn’t mean fMRI evidence will be garbage in every future case. Brain scans performed by companies with a financial stake in the outcome should especially be taken with a grain of salt, but we’re likely to see more detailed assessments of this type of evidence in future court cases.

Tuesday, May 4, 2010

Lie-detection and neurolaw: do brain scans have a place in the courtroom?

The legal application of neuroimaging for lie-detection in courtrooms has been criticized by scientists on a number of grounds. Firstly, results of fMRI studies on experimental subjects such as undergraduate students cannot necessarily be applied to people offering evidence in a court. Secondly, lie-detection neuroimaging studies are designed such that lies are ‘instructed,’ meaning that a lie in the laboratory is not actually a real lie. Thirdly, much of the research on lie-detection with fMRI has been conducted by private companies, such as No Lie MRI, who do not publish their findings in peer-reviewed journals and who hire scientists with vested interests in study outcomes.

Despite these criticisms, an article by law professor Frederick Schauer recently appeared in Trends in Cognitive Sciences, arguing that the suitability of neuroimaging as a tool for the courtroom should be determined according to legal and not scientific standards. Principal among his arguments is the claim that current legal methods of lie-detection are not scientifically valid in any sense, and that if neuroimaging provides even slightly higher validity, it should be used in legal cases. Schauer points out that “[r]esearch shows that ordinary people’s ability to distinguish truth from lies rarely rises above random, and juries are unlikely to do better.” He follows this up by stating that “...the admissibility of neural lie-detection evidence must be based on an evaluation of the realistic alternatives within the legal system and not on a non-comparative assessment of whether neural lie-detection meets the standards that scientists use for scientific purposes.”

The argument is certainly interesting, and scientists should be able to appreciate it. Scientists are trained to be cautious and skeptical, accepting only findings that have a 5% or lower chance of being attributable to error. These standards are often even stricter for fMRI studies. However, if a jury can correctly detect lies only 50% of the time, brain scans that are right 60% of the time are simply more reliable.
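To see why even a modest accuracy edge matters, here is a minimal back-of-the-envelope sketch. The 50% and 60% figures are the illustrative numbers used above (chance-level jury accuracy versus a hypothetically slightly better scan), not results from any particular study:

```python
# Compare two lie-detectors over many independent credibility judgments:
# a chance-level "jury" (50% accurate) and a hypothetical scan (60% accurate).
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """Probability of at least k correct calls out of n, each correct with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 100                       # hypothetical number of credibility judgments
jury_acc, scan_acc = 0.50, 0.60

print(f"Expected correct calls out of {n}: jury {jury_acc * n:.0f}, scan {scan_acc * n:.0f}")
print(f"P(at least 60 of {n} correct): jury {p_at_least(60, n, jury_acc):.3f}, "
      f"scan {p_at_least(60, n, scan_acc):.3f}")
# Getting at least 60 of 100 calls right is far more likely for the 60%-accurate
# method than for the chance-level one, even though 60% accuracy is nowhere near
# the level of certainty scientists usually demand.
```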

Yes, there may be differences between experimental subjects telling instructed lies and real-life defendants being put to the test. But Schauer argues that “...if the ease of telling an instructed lie in the laboratory correlates with the ease of telling a real lie outside the laboratory, research on instructed lies is no longer irrelevant to detecting real lies.”

This has not yet been demonstrated, and Schauer admits that the use of fMRI for lie-detection is probably unwarranted at this point. Several obstacles first need to be overcome, but fMRI in law seems to hold newfound promise in the face of scientific criticism when Schauer’s idea of comparing neuroimaging to current, perhaps less effective, methods of lie-detection is taken into consideration.

Still, scientists who understand the proper use of fMRI need to develop methods to ensure that the lie-detection application is practical and effective. Many neuroimaging studies contain ‘outlier’ subjects whose results do not conform to what is found in other subjects. Sometimes the outlier did not perform the task correctly, resulting in brain activity that reflects attention to the wrong stimuli or a lack of attention to the task altogether. A smart liar might be able to play with his or her attention to the lie-detection task at hand, resulting in skewed data. And since fMRI studies typically require responses made with button presses rather than speech, intentionally distracting thoughts may be all the easier to deploy.

More fMRI studies on real-life-style lie-detection need to be conducted before a courtroom introduction is warranted. But over-skepticism is unwarranted when lie-detection systems of today’s courts are flawed and unreliable.

Schauer F (2010). Neuroscience, lie-detection, and the law: contrary to the prevailing view, the suitability of brain-based lie-detection for courtroom or forensic use should be determined according to legal and not scientific standards. Trends in Cognitive Sciences, 14(3), 101-3. PMID: 20060772

Monday, May 3, 2010

Should students get "dope-tested" for cognitive enhancers?

Student use of “smart drugs” to increase academic performance is quite a common practice these days. According to a recent article in the Guardian and a publication in Nature, up to 25% of students at some universities use drugs such as Ritalin and modafinil as cognitive enhancers. These agents are relatively easy to acquire, and proponents of their use claim that they cause few adverse side effects.

Those who are for the use of neuroenhancers also argue that taking a pill is no different than having a cup of coffee. Both caffeine and modafinil are drugs (although they have different mechanisms of action) that help students stay up and do last-minute studying, but one substance is legal while the other isn’t.

If this argument is taken into consideration, however, an important follow-up question is: where do you draw the line between “fair” and unfair cognitive enhancement? Would a student who supports modafinil use to improve academic performance also support, say, special brain surgeries for students to help them increase cognition and exam scores?

While this might seem like a ridiculous scenario on the surface, we could be facing related questions in the future, especially if no measures are soon taken to curb smart drug use among students. In a 2008 case study of a single patient, Andres Lozano and colleagues at the University of Toronto found that deep brain stimulation (applying highly localized electric current to neurons) in a specific brain area dramatically enhanced certain cognitive abilities. After the patient was stimulated in the fornix, a major output of the hippocampus, he demonstrated significantly increased scores on measures of verbal memory and spatial learning, and his IQ increased by 9 points. Lozano is currently testing deep brain stimulation in Alzheimer’s patients to see if their cognitive deficits can be reversed or alleviated by the technique.

The potential of deep brain stimulation to positively affect cognition is promising for a few reasons. Lozano wasn’t trying to enhance his patient’s cognitive ability when he stumbled upon this effect; the patient was being stimulated near the fornix in an attempt to suppress his appetite in a case of treatment-resistant obesity. Thus, even though the observed cognitive improvements were drastic, the fornix might not necessarily be the best place to stimulate in order to enhance cognitive function. Additionally, studies of Parkinson’s patients as well as animal models have revealed supportive effects of deep brain stimulation on cognition.

Now imagine a future in which cognitive-enhancing electrodes can be inserted into the brain using a relatively non-invasive method, and this treatment is commonly used in the management of cognitive decline and Alzheimer’s disease. Just as students now seek out drugs meant for treating ADHD and insomnia, would students of the future also jump at the opportunity to try this cognitive therapy meant for Alzheimer’s? Is this still the same thing as drinking a cup of coffee, or does everything change when the treatment extends beyond the realm of drug therapy?

In an environment so competitive that students longing for straight A’s are willing to sacrifice sleep and neglect basic self-care, it would be reasonable to assume that some students are willing to do literally whatever it takes to achieve their goals. In the absence of defined limits and regulations, cognitive enhancement to increase academic performance can and will become the rule rather than the exception.