Brain scans as lie detectors? One lying thief checks it out
Published: Sunday, January 29, 2006 at 6:01 a.m.
Last Modified: Sunday, January 29, 2006 at 1:14 a.m.
Picture this: your boss is threatening to fire you because he thinks you stole company property. He doesn't believe your denials. Your lawyer suggests you deny it one more time - in a brain scanner that will show you're telling the truth.
Wacky? Science fiction? It might happen this summer.
Just the other day I lay flat on my back as a scanner probed the tiniest crevices of my brain and a computer screen asked, ''Did you take the watch?''
And two outfits called Cephos Corp. and No Lie MRI Inc. say they'll start offering brain scans for lie detection later this year.
''I'd use it tomorrow in virtually every criminal and civil case on my desk'' to check the truthfulness of clients, said attorney Robert Shapiro, best known for defending O.J. Simpson against murder charges.
Shapiro serves as an adviser to entrepreneur Steven Laken and has a financial interest in Cephos, which Laken founded to commercialize the brain-scanning work at the Medical University of South Carolina. That's where I got scanned.
The technology is called functional magnetic resonance imaging, or fMRI. It's a standard tool for studying the brain, but research into using it to detect lies is still in early stages. Nobody really knows yet whether it will prove more accurate than polygraphs, which measure things like blood pressure and breathing rate to look for emotional signals of lying.
But advocates for fMRI say it has the potential to be more accurate, because it zeros in on the source of lying, the brain, rather than using indirect measures. So it may someday provide lawyers with something polygraphs can't: legal evidence of truth-telling that's widely admissible in court. (Courts generally regard polygraph results as unreliable.)
Laken said he's aiming to offer the fMRI service for use in situations like libel, slander and fraud where it's one person's word against another, and perhaps in employee screening by government agencies. Attorneys suggest it would be more useful in civil than most criminal cases, he said.
Of course, there's no telling where the general approach might lead. The idea of using scanners to detect lies has started a buzz among scientists, legal experts and ethicists. Unlike perusing your mail or tapping your phone, this is ''looking inside your brain,'' Hank Greely, a law professor who directs the Stanford Center for Law and the Biosciences, told me a few days before my scan.
It ''does seem to me to be a significant change in our ability . . . to invade what has been the last untouchable sanctuary, the contents of your own mind,'' Greely said. ''It should make us stop and think to what extent we should allow this to be done.''
But Dr. Mark George, the genial neurologist and psychiatrist who let me lie in his scanner and be grilled by his computer, said he doesn't see a privacy problem with the technology.
That's because it's impossible to test people without their consent, he said. Subjects have to cooperate so fully - holding the head still, and reading and responding to the questions, for example - that they have to agree to the scan. ''It really doesn't read your mind if you don't want your mind to be read,'' he said. ''If I were wrongly accused and this were available, I'd want my defense lawyer to help me get this.''
George and colleagues recently reported that the technology spotted lies in 28 out of 31 volunteers. I joined an extension of that study.
That's why I found myself lying in George's fMRI scanner, focused on questions popping up on a computer screen.
Some were easy: Am I awake, is it 2004, do I like movies? Others were a little more challenging: Have I ever cheated on taxes, or gossiped, or deceived a loved one? As instructed, I answered them all truthfully, pushing the ''Yes'' button with my thumb or the ''No'' button with my index finger.
Then, there it was: ''Did you remove a watch from the drawer?''
Just a half-hour or so before, in an adjacent room, I'd been told to remove either a watch or a ring from a drawer and slip it into a locker. This was the mock crime the study used. So I took the watch. As I lay in the scanner I remembered seizing its gold metal band and nestling it into the locker.
So, the computer was asking, did I take the watch?
No, I replied with a jab of my finger. I didn't steal nuthin'.
I lied again and again when asked if I'd taken the watch, but replied truthfully when asked the same questions about the ring. It would be a different computer's job to figure out which I was lying about, the watch or the ring. It would compare the way my brain acted when I responded to those questions versus what my brain did when I responded to routine questions truthfully. Whichever looked more different from the ''truthful'' brain activity would be considered the signature of deceit.
The computer asked me 160 questions over the course of 16 minutes - actually, 80 questions, each asked twice. The verdict would take a few days to produce, since it required a lot of data analysis. In a real-world interrogation, George said, the subject of the questioning would go through an exercise like this ring-or-watch task as well as being quizzed about the topic at hand. That way, if the computer failed on the experimental task, it would be obvious that it couldn't judge the person's truthfulness.
It didn't take any jury to find the truth in my case.
''We nabbed ya,'' George said after sending me the results of my scan. ''It wasn't a close call.''
I was ratted out by the three parts of my brain the technique targets. They'd become more active when I lied about taking the watch than when I truthfully denied taking the ring.
Those areas are involved in juggling the demands of doing several things at once, in thinking about oneself, and in stopping oneself from making a natural response - all things the brain apparently does when it pulls back from blurting the truth and works up a whopper instead, George said.
But ethical and legal experts said they were wary of quickly applying fMRI for spotting lies.
''What's really scary is if we start implementing this before we know how accurate it really is,'' Greely said. ''People could be sent to jail, people could be sent to the death penalty, people could lose their jobs.''
Judy Illes, director of Stanford's program in neuroethics, also has concerns: Could people, including victims of crimes, be coerced into taking an fMRI test? Could it distinguish accurate memories from muddled ones? Her worries multiply if fMRI evidence starts showing up in the courtroom. For one thing, unlike the technical data from a polygraph, it can be used to make brain images that look simple and convincing, belying the complexity of the data behind them, she said.
''You show a jury a picture with a nice red spot, that can have a very strong impact in a very rapid way. . . . We need to understand how juries are going to respond to that information. Will they be open to complex explanations of what the images do and do not mean?''
But George figures that if a perfect lie detector were developed, such practical concerns might not matter. The mere knowledge that one is available, he said, might provoke people to clean up their acts.
''My hope,'' George said, ''would be that it might make the world operate a little bit more openly and honestly.''
On the Net:
Cephos Corp.: http://www.cephoscorp.com
No Lie MRI Inc.: http://www.noliemri.com
fMRI information: http://www.radiologyinfo.org/content/functional-mr.htm