Christmas 1981 was anything but merry for Danny Brown. He was 25 then and spending time with Bobbie Russell, a 28-year-old woman he had recently met.
Before the holidays were over, Brown would be in prison, accused of raping Russell and strangling her with a string of electric lights from a Christmas tree. Soon after, he was sentenced to life.
Brown passionately defended his innocence, and in January 2001, his supporters leapt on advances in DNA testing to prove that a semen sample from Russell that had been kept refrigerated at a crime lab in Toledo, Ohio, was not Brown’s. But that wasn’t enough to get Brown released. Russell’s 6-year-old son, the only witness to the crime, initially said that two people attacked his mother. If he was telling the truth, Brown conceivably could have been an accomplice.
So Brown took and passed a polygraph test. And in April 2001, based on both the DNA and lie detector results, an Ohio judge freed Brown, who had by then spent half his life in prison for a crime he didn’t commit. “The polygraph helped liberate Brown from a 19-year-long nightmare,” says Jim McCloskey, the founder of Centurion Ministries in Princeton, New Jersey, an organization that helps to free falsely convicted people and was instrumental in Brown’s release.
The ability to distinguish truth from lie, to tell the guilty from the innocent, is one of the most basic challenges in all human interactions. Everyone lies, and everyone tries to spot a liar. No one is consistently successful at either. Hence the strong appeal of a scientific device that could unequivocally separate fact from fiction; today, the attempt to perfect such a device is spurring investigators to delve into the complexities of biology and human nature.
Until now, the modern polygraph, a 66-year-old invention, has been the only broadly accepted technology for exposing liars. More than 2,000 private examiners around the country routinely administer polygraph tests to determine facts in cases ranging from marital infidelity to employee theft. In numerous well-publicized instances, polygraphs have uncovered deeply buried lies. Susan Smith, who drowned her two young sons in 1994 in a car she let roll into a South Carolina lake, confessed after failing three polygraph exams, though the authorities had no other evidence linking her to the murders.
The U.S. government has long relied on polygraphs when checking the backgrounds of job applicants at intelligence agencies or of people who have access to classified information. And since the terrorist attacks of September 11, federal agencies have escalated their use of lie detectors. The FBI says it is planning to start hooking agents up to polygraphs more often to root out moles like Robert Hanssen, who sold secrets to Moscow for nearly two decades. The Justice Department has begun to test hundreds of workers at facilities where anthrax is stored, in hopes of discovering who sent the deadly spores through the mail last fall.
But even its most ardent advocates admit that the polygraph is accurate only about 90 percent of the time, and that its error rate rises when used for anything other than investigations in which enough evidence exists to formulate specific questions. The polygraph is unreliable because it doesn’t read minds. Rather, it measures fluctuations in the rate and depth of breathing, and in perspiration, blood pressure, and pulse, on the assumption that when people lie, they become agitated. That’s a recipe for false positives. An innocent person can easily become uncontrollably nervous merely from being strapped to a polygraph and peppered with questions that could land him in jail or get him fired. Mistakes occur frequently enough that a 1988 federal law forbids most private companies from using lie detectors to screen prospective employees. In only one state, New Mexico, are polygraph results admissible as evidence in court without prior agreement of both sides.
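The arithmetic behind that concern is worth spelling out. The sketch below works through a purely hypothetical screening scenario; the workforce size, the number of actual moles, and the flat 90 percent accuracy figure are all illustrative assumptions, not data from any agency.

```python
# Hypothetical screening scenario: how a 90-percent-accurate test behaves
# when the thing it screens for is rare. All numbers are illustrative
# assumptions, not figures from any real agency.

employees = 10_000   # assumed size of the screened workforce
spies = 10           # assumed number of actual moles among them
accuracy = 0.90      # the polygraph's claimed accuracy, applied to everyone

innocent = employees - spies
true_positives = accuracy * spies            # spies correctly flagged
false_positives = (1 - accuracy) * innocent  # innocents wrongly flagged
flagged = true_positives + false_positives

print(f"flagged as deceptive: {flagged:.0f}")              # ~1,008 people
print(f"  actual spies among them: {true_positives:.0f}")  # ~9
print(f"  innocents among them: {false_positives:.0f}")    # ~999
# Fewer than 1 in 100 of those flagged is a real spy: when the target
# condition is rare, the false positives swamp the signal.
```

That lopsidedness, not the headline accuracy figure, is what makes broad screening so treacherous.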
There is so much concern about the efficacy of polygraphs, in fact, that the Department of Energy recently sponsored an 18-month study by the National Academy of Sciences to define circumstances in which the device should be allowed. Besides analyzing the effectiveness of polygraphs, the study (which may be released before this article appears) will also evaluate several new lie detection methods that could complement or eventually even replace the polygraph.
What lie detection researchers are looking for is the so-called Pinocchio response: an unmistakable physical sign of deception. To that end, they are experimenting with high-tech brain imaging machines, electroencephalographs, even infrared cameras to find the telltale physiological evidence of a lie. But these researchers are up against more than scientific obstacles; what they have to contend with is the complexity and variety of lies themselves. It’s virtually impossible to recreate a real lie, certainly one whose detection would result in meaningful consequences, in a laboratory.
So investigators must concoct intricately staged simulations and acknowledge that accurately measuring the effectiveness of new lie detection technologies is extremely difficult; in some sense, it’s a quixotic quest. “Lying is probably one of the most complicated things that we do,” says Jennifer Vendemia, a psychology professor at the University of South Carolina.
Still, some of these new approaches are intriguing because they look directly into the brain, seeking out lies at the moment in which they are formed, rather than merely measuring, as the polygraph does, the secondary signs of nervousness such as sweating and a racing heart. That shift represents a potentially huge leap forward, according to Stephen Kosslyn, a psychology professor at Harvard University. “Current lie detection tools,” he says, “are at least one step removed from the organ that’s actually doing the lying.”
These emerging brain-based technologies offer researchers a window into the mind, revealing basic truths about the nature of deception. For one thing, investigators are learning that lies may originate in different parts of the brain: simple denials are formed in one area, elaborate confabulations in another. Moreover, the new experiments offer physical evidence of something that philosophers have long suspected: It takes more mental energy to lie than to tell the truth.
The most advanced equipment being harnessed to measure deception is the functional magnetic resonance imaging (fMRI) machine, which uses strong magnetic fields to induce hydrogen nuclei within brain tissue to emit distinctive radio signals. By mapping those signals onto images of the brain taken in rapid succession, the fMRI monitors the movement of blood and, from that, can determine which areas of the brain are activated by a particular task.
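In statistical terms, “determining which areas are activated” usually means comparing each small volume of brain tissue (a voxel) between task and rest conditions and keeping the voxels whose signal change is too large to be chance. The sketch below simulates that comparison with made-up numbers and an exaggerated effect size; real fMRI analyses model the hemodynamic response and correct for far more confounds.

```python
# Minimal sketch of finding task-activated voxels in fMRI data: compare
# each voxel's signal during task scans against rest scans and keep the
# voxels whose difference is statistically reliable. The data here are
# simulated, not real scanner output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_task, n_rest = 1000, 40, 40

rest = rng.normal(100.0, 2.0, size=(n_voxels, n_rest))
task = rng.normal(100.0, 2.0, size=(n_voxels, n_task))
task[:50] += 3.0  # first 50 voxels "respond": a 3% signal increase

t, p = stats.ttest_ind(task, rest, axis=1)
active = p < 0.05 / n_voxels  # Bonferroni correction across all voxels
print(f"voxels flagged as active: {active.sum()} (50 truly active)")
```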
In a seminal fMRI lie detection study, Daniel Langleben and colleagues at the University of Pennsylvania School of Medicine gave volunteers a playing card and a handheld yes/no clicker. The subjects were told to lie when shown a question that would reveal their card, but to answer other queries honestly. When these people gave truthful answers, the fMRI showed increased activity in parts of the brain related to vision and finger movement. When they lied, the same areas lit up, but so did areas in the front part of the brain that have been shown to regulate decision making in the presence of rival information. “They activate when you make a choice unconsciously,” Langleben says. Which means, he adds, that lying apparently takes more mental effort than telling the truth. “Truth is the baseline,” he says. “St. Augustine was right when he defined deception as intentional denial of truth. If you don’t know the truth, you can’t lie.”
Langleben’s research is noteworthy because it demonstrated that it’s possible to see the physical differences between lying and truth telling within the brain. But though Langleben’s work pinpointed a few locations that are active during deception, he and scientists involved in similar studies caution that lying is a complex behavior, and that it’s likely to be linked to a large number of brain sites, many of which remain unknown.
Harvard’s Kosslyn hopes to make some inroads on this problem by charting the brain activity corresponding to disparate types of lies. Broadly speaking, Kosslyn suspects that the location of a deception within the brain and the amount of energy needed to carry the deception out are determined by whether someone is telling a premeditated lie or an on-the-spot fib. To test this idea, Kosslyn’s team told volunteers to take an actual memory and warp it into a lie. One person could, for instance, imagine that as a high school baseball player he hit 50 home runs when in reality he was a benchwarmer. Later, during an fMRI scan, the volunteers were asked questions and instructed to either stick to the memorized lie, tell the truth, or make up something new on the spot. So far, based on just a smattering of initial results, Kosslyn says that fMRI scans appear to distinguish between these different types of lies. Spontaneous lies seem to activate parts of the frontal lobes that play a crucial role in what’s known as working memory (the short-term store for immediately relevant information), whereas memorized lies don’t appear to affect this part of the brain. “We hope that in fact there will be unique brain signatures for the different types of lies,” Kosslyn says.
Even if fMRI research proves fruitful, the equipment isn’t likely to attain widespread use for lie detection. The machines are bulky, expensive (to buy and to operate), and highly sensitive to motion. “Just twitch a little bit, and it’ll ruin the scan,” says Kosslyn. Some researchers, however, are experimenting with another brain-scanning tool that may turn out to be more practical than fMRI: the electroencephalograph (EEG), which directly measures the electrical output of the brain rather than inferring brain activity from blood flow, as the fMRI does. EEGs are relatively cheap, portable, and unobtrusive.
In one EEG study, University of South Carolina researcher Jennifer Vendemia had student volunteers don a cap fitted with 128 electrodes that record brain waves. The students were then asked to look at a computer screen on which was displayed first a simple sentence that was obviously true or false (“The grass is green,” for example, or “Mickey Mouse shot Abraham Lincoln”), followed by a question: either “True?” or “False?” The volunteers were told to respond by telling the truth sometimes and lying at other times. In another experiment, Vendemia had the students commit a mock crime, in which they raided the office of a make-believe professor and stole a copy of an upcoming exam.
Later, when questioned about the incident while hooked up to the EEG, the volunteers were instructed to claim they weren’t involved.
The results of Vendemia’s studies, so far involving about 260 volunteers, suggest that there are predictable patterns of brain activity when people lie. Characteristic fluctuations in electrical activity, known as event-related potentials, seem to occur less than half a second after someone tells a lie, according to Vendemia.
Scientists have recognized the connection between lying and brain waves since the early 1990s. Before then, it was already known that when a person encounters something unusual (an English word buried in a list of Russian expressions, for example), the brain produces a distinctive wave that’s called a P300 because it occurs about 300 milliseconds after the stimulus. But in 1991, psychologist Emanuel Donchin and Lawrence Farwell, his graduate student at the University of Illinois at Urbana-Champaign, took this “oddball paradigm” a step further. Using a mock-crime scenario, they showed that a “guilty” person produces a P300 when presented with a telltale detail of the incident within a group of unrelated words or images.
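The averaging that underlies this kind of work is simple enough to sketch. The code below simulates stimulus-locked EEG trials (the numbers are invented, not recorded data): averaging many trials cancels the random background activity, leaving the stimulus-evoked wave, and a P300-like bump appears only for the items that are meaningful to the subject.

```python
# Minimal sketch of extracting an event-related potential such as the
# P300: average many stimulus-locked EEG trials so that random background
# activity cancels while the stimulus-evoked wave remains. Trials here
# are simulated, not recorded.
import numpy as np

rng = np.random.default_rng(1)
fs = 500                        # sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)   # 0-800 ms after the stimulus
n_trials = 60

def simulate_trial(evoked: bool) -> np.ndarray:
    """One trial: background noise, plus a P300-like bump if 'evoked'."""
    trial = rng.normal(0.0, 10.0, size=t.size)  # ongoing EEG, in uV
    if evoked:
        trial += 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return trial

# "Probe" items (meaningful to the subject) vs. irrelevant items.
probe = np.mean([simulate_trial(True) for _ in range(n_trials)], axis=0)
irrelevant = np.mean([simulate_trial(False) for _ in range(n_trials)], axis=0)

window = (t >= 0.25) & (t <= 0.45)  # window around 300 ms
print(f"mean amplitude, probe:      {probe[window].mean():+.2f} uV")
print(f"mean amplitude, irrelevant: {irrelevant[window].mean():+.2f} uV")
# A clearly larger positivity to probe items is the oddball/P300 signature.
```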
Building on that research, Farwell last year debuted a commercial lie detection method he calls Brain Fingerprinting. On the strength of it, he was named to Time magazine’s list of Top 100 Next Wave Innovators. Farwell claims his machine can prove whether a suspect was at a crime scene based on whether he or she generates P300 waves when shown images of key details from the crime. For example, suppose a murder victim was wearing a green sweater. Told that one of several articles of clothing was worn by the victim, the killer should produce a P300 response when he sees a picture of the sweater among the others.
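In outline, the decision such a machine has to make is statistical: is the response to the probe item reliably bigger than the response to items the suspect has no reason to recognize? The sketch below shows one simple bootstrap version of such a rule; it is a hypothetical stand-in, not Farwell’s actual proprietary algorithm, and the per-trial amplitudes are simulated.

```python
# Hypothetical decision rule in the spirit of the protocol described
# above (not Farwell's actual algorithm): declare "knowledge present"
# when the response to the probe item (the green sweater) reliably
# exceeds the response to irrelevant items in the P300 window.
import numpy as np

rng = np.random.default_rng(2)

def knowledge_present(probe_trials, irrelevant_trials, n_boot=2000):
    """Bootstrap the difference in mean P300-window amplitude."""
    diffs = []
    for _ in range(n_boot):
        p = rng.choice(probe_trials, size=len(probe_trials))
        i = rng.choice(irrelevant_trials, size=len(irrelevant_trials))
        diffs.append(p.mean() - i.mean())
    # "Present" if at least 95% of resampled differences are positive.
    return np.mean(np.array(diffs) > 0) >= 0.95

# Per-trial mean amplitudes (uV) in the 250-450 ms window, simulated:
probe = rng.normal(5.0, 10.0, size=40)       # elevated: subject knows it
irrelevant = rng.normal(0.0, 10.0, size=40)
print(knowledge_present(probe, irrelevant))  # True for this simulation
```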
But like the polygraph, Brain Fingerprinting has its share of detractors, including Farwell’s former collaborator Donchin. For instance, Donchin says, a person could produce a P300 spike when seeing the green sweater not because he murdered the man wearing it but because he saw a similar one at a store recently on sale for a surprisingly low price. Consequently, there’s no way to ensure that the images shown to a suspect will elicit a response only from a guilty person. Moreover, because Brain Fingerprinting is useful only when specific evidence exists, the U.S. General Accounting Office concluded last October that it is of limited use for the more general screening done by the Pentagon, CIA, FBI, and Secret Service.
For now, those agencies rely on the polygraph. Recognizing the device’s flaws, the government has begun to increase its funding for research into novel kinds of lie detectors, raising the visibility of what had been an obscure corner of science. “Nobody cared what I did two years ago,” says South Carolina’s Vendemia. “Now all of a sudden lots of people are investigating it.”
Not all the research involves high-level brain experiments. For example, researchers at the Mayo Clinic have theorized that blushing around the eyes, detectable only by an infrared camera, is a sign of lying. To test the idea, they asked eight volunteers to stab a mannequin, grab a $20 bill, and leave the room. Twelve others did nothing. Then all 20 participants had to try to convince interrogators that they didn’t take part in the mock crime. Monitoring the subtle changes in temperature around the subjects’ eyes, the scientists correctly identified six of the eight “guilty” subjects and 11 of the 12 “innocent” ones. Similar experiments have found that a split-second hesitation before answering a question is often an indication of lying, as is an almost indiscernible muscle twitch.
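The Mayo counts translate into the standard measures used to grade any diagnostic test. A quick computation, using only the figures reported above (and keeping in mind how small the sample is):

```python
# Accuracy measures implied by the Mayo Clinic thermal-imaging results:
# 6 of 8 "guilty" subjects and 11 of 12 "innocent" ones correctly
# identified. With only 20 subjects, these rates are rough at best.
true_positives, guilty = 6, 8
true_negatives, innocent = 11, 12

sensitivity = true_positives / guilty     # guilty correctly flagged
specificity = true_negatives / innocent   # innocent correctly cleared
accuracy = (true_positives + true_negatives) / (guilty + innocent)

print(f"sensitivity: {sensitivity:.0%}")  # 75%
print(f"specificity: {specificity:.0%}")  # 92%
print(f"overall:     {accuracy:.0%}")     # 85%
```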
These types of lie detection techniques are appealing because they could let authorities test for duplicity quickly and secretly, without having to hook anyone up to a machine. They raise the possibility of, for instance, hidden airport sensors that surreptitiously scan the area around people’s eyes for increased heat to determine whether they’re telling the truth about what’s in their luggage.
For now, though, the general attitude among federal agencies is that as new lie detection technologies emerge, they will likely be used to enhance the polygraph, which is a relatively inexpensive and simple way to test for the truth. Equipment for measuring eye blushing, hesitation, twitching, or even brain activity could be incorporated into the machine, says Andy Ryan, chief of research at the Department of Defense Polygraph Institute, which oversees polygraph examiners for federal agencies. Alternatively, these technologies, particularly high-priced and bulky equipment like fMRI scanners, could become the lie detection equivalent of a second opinion. “It’s like taking a treadmill test to see if your heart is OK,” says Ryan. “Your doctor would never diagnose heart problems from the results of a single treadmill test, which just offers a pattern subject to interpretation and further examination. It’s the same way with a polygraph.”
As investigators begin to unscramble the brain’s role in deception and home in on more accurate ways to measure dishonesty, one thing is for sure: It will be a long time before we learn the whole truth about lying.