
[Image: person playing chess. A high IQ and education won’t necessarily protect you from highly irrational behavior—and it may sometimes amplify your errors. Credit: Pexels]

It is June 17, 1922, and two middle-aged men—one short and squat, the other tall and lumbering with a walrus moustache—are sitting on the beach in Atlantic City, New Jersey. They are Harry Houdini and Arthur Conan Doyle—and by the end of the evening, their friendship will never be the same again.

It ended as it began—with a séance. Spiritualism was all the rage among London’s wealthy elite, and Conan Doyle was a firm believer, attending five or six gatherings a week. He even claimed that his wife Jean had some psychic talent, and that she had started to channel a spirit guide, Phineas, who dictated where they should live and when they should travel.

Houdini, in contrast, was a skeptic, but he still claimed to have an open mind, and on a visit to England two years previously, he had contacted Conan Doyle to discuss his recent book on the subject. Now Conan Doyle was in the middle of an American book tour, and he invited Houdini to join him in Atlantic City.

The visit had begun amicably enough. Houdini had helped to teach Conan Doyle’s boys to dive, and the group were resting at the seafront when Conan Doyle decided to invite Houdini up to his hotel room for an impromptu séance, with Jean as the medium. He knew that Houdini had been mourning the loss of his mother, and he hoped that his wife might be able to make contact with the other side.

And so they returned to the Ambassador Hotel, closed the curtains, and waited for inspiration to strike. Jean sat in a kind of trance, a pencil in one hand, as the men watched. She held it poised over the writing pad until her hand began to fly wildly across the page. “Oh, my darling, thank God, at last I’m through,” the spirit began to write. “I’ve tried oh so often—now I am happy…” By the end of the séance, Jean had written around twenty pages in “angular, erratic script.”

Her husband was utterly bewitched—but Houdini was less than impressed. Why had his mother, a Jew, professed herself to be a Christian? How had this Hungarian immigrant written her messages in perfect English—“a language which she had never learned!”? And why did she not bother to mention that it was her birthday?

Meeting these two men for the first time, you would have been forgiven for expecting Conan Doyle to be the more critical thinker. Yet it was the professional illusionist, a Hungarian immigrant whose education had ended at the age of twelve, who could see through the fraud.

While decades of psychological research have documented humanity’s more irrational tendencies, it is only relatively recently that scientists have started to measure how that irrationality varies between individuals, and whether that variance is related to measures of intelligence. They are finding that the two are far from perfectly correlated: it is possible to have a very high IQ or SAT score while still performing badly on these new tests of rationality—a mismatch known as “dysrationalia.” Indeed, in some situations intelligence and education may even amplify your mistakes.

A true recognition of dysrationalia—and its potential for harm—has taken decades to blossom, but the roots of the idea can be found in the now legendary work of two Israeli researchers, Daniel Kahneman and Amos Tversky, who identified many cognitive biases and heuristics (quick-and-easy rules of thumb) that can skew our reasoning.

One of their most striking experiments asked participants to spin a “wheel of fortune,” which landed on a number between 1 and 100, before considering general knowledge questions—such as estimating the number of African countries that are represented in the UN. The wheel of fortune should, of course, have had no influence on their answers—but the effect was quite profound. The lower the number on the wheel, the smaller their estimates: the arbitrary value had planted a figure in their minds, “anchoring” their judgment.

You have probably fallen for anchoring yourself many times while shopping during sales. Suppose you are looking for a new TV. You had expected to pay around $150, but then you find a real bargain: a $300 item reduced to $200. Seeing the original price anchors your perception of what is an acceptable price to pay, meaning that you will go above your initial budget.

Other notable biases include framing (the fact that you may change your opinion based on the way information is phrased), the sunk cost fallacy (our reluctance to give up on a failing investment even if we will lose more trying to sustain it), and the gambler’s fallacy—the belief that if the roulette wheel has landed on black, it’s more likely the next time to land on red. The probability, of course, stays exactly the same.
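To see why the gambler’s fallacy really is a fallacy, it can help to make the claim concrete. The short simulation below (an illustrative sketch, not from the book) spins a simplified red/black wheel a million times and compares the overall frequency of red with its frequency immediately after a black result; if past spins mattered, the two numbers would diverge.

```python
# A minimal simulation of the gambler's fallacy, assuming a fair wheel
# simplified to red/black outcomes only (ignoring the green zero).
import random

random.seed(42)
spins = [random.choice(["red", "black"]) for _ in range(1_000_000)]

# Overall frequency of red.
p_red = spins.count("red") / len(spins)

# Frequency of red on spins that immediately follow a black result.
after_black = [curr for prev, curr in zip(spins, spins[1:]) if prev == "black"]
p_red_after_black = after_black.count("red") / len(after_black)

print(f"P(red) overall:          {p_red:.4f}")
print(f"P(red | previous black): {p_red_after_black:.4f}")
# Both come out at roughly 0.50 -- the previous spin carries no information.
```

Each spin is independent, so conditioning on the previous outcome changes nothing: the wheel has no memory.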

Given these findings, many cognitive scientists divide our thinking into two categories: “system 1,” intuitive, automatic, “fast thinking” that may be prey to unconscious biases; and “system 2,” “slow,” more analytical, deliberative thinking. According to this view—called dual-process theory—many of our irrational decisions come when we rely too heavily on system 1, allowing those biases to muddy our judgment.

It is difficult to overestimate the influence of this work, but none of the early studies by Kahneman and Tversky had tested whether our irrationality varies from person to person. Are some people more susceptible to these biases, while others are immune, for instance? And how do those tendencies relate to our general intelligence? Conan Doyle’s story is surprising because we intuitively expect more intelligent people, with their greater analytical minds, to act more rationally—but as Tversky and Kahneman had shown, our intuitions can be deceptive.

If we want to understand why smart people do dumb things, these are vital questions.

During a sabbatical at the University of Cambridge in 1991, a Canadian psychologist called Keith Stanovich decided to address these issues head on. His wife specialized in learning difficulties, and he had long been interested in the ways that some mental abilities may lag behind others; he suspected that rationality would be no different. The result was an influential paper introducing the idea of dysrationalia as a direct parallel to other disorders like dyslexia and dyscalculia.

It was a provocative concept—aimed as a nudge in the ribs to all the researchers examining bias. “I wanted to jolt the field into realizing that it had been ignoring individual differences,” Stanovich told me.

Stanovich emphasizes that dysrationalia is not just limited to system 1 thinking. Even if we are reflective enough to detect when our intuitions are wrong, and override them, we may fail to use the right “mindware”—the knowledge and attitudes that should allow us to reason correctly. If you grow up among people who distrust scientists, for instance, you may develop a tendency to ignore empirical evidence, while putting your faith in unproven theories. Greater intelligence wouldn’t necessarily stop you forming those attitudes in the first place, and it is even possible that your greater capacity for learning might then cause you to accumulate more and more “facts” to support your views.

Stanovich has now spent more than two decades building on the concept of dysrationalia with a series of carefully controlled experiments.

To understand his results, we need some basic statistical theory. In psychology and other sciences, the relationship between two variables is usually expressed as a correlation coefficient, which runs from -1 to 1. For the positive relationships at issue here, 0 means no relationship at all, while a perfect correlation would have a value of 1—the two parameters would essentially be measuring the same thing. That is unrealistic for most studies of human health and behavior (which are shaped by so many variables), and many scientists would consider a “moderate” correlation to lie between 0.4 and 0.59.
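If it helps to see how such a coefficient is computed, here is a small sketch in Python. The data are synthetic (not Stanovich’s): one variable stands in for an intelligence score, the other mixes a small dose of that score with independent noise, producing a weak correlation of roughly 0.2.

```python
# An illustrative sketch of a correlation coefficient, using made-up data.
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
iq = [random.gauss(100, 15) for _ in range(10_000)]
# "Rationality" here is a small dose of IQ plus mostly independent noise.
rationality = [0.2 * (x - 100) / 15 + random.gauss(0, 1) for x in iq]

print(f"r = {pearson_r(iq, rationality):.2f}")  # ~0.20: a weak correlation
```

In real studies the coefficient is estimated from participants’ actual scores, but the interpretation is the same: the closer to 0, the less one measure tells you about the other.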

Using these measures, Stanovich found that the relationships between rationality and intelligence were generally very weak. SAT scores revealed a correlation of just 0.19 with measures of anchoring, for instance. Intelligence also appeared to play only a tiny role in the question of whether we are willing to delay immediate gratification for a greater reward in the future, or whether we prefer a smaller reward sooner—a tendency known as “temporal discounting.” In one test, the correlation with SAT scores was as small as 0.02. That’s an extraordinarily modest correlation for a trait that many might assume comes hand in hand with a greater analytical mind. The sunk cost bias also shows almost no relationship to SAT scores.

You might at least expect that more intelligent people could learn to recognize these flaws. In reality, most people assume that they are less vulnerable than other people, and this is equally true of the “smarter” participants. Indeed, in one set of experiments studying some of the classic cognitive biases, Stanovich found that people with higher SAT scores actually had a slightly larger “bias blind spot” than people who were less academically gifted. “Adults with more cognitive ability are aware of their intellectual status and expect to outperform others on most cognitive tasks,” Stanovich told me. “Because these cognitive biases are presented to them as essentially cognitive tasks, they expect to outperform on them as well.”

Stanovich has now refined and combined many of these measures into a single test, which is informally called the “rationality quotient.” He emphasizes that he does not wish to devalue intelligence tests—they “work quite well for what they do”—but to improve our understanding of these other cognitive skills that may also determine our decision making, and place them on an equal footing with the existing measures of cognitive ability.

“Our goal has always been to give the concept of rationality a fair hearing—almost as if it had been proposed prior to intelligence,” he wrote in his scholarly book on the subject. It is, he says, a “great irony” that the thinking skills explored in Kahneman’s Nobel Prize-winning work are still neglected in our most well-known assessment of cognitive ability.

After years of careful development and verification of the various sub-tests, the first iteration of the “Comprehensive Assessment of Rational Thinking” was published at the end of 2016. Besides measures of the common cognitive biases and heuristics, it also included probabilistic and statistical reasoning skills—such as the ability to assess risk—that could improve our rationality, and questionnaires concerning contaminated mindware such as anti-science attitudes.

For a taster, consider the following question, which aims to test the “belief bias.” Your task is to consider whether the conclusion follows logically, based only on the opening two premises.

All living things need water.

Roses need water.

Therefore, roses are living things.

What did you answer? According to Stanovich’s work, 70 percent of university students believe that this is a valid argument. But it isn’t, since the first premise only says that “all living things need water”—not that “all things that need water are living.”

If you still struggle to see why the argument fails, compare it to the following statements:

All insects need oxygen.

Mice need oxygen.

Therefore, mice are insects.

The logic of the two arguments is exactly the same—but it is far easier to notice the flaw in the reasoning when the conclusion clashes with your existing knowledge. In the first example, you have to put aside your preconceptions and think carefully and critically about the specific statements at hand, to avoid judging the argument valid just because its conclusion fits what you already know.
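In formal terms, the rose argument runs the first premise backwards, a pattern logicians call affirming the consequent. Here is a quick sketch of the two argument shapes (my notation, not Stanovich’s):

```latex
% Let L(x) mean "x is a living thing" and W(x) mean "x needs water".
% Valid (modus ponens): the rule plus its antecedent yields the consequent.
\forall x\,\bigl(L(x) \to W(x)\bigr),\ L(\mathrm{rose}) \;\vdash\; W(\mathrm{rose})
% Invalid (affirming the consequent): the shape of the rose argument.
\forall x\,\bigl(L(x) \to W(x)\bigr),\ W(\mathrm{rose}) \;\nvdash\; L(\mathrm{rose})
```

Anything that needs water without being alive (a car radiator, say) makes both premises true and the conclusion false—which is exactly the gap the mice-and-oxygen version exposes.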

When he combined all these sub-tests, Stanovich found that the overall correlation with commonly used measures of cognitive ability was often moderate: on one batch of tests, the correlation coefficient with SAT scores was around 0.47, for instance. Some overlap was to be expected, especially since several of the rationality quotient’s measures, such as probabilistic reasoning, would be aided by mathematical ability and other aspects of cognition measured by academic tests. “But that still leaves enough room for the discrepancies between rationality and intelligence that lead to smart people acting foolishly,” Stanovich said. His findings fit with many other recent results showing that critical thinking and intelligence represent two distinct entities, and that those other measures of decision making can be useful predictors of real-world behaviors.

With further development, the rationality quotient could be used in recruitment to assess the quality of a potential employee’s decision making; Stanovich told me that he has already had significant interest from law firms, financial institutions, and executive headhunters.

Stanovich hopes his test may also be a useful tool to assess how students’ reasoning changes over a school or university course. “This, to me, would be one of the more exciting uses,” Stanovich said. With that data, you could then investigate which interventions are most successful at cultivating more rational thinking styles.

If we return to that séance in Atlantic City, Arthur Conan Doyle’s behavior would certainly seem to fit neatly with theories of dysrationalia. According to dual-process (fast/slow thinking) theories, this could just be down to cognitive miserliness: people who believe in the paranormal tend to rely on their gut feelings and intuitions when thinking about the sources of their beliefs, rather than reasoning in an analytical, critical way.

This may be true for many people with vaguer, less well-defined beliefs, but particular elements of Conan Doyle’s biography suggest that his behavior can’t be explained quite so simply. Often, it seemed as if he were using analytical reasoning from system 2 to rationalize his opinions and dismiss the evidence. Rather than thinking too little, he was thinking too much.

Psychologists call this “motivated reasoning”—a kind of emotionally charged thinking that leads us to demolish the evidence that questions our beliefs and build increasingly ornate arguments to justify them. This is a particular problem when a belief sits at the core of our identity, and in these circumstances greater intelligence and education may actually increase your foolish thinking. (This is similar to Stanovich’s concept of “contaminated mindware”—in which our brain has been infected by an irrational idea that then skews our later thinking.)

Consider people’s beliefs about issues such as climate change. Among Democrats, the pattern is exactly as you would hope: the more educated someone is, the more likely they are to endorse the scientific evidence that carbon emissions generated by humans are leading to global warming. Among Republicans, however, the exact opposite is true: the more educated someone is, the less likely they are to accept the scientific evidence.

This same polarization can be seen on many other charged issues, such as stem cell research or evolution and creationism, with more educated individuals applying their brainpower to protect their existing opinions, even when those opinions contradict the scientific consensus. It can also be observed in beliefs about certain political conspiracy theories. When it comes to such tightly held beliefs, higher intelligence and knowledge become tools for propaganda rather than truth seeking, amplifying our errors.

The unfortunate conclusion is that, even if you happen to be rational in general, it’s possible that you may still be prone to flawed reasoning on certain questions that matter most to you. Conan Doyle’s beliefs were certainly of this kind: spiritualism seems to have offered him enormous comfort throughout his life.

Following their increasingly public disagreement, Houdini lost all respect for Conan Doyle; he had started the friendship believing that the writer was an “intellectual giant” and ended it by writing that “one must be half-witted to believe some of these things.” But given what we now know about rationality, the very opposite may be true: only an intellectual giant could be capable of believing such things.


David Robson is a senior journalist at the BBC and author of The Intelligence Trap: Why Smart People Make Dumb Mistakes (WW Norton). He is @d_a_robson on Twitter. His website is www.davidrobson.me.

Excerpted from The Intelligence Trap: Why Smart People Make Dumb Mistakes. Copyright © 2019 by David Robson. Used with permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.