Why Smart People Do Stupid Things
U of T Magazine
By Kurt Kleiner
How can someone so smart be so stupid? We’ve all asked this question after
watching a perfectly intelligent friend or relative pull a boneheaded
move.
People buy high and sell low. They believe their
horoscope. They figure it can’t happen to them. They bet it all on
black because black is due. They supersize their fries and order the
diet Coke. They talk on a cellphone while driving. They throw good
money after bad. They bet that a financial bubble will never burst.
You’ve probably done something similarly stupid. So have I. Professor Keith Stanovich
should know better, but he’s made stupid mistakes, too.
“We overbid by $30,000 on a house once,” he laughs. “Probably we overpaid for it. All
of the books tell you, ‘Don’t fall in love with one house; fall in love
with four houses.’ We violated that rule.” Stanovich is an adjunct
professor of human development and applied psychology at the University
of Toronto who studies intelligence and rationality. The reason smart
people can sometimes be stupid, he says, is that intelligence and
rationality are different.
“There is a narrow set of cognitive
skills that we track and that we call intelligence. But that’s not the
same as intelligent behaviour in the real world,” Stanovich says.
He’s even coined a term to describe the failure to act rationally despite adequate intelligence: “dysrationalia.”
How we define and measure intelligence has been controversial since at
least 1904, when Charles Spearman proposed that a “general intelligence
factor” underlies all cognitive function. Others argue that
intelligence is made up of many different cognitive abilities. Some
want to broaden the definition of intelligence to include emotional and social intelligence.
Stanovich believes that the intelligence
that IQ tests measure is a meaningful and useful construct. He’s not
interested in expanding our definition of intelligence. He’s happy to
stick with the cognitive kind. What he argues is that intelligence by
itself can’t guarantee rational behaviour.
Earlier this year,
Yale University Press published Stanovich’s book What Intelligence
Tests Miss: The Psychology of Rational Thought. In it, he proposes a
whole range of cognitive abilities and dispositions independent of
intelligence that have at least as much to do with whether we think and
behave rationally. In other words, you can be intelligent without being
rational. And you can be a rational thinker without being especially intelligent.
Time for a pop quiz. Try to solve this problem
before reading on. Jack is looking at Anne, but Anne is looking at
George. Jack is married but George is not. Is a married person looking
at an unmarried person?
Yes / No / Cannot be determined
More than 80 per cent of people answer this question incorrectly. If you
concluded that the answer cannot be determined, you’re one of them. (So
was I.) The correct answer is, yes, a married person is looking at an unmarried person.
Most of us believe that we need to know if
Anne is married to answer the question. But think about all of the
possibilities. If Anne is unmarried, then a married person ( Jack) is
looking at an unmarried person (Anne). If Anne is married, then a
married person (Anne) is looking at an unmarried person (George).
Either way, the answer is yes.
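The case analysis can be checked mechanically. Here is a minimal Python sketch (the function name and data layout are illustrative, not from the article):

```python
# Jack is married, George is not; Anne's status is unknown, so try both cases.
def married_looking_at_unmarried(anne_married: bool) -> bool:
    married = {"Jack": True, "Anne": anne_married, "George": False}
    looking = [("Jack", "Anne"), ("Anne", "George")]
    return any(married[a] and not married[b] for a, b in looking)

# The answer is "yes" in both cases, so it CAN be determined.
assert married_looking_at_unmarried(True)
assert married_looking_at_unmarried(False)
```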
Most people have the intelligence to figure this out if you tell them
something like “think logically”
or “consider all the possibilities.” But unprompted, they won’t bring
their full mental faculties to bear on the problem.
And that’s a
major source of dysrationalia, Stanovich says. We are all “cognitive
misers” who try to avoid thinking too much. This makes sense from an
evolutionary point of view. Thinking is time-consuming, resource
intensive and sometimes counterproductive. If the problem at hand is
avoiding the charging sabre-toothed tiger, you don’t want to spend more
than a split second deciding whether to jump into the river or climb a tree.
So we’ve developed a whole set of heuristics and biases to limit
the amount of brainpower we bring to bear on a problem. These techniques
provide rough and ready answers that are right a lot of the time – but not always.
For instance, in one experiment, a researcher
offered subjects a dollar if, in a blind draw, they picked a red jelly
bean out of a bowl of mostly white jelly beans. The subjects could
choose between two bowls. One bowl contained nine white jelly beans and
one red one. The other contained 92 white and eight red ones. Thirty to
40 per cent of the test subjects chose to draw from the larger bowl,
even though most understood that an eight per cent chance of winning
was worse than a 10 per cent chance. The visual allure of the extra red
jelly beans overcame their understanding of the odds.
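The arithmetic the cognitive miser skips here is a single comparison, sketched in Python:

```python
# (red beans, total beans) for each bowl in the experiment
small_bowl = (1, 10)
large_bowl = (8, 100)

p_small = small_bowl[0] / small_bowl[1]   # 0.10
p_large = large_bowl[0] / large_bowl[1]   # 0.08

# Despite holding more red beans, the large bowl is the worse bet.
assert p_small > p_large
```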
Or consider this problem. There’s a disease outbreak expected to kill 600
people if no action is taken. There are two treatment options. Option A
will save 200 people. Option B gives a one-third probability that 600
people will be saved, and a two-thirds probability that no one will be
saved. Most people choose A. It’s better to guarantee that 200 people
will be saved than to risk everyone dying.
But ask the question this
way – Option A means 400 people will die. Option B gives a one-third
probability that no one will die and two-thirds probability that 600
will die – and most people choose B. They’ll risk killing everyone on
the lesser chance of saving everyone.
The trouble, from a
rational standpoint, is that the two scenarios are identical. All
that’s different is that the question is restated to emphasize the 400
certain deaths from Option A, rather than the 200 lives saved. This is
called the “framing effect.” It shows that how a question is asked
dramatically affects the answer, and can even lead to a contradictory conclusion.
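That the two framings describe identical outcomes can be verified with exact arithmetic. A short Python sketch, using the fractions module to avoid floating-point rounding:

```python
from fractions import Fraction

total = 600

# "Lives saved" framing
save_A = 200
save_B = Fraction(1, 3) * total   # expected survivors under Option B

# "Deaths" framing: Option A means 400 die; B kills everyone 2/3 of the time
die_A = 400
die_B = Fraction(2, 3) * total    # expected deaths under Option B

# Both framings describe exactly the same outcomes.
assert save_A == total - die_A
assert save_B == total - die_B
```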
Then there’s the “anchoring effect.” In one experiment,
researchers spun a wheel that was rigged to stop at either number 10 or
65. When the wheel stopped, the researchers asked their subjects if the
percentage of African countries in the United Nations is higher or
lower than that number. Then the researchers asked the subjects to
estimate the actual percentage of African countries in the UN. The
people who saw the larger number guessed significantly higher than
those who saw the lower number. The number “anchored” their answers,
even though they thought the number was completely arbitrary and irrelevant.
The list goes on. We look for evidence that
confirms our beliefs and discount evidence that discredits them
(confirmation bias). We evaluate situations from our own perspective
without considering the other side (“myside” bias). We’re influenced
more by a vivid anecdote than by statistics. We are overconfident about
how much we know. We think we’re above average. We’re certain that
we’re not affected by biases the way others are.
Stanovich identifies another source of dysrationalia – what he calls
“mindware gaps.” Mindware, he says, is made up of learned cognitive
rules, strategies and belief systems. It includes our understanding of
probabilities and statistics, as well as our willingness to consider
alternative hypotheses when trying to solve a problem. Mindware is
related to intelligence in that it’s learned. However, some highly
intelligent, educated people never acquire the appropriate mindware.
People can also suffer from “contaminated mindware,” such as
superstition, which leads to irrational decisions.
Stanovich argues that dysrationalia has important real-world consequences.
They can affect the financial decisions you make, the government
policies you support, the politicians you elect and, in general, your
ability to build the life you want. For example, Stanovich and his
colleagues found that problem gamblers score lower than most people on
a number of rational thinking tests. They make more impulsive
decisions, are less likely to consider the future consequences of their
actions and are more likely to believe in lucky and unlucky numbers.
They also score poorly in understanding probability and statistics. For
instance, they’re less likely to understand that when tossing a coin,
five heads in a row does not make tails more likely to come up on the
next toss. Their dysrationalia likely makes them not just bad gamblers,
but problem gamblers – people who keep gambling despite hurting
themselves, their family and their livelihood.
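The independence of coin tosses is easy to check by simulation. A Python sketch (the trial count and seed are arbitrary choices made for this illustration):

```python
import random

random.seed(0)

# Simulate many six-toss sequences; whenever the first five tosses are
# all heads, record whether the sixth toss comes up tails.
tails_after_streak = trials = 0
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(6)]  # True = heads
    if all(flips[:5]):                 # five heads in a row
        trials += 1
        tails_after_streak += not flips[5]

ratio = tails_after_streak / trials
# The sixth toss is still roughly 50/50 -- past flips don't matter.
assert 0.45 < ratio < 0.55
```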
From early in his
career, Stanovich has followed the pioneering heuristics and biases
work of Daniel Kahneman, who won a Nobel Prize in economics, and his
colleague Amos Tversky. In 1994, Stanovich began comparing people’s
scores on rationality tests with their scores on conventional
intelligence tests. What he found is that they don’t have a lot to do
with one another. On some tasks, there is almost a complete
dissociation between rational thinking and intelligence.
You might, for example, think more rationally than someone much smarter
than you. Likewise, a person with dysrationalia is almost as likely to
have higher than average intelligence as he or she is to have lower
than average intelligence.
To understand where the rationality
differences between people come from, Stanovich suggests thinking of
the mind as having three parts. First is the “autonomous mind” that
engages in problematic cognitive shortcuts. Stanovich calls this “Type
1 processing.” It happens quickly, automatically and without conscious awareness.
The second part is the algorithmic mind. It engages in
Type 2 processing, the slow, laborious, logical thinking that
intelligence tests measure.
The third part is the reflective
mind. It decides when to make do with the judgments of the autonomous
mind, and when to call in the heavy machinery of the algorithmic mind.
The reflective mind seems to determine how rational you are. Your
algorithmic mind can be ready to fire on all cylinders, but it can’t
help you if you never engage it.
When and how your reflective
mind springs into action is related to a number of personality traits,
including whether you are dogmatic, flexible, open-minded, able to
tolerate ambiguity or conscientious.
“The inflexible person, for
instance, has trouble assimilating new knowledge,” Stanovich says.
“People with a high need for closure shut down at the first adequate
solution. Coming to a better solution would require more cognitive effort.”
Fortunately, rational thinking can be taught, and
Stanovich thinks the school system should expend more effort on it.
Teaching basic statistical and scientific thinking helps. And so does
teaching more general thinking strategies. Studies show that a good way
to improve critical thinking is to think of the opposite. Once this
habit becomes ingrained, it helps you to not only consider alternative
hypotheses, but to avoid traps such as anchoring, confirmation and myside biases.
Stanovich argues that psychologists should perhaps
develop tests to determine a rationality quotient (RQ) to complement IQ
tests. “I’m not necessarily an advocate of pushing tests on everyone,”
he says. “But if you are going to test for cognitive function, why
restrict testing to just an IQ test, which only measures a restricted
domain of cognitive function?”
How Rational Are You?
Five questions to get you thinking
By Kurt Kleiner
While intelligence as measured by IQ tests is important, so is the ability to
think rationally about problems. The surprise is that less intelligent
people usually perform just as well as highly intelligent people on
problems that test rationality. Here are a few questions that test if
you’re a rational thinker.
1. A bat and ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?
2. Is the following conclusion logically valid?
Premise 1: All living things need water.
Premise 2: Roses need water.
Therefore, roses are living things.
3. The XYZ virus causes a disease in one in every 1,000 people. A test always
correctly indicates if a person is infected. The test has a
false-positive rate of five per cent – in other words, the test wrongly
indicates that the XYZ virus is present in five per cent of the cases
in which the person does not have the virus. What is the probability
that an individual testing positive actually has the XYZ virus?
4. There are four cards on a table. Each has a letter on one side and a number on the other. The cards look like this:
K A 8 5
Here is a rule: If a card has a vowel on its letter side, it has an even
number on its number side. Which card(s) must be turned over to find
out if the rule is true or false?
5. According to a
comprehensive study by the U.S. Department of Transportation, a
particular German car is eight times more likely than a typical family
car to kill the occupants of another car in a crash. The U.S.
Department of Transportation is considering recommending a ban on the
sale of this German car. Do you think the United States should ban the
sale of this car?
Answers
1. Five cents. Many people,
including students at MIT, Princeton and Harvard, automatically answer
10 cents. After all, a dollar plus 10 cents equals $1.10. But that
cognitive shortcut doesn’t work, since it would mean the bat costs only
90 cents more than the ball.
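The answer drops out of a little algebra, which a few lines of Python confirm:

```python
# bat + ball = 1.10 and bat = ball + 1.00
# Substituting: (ball + 1.00) + ball = 1.10, so 2 * ball = 0.10
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

assert round(ball, 2) == 0.05         # five cents, not ten
assert round(bat - ball, 2) == 1.00   # the bat really costs $1 more
```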
2. No, it is not logical, even
though 70 per cent of university students given the problem think it
is. Although the conclusion is true, it doesn’t follow from the
premises. Consider the same problem worded in a different way:
Premise 1: All insects need oxygen.
Premise 2: Mice need oxygen.
Therefore, mice are insects.
In the original problem, the tendency is to be a cognitive miser, and let
the obvious truth of the conclusion substitute for reasoning about its
logical validity. (In the second problem, though, our cognitive miser
makes the problem easy.)
3. Two per cent. (Most people say 95
per cent.) If one in 1,000 people has the disease, 999 don’t. But with
a five per cent false-positive rate, the test will show that almost 50
of them are infected. Of 51 patients testing positive, only one will
actually be infected. The math here isn’t especially hard. But thinking
the problem through is tricky.
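Working through the numbers makes the two per cent figure concrete. A Python sketch of the calculation:

```python
population = 1000
infected = 1                          # one person in 1,000 has the virus
uninfected = population - infected    # 999 people

true_positives = infected             # the test always catches a real infection
false_positives = 0.05 * uninfected   # 5% of 999, i.e. about 50 people

# Probability that a positive result reflects a real infection
p = true_positives / (true_positives + false_positives)
assert round(p * 100) == 2            # about two per cent
```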
4. A and 5. Ninety per cent of
people get this one wrong, usually by picking A and 8. They think they
need to confirm the rule by looking for a vowel on the other side of
the 8. But the rule only says that vowels must have even numbers, not
that consonants can’t. An odd number on the back of the A, or a vowel
on the back of the 5, would show that the rule is false.
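The selection logic amounts to asking which cards could falsify the rule. A Python sketch (the function name is illustrative):

```python
VOWELS = set("AEIOU")

def could_falsify(visible: str) -> bool:
    """Return True if turning this card over could reveal a rule
    violation (a vowel paired with an odd number)."""
    if visible.isalpha():
        return visible in VOWELS       # a vowel might hide an odd number
    return int(visible) % 2 == 1       # an odd number might hide a vowel

# Only A and 5 need to be turned over.
assert [c for c in ["K", "A", "8", "5"] if could_falsify(c)] == ["A", "5"]
```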
5. Technically, there’s no right or wrong answer here. However, 78 per cent of the
people Stanovich sampled thought the German car should be banned. But
when he turned the question around so that Germany was considering
banning an American car (he was quizzing people in the U.S., by the
way), only 51 per cent thought Germany should ban the car. This is an
example of “myside bias” – evaluating a problem from a standpoint that
is biased toward your own situation.