The beauty of the pupil—there is a fact that is not widely known, but it's very unusual—the
pupil normally contracts and dilates rhythmically. It's known as hippus, I think. When people
are engaged in a task—you assign a multiplication task—the pupil dilates, and it stays steady
as a rock—hippus is gone—so the measurement noise is eliminated. I don't know what the
mechanism is, but it's absolutely obvious when you watch it. Measurement noise is eliminated
when people are engaged in a task, so it is more sensitive than the other autonomic indices.
Yes, that's right.
Yes.
The title of your book is "Thinking, Fast and Slow," and you talk about two systems,
system one and two. Can you give us an example or tell us a bit about the characters in the book?
Well, the characters are indeed system one and system two. System one corresponds to
a distinction that everybody recognizes in their own thinking: there are some thoughts
that just happen to you, and there are some thoughts that you must generate. There is a
lot of mental life that is completely effortless, and then there is some mental life that
feels like work. That distinction is obvious, and people recognize it.
Now how you label them turns out to be quite important. The proper labels would be type one and type
two, and there would even be a third type because I'm not sure that effortful reasoning
and self-control and inhibition of responses are really exactly the same thing. They're probably distinct.
So there are two or three types of responses. It turns out that learning about types
is very difficult and thinking about types is very difficult, but our brain seems to
be wired to think about agents. When you describe system one and system two as agents
that do things, people find it easy to understand, compelling and interesting, and system one
and system two develop personalities.
The personality of system one is that it does everything, and it does everything quickly,
and most of the time it's right, but it doesn't recognize its own limitations, so that
when it encounters an ambiguous situation it makes a choice. When it doesn't know the
answer to a question, it answers a related question, but it's never stumped—or very
rarely stumped—by simple questions.
System two is a very different operation. It gets mobilized when system one encounters difficulties.
You mentioned the difference between system one as things that happen to you and
system two as things that you do. Can you give us some examples of the two systems?
Sure. When I say the word "mother," you have images probably of your mother and you
certainly have an emotional reaction, and that's something that happens to you. When
I say, "Two plus two," a number comes to your mind. You didn't bring it there. It
just came. It happened to you. There are many, you know—in fact, most of mental life is
like that. The words that I utter when I say this sentence, they just come to me. Sometimes
I will stop and choose which word—that's system two—but most of the time, when I
speak the words just come, so that's system one.
System two is—well, there are really two types of operations that system two performs,
and one is complex computations. That is where the pupil dilates and this is mental work.
Mental work is involved in short-term memory tasks. If I ask you, "What was your previous
telephone number?" you'll work, and your pupil will expand by about 30 or 40 percent of its
area as you retrieve it. Then there is self-control, the inhibition of impulses.
When you are indeed choosing your words carefully because you don't want to offend, those are
situations in which system two is hard at work, and you feel it. So system one and
system two really correspond to experiences that are readily available and that
everybody recognizes. That distinction between something happening and something
that you do is, I think, pretty compelling to most people.
The dichotomy that you've drawn between system one and two, how does that relate to the previous
work you've done on heuristics and biases?
Well, it turns out we had—Amos Tversky and I, when we started our work, we had something
in mind that was fairly similar to that. We were interested in intuitive statistics, so
in estimates that come to people's mind about probabilities and so on. Now in many of these
cases—we were both teachers of statistics, so we were testing our own intuitions, but
we knew that we could compute. So in our very first paper, we distinguished intuition from
computation, and our point was that intuition is in some cases surprisingly error-prone
and that people should rely on computation.
It's fine.
Yes. That was the beginning, but we never studied what I now call system two. Then our
work became controversial, and people attacked it and criticized it. There was something
that essentially all the experimental criticisms of our work had in common: they created
a situation in which people could figure out the answer by working on it. That was really
the background.
Amos Tversky and I, in the very last paper that we wrote together, answered one of
our most persistent and well-known critics, Gerd Gigerenzer. We pointed out that in his
experiments, typically people would see—well, how would I describe it? One of our best-known
examples in heuristics, and one of the best examples in the heuristics literature,
is the Linda example. Linda is that not-so-young woman—she's about 30 years old now—but
I'm telling you that when she was a student she was an activist and marched in
all the marches. I didn't say feminist, actually. Then we asked people how likely it is that
Linda is now a bank teller, or how likely it is that she is now a bank teller and is
active in the feminist movement. Now there's no question that when you ask different people
those two questions, they will invariably say that it's more likely that she's a feminist
bank teller than a bank teller.
When you ask them the two questions to compare the two options, you're allowing system two
to check logic. By priming logical reasoning and by creating some—you can sensitize people
so that they will detect that obviously she is more likely to be a bank teller than a
feminist bank teller, but that seems to be a different process. When people see only
one example, they evaluate the fit of that example. When you show them two things together,
they can also compare them, and you provide another cue.
That was really the background to the distinction between the two systems: the controversy
around our work. It was an attempt to resolve that controversy by pointing out that if you
do it between subjects, and if you do it the way the world is—so you make judgments
intuitively about things as they happen—you get those effects, and you can make them
disappear by allowing logic to play.
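The logical point in the Linda example, the conjunction rule, can be sketched in a few lines of Python. The probabilities below are hypothetical, invented only for this illustration; the inequality they demonstrate holds for any choice of numbers.

```python
# Conjunction rule: for any events A and B, P(A and B) <= P(A).
# The numbers here are hypothetical, invented purely for illustration.
p_teller = 0.05                 # assumed: P(Linda is a bank teller)
p_feminist_given_teller = 0.60  # assumed: P(active feminist | bank teller)

# The conjunction is a product of the two, so it can never exceed either factor.
p_feminist_teller = p_teller * p_feminist_given_teller

assert p_feminist_teller <= p_teller  # holds no matter what numbers you pick
print(round(p_feminist_teller, 3))    # 0.03, necessarily below 0.05
```

Judging "feminist bank teller" as more likely than "bank teller" violates this inequality, which is exactly what system two can catch when the two options are shown side by side.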
Now there's been a lot of airtime, I suppose, around the idea of the 10,000 hours of expertise.
Is there anything to that figure, the 10,000?
I have no idea, really, about the 10,000 hours—I'm a consumer of this. I mean, Ericsson,
who has promoted this figure, is a highly reputable researcher, but it's a crude approximation,
I'm sure. I mean, there's nothing magical about 10,000, and I'm sure that it doesn't
take the same amount of time for different people, and expertise is not well defined
and so on, but it gives you an idea that this is a lot of hours—that to become an expert,
where you see that qualitative change in the way things are done, where basically performance
switches from what I call system two to system one, takes a long time. How many hours,
I'm not committing myself, and I don't know.
Yes. One of the goals of the course is to cue people to the difference between people
who are actual experts and people who simply claim to be experts. Is there anything
that people should watch out for, any red flags to tell the difference between people
who actually know or can actually do what they claim for themselves?
Yes, I mean, I think—Gary Klein and I wrote a paper in which we actually suggested—and
it's embarrassingly simple—that when somebody acts like a self-confident expert
on a range of problems, there's one question to be asked: did that person have a decent
opportunity to learn how to perform the task? That requires getting feedback on the quality
of performance and getting rapid and unequivocal feedback. In the absence of rapid and unequivocal
feedback, expertise is just the self-confidence that comes with a lot of experience, and that
is uncorrelated with accuracy. This is something we've known for 50 years or more.
So if somebody wanted to become an expert at a new task, what's the fastest and most
efficient way to turn, as you said, that system two, that effortful sort of processing, into system one?
Well, there are really two ways of doing this, and you have to use both. You have to use
system two. For somebody to become an expert driver, you have to tell them how to drive.
I would say for somebody to become an expert diagnostician on the basis of X-rays, you
have to teach them what the things look like so that they'll be able to recognize them.
But then you'll need also a lot of practice with high-quality feedback. Merely telling
people how to do something is not going to turn them into experts, and repeatedly telling
them the same thing is not going to help. It's a lot of practice with feedback that
creates real expertise, but you can abbreviate the time that it takes to reach expertise
by having high-quality instruction about what cues you should be paying attention to.
Yes, so actually knowing what it is that discriminates the two categories, if it's an abnormal scan
versus a normal scan and so on?
Gary Klein has a beautiful example. He talks of a nurse in the cardiac ward who comes home
and talks to her father-in-law, as I recall, and says, "We have to go to the hospital,"
because he doesn't look good to her. It turns out that, yes, he had to go to the hospital.
He's in deep trouble. He needs—twelve hours later or something, he is on the operating table.
What she had done...
Gary Klein did what he and others—but I think he is the main guru of this type of
enterprise—he found out what the cues were, although she was not aware of the cues that
she was using. He found out that when arteries are obstructed, getting obstructed, which
will lead to a heart attack, there is a pattern. The pattern of distribution of the blood in
the face changes. Now she had recognized, she had learned that pattern, but she didn't
know what it was. Now when you're training nurses, you can show them the pattern.
That's clever. The goal of the course, the title of the course is "The Science of Everyday
Thinking," and what we're trying to do is to provide people with the ability to think
more clearly, argue better, reason better, I suppose learn to use system two, to be more
analytic, to unpack, read more carefully and so on. Do you have any advice for somebody
in the course who's trying to improve their everyday thinking?
Well, you know, my advice would be quite conservative. I mean, it would be: pick a few areas and
pick a few things where you want to change what you're doing, and focus on those. I mean,
do not expect that you can generally increase the quality of your thinking because I think
you really cannot, but if there are repetitive mistakes that you are prone to make, if you
learn the cues, the situations in which you make that mistake, then maybe you can learn
to eliminate them. I'm not...
The history of success in enterprises like yours is that they're not always successful.
I mean, people feel great when they hear of all these ways of doing things and of controlling
themselves, but then when they are making a mistake they are so busy making it that
they have no time to correct it.
One of the reasons, I think, for my skepticism about this is that I don't think my thinking
is very much better than it was 40 years ago or 45 years ago when I started doing this
work. This suggests some humility. So pick your shots, pick a few areas, and then in
those situations that you recognize as situations where you're prone to make a mistake, slow
yourself down.
One piece of advice, by the way, is to recognize situations where you can't do it alone, where
you need a friend, where you need advice, because if you do it alone you are going to make a mistake.
So the nature of system two is that it's effortful, that it's something that you have to do. Now
that's hard, and obviously, as you mentioned, trying to get people to be motivated enough
to engage in system two—well, actually, a lot of people have the tools and have everything
they need in order to make better decisions, in order to learn a new task, but it's just
a matter of putting in that cognitive effort—a little bit of elbow grease—and actually
making that happen. Do you have any advice for how to make that cognitive
effort seem a little less effortful?
No. I'm not sure I know how to make it seem less effortful. I know it's going to be effortful.
What you can do is illustrate the costs and benefits of investing some effort. There are,
by the way, large individual differences, so Keith Stanovich—I don't know if he's on your list.
He's not. He was on the list, but we couldn't catch up with him.
You couldn't catch him. Keith Stanovich has a whole program of research distinguishing
between what he calls intelligence and rationality, and rationality is in effect the ability to
deploy system two where it's needed and to interfere with the mistakes that system one
is apt to produce. He finds that some people are intelligent but not particularly rational,
and vice versa.
That's interesting. I mean, that's one of the hardest tasks—getting people who have
everything available to them to actually do the thing.
Well, you can recognize—I mean, I've worked a lot with anchoring. That's a phenomenon.
So somebody puts a number in your head, and it looks plausible after a while. I mean,
in fact, this is the way our mind works. We hear something strange, we try to make sense
of it. Trying to make sense of it makes us more prone to believe it, so anchoring is
a suggestion, and in fact it's very powerful. You can recognize when you're being anchored.
So if you are in a negotiation situation and the other side has an outrageous number, you
know there is—you could become anchored, and that is worth resisting. That's an example.
Another example is that when you make explicit predictions, you know, like will somebody
who's a young professor eventually get tenure or not, remind yourself that the base rate
of tenure is very important in that story. That is a system two kind of judgment.
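The base-rate advice can be made concrete with Bayes' rule in odds form. The numbers below are hypothetical, chosen only to show how heavily the base rate should weigh on such a prediction.

```python
# Hypothetical numbers: the base rate of tenure is 20 percent, and a strong
# impression of a candidate is twice as likely among those who eventually
# get tenure as among those who don't (a likelihood ratio of 2).
base_rate = 0.20
likelihood_ratio = 2.0

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = base_rate / (1 - base_rate)        # 0.25
posterior_odds = prior_odds * likelihood_ratio  # 0.5
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))  # 0.333: even a strong impression leaves tenure unlikely
```

The intuitive prediction tends to track the impression alone; the system-two move is multiplying it against the prior odds, which the low base rate keeps small.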
In your book, in the beginning, you talked about your relationship with Amos Tversky—a
very productive and, it sounds like, an outstanding working relationship. How could you
make that happen in a workplace, in order to facilitate a better, more productive
environment where ideas come freely? Can you describe what the nature of that is?
Well, you know, creating a productive environment is very different from creating exceptional
collaborations, but for creating a productive environment, I think there are some recipes
and they are really well-known. So you've got to create many opportunities for people
to bump into each other so that they can exchange ideas. You've got to allow—to encourage
exchange of ideas between people who are not in the same field. You know, Steve Jobs was
famous for the suggestion of having very few restrooms in the building, to force people
from different units to meet each other on their way to the restroom. That's
a recipe that works for encouraging exchange of ideas.
Many places in the UK, many departments of universities and research centers used to
have it. It's diminishing. They used to have coffee in the morning, tea in the afternoon,
which was like 30 minutes where everybody would be in the same room at the same time.
I think that's enormously productive. Now how to get an exceptional collaboration going,
I don't think there is any recipe for that. If you are lucky enough, it happens to you,
and I was very lucky.
Yes, absolutely. So what's next? This was Matt's question. You've written this book.
We know everything about—well, we know a lot more than we did about the differences
between system one and system two, about that distinction and about training and so on. What's the next—if
you're looking at the landscape of the judgment/decision-making field at the moment, what do you think is
something worth paying attention to?
Look, I mean, I'm very skeptical about forecasting. I think that's very evident in my book. I
think people have no idea what the future will be, and I'm no exception, so I have really
no interesting forecast. I've never tried to forecast the future. There's something
that's very obvious that is happening, and this is the tremendous spread of neuroscience
and the merger of psychology and neuroscience. There, you can make a confident prediction
because so many very bright young students are going to that field. They are betting
their careers on it, so you know that for the next 15 years there's going to be a lot
of work in neuroscience and decision-making, neuroscience and various aspects of psychological
functioning. That prediction is a no-brainer. More complicated predictions, I can't make.
Do you think that's a fruitful sort of enterprise, the merger of the two?
I've always believed that there are some people who are by nature skeptics and other
people who are by nature believers—gullible. Amos was on the skeptical side,
very strongly, and I'm on the gullible side, so I tend to have enthusiasms and to believe
new things are going to be productive. Among my close friends, I'm the most enthusiastic
about neuroeconomics and that sort of thing, but my close friends, who are more Amos-like,
they need more proof.
Fair enough. We're presenting students with the cognitive reflection task and asking them
to—we're giving them—which should be interesting with 200,000 people who are taking the course—to
see what the difference is between fast and slow. Maybe we should mention the cognitive reflection task?
Yes, you can mention it. By the way—you know that it was done by Shane Frederick.
We actually had the bat and ball. He put the bat and ball in play in an article that we
wrote together, yes.
Really?
Yes. You know, my Nobel talk was based on a paper that Shane and I had written together,
so it extended that paper, and the bat and ball was one of Shane's many contributions to that work.
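For readers who have not seen it, the bat-and-ball item from Shane Frederick's Cognitive Reflection Test reads: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball; how much does the ball cost? System one typically offers 10 cents, while a little system-two algebra gives 5 cents:

```python
# Let ball = x. Then bat = x + 1.00, and x + (x + 1.00) = 1.10,
# so 2x = 0.10 and x = 0.05.
total, difference = 1.10, 1.00
ball = (total - difference) / 2
bat = ball + difference

print(round(ball, 2), round(bat, 2))  # 0.05 1.05
# The intuitive answer of 0.10 fails the constraint: 0.10 + 1.10 = 1.20, not 1.10.
assert abs((bat + ball) - total) < 1e-9
assert abs((bat - ball) - difference) < 1e-9
```

The point of the item is not the arithmetic itself but whether the reader pauses to check the intuitive answer against the constraints.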
Yes. Do you think it's a reasonable—there's been a lot of work since the bat and the ball
problem. I'm trying to pin down exactly the nature of the differences...
Look, Keith Stanovich in particular has recently come up with demonstrations that, yes, it
is related to self-control and to what he calls rationality, so he treats it as a test
of rationality. Shane is more ambivalent about whether this is very different from intelligence.
Then there is a massively embarrassing result, which is that there are gender differences
that nobody wants to see. Nobody really believes that men are more rational than women,
and yet men do better on that test than women by a lot. I mean, it's not a small effect.
Now my wife, Anne Treisman, is a well-known psychologist and a National Science medalist
and all that, and she was completely uninterested in those puzzles. She says she suspects that
women are much less interested in puzzles and much less competitive in that particular
way than—you know, it looked trivial to her. She wasn't going to put a lot of work
into it, whereas I have always been one—you know, show me a puzzle, and I'll go to work on it.
Fair enough. So what does success look like? In your book, you mentioned that you'd like
to equip students with the vocabulary and the jargon of judgment decision-making to
at least help them recognize when they might be in this minefield. Can you think of what
success looks like, from your end?
From my end, from what I was trying to do, success is always measured, I think, by whether
you have changed the language. I was very explicit that changing the language was the
objective of it, and to a significant extent this has been successful. System one and system
two are now part of the language, to the dismay of many psychologists who don't like this
idea of systems as agents and would have preferred type one and type two. But if
I had tried type one and type two, they would not have become part of the language. So success
is new words: people who understand anchoring, who understand availability. Another
phrase—"What you see is all there is"—has limited currency, but it has some currency.
So that's what success is like. It's really introducing terms that make it easier for
people to see certain phenomena.
We're trying to figure out whether this course is successful. You've mentioned, for example,
Keith Stanovich's conception of the cognitive reflection tasks, so potentially seeing a
change, obviously not on exactly the same questions but at the beginning and the end—maybe
a drop in belief in the paranormal, maybe an increase on cognitive personality measures
such as need for cognition—whether people want to think more at the end of the course.
Can you think of another benchmark that might help to gauge whether people are thinking more?
This is very ambitious, what you're trying to do. In a way, the way that one would want
to structure a course to achieve that objective would require a lot of practice.
So ideally you'd want, as an exercise: "Here is a mistake I made today," or, "Here is
a mistake I almost made today"—you'd want to make people introspect. But the far easier
task is to make people critical of other people.
My thought has always been—you know, I've said that the aim of the book
is to educate gossip, really, and that is because I believe that if you train people
to be good critics of other people's thinking and decision-making, eventually they will
turn that on themselves. This is the easiest way of doing it, rather than making people
do something that is inherently quite aversive, which is monitor themselves and criticize
themselves as they go along.
My name is Danny. I think about thinking.