In part one of my discussion of Choose Wisely, I detailed a typical decision—what to do on a beautiful Saturday—to illustrate the sorts of decisions we face in everyday life. My aim was to show that even a simple decision such as this one comprises a complexity that we often fail to appreciate. Far from a straightforward, algorithmic process of weighing pros and cons and specifying their probabilities, deciding what to do on a Saturday is inextricably wrapped up in our values and goals, our mood and situation, our sense of morality and the expectations of our community.
In part one, I also sketched briefly what my collaborator, Richard Schuldenfrei, and I called “intelligent reflection” as a model of how such a decision might be made. As I explained:
“Intelligent reflection allows you to see multiple aspects of a decision. It allows you to compare options that seem to have little or nothing in common. It allows you to consider how a simple decision of how to spend a Saturday says something about who you are and what you value. It allows you to ponder what kind of shadow your decision about today may cast on your future. Intelligent reflection speaks not only to what you decide, but also to how you decide.”
There is no “science” of intelligent reflection, nor are there rules that enable us to distinguish clearly instances of intelligent reflection from instances of unintelligent reflection … of which there are many in most of our lives. There is, however, an alternative to intelligent reflection for which there is a science and within which there are rules. It is known as rational choice theory (RCT), and it has become the normative standard for decision-making done well. In this part of the essay, I will suggest that RCT is deeply inadequate as a normative standard. First, I’ll briefly outline RCT and how decisions are made using it. Second, I’ll show how Daniel Kahneman and Amos Tversky’s work revolutionized our understanding of decision-making, but left RCT as the normative standard untouched. Finally, I’ll share why RCT doesn’t hold up as the normative standard.
* * *
A brief sketch of rational choice theory
From the perspective of rational choice theory, which comes largely from economics, the presumed goal of a decision is to maximize utility or preference. What “utility” means has been debated for centuries. Unlike something like money, “utility” is subjective—it is in the eye of the beholder. Though extremely vague, its virtue is that it captures more than just pleasure. Utility could be pleasure in some decision settings, but it could be usefulness in others. Two hours in the weight room may not give the professional athlete much pleasure, but it can be useful in making the athlete better at her sport. The term “utility” functions as a way of acknowledging the diversity of things that are valued. Though pleasure and money capture much of what most people value, people also value things that are neither, such as health or achievement or meaningful social relationships. “Preference” often substitutes for utility. It too is subjective, and virtually content-free. The way we know what someone “prefers” is by observing what they choose. Almost by definition, what people choose is what they prefer.
RCT assumes that people bring well-articulated preferences to the decision process—in other words, that preferences are exogenous (they exist prior to the occasion on which a decision must be made). People then array the options before them, or construct a set of options, analyze them into relevant attributes, and assess the importance each attribute should have in influencing their decision. For example, someone might decide that a car’s reliability is much more important than the color of its upholstery and give reliability extra weight in deciding what car to buy. Then, people assess how good each attribute of each alternative is; they assign each attribute a value. Next, they try to determine how likely it is that if they choose an alternative, their goals with respect to the target attributes will actually be realized. For example, they might think, “The value of going to the beach is great if it is sunny, but there is some significant probability it will rain, and in that case the value will be greatly decreased.” The value of the options and the probability of attaining those values are given numerical specifications. You multiply those specifications, and the product of the values and the probabilities is the expected utility of that option. “The value of the beach in good weather is 100, the value of the beach if it rains is 10. The chance of good weather is 80 percent, of rain, 20 percent.” Rational choosers then just do the math: 80 percent of 100 is 80, and 20 percent of 10 is 2, so by adding them together we get the expected utility of the trip to the beach.
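The arithmetic of the beach example can be sketched in a few lines of Python. The values and probabilities are, of course, the hypothetical numbers from the example itself; RCT supplies the formula, not the inputs.

```python
# Expected utility: for each possible outcome of an option, multiply its
# value by its probability, then sum the products.

def expected_utility(outcomes):
    """outcomes is a list of (value, probability) pairs for one option."""
    return sum(value * probability for value, probability in outcomes)

# The beach: worth 100 in good weather (80% likely), worth 10 in rain (20%).
beach = [(100, 0.80), (10, 0.20)]
print(expected_utility(beach))  # 80 + 2 = 82.0
```

A rational chooser would compute this number for every option on the table and pick the option with the largest.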
One could imagine using this framework to decide where to go to college, what to study, what job to take, where to go on vacation, what investments to make, what house to buy, what city to move to, and perhaps whether (and when) to marry, and whether (and when) to have children. And one could use it to decide what to do on a beautiful Saturday.
RCT is meant to be essentially an all-purpose tool for making complex and often difficult decisions. It is a tool for asking and answering two very important questions: What are you trying to attain with this decision, and how likely is each of the options on the table to enable you to attain it? RCT has a better known and more influential cousin in what is called “cost-benefit analysis.” With cost-benefit analysis you assess the plusses and minuses of each available alternative to get a net value and then choose the alternative with the highest net value. Not only is cost-benefit analysis meant to guide individual decisions, it is also meant to guide government decisions (for example, which program to reduce greenhouse gases should be implemented; which prescription drug plan should be adopted) and business decisions (for example, which new product should be developed; which marketing campaign should be pursued).
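The cost-benefit procedure just described can be sketched the same way. The alternatives and figures below are invented purely for illustration; the point is only the mechanics of netting out plusses and minuses and taking the maximum.

```python
# Cost-benefit analysis: net value = sum of benefits minus sum of costs;
# choose the alternative with the highest net value.

def net_value(benefits, costs):
    return sum(benefits) - sum(costs)

# Hypothetical alternatives with made-up benefit and cost figures.
alternatives = {
    "option A": ([50, 30], [45]),  # benefits 80, costs 45 -> net 35
    "option B": ([40, 30], [20]),  # benefits 70, costs 20 -> net 50
}
best = max(alternatives, key=lambda name: net_value(*alternatives[name]))
print(best)  # option B
```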
Why assess probability in addition to value? Because little in life is certain, and every decision is a prediction. You may choose a college because you sat in on a couple of biology classes and loved them. But are all the biology teachers as engaging as the ones you heard? You may choose to vacation at a national park because of its beauty and serenity. But what if it’s extremely crowded? You may choose a job because it seems that your colleagues will be great people to work with. But how much can you tell on the basis of one day spent at the company? Thus, probability assessment is essential.
This description of RCT is admittedly highly schematic, but it enables me to highlight a few key points. First, the structure of RCT is entirely formal; one could substitute variables for actual alternatives and attributes and have a recipe that applies to all decisions. Second, deviations from this normative model will also be formal. That is, “errors” or biases in decision-making are identified as errors because of their failure to match the formal normative model. For RCT, the paradigmatic model of rational decision-making is the gambling casino, in which possible gains and losses and probabilities of those gains and losses are unambiguously specified, and different possible bets can be compared using a common metric—expected monetary value.
* * *
Rational choice theory meets heuristics and biases
In the past half century or so, a part of psychology known as behavioral decision-making, or judgment and decision-making, has grown into a major enterprise, the central purpose of which is to describe and explain how decisions actually are made and look for discrepancies between what RCT tells us to do and what we actually do. As a result of this research, a more refined idea of what RCT can tell us has emerged.
The field of judgment and decision-making has developed an ever-growing catalogue of the mistakes human beings are susceptible to when they use a variety of heuristics, or shortcuts, rather than RCT, or in preparation for the use of RCT, to evaluate information, make decisions, and then calculate the expected results of those decisions. Much of the time, these heuristics work fine, but sometimes they introduce bias. Taken together, these mistakes or biases are subsumed under “System 1” (S1) in Daniel Kahneman’s synoptic Thinking, Fast and Slow. S1 works outside of consciousness, rapidly delivering results to consciousness that are produced by these heuristics. In other words, S1 provides answers to the questions we may have. Afterward, a second, slower process, which is conscious, effortful, and rule-governed, may go to work using logic, probability theory, and other formal systems. This second system, S2, of which we are aware, may take the results of S1 and analyze them, sometimes leading to a different decision than S1 has produced on its own. Perhaps because S2 is slow, effortful, and conscious, it is usually what we have in mind when we say we are thinking a decision over. But in fact, the fast-acting S1 may already have made the decision before S2 even gets started.
In his book, Kahneman analogizes S1 to many perceptual processes. Think about driving in city streets. You want to turn left, but a car is approaching from the other direction. Do you have enough time to make the turn? How far away is the car, and how fast is it going? Your visual system answers these questions for you very fast, and typically very accurately. But when the passenger sitting next to you—a teenage beginning driver—asks you how you knew that you had the time to make the turn, you have nothing to say. So it is with S1-type decision-making processes. They deliver answers, but the conscious you typically has no idea how they arrived at those answers. And because S1 is so fast, it may answer a question for you before you have even fully formulated it.
Kahneman spent a quarter of a century researching S1 processes, most of it in collaboration with Amos Tversky. Their focus was on elucidating the ways S1 goes wrong. But studying the errors of S1 required that there be some standard—some normative theory—of how judgments and decisions should be made. To provide that standard, Kahneman and Tversky, like most other researchers in their field, relied on the RCT model I just described to provide a contrast with S1 processes. RCT is at the heart of how economics captures decision-making, and provides the background against which heuristics, biases, and other S1 processes are evaluated.
RCT is the province of S2, and is slow, effortful, and logical. A decision-maker need not accept the results of the automatic S1 processes as competent or definitive, but these processes deliver answers upon which consciousness acts. One of Kahneman’s main arguments is that people think they are using S2 when faced with problems of judgment and choice when in fact S1 is doing much of the work—automatically, effortlessly, rapidly, but not always accurately.
I cannot overstate the significance of Kahneman’s body of work (with Tversky as well as other collaborators) mapping out various S1 processes and relating them to S2. Among the characteristics of S1 processes are these: They distinguish surprising events from normal events; they infer causes and intentions; they neglect ambiguity and suppress doubt; they exaggerate the consistency of the information being processed; they focus on what is present in a situation and largely ignore what is absent, even when absent information is relevant to the task at hand; they respond more to changes in the environment than to steady states; they overweight the significance of rare events; they are more affected by potential losses than potential gains from a baseline state; they tend to frame the decisions being faced narrowly. And they are always working. This list of attributes is impressive, but hardly exhaustive. The research that Kahneman and Tversky did launched an explosion of interest in heuristics and biases and their effects on decision-making (see the work of Gerd Gigerenzer for many examples studied from a somewhat different perspective). By some counts, more than one hundred distinct heuristics and biases have been identified and studied. The exploration and explication of S1 processes has been quite a growth industry.
For beginning this line of research that countless others have followed, Kahneman deservedly won the Nobel Prize in Economics in 2002 (Tversky would surely have shared it had he not died prematurely). A few years later, economist Richard Thaler also won a Nobel, for work very much inspired by Kahneman and Tversky. But my aim here is not to describe various S1 processes and explain how they lead us astray. For that, Thinking, Fast and Slow is pretty definitive.
Kahneman and others have offered serious criticisms of the notion that RCT in its pure form can adequately capture judgment and decision-making. But they are proposing, in my and Schuldenfrei’s view, modifications of RCT rather than basically different ways of understanding what thinking is about. Their critique of RCT is essentially that it fails as a description of decision-making, not that it fails as a norm for decision-making. We think this approach is inadequate to capture the scope of the problem. We argue that what is needed is a different, nonformal conception of judgment and decision-making, which I will sketch in part three of this series. Kahneman’s articulations of the limits of RCT lead only to a variant that defines itself by differences from RCT, and in that sense keeps RCT as the central model. And RCT remains the basic prescriptive model, the proffered guide to good judgment and decision-making. Another way of stating our objective in Choose Wisely is this: Economists and many other social scientists had assumed that human beings are “rational” decision-makers. Research has shown that people are not nearly as rational as these researchers assumed. And RCT is a deeply inadequate account of what it means to be rational.
* * *
Mischaracterizing what we mean by thinking
We believe that the view of S2, largely governed by RCT, overseeing and correcting the errors of S1, mischaracterizes both the relation between the two systems and thinking in general. We believe that rather than being a corrective to the errors of S1, S2 (and RCT in particular) is parasitic on S1. Without S1 doing crucial work, the RCT-driven processes of S2 could not get off the ground. Furthermore, RCT mischaracterizes what we mean, or should mean, by “thinking.” Thinking, and thus rationality, is much more than what RCT provides the norms for. And with a more comprehensive understanding of thinking in mind, S1 processes loom even larger.
We think RCT should not be the normative standard for rational decision-making. Our basic reason is that RCT requires that we frame our decisions in a “closed” and formal way. For judgment and decision-making researchers, framing is a paradigm case of S1 bias. Indeed, one of Kahneman and Tversky’s most celebrated papers is titled “The Framing of Decisions and the Psychology of Choice.” Framing phenomena are typically considered to be obstacles to rational decision-making. In taking this stance, decision researchers have typically had a specific handful of examples of framing in mind, examples in which people frame the decisions they face more narrowly than they should. In contrast, we think framing, understood more broadly as imposing limits and context on a decision, is essential to RCT in particular and rationality in general. For RCT to work, the options need to be limited. They need to be clearly defined, unlike the terms that frame much of ordinary life (like, “What should I do on this beautiful Saturday?”). The decisions people face need to be separated from the larger context in which they are, in reality, often embedded. And data and preferences must be homogenized—squeezed into a common framework that facilitates comparison, even among very different things, so that they are amenable to evaluation with quantitative methods. What the focus on RCT and S1 deviations from it have in common is that they take a system (thinking) that is varied in form and substance, and extremely sensitive to context, and they close the system to make it manageable and formalizable.
In many cases, good framing is itself the goal of decision-making. It helps us decide what options should properly be on the table, and how they should be assessed and compared. And there is no inquiry or deciding without it. This point is often overlooked or underappreciated, in part because it is thought that rigorously presented examples, like monetary gambles, that call for the use of RCT are themselves unframed. It is central to our view that the standard RCT cases, though thought to be unframed, are in fact framed: They are framed to the extent that they can be easily quantified.
Expressed slightly differently, our view is that framing is a prerequisite for the operation of RCT; without framing, RCT procedures can’t even get started. In addition, RCT requires quantification of both probability and value, which we believe cannot be done within the bounds of RCT, at least not without framing. In many situations in real life, attaching probabilities to outcomes is at best wishful thinking and at worst sheer fantasy. In addition, assigning value to the options we face often depends on framing, and since RCT can’t tell us much about how decisions should be framed, it can’t tell us much about how alternatives should be valued.
By now, almost everyone who studies decision-making knows that RCT is an idealization that does not match how many decisions are actually made. Indeed, practically speaking, RCT may not even be a good model for how decisions should always be made. Going through the process of RCT decision analysis may be more costly in time and cognitive resources than the decision is worth. And an outcome that is utility maximizing in an individual decision may be destructive when cumulated, so that individual decisions must be considered in terms of the long-term consequences they may have.
This acknowledgment has led some researchers, in the spirit of Herbert Simon (another Nobel Prize winner), to modify the rational choice norm and speak of “bounded rationality,” which highlights the cognitive (and emotional) limitations of human beings. The notion of bounded rationality leaves the normative status of the model of rational choice intact, simply describing the ways finite organisms actually make decisions with processes that fall short of the normative standard. Thus, the normative standard exerts a powerful influence on research, on what investigators find interesting and noteworthy, and on the prescriptions that are offered to improve decision-making. Perhaps most significant, the normative standard makes certain important questions about rationality essentially invisible to researchers and policymakers alike. In our book, we try to make them visible. And in part three of this series, I’ll describe our alternative model for understanding how we should make decisions.
Adapted from Choose Wisely By Barry Schwartz and Richard Schuldenfrei. Published by Yale University Press. Copyright © 2025 by Barry Schwartz and Richard Schuldenfrei. All rights reserved.
Disclosure: Barry Schwartz is a member of the Behavioral Scientist advisory board. Advisors do not play a role in the editorial decisions of the magazine.
