Abstract
One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.
Thought creates the world and then says, “I didn’t do it.”
—David Bohm (physicist)
One of the essential insights from psychological research is that human information processing is often biased. For instance, people overestimate the extent to which their opinions and beliefs are shared (e.g., Nickerson, 1999), and they apply differential standards in the evaluation of behavior depending on whether it is about a member of their own or another group (e.g., Hewstone et al., 2002), to name just two examples. For many such biases there are prolific strands of research, and for the most part these strands do not refer to one another. As such parallel research endeavors may prevent us from detecting common principles, the current article seeks to bring a set of biases together by suggesting that they might actually share the same “recipe.” Specifically, we suggest that they are based on prior beliefs plus belief-consistent information processing. Put differently, we raise the question of whether a finite number of different biases—at the process level—represent variants of “confirmation bias,” or people’s tendency to process information in a way that is consistent with their prior beliefs (Nickerson, 1998). Even more importantly, we argue that different biases could be traced back to the same underlying fundamental beliefs and outline why at least some of these fundamental beliefs are likely held widely among humans. In other words, we propose for discussion a unifying framework that might provide a more parsimonious account of the previously researched biases presented in Table 1. And we argue that research on the respective biases should elaborate on whether and how those biases truly exceed confirmation bias. The proposed framework also implies several novel testable hypotheses, thus providing generative potential beyond its integrative function.
Table 1. Biases that may be traced back to a fundamental belief plus belief-consistent information processing

| Fundamental belief | Bias | Brief description |
|---|---|---|
| My experience is a reasonable reference. | Spotlight effect (e.g., Gilovich et al., 2000) | Overestimating the extent to which (an aspect of) oneself is noticed by others |
| | Illusion of transparency (e.g., Gilovich & Savitsky, 1999) | Overestimating the extent to which one’s own inner states are noticed by others |
| | Illusory transparency of intention (e.g., Keysar, 1994) | Overestimating the extent to which an intention behind an ambiguous utterance (that is clear to oneself) is clear to others |
| | False consensus (e.g., Nickerson, 1999) | Overestimation of the extent to which one’s opinions, beliefs, etc., are shared |
| | Social projection (e.g., Robbins & Krueger, 2005) | Tendency to judge others as similar to oneself |
| I make correct assessments of the world. | Bias blind spot (e.g., Pronin et al., 2002a) | Being convinced that mainly others succumb to biased information processing |
| | Hostile media bias (e.g., Vallone et al., 1985) | Partisans perceiving media reports as biased toward the other side |
| I am good. | Better-than-average effect (e.g., Alicke & Govorun, 2005) | Overestimating one’s performance in relation to the performance of others |
| | Self-serving bias (e.g., Mullen & Riordan, 1988) | Attributing one’s failures externally but one’s successes internally |
| My group is a reasonable reference. | Ethnocentric bias (e.g., Oeberst & Matschke, 2017) | Giving precedence to one’s own group (not preference) |
| | In-group projection (e.g., Bianchi et al., 2010) | Perceiving one’s group (vs. other groups) as more typical of a shared superordinate identity |
| My group (members) is (are) good. | In-group bias/partisan bias (e.g., Tarrant et al., 2012) | Seeing one’s own group in a more favorable light than other groups (e.g., morally superior, less responsible for harm) |
| | Ultimate attribution error (e.g., Hewstone, 1990) | External (vs. internal) attribution for negative (vs. positive) behaviors of in-group members; reverse pattern for out-group members |
| | Linguistic intergroup bias (e.g., Maass et al., 1989) | Using more abstract (vs. concrete) words when describing positive (vs. negative) behavior of in-group members and the reverse pattern for out-group members |
| | Intergroup sensitivity effect (e.g., Hornsey et al., 2002) | Criticisms evaluated less defensively when made by an in-group (vs. out-group) member |
| People’s attributes (not context) shape outcomes. | Fundamental attribution error/correspondence bias (e.g., L. Ross, 1977) | Preference for dispositional (vs. situational) attribution with regard to others |
| | Outcome bias (e.g., Baron & Hershey, 1988) | Evaluation of the quality of a decision as a function of the outcome (valence) |
We begin by outlining the foundations of our reasoning. First, we define “beliefs” and provide evidence for their ubiquity. Second, we outline the many facets of belief-consistent information processing and elaborate on its pervasiveness. In the third part of the article, we discuss a nonexhaustive collection of hitherto independently treated biases (e.g., spotlight effect, false consensus effect, bias blind spot, hostile media effect) and how they could be traced back to one of two fundamental beliefs plus belief-consistent information processing. We then broaden the scope of our focus and discuss several other phenomena to which the same reasoning might apply. Finally, we provide an integrative discussion of this framework, its broader applicability, and potential limitations.
The Ubiquity of Beliefs and Belief-Consistent Information Processing
Because we claim a set of biases to be essentially based on a prior belief plus belief-consistent information processing, we first elaborate on these two parts of the recipe. First, we outline how we conceptualize beliefs and argue that they are an indispensable part of human cognition. Second, we introduce the many routes by which belief-consistent information processing may unfold and present research speaking to its pervasiveness.
Beliefs
We consider beliefs to be hypotheses about some aspect of the world that come with a notion of accuracy—either because people examine the beliefs’ truth status or because they already hold an opinion about it. Beliefs in the philosophical sense (i.e., “what we take to be the case or regard as true”; Schwitzgebel, 2019) fall into this category (e.g., “This was the biggest inauguration audience ever”; “Homeopathy is effective”; “Rising temperatures are human-made”), as does knowledge, a special case of belief (i.e., justified true belief; Ichikawa & Steup, 2018).
Following from this conceptualization, there are certain characteristics that are relevant for the current purpose. First, beliefs may or may not be actually true. Second, beliefs may result from any amount of deliberate processing or reflection. Third, beliefs may be held with any amount of certainty. Fourth, beliefs may be easily testable (e.g., “Canada is larger than the United States”), testable only after some specification (e.g., “I am rational”), partly testable (e.g., not falsifiable, as with “Traumatic experiences are repressed”), or not testable at all (e.g., “Freedom is more important than security”). It is irrelevant for the current purpose whether a belief is false, entirely lacks foundation, or is untestable. All that matters is that the person holding this belief either has an opinion about its truth status or examines its truth status.
The ubiquity of beliefs
There is an abundance of research suggesting that the human cognitive system is tuned to generating beliefs about the world: An incredible plethora of psychological research on schemata, scripts, stereotypes, attitudes (even about unknown entities; Lord & Taylor, 2009), top-down processing, but also learned helplessness and a multitude of other phenomena demonstrates that we readily form beliefs by generalizing across objects and situations (e.g., W. F. Brewer & Nakamura, 1984; Brosch et al., 2010; J. S. Bruner & Potter, 1964; Darley & Fazio, 1980; C. D. Gilbert & Li, 2013; Greenwald & Banaji, 1995; Hilton & von Hippel, 1996; Kveraga et al., 2007; Maier & Seligman, 1976; Mervis & Rosch, 1981; Roese & Sherman, 2007). Furthermore, people (as well as some animals) generate beliefs about the world even when it is inappropriate because there is actually no systematic pattern that would allow for expectations (e.g., A. Bruner & Revusky, 1961; Fiedler et al., 2009; Hartley, 1946; Keinan, 2002; Langer, 1975; Riedl, 1981; Skinner, 1948; Weber et al., 2001; Whitson & Galinsky, 2008).
Explanations for such superstitions, but also for a variety of untestable or unwarranted beliefs, repeatedly refer to the benefits arising from even illusory beliefs. Believing in some kind of higher force (e.g., God), for instance, may provide explanations for relevant phenomena in the world (e.g., thunder, pervasive suffering in the world) and may thereby increase perceptions of predictability, control, self-efficacy, and even justice, all of which have been shown to be beneficial for individuals, even if they are illusory (e.g., Alloy & Abramson, 1979; Alloy & Clements, 1992; Day & Maltby, 2003; Green & Elliott, 2010; Kay, Gaucher, et al., 2010; Kay, Moscovitch, & Laurin, 2010; Langer, 1975; Taylor & Brown, 1988, 1994; Taylor et al., 2000; Witter et al., 1985). Religious ideas, in particular, have furthermore fostered communion, orderly coexistence, and even cooperation among individuals, benefiting both individuals and entire groups (e.g., Bloom, 2012; Dow, 2006; Graham & Haidt, 2010; Johnson & Fowler, 2011; Koenig et al., 1999; MacIntyre, 2004; Peoples & Marlowe, 2012). Indeed, there are numerous unwarranted—or even blatantly false—beliefs that either have no (immediate and thus likely detectable) detrimental consequences or even lead to positive consequences (for placebo effects, see Kaptchuk et al., 2010; Kennedy & Taddonio, 1976; Price et al., 2008; for magical thinking, see Subbotsky, 2004; for belief in a just world, see Dalbert, 2009; Furnham, 2003), which fosters the survival of such beliefs.
Beyond demonstrations of people’s readiness to form beliefs, research has repeatedly affirmed people’s tendency to be intolerant of ambiguity and uncertainty and found a preference for “cognitive closure” (i.e., a made-up mind) instead (Dijksterhuis et al., 1996; Furnham & Marks, 2013; Furnham & Ribchester, 1995; Kruglanski & Freund, 1983; Ladouceur et al., 2000; Webster & Kruglanski, 1997). And last but not least, D. T. Gilbert (1991) made a strong case for the Spinozan view that comprehending something is so tightly connected to believing it that beliefs can be rejected only after deliberate reflection—and even then may continue to affect our behavior (Risen, 2016). In other words, beliefs emerge the very moment we understand something about the world. Children understand (and thus believe) things long before they have developed the cognitive capabilities needed to deny propositions (Pea, 1980). After all, children are continuously and thoroughly exposed to an environment (e.g., experience, language, culture, social context) that provides an incredibly rich source of beliefs that are transmitted subtly as well as blatantly and thereby effectively shapes humans’ worldviews and beliefs from the very beginning. Taken together, the research has indicated that people readily generate beliefs about the world (D. T. Gilbert, 1991; see also Popper, 1963). Consequently, beliefs are an indispensable part of human cognition.
Belief-consistent information processing—facets and ubiquity
To date, researchers have accumulated much evidence for the notion that beliefs serve as a starting point of how people perceive the world and process information about it. For instance, individuals tend to scan the environment for features more likely under the hypothesis (i.e., belief) than under the alternative (“positive testing”; Zuckerman et al., 1995). People also choose belief-consistent information over belief-inconsistent information (“selective exposure” or “congeniality bias”; for a meta-analysis, see Hart et al., 2009). They tend to erroneously perceive new information as confirming their own prior beliefs (“biased assimilation”; for an overview, see Lord & Taylor, 2009; “evaluation bias”; e.g., Sassenberg et al., 2014) and to discredit information that is inconsistent with prior beliefs (“motivated skepticism”; Ditto & Lopez, 1992; Taber & Lodge, 2006; “disconfirmation bias”; Edwards & Smith, 1996; “partisan bias”; Ditto et al., 2019). At the same time, people tend to stick to their beliefs despite contrary evidence (“belief perseverance”; C. A. Anderson et al., 1980; C. A. Anderson & Lindsay, 1998; Davies, 1997; Jelalian & Miller, 1984), which, in turn, may be explained and complemented by other lines of research. “Subtyping,” for instance, allows for holding on to a belief by categorizing belief-inconsistent information into an extra category (e.g., “exceptions”; for an overview, see Richards & Hewstone, 2001). Likewise, the application of differential evaluation criteria to belief-consistent and belief-inconsistent information systematically fosters “belief perseverance” (e.g., Sanbonmatsu et al., 1998; Trope & Liberman, 1996; see also Koval et al., 2012; Noor et al., 2019; Tarrant et al., 2012). In some cases, people hold even stronger beliefs after facing disconfirming evidence (“belief-disconfirmation effect”; Batson, 1975; see also “cognitive dissonance theory”; Festinger, 1957; Festinger et al., 1955/2011).
All of the phenomena mentioned above are expressions of the principle of belief-consistent information processing (see also Klayman, 1995). That is, although specifics in the task, the stage of information processing, and the dependent measure may vary, all of these phenomena demonstrate the systematic tendency toward belief-consistent information processing. Put differently, belief-consistent information processing emerges at all stages of information processing, such as attention (e.g., Rajsic et al., 2015), perception (e.g., Cohen, 1981), evaluation of information (e.g., Ask & Granhag, 2007; Lord et al., 1979; Richards & Hewstone, 2001; Taber & Lodge, 2006), reconstruction of information (e.g., Allport & Postman, 1947; Bartlett, 1932; Kleider et al., 2008; M. Ross & Sicoly, 1979; Sahdra & Ross, 2007; Snyder & Uranowitz, 1978), and the search for new information (e.g., Hill et al., 2008; Kunda, 1987; Liberman & Chaiken, 1992; Pyszczynski et al., 1985; Wyer & Frey, 1983)—including one’s own elicitation of what is searched for (“self-fulfilling prophecy”; Jussim, 1986; Merton, 1948; Rosenthal & Jacobson, 1968; Rosenthal & Rubin, 1978; Sheldrake, 1998; Snyder & Swann, 1978; Watzlawick, 1981). Moreover, many stages (e.g., evaluation) allow for applying various strategies (e.g., ignoring, underweighting, discrediting, reframing). Consequently, individuals have a great number of options at their disposal (think of the combinations), so that the degrees of freedom in their processing of information allow for countless possibilities for belief-consistent information processing, which may explain how belief-consistent conclusions arise even under the least likely circumstances (e.g., Festinger et al., 1955/2011).
In sum, belief-consistent information processing seems to be a fundamental principle in human information processing that is not only ubiquitous (e.g., Gawronski & Strack, 2012; Nickerson, 1998; see also Abelson et al., 1968; Feldman, 1966) but also a conditio humana. This notion is also reflected in the fact that motivation is not a necessary prerequisite for engaging in belief-consistent information processing: Several studies have shown that belief-consistent information processing arises for hypotheses for which people have no stakes in the specific outcome and thus no interest in particular conclusions (i.e., in the absence of motivated reasoning; Kunda, 1990; e.g., Crocker, 1982; Doherty et al., 1979; Evans, 1972; Klayman & Ha, 1987, 1989; Mynatt et al., 1978; Sanbonmatsu et al., 1998; Skov & Sherman, 1986; Snyder & Swann, 1978; Snyder & Uranowitz, 1978; Wason, 1960). In addition, research under the label “contextual bias” can be classified as unmotivated confirmation bias because it demonstrates how contextual features (e.g., prior information about the credibility of a person) may bias information processing (e.g., the evaluation of the quality of a statement from that person; e.g., Bogaard et al., 2014; see also Dror et al., 2006; Elaad et al., 1994; Kellaris et al., 1996; Risinger et al., 2002). In other words, the same mechanisms apply, regardless of people’s interest in the outcome (Trope & Liberman, 1996). Hence, belief-consistent information processing takes place even when people are not motivated to confirm their belief. Furthermore, belief-consistent information processing has been shown even when people are motivated to be unbiased (e.g., Lord et al., 1984), or at least want to appear unbiased. This is frequently the case in the lab, where participants are motivated to hide their beliefs (for an overview of subtle discrimination, see Bertrand et al., 2005). But it is even more true in scientific research (Greenwald et al., 1986), in forensic investigations (Dror et al., 2006; Murrie et al., 2013; Rassin et al., 2010), and in the courtroom (or legal decision-making, more generally), in which an unbiased judgment is the ultimate goal yet is rarely reached (Devine et al., 2001; Hagan & Parker, 1985; Mustard, 2001; Pruitt & Wilson, 1983; Sommers & Ellsworth, 2001; Steblay et al., 1999; for overviews, see Faigman et al., 2012; Kang & Lane, 2010). Taken together, overabundant research demonstrates that belief-consistent information processing is a pervasive phenomenon for which motivation is not a necessary ingredient.
Biases Reexplained as Confirmation Bias
Having pointed to the ubiquity of beliefs and belief-consistent information processing, let us now return to the nonexhaustive list of biases in Table 1, for which we propose to entertain the notion that they may arise from shared beliefs plus belief-consistent information processing. As can be seen at first glance, we bring together biases that have been investigated in separate lines of research (e.g., bias blind spot, hostile media bias). We argue that all of the biases mentioned in Table 1 could, in principle, be understood as a result of a fundamental underlying belief plus belief-consistent information processing. Furthermore, in specifying the fundamental beliefs, we suggest that several biases actually share the same belief (e.g., “I make correct assessments”; see Table 1)—thereby representing merely different ways in which the same underlying belief is expressed.
To be sure, the current approach does not preclude contributions from other factors to the biases at hand. We merely raise the question of whether the parsimonious combination of belief plus belief-consistent information processing alone might provide an explanation that suffices to predict the existence of the biases listed in Table 1. That is, other factors could contribute to, attenuate, or exacerbate these biases, but our recipe alone would already allow their prediction. Let us now see how some of the biases mentioned in Table 1 could be traced back to the (same) fundamental beliefs and thereby be explained by them—when acknowledging the principle of belief-consistent information processing. We do so by spelling this out for the biases based on two fundamental beliefs (“My experience is a reasonable reference” and “I make correct assessments”).
“My experience is a reasonable reference”
A number of biases seem to imply that people take both their own (current) phenomenology and themselves as starting points for information processing. That is, even when a judgment or task is about another person, people start from their own lived experience and project it—at least partly—onto others as well (e.g., Epley et al., 2004). For instance, research on phenomena falling under the umbrella of the “curse of knowledge” or “epistemic egocentrism” speaks to this issue because people are bad at taking a perspective that is more ignorant than their own (Birch & Bloom, 2004; for an overview, see Royzman et al., 2003). People overestimate, for instance, the extent to which their appearance and actions are noticed by others (“spotlight effect”; e.g., Gilovich et al., 2000), the extent to which their inner states can be perceived by others (“illusion of transparency”; e.g., Gilovich et al., 1998; Gilovich & Savitsky, 1999), and the extent to which others will grasp the intention behind an ambiguous utterance whose meaning is clear to themselves (“illusory transparency of intention”; Keysar, 1994; Keysar et al., 1998). Likewise, people overestimate similarities between themselves and others (“self-anchoring” and “social projection”; Bianchi et al., 2009; Otten, 2004; Otten & Epstude, 2006; Otten & Wentura, 2001; A. R. Todd & Burgmer, 2013; van Veelen et al., 2011; for a meta-analysis, see Robbins & Krueger, 2005), as well as the extent to which others share their own perspective (“false consensus effect”; Nickerson, 1999; for a meta-analysis, see Mullen et al., 1985).
Taken together, a number of biases seem to result from people taking—by default—their own phenomenology as a reference in information processing (see also Nickerson, 1999; Royzman et al., 2003). Put differently, people seem to—implicitly or explicitly—regard their own experience as a reasonable starting point when it comes to judgments about others and fail to adjust sufficiently. Instead of disregarding—or even discrediting—their own experience as an appropriate starting point, people rely on it. When judging, for instance, the extent to which others may perceive their own inner state, people act on the belief that their own experience of their inner state (e.g., their nervousness) is a reasonable starting point, which, in turn, guides their information processing. They might start out with a specific and directed question (e.g., “To what extent do others notice my nervousness?”) instead of an open and global question (e.g., “How do others see me?”). People might also focus on information that is consistent with their own phenomenology (e.g., their increased speech rate) as potential cues that others could draw on. Finally, people may ignore or underweight information that is inconsistent with their phenomenology (e.g., their limbs being completely calm) or discredit such observations as potentially valid cues for others. In the same vein, people assume that others draw on the same information (e.g., their increased speech rate) and that they draw the same conclusions from it (i.e., their nervousness). All of this—as well as the empirical evidence for the biases outlined within this section—suggests that people do take their own experience as a reference when processing information to arrive at judgments regarding others and how others see them.
Now let us entertain the notion that people do, by default, regard their own experience as a reasonable reference for their judgments about others. Would biases such as the spotlight effect, the illusion of transparency (of intention), false consensus, and social projection not follow from this default belief once the general human tendency toward belief-consistent information processing is taken into account? From our point of view, the answer is an emphatic “yes.” If people judge the extent to which others notice (something about) them or their inner states or intentions, hold the fundamental belief that their own experience is a reasonable reference, and engage in belief-consistent information processing, we should—by default and on average—observe an overestimation of the extent to which an aspect of oneself or one’s own inner states are noticed by others, as suggested by the spotlight effect and the illusion of transparency (of intention). Likewise, people should overestimate the extent to which others are similar to themselves (social projection) and share their own opinions and beliefs (false consensus).
This reasoning might be reminiscent of anchoring-and-(insufficient)-adjustment accounts (Tversky & Kahneman, 1974), and there are certainly parallels, so that one could speak of a mere reformulation. A crucial difference is, however, that we explicate a fundamental belief that explains why people anchor on their own phenomenology when making judgments about others: They (implicitly or explicitly) believe that their own experience is a reasonable reference, even for others. Yet another advantage of our proposed framework is that it acknowledges even more parallels to other biases and provides a more parsimonious account. After all, we argue that these biases—at their core—could be understood as a variation of confirmation bias (based on a shared belief). That is, we propose an explanation that suffices to predict the existence of these biases while clearly acknowledging that other factors may and do contribute to, attenuate, or exacerbate these biases.
“I make correct assessments”
Let us turn our attention to a second group of biases and entertain the notion that they stem from the default belief of making correct assessments, which people hold for themselves but not for others. As we argue below, biases such as the bias blind spot and the hostile media bias are almost logical consequences of people’s simple assumption that their assessments are correct.
Believing that one makes correct assessments also implies believing that one does not fall prey to biases. Precisely such a meta-bias of expecting others to be more prone (compared with oneself) to such biases has been subsumed under the phenomenon of the bias blind spot. The bias blind spot describes humans’ tendency to “see the existence and operation of cognitive and motivational biases much more in others than in themselves” (Pronin et al., 2002a, p. 369; for reviews, see Pronin, 2007; Pronin et al., 2004). If people start out from the default assumption that they make correct assessments, as suggested by our framework, one part of the bias blind spot is explained right away: the conviction that one’s own assessments are unbiased (see also Frantz, 2006). After all, trust in one’s assessments may effectively prevent the identification of one’s own biases and errors—either by failing to see the necessity to rethink judgments or by failing to identify biases therein. The other part, however, is implied in the fact that people do not hold the same belief for others (for a somewhat similar notion, see Pronin et al., 2004, 2006). Importantly, we propose that people do not generate the same fundamental beliefs about others, particularly not about a broad or vague group of others that is usually assessed in studies (e.g., the “average American” or the “average fellow classmate”; Pronin et al., 2002a, 2006; see also the section on fundamental beliefs and motivation). The logical consequence of people’s believing in the accuracy of their own assessments while simultaneously not holding the same conviction about the accuracy of others’ assessments is that people expect others to succumb to biases more often than they themselves do (e.g., Kruger & Gilovich, 1999; Miller & Ratner, 1998; van Boven et al., 1999). Another consequence is to assume errors on the part of others if discrepancies between their and one’s own judgments are observed (Pronin, 2007; Pronin et al., 2004; Ross et al., 2004, as cited in Pronin et al., 2004).
The hostile media bias describes the phenomenon whereby, for instance, partisans of conflicting groups view the same media reports about an intergroup conflict as biased against their own side (Vallone et al., 1985; see also Christen et al., 2002; Dalton et al., 1998; Matheson & Dursun, 2001; Richardson et al., 2008; for a meta-analysis, see Hansen & Kim, 2011). The reasoning of our framework here is similar to the one applied to the bias blind spot (see also Lord & Taylor, 2009): If people assume that their own assessments are correct and, by virtue of being correct, also unbiased, it is almost necessary to assume that others (people or media reports) are biased whenever their views differ. People who start from the belief that they make correct assessments process the available information (e.g., a discrepancy between their own view and media reports) in a way that is consistent with this basic belief (e.g., by attributing the discrepancy to a bias in others, not in themselves). In addition, in line with our argument that rather general mechanisms are at play, the hostile media effect was found in representative samples (e.g., Gunther & Christen, 2002) and even for people who were less connected with the issue at hand (Hansen & Kim, 2011), that is, people who were not strongly involved with the issue, which Vallone et al. (1985) had initially regarded as a prerequisite.
To summarize, we argue that the bias blind spot and the hostile media bias can essentially be explained by one fundamental underlying belief: People generally trust their assessments but do not hold the same trust for others’ assessments. As a consequence, they are overconfident and do not question their own judgment as systematically as they question the judgment of others (e.g., when confronted with a different view). Hence, we suggest that these biases are based on the same recipe (belief plus belief-consistent information processing). Even more, we suggest that these biases are based on the same fundamental belief: people’s belief that they themselves make correct assessments. By doing so, we not only provide a more parsimonious account for different biases but also bring together biases that have heretofore been treated as unrelated because they have been researched in very different areas within psychology (e.g., whereas hostile media bias is mainly addressed in the intergroup context, bias blind spot is not).
Further Clarifications and Distinctions
So far, we have attempted to show that the biases listed in Table 1 can be understood as a combination of beliefs plus belief-consistent information processing. This is not to say that no other factors or mechanisms are at play but rather to put forth the idea that belief plus belief-consistent information processing suffices as an explanation (with the corollary that the fundamental beliefs are not held for other people as well). In the next section, we add some clarifications to our approach regarding the role of “innocent” processes, motivation, and deliberation, which also differentiate our approach from others. We also contrast our reasoning with a Bayesian perspective.
The role of innocent processes
We have repeatedly emphasized the parsimony of our account, but several explanations have been put forward that are even more parsimonious in the sense that they outline how biases can emerge from innocent processes, without any prior beliefs that lead people to draw biased conclusions (e.g., Alves et al., 2018; Chapman, 1967; Fiedler, 2000; Hamilton & Gifford, 1976; Meiser & Hewstone, 2006). Instead, characteristics of the environment (e.g., information ecologies) and basic principles of information processing can lead to profoundly biased conclusions, according to these authors (e.g., evaluating members of novel groups or minorities more negatively; Alves et al., 2018). Within these frameworks, individuals’ only contribution to biased conclusions lies in their lack of metacognitive abilities that would enable them to detect (and control for) such biases (e.g., Fiedler, 2000; Fiedler et al., 2018). Obviously, a crucial difference between these accounts and our current perspective is that they start out from the notion of perfectly open-minded individuals who do not hold any relevant beliefs (i.e., a tabula rasa), whereas our main argument rests on the assumption that many biases actually result from already having beliefs. Although this difference already makes clear that these two perspectives do not necessarily compete with one another, but could—in principle—both contribute to biases (at different stages), we are very skeptical about the prevalence of open-mindedness (of not holding any prior belief; see also Fiedler, 2000, p. 662).
As outlined above, we regard beliefs as an indispensable part of human cognition because people are extremely ready to generate beliefs about the world. Therefore, we are skeptical that a truly open mind (in the sense of having literally no prior beliefs or convictions) is a prevalent case. Nevertheless, innocent circumstances (such as the information ecology) might explain a possible origin of (biased) expectations and beliefs where there were none before (see also Nisbett & Ross, 1980; Sanbonmatsu et al., 1998).
The role of motivation
One recurrent theme in the explanations of several biases is the notion of motivation (e.g., Kruglanski et al., 2020). The bias blind spot, for instance, is sometimes interpreted as an expression of individuals’ motives for superiority (see Pronin, 2007). More generally, for biases based on the beliefs “I am good” and “My group is good,” a number of explanations are based on presumed motives for a positive self-concept or even for self-enhancement (e.g., J. D. Brown, 1986; Campbell & Sedikides, 1999; Hoorens, 1993; John & Robins, 1994; Kwan et al., 2008; Sedikides & Alicke, 2012; Sedikides et al., 2005; Shepperd et al., 2008; Tajfel & Turner, 1986). Following from our account, however, such motivational antecedents are not necessary to explain biases. To be clear, we do not claim that motivation is per se irrelevant. Rather, we can well imagine that motivation may amplify each and every bias. We argue here, however, that motivation is not a necessary precondition to arrive at any of the biases listed.
Fundamental beliefs and motivation
Are people motivated to make correct assessments of the world? Probably yes. Do people need a motive to arrive at the belief that they make correct assessments? Certainly not. Instead, people might simply overgeneralize from their everyday experiences (Riedl, 1981). People almost always correctly expect darkness to follow daylight, a fall back to the ground after a jump, thirst and hunger after some period without water and food, fatigue subsequent to an extended period of intensive activity, the keys where they left them, electricity from the sockets, a hangover after a lot of alcohol, newspapers to change contents each day, doctors trying to make things better, and salary being paid regularly—just to mention a tiny fraction of the abundance of correct assessments in everyday life (D. T. Gilbert, 1991).
Not all assessments or beliefs about the world are correct, however. Crucially, various mechanisms preclude the realization that one has made incorrect assessments. First, we have already pointed out that some beliefs may be untestable or unfalsifiable—which has its own psychological advantages, as one cannot be proven wrong (Friesen et al., 2015).1 Second, people usually do not attempt to falsify their beliefs, even if that would be possible and desirable (Popper, 1963). Instead, they engage in the many ways of belief-consistent information processing that we have outlined above. This, of course, also contributes to the existence and maintenance of beliefs—and first and foremost to the belief that one makes correct assessments (see also Swann & Buhrmester, 2012). After all, processing information in a belief-consistent way and “confirming” one’s beliefs entails the experience of making correct assessments. Third, even if we set aside the human tendency toward belief-consistent information processing, it is often not possible for people to realize that their assessments are incorrect, be it because they lack direct access to the processes that influence their perceptions and evaluations (Nisbett & Wilson, 1977; Wilson & Brekke, 1994; Wilson et al., 2002; see also Frantz, 2006; Pronin et al., 2004, 2006) or because they lack a reference for comparison that would be necessary to identify biases. In the real world, for instance, people often have no access to others’ perceptions and thoughts, which generally precludes the recognition of overestimations (e.g., of the extent to which one’s own inner states are noticed by others, i.e., the illusion of transparency). Similarly, once a society has decided to hold a person captive because of the potential danger that emanates from that person, there is no chance to realize that the person was not dangerous. Likewise, people cannot systematically trace the effects of a placebo back to their own expectations (e.g., Kennedy & Taddonio, 1976; Price et al., 2008), just to mention a few instances. In other words, people cannot exert the systematic examination that characterizes scientific scrutiny (which, however, also does not preclude being biased; e.g., Greenwald et al., 1986) and thereby do not detect their incorrect assessments. Taken together, for a number of reasons, people overwhelmingly perceive themselves as making correct assessments, be it because they are correct or because they are simply not corrected. Such an overgeneralization to a fundamental belief of making correct assessments could thus actually be regarded as a reasonable extrapolation. Consequently, no motivation is needed to arrive at this fundamental belief. Rather, we expect healthy individuals to be—by default—naive realists (see also Griffin & Ross, 1991; Ichheiser, 1949; Pronin et al., 2002b; L. Ross & Ward, 1996). In other words, we propose that people generally start from the default assumption that their assessments are accurate.
Since people do not have immediate access to the experiences and phenomenology of others, however, they do not hold the same default belief for other people. This is crucial with regard to biases. After all, if people not only believed that they make correct assessments of the world but at the same time and with the same verve also believed that other people make correct assessments of the world, we would not expect biases such as the “bias blind spot” to occur. The fact that people do not hold such a conviction for others, however, does not necessarily involve motivation either—the belief may be lacking for the simple reason that people do not have immediate access to others’ experiences. Consequently, motivation is not necessary to arrive at self-other differences in this regard. Let us illustrate this with regard to in-group bias: If people merely held the belief that their own group is good (see also Cvencek et al., 2012; Mullen et al., 1992) but did not hold the same belief for other groups, in-group favoritism could result—without assuming that people believe their group to be better than other groups. Indeed, a lot of research suggests that people show automatic in-group favoritism but no parallel automatic out-group derogation (see Fiske, 1998, for an overview).
As we postulate that motivation is not necessary for the fundamental beliefs to arise, we also propose that motivation is not a necessary ingredient for self- or group-favoring outcomes. Crucially, this is at odds with the most commonly accepted theoretical explanation for in-group bias—the social-identity approach (Tajfel & Turner, 1979; Turner et al., 1987), which posits that (a) memberships in social groups are an essential part of individuals’ self-concepts (see also R. Brown, 2000) and (b) individuals strive to see themselves in a positive light. Following from these two postulates, people have a fundamental tendency to favor the social groups they identify with (i.e., in-group bias; e.g., M. B. Brewer, 2007; Hewstone et al., 2002). Contrary to this approach, we argue that no motivational component (i.e., the striving for a positive self-concept) is needed. To be sure, motivation may add to and thus likely amplify in-group bias, but we do not expect it to be a necessary precondition. In fact, our reasoning is in line with the observation that people do not show heightened self-esteem after engaging in in-group bias (for an overview, see Rubin & Hewstone, 1998), as would be expected from original social-identity theorizing.
Belief-consistent information processing and motivation
Quite frequently, belief-consistent information processing—such as in the context of confirmation bias—is equated with motivated information processing (Kunda, 1990), in which people are motivated to defend, maintain, or confirm their prior beliefs. Some authors have even suggested speaking of “my-side bias” rather than confirmation bias (e.g., Mercier, 2017). In fact, belief and motivation often come together: Some beliefs just feel better than others, and “people find it easier to believe propositions they would like to be true than propositions they would prefer to be false” (Nickerson, 1998, p. 197; see also Kruglanski et al., 2018; on the “Pollyanna principle,” see Matlin, 2017). In addition, people may have already invested a lot in some beliefs (e.g., one’s beliefs about the optimal parenting style or about God/paradise; e.g., Festinger et al., 1955/2011; see also McNulty & Karney, 2002) so that these beliefs are psychologically extremely costly to give up (e.g., ideologies/political systems one has supported for a long time; Lord & Taylor, 2009). Hence, wanting a belief to be true likely amplifies belief-consistent information processing (Kunda, 1990; see also Tesser, 1978) and may even include strategic components (e.g., the deliberate search for belief-consistent information; Festinger et al., 1955/2011; Yong et al., 2021). But despite this prevalent association of confirmation bias and motivated information processing, the latter is not a necessary precondition of the former. On the contrary, as already outlined above, belief-consistent information processing takes place when people are not motivated to confirm their belief as well as when people are motivated to be unbiased or at least want to appear unbiased. Consequently, belief-consistent information processing is a fundamental principle that is not contingent on motivation.
The role of deliberation
Although deliberation is not entirely independent of motivation, it deserves extra discussion because it can be and has been viewed as a remedy for biases. Specifically, knowledge about the specific bias, the availability of resources (e.g., time), as well as the motivation to deliberate are considered to be necessary and sufficient preconditions to effectively counter bias according to some models (e.g., Oswald, 2014; for a similar case, see the potential solutions proposed by Nickerson, 1999). Although this might be true for logical problems that suggest an immediate (but wrong) solution to participants (e.g., “strategy-based” errors in the sense of Arkes, 1991), much research attests to people’s failure to correct for biases even if they are aware of the problem, urged or motivated to avoid them, and are provided with the necessary opportunity (e.g., Harley, 2007; Lieberman & Arndt, 2000; for a meta-analysis about ignoring inadmissible evidence, see Steblay et al., 2006).
A very plausible reason for this is that people fail to take effective countermeasures spontaneously (Giroux et al., 2016; Kelman et al., 1998). Recall the biases that we suggested might follow from overgeneralizing one’s own phenomenology. Overgeneralizing one’s own phenomenology, in turn, effectively boils down to ignoring information one has. This may be quite easy if the nature of the information to be ignored and the judgment to be made are clear-cut, as is the case in most theory-of-mind paradigms (for an overview, see Flavell, 2004). In a typical false-belief study, for instance, people are required to set aside their knowledge that an item was removed from its place in the absence of a person who had previously observed its placement, and to indicate where that person will believe the item to be. Essentially, the information in this task is binary (present vs. not present; i.e., people have to ignore the knowledge that the item was removed and therefore is not in its original place anymore). In addition, the information to be ignored refers to an aspect of physical reality that is (a) objective in that interindividual agreement should be perfect in this regard and (b) readily accessible (see also Clark & Marshall, 1981). Consequently, not only the information to be ignored but also its impact on the required judgment may be unequivocally and exhaustively identified—and therefore effectively controlled for.
However, the situation is substantially different in the tasks underlying the spotlight effect, the illusion of transparency (of intention), and the false consensus effect (and egocentrism in similar tasks; e.g., Chambers & De Dreu, 2014; for other association-based errors, see Arkes, 1991). Here, the information to be ignored is often not binary (e.g., one’s emotions, one’s attitudes) and therefore also not necessarily entirely and unequivocally identifiable to people themselves (i.e., in its specific extent or intensity). Furthermore, even without the requirement to ignore some information, the task is much fuzzier in itself (i.e., determining how others view oneself, the extent to which others can tell what is going on in one’s head, or how others feel about certain topics). These tasks lack the objectivity and the knowledge required to undo the influence of the information to be ignored (for an elimination of the false consensus effect when representative information is readily available, see Engelmann & Strobel, 2012; see also Bosveld et al., 1994).
Under these circumstances, the simple attempt to ignore information likely fails (e.g., Fischhoff, 1975, 1977; Pohl & Hell, 1996; Steblay et al., 2006; see also Dror et al., 2015; Servick, 2015). After all, the very information to ignore is not even clearly identifiable, nor is its impact on the task—which would need to be determined to be able to correct for it. Consequently, the very obvious strategy people likely choose—to somehow inhibit or ignore information they have—is ineffective. Thus, exhibiting a bias may not be due to a lack of deliberation. In addition, (unspecific) deliberation alone might not help in avoiding biases (in fact, more deliberation may even entail more belief-consistent information processing and, thus, more bias; Nestler et al., 2008). Rather, the avoidance of biases might need a specific form of deliberation. Interestingly, much research shows that there is an effective strategy for reducing many biases: to challenge one’s current perspective by actively searching for and generating arguments against it (“consider the opposite”; Lord et al., 1984; see also Arkes, 1991; Koehler, 1994). This strategy has proven effective for a number of different biases such as confirmation bias (e.g., Lord et al., 1984; O’Brien, 2009), the “anchoring effect” (e.g., Mussweiler et al., 2000), and the hindsight bias (e.g., Arkes et al., 1988). At least in part, it even seems to be the only effective countermeasure (e.g., for the hindsight bias, see Roese & Vohs, 2012). Essentially, this is another argument for the general reasoning of this article, namely that biases are based on the same general process—belief-consistent information processing. Consequently, it is not the amount of deliberation that should matter but rather its direction. Only if people tackle the beliefs that guide—and bias—their information processing and systematically challenge them by deliberately searching for belief-inconsistent information should we observe a significant reduction in biases—or possibly even an unbiased perspective. From the perspective of our framework, we would thus derive the hypothesis that the listed biases could be reduced (or even eliminated) if people deliberately considered the opposite of the proposed underlying fundamental belief by explicitly searching for information that is inconsistent with it. That is, we would expect a significant reduction of the spotlight effect, the illusion of transparency (of intention), the false consensus effect, and social projection if people were led to deliberately consider the notion, and search for information suggesting, that their own experience might not be an adequate reference for the respective judgments about others. Likewise, we would expect a debiasing effect on the bias blind spot and the hostile media bias if people deliberately considered the notion that they do not make correct assessments. Put differently, if our framework is correct, the parsimony on the level of explanation would also translate to parsimony on the level of debiasing: The very same strategy could be effective for various biases.
Bayesian belief updating
Our framework suggests a unifying look at how people with existing beliefs process information. As such, it contains two ingredients that also figure prominently in Bayesian belief updating (e.g., Chater et al., 2010; Jones & Love, 2011). In an idealized Bayesian world, people hold beliefs (i.e., priors), and any new information will either solidify or attenuate these beliefs depending on its consistency with the prior. Importantly, however, strong prior beliefs will not be changed dramatically by just one weak additional bit of information. Instead, meaningfully changing firmly held beliefs requires extremely strong or a great deal of contradictory evidence. Cumulative and consistent experience with the world will thus often lead to a situation in which a new bit of information seems negligible and will not evoke a great deal of belief updating. This may sound reminiscent of our approach of fundamental (i.e., strong prior) beliefs plus belief-consistent information processing, but there are marked differences, as we briefly elaborate.
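To make this idealized picture concrete, consider a minimal worked example of Bayesian updating (the numbers are our own illustrative assumptions, not taken from the cited literature): a firmly held belief H with prior P(H) = .95 meets one weakly contradictory piece of evidence E, with P(E | H) = .40 and P(E | not-H) = .60.

```latex
% Bayes' rule applied to the illustrative numbers above:
\[
P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  = \frac{.40 \times .95}{.40 \times .95 + .60 \times .05}
  \approx .93
\]
```

The posterior drops only from .95 to roughly .93: in the idealized framework, the evidence is interpreted identically by every recipient and is merely outweighed by the strong prior.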
First, information processing in the classical Bayesian framework is not biased. Although the possibility of biased prior beliefs is well acknowledged (Jones & Love, 2011), rational processing of novel information is a core assumption. That is, the same bit of information means the exact same thing for each recipient; it will just affect their beliefs to differing degrees because they have different and differently strong priors. This is dramatically different from our perspective with its focus on how the same bit of information is attributed, remembered, processed, interpreted, and attended to differently as a function of one’s prior beliefs (see also Mandelbaum, 2019). This notion of biased information processing is utterly absent from the Bayesian world (see also next section). Take, for instance, the finding that the same behavior (e.g., torture) is evaluated differently depending on whether the actor is a member of one’s own group or of another group (e.g., Noor et al., 2019; Tarrant et al., 2012). Or likewise, take the differential evaluation of the same scientific method depending on whether its result is consistent or inconsistent with one’s prior belief (e.g., Lord et al., 1979). Both are incompatible with the fundamental idea of Bayesian belief updating. And more generally, our approach is about the impact of prior beliefs on the processing of information rather than the impact of (novel) information on prior beliefs. Given these stark differences, it is not surprising that many predictions we derive from our understanding are not derivable from a Bayesian belief-updating perspective.
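To illustrate the contrast drawn in the preceding paragraph, the following sketch (a toy illustration of our own; the function names, the discounting rule, and the numbers are assumptions rather than a model taken from the cited studies) contrasts an idealized Bayesian updater, for whom a piece of evidence carries the same meaning regardless of the prior, with a belief-consistent processor who partly discredits belief-inconsistent evidence before updating:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Idealized Bayesian updating: the evidence carries the same
    likelihoods for every recipient; only the prior differs."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

def belief_consistent_update(prior, p_e_given_h, p_e_given_not_h, discount=0.8):
    """Toy model of belief-consistent processing: evidence that speaks
    against the belief is partly discredited (its diagnosticity is
    shrunk) before the update, so the same evidence effectively means
    something different to a convinced believer."""
    if p_e_given_h < p_e_given_not_h:  # evidence contradicts the belief
        p_e_given_h += discount * (p_e_given_not_h - p_e_given_h)
    return bayes_update(prior, p_e_given_h, p_e_given_not_h)

prior = 0.95  # firmly held belief
print(bayes_update(prior, 0.40, 0.60))             # ~0.93: small but unbiased revision
print(belief_consistent_update(prior, 0.40, 0.60)) # ~0.95: the discounted evidence barely registers
```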
Second, by relying on the rich empirical evidence on belief-consistent information processing, we explicitly emphasize the many ways in which novel information is already a result of biased information processing: When people selectively attend to or search for belief-consistent information (positive testing, selective exposure, congeniality bias), when they selectively reconstruct belief-consistent information from their memory, and when they behave in a way such that they themselves elicit the phenomenon they searched for (self-fulfilling prophecy), they already display bias (see also next section). People are biased in eliciting new data, and those data are then processed; people do not simply update their beliefs on the basis of information they (more or less arbitrarily) encounter in the world. As a result, people likely gather a biased subsample of information, which, in turn, will not only lead to biased prior beliefs but may also lead to strong beliefs that are actually based on rather little (and entirely homogeneous) information. But there are further, and more extreme, ways in which prior beliefs may bias information processing: Prior beliefs may, for instance, affect whether or not a piece of information is regarded as informative at all for one’s beliefs (Fischhoff & Beyth-Marom, 1983). Categorizing belief-inconsistent information into an extra class of exceptions (that are implicitly uninformative with regard to the hypothesis) is one such example (i.e., subtyping; see also Kube & Rozenkrantz, 2021). Likewise, discrediting a source of information easily legitimates the neglect of that information (see disconfirmation bias). In its most extreme form, however, prior beliefs may not be put to a test at all. Instead, people may treat them as facts or definite knowledge, which may lead people to ignore all further information or to classify belief-inconsistent information simply as false.
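The claim that people partly generate their own evidence can be illustrated with a second toy simulation (again our own sketch under assumed parameters, not an empirical model): an agent who preferentially consults sources expected to be belief-consistent ends up with a lopsided sample of evidence even though the world itself is perfectly uninformative (half of all sources support the belief, half contradict it).

```python
import random

def sample_evidence(selective, n_sources=1000, p_consult_consistent=0.9):
    """Each source supports or contradicts the belief with probability .5.
    A selective agent mostly consults sources that happen to be
    belief-consistent; a nonselective agent consults every source.
    Returns the share of supporting evidence in the consulted sample."""
    consulted = []
    for _ in range(n_sources):
        supports = random.random() < 0.5           # truly 50/50 world
        p_consult = 1.0 if not selective else (
            p_consult_consistent if supports else 1 - p_consult_consistent)
        if random.random() < p_consult:
            consulted.append(supports)
    return sum(consulted) / len(consulted)

random.seed(42)
print(sample_evidence(selective=False))  # ~0.5: a representative sample
print(sample_evidence(selective=True))   # ~0.9: a belief-confirming sample
```

Even a fully Bayesian updater fed the second, self-selected stream of evidence would end up with a confident belief resting on a biased and homogeneous subsample of information.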
In sum, then, our reasoning deviates from the Bayesian approach in that its core assumption is one of biased (vs. unbiased) information processing. Specifically, we propose that prior beliefs bias the processing of novel information as well as other stages of information processing, including those that elicit (novel) information. Furthermore, the hypotheses that follow from our framework cannot likewise be derived from the Bayesian perspective.
Bias, rationality, and adaptivity
Given that this article and its presented framework are about biases, it seems reasonable to add a few elaborations on this as well as related concepts. Although we are mainly proposing a framework to explain biases that have already been documented and defined by others, it is noteworthy that all of the biases listed in Table 1 reflect one of the following two conceptualizations of the term “bias”: On the one hand, some of these biases are defined as a systematic deviation from an objectively accurate reference. For instance, if people are convinced that their opinions and beliefs are shared to a larger extent than is actually the case, their judgment about others deviates from the objective reference (i.e., others’ actual opinions and beliefs) and thus indicates a false consensus. Essentially, all biases that refer to an overestimation or underestimation build on the comparison between people’s judgments and the actual (empirical) reference. This is possible because the judgment itself refers to some aspect of the world that can be directly assessed and, thus, compared.
On the other hand, for several judgments such a reference for comparison is lacking. For instance, with what could or should one compare a person’s evaluation of a scientific method or a moral judgment in order to draw conclusions about potential biases? A typical approach is to examine whether the same target (e.g., a scientific method, the behavior of another person) is evaluated differently depending on factors that should actually be irrelevant. In other words, bias is here conceptualized (or rather demonstrated) as an impact of factors that should not play a role (i.e., the influence of unwarranted factors). For instance, if the identical scientific method is evaluated differently depending on whether it supports or challenges one’s prior beliefs (i.e., is discredited when yielding belief-inconsistent results; Lord et al., 1979), this denotes disconfirmation bias (Edwards & Smith, 1996) or partisan bias (Ditto et al., 2019). Likewise, when the very same behavior (e.g., torture, violent attacks) is evaluated differently depending on whether the actor is a member of one’s own group or of another group, one speaks of “in-group bias” (e.g., Noor et al., 2019; Tarrant et al., 2012). In other words, bias in this case is operationalized as a systematic difference in information processing and its outcome as a function of unwarranted factors. This notion of unwarranted factors also differentiates biases from other phenomena: For instance, we would not speak of bias in the case of experimental manipulations (e.g., mood inductions) affecting individuals’ retrieval of happy memories to regulate their mood (e.g., Josephson, 1996). However, if the same manipulation affected individuals’ perception of novel information (e.g., Forgas & Bower, 1987; Wright & Bower, 1992), we would subsume it under the umbrella term “bias.” This aligns well with our definition of beliefs as hypotheses about the world that come along with the notion of accuracy. In other words, it is about beliefs that state or claim something to be true, for example, “I make correct assessments” or “I am good,” regardless of whether or not it actually is true (e.g., “This was the biggest inauguration audience ever”; see also above).
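As a rough illustration of the two operationalizations just described, the following sketch (a toy example of our own; all variable names and numbers are hypothetical, not data from the cited studies) computes (a) bias as a deviation from an empirical reference, as in false-consensus designs, and (b) bias as the effect of a normatively irrelevant factor, as in in-group-bias designs:

```python
# (a) Bias as deviation from an objective reference: participants estimate
# the share of others who agree with them, and the mean estimate is
# compared with the actual share (hypothetical numbers).
estimated_consensus = [0.75, 0.80, 0.70, 0.85]   # participants' estimates
actual_consensus = 0.55                           # empirical reference
overestimation = sum(estimated_consensus) / len(estimated_consensus) - actual_consensus
print(f"mean overestimation of consensus: {overestimation:+.2f}")

# (b) Bias as the impact of an unwarranted factor: the identical behavior
# is rated differently depending only on the actor's group membership
# (hypothetical ratings on a 1-7 scale).
ratings_ingroup_actor = [4.5, 5.0, 4.0, 4.8]
ratings_outgroup_actor = [3.0, 3.5, 2.8, 3.2]
group_effect = (sum(ratings_ingroup_actor) / len(ratings_ingroup_actor)
                - sum(ratings_outgroup_actor) / len(ratings_outgroup_actor))
print(f"rating difference attributable to group membership: {group_effect:+.2f}")
```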
Because biases have been conceptualized with a sense of accuracy in mind and a plethora of research has by now documented people’s biases, thus pointing to the frequent inaccuracy of their judgments, two secondary questions have been raised and lively debated in the past: the question of the (ir)rationality of biases and the question of the adaptivity of (certain) biases (e.g., Evans, 1991; Evans et al., 1993; Fiedler et al., 2021; Gigerenzer et al., 2011; Gigerenzer & Selten, 2001; Hahn & Harris, 2014; Haselton et al., 2009; Oaksford & Chater, 1992; Sedlmeier et al., 1998; Simon, 1990; Sturm, 2012; P. M. Todd & Gigerenzer, 2001, 2007). In particular, it has been argued that many biases and heuristics could be regarded as rational in the context of real-world settings, in which people lack complete knowledge and have imperfect memory as well as limited capacities (e.g., Gigerenzer et al., 2011; Simon, 1990). In the same context, researchers have argued that some of these heuristics lead to biases mainly in specific lab tasks while resulting in rather accurate judgments in many real-world situations (e.g., Fiedler et al., 2021; Sedlmeier et al., 1998; P. M. Todd & Gigerenzer, 2001). In other words, they argued for the adaptivity of these heuristics, which yield mostly correct judgments, whereas research focuses on the few (artificial) situations in which they lead to incorrect results (i.e., biases). Apart from the fact that this debate mainly revolved around heuristics and biases that we do not deal with here (e.g., the set of heuristics introduced by Tversky & Kahneman, 1974), an adequate treatment of rationality and adaptivity is beyond the scope of this article for two reasons. First, consideration of the (ir)rationality as well as the adaptivity of biases is a complex and rich topic that could fill an article on its own. One factor that complicates the topic is that there is no single conceptualization of rationality but a variety of different perspectives on it (e.g., normative vs. descriptive, theoretical vs. practical, process vs. outcome; for an overview, see Knauff & Spohn, 2021), each of which comes with different definitions of rationality or standards of comparison that allow for conclusions about rationality. The same holds for adaptivity, because it would inevitably have to be clarified what adaptivity refers to (e.g., survival; success, in whatever sense; accurate representations). Second, and even more importantly from a research perspective, biases are first and foremost phenomena evidenced by data and empirical observations, whereas the question of their rationality is essentially an evaluation of those observations and, thus, yet another issue. In presenting a framework of common underlying mechanisms, however, this article’s focus is on the explanation of biases, not on their (normative) evaluation.
Broadening the Scope
Let us return to the application of our recipe to biases. Above, we spelled out our reasoning in detail by taking two fundamental beliefs and discussing how they might explain a number of biases. Specifically, we put up for discussion that the general recipe of a belief plus belief-consistent information processing may suffice to produce the biases listed in Table 1. It is beyond the scope of the current article to repeat this exercise for each of the remaining biases in Table 1. Instead, we would like to reexamine additional phenomena under this unifying lens.
Let us begin with hindsight bias, the tendency to overestimate what one could have known before the fact (Fischhoff, 1975; for an overview, see Roese & Vohs, 2012; for meta-analyses, see Christensen-Szalanski & Willham, 1991; Guilbault et al., 2004). That people overestimate the extent to which uninformed others could know about an outcome or event that they themselves have already learned about could likewise be understood as people taking their own experience as a reference when making judgments about others. When judgments are about one’s own earlier, still-ignorant state, however, our framework would at least need the extra specification that people take their current experience as a reference when asked about previous times, which is quite plausible (e.g., Levine & Safer, 2002; Markus, 1986; McFarland & Ross, 1987; Wolfe & Williams, 2018). In more general terms, people oftentimes hold the (erroneous) conviction that they have held their current beliefs forever (e.g., Greenwald, 1980; Swann & Buhrmester, 2012; von der Beck et al., 2019).
Several other phenomena that are usually not conceptualized as biases, or at least not linked to bias research, could essentially be understood as variations of confirmation bias as well. Stereotypes, for example, are basically beliefs people hold about others (“people of group X are Y”) and likewise elicit belief-consistent information processing and even behavior (e.g., discrimination). Belief in specific conspiracy theories might be understood as an expression of the rather general belief that “seemingly random events were intentionally brought about by a secret plan of powerful elites.” This basic belief as an underlying principle might provide a parsimonious explanation of why the endorsements of various conspiracy theories flock together (Bruder et al., 2013); of why such a “conspiracy mentality” is correlated with the general tendency to see agency (anthropomorphism; Imhoff & Bruder, 2014), negative intentions of others (Frenken & Imhoff, 2022), and patterns where there are none (van Prooijen et al., 2018); and of why it goes along with other paranormal beliefs that play down the role of randomness (Pennycook et al., 2015).
When we consider the breadth of our conceptualization of beliefs, it becomes clear that the integrative potential of our account might be even larger: The beliefs we have elaborated on and presented in Table 1 are likely rather fundamental beliefs in that they are chronically accessible and central to people. Recall, however, that our conceptualization of beliefs also entails beliefs that are rather irrelevant to a person and only situationally induced. In consideration of this fact, a number of experimental manipulations may be subsumed under our reasoning as well. Across diverse research fields, scholars have provided participants with the task of testing a given hypothesis. Although such experimenter-generated hypotheses are clearly different from long-held and widely shared beliefs, they follow a similar recipe if we regard them as situationally induced beliefs. For instance, the hypothesis that Snyder and Swann (1978) had their participants examine (whether target person X is introverted or extraverted) can be regarded as a situationally induced belief that is examined by participants. Even if they did not generate the belief themselves and even if they were indifferent with regard to its truth, it guided their information processing and systematically led to the confirmation of the induced belief. So the question arises as to whether a number of experimental manipulations (e.g., assimilation vs. contrast, Mussweiler, 2003, 2007; promotion vs. prevention focus, Freitas & Higgins, 2002; Galinsky et al., 2005; mindset inductions, Burnette et al., 2013; Taylor & Gollwitzer, 1995) could also be treated as experimenter-induced beliefs that elicit belief-consistent information processing. In that case, a plethora of psychological findings could be integrated into one overarching model.
Summary and Novel Hypotheses
Now that we have outlined our reasoning in detail and highlighted its integrative potential, let us turn to the hypotheses it generates. The main hypothesis (H1), which we have repeatedly mentioned throughout, is that several biases can be traced back to the same basic recipe of a belief plus belief-consistent information processing. Undermining belief-consistent information processing (e.g., by successfully eliciting a search for belief-inconsistent information) should, according to this logic, attenuate biases. Thus, to the extent that an explicit instruction to “consider the opposite” (of the proposed underlying belief) is effective in undermining belief-consistent information processing, it should attenuate virtually any bias to which our recipe is applicable, even if this has not been documented in the literature so far. Conversely, cumulative evidence that experimentally assigning such a strategy fails to reduce the biases named here would speak against our model.
At the same time, we have proposed that several biases are actually based on the same beliefs, which leads to the assumption that biases sharing the same underlying belief should show a positive correlation, or at least a stronger positive correlation than biases that are based on different beliefs (H2). Thus, collecting data from a whole battery of bias tasks would allow a confirmatory test of whether the underlying beliefs serve as organizing latent factors that can explain the correlations between the different bias manifestations.
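To make the logic of such a test concrete, the following is a minimal sketch of how H2 could be examined with confirmatory factor analysis. It is purely illustrative: the column names (e.g., spotlight, transparency, false_consensus) and the assignment of bias tasks to latent belief factors are hypothetical placeholders rather than a validated battery, and the sketch assumes the Python package semopy for structural equation modeling (analogous models could be specified with lavaan in R).

```python
# Illustrative confirmatory factor analysis for H2 (hypothetical data and task names).
# Bias tasks proposed to rest on the same fundamental belief should load on a common
# latent factor, and the belief-based model should fit better than a single-factor rival.
import pandas as pd
from semopy import Model, calc_stats

# One row per participant, one standardized score per bias task (assumed file name).
data = pd.read_csv("bias_battery.csv")

# Measurement model: latent factors correspond to proposed fundamental beliefs.
belief_model = Model("""
ExperienceAsReference =~ spotlight + transparency + false_consensus
GroupAsReference =~ ingroup_favoritism + hostile_media + intergroup_attribution
""")
belief_model.fit(data)

# Rival model: all tasks load on one undifferentiated "general bias" factor.
single_factor = Model("GeneralBias =~ spotlight + transparency + false_consensus + ingroup_favoritism + hostile_media + intergroup_attribution")
single_factor.fit(data)

# Inspect loadings and compare fit indices (e.g., CFI, RMSEA, AIC) across models.
print(belief_model.inspect())
print(calc_stats(belief_model))
print(calc_stats(single_factor))
```

A better fit of the belief-based model relative to the single-factor model would be consistent with the claim that shared beliefs, rather than a generic bias proneness, organize the correlations among bias tasks.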
Further hypotheses follow from the fact that one fundamental belief is a special case in that its content inherently relates to biases: the belief that one makes correct assessments. Essentially, it might be regarded as a kind of “g factor” of biases (for a similar idea, see Fiedler, 2000; Fiedler et al., 2018; Metcalfe, 1998). Following from this, we expect natural (e.g., interindividual) or experimentally induced differences in the belief of making correct assessments (e.g., undermining it; for discussions of the phenomenon of gaslighting, see, e.g., Gass & Nichols, 1988; Rietdijk, 2018; Tobias & Joseph, 2020) to be mirrored not only in biases based on this belief but also in biases based on other beliefs (H3). Moreover, given that we essentially regard several biases as a tendency to confirm the underlying fundamental belief (via belief-consistent information processing), “successfully” biased information processing should nourish the belief that one makes correct assessments, as one’s prior beliefs have been confirmed (H4). For example, people who believe their group to be good and who engage in belief-consistent information processing that leads them to conclusions confirming this belief are at the same time confirmed in their conviction that they make correct assessments of the world. The same should hold for other biases such as the “better-than-average effect” or “outcome bias.” If I believe myself to be better than average, for instance, and subsequently engage in confirmatory information processing by comparing myself with others who have lower abilities in the particular domain in question, this should strengthen my belief that I generally assess the world correctly. Likewise, if I believe that it is mainly people’s attributes that shape outcomes and, consistent with this belief, attribute a company’s failure to its CEO’s mismanagement, I am “confirmed” in my belief that I make correct assessments. Only if belief-consistent information processing failed would the belief that one makes correct assessments likewise not be nourished. This is, however, not very likely given the plethora of research showing that people may see confirmation of a basic belief even if there is actually none or only equivocal confirmation (e.g., Doherty et al., 1979; Friesen et al., 2015; Isenberg, 1986; Lord et al., 1979), let alone disconfirmation (Festinger et al., 1955/2011; Traut-Mattausch et al., 2004). If, however, engaging in any (other) form of bias expression attenuated biases that follow from the belief of making correct assessments, this would strongly speak against our rationale.
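To illustrate the dynamic proposed in H4, consider the following toy simulation. It is not an empirical claim or a model of any particular study; the update rule and all parameter values are arbitrary assumptions chosen only to show how a belief-consistent reading of mostly ambiguous evidence would, round after round, nourish an agent’s confidence that it makes correct assessments.

```python
# Toy illustration of H4 (arbitrary assumptions throughout): an agent reads ambiguous
# evidence as confirming its prior belief; every perceived confirmation nudges up its
# confidence that it "makes correct assessments."
import random

def simulate(rounds=1000, p_ambiguous=0.7, step=0.01, seed=42):
    random.seed(seed)
    confidence = 0.5  # initial confidence in "I make correct assessments"
    for _ in range(rounds):
        if random.random() < p_ambiguous:
            confirmed = True                   # belief-consistent reading of ambiguous evidence
        else:
            confirmed = random.random() < 0.5  # unambiguous evidence confirms only at chance
        if confirmed:
            confidence += step * (1 - confidence)  # perceived confirmation nourishes confidence
        else:
            confidence -= step * confidence        # rare disconfirmation dampens it
    return confidence

print(round(simulate(), 2))  # ends well above the initial .5 because most evidence is read as confirming
```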
There is one exception, however. If people were aware that they were processing information in a biased way and were unable to rationalize this way of proceeding, biases should not be expressed, because doing so would threaten their belief in making correct assessments. In other words, the belief in making correct assessments should constrain biases based on other beliefs, because people are motivated to maintain an illusion of objectivity regarding the manner in which they derive their inferences (Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998). Thus, there is a constraint on motivated information processing: People need to be able to justify their conclusions (Kunda, 1990; see also C. A. Anderson et al., 1980; C. A. Anderson & Kellam, 1992). If people were stripped of this possibility, that is, if they were not able to justify their biased information processing (e.g., because they are made aware of their potential bias and fear that others could become aware of it as well), we should observe attempts to reduce that particular bias, and an effective reduction if people knew how to correct for it (H5).
Above and beyond these rather general hypotheses, further corollaries of our account unfold. For instance, we would expect the same group favoritism for groups that people neither belong to nor identify with but that they believe to be good (H6). This hypothesis would not be predicted by the social-identity approach (Tajfel & Turner, 1979; Turner et al., 1987), which is most commonly referred to when explaining in-group favoritism.
Conclusion
There have been many prior attempts at synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments, or have implicitly made assumptions similar to the ones outlined here, and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.
To put it briefly, theoretical advancement requires integration and parsimony (the integrative potential) as well as novel ideas and hypotheses (the generative potential). We believe that the framework for understanding bias proposed in this article has merit in both respects. We hope to instigate discussion as well as empirical scrutiny, with the ultimate goal of identifying common principles across the several disparate research strands that have heretofore sought to understand human biases.
Transparency
Action Editor: Timothy J. Pleskac
Editor: Klaus Fiedler
Declaration of Conflicting Interests: The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.
Funding: This research was supported by Leibniz Gemeinschaft Grant SAW-2017-IWM-4 (to A. Oeberst) and German Research Foundation Grant OE 604/3-1 (to A. Oeberst).
Footnotes
1. One aspect that may additionally contribute to confirmation is the vagueness of the beliefs. Note that the beliefs we propose to underlie several biases below are rather fundamental in nature and are thus rather abstract and global (e.g., “I am good”). Obviously, there are several variations of the belief to be good—depending, for instance, on the domain or dimension that is evaluated (e.g., morality, competence) and the specific context (e.g., in a game, at work, on weekends). Moreover, there may certainly be exceptions to it (e.g., “I am generally a moral person, but I am aware that I am stingy when it comes to anonymous donations”), but the general beliefs still function as a kind of default that guides information processing. Their variability may actually contribute to their confirmation because it leaves so many degrees of freedom (e.g., “The money that I do not donate is spent on other choices of moral integrity, and the fact that I admit not donating further reflects on my honesty and thus ultimately my morality”; Dunning et al., 1989).
References
Abelson R. P., Aronson E. E., McGuire W. J., Newcomb T. M., Rosenberg M. J., Tannenbaum P. H. (1968). Theories of cognitive consistency: A sourcebook. Rand-McNally.
Alicke M. D., Govorun O. (2005). The better-than-average effect. In Alicke M. D., Dunning D. A., Krueger J. I. (Eds.), The self in social judgment (pp. 85–106). Psychology Press.
Alloy L. B., Abramson L. Y. (1979). Judgment of contingency in depressed and nondepressed students: Sadder but wiser? Journal of Experimental Psychology: General, 108(4), 441–485. https://doi.org/10.1037/0096-3445.108.4.441
Alloy L. B., Clements C. M. (1992). Illusion of control: Invulnerability to negative affect and depressive symptoms after laboratory and natural stressors. Journal of Abnormal Psychology, 101(2), 234–245. https://doi.org/10.1037/0021-843X.101.2.234
Allport G. W., Postman L. (1947). The psychology of rumor. Henry Holt & Co.
Anderson C. A., Kellam K. L. (1992). Belief perseverance, biased assimilation, and covariation detection: The effects of hypothetical social theories and new data. Personality and Social Psychology Bulletin, 18(5), 555–565. https://doi.org/10.1177/0146167292185005
Anderson C. A., Lepper M. R., Ross L. (1980). Perseverance of social theories: The role of explanation in the persistence of discredited information. Journal of Personality and Social Psychology, 39(6), 1037–1049.
Arkes H. (1991). Costs and benefits of judgment errors: Implications for debiasing. Psychological Bulletin, 110(3), 486–498.
Baron J., Hershey J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54(4), 569–579.
Bartlett F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge University Press.
Batson C. D. (1975). Rational processing or rationalization? The effect of disconfirming information on a stated religious belief. Journal of Personality and Social Psychology, 32, 176–184.
Bertrand M., Chugh D., Mullainathan S. (2005). Implicit discrimination. The American Economic Review, 95(2), 94–98.
Bianchi M., Machunsky M., Steffens M. C., Mummendey A. (2009). Like me or like us. Is ingroup projection just social projection? Experimental Psychology, 56(3), 198–205. https://doi.org/10.1027/1618-3169.56.3.198
Bianchi M., Mummendey A., Steffens M. C., Yzerbyt V. Y. (2010). What do you mean by “European”? Evidence of spontaneous ingroup projection. Personality and Social Psychology Bulletin, 36(7), 960–974. https://doi.org/10.1177/0146167210367488
Bogaard G., Meijer E. H., Vrij A., Broers N. J., Merckelbach H. (2014). Contextual bias in verbal credibility assessment: Criteria-based content analysis, reality monitoring and scientific content analysis. Applied Cognitive Psychology, 28(1), 79–90. https://doi.org/10.1002/acp.2959
Bosveld W., Koomen W., van der Pligt J. (1994). Selective exposure and the false consensus effect: The availability of similar and dissimilar others. British Journal of Social Psychology, 33(4), 457–466. https://doi.org/10.1111/j.2044-8309.1994.tb01041.x
Brewer M. B. (2007). The social psychology of intergroup relations: Social categorization, ingroup bias, and outgroup prejudice. In Kruglanski A. W., Higgins E. T. (Eds.), Social psychology: Handbook of basic principles (pp. 695–715). Guilford Press.
Brewer W. F., Nakamura G. V. (1984). The nature and functions of schemas. In Wyer Jr. R. S., Srull T. K. (Eds.), Handbook of social cognition (pp. 119–160). Lawrence Erlbaum Associates.
Bruder M., Haffke P., Neave N., Nouripanah N., Imhoff R. (2013). Measuring individual differences in generic beliefs in conspiracy theories across cultures: Conspiracy Mentality Questionnaire. Frontiers in Psychology, 4, Article 225. https://doi.org/10.3389/fpsyg.2013.00225
Burnette J. L., O’Boyle E. H., VanEpps E. M., Pollack J. M., Finkel E. J. (2013). Mind-sets matter: A meta-analytic review of implicit theories and self-regulation. Psychological Bulletin, 139(3), 655–701. https://doi.org/10.1037/a0029531
Chater N., Oaksford M., Hahn U., Heit E. (2010). Bayesian models of cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 1, 811–823.
Christen C. T., Kannaovakun P., Gunther A. C. (2002). Hostile media perceptions: Partisan assessments of press and public during the 1997 United Parcel Service strike. Political Communication, 19(4), 423–436. https://doi.org/10.1080/10584600290109988
Clark H. H., Marshall C. R. (1981). Definite reference and mutual knowledge. In Joshi A. K., Webber B. L., Sag I. A. (Eds.), Elements of discourse understanding (pp. 10–63). Cambridge University Press.
Cohen C. E. (1981). Person categories and social perception: Testing some boundaries of the processing effect of prior knowledge. Journal of Personality and Social Psychology, 40(3), 441–452. https://doi.org/10.1037/0022-3514.40.3.441
Cvencek D., Greenwald A. G., Meltzoff A. N. (2012). Balanced identity theory. Review of evidence for implicit consistency in social cognition. In Gawronski B., Strack F. (Eds.), Cognitive consistency: A fundamental principle in social cognition (pp. 157–177). Guilford Press.
Dalbert C. (2009). Belief in a just world. In Leary M. R., Hoyle R. H. (Eds.), Handbook of individual differences in social behavior (pp. 288–297). Guilford Press.
Dalton R. J., Beck P. A., Huckfeldt R. (1998). Partisan cues and the media: Information flows in the 1992 presidential election. American Political Science Review, 92(1), 111–126. https://doi.org/10.2307/2585932
Darley J. M., Fazio R. H. (1980). Expectancy confirmation processes arising in the social interaction sequence. American Psychologist, 35, 867–881.
Davies M. F. (1997). Belief persistence after evidential discrediting: The impact of generated versus provided explanations on the likelihood of discredited outcomes. Journal of Experimental Social Psychology, 33, 561–578. https://doi.org/10.1006/jesp.1997.1336
Day L., Maltby J. (2003). Belief in good luck and psychological well-being: The mediating role of optimism and irrational beliefs. The Journal of Psychology, 137(1), 99–110. https://doi.org/10.1080/00223980309600602
Devine D. J., Clayton L. D., Dunford B. B., Seying R., Pryce J. (2001). Jury decision making: 45 years of empirical research on deliberating groups. Psychology, Public Policy, and Law, 7(3), 622–727. https://doi.org/10.1037/1076-8971.7.3.622
Dijksterhuis A. P., van Knippenberg A. D., Kruglanski A. W., Schaper C. (1996). Motivated social cognition: Need for closure effects on memory and judgment. Journal of Experimental Social Psychology, 32(3), 254–270. https://doi.org/10.1006/jesp.1996.0012
Ditto P. H., Liu B. S., Clark C. J., Wojcik S. P., Chen E. E., Grady R. H., Celniker J. B., Zinger J. F. (2019). At least bias is bipartisan: A meta-analytic comparison of partisan bias in liberals and conservatives. Perspectives on Psychological Science, 14(2), 273–291. https://doi.org/10.1177/1745691617746796
Ditto P. H., Lopez D. F. (1992). Motivated skepticism: Use of differential decision criteria for preferred and non-preferred conclusions. Journal of Personality and Social Psychology, 63(4), 568–584. https://doi.org/10.1037/0022-3514.63.4.568
Dow J. (2006). The evolution of religion: Three anthropological approaches. Method & Theory in the Study of Religion, 18(1), 67–91.
Dror I. E., Thompson W. C., Meissner C. A., Kornfield I., Krane D., Saks M., Risinger M. (2015). Letter to the editor—Context management toolbox: A linear sequential unmasking (LSU) approach for minimizing cognitive bias in forensic decision making. Journal of Forensic Sciences, 60(4), 1111–1112. https://doi.org/10.1111/1556-4029.12805
Dunning D., Meyerowitz J. A., Holzberg A. D. (1989). Ambiguity and self-evaluation: The role of idiosyncratic trait definitions in self-serving assessments of ability. Journal of Personality and Social Psychology, 57(6), 1082–1090. https://doi.org/10.1037/0022-3514.57.6.1082
Elaad E., Ginton A., Ben-Shakhar G. (1994). The effects of prior expectations and outcome knowledge on polygraph examiners’ decisions. Journal of Behavioral Decision Making, 7(4), 279–292. https://doi.org/10.1002/bdm.3960070405
Epley N., Morewedge C. K., Keysar B. (2004). Perspective taking in children and adults: Equivalent egocentrism but differential correction. Journal of Experimental Social Psychology, 40(6), 760–768. https://doi.org/10.1016/j.jesp.2004.02.002
Evans J. S. B. T. (1972). Interpretation and matching bias in a reasoning task. Quarterly Journal of Experimental Psychology, 24, 193–199.
Evans J. S. B. T. (1989). Bias in human reasoning: Causes and consequences. Psychology Press.
Evans J. S. B. T. (1991). Theories of human reasoning: The fragmented state of the art. Theory & Psychology, 1(1), 83–105.
Evans J. S. B. T., Over D. E., Manktelow K. I. (1993). Reasoning, decision making and rationality. Cognition, 49, 165–187.
Faigman D. L., Kang J., Bennett M. W., Carbado D. W., Casey P., Dasgupta N., Godsil R. D., Greenwald A. G., Levinson J. D., Mnookin J. (2012). Implicit bias in the courtroom. UCLA Law Review, 59, 1124–1187.
Festinger L. (1957). A theory of cognitive dissonance. Stanford University Press.
Festinger L., Riecken H. W., Schachter S. (2011). When prophecy fails. Wilder Publications. (Original work published 1955)
Fiedler K. (1996). Explaining and simulating judgment biases as an aggregation phenomenon in probabilistic, multiple-cue environments. Psychological Review, 103(1), 193–213.
Fiedler K., Freytag P., Meiser T. (2009). Pseudocontingencies: An integrative account of an intriguing cognitive illusion. Psychological Review, 116(1), 187–206. https://doi.org/10.1037/a0014480
Fiedler K., Hofferbert J., Wöllert F. (2018). Metacognitive myopia in hidden-profile tasks: The failure to control for repetition biases. Frontiers in Psychology, 9, Article 903. https://doi.org/10.3389/fpsyg.2018.00903
Fiedler K., Prager J., McCaughey L. (2021). Heuristics and biases. In Knauff M., Spohn W. (Eds.), The handbook of rationality (pp. 159–200). MIT Press.
Fischhoff B. (1975). Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288–299. https://doi.org/10.1037/0096-1523.1.3.288
Fischhoff B., Beyth-Marom R. (1983). Hypothesis evaluation from a Bayesian perspective. Psychological Review, 90, 239–260.
Fiske S. T. (1998). Stereotyping, prejudice, and discrimination. In Gilbert D. T., Fiske S. T., Lindzey G. (Eds.), The handbook of social psychology (pp. 357–411). McGraw-Hill.
Forgas J. P., Bower G. H. (1987). Mood effects on person-perception judgments. Journal of Personality and Social Psychology, 53(1), 53–60.
Frenken M., Imhoff R. (2022). Malevolent intentions and secret coordination. Dissecting cognitive processes in conspiracy beliefs via diffusion modeling. Journal of Experimental Social Psychology, 103, Article 104383. https://doi.org/10.1016/j.jesp.2022.104383
Friesen J. P., Campbell T. H., Kay A. C. (2015). The psychological advantage of unfalsifiability: The appeal of untestable religious and political ideologies. Journal of Personality and Social Psychology, 108(3), 515–529. https://doi.org/10.1037/pspp0000018
Furnham A., Ribchester T. (1995). Tolerance of ambiguity: A review of the concept, its measurement and applications. Current Psychology, 14(3), 179–199. https://doi.org/10.1007/BF02686907
Galinsky A. D., Leonardelli G. J., Okhuysen G. A., Mussweiler T. (2005). Regulatory focus at the bargaining table: Promoting distributive and integrative success. Personality and Social Psychology Bulletin, 31(8), 1087–1098. https://doi.org/10.1177/0146167205276429
Gawronski B., Strack F. (Eds.). (2012). Cognitive consistency. A fundamental principle in social cognition. Guilford Press.
Gigerenzer G., Hertwig R., Pachur T. (2011). Heuristics: The foundations of adaptive behavior. Oxford University Press.
Gigerenzer G., Selten R. (2001). Bounded rationality: The adaptive toolbox. MIT Press.
Gilovich T. (1991). How we know what isn’t so. The fallibility of human reason in everyday life. Free Press.
Gilovich T., Medvec V. H., Savitsky K. (2000). The spotlight effect in social judgment: An egocentric bias in estimates of the salience of one’s own actions and appearance. Journal of Personality and Social Psychology, 78(2), 211–222. https://doi.org/10.1037//0022-3514.78.2.211
Gilovich T., Savitsky K. (1999). The spotlight effect and the illusion of transparency: Egocentric assessments of how we are seen by others. Current Directions in Psychological Science, 8(6), 165–168. https://doi.org/10.1111/1467-8721.00039
Gilovich T., Savitsky K., Medvec V. H. (1998). The illusion of transparency: Biased assessments of others’ ability to read one’s emotional states. Journal of Personality and Social Psychology, 75(2), 332–346. https://doi.org/10.1037//0022-3514.75.2.332
Greenwald A. G. (1980). The totalitarian ego. Fabrication and revision of personal history. American Psychologist, 35(7), 603–618.
Greenwald A. G., Banaji M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102(1), 4–27.
Greenwald A. G., Pratkanis A. R., Leippe M. R., Baumgardner M. H. (1986). Under what conditions does theory obstruct research progress? Psychological Review, 93(2), 216–229.
Gunther A. C., Christen C. T. (2002). Projection or persuasive press? Contrary effects of personal opinion and perceived news coverage on estimates of public opinion. Journal of Communication, 52(1), 177–195. https://doi.org/10.1111/j.1460-2466.2002.tb02538.x
Hagan J., Parker P. (1985). White-collar crime and punishment: The class structure and legal sanctioning of securities violations. American Sociological Review, 50, 302–316.
Hamilton D. L., Gifford R. K. (1976). Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments. Journal of Experimental Social Psychology, 12(4), 392–407. https://doi.org/10.1016/S0022-1031(76)80006-6
Hart W., Albarracín D., Eagly A. H., Brechan I., Lindberg M. J., Merril L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135(4), 555–588. https://doi.org/10.1037/a0015701
Hartley E. (1946). Problems in prejudice. King’s Cross Press.
Haselton M. G., Bryant G. A., Wilke A., Frederick D., Galperin A., Frankenhuis W. E., Moore T. (2009). Adaptive rationality: An evolutionary perspective on cognitive bias. Social Cognition, 27(5), 733–763.
Hewstone M. (1990). The ‘ultimate attribution error’? A review of the literature on intergroup causal attribution. European Journal of Social Psychology, 20, 311–335. https://doi.org/10.1002/ejsp.2420200404
Hilbert M. (2012). Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making. Psychological Bulletin, 138(2), 211–237. https://doi.org/10.1037/a0025940
Hill C., Memon A., McGeorge P. (2008). The role of confirmation bias in suspect interviews: A systematic evaluation. Legal and Criminological Psychology, 13(2), 357–371. https://doi.org/10.1348/135532507X238682
Hornsey M. J., Oppes T., Svensson A. (2002). “It’s OK if we say it, but you can’t”: Responses to intergroup and intragroup criticism. European Journal of Social Psychology, 32, 293–307. https://doi.org/10.1002/ejsp.90
Ichheiser G. (1949). Misunderstandings in human relations: A study in false social perception. University of Chicago Press.
Imhoff R., Bruder M. (2014). Speaking (un-)truth to power: Conspiracy mentality as a generalised political attitude. European Journal of Personality, 28(1), 25–43. https://doi.org/10.1002/per.1930
Jelalian E., Miller A. G. (1984). The perseverance of beliefs: Conceptual perspectives and research developments. Journal of Social and Clinical Psychology, 2(1), 25–56.
John O. P., Robins R. W. (1994). Accuracy and bias in self-perception: Individual differences in self-enhancement and the role of narcissism. Journal of Personality and Social Psychology, 66(1), 206–219. https://doi.org/10.1037/0022-3514.66.1.206
Jones M., Love B. C. (2011). Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34, 169–231. https://doi.org/10.1017/S0140525X10003134
Jussim L. (1986). Self-fulfilling prophecies: A theoretical and integrative review. Psychological Review, 93(4), 429–445.
Kang J., Lane K. (2010). Seeing through colorblindness: Implicit bias and the law. UCLA Law Review, 58, 465–520.
Kaptchuk T. J., Friedlander E., Kelley J. M., Sanchez M. N., Kokkotou E., Singer J. P., Kowalczykowski M., Miller F. G., Kirsch I., Lembo A. J. (2010). Placebos without deception: A randomized controlled trial in irritable bowel syndrome. PLOS ONE, 5(12), Article e15591. https://doi.org/10.1371/journal.pone.0015591
Kennedy J. E., Taddonio J. L. (1976). Experimenter effects in parapsychological research. Journal of Parapsychology, 40(1), 1–33.
Keysar B., Barr D. J., Horton W. S. (1998). The egocentric basis of language use: Insights from a processing approach. Current Directions in Psychological Science, 7(2), 46–49. https://doi.org/10.1111/1467-8721.ep13175613
Klayman J., Ha Y.-W. (1989). Hypothesis testing in rule discovery: Strategy, structure, and content. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(4), 596–604. https://doi.org/10.1037/0278-7393.15.4.596
Kleider H. M., Pezdek K., Goldinger S. D., Kirk A. (2008). Schema-driven source misattribution errors: Remembering the expected from a witnessed event. Applied Cognitive Psychology, 22(1), 1–20. https://doi.org/10.1002/acp.1361
Knauff M., Spohn W. (2021). Psychological and philosophical frameworks of rationality—A systematic introduction. In Knauff M., Spohn W. (Eds.), The handbook of rationality (pp. 1–70). MIT Press.
Koenig H. G., Hays J. C., Larson D. B., George L. K., Cohen H. J., McCullough M. E., Meador K. G., Blazer D. G. (1999). Does religious attendance prolong survival? A six-year follow-up study of 3,968 older adults. Journals of Gerontology: Medical Sciences, 54A(7), M370–M376. https://doi.org/10.1093/gerona/54.7.M370
Koval P., Laham S. M., Haslam N., Bastian B., Whelan J. A. (2012). Our flaws are more human than yours: Ingroup bias in humanizing negative characteristics. Personality and Social Psychology Bulletin, 38(3), 283–295. https://doi.org/10.1177/0146167211423777
Kruger J., Gilovich T. (1999). “Naive cynicism” in everyday theories of responsibility assessment: On biased assumptions of bias. Journal of Personality and Social Psychology, 76(5), 743–753. https://doi.org/10.1037/0022-3514.76.5.743
Kruglanski A. W., Bélanger J. J., Chen X., Köpetz C., Pierro A., Mannetti L. (2012). The energetics of motivated cognition: A force-field analysis. Psychological Review, 119(1), 1–20. https://doi.org/10.1037/a0025488
Kruglanski A. W., Freund T. (1983). The freezing and unfreezing of lay-inferences: Effects on impressional primacy, ethnic stereotyping, and numerical anchoring. Journal of Experimental Social Psychology, 19(5), 448–468. https://doi.org/10.1016/0022-1031(83)90022-7
Kruglanski A. W., Jasko K., Milyavsky M., Chernikova M., Webber D., Pierro A., di Santo D. (2018). Cognitive consistency theory in social psychology: A paradigm reconsidered. Psychological Inquiry, 29(2), 45–59. https://doi.org/10.1080/1047840X.2018.1480619
Kube T., Rozenkrantz L. (2021). When beliefs face reality: An integrative review of belief updating in mental health and illness. Perspectives on Psychological Science, 16, 247–274. https://doi.org/10.1177/1745691620931496
Kunda Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
Kwan V. S. Y., John O. P., Robins R. W., Kuang L. L. (2008). Conceptualizing and assessing self-enhancement bias: A componential approach. Journal of Personality and Social Psychology, 94(6), 1062–1077. https://doi.org/10.1037/0022-3514.94.6.1062
Ladouceur R., Gosselin P., Dugas M. J. (2000). Experimental manipulation of intolerance of uncertainty: A study of a theoretical model of worry. Behaviour Research and Therapy, 38(9), 933–941. https://doi.org/10.1016/S0005-7967(99)00133-3
Lieberman J. D., Arndt J. (2000). Understanding the limits of limiting instructions: Social psychological explanations for the failures of instructions to disregard pretrial publicity and other inadmissible evidence. Psychology, Public Policy, and Law, 6(3), 677–711. https://doi.org/10.1037/1076-8971.6.3.677
Lord C. G., Lepper M. R., Preston E. (1984). Considering the opposite: A corrective strategy for social judgment. Journal of Personality and Social Psychology, 47(6), 1231–1243. https://doi.org/10.1037/0022-3514.47.6.1231
Lord C. G., Ross L., Lepper M. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098
Maass A., Salvi D., Arcuri L., Semin G. (1989). Language use in intergroup contexts: The linguistic intergroup bias. Journal of Personality and Social Psychology, 57(6), 981–993. https://doi.org/10.1037//0022-3514.57.6.981
Mandelbaum E. (2019). Troubles with Bayesianism: An introduction to the psychological immune system. Mind & Language, 34, 141–157.
Matheson K., Dursun S. (2001). Social identity precursors to the hostile media phenomenon: Partisan perceptions of coverage of the Bosnian conflict. Group Processes & Intergroup Relations, 4(2), 116–125. https://doi.org/10.1177/1368430201004002003
Matlin M. W. (2017). Pollyanna principle. In Pohl R. F. (Ed.), Cognitive illusions: Intriguing phenomena in thinking, judgment and memory (pp. 315–335). Routledge.
McFarland C., Ross M. (1987). The relation between current impressions and memories of self and dating partners. Personality and Social Psychology Bulletin, 13(2), 228–238. https://doi.org/10.1177/0146167287132008
Meiser T., Hewstone M. (2006). Illusory and spurious correlations: Distinct phenomena or joint outcomes of exemplar-based category learning? European Journal of Social Psychology, 36(3), 315–336. https://doi.org/10.1002/ejsp.304
Mercier H. (2017). Confirmation bias – Myside bias. In Pohl R. F. (Ed.), Cognitive illusions: Intriguing phenomena in thinking, judgment and memory (pp. 99–114). Routledge.
Merton R. K. (1948). The self-fulfilling prophecy. The Antioch Review, 8(2), 193–210.
Mervis C. B., Rosch E. (1981). Categorization of natural objects. Annual Review of Psychology, 32(1), 89–115.
Miller D. T., Ratner R. K. (1998). The disparity between the actual and assumed power of self-interest. Journal of Personality and Social Psychology, 74(1), 53–62. https://doi.org/10.1037/0022-3514.74.1.53
Mullen B., Atkins J. L., Champion D. S., Edwards C., Hardy D., Story J. E., Vanderklok M. (1985). The false consensus effect: A meta-analysis of 115 hypothesis tests. Journal of Experimental Social Psychology, 21(3), 262–283. https://doi.org/10.1016/0022-1031(85)90020-4
Mullen B., Brown R., Smith C. (1992). Ingroup bias as a function of salience, relevance, and status: An integration. European Journal of Social Psychology, 22(2), 103–122. https://doi.org/10.1002/ejsp.2420220202
Mullen B., Riordan C. A. (1988). Self-serving attributions for performance in naturalistic settings: A meta-analytic review. Journal of Applied Social Psychology, 18(1), 3–22.
Murrie D. C., Boccaccini M. T., Guarnera L. A., Rufino K. A. (2013). Are forensic experts biased by the side that retained them? Psychological Science, 24(10), 1889–1897. https://doi.org/10.1177/0956797613481812
Mussweiler T. (2007). Assimilation and contrast as comparison effects: A selective accessibility model. In Stapel D. A., Suls J. (Eds.), Assimilation and contrast in social psychology (pp. 165–185). Psychology Press.
Mussweiler T., Strack F., Pfeiffer T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26(9), 1142–1150. https://doi.org/10.1177/01461672002611010
Mustard D. B. (2001). Racial, ethnic, and gender disparities in sentencing: Evidence from the US federal courts. The Journal of Law and Economics, 44(1), 285–314. https://doi.org/10.1086/320276
Mynatt C. R., Doherty M. E., Tweney R. D. (1978). Consequences of confirmation and disconfirmation in a simulated research environment. Quarterly Journal of Experimental Psychology, 30, 395–406.
Nestler S., Blank H., von Collani G. (2008). Hindsight bias doesn’t always come easy: Causal models, cognitive effort, and creeping determinism. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1043–1054. https://doi.org/10.1037/0278-7393.34.5.1043
Nisbett R. E., Ross L. (1980). Human inference: Strategies and shortcomings of social judgment. Prentice Hall.
Noor M., Kteily N., Siem B., Mazziotta A. (2019). “Terrorist” or “mentally ill”: Motivated biases rooted in partisanship shape attributions about violent actors. Social Psychological and Personality Science, 10(4), 485–493. https://doi.org/10.1177/1948550618764808
O’Brien B. (2009). Prime suspect: An examination of factors that aggravate and counteract confirmation bias in criminal investigations. Psychology, Public Policy, and Law, 15(4), 315–334. https://doi.org/10.1037/a0017881
Oeberst A., Matschke C. (2017). Word order and world order. Titles of intergroup conflicts may increase ethnocentrism by mentioning the in-group first. Journal of Experimental Psychology: General, 146, 672–690. https://doi.org/10.1037/xge0000300
Oswald M. E. (2014). Strafrichterliche Urteilsbildung [Judgment and decision making in criminal law]. In Bliesener T., Lösel F., Köhnken G. (Eds.), Lehrbuch Rechtspsychologie (pp. 244–260). Huber.
Otten S. (2004). Self-anchoring as predictor of in-group favoritism: Is it applicable to real group contexts? Current Psychology of Cognition, 22(4–5), 427–443.
Otten S., Epstude K. (2006). Overlapping mental representations of self, ingroup and outgroup: Unraveling self-stereotyping and self-anchoring. Personality and Social Psychology Bulletin, 32, 957–969. https://doi.org/10.1177/0146167206287254
Otten S., Wentura D. (2001). Self-anchoring and in-group favoritism: An individual profiles analysis. Journal of Experimental Social Psychology, 37, 525–532. https://doi.org/10.1006/jesp.2001.1479
Pea R. D. (1980). The development of negation in early child language. In Olson D. R. (Ed.), The social foundations of language and thought (pp. 156–186). W. W. Norton & Company.
Pennycook G., Cheyne J. A., Barr N., Koehler D. J., Fugelsang J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10, 549–563.
Pohl R. F., Hell W. (1996). No reduction in hindsight bias after complete information and repeated testing. Organizational Behavior and Human Decision Processes, 67(1), 49–58. https://doi.org/10.1006/obhd.1996.0064
Popper K. R. (1963). Conjectures and refutations: The growth of scientific knowledge. Routledge & Kegan Paul.
Pronin E., Gilovich T., Ross L. (2004). Objectivity in the eye of the beholder: Divergent perceptions of bias in self versus others. Psychological Review, 111(3), 781–799. https://doi.org/10.1037/0033-295X.111.3.781
Pronin E., Kennedy K., Butsch S. (2006). Bombing versus negotiating: How preferences for combating terrorism are affected by perceived terrorist rationality. Basic and Applied Social Psychology, 28, 385–392. https://doi.org/10.1207/s15324834basp2804_12
Pronin E., Lin D. Y., Ross L. (2002a). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28(3), 369–381. https://doi.org/10.1177/0146167202286008
Pronin E., Puccio C., Ross L. (2002b). Understanding misunderstanding: Social psychological perspectives. In Gilovich T., Griffin D., Kahneman D. (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 636–665). Cambridge University Press. https://doi.org/10.1017/CBO9780511808098.038
Pruitt C. R., Wilson J. Q. (1983). A longitudinal study of the effect of race on sentencing. Law and Society Review, 17(4), 613–635.
Pyszczynski T., Greenberg J. (1987). Toward an integration of cognitive and motivational perspectives on social inference: A biased hypothesis-testing model. Advances in Experimental Social Psychology, 20, 297–340. https://doi.org/10.1016/S0065-2601(08)60417-7
Pyszczynski T., Greenberg J., Holt K. (1985). Maintaining consistency between self-serving beliefs and available data: A bias in information evaluation. Personality and Social Psychology Bulletin, 11(2), 179–190. https://doi.org/10.1177/0146167285112006
Rajsic J., Wilson D., Pratt J. (2015). Confirmation bias in visual search. Journal of Experimental Psychology: Human Perception & Performance, 41(5), 1353–1364. https://doi.org/10.1037/xhp0000090
Rassin E., Eerland A., Kuijpers I. (2010). Let’s find the evidence: An analogue study of confirmation bias in criminal investigations. Journal of Investigative Psychology and Offender Profiling, 7(3), 231–246. https://doi.org/10.1002/jip.126
Richards Z., Hewstone M. (2001). Subtyping and subgrouping: Processes for the prevention and promotion of stereotype change. Personality and Social Psychology Review, 5(1), 52–73. https://doi.org/10.1207/S15327957PSPR0501_4
Richardson J. D., Huddy W. P., Morgan S. M. (2008). The hostile media effect, biased assimilation, and perceptions of a presidential debate. Journal of Applied Social Psychology, 33(5), 1255–1270. https://doi.org/10.1111/j.1559-1816.2008.00347.x
Riedl R. (1981). Die Folgen des Ursachendenkens [The consequences of causal reasoning]. In Watzlawick P. (Ed.), Die erfundene Wirklichkeit (pp. 67–90). Piper Verlag.
Rietdijk N. (2018). (You drive me) crazy. How gaslighting undermines autonomy [Unpublished master’s thesis]. Utrecht University.
Risen J. L. (2016). Believing what we do not believe: Acquiescence to superstitious beliefs and other powerful intuitions. Psychological Review, 123(2), 182–207. https://doi.org/10.1037/rev0000017
Risinger D. M., Saks M. J., Thompson W. C., Rosenthal R. (2002). The Daubert/Kumho implications of observer effects in forensic science: Hidden problems of expectation and suggestion. California Law Review, 90(1), 1–56.
Roese N. J., Sherman J. W. (2007). Expectancy. In Kruglanski A. W., Higgins E. T. (Eds.), Social psychology: A handbook of basic principles (Vol. 2, pp. 91–115). Guilford Press.
Rosenthal R., Jacobson L. (1968). Pygmalion in the classroom. The Urban Review, 3(1), 16–20.
Rosenthal R., Rubin D. B. (1978). Interpersonal expectancy effects: The first 345 studies. The Behavioral and Brain Sciences, 3, 377–415.
Ross L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In Berkowitz L. (Ed.), Advances in experimental social psychology (Vol. 10, pp. 173–220). Academic Press.
Ross L., Ward A. (1996). Naive realism in everyday life: Implications for social conflict and misunderstanding. In Reed E. S., Turiel E., Brown T. (Eds.), Values and knowledge (pp. 103–135). Psychology Press.
Rubin M., Hewstone M. (1998). Social identity theory’s self-esteem hypothesis: A review and some suggestions for clarification. Personality and Social Psychology Review, 2(1), 40–62. https://doi.org/10.1207/s15327957pspr0201_3
Sanbonmatsu D. M., Posavac S. S., Kardes F. R., Mantel S. P. (1998). Selective hypothesis testing. Psychonomic Bulletin & Review, 5(2), 197–220. https://doi.org/10.3758/BF03212944
Sassenberg K., Landkammer F., Jacoby J. (2014). The influence of regulatory focus and group vs. individual goals on the evaluation bias in the context of group decision making. Journal of Experimental Social Psychology, 54, 153–164. https://doi.org/10.1016/j.jesp.2014.05.009
Sedikides C., Alicke M. D. (2012). Self-enhancement and self-protection motives. In Ryan R. M. (Ed.), The Oxford handbook of human motivation (pp. 303–322). Oxford University Press.
Sedikides C., Gaertner L., Vevea J. L. (2005). Pancultural self-enhancement reloaded: A meta-analytic reply to Heine (2005). Journal of Personality and Social Psychology, 89(4), 539–551. https://doi.org/10.1037/0022-3514.89.4.539
Sedlmeier P., Hertwig R., Gigerenzer G. (1998). Are judgments of the positional frequencies of letters systematically biased due to availability? Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 754–770. https://doi.org/10.1037/0278-7393.24.3.754
Servick K. (2015). Forensic labs explore blind testing to prevent errors. Evidence examiners get practical about fighting cognitive bias. Science, 349, 462–463.
Sheldrake R. (1998). Experimenter effects in scientific research: How widely are they neglected? Journal of Scientific Exploration, 12(1), 73–78.
Shermer M. (1997). Why people believe weird things: Pseudoscience, superstition, and other confusions of our time. Freeman/Times Books/Henry Holt & Co.
Simon H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41, 1–19.
Skinner B. F. (1948). Superstition in the pigeon. Journal of Experimental Psychology, 38, 168–172.
Skov R. B., Sherman S. J. (1986). Information-gathering processes: Diagnosticity, hypothesis-confirmatory strategies, and perceived hypothesis confirmation. Journal of Experimental Social Psychology, 22, 93–121. https://doi.org/10.1016/0022-1031(86)90031-4
Snyder M., Uranowitz S. W. (1978). Reconstructing the past: Some cognitive consequences of person perception. Journal of Personality and Social Psychology, 36(9), 941–950. https://doi.org/10.1037/0022-3514.36.9.941
Sommers S. R., Ellsworth P. C. (2001). White juror bias: An investigation of prejudice against Black defendants in the American courtroom. Psychology, Public Policy, and Law, 7(1), 201–229. https://doi.org/10.1037/1076-8971.7.1.201
Steblay N., Hosch H. M., Culhane S. E., McWethy A. (2006). The impact on juror verdicts of judicial instruction to disregard inadmissible evidence: A meta-analysis. Law and Human Behavior, 30(4), 469–492. https://doi.org/10.1007/s10979-006-9039-7
Steblay N. M., Besirevic J., Fulero S. M., Jimenez-Lorente B. (1999). The effects of pretrial publicity on juror verdicts: A meta-analytic review. Law and Human Behavior, 23(2), 219–235.
Swann W. B. Jr., Buhrmester M. D. (2012). Self-verification: The search for coherence. In Leary M. R., Tangney J. P. (Eds.), Handbook of self and identity (2nd ed., pp. 405–424). Guilford Press.
Tajfel H., Turner J. C. (1979). An integrative theory of intergroup conflict. In Austin W. G., Worchel S. (Eds.), Social psychology of intergroup relations (pp. 33–47). Brooks Cole.
Tajfel H., Turner J. C. (1986). The social identity theory of intergroup behavior. In Worchel S., Austin W. G. (Eds.), Psychology of intergroup relations (pp. 7–24). Nelson-Hall Publishers.
Tarrant M., Branscombe N., Warner R., Weston D. (2012). Social identity and perceptions of torture: It’s moral when we do it. Journal of Experimental Social Psychology, 48(2), 513–518. https://doi.org/10.1016/j.jesp.2011.10.017
Taylor S. E., Brown J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103(2), 193–210.
Taylor S. E., Brown J. D. (1994). Positive illusions and well-being revisited: Separating fact from fiction. Psychological Bulletin, 116(1), 21–27.
Taylor S. E., Kemeny M. E., Reed G. M., Bower J. E., Gruenewald T. L. (2000). Psychological resources, positive illusions, and health. American Psychologist, 55(1), 99–109. https://doi.org/10.1037/0003-066X.55.1.99
Tobias H., Joseph A. (2020). Sustaining systemic racism through psychological gaslighting: Denials of racial profiling and justifications of carding by police utilizing local news media. Race and Justice, 10(4), 424–455. https://doi.org/10.1177/2153368718760969
Todd A. R., Burgmer P. (2013). Perspective taking and automatic intergroup evaluation change: Testing an associative self-anchoring account. Journal of Personality and Social Psychology, 104(5), 786–802. https://doi.org/10.1037/a0031999
Traut-Mattausch E., Schulz-Hardt S., Greitemeyer T., Frey D. (2004). Expectancy confirmation in spite of disconfirming evidence: The case of price increases due to the introduction of the Euro. European Journal of Social Psychology, 34(6), 739–760. https://doi.org/10.1002/ejsp.228
Trope Y., Liberman A. (1996). Social hypothesis testing: Cognitive and motivational mechanisms. In Higgins E. T., Kruglanski A. W. (Eds.), Social psychology: Handbook of basic principles (pp. 239–270). Guilford Press.
Turner J. C., Hogg M. A., Oakes P. J., Reicher S. D., Wetherell M. S. (1987). Rediscovering the social group: A self-categorization theory. Basil Blackwell.
Vallone R. P., Ross L., Lepper M. R. (1985). The hostile media phenomenon: Biased perception and perceptions of media bias in coverage of the Beirut massacre. Journal of Personality and Social Psychology, 49(3), 577–585. https://doi.org/10.1037/0022-3514.49.3.577
van Boven L., Kamada A., Gilovich T. (1999). The perceiver as perceived: Everyday intuitions about the correspondence bias. Journal of Personality and Social Psychology, 77(6), 1188–1199. https://doi.org/10.1037/0022-3514.77.6.1188
van Prooijen J.-W., Douglas K. M., De Inocencio C. (2018). Connecting the dots: Illusory pattern perception predicts belief in conspiracies and the supernatural. European Journal of Social Psychology, 48(3), 320–335. https://doi.org/10.1002/ejsp.2331
van Veelen R., Otten S., Hansen N. (2011). Linking self and ingroup: Self-anchoring as distinctive cognitive route to social identification. European Journal of Social Psychology, 41(5), 628–637. https://doi.org/10.1002/ejsp.792
von der Beck I., Cress U., Oeberst A. (2019). Is there hindsight bias without real hindsight? Conjectures are sufficient to elicit hindsight bias. Journal of Experimental Psychology: Applied, 25(1), 88–99. https://doi.org/10.1037/xap0000185
Watzlawick P. (1981). Die erfundene Wirklichkeit. Wie wissen wir, was wir zu wissen glauben? Beiträge zum Konstruktivismus [The invented reality. How do we know, what we believe to know? Contributions to Constructivism]. Piper.
Weber R., Camerer C., Rottenstreich Y., Knez M. (2001). The illusion of leadership: Misattribution of cause in coordination games. Organization Science, 12(5), 582–598.
Webster D. M., Kruglanski A. W. (1997). Individual differences in need for cognitive closure. Journal of Personality and Social Psychology, 67, 1049–1062.
Wilson T. D., Centerbar D. B., Brekke N. (2002). Mental contamination and the debiasing problem. In Gilovich T., Griffin D., Kahneman D. (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 185–200). Cambridge University Press.
Witter R. A., Stock W. A., Okun M. A., Haring M. J. (1985). Religion and subjective well-being in adulthood: A quantitative synthesis. Review of Religious Research, 26(4), 332–342. https://doi.org/10.2307/3511048
Wright W. F., Bower G. H. (1992). Mood effects on subjective probability assessment. Organizational Behavior and Human Decision Processes, 52(2), 276–291.
Wyer R. S. Jr., Frey D. (1983). The effects of feedback about self and others on the recall and judgments of feedback-relevant information. Journal of Experimental Social Psychology, 19(6), 540–559. https://doi.org/10.1016/0022-1031(83)90015-X
Yong J. C., Li N. P., Kanazawa S. (2021). Not so much rational but rationalizing: Humans evolved as coherence-seeking, fiction-making animals. American Psychologist, 76(5), 781–793. https://doi.org/10.1037/amp0000674
Zuckerman M., Knee C. R., Hodgins H. S., Miyake K. (1995). Hypothesis confirmation: The joint effect of positive test strategy and acquiescence response set. Journal of Personality and Social Psychology, 68, 52–60. https://doi.org/10.1037/0022-3514.68.1.52