Guide to cognitive biases
en.shortcogs.com
When I see someone on Facebook/Twitter link to a site like this to demonstrate that the person they’re arguing with is “doing a fallacy”, I just accuse them of Appeal To Authority. I feel like these kinds of sites are often just used to shut someone down by saying that they’re doing something wrong, without actually doing the work of explaining why the “fallacy” makes their argument weak. Also, in my experience fallacies/biases are often more indicative of a weak argument than of a wrong one.
Well, if you really want to go this route, you can beat them at their own game and accuse them of Arguing From Fallacy. (A nice illustration: https://theupturnedmicroscope.com/comic/logical-fallacies-ar...)
That being said, I do think it’s important to avoid fallacious reasoning — it helps in making arguments clearer. As mentioned above, just because someone uses a fallacy doesn’t mean their conclusion is wrong, but it does mean that you can prove their argument wrong. (And any reply should ideally be phrased in these terms: ‘your argument has problem X’ is easier to respond to than ‘your argument has fallacy Y’.) Furthermore it means they are thinking less clearly than they perhaps should be.
The Fallacy Fallacy doesn't actually do anything because it is covered by itself and creates a loop.
I'd say the bigger problem is that it's roughly equivalent to saying, "Just because my argument is bad doesn't mean I'm wrong."
I mean, yes, that's true, but you have yet to prove yourself correct, so why should anyone change their mind?
And that’s how you get a stack overflow.
Always remember your exit condition when doing recursion, folks!
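Something like this, presumably (a purely illustrative sketch; the function name is made up):

    # The Fallacy Fallacy, as code: every accusation of fallacy is itself
    # accused of being fallacious, with no base case to stop the recursion.
    def fallacy_fallacy(argument):
        # "Your claim that my argument is fallacious is itself a fallacy..."
        return fallacy_fallacy(f"'{argument}' is fallacious")

    # fallacy_fallacy("p, therefore q")  # RecursionError: maximum recursion depth exceeded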
And by extension, fallacious reasoning means that you have no good reason to take what the reasoner is arguing as true (unless they can offer a non-fallacious argument) – right?
The fact that an argument is fallacious doesn't necessarily mean that the conclusion is wrong, but if the conclusion is correct, there does (logically, by necessity) exist at least one non-fallacious argument for it.
Yes, exactly this. It’s useful to understand biases: cognitive diversions from the truth.
Echoing OP’s sentiment: How one uses/weaponizes such knowledge depends on one’s goals: to win petty ego battles in debate or seek the truth?
> just because someone uses a fallacy doesn’t mean their conclusion is wrong, but it does mean that you can prove their argument wrong.
One way this can go wrong is if the two people are thinking at different levels of abstraction or dimensionality (variables), especially when one or both participants don't understand what that means (which seems to be "most" people in the general public).
What do you mean?
> accuse them of Arguing From Fallacy
But then, you're just doing an ad hominem...
That's not an ad hominem. You're attacking their argument, not them as a person.
An ad hominem would be like saying, "Oh yeah, well you're an alcoholic", for instance, as though that could discredit their argument even if it were true.
I know I am, but what... am... I?
Yeah, I remember seeing a lot of people deploying the fallacy list(s) as an argumentative tool to say "ha! your argument vaguely resembles this 'fallacy', so you're instantly wrong." It's a lot less common now, I think; they've fallen out of style, I guess, or I'm just getting into/reading fewer pointless arguments.
I call this the "fallacy" fallacy.
The lengths people will go to, to avoid thinking, are impressive.
Also, a cognitive bias isn’t necessarily a fallacy.
It’s only a prism that distorts our perception of the world.
Fallacies are a different beast altogether.
It's a bit funny because Appeal to Authority isn't a fallacy. The actual fallacy is Appeal to Incorrect / False Authority:
"An appeal to false authority is a fallacious argument that relies on the statements of a false authority figure, who is framed as a credible authority on the topic being discussed."
https://effectiviology.com/false-authority/#:~:text=An%20app....
Does it actually matter?
An argument doesn't get any more or less fallacious based on who believes in it (the authority, in this case) – that's not an argument in itself.
It might be reasonable (a useful heuristic) to lean to the side of the expert, but the fact that an expert believes something doesn't in itself make the conclusion correct.
To the contrary, if someone is a correct authority, their statements have more weight on their domain of expertise. While all of us on HN are very clever and could think of a number of exceptions, I would trust a climate scientist as a legitimate authority on climate change over a religious leader or a PR person.
I too would trust a climate scientist as a legitimate authority on climate change over a religious leader or a PR person – but that's not what I'm discussing.
I'm saying that global warming isn't true simply because scientists believe it to be true (arguing that their belief makes it true would be a fallacious argument).
That would be a fallacious argument because it does not follow (non-sequitur) that the beliefs of scientists can cause shifts in reality, but nobody was arguing that.
They were arguing that it is logical to conclude that climate change is real because the overwhelming consensus among climate scientists (the actual domain experts) is that it is real.
That's not an argument that deals with whether climate change is real or not – but whether it's reasonable to believe it is, based on one's own lack of understanding in combination with making (reasonable) assumptions.
It is an argument that the consensus of the overwhelming majority of experts on any given subject with such a consensus is strong evidence for that position.
> The actual fallacy
... according to a linguistics researcher with a website. He sounds like as much of a false authority on philosophy as anyone else.
From the Stanford Encyclopedia of Philosophy:
9. The ad verecundiam fallacy concerns appeals to authority or expertise. Fundamentally, the fallacy involves accepting as evidence for a proposition the pronouncement of someone who is taken to be an authority but is not really an authority. This can happen when non-experts parade as experts in fields in which they have no special competence—when, for example, celebrities endorse commercial products or social movements. Similarly, when there is controversy, and authorities are divided, it is an error to base one’s view on the authority of just some of them. (See also 2.4 below.)
The point the parent was making is that citing authority isn't a bad thing to do. Citing authorities in _a different field_ often is, though.
(Citing a doctor on the common cold is not a fallacy, but citing a doctor on global economics is)
And in doing so they cited a linguist about philosophy.
And it doesn’t matter anyway - thankfully we use science now and not philosophy to determine truth.
It also tends to derail the conversation and you end up arguing about fallacy definitions instead of the actual point.
There is also a poster-worthy SVG over at Wikipedia that categorizes cognitive biases: https://upload.wikimedia.org/wikipedia/commons/6/65/Cognitiv...
(There are also hyperlinks to the corresponding Wikipedia pages.)
As Richard Feynman said: "The first principle is that you must not fool yourself, and you are the easiest person to fool."
Most cognitive biases are effective ways of fooling yourself.
This is what I take from it. As mentioned upthread, so many people see lists like this and think it's a way to prove other people wrong; when I look down the list I think, yep, gotta be careful to avoid that one. Yep, that one is a minefield for me specifically.
It takes a lot of work to think critically, and the hardest work is thinking critically about the things that align with your biases. It's amazing how many people can see it in other people but not themselves.
You can also view these as tools. Like, these are the rules of thumb that the brain lazily relies on. So if you’re pitching something, maybe try slipping a couple of these biases in there... (just don't make it too obvious!)
Seeing it in others and not yourself is the ultimate confirmation bias.
Why are there so many of these cognitive biases? Can we infer they persist widely because they confer some benefit?
I found Daniel Kahneman’s Thinking Fast and Slow persuasive on this question. I don’t have a psych or neuro degree, but crudely speaking (and possibly butchering it): the mind reflects two types of thinking, which Kahneman terms System 1 (fast) and System 2 (slow). System 1 is closer to instinct and helps us respond quickly in, for example, fight-or-flight situations. System 2 requires significant mental effort to more thoroughly analyze things like complex math problems. It’s easier to coast on System 1 thinking. The book provides examples and much better, more in-depth explanations. There was a replication controversy about some of it, but it's still very worth reading, I think.
Fair enough. I’ve read Kahneman’s book, and hadn’t considered how to integrate it with TFA.
The sheer number of cognitive biases presented in TFA, and the overlap of the categorisations had me bamboozled.
But yeah, if these are System 1 staples then I can imagine how they might have done the job for regular people for a long time. Some biases and prejudices aid survival in a less civilised setting, e.g. prehistoric tribes competing for land and food with neighbouring tribes.
So perhaps it is the compact of civilisation that opens up the vistas for System 2 thinking to yield benefits.
Check out this one on a large screen: https://upload.wikimedia.org/wikipedia/commons/6/65/Cognitiv...
While still bamboozling, it brings some order and visual overview (in a beautiful way, IMO), plus links for further study.
All credit for posting it belongs to: https://news.ycombinator.com/item?id=28168269
Excellent. This diagram provides context and accessibility to the subject matter that was missing from TFA.
Thank you for bringing this to my attention.
They don't have to have an individual benefit (or even be an intentional thing); evolution isn't great at perfecting things. It deals very well in "just good enough" and doesn't really have a great mechanism for dealing with small issues that have only small effects on reproductive fitness. See the recurrent laryngeal nerve for a pretty simple example of this. [0] It has little/no downside to reproductive ability, so there's no reason to 'fix' it, evolutionarily speaking.
For the vast majority of our time evolving, the biggest issue was spotting a stalking predator or hidden danger before it could hurt us, or spotting food better than other chimps/homo */lizards (at various points in our evolutionary history). For that, quick, efficient pattern matching is what you need, not solid reasoning about large groups, so that's what we've got.
This is an active area of research in the cognitive science world.
First, 'bias' is defined with respect to an ideal rational actor with perfect information and infinite computing power. The reasons for this are mostly historical (rational animal, Homo economicus, etc), but it also serves as a useful baseline 'ideal observer' model to compare human behavior against.
In the 1950s, Herbert Simon coined the term 'bounded rationality' to describe rationality within a set of computational bounds. For example, if we have finite working memory and limited computing time, but were still trying to make optimal decisions within those bounds, what behavior would we see? In this case, decision making turns from unconstrained to constrained optimization. What may LOOK like a 'bias', with its connotation of sub-optimality, may actually be optimal behavior given constraints.
More recently, people like Gerd Gigerenzer suggested that human decision making is largely composed of heuristics and tricks that enable 'fast and frugal' responses to scenarios. They don't need to be perfect, just good enough - and 'cheap' enough with respect to time that they are worth developing. This is probably true to a certain extent, but to me it's scientifically unsatisfying, as there is no general principle (except for 'cognitive miserliness') to explain behavior - and specifically, there is no longer a way to specify 'normative' or expected behavior in any given situation.
More recently still, there is a trend to revive Simon's perspective under the name 'Resource Rationality'. Tom Griffiths is one of the active researchers in this field. The idea is similar to Simon's: we have limited 'cognitive resources' and strive to be rational. Griffiths and others have attempted to show that many behaviors that are traditionally called 'cognitive biases' are actually predicted if we are behaving optimally but with constrained cognitive resources.
From the resource rationality perspective, a cognitive bias is a way that a solution to unconstrained optimization differs from a solution to a corresponding constrained optimization problem. Roughly speaking, any combination of limitation on (memory, computation, time, energy, information) will produce a 'bias', and different scenarios we encounter push up against these boundaries in different ways, leading to a plethora of 'biases'.
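To make the constrained-vs-unconstrained point concrete, here is a toy sketch (my own illustration, not taken from the resource-rationality literature): an agent tracking a drifting quantity with limited working memory. The gap between its estimate and the ideal observer's looks like a recency "bias", even though the memory-limited estimate is the better one in a changing world.

    # Toy illustration: a "recency bias" emerging from a memory constraint.
    # The unconstrained agent remembers everything; the constrained agent
    # can only hold the last few observations in working memory.
    import random

    random.seed(0)

    def unconstrained_estimate(observations):
        # Ideal observer: unlimited memory, averages every observation so far.
        return sum(observations) / len(observations)

    def constrained_estimate(observations, memory=5):
        # Bounded observer: only the `memory` most recent items are available.
        recent = observations[-memory:]
        return sum(recent) / len(recent)

    # A slowly drifting quantity the agent is trying to track.
    true_value, observations = 0.0, []
    for _ in range(200):
        true_value += 0.05                       # the world changes over time
        observations.append(true_value + random.gauss(0, 1))

    print("unconstrained estimate:", round(unconstrained_estimate(observations), 2))
    print("constrained estimate:  ", round(constrained_estimate(observations), 2))
    print("current true value:    ", round(true_value, 2))
    # The difference between the two estimates is the "bias"; here the
    # memory-limited estimate also tracks the changing truth more closely.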
> a cognitive bias is a way that a solution to unconstrained optimization differs from a solution to a corresponding constrained optimization problem
That is a narrow and incorrect definition, IMHO. Two resource-constrained solutions can also differ wildly in their rationality (or reality approximation, if you will), and there is no resource-unconstrained general intelligence in the universe, so we wouldn't even really know what that normativity looks like.
Biases are more of a classification of errors specific to our cognitive machinery: framing errors, recall errors, precision errors, proportionality errors, inference errors, etc., with respect to the best we could have done with it.
Yes it's narrow. I was trying to summarize the 'resource rationality' approach, and there are certainly other legitimate accounts of cognitive bias. My interpretation of that work is that it's a project to convert the list of cognitive biases from a classification of errors to a general generative principle.
I think it's mostly because they are quite ad hoc and superficial. It's often not even clear what the normative behavior is that the bias deviates from, or whether that normative behavior is somehow better than the "biased" one. It's a weird scene.
I think the most obvious answer to this is that understanding the world perfectly is impossible (for an individual human in a finite lifespan), and so in reality we use a whole slew of hacks and simplifications. Things accelerate toward the ground at about 9.8 m/s². Is that exactly true? No. Is it helpful to assume it's true? Sure. And so it's extraordinarily easy to find conditions where these simplifications don't hold. We optimize for survival and reproduction. Anchoring[1] is a great example - as long as no one knows about anchoring, anchoring is a great technique for negotiation; in fact there are studies that show some form of anchoring is optimal. It's far more optimal to use these cognitive biases than to invent new algorithms for life, because most of these are incredibly difficult problems. What is maybe more surprising is how effective our cognitive biases are and how they've propagated - isn't it more crazy that we're all fairly good at applying these intuitive rules of thumb?
[1]: Setting an initial price in order to later favourably negotiate a price.
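As a back-of-the-envelope illustration of the insufficient-adjustment account of anchoring (a toy model with invented parameters, not the studies mentioned above):

    # Anchoring-and-adjustment sketch: estimates start at an arbitrary anchor
    # and adjust only partway toward the true value.
    def estimate(true_value, anchor, adjustment=0.6):
        return anchor + adjustment * (true_value - anchor)

    true_price = 100
    print(estimate(true_price, anchor=20))   # 68.0  -> a low anchor drags the estimate down
    print(estimate(true_price, anchor=300))  # 180.0 -> a high anchor drags it up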
My own speculation basically boils down to: Human minds are primarily meaning-seeking machines, to such a large extent that we even create meaning where there is none (apophenia).
Take pareidolia, for instance – it's likely been more evolutionarily advantageous to see faces where there are none (and thus flee too often), as opposed to not seeing faces where there are faces (and thus be eaten).
And in evolutionary terms, not everything that exists has some benefit. Some features just haven't been subject to evolutionary pressures: not selected for, just not selected against. Vestigial features are prime examples of this.
They are only a problem if you're trying to use your cognitive power to prove mathematical theorems or discover natural physical laws, which are not things our brains are designed for.
In all other situations, they are working as expected.
e.g. risk avoidance is very important because one single mistake in risk assessment (underestimating a real threat) can wipe you out.
https://www.youtube.com/watch?v=m26jz1YyIk8
The short but very very accurate documentary above (2 min) explains why.
Basically, evolution does not reward survival of the "smartest" or survival of the person with "less cognitive biases."
Because discovering a “new bias”, like discovering new market failures, is an instant path to book authorship and possibly the Nobel.
A lot of them could frankly be considered versions of the same bias, or combinations of some of them.
Others are not biases at all, just a result of bad thinking.
Thinking there's a reason for it is probably a bias, too.
What’s the use of this really?
Many of these are used as ammunition for arguments rather than for clarifying reasoning.
Pardon my cynicism, but you could use these as a guide for drawing money from people that are predisposed towards having many strong cognitive biases. There seems to be a near limitless number of people with money suffering from extreme social isolation in addition to narcissistic personality disorder. These people have little interest in reality and will happily give you their money if you play into these biases.
Bingo. These (the authors) are people with no ideas of their own, but they still need funding. So they "classify" other people's ideas and make some nice graphics.
Read "Thinking Fast and Slow" (which, to their credit, they do list as a resource) and skip the rest of their grant seeking.
Getting closer to the truth; that's why the Greek philosophers created logic and focused part of their studies on it.
Using shields (against self-sabotaging traps) as ammo to destroy others is very dumb, but yes it happens.
What's the use of knowledge of optical illusions? So that you know that "seeing is believing" is not 100% dependable.
Also you just listed their "use" in your very comment. People build upon these to convince others, and if you would like to be less manipulated, then they are useful to know.
It’s useful when evaluating your own actions and reasoning, especially when you might have a vested interest in the result. From evaluating investment opportunities to understanding the state of science in an area, being aware of cognitive biases can help you get to the real truth.
Knowing about them is supposed to be an introspection tool to help you make better arguments/philosophies. For example, I've seen some ex-christians who decided to turn atheists by learning more about cognitive biases, and then realizing that some of their former religious arguments were circular reasoning or appeal to authority or whatever.
There is a bit of a Dunning-Kruger effect, to be sure: accusing others of cognitive biases that one just stumbled upon on Wikipedia is itself going off-topic. It takes a bit more effort to actually internalize it.
On a semi-related topic, I find it a bit weird the way people think about "winning an argument". People usually prize being "right", but if you think about it from a strictly selfish perspective, you don't really gain anything from that outcome, whereas learning seems like a better outcome.
> For example, I've seen some ex-christians who decided to turn atheists by learning more about cognitive biases, and then realizing that some of their former religious arguments were circular reasoning or appeal to authority or whatever.
Similarly, there are some people (a much smaller number I would expect) that have moved from atheism to "spirituality", for the same reasons.
Yeah, I'm one of those, though I'm not sure my belief system qualifies as spirituality. The gist in these cases is realizing that atheism is not the opposite of christianity (in the sense that there ought to be a conflict between two incompatible ideologies) and that there's quite a bit more depth and breadth to theism, moral frameworks and philosophy in general than the who-is-more-right crowd might think.
Totally agree - my general take on it is that standard Atheism and Christianity/etc (as understood & practiced by the average person) are incredibly simplistic, but most people in either ideological camp are fairly oblivious to it.
I’ve become disillusioned with knowledge in general.
A lot of what I know turns out to be wrong or disproven. I do just as well acting emotionally and impulsively as I do when I think logically.
Not to mention following what everyone is doing is fine most of the time. Starting to feel that knowledge is a trap. Only way out is to not play.
Only exception is for basic math/physics that I can grok.
The mantra I follow is "strong opinions held weakly" (meaning, strive to be as well-informed as I can, but be willing to change my mind to even the polar opposite of my current view in light of new evidence)
IMHO, the key is in realizing that your ego has a tendency to get tangled up with the notion of being right. It's a lot easier to "let go" if you internalize the idea that your worth/identity is not the same thing as your beliefs, and that a lot of what you think you "know" is actually not factual knowledge, but merely interpretation or inference from previous experiences.
> Cognitive biases refer to the identifiable and indexable errors that are found in our judgment in a predictable and systematic way.
Ok.
Clicking through to the first one:
> The Barnum effect is a cognitive bias that induces an individual to accept a vague description of personality traits as applying specifically to themselves.
Then they give an example story of something reading like a horoscope.
The story could reasonably describe a large swath of people so I fail to understand how this is an "error" on the part of someone thinking that it describes them.
The key word is ‘specifically.’
I think this makes sense if specifically meant "only" but it doesn't. It means "exact":
Specifically: in a way that is exact and clear; precisely.
The story can specifically refer to multiple people. There is no definite error here as I see it.
The word "specifically" can also be used as an antonym of "generally", meaning "by most people, or to most people".[0] If a story applies to most people, but someone reading the story doesn't consider that possibility, then they have made a small error.
[0] https://dictionary.cambridge.org/dictionary/english/generall...
Is there somewhere a guide for preventing and/or overcoming these cognitive biases?
Practice cognitive behavioral therapy, and specifically its cognitive distortions; e.g., fill in a “daily mood log” form (look it up). Reading about distortions propositionally is not going to get you better; you have to train yourself to find them in the context of your life. It turns out that most of the time we are bummed, there is a high chance of these biases being in play.
Honesty and a willingness to accept that you're just as biased as anyone else can get you pretty far. A reasonable enough test might be: do you believe anything to be true that you wish weren't true? If every explanation you adopt for every social ill, every natural phenomenon, every historical event neatly fits a model that generates no discomfort, do you think it's more likely that the world really is that way or that you're privileging particular explanations?
> Honesty and a willingness to accept that you're just as biased as anyone else can get you pretty far.
I am very confident that there is a phenomenon whereby someone can do this, but in doing so their mind is in a certain "mode" of some kind (abstract, for starters); when the mind is in a different mode (discussing or arguing about object-level ideas; culture war topics tend to work best), the knowledge they formerly had becomes inaccessible, often even if they are reminded of it.
My armchair theory is that this plays a part in it (but there are surely other things going on):
I like this book: "The Art of Thinking Clearly" by Rolf Dobelli. It has some tips on how to avoid some of them. https://www.amazon.com/Art-Thinking-Clearly-Rolf-Dobelli/dp/...
For those who prefer video there's Julia Galef's channel: https://www.youtube.com/channel/UCz-RZblnhjXK_krP1jDybeQ
She is a co-founder of the Center for Applied Rationality & host of the Rationally Speaking podcast.
The Apex Fallacy is once again not listed:
https://www.urbandictionary.com/define.php?term=Apex%20Falla...
I think that's a better fit for a logical fallacy chart than a cognitive bias list.
I wouldn't exactly call Urban Dictionary a reliable source.
Doing a DuckDuckGo for "apex fallacy" turns up a bunch of incel and mgtow sites.
Is this even a _real_ fallacy?
This is my first time seeing this one mentioned, but I'm curious, what makes something a "real" fallacy?
It seems like it's a real mistake that people make. I guess one could categorize it as a more specific case of misunderstanding a distribution of values. (In the same vein as mean vs median, outlier skewing, assuming unimodal vs bimodal distribution, etc.)
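To make the distribution point concrete, here is a toy sketch with made-up numbers:

    # Judging a group by its top performer vs. a representative statistic.
    import statistics

    incomes = [30, 32, 35, 38, 40, 45, 50, 60, 80, 5000]  # in thousands; one outlier

    print("apex (max):", max(incomes))                # 5000  -> "this group is rich"
    print("median:    ", statistics.median(incomes))  # 42.5  -> the typical member
    print("mean:      ", statistics.mean(incomes))    # 541.0 -> dragged up by the outlier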
The "realness" is determined by the signal to noise ratio. In this case I find mostly joke sites, memes, and sites trying to sell a political/social agenda using it.
The first result I see on DDG for "apex fallacy" is RationalWiki, which is a left-wing site. The definition there is "when someone evaluates a group based on the performance of best group members, not a representative sample of the group members."
I can see why that's not included in most lists of fallacies. It's just a specific example of cherry picking, a well known fallacy that is already included in the lists.
It also most likely occurs when the speaker identifies with the group whose performance they are over-estimating, and imagines that a similar level of performance can be imputed to them.
Such cases are likely the result of the ingroup half of the Ultimate Attribution Error, where positive characteristics are seen as inherent properties of members of the ingroup, but exceptional when displayed by members of the outgroup.
While I do not see the immediate use for me personally beyond being an interesting read, I must confess that this is a great domain name.
Bias is a good lens for looking at news sources and at anyone trying to sell you something. Many of them use your biases against you to fool you and/or manipulate you into something, either buying something or thinking something.
Knowing what your biases are and which ones you fall into helps you cope with them and guard yourself against them.
https://en.shortcogs.com/bias/blind-spot-bias
(Just kidding.)
The page dedicated to COVID-19 [1] is probably the most depressing thing I read today (and I read Twitter).
Also, on the generally sane page, there is this bit:
> Conspiracy theories regarding the COVID-19 pandemic are plentiful and varied. One of them suggests that the authorities declared a health emergency in order to force the population to accept a vaccine that they do not need, in order to promote the economic domination of the pharmaceutical industry. Several types of information can be presented in support of this theory, such as proposals for alternative treatments to COVID-19, some data taken out of context from vaccine approval protocols, or annual death rates from seasonal influenza.
> While this information may hold some veracity, it is not sufficient to support the conspiracy theory put forward when compared to scientific studies carried out by both the pharmaceutical industry and public health authorities.
Hardly playing the Devil's advocate, this is the crux of the problem: once you're sincerely convinced that scientists are wrong (which is bound to happen when faced with novelty) and governments are lying to you (which is bound to happen because governments are full of politicians), isn't it _rational_ not to believe scientists, and not to trust governments?
So the rules of the game are for scientists to never ever be wrong, and for governments to never ever lie.
But of course, that's not the case, and the "right" thing is to believe scientists because they're right "in general", and to trust politicians "to a degree". But how is advocating that not falling into "appeal to authority"?
Etc, etc, etc...
At least I'm having those brain farts in a quiet office with a cat on my lap and two jabs in my arms, instead of in an ICU :/ ...
> once you're sincerely convinced that scientists are wrong ... and governments are lying to you
This is the crux. How did those beliefs form? I would argue that in many cases, it wasn't a dispassionate weighing of evidence and calculation of probabilities. If not that, then what? We might also ask - in whose interest is it that citizens don't believe scientists or the government?
> scientists are wrong
Every medical scandal (for example, in the French Antilles, the scandal around "chlordecone", a pesticide, is supposed to be fueling anti-vaccine fear today). Earlier uncertainties about masks, vaccine efficacy, vaccines from different countries, possible early treatments, etc...
Even though the "consensus" settles quickly, the dust of the discussion remains in the air for a very long time. Media is playing a part here, ironically, because "not blindly trusting the authority" and "presenting a balanced view of facts" are good for ratings.
> ... and governments are lying to you
Do we really need to test this hypothesis :) ? Of course it is a generalization that "all governments are lying all the time". But it is such an easy one to make...
> Media is playing a part here, ironically, because "not blindly trusting the authority" and "presenting a balanced view of facts" are good for ratings
Yes, this is part of it. Of course scientists are wrong sometimes, but it's in someone's interest to stoke general distrust.
Sure.
I would love to have a way to quantify how much of the distrust is knowingly fueled by "chaotic" actors (snake-oil salesmen, foreign powers, cynical politicians, etc.) vs. how much is "baked into" history, the political and media structures and, as the original content explains, into our brains.
History and brains being unfixable, I hope there is something we can improve in the structures.
Suppose you are the ruler of a country which views geopolitics as a zero-sum game, because your goals are incompatible with those of other nations. It is therefore in your interest to cause other governments to be distracted with internal mistrust and divisions and health crises, so that there isn't a strong unified position to take action on the world stage.
As a test of this theory, look at which countries have this sudden mistrust of authority, and which country(s) they would refuse to buy a vaccine from; then look at countries which don't have this problem and consider whose vaccine they buy.
There's more than enough evidence that the government is lying to you. Past and present.
This website is merely ok. Summary: 5/5 in science communication; educators and scientists should learn from its brilliant innovations. 2/5 in psychology, with deep knowledge but severe errors and misunderstandings.
Here's a full list of every page I saw (no selection bias!) and my evaluations.
Anchoring heuristic: this category is not accurate. Anchoring takes two forms: as a priming effect, or as insufficient adjustment from a starting point. Instead, this category lists "Confirmation bias, Echo chamber, Effort justification bias, Escalation of commitment bias, Hindsight bias, Illusion of transparency, Self-fulfilling prophecy". None of these relate to anchoring either directly or through mechanisms. The omission of anchoring, a major effect, is glaring.
Automation bias: the effect as described in the article is not even correct. There is an automation bias, but there is also the opposite anti-automation bias, where humans unfairly disregard the opinions of machines in some contexts, such as algorithmic recipe recommendations. Their CDS example is poorly chosen and barely illustrates the subject. There is a better example from the literature, where humans accept the results of a blatantly wrong calculator over their own estimates. In addition, automation bias is frequently justified even when compared to human "rational" thinking, as summarized in Thinking, Fast and Slow, Ch 21.
I recall this quote from Gelman: "Duncan notes that many common sayings contradict each other. For example, The early bird catches the worm, but The early worm is eaten by the bird." This page on automation bias is no better than one of two contradictory sayings.
I like the three meters: literature, impact, and replication. Their existence is a well-thought-out and marvelous insight in science communication. It's a giant improvement over resources from 10 years ago, when such meters were barely considered by psychologists, much less communicators.
I like that they cited references to research papers. I like that they describe how the experiments measure the effect. These are major advantages over comparable websites.
Representativeness heuristic: this category is not accurate. Base rate and conjunction fit. The rest do not.
Base rate neglect: the explanation is quite bad. Kahneman's theory is much more careful and requires huge contortion in "just-so" stories to fit experiments, about how statistical base rates are not always statistical. (I believe his contortions are correct.) But this page doesn't even attempt to describe what qualifies as a "base rate".
The Trump example is a loose fit and the test example is so vague as to be meaningless. The examples in the literature, with criminal identification and test positivity rates, are better written.
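For what it's worth, the test-positivity example from the literature is easy to make concrete; here is a minimal worked version with invented numbers:

    # Base rate neglect, worked through with Bayes' rule (numbers are made up).
    prevalence  = 0.001   # base rate: 1 in 1000 people have the condition
    sensitivity = 0.99    # P(positive | sick)
    specificity = 0.95    # P(negative | healthy)

    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    p_sick_given_positive = sensitivity * prevalence / p_positive

    print(f"P(sick | positive test) = {p_sick_given_positive:.1%}")  # about 1.9%
    # Intuition that neglects the base rate says "99% accurate test, so probably
    # sick"; the low prevalence means most positives are false positives.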
Conjunction fallacy: don't pick that famous Linda example if you have to explain all its linguistic caveats. Your explanations are not convincing, just assertions which the reader can't tell the truth of. (And there are better explanations, like replication under single evaluation vs joint evaluation, or replications with clarifying language, which are not given.) Nevertheless, this page is broadly correct. It is over-specific, focusing on the direct violation of conjunction rather than the representativeness process that underlies the probability estimation, but perhaps the specificity is justified if one wants to hew to the literature.
About->Our team: oh my god, the authors are psychology PhD students. I think psychology needs more academic interest from students, so that graduate programs can have stricter filtering. This website does not give hope that psychology will cast off its bad reputation soon.