The Rationalist Dream
For centuries, Western philosophy has celebrated human reason as our defining feature — what separates us from animals and allows us to discover moral truths. From Plato to Kant to modern political discourse, we’ve maintained an almost religious faith in the power of rational thought to guide our moral and political judgments.
This rationalist tradition reached its apex in the Enlightenment, when thinkers like Kant proposed that pure reason could discover universal moral laws. In the 20th century, psychologists like Lawrence Kohlberg developed influential theories of moral development based on this same premise: moral maturity means learning to reason according to abstract principles of justice, moving beyond mere emotional reactions and cultural conventions.
This rationalist view isn’t just academic theory — it permeates our collective understanding of politics and persuasion. Politicians appeal to voters with detailed policy positions, assuming we evaluate these proposals through careful cost-benefit analysis. Op-eds meticulously dissect logical inconsistencies in opposing arguments. University courses teach critical thinking as the antidote to poor decision-making. Public intellectuals tell us that if only people would “follow the science” or “look at the evidence,” our political divides would dissolve.
Modern political discourse operates on this fundamental assumption: we reach our moral and political positions through reasoning, and disagreement stems from either faulty logic or missing information. If your neighbor supports Trump’s immigration policies while you support progressive alternatives, surely one of you has reasoned incorrectly or lacks crucial facts. The solution seems obvious: better reasoning and more information.
But what if this entire framework — this faith in reason as the path to moral truth — is fundamentally misguided?
Cracks in the Rationalist Façade
When Jonathan Haidt began researching moral psychology in the late 1980s, he shared this rationalist assumption. A liberal academic studying under Kohlberg’s colleagues, he initially accepted the dominant paradigm: moral development means developing better reasoning skills.
Yet as he conducted research across cultures and political divides, troubling evidence began to accumulate. Consider these surprising findings:
The Moral Dumbfounding Experiments
Haidt and his colleague Scott Murphy presented participants with scenarios designed to trigger moral judgments without corresponding harm:
> Mark and Julie are brother and sister traveling in France. One night in their hotel room, they decide it would be interesting to have sex. Julie is already taking birth control pills, but Mark uses a condom anyway. They enjoy the experience but decide not to do it again. They keep it as a special secret between them.
Participants immediately condemned this behavior as wrong. When asked why, they cited reasons like “they might produce children with genetic abnormalities” or “it would damage their relationship.” But these objections were explicitly addressed in the scenario — no risk of pregnancy, no relationship damage.
When researchers pointed this out, something fascinating happened. Rather than changing their judgment, participants simply searched for new justifications. When those were debunked, many eventually reached a state Haidt calls “moral dumbfounding”: “I know it’s wrong, I just can’t explain why.”
This pattern repeated with other scenarios, like a woman cutting up an American flag to use as bathroom cleaning rags, or a family eating their pet dog after it was accidentally killed by a car. People made instant moral judgments, then struggled to provide reasons that matched their emotional reactions.
If moral judgments truly came from reasoning, we would expect people to revise their views when their reasoning failed. Instead, the judgment remained fixed while the reasoning shifted to accommodate it — suggesting the judgment came first, with reasoning following as justification.
Brain Damage and Moral Decision-Making
More compelling evidence came from neuroscientist Antonio Damasio’s work with patients who had damage to the ventromedial prefrontal cortex (vmPFC) — the brain region where emotional reactions are integrated into decision-making.
These patients maintained normal IQ and abstract reasoning abilities. They could discuss moral dilemmas and recite moral principles flawlessly. Yet their lives fell apart. They made catastrophically bad decisions, alienated friends and family, and couldn’t function in society.
One patient, “Elliot,” could reason through the pros and cons of different options endlessly but couldn’t actually make a simple decision like scheduling an appointment. Without emotional guidance, he was paralyzed by equally valid alternatives.
This contradicted the rationalist model completely. If reasoning were the foundation of moral decision-making, these patients with intact reasoning abilities should have excelled. Instead, without emotional input, their moral compass collapsed — suggesting emotions aren’t obstacles to good moral judgment but essential components of it.
The Cultural Variations
When Haidt expanded his research beyond American college students to diverse cultures, the rationalist model faced even more challenges. In Brazil, India, and working-class Philadelphia, he found moral domains that elite American academics barely recognized as moral issues at all.
In these communities, violations of hierarchy, loyalty, and purity were condemned as moral failings, not merely as social conventions or personal preferences. A young person disrespecting an elder wasn’t just being rude — they were behaving immorally. Failing to defend one’s family honor wasn’t just strange — it was wrong.
Importantly, these weren’t just different conclusions reached through the same reasoning process. They represented fundamentally different moral intuitions about what matters. The rational principles that Western philosophers had laboriously derived over centuries — focused primarily on individual rights and harm prevention — appeared to be culturally specific rather than universal human truths.
The Political Laboratory
Today’s American political landscape provides a natural experiment that reveals these dynamics in stark relief. Consider how Americans respond to key political flashpoints:
Trump’s Border Policies
When the Trump administration implemented family separation policies at the southern border, progressive Americans reacted with immediate moral horror. The sight of children in detention facilities triggered powerful intuitions about care and harm. Their reasoning followed these intuitions: this policy causes suffering, violates human rights, and traumatizes vulnerable children.
Many conservative Americans, viewing the same images, had different intuitive reactions. Their moral concerns about loyalty (protecting the nation’s borders), authority (upholding laws), and sanctity (maintaining cultural integrity) were activated. Their reasoning followed these different intuitions: illegal immigration undermines rule of law, threatens national sovereignty, and strains resources.
What’s crucial here is that both sides believe their position is self-evidently correct. Neither arrived at their view through a dispassionate analysis of immigration statistics. Rather, their intuitive moral reactions came first, with reasoning constructed afterward to justify these reactions.
The more polarized the issue, the more this pattern appears. On transgender rights, gun control, COVID restrictions, or racial justice initiatives, Americans across the political spectrum experience immediate intuitive reactions and then construct seemingly rational arguments to support what they already believe.
The DEI Controversy
Consider how differently Americans perceive Diversity, Equity, and Inclusion initiatives. Progressive supporters intuitively see these programs as moral imperatives addressing historical injustices. Their reasoning follows: DEI policies help the vulnerable, promote fairness, and create equal opportunity.
Conservative critics intuitively experience these same initiatives as threats to meritocracy, institutional traditions, and individual liberty. Their reasoning follows: DEI policies undermine standards, impose ideological conformity, and discriminate against non-preferred groups.
What’s striking is how confident each side is in the rationality of their position — and how irrational the opposing view appears. Each side can identify logical flaws and factual errors in the other’s arguments, yet somehow can’t see the intuitive foundations of their own position.
The Rider and the Elephant
These observations led Haidt to a radical conclusion: the rationalist model gets the relationship between reasoning and intuition exactly backward. Rather than reasoning driving our moral judgments, our intuitions and emotions come first, with reasoning primarily serving to justify these intuitive reactions.
Haidt uses a powerful metaphor to explain this relationship: the rider and the elephant. The elephant represents our intuitive reactions — immediate, powerful, and driven by evolutionary and cultural programming. The rider represents our conscious reasoning — articulate, deliberate, but ultimately serving the elephant rather than controlling it.
The rider isn’t completely powerless — it can influence the elephant gradually through reflection and planning. But in the moment of judgment, the elephant leans first, and the rider’s job is primarily to explain and justify that leaning.
This model illuminates why political and moral disagreements are so difficult to resolve through rational argument alone. When you present facts and logic to someone who disagrees with you, you’re addressing their rider while ignoring their elephant. The rider might acknowledge your points, but unless you’ve influenced the elephant, no real change occurs.
The Moral Taste Buds
What guides these powerful intuitive reactions? Through cross-cultural research, Haidt identified six distinct “moral foundations” that operate like taste receptors for moral concerns:
- Care/harm: Sensitivity to suffering and the need to protect the vulnerable
- Fairness/cheating: Concerns about equality, proportionality, and reciprocity
- Loyalty/betrayal: Valuing group cohesion and detecting threats to the group
- Authority/subversion: Respect for legitimate hierarchy and tradition
- Sanctity/degradation: Protection against contamination, both physical and spiritual
- Liberty/oppression: Resistance to domination and restriction of freedom
Haidt’s research revealed a crucial pattern: progressives primarily emphasize care and fairness, while conservatives value all six foundations more equally. This isn’t a matter of conservatives having additional arbitrary values — these foundations evolved to solve social coordination problems faced by our ancestors.
Consider how these different moral palates manifest in current events:
Trump’s Appeal Through Moral Foundations
Despite behavior that violates traditional conservative values in many ways (multiple divorces, crude language, questionable business ethics), Donald Trump maintains remarkable support from conservative Americans — especially evangelical Christians who might be expected to value personal morality.
This seeming contradiction makes sense through Haidt’s framework. Trump powerfully triggers the moral foundations that conservatives value most:
- Loyalty: His “America First” rhetoric and criticisms of globalism signal fierce in-group loyalty
- Authority: His strongman persona appeals to desires for clear leadership and hierarchy
- Sanctity: His promises to protect religious liberty and traditional values activate purity concerns
- Liberty: His attacks on government regulation and “political correctness” frame him as a defender against oppression
When progressives criticize Trump supporters using care and fairness arguments (“How can you support someone who separates families?”), they’re speaking a moral language that doesn’t address the foundations driving Trump support. The elephants simply move in different directions, while the riders argue past each other.
DEI Through Moral Foundations
Similarly, DEI initiatives activate different moral foundations for different Americans:
For progressives, these policies powerfully trigger:
- Care: Protecting marginalized groups from discrimination
- Fairness: Addressing historical inequities to create equal opportunity
For conservatives, these same policies often trigger concerns about:
- Liberty: Restrictions on speech and association through mandatory trainings
- Authority: Undermining traditional institutional values and hierarchies
- Fairness: in its proportionality aspect — rewards should be earned through merit
Neither side is ignoring morality — they’re operating with different moral intuitions about what matters most in this context.
The Social Dimension
Haidt’s model goes further by recognizing that moral judgment isn’t just individual but deeply social. His “Social Intuitionist Model” incorporates how we influence each other’s moral intuitions:
- Intuitions come first: Your elephant leans in response to events
- Reasoning follows: Your rider justifies where the elephant leaned
- Reasoning is for social influence: You share these justifications to influence others
- Social influence affects intuitions directly: Others’ reactions shape your future intuitions
This social dimension explains why political tribes become increasingly extreme over time. In politically homogeneous social networks, whether progressive university departments or conservative rural communities, members continually activate and reinforce the same moral intuitions in each other. Without exposure to opposing moral foundations, our elephants grow ever more certain of their path.
The Implications for American Democracy
This understanding of moral psychology reveals why America’s political divide isn’t just about different policy preferences — it reflects fundamentally different moral intuitions about what matters most. The traditional assumption that providing better information or more logical arguments will resolve these differences appears deeply misguided.
Consider how this plays out in Congress or on cable news: Representatives from opposing parties offer seemingly rational arguments that completely fail to persuade the other side. Each believes the other is being willfully obtuse or dishonest, when in reality, they’re operating from entirely different moral starting points.
The 2024 presidential election cycle shows this dynamic in stark relief. Biden supporters and Trump supporters don’t just disagree about policies — they inhabit different moral universes with different intuitive responses to the same events:
- When Trump promises to conduct “the largest deportation operation in American history,” his supporters intuitively sense the protection of order, sovereignty, and cultural integrity. His detractors intuitively sense cruelty, xenophobia, and human rights violations.
- When universities promote DEI programs, progressives intuitively sense compassion, inclusion, and historical justice. Critics intuitively sense ideological conformity, discrimination, and the undermining of meritocracy.
These intuitive differences then drive entirely separate chains of reasoning that appear self-evident to believers and absurd to opponents.
A New Approach to Political Persuasion
If Haidt’s model is correct, we need to radically rethink political persuasion. Appeals to reason and evidence remain important, but they must come after we’ve addressed the elephant in the room — the moral intuitions driving opposition.
Effective persuasion might look like this:
- Recognize the validity of multiple moral foundations: All six foundations evolved to solve real human problems.
- Speak to the elephant before the rider: Address moral concerns that matter to your audience, not just those that motivate you.
- Frame issues in terms of multiple foundations: For instance, progressives might frame climate policy not just in terms of care (protecting future generations) but also loyalty (patriotic energy independence) and sanctity (preserving God’s creation).
- Create safe spaces for attitude change: People rarely change moral positions when feeling attacked or defensive.
The Path Forward
Perhaps the most profound implication of Haidt’s work is that moral righteousness itself is the enemy of moral progress. The more certain we are of our moral correctness, the less able we are to understand opposing viewpoints or find common ground.
America’s political tribalism has reached dangerous levels precisely because each side is convinced of its moral superiority. Progressives see conservatives as callous, prejudiced, and authoritarian. Conservatives see progressives as naïve, disloyal, and tyrannical. These caricatures feed a cycle of contempt that makes compromise nearly impossible.
The way forward begins with moral humility — the recognition that our own intuitions, however compelling they feel, reflect just one configuration of moral foundations. This doesn’t mean abandoning our moral convictions, but rather understanding them as products of evolution, culture, and personality rather than direct perceptions of objective moral truth.
With this humility comes curiosity about different moral perspectives. When we encounter someone who supports policies we find objectionable — whether a Trump voter concerned about cultural change or a DEI advocate pushing for equity over equality — we might ask: “What moral foundations are guiding this person’s intuitions? What are they seeing that I’m missing?”
Haidt suggests we might become more like anthropologists in our political discussions — genuinely curious about different moral worlds rather than missionaries determined to convert the misguided.