AI Is Breaking the Moral Foundation of Modern Society


Ryan Moser


I’ve been thinking about why AI feels like such a destabilizing force for society, and it isn’t just the threat of even greater concentrations of wealth. It’s that AI is dissolving the philosophical foundations that made our current system feel legitimate, even to those who disagreed about how it should work.

The search for a middle way between collectivism and laissez-faire capitalism was a key topic of philosophical debate in the 20th century. The challenge AI presents is best understood through the lenses of two people who took on this topic: John Rawls and Robert Nozick.

In his magnum opus, A Theory of Justice, Rawls imagines how we would organize society if we did so behind a veil of ignorance: without knowing what position we would hold in that society. The argument underpinning this idea is that our talents are not truly our own, because so much of what forms the basis of our achievements comes down to luck. His solution is the difference principle, which states that social and economic inequalities are justified only if they work to the greatest benefit of the least-advantaged members of society.

Robert Nozick famously answered Rawls in Anarchy, State, and Utopia. Nozick argues that Rawls treats talents and earnings as “social assets,” when in fact individuals have moral claims over the products of their labor. The difference principle requires ongoing redistribution, which means ongoing violation of people’s self-ownership and entitlements. Nozick believed that if a distribution arises from voluntary exchanges, no one, not even the state, has the right to rearrange it to fit a preferred pattern.

Rawls and Nozick both explicitly endorse Kant’s thesis that people must never be treated merely as means to an end, but always as ends in themselves. Their debate was about how to honor that constraint — Rawls through institutional design that respects everyone’s fundamental equality, Nozick through inviolable property rights and self-ownership.

AI renders their disagreement moot by violating the premise they shared. When your talents become training data harvested without consent, when your creative work becomes parameters in a model, you’re being used as a pure instrument. Not for social benefit (Rawls’s concern) and not with your voluntary consent (Nozick’s requirement). You’re raw material extracted for someone else’s capital accumulation. Both philosophers would recognize this as the instrumentalization they were trying to prevent.

Perhaps more destabilizing for society at large: AI destroys the folk justification for inequality that the previous system relied on. Right now, even people who reject meritocracy understand its logic. You develop rare skills, you work hard, you create value, and you capture some of that value. You can dispute whether this is true or fair, but it has intuitive coherence. Critically, meritocracy works as folk ideology because most people know someone, be it a neighbor, a relative, or a friend, who made it through effort and talent. The story feels real because it maps to lived experience, even if statistically it’s the exception.

“I own the compute infrastructure” or “I owned the company that bought the AI systems early,” on the other hand, amounts to being in the right place at the right time. It’s not about your talents, effort, or contribution. It’s pure capital ownership divorced from any human quality.

Even older defenses of capital ownership often gestured toward something: risk-taking, entrepreneurship, delayed gratification, building something. But inheriting wealth from AI systems that replaced human labor? There’s no story there that connects to widely-shared moral intuitions.

This creates a different kind of instability than economic unsustainability. People tolerate inequality they believe is earned, even if they wish it were less extreme. They don’t tolerate inequality that seems arbitrary, based purely on luck, or, worse, on theft.

We’re heading toward massive inequality without even a fig leaf of moral justification that regular people would find compelling. The question we should be asking ourselves is: What mechanisms could force a correction before enormous harm is done? The economic contradictions may eventually become unsustainable; you can’t run a consumer economy when consumers have no income. But waiting for market forces to correct this could mean decades of instability at immense human cost.

We’re in a narrow window where institutional choices still matter, where there’s enough distributed economic and political power to reshape who controls AI infrastructure and how its benefits flow. Once wealth and control sufficiently concentrate, changing the rules becomes exponentially harder. The answers won’t look like 20th century compromises — those all assumed labor would retain leverage. AI is removing that assumption, forcing us back to more fundamental questions about ownership and economic power that many believed were settled.