Originally a talk for incoming AI journalists. Disclosure: While I’m posting this in a personal capacity, I work at Tarbell Center for AI Journalism, which is funded by Coefficient Giving (formerly Open Philanthropy), a major player in this story.
The man most responsible for warning the world about AI extinction also helped launch Google DeepMind. The company founded to build AI “safely” is now a flashpoint for accusations of hypocrisy. Steve Bannon and Elizabeth Warren ended up on the same side of an AI bill. Why? How?
Behind each of these ironies is a clash of AI worldviews—worldviews that shape how billions of dollars flow, which policies get written, and what the public comes to believe about the technology. These worldviews can be nuanced, as all worldviews can be, but the people and institutions holding them cluster into distinct groups. Call them “tribes.” This post unpacks those tribes: their origins, assumptions, alliances, and conflicts.
The term “tribe” captures something important about how these groups function: they spawn their own status ladders and tend to sort people into in-group and out-group. But not everyone in a given tribe is a zealot, and within-tribe variation is high. Understanding someone’s rough group affiliation isn’t enough to know their exact beliefs.
What’s a tribe?
A collection of individuals who:
View AI as something larger than a new tech product, or the subject of their day job
Share core beliefs/assumptions and framings of AI
Feel a sense of camaraderie with others who share their beliefs/assumptions
Share some community infrastructure (e.g., annual events, funding sources, online forums or social media pockets)
Any attempt to map the tribes of 2025 runs into a problem: the 2025 camps are really a product of the pre-ChatGPT camps, and those camps don’t make sense without going back further still. So this story begins 30 years ago.
The story starts around 1995 with a mailing list called the Extropians. The mailing list was transhumanist and techno-optimist, drawn to accelerating the technological singularity. A teenage Eliezer Yudkowsky found the list and quickly became a prominent voice. Other important names on this list included Shane Legg, co-founder of DeepMind (more on that soon), and Nick Bostrom, who went on to write Superintelligence and found the Future of Humanity Institute.
The lively, niche online forum had an outsized influence on early AI discourse. Amidst posts about “the removal of political, cultural, biological, and psychological limits to self-actualization”, its users popularized concepts like today’s oft-mentioned Artificial General Intelligence (AGI).
At the time, the groups thinking about AI exuded techno-optimism. Eliezer Yudkowsky's views circa 1998 bear no resemblance to his stop-AI-development views of today:
“Our fellow humans are screaming in pain, our planet will probably be scorched to a cinder or converted into goo, we don’t know what the hell is going on, and the Singularity will solve these problems. I declare reaching the Singularity as fast as possible to be the Interim Meaning of Life, the temporary definition of Good, and the foundation until further notice of my ethical system.” (Source.)
But over time, Yudkowsky and others grew much more worried about safety. His 2001 technical agenda on “Creating Friendly AI” began to tackle the question of how to design a benevolent AI system, and he never found an answer he was satisfied with.
Yet, whether from fear or fascination, Eliezer's and the Extropians' fixation on AGI planted the seeds for real work towards AGI: DeepMind.
At an afterparty for the Singularity Summit in 2010, Yudkowsky personally introduced DeepMind co-founders Demis Hassabis and Shane Legg to investor Peter Thiel. Within weeks, Thiel gave them their first major investment.

"eliezer has IMO done more to accelerate AGI than anyone else. certainly he got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc." (Sam Altman, @sama, Feb 3, 2023)
The race was now on. Sam Altman and Elon Musk would found OpenAI five years later, in part so that Demis couldn't create an "AGI dictatorship."
Read more: How Peter Thiel’s Relationship With Eliezer Yudkowsky Launched the AI Revolution | WIRED
Over time, the now AI-spooked spirit of the Extropians and of Yudkowsky's blog morphed into LessWrong, a community centered on rationality (see the Wikipedia entry on the rationalist community). The rationalist worldview focuses on clear thinking and avoiding cognitive biases, and it drew in a community interested in self-improvement, the fine details of how the world works, effective altruism (more on that later), and transhumanism (and a few cults along the way).
LessWrong in the 2010s was the place to take AGI seriously before the mainstream took AGI seriously. So, unsurprisingly, many influential AI figures read and engaged with it over the years.
Greg Brockman, OpenAI’s president and co-founder, reportedly organized a LessWrong reading group while still at Stripe.
Paul Christiano, one of the leading architects of the influential Reinforcement Learning from Human Feedback (RLHF) technique and the current Head of Safety at NIST's Center for AI Standards and Innovation (CAISI), was a prolific poster.
LessWrong remains an important home for debates on AI and, in general, continues to harbor grave concerns about its risks. Eliezer’s post AGI Ruin (“a list of reasons why AGI will kill you”) is among the most upvoted posts of all time.
The Eliezer/LessWrong throughline shaped today's "AI safety" network and remains part of it. But AI safety is not a monolith; there is vast disagreement within it.
A second key throughline in AI safety is Open Philanthropy and its network, which began as GiveWell Labs and was recently renamed Coefficient Giving. To understand Coefficient Giving, you have to understand Effective Altruism (EA).
As with the Bay Area-centered rationalist scene, you can describe EA both as a philosophy ("using reason and evidence to do the most good") and as the network of people who take those ideas seriously. (See the EA Wikipedia entry.)
Origins: EA’s community infrastructure and many of its core ideas originated in Oxford philosophy circles, but over time the ideas spread to—and were influenced by—rationalist circles in the San Francisco Bay Area.
Drawn to AI because of a focus on existential risk: EA’s initial focus was on evidence-based donations to improve and lengthen the lives of people in extreme poverty. This remains a core focus area. But, over time, the network’s list of “most pressing problems” shifted. EA thought-leaders began taking the moral worth of future people and animals more seriously (see Longtermism). Since an extinction-level event would wipe out all the potential flourishing in the future, mitigating such events became a priority. By 2019, rogue AI was considered the most pressing existential threat of the next century among EA thought-leaders, and thus the most pressing problem to work on, as seen in Toby Ord’s book The Precipice and the career recommendations of EA-aligned organization 80,000 Hours.

Open Philanthropy (now Coefficient Giving) is the most influential institution in the EA orbit: it has the money. That money comes primarily from Dustin Moskovitz, a co-founder of both Facebook and Asana.
Open Philanthropy was initially very skeptical of AI safety, focusing instead on global poverty. Holden Karnofsky, its co-founder and former CEO, wrote a cutting post about Eliezer's "Singularity Institute" in 2012. But in the following years, Holden, like many other thought-leaders in EA, began reckoning more seriously with the possibility of an extinction-level AI catastrophe this century. "Potential Risks from Advanced AI" became an official grantmaking focus for Open Philanthropy in 2016, and the organization has moved more than $530M to the cause since then.1
Despite the ostensible agreement between Eliezer's worldview and Open Philanthropy's worldview (powerful AI systems are likely to be a big deal, come in the next decade, and end badly), there have been persistent disagreements between the two camps.
These disagreements are not always diplomatic. There are open wounds and persistent online attacks between the camps.
Anthropic as a flashpoint in AI safety
The AI safety camp’s judgments of Anthropic illustrate its internal disagreements.
Anthropic was founded in 2021 by seven dissatisfied senior OpenAI staff. The split was reportedly driven by safety concerns: they did not trust OpenAI's leadership to take AI safety sufficiently seriously. Even as Anthropic tries to downplay them, there are clear connections between its safety-conscious views and EA and LessWrong, from its "Long-Term Benefit Trust" and trustees to its guiding principle to "make decisions that maximize positive outcomes for humanity in the long run."
These days Anthropic is a flashpoint in the AI safety community.
To some, especially in Eliezer's orbit, it is yet another company advancing a technology that could wreak havoc, and a hypocrite for doing so despite all its talk of safety.
To others, especially in Open Philanthropy's orbit, it is the "cautious actor" that may save the day. The argument goes that only an AI company at the technology's frontier can 1) conduct or implement the technical research required to align advanced AIs, 2) set a new bar for voluntary commitments that its peers are pressured to adopt, and 3) sound the alarm to governments and the public if field-wide warning signs emerge in AI development.
Open Philanthropy, in particular, has a complicated relationship to Anthropic:
Personal Ties: Holden Karnofsky, the influential co-founder and thought-leader of Open Philanthropy until 2024, is married to Daniela Amodei, a co-founder of Anthropic and the CEO's sister. Holden eventually joined Anthropic himself in January 2025.
Board Ties: Luke Muehlhauser, the program lead for Open Philanthropy's AI governance grantmaking, was on Anthropic's Board of Directors until he resigned in 2024.
Financial Ties: Dustin Moskovitz, Open Philanthropy's primary funder, was a major early investor in Anthropic in 2021, along with his wife Cari Tuna. Their Anthropic stake, now worth more than $500 million, was reportedly transferred into a non-Open Philanthropy nonprofit vehicle in early 2025.
Anthropic staff working on AI safety, many from EA-influenced circles, embody this tension. Some have written openly about doubts over whether it’s right to work at a frontier AI company while fearing the future of AI.
Beyond AI safety, another important tribe is the Ethics & Bias camp.2
While this camp shares the AI safety community's skepticism of unregulated AI progress, its perspective diverges in important ways:
The scene emerged in the mid-2010s from a variety of early AI research strands on fairness and bias, both in academia and within AI companies themselves. For example, energy for more equitable AI was catalyzed by 2016 research revealing racial bias in the accuracy of facial recognition systems and in criminal sentencing tools.
Timnit Gebru is a leading figure in this scene. She researched bias in AI systems at Google until 2020, when she was controversially fired. Other thought-leaders in this space are Margaret Mitchell, Meredith Whittaker, and Joy Buolamwini.
Conferences like the ACM Conference on Fairness, Accountability, and Transparency (FAccT) became infrastructure for this budding field, along with institutions like the AI Now Institute.
The 2022 Blueprint for an AI Bill of Rights, released by the White House Office of Science and Technology Policy (OSTP) during the Biden administration, was an example of work inspired by this scene. Earlier, Margaret Mitchell and Timnit Gebru also worked together on "Model Cards," which are now widely used.
Over time, this camp has merged into a broader left-leaning coalition largely opposed to AI, at least in its current form. I call this Bluesky AI. Longstanding concerns about AI’s biases and data worker exploitation remain, alongside more recent concerns about AI’s water usage, energy usage, copyright violations, and impact on critical thinking.
In general, Bluesky AI tends to despise AI products even more than the people who think AI may someday end the world do: AI commentary on Bluesky is filled with palpable anti-tech sentiment, while the prototypical AI safety advocate remains generally pro-tech and loves a good Waymo ride.
Read more: AI Now Institute’s ‘Artificial Power’ (2025), explaining their views on AGI and the AI safety world.
The release of ChatGPT in late 2022 and GPT-4 in 2023 dramatically changed AI discourse. AI was now mainstream. And with it, AI risk.
AI safety organizations, which had been thinking about the technology's risks for years, were well placed to capitalize on the media attention AI received after GPT-4. And they did:
The Future of Life Institute published an open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
Eliezer wrote a provocative and widely-read TIME op-ed saying this pause wasn’t enough: “we need to shut it all down.”
The Center for AI Safety published a statement on AI risk (“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”) that was endorsed by Sam Altman, Demis Hassabis, Dario Amodei, and many other notable figures, including the “godfathers of AI” Geoffrey Hinton and Yoshua Bengio.
All of this did two things:
Broadened the AI safety movement to be much bigger than just LessWrong and Open Philanthropy. Heavyweight AI scientists like Geoffrey Hinton and Yoshua Bengio threw their support behind AI safety. As the coalition grew, and the consequences of increasingly advanced LLMs became less speculative, AI safety also became less solely focused on existential risk. Widespread job loss became a major concern, along with AI-enabled power concentration. The general throughline that now unites a lot of AI safety is "These for-profit companies are accumulating enormous power and are not taking responsibility for the negative externalities they impose on society, now and in the future." The specific concerns range from child safety to job loss to engineered pandemics.
Sparked a backlash to catastrophic-risk-centered discussions of AI. The backlash came from two directions: First, the Ethics & Bias camp questioned why these speculative issues were getting all the attention over present-day harms. And second, a new pro-tech coalition emerged, arguing that catastrophic risks were doomsday fantasies and humanity should accelerate AI progress. Enter e/acc.
Effective Accelerationism, or e/acc, emerged in late 2022, rooted in X/Twitter culture.
The original founding vision, popularized by the then-anonymous X/Twitter account BasedBeffJezos, articulated a “physics-first” moral philosophy:
“Effective accelerationism aims to follow the ‘will of the universe’: leaning into the thermodynamic bias towards futures with greater and smarter civilizations that are more effective at finding/extracting free energy from the universe and converting it to utility at grander and grander scales”
In this view, AI is a necessary part of this evolution, to the point that Beff considers himself "post-humanist": "in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates."
Watered-down (but still provocative) versions of this philosophy were popularized by figures like Y Combinator's Garry Tan and a16z co-founder Marc Andreessen, exemplified in Andreessen's "Techno-Optimist Manifesto."
At its core, accelerationism takes a fundamentally optimistic, almost utopian stance towards a regulation-free AI trajectory. Accelerationists express concerns about:
Regulations being used for Orwellian-type government power
“Safety” rhetoric being weaponized for regulatory capture by incumbents (like Anthropic), especially against “little tech” and the open-source ecosystem
The opportunity costs of delays to advanced AI’s development (lives not saved, breakthroughs forgone)
Today, fewer people identify as e/acc than at its peak in 2024, but its spirit lives on. As Silicon Valley shifted towards Trump, the anti-regulation, pro-acceleration spirit became absorbed in a new tribe: The Tech Right.
Now empowered in the Trump administration, this Tech Right is essential to understand. But it isn’t the only AI tribe with footing in Trump’s administration.
Drastically different AI worldviews currently vie for AI influence in Trump’s White House.
The Tech Right, exemplified by Marc Andreessen and David Sacks, Trump’s AI and crypto czar. America First. Accelerate AI development. Oppose regulation.
The MAGA Base, exemplified by figures like Steve Bannon and Marjorie Taylor Greene. True to the MAGA base's original distrust of Big Tech, they distrust the Sam Altmans of the world. More recently, they're concerned about AI-driven job loss, child safety, and AI's threat to traditional family and religious values.
(Neither AI safety nor Bluesky AI hold positions of power in the White House. Both are frequently targeted.)
The Tech Right seems to be the strongest of these camps at the moment. The launch of the White House's July 2025 "AI Action Plan" was co-hosted with David Sacks' "All-In" podcast, and the plan included pillars like "Accelerate AI Innovation" and the Silicon Valley-applauded "Build American AI Infrastructure."
But the Tech Right made enemies in 2025 over state regulation. In July 2025, Sacks and his allies attempted to pass a moratorium that would punish states for regulating AI. The move angered Republicans and Democrats alike, producing an unlikely "horseshoe coalition": Steve Bannon, Marjorie Taylor Greene, and Republican senators Josh Hawley and Rand Paul found themselves advocating against the measure alongside Bernie Sanders and Elizabeth Warren. After heated debates, this horseshoe coalition prevailed and the moratorium was rejected 99-1.
Yet the Tech Right was not deterred. They tried again in November 2025, pushing to insert preemption language into the must-pass National Defense Authorization Act (NDAA). Once more, the coalition held, with over 290 state lawmakers signing letters opposing the measure.
The MAGA Base, spearheaded by Steve Bannon, became more organized and vocal in its opposition to the Tech Right, as ABC reports.
But in December 2025, the Tech Right simply bypassed its opposition. Trump signed an executive order that directs the Justice Department to establish an "AI Litigation Task Force" to sue states over their AI laws, threatens to withhold federal broadband funding from states with "onerous" regulations, and instructs the FTC and FCC to develop preemptive federal standards. Sacks stood beside Trump at the signing ceremony, while Bannon blasted the executive order:
“After two humiliating face plants on must-past legislation now we attempt an entirely unenforceable EO— tech bros doing upmost to turn POTUS MAGA base away from him while they line their pockets.” (source: Steve Bannon)
Read more: Inside MAGA’s growing fight to stop Trump’s AI revolution.
A few groups didn’t make my list of tribes, but deserve to be contextualized.
Princeton’s Arvind Narayanan and Sayash Kapoor (authors of AI Snake Oil) have articulated a vision of AI that rejects both utopian and dystopian framings. They argue AI is transformative but “normal” (like electricity or the internet) and shouldn’t be treated as a separate species or impending superintelligence. Their April 2025 essay, complemented by pieces in the New York Times and the Economist, garnered significant attention and became a canonical position against which others define their own views. The AI safety-coded team behind the ‘AI 2027’ scenario, for example, co-authored a piece with Arvind Narayanan and Sayash Kapoor about where they agree and disagree.
But ‘AI as a normal technology’ hasn’t coalesced into a tribe. Nobody is hosting conferences, lobbying policymakers, or otherwise spending millions to push forward this thesis and build a community that shares it.
Traditional computer science academics at universities once drove AI progress. No longer. The center of gravity in AI development and discourse has shifted to industry.
Money drove this shift. Training cutting-edge AI models requires an enormous amount of computing power which only companies have the resources to buy. The wealth of AI companies also allows them to pay far higher salaries than academia, to the point that 71% of AI PhD graduates join industry and only 20% enter academia, up from a roughly equal split in 2011.
There’s a growing disconnect between academia and the frontier. Many academics remain skeptical of the “AGI soon” framing that dominates industry, and within industry circles, this skepticism reads as being out of touch: underestimating how quickly AI has improved and will continue to improve, and focusing on problems that seem distant from what matters most. The academics may ultimately be vindicated. But for now, they’re not shaping frontier AI or the discourse around it, and they’re not a tribe because they lack a shared, passionate ideology about AI’s future.
“Beat China” is something nearly everyone in Washington agrees on. That’s the problem: different AI tribes invoke it to justify contradictory policies, which means ‘China Hawks’ don’t cohere into a distinct group.
On domestic regulation, AI safety proponents have argued that regulation builds trust and drives adoption... which will help beat China. The Tech Right and OpenAI have argued for a domestic approach that “let[s] the private sector cook”... which will help beat China.
On export controls, AI safety proponents and traditional national security hawks have argued for strict export controls that deny China advanced chips... in order to beat China. The Tech Right argues that the US should instead sell chips liberally to "entrench the American tech stack"... in order to beat China.
While genuine China hawks exist in Washington, they’re not a salient or coherent enough AI group to count as a tribe. When someone invokes “beat China,” the more revealing question is often what policy they’re using it to sell.
Is AI progress too fast or too slow? Does the US need more regulation or less? Should we focus on today’s harms or tomorrow’s harms? Putting on the tribal lens introduced in this post can clarify what perspectives shape each of these AI debates swirling around us.
Now you can choose your tribe. Except please don’t.
I’ve used the term ‘tribe’ deliberately. I don’t want it to sound like a compliment. Commenting on AI today bears far too much resemblance to sports fandom: choose your side, cheer when it scores a point (“GPT-5 was a disappointment!”), and jeer when it’s attacked (“GPT-5 was actually on an exponential improvement trend!”).
This is not the doing of any one tribe. All have spawned some degree of echo chambers, hero worship, in-group status signaling, and selective attention, as tribes do.
But ultimately the stakes are too high to settle for such shortcuts. On the cusp of a new, uncertain technology, actually finding the right ways to ~ maximize the benefits while minimizing the risks ~ matters much more than scoring points for your tribe. And it’s much harder to skillfully navigate trade-offs when you ignore arguments and evidence from those who disagree with you. So, while the tribal lens is an important lens to put on, it’s also an important lens to take off.