Credits
James O’Sullivan lectures in the School of English and Digital Humanities at University College Cork, where his work explores the intersection of technology and culture.
The machines are coming for us, or so we’re told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival. In boardrooms, lecture theatres, parliamentary hearings and breathless tech journalism, the specter of superintelligence increasingly haunts our discourse. It’s often framed as “artificial general intelligence,” or “AGI,” and sometimes as something still more expansive, but always as an artificial mind that surpasses human cognition across all domains, capable of recursive self-improvement and potentially hostile to human survival. But whatever it’s called, this coming superintelligence has colonized our collective imagination.
The scenario echoes the speculative lineage of science fiction, from Isaac Asimov’s “Three Laws of Robotics” — a literary attempt to constrain machine agency — to later visions such as Stanley Kubrick and Arthur C. Clarke’s HAL 9000 or the runaway networks of William Gibson. What was once the realm of narrative thought-experiment now serves as a quasi-political forecast.
This narrative has very little to do with any scientific consensus, emerging instead from particular corridors of power. The loudest prophets of superintelligence are those building the very systems they warn against. When Sam Altman speaks of artificial general intelligence’s existential risk to humanity while simultaneously racing to create it, or when Elon Musk warns of an AI apocalypse while founding companies to accelerate its development, we’re seeing politics masked as predictions.
The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control. This sleight of hand is neither accidental nor benign. By making hypothetical catastrophe the center of public discourse, architects of AI systems have positioned themselves as humanity’s reluctant guardians, burdened with terrible knowledge and awesome responsibility. They have become indispensable intermediaries between civilization and its potential destroyer, a role that, coincidentally, requires massive capital investment, minimal regulation and concentrated decision-making authority.
Consider how this framing operates. When we debate whether a future artificial general intelligence might eliminate humanity, we’re not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk. Such suffering is actual, while the superintelligence remains theoretical, but our attention and resources — and even our regulatory frameworks — increasingly orient toward the latter as governments convene frontier-AI taskforces and draft risk templates for hypothetical future systems. Meanwhile, current labor protections and constraints on algorithmic surveillance remain tied to legislation that is increasingly inadequate.
In the U.S., Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” mentions civil rights, competition, labor and discrimination, but it creates its most forceful accountability obligations for large, high-capability foundation models and future systems trained above certain compute thresholds, requiring firms to share technical information with the federal government and demonstrate that their models stay within specified safety limits. The U.K. has gone further still, building a Frontier AI Taskforce — now absorbed into the AI Security Institute — whose mandate centers on extreme, hypothetical risks. And even the EU’s AI Act, which does attempt to regulate present harms, devotes a section to systemic and foundation-model risks anticipated at some unknown point in the future. Across these jurisdictions, the political energy clusters around future, speculative systems.
Artificial superintelligence narratives perform deliberate political work, drawing attention from present systems of control toward distant catastrophe and shifting debate from material power to imagined futures. Predictions of machine godhood reshape how authority is claimed and whose interests steer AI governance, muting the voices of those who suffer under algorithms and amplifying those who want extinction scenarios to dominate the conversation. What poses as neutral futurism functions instead as an intervention in today’s political economy. Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is. The power of this narrative draws from its history.
Bowing At The Altar Of Rationalism
Superintelligence as a dominant AI narrative predates ChatGPT and can be traced back to the peculiar marriage of Cold War strategy and computational theory that emerged in the 1950s. The RAND Corporation, an archetypal think tank where nuclear strategists gamed out humanity’s destruction, provided the conceptual nursery for thinking about intelligence as pure calculation, divorced from culture or politics.
“Whatever it’s called, this coming superintelligence has colonized our collective imagination.”
The early AI pioneers inherited this framework, and when Alan Turing proposed his famous test, he deliberately sidestepped questions of consciousness or experience in favor of observable behavior — if a machine could convince a human interlocutor of its humanity through text alone, it deserved the label “intelligent.” This behaviorist reduction would prove fateful: by treating thought as a set of quantifiable operations, it recast intelligence as something that could be measured, ranked and ultimately outdone by machines.
The mathematician John von Neumann, as recalled by his colleague Stanislaw Ulam in 1958, spoke of a technological “singularity” in which accelerating progress would one day mean that machines could improve their own design, rapidly bootstrapping themselves to superhuman capability. This notion, refined by the mathematician Irving John Good in the 1960s, established the basic grammar of superintelligence discourse: recursive self-improvement, exponential growth and the last invention humanity would ever need to make. These were, of course, mathematical extrapolations rather than empirical observations, but such speculations and thought experiments were repeated so frequently that they acquired the weight of prophecy, helping to make the imagined future they described look self-evident.
The 1980s and 1990s saw these ideas migrate from computer science departments to a peculiar subculture of rationalists and futurists centered around figures like the AI researcher Eliezer Yudkowsky and his Singularity Institute (later the Machine Intelligence Research Institute). This community built a dense theoretical framework for superintelligence: utility functions, the formal goal systems meant to govern an AI’s choices; the paperclip maximizer, a thought experiment where a trivial objective drives a machine to consume all resources; instrumental convergence, the claim that almost any ultimate goal leads an AI to seek power and resources; and the orthogonality thesis, which holds that intelligence and moral values are independent. They created a scholastic philosophy for an entity that didn’t exist, complete with careful taxonomies of different types of AI take-off scenarios and elaborate arguments about acausal trade between possible future intelligences.
What united these thinkers was a shared commitment to a particular style of reasoning. They practiced what might be called extreme rationalism, the belief that pure logic, divorced from empirical constraint or social context, could reveal fundamental truths about technology and society. This methodology privileged thought experiments over data and clever paradoxes over mundane observation, and the result was a body of work that read like medieval theology, brilliant and intricate, but utterly disconnected from the actual development of AI systems. That disconnection did not make their efforts worthless: by pushing abstract reasoning to its limits, they clarified questions of control, ethics and long-term risk that later informed more grounded discussions of AI policy and safety.
The contemporary incarnation of this tradition found its most influential expression in Nick Bostrom’s 2014 book “Superintelligence,” which transformed fringe internet philosophy into mainstream discourse. Bostrom, a former Oxford philosopher, gave academic respectability to scenarios that had previously lived in science fiction and posts on blogs with obscure titles. His book, despite containing no technical AI research and precious little engagement with actual machine learning, became required reading in Silicon Valley, often cited by tech billionaires. Musk once tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” Musk is right to counsel caution, as evidenced by the estimated 1,200 to 2,000 tons of nitrogen oxides, along with hazardous air pollutants like formaldehyde, that his own artificial intelligence company expels into the air over Boxtown, a working-class, largely Black community in Memphis.
This commentary shouldn’t be seen as an attempt to diminish Bostrom’s achievement, which was to take the sprawling, often incoherent fears about AI and organize them into a rigorous framework. But his book sometimes reads like a natural history project: he categorizes different routes to superintelligence, different “failure modes,” the ways such a system might go wrong or destroy us, and solutions to “control problems,” the schemes proposed to keep it aligned. This taxonomic approach made even wild speculation appear scientific. By treating superintelligence as an object of systematic study rather than a science fiction premise, Bostrom laundered existential risk into respectable discourse.
The effective altruism (EA) movement supplied the social infrastructure for these ideas. Its core principle is to maximize long-term good through rational calculation. Within that worldview, superintelligence risk fits neatly, for if future people matter as much as present ones, and if a small chance of global catastrophe outweighs ongoing harms, then preventing AI apocalypse becomes the top priority. On that logic, hypothetical future lives eclipse the suffering of people living today.
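To see how that calculus works, here is a rough, purely illustrative sketch of the arithmetic (the specific figures are mine, not numbers the movement formally endorses): multiply even a vanishingly small reduction in extinction risk by an astronomically large count of possible future lives, and the product dwarfs any present-day harm.

$$
\underbrace{10^{-6}}_{\text{reduction in extinction risk}} \times \underbrace{10^{16}}_{\text{possible future lives}} = 10^{10} \text{ expected lives saved} \;\gg\; \text{any harm suffered today}
$$

On that arithmetic, almost no present injustice can compete with even the faintest chance of averting catastrophe, which is how hypothetical future lives come to eclipse the suffering of people living today.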
“The loudest prophets of superintelligence are those building the very systems they warn against.”
This did not stay an abstract argument: philanthropists who identify with effective altruism channeled significant funding into AI safety research, and money shapes what researchers study. Organizations aligned with effective altruism have been established in universities and policy circles, publishing reports and advising governments on how to think about AI. The U.K.’s Frontier AI Taskforce has included members with documented links to the effective altruism movement, and commentators argue that these connections help channel EA-style priorities into government AI risk policy.
Effective altruism encourages its proponents to move into public bodies and major labs, creating a pipeline of staff who carry these priorities into decision-making roles. Jason Matheny, former director of the Intelligence Advanced Research Projects Activity, a U.S. government agency that funds high-risk, high-reward research to improve intelligence gathering and analysis, has described how effective altruists can “pick low-hanging fruit within government positions” to exert influence. Superintelligence discourse isn’t spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power.
This is not to deny the merits of engaging with the ideals of effective altruism or with the concept of superintelligence as articulated by Bostrom. The problem is how readily those ideas become distorted once they enter political and commercial domains. This intellectual genealogy matters because it reveals superintelligence discourse as a cultural product: ideas that moved beyond theory into institutions, acquiring funding and advocates. And its emergence was shaped in settings committed to rationalism over empiricism, where individual genius was fetishized over collective judgment and technological determinism was prioritized over social context.
Entrepreneurs Of The Apocalypse
The transformation of superintelligence from internet philosophy to boardroom strategy represents one of the most successful ideological campaigns of the 21st century. Tech executives who had previously focused on quarterly earnings and user growth metrics began speaking like mystics about humanity’s cosmic destiny, and this conversion reshaped the political economy of AI development.
OpenAI, founded in 2015 as a non-profit dedicated to ensuring artificial intelligence benefits humanity, exemplifies this transformation. OpenAI has evolved into a peculiar hybrid, a capped-profit company controlled by a non-profit board, valued by some estimates at $500 billion, racing to build the very artificial general intelligence it warns might destroy us. This structure, byzantine in its complexity, makes perfect sense within the logic of superintelligence. If AGI represents both ultimate promise and existential threat, then the organization building it must be simultaneously commercial and altruistic, aggressive and cautious, public-spirited yet secretive.
Sam Altman, OpenAI’s CEO, has perfected the rhetorical stance of the reluctant prophet. In Congressional testimony, blog posts and interviews, he warns of AI’s dangers while insisting on the necessity of pushing forward. “Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity,” he wrote on his blog earlier this year. There is a distinct “we must build AGI before someone else does, because we’re the only ones responsible enough to handle it” feel to the argument. Altman seems determined to position OpenAI as humanity’s champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.
Still, OpenAI is also seeking a profit. And that is really what all this is about — profit. Superintelligence narratives carry staggering financial implications, justifying astronomical valuations for companies that have yet to show consistent paths to self-sufficiency. But if you’re building humanity’s last invention, perhaps normal business metrics become irrelevant. This eschatological framework explains why Microsoft would invest $13 billion in OpenAI, why venture capitalists pour money into AGI startups and why the market treats large language models like ChatGPT as precursors to omniscience.
Anthropic, founded by former OpenAI executives, positions itself as the “safety-focused” alternative, raising billions by promising to build AI systems that are “helpful, honest and harmless.” But it’s all just elaborate safety theatre: harm has no genuine bearing on the competition between OpenAI, Anthropic, Google DeepMind and others. The true contest is over who gets to build the best, most profitable models and how well they can package that pursuit in the language of caution.
This dynamic creates a race to the bottom of responsibility, with each company justifying acceleration by pointing to competitors who might be less careful: The Chinese are coming, so if we slow down, they’ll build unaligned AGI first. Meta is releasing models as open source without proper safeguards. What if some unknown actor hits upon the next breakthrough first? This paranoid logic forecloses any possibility of genuine pause or democratic deliberation. Speed becomes safety, and caution becomes recklessness.
“[Sam] Altman seems determined to position OpenAI as humanity’s champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.”
The superintelligence frame reshapes internal corporate politics. AI safety teams, often staffed by believers in existential risk, provide moral cover for rapid development, absorbing criticism that might otherwise target business practices and reinforcing the idea that these companies are doing world-saving work. If your safety team publishes papers about preventing human extinction, routine regulation begins to look trivial.
The well-publicized drama at OpenAI in November 2023 illuminates these dynamics. When the company’s board attempted to fire Sam Altman over concerns about his candor, the resulting chaos revealed underlying power relations. Employees, who had been recruited with talk of saving humanity, threatened mass defection if their CEO wasn’t reinstated — did their loyalty to Altman outweigh their quest to save the rest of us? Microsoft, despite having no formal control over the OpenAI board, exercised decisive influence as the company’s dominant funder and cloud provider, offering to hire Altman and any staff who followed him. The board members, who thought honesty an important trait in a CEO, resigned, and Altman returned triumphant.
Superintelligence rhetoric serves power, but it is set aside when it clashes with the interests of capital and control. Microsoft has invested billions in OpenAI and implemented its models in many of its commercial products. Altman wants rapid progress, so Microsoft wants Altman. His removal put Microsoft’s whole AI business trajectory at risk. The board was swept aside because it tried, as was its remit, to constrain OpenAI’s CEO. Microsoft’s leverage ultimately determined the outcome, and employees followed suit. It was never about saving humanity; it was about profit.
The entrepreneurs of the AI apocalypse have discovered a perfect formula. By warning of existential risk, they position themselves as indispensable. By racing to build AGI, they justify the unlimited use of resources. And by claiming unique responsibility, they deflect democratic oversight. The future becomes a hostage to present accumulation, and we’re told we should be grateful for such responsible custodians.
The Manufacture Of Inevitability
Superintelligence discourse actively constructs the future. Through constant repetition, speculative scenarios acquire the weight of destiny. This process — the manufacture of inevitability — reveals how power operates through prophecy.
Consider the claim that artificial general intelligence will arrive within five to 20 years. Across many sources, this prediction is surprisingly stable. But since at least the mid-20th century, researchers and futurists have repeatedly promised human-level AI “in a couple of decades,” only for the horizon to continuously slip. The persistence of that moving window serves a specific function: it’s near enough to justify immediate massive investment while far enough away to defer necessary accountability. It creates a temporal framework within which certain actions become compulsory regardless of democratic input.
This rhetoric of inevitability pervades Silicon Valley’s discussion of AI. AGI is coming whether we like it or not, executives declare, as if technological development were a natural force rather than a human choice. This naturalization of progress obscures the specific decisions, investments and infrastructures that make certain futures more likely than others. When tech leaders say we can’t stop progress, what they mean is, you can’t stop us.
Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent. Claude solves coding problems; the singularity is near. Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that’s where the future is being built and governments defer regulation because they don’t want to handicap their domestic champions.
The construction of inevitability also operates through linguistic choices. Notice how quickly “artificial general intelligence” replaced “artificial intelligence” in public discourse, as if the general variety were a natural evolution rather than a specific and contested concept, and how “superintelligence” — or whatever term the concept eventually assumes — then appears as the seemingly inevitable next rung on that ladder. Notice how “alignment” — ensuring AI systems do what humans want — became the central problem, assuming both that superhuman AI will exist and that the challenge is technical rather than political.
Notice how “compute,” which basically means computational power, became a measurable resource like oil or grain, something to be stockpiled and controlled. This semantic shift matters because language shapes possibility. When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future.
“When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future.”
When we simultaneously treat compute as a strategic resource, we further normalize the concentration of power in the hands of those who control data centers, who in turn, as the failed ousting of Altman demonstrates, grant still more power to a chosen few.
Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability. Universities, desperate for industry funding and relevance, establish AI safety centers and existential risk research programs. These institutions, putatively independent, end up reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction. Young researchers, seeing where the money and prestige lie, orient their careers toward superintelligence questions rather than present AI harms.
International competition adds further to the apparatus of inevitability. The “AI arms race” between the United States and China is framed in existential terms: whoever builds AGI first, the story goes, will achieve permanent geopolitical dominance. This neo-Cold War rhetoric forecloses possibilities for cooperation, regulation or restraint, making the race toward potentially dangerous technology seem patriotic rather than reckless. National security becomes another trump card against democratic deliberation.
The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve. Researchers who might work on explainable AI or AI for social good instead join labs focused on scaling large language models. The future narrows to match the prediction, not because the prediction was accurate, but because it commanded resources.
In financial terms, it is a heads-we-win, tails-you-lose arrangement: If the promised breakthroughs materialize, private firms and their investors keep the upside, but if they stall or disappoint, the sunk costs in energy-hungry data centers and retooled industrial policy sit on the public balance sheet. An entire macro-economy is being hitched to a story whose basic physics we do not yet understand.
We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn’t whether AGI is coming, but who benefits from making us believe it is.
The Abandoned Present
While we fixate on hypothetical machine gods, actual AI systems reshape human life in profound and often harmful ways. The superintelligence discourse distracts from these immediate impacts; one might even say it legitimizes them. After all, if we’re racing towards AGI to save humanity, what’s a little collateral damage along the way?
Consider labor, that fundamental human activity through which we produce and reproduce our world. AI systems already govern millions of workers’ days through algorithmic management. In Amazon warehouses, workers’ movements are dictated by handheld devices that calculate optimal routes, monitor bathroom breaks and automatically fire those who fall behind pace. While the cultural conversation around automation often emphasizes how it threatens to replace human labor, for many, automation is already actively degrading their profession. Many workers have become an appendage to the algorithm, executing tasks the machine cannot yet perform while being measured and monitored by computational systems.
Frederick Taylor, the American mechanical engineer who wrote “The Principles of Scientific Management” in 1911, is famous for his efforts to engineer maximum efficiency through rigid control of labor. What we have today is a form of tech-mediated Taylorism: work broken into tiny, optimized motions, every movement monitored and timed, with management logic encoded in software rather than stopwatches. Taylor’s logic has been operationalized far beyond what he could have imagined. But when we discuss AI and work, the conversation immediately leaps to whether AGI will eliminate all jobs, as if the present suffering of algorithmically managed workers were merely a waystation to obsolescence.
The content moderation industry exemplifies this abandoned present. Hundreds of thousands of workers, primarily in the Global South, spend their days viewing the worst content humanity produces, including child abuse and sexual violence, to train AI systems to recognize and filter such material. These workers, paid a fraction of what their counterparts in Silicon Valley earn, suffer documented psychological trauma from their work. They’re the hidden labor force behind “AI safety,” protecting users from harmful content while being harmed themselves. But their suffering rarely features in discussions of AI ethics, which focus instead on preventing hypothetical future harms from superintelligent systems.
Surveillance represents another immediate reality obscured by futuristic speculation. AI systems enable unprecedented tracking of human behavior. Facial recognition identifies protesters and dissidents. Predictive policing algorithms direct law enforcement to “high-risk” neighborhoods that mysteriously correlate with racial demographics. Border control agencies use AI to assess asylum seekers’ credibility through voice analysis and micro-expressions. Social credit systems score citizens’ trustworthiness using algorithms that analyze their digital traces.
“An entire macro-economy is being hitched to a story whose basic physics we do not yet understand.”
These aren’t speculative technologies; they are real systems that are already deployed, and they don’t require artificial general intelligence, just pattern matching at scale. But the superintelligence discourse treats surveillance as a future risk — what if an AGI monitored everyone? — rather than a present reality. This temporal displacement serves power, because it’s easier to debate hypothetical panopticons than to dismantle actual ones.
Algorithmic bias pervades critical social infrastructures, amplifying and legitimizing existing inequalities by lending mathematical authority to human prejudice. The response from the AI industry? We need better datasets, more diverse teams and algorithmic audits — technical fixes for political problems. Meanwhile, the same companies racing to build AGI deploy biased systems at scale, treating present harm as acceptable casualties in the march toward transcendence. The violence is actual, but the solution remains perpetually deferred.
And beneath all of this, the environmental destruction accelerates as we continue to train large language models — a process that consumes enormous amounts of energy. When confronted with this ecological cost, AI companies point to hypothetical benefits, such as AGI solving climate change or optimizing energy systems. They use the future to justify the present, as though these speculative benefits should outweigh actual, ongoing damages. This temporal shell game, destroying the world to save it, would be comedic if the consequences weren’t so severe.
And just as it erodes the environment, AI also erodes democracy. Recommendation algorithms have long shaped political discourse, creating filter bubbles and amplifying extremism, but more recently, generative AI has flooded information spaces with synthetic content, making it impossible to distinguish truth from fabrication. The public sphere, the basis of democratic life, depends on people sharing enough common information to deliberate together.
When AI systems segment citizens into ever-narrower feeds, that shared space collapses. We no longer argue about the same facts because we no longer encounter the same world, but our governance discussions focus on preventing AGI from destroying democracy in the future rather than addressing how current AI systems undermine it today. We debate AI alignment while ignoring human alignment on key questions, like whether AI systems should serve democratic values rather than corporate profits. The speculative tyranny of superintelligence obscures the actual tyranny of surveillance capitalism.
Mental health impacts accumulate as humans adapt to algorithmic judgment. Social media algorithms, optimized for engagement, promote content that triggers anxiety, depression and eating disorders. Young people internalize algorithmic metrics — likes, shares, views — as measures of self-worth. The quantification of social life through AI systems produces new forms of alienation and suffering, but these immediate psychological harms pale beside imagined existential risks, receiving a fraction of the attention and resources directed toward preventing hypothetical AGI catastrophe.
Each of these present harms could be addressed through collective action. We could regulate algorithmic management, support content moderators, limit surveillance, audit biases, constrain energy use, protect democracy and prioritize mental health. These aren’t technical problems requiring superintelligence to solve; they’re just good old-fashioned political challenges demanding democratic engagement. But the superintelligence discourse makes such mundane interventions seem almost quaint. Why reorganize the workplace when work itself might soon be obsolete? Why regulate surveillance when AGI might monitor our thoughts? Why address bias when superintelligence might transcend human prejudice entirely?
The abandoned present is crowded with suffering that could be alleviated through human choice rather than machine transcendence, and every moment we spend debating alignment problems for non-existent AGI is a moment not spent addressing algorithmic harms affecting millions today. The future-orientation of superintelligence discourse isn’t just a distraction but an abandonment, a willful turning away from present responsibility toward speculative absolution.
Alternative Imaginaries For The Age Of AI
The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods. These alternatives show that you do not have to join the race to superintelligence or renounce technology altogether. It is possible to build and govern automation differently now.
Across the world, communities have begun experimenting with different ways of organizing data and automation. Indigenous data sovereignty movements, for instance, have developed governance frameworks, data platforms and research protocols that treat data as a collective resource subject to collective consent. Organizations such as the First Nations Information Governance Centre in Canada and Te Mana Raraunga in Aotearoa insist that data projects, including those involving AI, be accountable to relationships, histories and obligations, not just to metrics of optimization and scale. Their projects offer working examples of automated systems designed to respect cultural values and reinforce local autonomy, a mirror image of the effective altruist impulse to abstract away from place in the name of hypothetical future people.
“The speculative tyranny of superintelligence obscures the actual tyranny of surveillance capitalism.”
Workers are also experimenting with different arrangements, and unions and labor organizations have negotiated clauses on algorithmic management, pushed for audit rights over workplace systems and begun building worker-controlled data trusts to govern how their information is used. These initiatives emerge from lived experience rather than philosophical speculation, from people who spend their days under algorithmic surveillance and are determined to redesign the systems that manage their existence. While tech executives are celebrated for speculating about AGI, workers who analyze the systems already governing their lives are still too easily dismissed as Luddites.
Similar experiments appear in feminist and disability-led technology projects that build tools around care, access and cognitive diversity, and in Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints. Degrowth-oriented technologists design low-power, community-hosted models and data centers meant to sit within ecological limits rather than override them. Such examples show how critique and activism can progress to action, to concrete infrastructures and institutional arrangements that demonstrate how AI can be organized without defaulting to a superintelligence paradigm that demands everyone else be sacrificed because a few tech bros believe they alone can see a greater good the rest of us have missed.
What unites these diverse imaginaries — Indigenous data governance, worker-led data trusts, and Global South design projects — is a different understanding of intelligence itself. Rather than picturing intelligence as an abstract, disembodied capacity to optimize across all domains, they treat it as a relational and embodied capacity bound to specific contexts. They address real communities with real needs, not hypothetical humanity facing hypothetical machines. Precisely because they are grounded, they appear modest when set against the grandiosity of superintelligence, but existential risk makes every other concern look small by comparison. You can predict the ripostes: Why prioritize worker rights when work itself might soon disappear? Why consider environmental limits when AGI is imagined as capable of solving climate change on demand?
These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems. Once algorithms mediate communication, employment, welfare, policing and public discourse, they become political institutions. The power structure is feudal, comprising a small corporate elite that holds decision-making power justified by special expertise and the imagined urgency of existential risk, while citizens and taxpayers are told they cannot grasp the technical complexities and that slowing development would be irresponsible in a global race. The result is learned helplessness, a sense that technological futures cannot be shaped democratically but must be entrusted to visionary engineers.
A democratic approach would invert this logic, recognizing that questions about surveillance, workplace automation, public services and even the pursuit of AGI itself are not engineering puzzles but value choices. Citizens do not need to understand backpropagation to deliberate on whether predictive policing should exist, just as they need not understand combustion engineering to debate transport policy. Democracy requires the right to shape the conditions of collective life, including the architectures of AI.
This could take many forms. Workers could participate in decisions about algorithmic management. Communities could govern local data according to their own priorities. Key computational resources could be owned publicly or cooperatively rather than concentrated in a few firms. Citizen assemblies could be given real authority over whether a municipality moves forward with contentious uses of AI, like facial recognition and predictive policing. Developers could be required to demonstrate safety before deployment under a precautionary framework. International agreements could set limits on the most dangerous areas of AI research. None of this is about whether AGI, or any other kind of superintelligence one can imagine, does or does not arrive; it’s simply about recognizing that the distribution of technological power is a political choice rather than an inevitable outcome.
“The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain.”
The superintelligence narrative undermines these democratic possibilities by presenting concentrated power as a tragic necessity. If extinction is at stake, then public deliberation becomes a luxury we cannot afford. If AGI is inevitable, then governance must be ceded to those racing to build it. This narrative manufactures urgency to justify the erosion of democratic control, and what begins as a story about hypothetical machines ends as a story about real political disempowerment. This, ultimately, is the larger risk, that while we debate the alignment of imaginary future minds, we neglect the alignment of present institutions.
The truth is that nothing about our technological future is inevitable except further change itself. Change is certain, but its direction is not. We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.
Every algorithm embeds decisions about values and beneficiaries. The superintelligence narrative masks these choices behind a veneer of destiny, but alternative imaginaries — Indigenous governance, worker-led design, feminist and disability justice, commons-driven models, ecological constraints — remind us that other paths are possible and already under construction.
The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence, because the future of AI is a political field that should be open to contestation. It belongs not to those who warn most loudly of gods or monsters, but to the publics that have the moral right to democratically govern the technologies that shape their lives.