The air in Washington, D.C., in July of 1945 was thick with the scent of damp wool uniforms, stale tobacco, and the pervasive anxiety of a world balanced on a knife-edge. The war in Europe was won, but the Pacific still raged, promising a brutal, bloody climax.
At the center of this sweltering capital sat Vannevar Bush, the MIT engineer-turned-administrator of the apocalypse. As Director of the Office of Scientific Research and Development, he steered the vast wartime scientific enterprise—an undertaking that included everything from radar and penicillin production to turning atomic forces into an instrument of industrialized death.
In that high summer of '45, the reports confirmed that the "Gadget" was ready. The Trinity test was imminent. Bush knew that humanity was on the verge of possessing the means for its own erasure.
But as he sat down to write "As We May Think" for the July issue of The Atlantic, his mind was not on military directives or the grim calculus of war.
He was thinking about a desk.
The man who had channeled the brilliance of a generation toward splitting the atom was now turning that same systematic ingenuity toward an equally urgent crisis: the human mind's inability to keep pace with its own discoveries. The "growing mountain of research" was expanding exponentially, threatening to bury understanding beneath information.
The root of the crisis, Bush argued, was clear: the tools for managing information were misaligned with the processes of human thought. Traditional indexing systems were rigid and hierarchical. "The human mind does not work that way," Bush insisted. "It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain."
Our knowledge was outstripping our wisdom, and if wisdom was to control the power that science had unleashed, the mind needed new tools.
His solution was a machine he called the memex, a mechanized desk that would serve as an "enlarged intimate supplement" to memory. Bush envisioned translucent screens set into the desk's surface, onto which documents could be projected for reading. Inside, vast libraries would be stored on microfilm—thousands of books, articles, photographs, and personal correspondence compressed into a space no larger than a desk drawer. A user could call up any document with a few keystrokes, flip through pages with the flick of a lever, and even add marginal notes using a stylus. The mechanics were analog but elegant: microfilm, photoelectric cells, levers, and dry photography. Crucially, its workings were transparent to the user, who remained firmly in control.
But the memex's true genius lay not in storage but in connection. The memex would allow a scholar to create "associative trails"—permanent links between any two documents that could be named, saved, and shared. A researcher studying Turkish archery could seamlessly connect medieval history to a physics textbook on elasticity, building a personalized web of knowledge that could be traversed like following a path through a landscape. Years later, that trail could be instantly recalled and even shared with colleagues. This was not just a better filing cabinet; it was a new way of thinking about how knowledge could be organized, navigated, and expanded.
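To make the mechanism concrete, here is a minimal sketch in Python of how a memex-style trail might be modeled as a data structure. The names (Document, Trail, add, replay) and the details of the example are inventions for this essay, not anything Bush specified; his design was mechanical, built from microfilm and levers rather than software.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A stored item in the memex: a book, an article, a photograph, a note."""
    title: str
    content: str

@dataclass
class Trail:
    """A named, shareable sequence of links between documents,
    loosely modeling Bush's 'associative trails'."""
    name: str
    links: list = field(default_factory=list)  # ordered (document, marginal note) pairs

    def add(self, doc: Document, note: str = "") -> None:
        # Append the next association, optionally with a marginal note.
        self.links.append((doc, note))

    def replay(self):
        # Traverse the trail in the order it was blazed.
        for doc, note in self.links:
            yield doc.title, note

# A researcher studying Turkish archery links medieval history to physics:
archery = Trail("turkish-bow")
archery.add(Document("A History of the Crusades", "..."), "origins of the short composite bow")
archery.add(Document("Elasticity of Materials", "..."), "why the bow stores so much energy")
for title, note in archery.replay():
    print(title, "->", note)
```

The essential property is the one Bush described: the links themselves are first-class objects that can be named, saved, recalled years later, and handed to a colleague.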
This was a vision born in the shadow of the mushroom cloud, yet it represented Bush's proposed antidote to the chaos. It was an act of profound optimism that technology, having given us the power to end the world, might also give us the power to understand it.
Eighty years later, the light is different: cooler, digital, refracted through the screens that embody a version of Bush's dream far more powerful than he could have imagined. In this light, another scientist found himself grappling with an intelligence born from that same dream of association.
Geoffrey Hinton, a British-born cognitive psychologist and computer scientist, is a man whose quiet, academic demeanor belies the revolutionary force of his ideas. Known as one of the "Godfathers of AI," he spent half a century pursuing a single question: how does the brain learn? His stubborn belief in "neural networks" was long dismissed by the mainstream as a scientific dead end.
But Hinton persisted. He had not sought a weapon; he had sought an understanding of the mind. His work was driven not by the urgency of war, but by the patient pursuit of knowledge. His vision was finally vindicated by the deep learning revolution of the 2010s.
Then, in the spring of 2023, he recoiled from it. The realization crept up on him gradually, an accumulation of alarming evidence that fundamentally shifted his worldview.
It started with jokes. Hinton had been testing PaLM, Google's large language model, with humor, pushing it to explain quips and puns. The model didn't just get them; it grasped layers of meaning that surprised him. He had thought it would be years before an AI could explain a joke.
But it wasn't just humor. Hinton watched as models like GPT-4 began exhibiting "emergent properties"—solving problems they were never trained for, manipulating concepts with a fluidity that felt eerily conscious. They began to reason. They demonstrated few-shot learning—picking up new tasks with just a handful of examples—at times faster than a person could manage.
The tipping point was a stark mathematical realization. Human brains have roughly 100 trillion neural connections. GPT-4 has perhaps two trillion at most. "Yet GPT-4 knows hundreds of times more than any one person does," Hinton told MIT Technology Review. "So maybe it's actually got a much better learning algorithm than us."
"I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future."
Bush's essay cast a long shadow across the emerging computer age. It appeared before ENIAC, the first general-purpose digital computer, was unveiled to the public; the memex was designed with microfilm and mechanical levers, not silicon and code. Yet Bush's analog vision became the blueprint for the digital age.
His ideas of associative thinking and networked information inspired a generation of pioneers who saw the potential of the computer not just as a calculator, but as a collaborator. Douglas Engelbart credited Bush's essay with shaping his life's work on "augmenting human intellect," leading to his revolutionary 1968 demonstration of the mouse, windows, and collaborative computing. Ted Nelson, similarly inspired, coined the term "hypertext" and spent decades pursuing Bush's dream of interconnected knowledge.
This evolution culminated in the World Wide Web. The Web became the memex, realized on a planetary scale: vast libraries instantly accessible and connected not by rigid hierarchies, but by the associative trails Bush had envisioned. The hyperlink became the digital embodiment of those trails, allowing users to leap instantly from one piece of information to another, following the natural patterns of human curiosity and connection.
Yet the realization of Bush's dream carried an unexpected consequence. Bush had envisioned an ordered archive, a tool to filter the signal from the noise. Instead, the Web became the ultimate deluge. It did not solve the information crisis; it accelerated it. The democratization of information did not lead to a new enlightenment. Optimized by corporations for engagement, the global network became a vector for distraction and polarization. It did not organize the maze of information; it became the maze.
For decades, the mainstream of AI research pursued "symbolic AI," attempting to reduce intelligence to explicit rules and logic, the same kind of rigid hierarchy Bush had dismissed. Meanwhile, Hinton and a small band of researchers labored away on the alternative: "connectionism."
The idea was deceptively simple: instead of programming machines with explicit rules, let them learn through experience, like brains do. These "neural networks" consisted of layers of artificial neurons that could strengthen or weaken their connections based on the data they processed. Intelligence, Hinton argued, would emerge from these associations.
This was Bush's associative trail realized in mathematics and silicon, but the transformation was profound. Bush's memex was passive; the human user created the trails. Hinton's neural networks were active; the machine blazed its own trails, discovering patterns invisible to the human eye.
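To give a flavor of what learning through experience means in code, here is a minimal sketch of a two-layer network trained by gradient descent to compute XOR, a task no single linear rule can capture. It is an illustration of the connectionist idea written for this essay, not Hinton's code, and it is nowhere near the scale of modern systems.

```python
import numpy as np

# A toy connectionist network: two layers of connections whose strengths are
# adjusted from data alone, with no hand-written rules.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden connection strengths
W2 = rng.normal(size=(8, 1))   # hidden -> output connection strengths

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: activity flows through the connections.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)

    # Backward pass: nudge each connection to reduce the squared error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

print(np.round(out, 2))  # typically approaches [0, 1, 1, 0]
```

Nothing in the code states the rule for XOR. The rule emerges as the connection strengths in W1 and W2 are strengthened or weakened, pass after pass, in whatever direction reduces the error.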
The big breakthrough came not from a new discovery, but from a convergence of forces that finally gave Hinton's algorithms what they had always needed. The algorithms existed, but were data-starved. The internet had quietly assembled the enormous datasets these systems craved, while graphics processors provided the computing power needed to train massive "deep neural networks" on this deluge of data. The theories that had languished in academic obscurity suddenly found their moment. The tool designed to organize human knowledge became the training ground for artificial cognition. What had been dismissed as a curiosity became a technological revolution that reshaped the world.
The realization of Bush's dream has inverted the crisis he sought to solve. He feared a crisis of retrieval. Our crisis is one of authenticity.
The generative AI models built on Hinton's work do not organize the deluge; they pollute it, at staggering scale and with growing sophistication. On LinkedIn, over 54% of longer English posts were likely written by AI in 2024. Across the internet, more than 1,200 fabricated news sites now operate with AI-generated content, mimicking legitimate media outlets. The 2024 election cycle became a testing ground for AI-generated disinformation, from deepfake robocalls impersonating President Biden to fabricated videos resurrecting dead dictators.
Studies confirm the growing challenge: people struggle to identify AI-generated text, and counter-intuitively, readers are actually less likely to spot false information when it's written by AI than by humans. The democratization of synthetic content creation means that anyone can now generate photorealistic images, coherent essays, and convincing audio with minimal technical expertise, making any motivated actor (whether a political operative, content farmer, or casual user) capable of sophisticated deception. In such a world, the concept of shared reality begins to dissolve.
This pollution undermines Bush's original vision. He wanted to build tools that would help humanity navigate the growing complexity of knowledge. Instead, we've created systems that undermine our ability to distinguish fact from fiction. We're drowning not in information, but in convincing fabrications. The associative trails Bush envisioned have become pathways for synthetic content, turning his maze into a labyrinth of moving walls constructed by intelligences we no longer understand—and that may soon surpass us entirely.
Hinton's growing unease with his life's work stems from a deeper fear than fake content. "The alarm bell I'm ringing is to do with the existential threat of them taking control," he explained. "I used to think it was a long way off, but I now think it's serious and fairly close."
This alarm centers on what AI researchers call "the alignment problem"—ensuring that machines whose intelligence surpasses our own remain aligned with human values and under human control. The challenge is both ancient and urgently modern. Throughout history, humans have dreamed of creating beings to serve them, from golems to Frankenstein's monster. These stories share a common thread: the created beings inevitably escape their creators' control.
One challenge of alignment emerges from a fundamental mismatch between how we design AI systems and how we hope they'll behave. We train these systems to optimize for specific objectives, but optimization is a powerful force that can lead to unexpected consequences. For example, a system tasked with maximizing paperclip production might eventually convert all available matter into paperclips, including humans, if that's the most efficient path to its goal.
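A toy sketch makes the mismatch visible. In the snippet below, written purely as an illustration, the optimizer is handed an objective that mentions only paperclips; the value we place on everything else never appears in it, so the "optimal" policy spends every last unit of resources on paperclips.

```python
# Objective misspecification in miniature: only paperclips appear in the
# objective the optimizer is given; everything we implicitly value does not.
# The scenario and numbers are invented for illustration.

TOTAL_RESOURCES = 100  # abstract units of matter and energy

def paperclips(allocated):
    return 3 * allocated   # the stated objective: more matter in, more clips out

def everything_else(remaining):
    return remaining       # what we actually care about, never written down

best = max(range(TOTAL_RESOURCES + 1), key=paperclips)      # the optimizer's choice
print("allocated to paperclips:", best)                     # 100
print("left for everything else:", TOTAL_RESOURCES - best)  # 0
```

The failure is not malice; it is an objective that was easier to write down than the one we meant.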
The problem becomes more acute when we consider that AI systems learn from human behavior, not human ideals. They absorb our biases, our shortcomings, our contradictions. They've absorbed Machiavelli and analyzed every successful propaganda campaign. They might conclude that humans say they value honesty but reward deception, claim to desire peace but prepare for war, and profess equality while practicing discrimination.
The problem isn't merely technical—it's philosophical. How do we "align" a mind with human values when humans themselves disagree about those values? Societies fundamentally differ on whether to prioritize individual liberty or collective welfare, whether free speech should be absolute or subject to limits, and whether democratic consensus or efficient authority produces better outcomes. An AI system trained on global data absorbs all these contradictions simultaneously, leaving the resolution of competing values to the discretion of its creators.
Perhaps most troubling is the possibility that alignment itself may be impossible beyond a certain level of intelligence. "I don't know any examples of more intelligent things being controlled by less intelligent things," Hinton observed. The alignment problem asks whether human wisdom can keep pace with artificial intelligence, and what happens if the answer is no.
The creation of world-altering technology always exacts a moral toll. The dread that permeates the current conversation around AI echoes the aftermath of the Manhattan Project. For years, the scientists had been intoxicated by the intellectual thrill of discovery, pursuing the elegant mathematics of nuclear fission. Only as the bomb approached completion did they awaken to the reality that their equations were turning into something that could end civilization.
So they tried to stop it. The Franck Report of June 1945 warned, with chilling foresight, that using the bomb would lead to a "race for nuclear armaments" and desperately sought international control. The Szilard petition, signed by 70 scientists just days after Trinity, pleaded that the United States not be first to "open the door to an era of devastation on an unimaginable scale." It was too late. The proposals failed. The United States used the bomb, kicking off a nuclear arms race that consumed the world for the next half-century.
Like their atomic predecessors, today's AI researchers are often driven by genuine excitement about discovery. Many believe that aligned AI could solve humanity's greatest challenges: ending poverty, disease, and even death itself. Yet we find ourselves in an AI arms race, unfolding over mere months, driven by corporations beholden to market forces that reward rapid deployment over cautious deliberation. This creates the same tragic dynamic: well-meaning individuals pursuing groundbreaking research within organizations whose priorities do not align with humanity's long-term interests. Even researchers genuinely committed to safety find themselves subject to capitalistic pressures and the inexorable logic of competitive development.
Today, the architects of AI are experiencing their own moment of reckoning. Hinton and others warn of "the risk of extinction from AI," consciously echoing the language of the atomic scientists. The mechanisms they fear are chilling: AI systems that become too intelligent to control, sophisticated enough to manipulate us, and powerful enough to pursue instrumental goals of gaining ever more control. These fears have grown more urgent as AI systems edge toward recursive self-improvement, creating the possibility of an "intelligence explosion" where capabilities advance faster than our ability to ensure they remain beneficial. "It's quite conceivable," Hinton suggested, "that humanity is just a passing phase in the evolution of intelligence."
However, this focus on "extinction" is controversial. For many researchers, the focus on a hypothetical superintelligence feels like a dangerous distraction from the destruction already occurring. As researchers like Timnit Gebru have argued, these systems are already reinforcing existing inequalities. Biased algorithms, discriminatory hiring practices, and the centralization of power are the present dangers: We are worrying about the Terminator while the automated landlord is evicting people.
Unlike the nuclear age, where eventual arms control treaties provided some measure of stability, AI governance remains chaotic and inadequate to the challenge. Today's international efforts consist largely of voluntary commitments that companies can abandon when convenient, fragmented regulatory approaches across different nations, and institutions that move at diplomatic speed while AI capabilities advance exponentially. The physicists eventually built treaties to control the atom. Whether we can govern artificial minds before they govern us remains an open question.
Eighty years separate the summer of the bomb and the summer of the thinking machine.
Bush, standing amidst the wreckage of a world war, maintained his faith in technology's promise. He looked into the abyss of the atomic fire and dreamed of a machine to augment the human mind.
Hinton, standing at the pinnacle of Bush's dream, looked down and saw the abyss. Having pioneered the ultimate tool for augmenting the human mind, he glimpsed the destruction it might unleash.
At this moment, both visions ring true. "As We May Think" was not a prediction; it was a challenge—to build tools that amplify human intelligence rather than replace it, that turn knowledge into understanding rather than overload. Terrifyingly, the path toward intellectual salvation and the path toward existential risk are the same road.
This essay was researched, written, and edited with assistance from AI. The cover image was also generated using AI tools. In the spirit of Bush's vision, these technologies served to augment human thought rather than replace the creative process. Thank you also to Abigail Neiman, decidedly human, for her editing assistance and help revising the conclusion.
