Author’s note: Some commenters on Hacker News questioned whether Part I of this series was AI-generated. I want to be transparent about my process: I am a human being who researched, structured, and wrote both pieces. I did use Claude as a research and drafting tool during parts of my workflow — the same way a writer might use a search engine, a database, or a conversation with a knowledgeable colleague to stress-test arguments and surface sources. Every claim in this article is sourced to published reporting, government documents, or corporate disclosures, all of which I verified independently. The editorial judgment, the structural choices, the argument, and the voice are mine. If your objection to an article about the surveillance state is that the author used a word processor you find suspicious, I’d gently suggest you’re focusing on the wrong thing.
In Part I of this series, I documented the demand side of the American surveillance apparatus: how the Trump administration, using tools from Palantir, Clearview AI, Zignal Labs, and others, has assembled an AI-powered system capable of tracking millions of people in real time, predicting “threats” before they occur, and automating the machinery of deportation, surveillance, and political repression.
This piece covers the supply side — the story of how OpenAI, the world’s most prominent artificial intelligence company, systematically repositioned itself from a nonprofit dedicated to benefiting “all of humanity” to a company that donates to presidents, courts defense secretaries, replaces blacklisted competitors, and is building the technological substrate for the next generation of mass surveillance capabilities.
What follows is documented through government contracts, corporate disclosures, investigative journalism, and OpenAI’s own public statements. Where I discuss technical capabilities, I am describing what is achievable with current or near-term technology — not speculation about science fiction.
OpenAI announced itself to the world in December 2015 with a press release stating: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole. Since our research is free from financial obligations, we can better focus on a positive human impact.”
CEO Sam Altman told the New Yorker in 2016: “I unabashedly love this country, which is the greatest country in the world. But some things we will never do with the Department of Defense.”
By early 2026, OpenAI had a $200 million contract with that same Department of Defense, had struck a deal to deploy its models in the Pentagon’s classified networks, and was positioning itself as the AI vendor of choice for the national security establishment.
The pivot happened in stages, and each stage was deliberate.
January 2024: OpenAI quietly deleted language from its usage policy that had explicitly prohibited use of its technology for “weapons development” and “military and warfare.” The company replaced this blanket prohibition with vaguer language about not using the service to “harm yourself or others.” Experts at the AI Now Institute warned that the removal left the door open for lucrative military contracts. OpenAI’s VP of global affairs, Anna Makanju, confirmed the company was already working with the Pentagon on cybersecurity projects.
Spring–Fall 2024: OpenAI went on a hiring spree targeting the defense and intelligence community. In February, the company brought on Katrina Mulligan, a veteran of the intelligence establishment who had led the media response to the Snowden leaks while on the Obama National Security Council, worked for the director of national intelligence, and served as a senior civilian overseeing Special Operations forces at the Pentagon. In October, OpenAI hired Dane Stuckey as its chief information security officer after a decade at Palantir — the same Palantir now building ImmigrationOS for ICE, the same Palantir whose ELITE tool generates targeting packages and assigns “confidence scores” to people’s home addresses using HHS data. The company also added former NSA director Paul Nakasone to its board.
2024 lobbying spend: OpenAI spent $1.76 million on government lobbying — a nearly sevenfold increase from the $260,000 it spent in 2023. The company hired lobbyist Matthew Rimkunas, who had spent 16 years working for Senator Lindsey Graham, and later added lobbyist Meghan Dorn, who had worked five years for Graham’s office.
December 2024: OpenAI announced a strategic partnership with defense contractor Anduril Industries — its first collaboration with a commercial weapons manufacturer. The partnership focused on counter-drone systems. Anduril’s leadership is closely tied to Peter Thiel’s network: chairman Trae Stephens previously worked at Palantir and Thiel’s foundation; co-founder Palmer Luckey is a prominent Trump supporter with deep ties to the Republican political establishment. The same Peter Thiel whose Palantir is now the backbone of ICE’s surveillance infrastructure.
December 2024: Altman made a $1 million personal donation to Trump’s inauguration fund. Democratic senators accused him of using donations “to cozy up” to the administration to avoid regulatory scrutiny.
On January 21, 2025 — the day after Trump’s inauguration — Altman stood behind the presidential seal at the White House and praised the president for the $500 billion “Stargate” initiative. “For AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn’t be able to do this without you, Mr. President,” Altman said.
This was a man who, in 2016, had compared Trump’s rise to Hitler’s rise in 1930s Germany and called on tech companies to stand against him. Who had donated $200,000 to help reelect Biden. After Stargate, Altman wrote on X that “watching [Trump] more carefully recently has really changed my perspective on him.”
Stargate is not merely a business deal. It is the physical infrastructure of AI dominance — data centers across Texas, Ohio, Michigan, and eventually internationally — and it was announced by the President of the United States as a national priority. Trump called it “the largest AI infrastructure project in history” and indicated he would use emergency declarations to expedite its development.
The quid pro quo is not hidden. It is the arrangement itself. OpenAI gets billions in infrastructure support, favorable regulatory treatment, and a direct line to the national security establishment. The administration gets the ability to claim credit for AI supremacy, jobs, and investment — and gets a willing partner for whatever comes next.
In March 2025, OpenAI’s global affairs chief Chris Lehane — a former press secretary for Al Gore and special counsel to President Clinton — submitted a white paper directly to the Trump administration. As reported by The Intercept, the document pitched a vision of AI built not for global benefit but for “the explicit purpose of maintaining American hegemony and thwarting the interests of its geopolitical competitors — specifically China.”
The policy paper proposed that the government:
Create a direct line for the AI industry to reach the entire national security community
Work with OpenAI “to develop custom models for national security”
Increase intelligence sharing between industry and spy agencies
Preempt state-level AI regulations that “may hinder economic competitiveness”
Ease environmental restrictions that might slow data center construction
Use export controls not merely to restrict China but to ensure America is “winning diffusion” globally
The document explicitly advocates conceiving of the AI market as “the entire world less the PRC and its few allies” — quietly excluding over 1 billion people from the humanity OpenAI claims to serve.
Lehane told Axios he had been at the White House the prior week and had held “many meetings” with Trump administration officials. “Our work stream is intersecting with where the administration is going,” he said.
Lehane was named to TIME’s 100 Most Influential People in AI for 2025. The magazine noted he had dropped “any pretensions that [OpenAI] would welcome AI regulation, and overtly embraced an accelerationist race to AI supremacy.”
Meanwhile, OpenAI’s Washington team — led by Mulligan — now includes former Department of Defense, NSA, CIA, and Special Operations personnel. The revolving door doesn’t just spin. It has been propped permanently open.
In June 2025, OpenAI was awarded a $200 million, one-year contract with the Defense Department — its first formal defense contract. The company had been building a portfolio that included partnerships with NASA, the NIH, and the Treasury Department, and had created ChatGPT Gov for federal employees.
But the most revealing chapter came in February 2026, when the Trump administration effectively destroyed OpenAI’s primary competitor.
After months of negotiations over whether Anthropic could restrict its AI from being used for mass domestic surveillance or fully autonomous weapons, Defense Secretary Pete Hegseth designated Anthropic — an American company — a “supply chain risk to national security.” This designation, normally reserved for foreign adversaries, would bar any military contractor from doing business with Anthropic. Trump ordered every federal agency to cease using Anthropic’s technology.
Within hours, OpenAI announced it had struck a deal with the Pentagon to fill the gap.
The sequence is worth stating plainly: the government blacklisted a company for insisting its AI not be used for mass surveillance of Americans, and the company that swooped in to replace it is the one that donated $1 million to the president’s inauguration, co-announced a $500 billion deal at his White House, submitted a white paper urging the government to deregulate its industry, and hired its Washington team from the intelligence agencies that operate the surveillance programs.
Sam Altman publicly stated that OpenAI shared the same “red lines” as Anthropic. He claimed the Pentagon deal included the same protections. But as Fortune reported, Anthropic had tried to get its limits spelled out explicitly in the contract; OpenAI agreed that the Pentagon could use its tech for “any lawful purpose” while also claiming to have the protections “in the agreement.” It is unclear how both of these things could be true.
Dean Ball, a former Trump senior policy adviser for AI, called the Anthropic blacklisting “attempted corporate murder.” Multiple legal experts said Hegseth’s interpretation of supply-chain-risk authority was “almost surely illegal.”
The message to every AI company was unmistakable: cooperate, or be destroyed.
In December 2025, the Trump administration launched the “U.S. Tech Force,” placing personnel from companies including OpenAI, Amazon, Microsoft, Palantir, Anduril, and xAI directly into government AI projects — and the companies have committed to considering program “alumni” for future employment.
Independent analysts described this structure as intentional regulatory capture. Instead of independent officials overseeing AI policy, the companies building the AI are sending their own personnel to implement federal policies governing its use.
OpenAI is not merely selling technology to the government. It is embedding itself inside the government.
Everything described above involves capabilities that exist today — Palantir’s targeting packages, Clearview AI’s facial recognition, Zignal Labs’ social media monitoring, the NSA’s XKeyscore. These systems are powerful, but they are constrained by the quality and availability of their training data.
OpenAI’s video generation model, Sora, represents a qualitative leap beyond these constraints — not because it is a surveillance tool itself, but because it is a training data factory for the next generation of surveillance tools.
The core technique is called synthetic data generation: you use a generative model to produce labeled training data for a separate analytical model. When you generate the video, you have perfect ground truth — you know exactly what’s in the scene because you specified it. This solves the labeling bottleneck that plagues real-world data collection.
With a sufficiently high-fidelity video generator, you can create training distributions that would be impossible, prohibitively expensive, or deeply unethical to collect from real humans. The synthetic data doesn’t need to be photorealistic in every detail — it just needs to capture enough statistical structure for the downstream model to generalize to real-world inputs.
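To make the mechanism concrete, here is a minimal sketch of the pattern in Python. Everything in it is illustrative: generate_video is a hypothetical stand-in for a Sora-class generator, not any real API. What it demonstrates is structural: because each clip is rendered from a scene specification, the label exists before the video does, so annotation cost is zero.

```python
from dataclasses import dataclass, asdict
import json
import random

# The scene specification *is* the label. Everything the downstream
# detector must learn to recognize is declared here before rendering.
@dataclass
class SceneSpec:
    environment: str       # "parking_garage", "transit_station", ...
    crowd_density: float   # 0.0 (empty) to 1.0 (packed)
    lighting: str          # "day", "night", "sodium_vapor"
    camera_angle_deg: int  # elevation of the simulated CCTV viewpoint
    event: str             # ground truth: "none", "altercation", ...

def generate_video(spec: SceneSpec) -> bytes:
    """Hypothetical stand-in for a Sora-class generator conditioned on
    the spec. Here it just returns the spec itself as a placeholder."""
    return json.dumps(asdict(spec)).encode()

def build_training_set(n: int) -> list[tuple[bytes, str]]:
    events = ["none", "altercation", "weapon_brandish", "crowd_panic"]
    examples = []
    for _ in range(n):
        spec = SceneSpec(
            environment=random.choice(
                ["parking_garage", "transit_station", "retail"]),
            crowd_density=random.random(),
            lighting=random.choice(["day", "night", "sodium_vapor"]),
            camera_angle_deg=random.randint(15, 60),
            event=random.choice(events),
        )
        # No annotator ever watches this clip: the label was
        # specified, not discovered.
        examples.append((generate_video(spec), spec.event))
    return examples
```

Scaling that loop is what “controllable variables” means in practice: every field of the specification becomes an axis of the training distribution you can sweep at will.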
Here is what becomes trainable:
Threat detection from surveillance video. Building systems that detect fights, assaults, armed threats, or active shooter scenarios requires training data of people committing violence. Real datasets are tiny, legally sensitive, and biased. Synthetic video can generate realistic depictions of escalating aggression, weapon brandishing, crowd panic, and physical altercations across every conceivable environment — parking garages, transit stations, schools, retail spaces — with controllable variables for crowd density, lighting, occlusion, and camera angle. This extends to training models that detect pre-attack behavioral indicators, concealed weapon signatures from movement anomalies, or crowd dynamics preceding stampedes.
Person re-identification at scale. One of the hardest problems in tracking a person across multiple camera views is getting labeled data where you know person A in camera 1 is the same as person A in camera 7, across changes in angle, lighting, and clothing. Synthetic generation can produce unlimited identity-consistent video across arbitrary camera networks — the same synthetic person walking through a synthetic city, captured from dozens of viewpoints, with systematic variation in disguise, clothing, accessories, and gait. The result: re-identification systems robust to evasion in ways current models cannot achieve because real training data doesn’t cover enough of the variation space. (A sketch of this labeling advantage follows this list.)
Emotion, deception, and psychological state analysis. Training models to detect micro-expressions, deception, emotional states, or psychological distress from video requires either recording people experiencing genuine emotions (massive ethical issues) or using actors (whose expressions introduce systematic bias). Synthetic generation can produce precisely specified emotional states, masked emotions, micro-expressions at controlled intensities, and culturally varied expressions — all with perfect labels. This enables systems to detect coercion, intoxication, cognitive impairment, or acute psychological crisis from video.
Gait recognition and biomechanical profiling. Identifying people by how they walk requires enormous per-individual datasets impractical to collect ethically. Synthetic generation can produce gait patterns corresponding to specific conditions, fatigue signatures, varying levels of impairment, and capability degradation under stress — all perfectly labeled across the full parameter space.
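To see why the re-identification case is the starkest, consider the labeling economics. In real footage, confirming that the person in camera 1 and the person in camera 7 are the same individual takes manual annotation; in synthetic footage, identity is a generation parameter, so every cross-camera pair arrives pre-labeled. A toy sketch, with synth_clip as a hypothetical stand-in for the generator:

```python
import itertools
import random

def synth_clip(person_id: int, camera: int, outfit: str) -> dict:
    """Hypothetical generator call. In a real pipeline this would return
    rendered footage; returning the parameters is all the labeling
    argument needs."""
    return {"person_id": person_id, "camera": camera, "outfit": outfit}

def build_reid_pairs(num_people: int, num_cameras: int):
    outfits = ["coat", "hoodie", "suit", "uniform"]
    clips = [
        synth_clip(p, c, random.choice(outfits))
        for p in range(num_people)
        for c in range(num_cameras)
    ]
    pairs = []
    for a, b in itertools.combinations(clips, 2):
        # The supervision signal is free: same synthetic identity -> match,
        # regardless of camera, viewpoint, or clothing change.
        same = int(a["person_id"] == b["person_id"])
        pairs.append((a, b, same))
    return pairs

if __name__ == "__main__":
    pairs = build_reid_pairs(num_people=3, num_cameras=4)
    matches = sum(label for _, _, label in pairs)
    print(f"{len(pairs)} labeled pairs, {matches} positives, zero annotators")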
Now connect this to the surveillance infrastructure documented in Part I. The government already operates facial recognition at every international airport. ICE already monitors 8 billion social media posts per day. Palantir already generates targeting packages with confidence scores. The ATF conducted hundreds of facial recognition searches without any policy governing their use.
What is missing is not the will to conduct mass behavioral surveillance. What is missing is the training data to build models sophisticated enough to do it reliably. Sora-class video generation provides exactly that missing ingredient.
There is a final structural observation worth making about Sora’s content policy, and it concerns the relationship between privacy protection and data collection.
Sora’s policy around identity is framed as protective: you should not generate realistic video of other people without their consent. The stated purpose is to prevent deepfakes and harassment. On its face, this is reasonable.
But consider what it does to user behavior. If you cannot generate video of others, the path of least resistance for exploring the tool’s most compelling capability — putting a realistic human in a generated scene — is to use yourself. To do this well, you upload selfies, photos from multiple angles, full-body shots, images in different lighting. The better the reference material, the better the output, so users are incentivized to provide more and higher-quality biometric data.
What the system receives from this interaction goes far beyond what any social media profile contains:
Geometric facial data from multiple angles, enabling precise 3D facial reconstruction
Skin texture, mole patterns, scarring — features useful for re-identification but typically hard to extract from surveillance footage
Body proportions and posture for anthropometric profiling and gait recognition training
Contextual metadata — photos taken in homes and workplaces carrying GPS coordinates, timestamps, and device information (see the sketch after this list)
Longitudinal data from users who return over months, showing how appearance changes over time — exactly what aging-robust recognition systems need
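The contextual-metadata item above is not speculative; it is how consumer photos work by default. A few lines of Python with the Pillow library read GPS position, capture time, and device model straight out of an image’s EXIF block (the filename here is a hypothetical example):

```python
# Reads EXIF metadata from a photo using Pillow (pip install Pillow).
# Consumer phone photos routinely carry GPS position, timestamp, and
# device make/model unless the user has explicitly stripped them.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def photo_metadata(path: str) -> dict:
    exif = Image.open(path).getexif()
    meta = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # GPS data lives in its own sub-directory of the EXIF block (IFD 0x8825).
    gps = exif.get_ifd(0x8825)
    meta["GPSInfo"] = {
        GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps.items()
    }
    return meta

if __name__ == "__main__":
    # Hypothetical example file; substitute any phone photo.
    for key, value in photo_metadata("selfie.jpg").items():
        print(key, value)
```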
All of this arrives voluntarily, with consent under the terms of service, labeled by the user themselves (“this is me”), at a quality level that would be impossible to obtain through passive surveillance. Users are motivated to solve the system’s data quality problem for the system, because doing so makes the output better.
The privacy protection for others becomes the data collection incentive for you.
And every user who uploads their photos and generates a video of themselves contributes to a calibration dataset that measures how well synthetic humans map to real ones — the critical validation link in the entire synthetic-data-for-surveillance pipeline.
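What that calibration might look like is easy to sketch. Assuming some off-the-shelf face-embedding model produces feature vectors for both a user’s real photos and the synthetic renders made from them (the toy vectors below stand in for that model’s output), measuring their similarity quantifies exactly how well synthetic humans map to real ones:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def synthetic_real_gap(real_embeddings, synthetic_embeddings) -> float:
    """Mean cosine similarity across matched (real photo, synthetic render)
    embedding pairs for the same user. Near 1.0 means the generator
    preserves identity as the recognition model sees it; low values
    expose the domain gap a surveillance pipeline would need to close."""
    sims = [
        cosine_similarity(r, s)
        for r, s in zip(real_embeddings, synthetic_embeddings)
    ]
    return float(np.mean(sims))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for embeddings from a face recognition model.
    real = [rng.normal(size=128) for _ in range(5)]
    synth = [r + rng.normal(scale=0.1, size=128) for r in real]  # small gap
    print(f"mean real/synthetic similarity: "
          f"{synthetic_real_gap(real, synth):.3f}")
```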
Let me connect the pieces explicitly.
The government has built a surveillance apparatus of extraordinary scope. DHS has awarded more than $1 billion in IT contracts in roughly the first year of Trump’s second term. ICE plans to spend more than $300 million on social media monitoring, facial recognition, license plate readers, and location tracking tools. The agency can request footage from more than 2,000 local police departments through Amazon’s Ring. The NSA’s XKeyscore continues to operate with no judicial oversight. The Intelligence Community admits it does not know how much commercially available data it is collecting, what types, or what it is doing with it.
OpenAI has positioned itself as the preferred AI vendor for this apparatus. The company deleted its military use ban, hired from the NSA and CIA and Pentagon, donated to the president, co-announced a half-trillion-dollar deal at the White House, submitted policy proposals drafted to mirror the administration’s priorities, won a $200 million defense contract, embedded its personnel inside the government through the Tech Force initiative, and replaced a competitor that was blacklisted for insisting its AI not be used for mass surveillance.
OpenAI’s video generation technology provides the missing link in the surveillance pipeline: the ability to generate unlimited, perfectly labeled training data for analytical models that would otherwise be impossible to train — re-identification systems, behavioral threat detection, emotion analysis, gait recognition, and more.
And OpenAI’s content policies, ostensibly designed to protect privacy, structurally incentivize users to voluntarily provide high-quality biometric data that calibrates the bridge between synthetic and real-world surveillance capabilities.
Bruce Schneier warned in January 2026 that we have entered the era of “bulk spying” — mass surveillance plus AI-powered analysis. Congressman Warren Davidson wrote that “surveillance capitalism and government spying on its own citizens has run amok.”
The Bulletin of the Atomic Scientists — the organization that created the Doomsday Clock — compares AI to nuclear technology and calls for “restrictive, well-designed controls to prevent damage to democracy.”
Those controls do not exist. The company building the engine has no interest in creating them. And the government operating the machine has every interest in ensuring they never arrive.
The panopticon does not merely watch. It learns. And the company teaching it has purchased its seat at the table with a $1 million donation and a willingness to say yes when others said no.
This is Part II of a series. Part I: The Panopticon Is Here documented the surveillance infrastructure the government has already deployed. Part III will examine what resistance looks like — legal, technical, and political — and whether any of it can work.
Sources and References
The following sources inform this article, in addition to all sources cited in Part I:
The Intercept, “OpenAI’s Pitch to Trump: Rank the World on U.S. Tech Interests,” June 3, 2025
The Intercept, “OpenAI Quietly Deletes Ban on Using ChatGPT for ‘Military and Warfare,’” January 12, 2024
CNBC, “OpenAI quietly removes ban on military use of its AI tools,” January 16, 2024
CNBC, “OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic,” February 27, 2026
CNBC, “OpenAI wins $200 million US defense contract,” June 16, 2025
CNBC, “Trump had phone call with OpenAI’s Sam Altman last week about AI infrastructure,” January 22, 2025
NPR, “OpenAI announces Pentagon deal after Trump bans Anthropic,” February 27, 2026
NPR, “Bezos, Zuckerberg and Altman donate to Trump’s inauguration fund,” December 13, 2024
Fortune, “OpenAI sweeps in to snag Pentagon contract after Anthropic labeled ‘supply chain risk,’” February 28, 2026
Fortune, “OpenAI strikes a deal with the Pentagon, just hours after Trump orders end to Anthropic contracts,” February 27, 2026
CNN, “OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic,” February 27, 2026
NBC News, “OpenAI strikes deal with Pentagon after Trump orders government to stop using Anthropic,” February 27, 2026
ABC News, “OpenAI’s Sam Altman once warned America about Trump. Now he’s partnering with him,” January 28, 2025
Fox Business, “Sam Altman defends OpenAI Pentagon deal after Trump executive order,” March 1, 2026
Fox Business, “OpenAI lays out key proposals for Trump admin AI Action Plan,” March 13, 2025
Axios, “OpenAI’s Altman responds to Dem letter demanding he explain Trump donation,” January 17, 2025
Axios, “Chris Lehane on OpenAI’s policy strategy for new Trump era,” March 13, 2025
MIT Technology Review, “OpenAI has upped its lobbying efforts nearly seven-fold,” January 21, 2025
TIME, “Chris Lehane: The 100 Most Influential People in AI 2025”
Common Dreams, “OpenAI Cuts ‘Military and Warfare’ Ban From Permissible Use Policy,” January 15, 2024
Brennan Center for Justice, “Money in Politics Roundup — October 2025”
Quartz, “OpenAI gets $200 million Defense Department contract,” June 17, 2025
Calcalist/Ctech, “AI drives defense innovation with OpenAI and Anduril leading the way,” December 8, 2024
Sherwood News, “OpenAI strikes deal with Anduril to bring its AI to the battlefield,” December 4, 2024
Evolmagazine, “U.S. Tech Force: Analysis of Trump’s AI Plan & Partners,” December 17, 2025
OpenAI, “Announcing The Stargate Project,” January 21, 2025
OpenAI, OSTP/NSF RFI Response, March 13, 2025
OpenAI, OSTP RFI Response, October 27, 2025
Wikipedia, “Stargate LLC”