I was writing a blog post where I was going to reference Zeynep Tufekci’s 2025 NeurIPS keynote, and realized there isn’t a solid synopsis online. So here’s mine, because the framing felt important and immediately useful.
Tufekci’s through-line is that the dominant AI fears (and hopes) are misaligned: they fixate on “AGI,” superintelligence, or head-to-head “can it beat humans at X?” benchmarks. Her argument is that the destabilizing risk doesn’t require AI to be better than humans in any global sense. It’s enough for AI to be good enough, cheap, fast, and deployable at scale, a condition she calls (in the talk and elsewhere) “Artificial Good-Enough Intelligence.”
The nightmare we should be focusing on is institutional: generative AI breaks long-standing correlations that society uses to infer things like effort, sincerity, authenticity, and credibility. Once these signals erode, we don’t automatically get something better. We scramble to replace the old filters with new ones. And that transition can be very costly, even when the old way of doing things wasn’t great.
She started by describing systematic ways in which early conversations about a new technology’s impact go wrong: they tend to fixate on benchmarks that are familiar at the time and on incremental, one-for-one substitutions, rather than on systemic second-order effects.
She gave examples of the wrong questions that smart, thoughtful people with skin in the game got stuck on when earlier technologies like the printing press and the car emerged. When the printing press came out, it initially seemed like it would only solidify the power of the Church, which could use it to print Bibles, spread the word, and make money by selling “indulgences” that it claimed would get your loved ones out of purgatory faster. That intuition, that incumbents will use a new medium to consolidate power, wasn’t totally wrong, but cheaper replication and distribution also empowered challengers. Ultimately, others like Martin Luther gained access and used the printing press to spread alternative messages, leading to the Reformation, the fracturing of Western Christianity, and many years of destructive religious wars.
When cars were first introduced, people’s mental models were still anchored to horses. Many were preoccupied with whether “horseless carriages” would be faster than carriages with horses. They imagined small-scale substitution, swapping out horses one by one, so that cities would have less manure. But the real change wasn’t “replace a horse, get less manure”; it was the redesign of streets, cities, logistics, commuting, housing, and who gets to live where. They failed to grasp how cars would reshape the entire landscape by making it possible for people to move out of cities.
She also connected this to early optimism about social media and democracy, which underestimated how quickly incentives and tactics would evolve. She talked about the reactions she received back in the 2000s when she tried to warn people about the need for transparency in how social media platforms like Facebook were being used politically. At the time, the prevailing view was that social media would be a savior for democracy. She wrote her first New York Times piece arguing for more transparency about how the Obama administration used Facebook for campaigning, and she received pushback from the White House and from high-level Silicon Valley executives, who were confident that social media would always be good for democracy and that their opponents would never be able to use the tools the way they could.
One of her points was that the artificial general intelligence (AGI) discourse sets up the wrong expectation: that there is a line representing humanness and that AI is slowly approaching it. In reality, AI is always going to be a different kind of thing, and the relationship is much more complicated.
She argued that many of the tasks the ML community tends to focus on (rationality, Math Olympiads, and the like) are interesting precisely because they are somewhat uncommon for humans. What matters more than how models compare to us on niche tasks is that they can now reproduce complex, coherent human language well enough, and at scale. The important question has never been whether they are absolutely better. If AI were simply on our trajectory, only further along, the problem would arguably be easier: we’d be out of jobs, but that would be a political question about how to divide the additional wealth. The intellectual questions we actually face are more complicated: we have a foreign kind of intelligence reproducing much of what we consider uniquely human.
Tech takes the scarce and makes it abundant. Part of her point about “good enough AI” is that social disruption doesn’t wait for a model that is uniformly excellent; it arrives when a system is useful enough to substitute into institutional workflows. And as a society we have come to rely on many built-in mechanisms that assume only humans can generate outputs with certain properties.
LLMs break our ability to read certain outputs as proof of effort. We make high schoolers write essays and do math assignments not because we care about the essays themselves or because we don’t know the problem-set answers, but because the effort trains them in a certain way. We read a customized cover letter as an important signal of interest because writing a good one has traditionally been hard, so you could only do it for so many of the jobs you applied to. Gatekeeping is inevitable, and when the old mechanisms stop working, other measures will step in, like relying on the prestige of a candidate’s institution or their connections to decide who to hire, which papers to cite, or what to publish.
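To make the scale point concrete, here’s a back-of-envelope sketch in Python. Every number in it (minutes per hand-written letter, seconds and cents per API call) is my own illustrative assumption, not a figure from the talk; only the ratio matters.

```python
# Back-of-envelope arithmetic for why the "customized cover letter"
# signal collapses. All constants below are illustrative assumptions.

human_minutes_per_letter = 45   # assumed time to hand-write one good letter
llm_seconds_per_letter = 20     # assumed generation time via an LLM API
llm_cost_per_letter_usd = 0.01  # assumed per-call cost at typical token prices

applications = 500

human_hours = applications * human_minutes_per_letter / 60
llm_minutes = applications * llm_seconds_per_letter / 60
llm_cost_usd = applications * llm_cost_per_letter_usd

print(f"Human: ~{human_hours:.0f} hours of effort for {applications} letters")
print(f"LLM:   ~{llm_minutes:.0f} minutes and ~${llm_cost_usd:.2f} for the same batch")

# A customized letter used to certify ~45 minutes of attention per
# application; once it certifies ~$0.01, readers stop treating it
# as a signal of interest at all.
```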
LLMs also break proof of authenticity. Our financial system, our courts, and so on are built on the assumption that certain things are hard, like forging certain kinds of evidence (videos, images, voice clips). When those things are no longer hard, some other mechanism must step in, and it may not be ideal. Many of the mechanisms now breaking down are ones that verified authenticity or effort. One likely replacement is expanded surveillance and credentialing: solutions that may stabilize trust, but at a steep cost.
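One concrete (and double-edged) candidate for a replacement filter is cryptographic provenance: institutions sign content they vouch for, and everyone verifies the signature rather than the content itself. This is my illustration of the “credentialing” direction, not a mechanism from the talk; the sketch below uses Python’s standard `hmac` module and hand-waves key distribution, identity, and revocation entirely.

```python
# Minimal provenance sketch: a key-holding institution signs content,
# and verification checks the tag, not the plausibility of the content.
import hashlib
import hmac

SECRET_KEY = b"institutional-signing-key"  # assumption: securely provisioned

def sign(content: bytes) -> str:
    """Produce an HMAC tag asserting the key-holder vouches for this content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Constant-time check; True only if the content is byte-for-byte unmodified."""
    return hmac.compare_digest(sign(content), tag)

evidence = b"deposition video, exhibit 14"
tag = sign(evidence)
assert verify(evidence, tag)
assert not verify(b"deposition video, exhibit 14 (edited)", tag)

# Note the cost Tufekci warns about: trust no longer attaches to the
# artifact but to whoever holds the key, i.e., to credentialing
# institutions, and to the surveillance needed to run them.
```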
Ultimately, the message is that transitions are expensive: breaking a flawed mechanism doesn’t automatically produce a better one. “Good enough” AI at scale can destabilize the scaffolding of effort and authenticity long before anything like “AGI” arrives, if it ever does.
