The Case That More Openness Brings More Good to Society

The safety-first crowd loves to argue against open source by asking, "what if bad actors get hold of this technology?" I want to make the opposite case: the push away from openness, the one Anthropic and friends are leading right now, is itself a major driver of harm in society, not a defense against it.

Start with a premise that gets skipped too often: positive impact on society includes normalcy. We have built a relatively peaceful society because most people aspire to be normal. Normal people want to live somewhere the daily fight to survive is as small as possible, and to contribute when they have surplus to give. That aspiration is the soil that laws, regulations, and ethics grow out of. The opposite of contributing positively is being an anomaly, someone who breaks the social contract, and the first step back to normalcy is identifying those people. Common sense, until you notice the zeroth step hiding underneath it: before anyone can recognize an anomaly, they have to know what one looks like. Recognition is a skill, and skills require knowledge.

Imagine trying to find a thief who has stolen nuclear material. How can the public help if they don't know what nuclear material looks like, what it does, or why standing next to it is dangerous? Worse, an uninformed public will actively obstruct a real investigation: refusing to evacuate, wandering into contaminated zones, amplifying rumors. Knowledge is the first line of defense. Without it the social fabric tears, because it isn't resilient enough to absorb the shock of an anomaly it cannot name.

This is exactly why reducing public contact with LLMs produces more harm, not less, and the rollout of ChatGPT is the cleanest case study we have. There was real friction during first contact between the public and LLM providers, but as exposure grew and people developed intuitions about how these systems actually work, adoption climbed and risk management improved in step. The world did not end when GPT-2 was released to the public. It didn't end because the public got the chance to learn how a bad actor might weaponize a language model against them, and then organized openly to push back. Open harnesses, open weights, open disclosure, open write-ups: these are how the warning spread fast enough to matter. They are how ordinary people learned to articulate why a given misuse was dangerous and how big the blast radius could be.

Limiting public knowledge about how things work doesn't make solutions easier to find. It makes them harder. Nobody is arguing we should hand every citizen a nuke — we all understand the danger, precisely because the danger has been explained openly for eighty years. We teach radiation safety so radioactive materials can be used in hospitals and power plants without killing anyone. The thing we actually fear is the opposite situation: a hostile state acquiring a nuke while our own public is too uninformed to support a coherent response. Secrecy on our side does not slow them down. It only blinds us.

Anthropic markets itself as the safety-first lab and treats Claude Code as a crown-jewel secret: a closed product whose internals are deliberately walled off, with reverse-engineering efforts hit by takedown notices. Then, on March 31, 2026, Anthropic accidentally leaked part of the internal source code for Claude Code, confirmed the incident publicly, and attributed it to a release packaging error rather than a security breach. A debug file bundled into a routine update was pushed to the public npm registry; it pointed to a zip archive on Anthropic's own cloud storage containing nearly 2,000 files and around 500,000 lines of code, and the archive was mirrored and dissected on GitHub within hours. The company issued takedowns, and, as one analyst put it, the takedowns didn't contain the leak, they just changed its file extension, because developers had already rewritten the functionality in other languages.
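
The failure mode here is mundane plumbing, which is the point. As a rough illustration only (this is not Anthropic's actual pipeline; the script, the file patterns, and the tarball name below are all hypothetical), here is the kind of pre-publish check that turns a stray debug artifact into a failed build instead of a public leak:

```typescript
// prepublish-check.ts — illustrative sketch; patterns and paths are hypothetical.
// Lists the contents of a built npm tarball and exits non-zero if anything that
// looks like a debug artifact or bundled archive is about to ship.
import { execFileSync } from "node:child_process";

const tarball = process.argv[2] ?? "package.tgz";

// Things we never expect in a production package (hypothetical patterns).
const suspicious = [/\.zip$/i, /\.env$/i, /debug/i, /internal/i];

// `tar -tzf` prints one entry path per line for a gzipped tarball.
const entries = execFileSync("tar", ["-tzf", tarball], { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

const flagged = entries.filter((path) => suspicious.some((re) => re.test(path)));

if (flagged.length > 0) {
  console.error("Refusing to publish; unexpected files in tarball:");
  for (const path of flagged) console.error(`  ${path}`);
  process.exit(1);
}

console.log(`OK: ${entries.length} files checked, nothing suspicious.`);
```

Run something like it in CI against the output of npm pack, before npm publish ever executes. The check is trivially small, and that triviality is the argument: the moat failed at the level of a ten-line script, not at the level of espionage.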

Sit with the irony. The lab most committed to the argument that openness is dangerous could not keep its own source code closed for a single routine release. The "secret sauce" defense is philosophically weak and operationally fragile. Secrets leak. Build pipelines misconfigure. Humans err. Meanwhile the things openness was supposed to give us (public literacy, distributed scrutiny, faster patching of misuse, a citizenry that can actually participate in governing the technology) are forfeited by choice, in exchange for a moat that turns out to be a sieve.

Closed-source-as-safety is hubris dressed up as caution. It assumes the lab can hold the line forever, that competitors cannot catch up, that adversaries cannot reconstruct, that the public is safer being kept in the dark than being brought into the light. None of those assumptions survive contact with reality. North Korea has nukes and ICBMs, and nobody handed them the blueprints on a silver platter. Capability spreads. The only variable we actually control is whether the defenders (the public, the researchers, the watchdogs, the next generation of builders) get to spread with it.

Openness is not the risk. Pretending we can outrun openness is.