There's this common argument that only those who are bad at coding, or don't care about the craft of it, use AI to code. I think this is pretty clearly false. Let's look at a list, shall we?
- Salvatore Sanfilippo, the creator of Redis:
  - Replaced the fast_float C++ library with a pure C implementation in order to make the project more maintainable, removing around 3.5k lines of code on net
  - Wrote a Flux 2 image generation model inference library in pure C with zero external dependencies
  - Wrote a blog post covering some other things he did, and why he is using AI so heavily now: "Don't fall into the anti-AI hype"
- David Heinemeier Hansson, odious fascist, but still undeniably a good developer, and the creator of Rails and Basecamp:
  - Wrote a blog post about his use of AI agents: "Promoting AI agents"
- Simon Willison, co-creator of the Django web framework, board member of the Python Software Foundation, and creator of Datasette:
  - Writes a lot about using LLMs for various things on his blog: SimonWillison.net
- Armin Ronacher, creator of Flask, Pygments, Jinja, and a number of other more or less widely known OSS projects:
  - Initially was against AI: "I Think AI Would Kill my Wife"
  - Now is all-in: "AI Changes Everything", "Welcoming The Next Generation of Programmers", "90%" ("For the infrastructure component I started at my new company, I’m probably north of 90% AI-written code.")
- Guido van Rossum, creator of Python:
  - In an interview with Lex Fridman, said: "You know where that [coding] is going? GitHub Copilot. I use it every day, and it writes a lot of code for me."
- John Carmack, creator of Doom and Quake, and a pioneer of video games:
- Andrej Karpathy, founding member of OpenAI and former Director of AI at Tesla:
  - Proclaimed that "The hottest new programming language is English"
  - Uses AI tools like Cursor to build high-performance C code: llm.c
- Mitchell Hashimoto, creator of Ghostty and Terraform:
- Peter Norvig, influential educator and AI researcher:
  - In "Large Language Models and the Future of Programming", says: "I think in the future, programming is going to be thought of more as a collaboration, not as instructions — it's not the programmer telling the computer what to do, it's the two exploring the space together to solve the problem."
- Chris Lattner, creator of LLVM, Clang, and Swift:
  - In an interview with Jeremy Howard, said: "It’s amazing for learning a codebase you’re not familiar with, so it’s great for discovery. The automation features of AI are super important. Getting us out of writing boilerplate, getting us out of memorizing APIs, getting us out of looking up that thing from Stack Overflow; I think this is really profound. This is a good use."
  - Not an unalloyed booster, though: "Software craftsmanship is the thing that AI code threatens. Not because it’s impossible to use properly—again, I use it, and I feel like I’m doing it well because I care a lot about the quality of the code. But because it encourages folks to not take the craftsmanship, design, and architecture seriously."
- Steve Klabnik, writer of The Rust Programming Language book, Rust core team member, and Ruby on Rails contributor:
  - Talked extensively about using AI, and how to push back against AI haters, on a podcast with the Zed team, where he also mentioned that he would've considered himself an "AI hater" up until 2025
  - Has now written a prototype systems programming language (a sort of Rust-lite), talking about how Claude Code made it possible ("I have also been wondering for a few years if you can build a language without financial backing and a team. I think LLMs might change that equation. Yes, Claude costs money, but I mean like, you don’t need to do what Mozilla did and drop seven or eight figures worth of money on a development team to make meaningful progress.") and how Claude writes most of his code lately: "But one thing that 2025 changed: I went from an AI skeptic to writing most of my code with Claude."
  - Has written a blog post about how the AI discourse in general (not just from the hype side) is bad (generally uninformed and tribalistic)
  - Has written about his experiences, positive and negative, with Claude
  - Has written a few primers and tutorials for agentic development: Agentic development basics, Getting started with Claude for software development, How to think about Gas Town
- Yehuda Katz, creator of Handlebars, Ember.js, and Rust's package manager Cargo:
  - Writes here about his process: "I’ve been using AI code assistants for ages at this point, and I’ve landed on a productive workflow for myself. … I mostly use Windsurf with Cascade for the vibe coding portion. I have a Pro account, and chew through a decent number of tokens. … This is what I do, day in and day out. And it’s made me much more productive."
More iffy ones:
- Ryan Dahl, the creator of Node.js and Deno:
  - Has said "the era of humans writing code is over."
  - Is presumably therefore using LLMs to code a lot, but I couldn't easily find any direct evidence
- Even Linus Torvalds, creator of the Linux kernel and the Git version control system:
  - Wrote a new open source project called AudioNoise with a small part of it vibe-coded in Python ("Also note that the python visualizer tool has been basically written by vibe-coding.")
  - Has said: "There is zero point in talking about AI slop. That's just plain stupid. … It's why I strongly want this to be that "just a tool" statement."
  - Has also called himself "a huge believer in AI as a tool."
This refutes a few arguments that I see a lot of people make, especially in places like lobste.rs:
- "Only bad coders use AI": This is dead. You cannot argue this while Salvatore Sanfilippo is agentically coding improvements to Redis in low-level C and Guido van Rossum uses Copilot daily.
- "It's all Clever Hans": The detailed, technical write-ups (Sanfilippo's C conversion, Klabnik's language prototype, Willison's extensive documentation) show these developers understand and direct the output. It is extremely unlikely they'd be fooled by an AI bumbling about doing fully random things until it randomly gets the right answer and they accept it. These systems are steerable. They may not be precise, but they are accurate.
- "Gambling effect only": Occasional wins don't sustain multi-year adoption across entire workflows. Ronacher's "90% AI-written code" and Klabnik's "most of my code with Claude" show sustained, iterative use, making it seriously hard to claim that it's just jackpot chasing.
- "Disingenuous hype": Some have financial incentives (Karpathy, Norvig), but most don't. Sanfilippo, Willison, Ronacher, and van Rossum have nothing to sell. Moreover, their views remain credible because they retain critical nuance (e.g., Lattner's craftsmanship warning).
- "It doesn't make you more productive": I think it's hard to argue that, for instance, Steve Klabnik could've created an entire compiled, low-level systems language prototype in about two weeks without agentic help. He's good, but not that good. No one is.
The point isn't that you have to use AI, or that it has no downsides, risks, or reasons for skepticism. It's that, at this point, it's just not sane or feasible to claim it isn't useful for anyone. I think it's at the stage now where, like many other development tools (IDEs, debuggers, dynamic vs static typing, high-level vs low-level languages, OOP vs functional), some people clearly do get real benefits out of it; there's a there there, even if it might not work for others. And it might even be valid to say "skill issue" to the people who can't get value out of it.