They all use it

Last week, at a conference, I had a random hallway conversation with another engineer. We ended up talking about Zed and he told me he'd try it, but does it have any AI features? If so, can you turn them off?

I told him that, yes, you can turn them off. Sensing what made him ask, I added that if you do turn them off, it’s all deactivated, no AI in the background, foreground, underground.

Curious now, and with a chance for more nuance than a GitHub issue usually allows, this being a real conversation in the Real World, I asked: so you don't use AI? Not at all?

No, he said. With a shrug, he added: I tried it once, it was completely wrong, so I stopped using it. Never used it for coding, he said.

What'd you use, I asked. Claude? ChatGPT? Have you tried GPT-4?

Not sure, some website, he said with another shrug.

I haven’t been able to stop thinking about it.

There wasn’t any doubt in those shrugs. A couple of shrugs saying: I don’t care about all that AI stuff, I’m not interested, I just want to turn it off.

And I keep thinking about it and… I don’t get it.

What I do get is if you think AI is over-hyped, or that it’ll never lead to AGI, or that LLMs can’t reason, or that there’s a whole bunch of bullshit flying around in the world with the tag “AI” attached to it, or that it’s too expensive, too inefficient, too restricted, generates too much crap, or isn’t useful for what you’re doing — I get that.

What I don't get is how you can be a programmer in the year twenty twenty-four and not be the tiniest bit curious about a technology that's said to be fundamentally changing how we'll program in the future. Absolutely, yes, that claim sounds ridiculous — but don't you want to see for yourself?

The boy cried wolf, we won’t fall for that old trick again, hype’s hype and hot air is hot air, but now the whole town is saying there’s a wolf alright and you’re not interested in seeing what it looks like, not at all?

There's Andreas Kling, creator of SerenityOS and the Ladybird browser, using Copilot to build JIT compilers and often saying how much he values Copilot. Mitchell Hashimoto, founder of HashiCorp and creator of so many successful tools that I don't know which one to name here, doesn't use language servers but does use Copilot when hacking on Ghostty. Fabrice Bellard, a hacker with a portfolio so impressive that if someone told you he was made up and didn't really exist you wouldn't immediately brush it off, has been getting into LLMs and building tools for them. John Carmack — John Carmack — is working in AI now. Jarred Sumner, who wrote Bun into the world, is using Claude to do something he could easily do himself. Simon Eskildsen, who's done more engineering on napkins than others have on their computers, is using AI "all the time". antirez — the antirez — has been getting into LLMs for at least the last year.

That’s just off the top of my head. I could go on and on and on, but I won’t because — somehow, magically? — I can hear you say “that’s an appeal to authority, it doesn’t mea—” Yes, yes, yes, you’re right.

Look. I’m not saying you should kneel in front of the AGI altar.

What I'm saying is that ever since I got into programming I've assumed that one shared trait among programmers was curiosity, a willingness to learn — that our maxim is that we can't ever stop learning, because what we're doing is constantly changing beneath our fingers, and if we don't pay attention it might slip away from us, leaving us with knowledge that's no longer useful.

Maybe that assumption was wrong, maybe we don’t all share this trait, and maybe that’s okay, but even if… I don’t get how you can see some of the world’s best programmers use a new technology to make them better at programming and shrug it off.

How can you see them all use it and not think that, okay, maybe it’s not all bullshit, maybe something’s there, I need to figure out what it is?
