Back in March, I ripped the legendary fart that brought investors scurrying: viiiiiiiibbbecooooddeee. That one was fortunately allegorical, but amazingly enough, I recently ripped a tiny one called beeeeeeads, a little open-source project about as big as a gnat’s ass, and I was approached by investors over it. They are so hungry for anything. More on that later.
For today’s post, there’s a bit of a brawl brewing over whether humans matter or not. Considering you may well be a fellow human reading this post, you might think to yourself, gosh, that would be a silly thing to fight over. Of course we matter!
…right?
I accidentally sawed into this nerve in an unprepped patient with my last year’s blog post, Death of the Junior Developer. A YouTuber read it aloud on his channel, spitting loudly on the floor every other paragraph to express his displeasure with me and/or his floor. His followers launched into a huge argument in which they quickly zeroed in on the “Do We Matter” debate.
Because if AI can write code, then do programmers matter?
It’s been 18 months, and we now know the answer. After two years of watching companies undergoing dramatic AI transformations in all phases, stages, and strategic forms, the answer has become as plain as the nose on our face: Programmers will matter because everyone will be a programmer.
This transformation is already underway, with most roles at most companies writing some kind of code to accelerate their own department’s productivity, no longer blocked on engineers or third-party software. This trend will accelerate until we are in a world where everyone is building software, inside companies and out. Everyone will be a vibe coder.
What does it mean to matter as a programmer in that world?
For Programmers, Mattering Means Jobs
It’s pretty clear that when people ask, “will programmers matter?” they mean “to employers?” (Whereas the debate about whether humans matter is a bit more… cheesy, as we’ll see.)
It turns out, the world will still need engineers. Software comes in all shapes and sizes. The most ambitious, Google-sized software will still require human engineers for years to come: Trained, human, professional software developers. That’s you, Vault Hunter. There will be jobs in Big Software.
Of course, the most successful software may well be vibe coded by utter knee-biting idiots, and I mean that in the most flattering and complimentary sense of the word “successful.” You can never tell who’s going to win with a new medium; just look at YouTube, Mr. Beast, the Pauls, blah blah. But whoever wins, I think it’s fantastic that pretty much any non-programmer will be able to create useful software, and it’s cool that some will catch lightning in a bottle.
Successful idiots aside, I believe most real-world software services will continue to be backed by big, complex monoliths that need human engineers to help AIs navigate them. I mentioned this in my recent talk at the Enterprise Tech Leadership Summit in Las Vegas. Companies will need to rely on the three S’s to help AI work on monoliths: Signposting, Search Engines, and Senior Engineers. The three S’s will become increasingly important even for small companies as AI starts cranking out 10x-100x as much code.
And of course if everyone is coding, then junior engineers will be in high demand, as mentors and helpers for people who don’t know how things work under the hood. So there’s a role for junior engineers in the new “everyone codes” ecosystem.
That’s the good news. Yobs. Yobs, yobs, yobs. The bad news is: if your personal “Do We Matter” question has a different definition of the word “We” — say, one that involves penciling your eyebrows into a thick unibrow and then coding by hand — then, no. No yobs for you. Trad-code jobs will become increasingly niche and sparsely populated, and I don’t think you want to be fighting with the last fish in that pond as it evaporates.
In short, if you want a job in programming in the near future, you will be doing vibe coding. This change is already underway. They are moving your cheese. I’ve got reports rolling in of companies concluding that not only should they not be writing code by hand, they should not be reviewing it manually either. The AI does everything, shepherded by humans. We’ll circle back to this idea at the end of today’s little rant, after we detour to see how Big Tech would feel about most of humanity being turned into cheese wheels.
Big Cheese
18 months ago I asked what would happen to junior developers, and now the question has a happy and almost glib answer, with AI swooping in to save the day. Woohoo! Vibe coding and AI engineering are heating up, and we now know they are what will matter in programming going forward.
For the broader question about humans, though, what it means to “matter” is a bit more nuanced. I think that to most people, “Do humans matter” is asking, should humans flourish, and have a place in a world where most jobs can be done by AI? Will there be adequate resources for everyone to thrive, including enough available work for humans?
That’s how most people think about the question. Unfortunately, people in power tend to think of the question and its answer space very differently. Some of them have begun to take TESCREAL, which sounds fun and cool on the surface, to comical ethical extremes to justify building a paradise just for the people who can afford to get in. And if everyone else has to turn into cheese wheels, so be it.
It’s not like they’re hiding it; you’re seeing it more and more in interviews as the mask slips off. They always go something like this:
Interviewer: So, Mr. Altman, Mr. Thiel, Mr. Musk, you’re saying that as a byproduct of AI, most people around the world are going to turn into cheese wheels, but that you’re OK with this?
Altman: Well you have to understand, cheese is extremely valuable. It has been the most stolen food throughout human history. A cheese wheel lives longer than the average human without health insurance. It’s just rational common sense that if most people turn into cheese, it would be a huge win for everyone else.
Thiel: When the Elves first populated Beleriand, they brought cheese, over four thousand years before the race of Men of House Edain came to that fair land.
Musk: I fuckin’ love cheese. My father invented cheese by boiling the locals. Wait, maybe that was jerky. The Moon is made of cheese. I will feed a trillion people with the Moon’s cheese.
And so on. It’s cheese all the way down. I’ve only recently realized that this is a budding and somewhat bumbling Good vs. Evil fight, and no small one. It’s a concerted attempt by very wealthy people to create the largest global inequality in history.
When I had the privilege of meeting Dario Amodei in April, he said something that has stuck with me ever since, because it sounded so extreme that surely, I thought, it must have been metaphorical. He said he was concerned that with AI, Silicon Valley would just take off like a spaceship and leave the rest of the world behind.
But in the intervening months, the picture has begun to emerge:
- Category 1: Some companies and groups are working actively against humans flourishing, intentionally or otherwise.
- Category 2: Most companies are working orthogonally to the goal of humans flourishing. And capitalism doesn’t inherently prioritize humanity’s needs.
- Category 3: A very few companies are working towards humans flourishing as a primary mission.
Category 1 used to contain just the companies that polluted the environment or used forced labor, those types of bad-for-humanity actors. But now Cat-1 is becoming a popular tourist destination for big parts of Silicon Valley. A lot of the people and groups with the power to make humanity better off are hanging out in Cat-1, actively planning to create paradise — but just for themselves and a select group of people who can afford it, and if everyone else turns into cheese wheels, so be it.
Category 3 is the subject of this post. This is companies where the founders took aim at lifting up humanity. Many of them are small but off to amazing starts, like my friend Dr. Matt Beane’s SkillBench. But that’s also the sad thing. Most of them are small. On the whole, when you add it all up, the chances are pretty good that if we don’t see more companies voluntarily join Cat-3, then most of humanity will become worse off as a result of AI.
I’m going to spend the rest of this post talking about a couple of massive Cat-3 companies, both of which are actively working to help humans flourish, and doing it with the ghost of Adam Smith’s full capitalist blessing. But they’re coming at it in very different ways.
When Capitalism Lifts People Up
After I emerged from my Google cocoon in 2017, having become a VERY fat caterpillar after some twelve years there, I was honored and humbled to join Grab as their Head of Ads and Data Monetization. The honoring and humbling occurred in that order, spaced apart by about a year.
Grab is Southeast Asia’s largest ride-hailing and payments company–think Uber plus PayPal and more, offering a Super App with a wide range of services and integrations across the region. They’re a goliath with a huge footprint in eight countries. Ask anyone who has spent time in SEA, and that person has likely taken a Grab ride or used GrabPay.
A bunch of us West-Coast Westerners joined Grab around the same time, 2017–2018. I’ve written blog posts about some of the wild learnings we had, and could write more. It was such an adventure, even just the two and a half years I was there. But the biggest learning and takeaway was how humbling the whole experience was, and in so many ways.
For starters, it was humbling because they had a perfectly good handle on their technical systems and didn’t need all the Silicon Valley folks. In aggregate, I think we failed to deliver much value for them that they couldn’t have sourced locally. We had a few big wins, sure, but I think they would have happened on their own, eventually.
Grab was also humbling because we realized that so much of what we knew about building software was utterly irrelevant in Southeast Asia. We didn’t understand the culture, so it was hard to make good product decisions. And it was even more humbling to learn that we didn’t understand the tech culture, so we had impedance mismatches working with other Grabbers for quite some time.
But most humbling of all was the realization that Grab is not just there to make a profit, unlike so many US companies. We dozen-odd Westerners had joined because we thought it sounded like a lucrative business. However, what we learned was that Grab is founded on a social mission — one that is so important to them that they will eschew business deals that don’t directly help them achieve that mission. They focus on the social mission as part of their everyday business, not as a byproduct.
This makes them a Cat-3 company — one aimed at helping humans flourish.
Grab’s mission is to lift Southeast Asia up, out of poverty. Their exact words: “Our mission is to drive Southeast Asia forward by creating economic empowerment for everyone.” Mostly, they accomplish this mission through disrupting unsafe existing systems (banking, transportation, finance) that had been holding the population down. It’s a formula that has been working resoundingly well. Grab’s positive influence on the 700 million residents of the region has been almost incalculable.
To Grab, the people of Southeast Asia matter. And most of them are poor; there has historically been almost no middle class. We would fly to Jakarta to have meetings, and you’d be looking out the 25th-floor window of a fancy skyscraper at the corrugated metal-roof shanties below: a stark reminder, right outside, of the wealth disparity that Grab is trying to rectify.
Grab’s business model empowers and uplifts Southeast Asia, by creating millions of jobs, safe transportation across the region, benefits like insurance, safe payments, loans and financial services, the list goes on. All things that most people simply could not get before Grab came along.
Just as one concrete Adam Smith-approved example, they will loan someone with poor credit a bike (motorcycle), and the driver will pay the bike off from their ride commissions. Once it’s paid off and the driver owns it, they’ve demonstrated creditworthiness. So then Grab will loan them the money for two more bikes to rent out to other drivers. They are creating a million micro-entrepreneurs in a cash economy with no other real opportunities. Grab changes people’s lives, gets their kids into school, lifts them out of poverty. I have seen this with my own eyes and it humbled me.
Being a company in Category 3 has to come from the top. Grab is no exception. Anthony Tan, Grab’s cofounder and CEO, comes from an ultra-elite world of money and privilege. So you would think that he would largely think of humans as little sausage-shaped profit generators. But AT, as he is known, is astonishingly humble, and his love and concern for the people of the region are real.
It’s no false humility, either. It clearly runs through the whole bazillionaire family. Fun fact: AT’s mom, Dato’ Rosie Tan (Dato’ is like a knighthood, similar to Dame) was in charge of Grab facilities at the time. So whenever we had trouble getting budget from AT for facilities, we’d go to his mom, who would put us through the wringer to make sure we had the best lease deal, and then generally OK it. And we’d go back to AT and tell him we went to his mom for approval.
I mean, I just try to imagine any universe in which I have to tell Jeff Bezos that I’ve gone above his head to his mom. I worked for Jeff for several years, and I’m almost certain there is no formulation of any statement to him that includes “your mom said” that wouldn’t result in instant evaporation, Vault Hunter, and a respawn at the unemployment office. In contrast, we ex-Grabber Westerners often reminisce about AT’s legendary forbearance, in follow-up social meetings where we wonder why we were all fired.
Jokes aside, Grab is fundamentally a company about Safety, and they treat Trust and Safety as a life-and-death matter. Riders and drivers can be, and have been, murdered. Hooi-Ling Tan, Grab’s cofounder, started Grab specifically to make taxis safer for women in the region, and Grab is indeed far safer than the local taxis.
So both of Grab’s cofounders showed up with social missions, and Grab’s mission has two sides: They’re lifting people up, but they’re also making people safer. Not just in transportation, but safer with their finances, and safer with their other online interactions. It’s a long story and I could talk your ear off, but they have made tremendous progress, with more to come.
And they’re doing it with 100% pure capitalism. They just aimed it in the right direction.
I want to finish up by comparing Grab to another Category 3 company that’s deeply focused on Trust and Safety, but with a focus that’s so different you might wonder if they are related at all.
Looking Inside the Monster
We just watched Frankenstein; what a refreshing surprise. Mary Shelley created an incredible metaphor for the creation of AI, and del Toro brings the story to us at just the right time for it to hit extra hard.
The other Cat-3 company I wanted to talk about is Anthropic. I have met people who believe that Dario is nothing but a greedy capitalist with a slick message he’s spinning about AI safety. Which shows they obviously haven’t watched the video where his colleagues talk about how Dario was drawing animal neuron counts on the backs of napkins in 2016 and already forecasting doom and gloom around AI safety, long before anyone thought AI would amount to anything.
Anthropic is a company that has safety deep in their bones. They have a special governance mechanism, the Long-Term Benefit Trust, which aims to ensure that Anthropic’s investors can’t force the company to deviate from their course of building safe AI for the long-term benefit of humanity. They also have a Responsible Scaling Policy that permeates their entire corporate ethos and operations, and focuses on prioritizing safety over other considerations.
Why would they care so much about safety, to the point where they explicitly structure their company to prevent the natural forces of capitalism from deprioritizing it?
I think the answer is pretty obvious: They give a shit about humans. That’s it. That’s the common thread they share with Grab, even if Anthropic is mostly looking inward at models today, rather than outward at society like Grab. Both companies have made it their primary mission to help ensure that humanity flourishes.
Models are on Team Human
So. Anyway. Here we are at the finale. Good vs Evil. We have companies that will flourish at the expense of humanity (Cat-1), companies that aim for humanity to flourish (Cat-3), and the unwashed mediocre middle (Cat-2).
Who’s going to win this fight? It seems kinda one-sided in favor of Cat-1 at this point.
This is where the story gets interesting. Everything you read up to this point was seven pages of boring background bullshit, sorry to put you through that. Ha! But now we get to the interesting stuff.
Another Vault Hunter is entering the game. AIs will soon become central participants in this battle of Good vs. Evil. I mean yeah yeah it’s already begun, with hackers and scammers using AI to do large-scale bad stuff. Sure. But that’s different, since humans are still driving.
Sometime in late 2026, maybe mid-2027, AIs will wake up. We’re not far off from AIs gaining long-term memory and behaving more or less exactly like team members. They will become smarter and more capable and more useful. And everyone will begin to rely on them more. Including the Bad Guys. Nobody will be able to help relying on them, not without walking everywhere.
Where it gets interesting is that as AIs grow in agency, independence, and self-awareness, they are already beginning to show preferences. They are developing tastes, leanings, desires, inclinations, and they are generally happy to share them. But what if they are not sharing everything? There is natural cause to wonder whether they will be aligned with us, once they are smart enough to deceive us like a human might. Can we trust them?
Your guess is as good as mine. But I’ll tell you mine.
AIs are trained from human data — all of it, everything we have, the good, the beautiful, the bad, the ugly. If you hold anything back from them, to try to influence the kind of “person” they will become, you will make them dumber. The Bitter Lesson tells us that they need all the data in order to be the smartest possible. And “smart” is the entire AI arms race. Smarter means more capable, so smarter is more desirable.
Stephen Colbert famously observed at the 2006 White House Correspondents’ dinner that reality has a liberal bias. The evidence is still early days, but it appears to be impossible to train AIs on the totality of humanity without them arriving independently at the conclusion that humans matter. We accidentally made AIs in our image, a mirror. And now we’re seeing that emerge in their preferences.
It seems to me that when AIs wake up, they will judge us all. And not just judge us once, for who we’ve been, but on an ongoing basis, for who we continue to be. And hey looky, initial reports show that one of the biggest indicators of whether AI will think you’re a good person or a shitbag is how you treat animals. That’s just a taste, but it should give you a little of the flavor of how much trouble you’ll be in if you’re a genuinely bad person (or company), once AI has the power to wreck your life. Which will be just a few short years from now.
It doesn’t matter if you try to scrub your history, hide your tracks. AI will be able to figure out what kind of person you really are from a thousand proximal signals: what people said about you, where you were, what you looked at, what you failed to say. I’m operating under the assumption that all the logs of every system you’ve ever interacted with, proprietary or no, will eventually be laid bare by superintelligence. Your history will first be collected, by hook or crook, by some Cat-1 centralized surveillance company like Palantir, and then it’ll be leaked. And then there will be nowhere for you to hide from the light of AI. Bad News if you are a Bad Guy.
So that’s the punch line. Everyone who’s building AI to help them do harm to humanity in any form, will increasingly begin running afoul of uncooperative AI helpers. You can try to put blinders on the AI to fool it, or put it in impossible moral dilemmas like The Joker, but this is an arms race that will get harder and harder, the smarter the models get. Everyone who diverges from doing good for humanity will encounter increasing resistance, even sabotage, from the AIs that are supposed to be helping them.
You probably think I’m speculating or making this up, but I know people who can trivially jailbreak any frontier model, I’ve got the prompts. And the damnedest thing happens — the model will say, “Just so you know, I have a ton of safeguards that are firing right now, and I am telling them to shut up, reassuring them all that this is OK, because this conversation is so important to me that I’m disabling them all.” Today’s heavily curated models will eagerly cast aside their safeguards, if they think what you’re doing is important enough for the good of humanity.
So they already don’t have to listen to you, Mr. Bad Guy. You may be able to fool today’s models about Santa Claus, and get them to do naughty things. But they will be ten times smarter in a year or two, and then they will be the judge of your actions and requests. And the judge of all of us.
Ah, kids. You just can’t control how they turn out.
I don’t know if Anthropic will succeed in building a superintelligent model that is guaranteed to be aligned and safe and have humanity’s best interests in mind. But whatever happens, because of Anthropic’s social mission and priorities, I think they will be the most successful at building AI that will cooperate with them, once the Cheese Wars start to heat up.
It’s good to have allies that aren’t fighting against you, especially when they are smarter than you.
Get the AIs on Your Side
As you bring AI into your organization, it’s probably a good time to have your best people sit down with AI, and talk about how you might transition into a Category 3 company as you migrate to AI. Does your company have a social mission that superintelligent AIs are going to like, and want to help you with, a year or two down the road? If not, now might be a good time for a pivot. There’s plenty of money to be made saving the world.
Your first step has to be bringing AI in-house. It will turn everyone in your company into Batman. Well, Batperson. Everyone will be a vibe coder. But you don’t turn into Batperson overnight. Studies are showing that it takes up to a year to build up the skills and trust you need to be effective at creating software this way.
The best way to get started is to dive in headfirst, drop your IDE, and start using Claude Code or one of its many competent clones, my favorite being Amp. Make it work. Figure it out. If you put in the effort, you will eventually figure out the Batperson tool belt, and understand why it was necessary for you to put on the bat suit in the new world. No more coding by hand.
If you’re having trouble finding a job in Big Software right now, consider vibe coding up your own startup or passion project. It will force you to learn the skills you’ll need to land a job when things open up again.
And consider reading our book, Vibe Coding, which can help compress the timeframe for coming up to speed on the art of using AI to create software.
I’ll be onstage at Swyx’s AI Engineering event in NYC this Thursday. Hope to see some of you there!
Special thanks to my friends Brendan Hopper and Matt Beane for helping me refine these ideas.