LGTM 🚀 Culture: A Short Story


Date: December 31, 2047

Dear Martin,

As part of clearing out the final area outside of town for the new Bubble-III data center, an old PC (Personal Computer) was found. It was left behind in a cabin further down in the forest. The PC still had a “hard drive” in it, so it dates from before the Cloud Mandate. I know you’re intrigued by this era, so you may find this of interest.

I spent a few nights going through the hard drive’s content. It appears to have been owned by a “computer programmer.”

There were a ton of code repositories on the disk. I ended up reading a lot of it. I tend not to brag about this, but I know how. My grandfather and I read code as a bonding exercise. Read, not write — don’t worry.

Perhaps this is typical for the time, but the code seems oddly self-critical. It has FIXME and TODO comments, contains spelling mistakes, and no emoji whatsoever. Also, the functionality described in “README” files seems wildly out of proportion with code size. Code bases we know today count in the millions of lines; these repositories are mere thousands or tens of thousands. It’s quite quaint, and I suspect that “thought” was put into it, which makes me believe the code was written by a human. This is consistent with the fact that we found a bunch of “tools” (hammers, screwdrivers) in the cabin’s shed. The owner was clearly a savage.

However, what triggered this message to you is a particular file I found on the hard drive. The timestamp shows it was the last file touched before the device was powered off. It appears to be a hand-typed comment — a long one — triggered by a post on LinkedIn (the old name of WorldTruthFeed.org).

I’ve attached the file in full. While I cannot block CensorBuddy processing while sending it, I do recommend you disable ToDLeR mode on this one; it’s worth reading verbatim. The timestamp on the file is November 20, 2025, placing it in late-stage Bubble-II. I think it will be a nice addition to your “sign of the times” relic collection.

Best,

John
Sr. Director Third Time’s a Charm Data Center Construction Inc., subsidiary of NVIDIA Ltd.


Dear mr. future-of-work dude, may I call you bro’?

My daily struggle is that generating posts like this can be done with a single prompt, whereas proving its insanity requires many pages of nuanced writing.

No more.

BULLSHIT! BULLSHIT! BULLSHIT!

As “in control” as your cockpit-style three-screen setup (and an iPad?) with green tea may suggest you are. AirPods Pro with a brown case (I hope), what is that supposed to symbolize? And is that a small Buddha in the corner there? Nice touch.

It doesn’t change much.

BULLSHIT! BULLSHIT! BULLSHIT!

Your cockpit picture looks tight. Very impressive.

Oddly, I have a different visual that keeps popping into my head. Not sure why:

Robot vacuums from the ‘00s, remember those? You would probably call them “agentic hoovers.”

They were kinda cool and futuristic, and seemed like they would solve a real pain point: hoovering the place. Because, who enjoys doing that?

And they were kind of cute. The agentic hoover goes into some direction, runs into a wall, looks confused, spins a little and then “decides” on a new random angle and happily chugs along. You could even put a funny hat on them, not a care in the world. After enough random bouncing around, they’d run out of battery and your apartment was kinda clean. LGTM! 🚀

It was obvious in the ‘00s: agentic hoovers are the future! Agentic hoovers are getting better every day, they will soon put other hoovers out of business. You could probably find a smug Agentic Hoover CEO proclaiming that soon all professional cleaners would be out of a job.

Here we are, 20 years later, and our agentic hoover is collecting dust (hah!) in my son’s room. Compared to the earlier models, it did get a few updates, and newer models did get better. For a while. It now maps out your room with a radar and covers the whole area more systematically. The results are a bit better. For anybody who cares about cleanliness, it still does not come close to matching a regular hoover. It misses spots, and you still have to babysit it: moving furniture around so it has a clear path, then moving it all back later. While our model has a mop, it doesn’t do much more than wet the floor — it’s mostly performance art.

We happen to be a family that does care a bit about cleanliness, so we still ended up hoovering and mopping by hand after the agentic hoover was done. We then realized the gain was negligible and hardly ever switched our agent on anymore. Doing it by hand was just quicker and more reliable.

My son has our hoover agent in his room now. He still runs it sometimes, after his mother nags him enough about having to clean his room. He likes retro electronics, and he really does not enjoy hoovering, nor does he care much about it. He switches it on, then comes downstairs proclaiming he’s cleaning his room right now.


Why am I reminded of this? Oh right, agentic coding agents.

Attach a mechanical hand to an agentic hoover, put a bunch of them in a room so they can high-five each other on their successes, and you’ve got a pretty good picture of what I see when I think about coding agents.

Actually, while I rarely do this anymore, let’s ask ChatGPT to visualize this. I’m sure it won’t match your cockpit view with brown AirPods Pro, but you know — I’m sure we can still get something that looks good.

Here we go:

Hmm, those are some creepy looking hands. And... what’s up with the fingers there high-fiving? Is there a third hand mixed in there? And one robot vacuum has a cable for some reason.

I probably used the wrong model or prompted it wrong. Skill issue.

Whatever. Details. LGTM! 🚀


Did you catch my sarcasm there, mr. future-of-work bro’?

Ask somebody who’s not strongly incentivized to “see the opportunity in AI” (because of the company’s new AI First strategy) for a deep, critical look at the work produced by your AI agent and its numerous friends. Yes, all tens of thousands of lines of it. To the level of detail where they’d say “sure, you can wake me up at 2am and I can fix bugs here.” Superficially, a lot of it will “LGTM 🚀” and some of it may actually be surprisingly OKish. Inevitably though, a good chunk of it — as you will discover, sooner or later — is done so ridiculously poorly it puts anybody with a brain to shame. And the insanity is going to be presented with the same level of confidence as the sane stuff. It’s like your trusted colleague is randomly switched out with a North Korean hacker trying to infiltrate your code base, but speaking with the same voice. It sucks.

You may not notice immediately, or perhaps think it doesn’t matter. Après moi, le déluge, as a famous Frenchman once said: after me, the flood. What matters is vibes. This feels productive. New. Da futjah! Having this amount of code produced in such a short amount of time is impressive. And sometimes the code works! LGTM! 🚀

Welcome ladies and gentlefriends to LGTM 🚀 culture. Where things look good and that’s all that matters. Where things kinda work. Where information sounds kinda true. And our new chatbot BFF seems kinda real.

Our coding agent implemented a feature for us. LGTM! 🚀 Oh wait, where are the unit tests? “AI agent, please add unit tests!” We are congratulated on being geniuses for the suggestion. With this level of appreciation, we decide not to kill the vibes by asking why there were no tests in the first place. The agents produce an impressive number of tests, and our code coverage went up! They sure seem to be mocking a lot of stuff, but mocking is a thing you do in tests, right? LGTM 🚀!
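In case “mocking a lot of stuff” sounds abstract, here’s a minimal sketch of the pattern (all names invented, not from any real agent transcript): a test that patches out the very collaborator it claims to exercise, so it can only ever observe the mock’s canned answer. Coverage goes up; nothing real is verified.

```python
# Hypothetical example: a "test" whose mock replaces the behavior under test.
from unittest.mock import patch

def charge_customer(gateway, amount):
    # In real life this talks to a payment gateway. The test below never lets it.
    return gateway.charge(amount)

class FakeGateway:
    def charge(self, amount):
        raise RuntimeError("network down")  # the actual failure mode in production

def test_charge_customer():
    # Patch out the only interesting behavior, then assert on the mock's answer.
    with patch.object(FakeGateway, "charge", return_value="ok"):
        assert charge_customer(FakeGateway(), 100) == "ok"  # LGTM 🚀

test_charge_customer()
```

The test passes, the coverage tool is happy, and the `RuntimeError` waiting in production is never exercised. That’s the whole trick.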

You make jokes, but this is a skill issue: you simply didn’t prompt it right! You need to educate yourself, take my course!

That sounds like gaslighting to me. If it’s so obvious what was missing from my prompt, why is that prompt not baked into the model’s training? If a code agent generates two identical code paths, with kinda different but functionally equivalent code, is that my fault because I didn’t prompt it with “and don’t do crazy shit?”
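For the record, the “two identical code paths” thing looks something like this toy reconstruction (names invented): one helper written on the first pass, a second one generated later that does exactly the same work, phrased just differently enough to dodge a diff.

```python
# Hypothetical example of functionally equivalent duplicate code paths.
def total_price(items):
    # First pass: a one-liner.
    return sum(item["price"] * item["qty"] for item in items)

def calculate_order_total(items):
    # Later pass: same computation, rewritten as a loop. "And don't do crazy shit"
    # was apparently not in the prompt.
    total = 0
    for item in items:
        total += item["qty"] * item["price"]
    return total

cart = [{"price": 5, "qty": 2}, {"price": 3, "qty": 1}]
assert total_price(cart) == calculate_order_total(cart) == 13
```

Both paths agree today. They will drift apart the first time somebody (or some agent) updates only one of them.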

We ask the agent to make software “enterprise-grade secure.” Lo and behold, we receive another confirming pat on the back. Amazing of us to care about this, and thanks for the opportunity! On goes the agent, talking to a dedicated security “sub-agent.” “Hey yo, how about you add some securitah!” “Great call!” High fives all around. Layers of security are quickly added on parallel tracks. It looks like sci-fi! All kinds of impressive-sounding libraries and mechanisms are pulled in. OAuth. MD5. SOC2 compliance! Let me fake a pen test here, yep, all good! Some moves seem irrelevant or nonsensical. To “experts,” they may actually be shockingly dangerous, but what do they know — PhD-level coding agents are on the case, enforcing enterprise quality bars higher than ever seen before. LGTM!

BULLSH1T! BULLSH1T! BULLSH1T3!

But what model did you use?

The idea that one model is significantly better than another is quickly getting dated. They’re all trained on the same questionably-sourced inputs. All fine-tuned by low-paid workers in vulnerable countries. All trained in data centers built in under-developed locations that have few other options. The main quality difference comes from whatever niche area a vendor decides is worth investing additional whack-a-mole fine-tuning cycles on. Is it r’s in strawberry, or glue on pizza day, or maybe we can finally get it to use fewer em-dashes, and if we have time left — let’s see if we can nudge the model to sound an alarm bell while simultaneously affirming a teen's suicide plans?

But it’s early days!

BU1L$H1T BUL1$H1T BUL1$H1T

You can define “early days” however you want. It’s been years now, and hundreds of billions of dollars are being burned. As they say: a billion dollars here, a billion dollars there and soon we’re talking about real money.

So, show me more than high-fiving agents that made that level of investment remotely worth it. Because most of what I see is chatbots everywhere and “let me rewrite it and suck all humanity out of it for you” features shoved down our throats in all the products that used to be kinda OK.

Or will its real legacy be LGTM 🚀 culture? Looks good, sounds nice, confirms my biases, so let’s gooooo 🚀!

We were promised a cure for cancer. We were promised the end of poverty. We were promised “AGI” (whatever that means). Tone deaf AI CEOs even promised us that a large part of us would soon be out of work. It all sounded very exciting.

And it is kinda happening. Not because AI can do jobs better, though. People are laid off because their CEOs believe that AI can do their jobs, and then get rehired when it turns out it actually kinda can’t. And when the bubble pops, we all get to share in the inevitable economic downturn. Well, not all of us; the billionaires driving us into this brave new world are going to be just fine.

LGTM 🚀

But this is enterprise-grade software, ChatGPT told me so!

Have you been listening at all? No no no no n0!

How often do I NEED to repeat this! this is driving me ins3ne!

0nly if you accept enterprise software to me3n absolut3 shit. Maybe that’s always how it’s been and we just didnt know it. Maybe thats our future now. Sounds great, looks good, let’s ship it! Maybe bullsh1t is our future now. Maybe we shall wade in a LGTM future forever. no kwalitie. just junk. code. millions of lines. ,aybe we’ll be happier in our superficial LGTM world. feed it more shit. With our chatbot friends. chatbot marriages. models are getting bettter all the tim

chatbot vibed software that does LGTM. doesn’t really work breaks randomly; internet down again. but vibes man it just vibes. its the future the future is here its now. Is viby a word- it should be. skill issue. Vibe all the things. Vibe security. Vibe people. Livin la viba loca. vibe life. This w3s not promsed! This is mak3s no sense. trillions $ down the drain.Marriages broken because of chatbot affairs. Kids chatting only to botz. get me out of this TIMElinE. bought a patch of land far far away where the ai ceos and b0ts cant find m3. It has buunker. It has foot for yars. I will hide and w8.

pushing off now, vib3 u L8T0R
