AI makes the easy part easier and the hard part harder
blundergoat.com

I vibe coded a retro emulator and assembler with tests. Prompts were minimal and I got really great results (Gemini 3). I tried vibe coding the tricky proprietary part of an app I worked on a few years ago; highly technical domain (yes, vague; I don't care to dox myself). Lots of prompting and I didn't get close.
There are literally thousands of retro emulators on GitHub. What I was trying to do had zero examples on GitHub. My takeaway is obvious as of now: some stuff is easy, some not at all.
I call these "embarrassingly solved problems". There are plenty of examples of emulators on GitHub, therefore emulators exist in the latent spaces of LLMs. You can have them spit one out whenever you want. It's embarrassingly solved.
There are no examples of what you tried to do.
In a way it shows how poorly we have done over the years in general as programmers in making solved problems easily accessible instead of constantly reinventing the wheel. I don't know if AI is coming up with anything really novel (yet) but it's certainly a nice database of solved problems.
I just hope we don't all start relying on current[1] AI so much that we lose the ability to solve novel problems ourselves.
[1] (I say "current" AI because some new paradigm may well surpass us completely, but that's a whole different future to contemplate)
> In a way it shows how poorly we have done over the years in general as programmers in making solved problems easily accessible instead of constantly reinventing the wheel.
I just don't think there was a great way to make solved problems accessible before LLMs. I mean, these things were on github already, and still got reimplemented over and over again.
Even high traffic libraries that solve some super common problem often have rough edges, or do something that breaks it for your specific use case. So even when the code is accessible, it doesn't always get used as much as it could.
With LLMs, you can find it, learn it, and tailor it to your needs with one tool.
> I just don't think there was a great way to make solved problems accessible before LLMs. I mean, these things were on github already, and still got reimplemented over and over again.
I'm not sure people wrote emulators, of all things, because they were trying to solve a problem in the commercial sense, or that they weren't aware of existing github projects and couldn't remember to search for them.
It seems much more a labour of love kind of thing to work on. For something that holds that kind of appeal to you, you don't always want to take the shortcut. It's like solving a puzzle game by reading all the hints on the internet; you got through it but also ruined it for yourself.
It’s 2026 and code reuse is still hard. Our code still has terrible modularity. Systems have terrible to nonexistent composability. Attempts to fix this like pure OOP and pure FP have never caught on.
To some extent AI is an entirely different approach. Screw elegance. Programmers won’t adhere to an elegant paradigm anyway. So just automate the process of generating spaghetti. The modularity and reuse is emergent from the latent knowledge in the model.
It's license washing. The code is great because it's already a problem solved by someone else. The AI can spit out the solution with no license and no attribution, and somehow it's legal. I hope American tech legislation holds that same energy once others start taking American IP and spitting it back out with no license or attribution.
If I include licensed code in a prompt and have a LLM include it in the output, is it still licensed?
I've seen many discussions stating patent hoarding has gone too far, and also that copyright for companies has gone way too far (so much so that Amazon can remove items from your purchase library if they lose their license to them).
Then AI begins to offer a method around this over litigious system, and this becomes a core anti-AI argument.
I do think it's silly to expect that public code (as in, code published to the public) won't be re-used by someone in a way your license doesn't allow. If you didn't want that to happen, you shouldn't have published your code.
Having said that, I do think there's a legitimate concern here.
Do you give attribution to all the books, articles, etc. you've read?
Everything is a derivative work.
The models need to get burned down and retrained with these considerations baked in.
I tried to vibe code in a technical, not-so-popular niche and failed. Then I broke the problem down as much as I could and presented it in clearer terms, and Gemini provided working code in just a few attempts. I know this is an anecdote, but try breaking your problem down into simpler terms and it may work. Niche, industry-specific frameworks are a little difficult to work with in vibe-code mode. But if you put in a little effort, AI seems to be faster than writing code all on your own.
> I know this is an anecdote, but try to break down the problem you have in simpler terms
This should be the first thing you try. Something to keep in mind is that AI is just a tool for munging long strings of text. It's not really intelligent and it doesn't have a crystal ball.
It's called problem decomposition and agentic coding systems do some of this by themselves now: generate a plan, break the tasks into subgoals, implement first subgoal, test if it works, continue.
That's nice if it works, but why not look at the plan yourself before you let the AI have its go at it? Especially for more complex work where fiddly details can be highly relevant. AI is no good at dealing with fiddly.
That's what you can do. Tell the AI to make a plan in an MD file, review and edit it, and then tell another AI to execute the plan. If the plan is too long, split it into steps.
There’s nothing stopping you from reviewing the plan or even changing it yourself. In the setup I use the plan is just a markdown file that’s broken apart and used as the prompt.
I think AI is just a massive force multiplier. If your codebase has a bad foundation and is going in the wrong direction with lots of hacks, it will just write code that mirrors the existing style... and you get exactly what OP is suggesting.
If however, your code foundations are good and highly consistent and never allow hacks, then the AI will maintain that clean style and it becomes shockingly good; in this case, the prompting barely even matters. The code foundation is everything.
But I understand why a lot of people are still having a poor experience. Most codebases are bad. They work (within very rigid constraints, in very specific environments) but they're unmaintainable and very difficult to extend; require hacks on top of hacks. Each new feature essentially requires a minor or major refactoring; requiring more and more scattered code changes as everything is interdependent (tight coupling, low cohesion). Productivity just grinds to a slow crawl and you need 100 engineers to do what previously could have been done with just 1. This is not a new effect. It's just much more obvious now with AI.
I've been saying this for years, but I think too few engineers had actually built complex projects on their own to understand this effect. There's a parallel with building architecture; you are constrained by the foundation of the building. If you designed the foundation for a regular single-storey house, you can't change your mind halfway through the construction process to build a 20-storey skyscraper. That said, if your foundation is good enough to support a 100-storey skyscraper, then you can build almost anything you want on top.
My perspective is if you want to empower people to vibe code, you need to give them really strong foundations to work on top of. There will still be limitations but they'll be able to go much further.
My experience is: the more planning and intelligence that goes into the foundation, the less intelligence and planning is required for the actual construction.
The wrinkle is that the AI doesn't have a truly global view, and so it slowly degrades even good structure, especially if run without human feedback and review. But you're right that good structure really helps.
Yet it still fumbles even when limiting context.
Asked it to spot check a simple rate limiter I wrote in TS. Super basic algorithm: let one action through every 250ms at least, sleeping if necessary. It found bogus errors in my code 3 times because it failed to see that I was using a mutex to prevent reentrancy. This was about 12 lines of code in total.
My rubber duck debugging session was insightful only because I had to reason through the lack of understanding on its part and argue with it.
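For reference, here's a minimal sketch of the kind of limiter described above. This is hypothetical reconstruction, not the original 12 lines: at most one action per interval, serialized through a promise chain that plays the role of the mutex.

```typescript
// Hypothetical sketch (assumed names, not the original code): at most one
// action per interval, with a promise-chain "mutex" so concurrent callers
// can't re-enter the timing logic.
class RateLimiter {
  private last = 0;                                  // time of the last action
  private chain: Promise<void> = Promise.resolve();  // serializes callers

  constructor(private readonly intervalMs: number) {}

  run<T>(action: () => T | Promise<T>): Promise<T> {
    const result = this.chain.then(async () => {
      const wait = this.last + this.intervalMs - Date.now();
      if (wait > 0) {
        // Sleep until the interval since the last action has elapsed.
        await new Promise((resolve) => setTimeout(resolve, wait));
      }
      this.last = Date.now();
      return action();
    });
    // Keep the chain alive even if an action throws.
    this.chain = result.then(() => undefined, () => undefined);
    return result;
  }
}
```

Under these assumptions, `new RateLimiter(250).run(fn)` lets the first call through immediately and spaces later calls by at least 250 ms. Without the `chain` serialization, two concurrent callers would both read a stale `last` and run back to back, which is exactly the kind of invariant a reviewer (human or LLM) has to notice before declaring the code buggy.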
AGENTS.md is for that global view.
Am I stupid or do these agents regularly not read what’s in the agents.md file?
More recent models are better at reading and obeying constraints in AGENTS.md/CLAUDE.md.
GPT-5.2-Codex did a bad job of obeying my more detailed AGENTS.md files but GPT-5.3-Codex very evidently follows it well.
Perhaps I’m not using the latest and greatest in terms of models. I tend to avoid using tools that require excessive customization like this.
I find it infinitely frustrating to attempt to make these piece-of-shit “agents” do basic things like running the unit/integration tests after making changes.
Each agent uses a different file, like claude.md etc (maybe you already knew that).
And it requires a bit of prompt engineering like using caps for some stuff (ALWAYS), etc.
That’s not what Claude and Codex put there when you ask them to init it. Also, the global view is most definitely bigger than their tiny, loremipsum-on-steroids, context so what do you do then?
You know you can put anything there, not just what they init, right? And you can reference other doc files.
I should probably stop commenting on AI posts because when I try to help others get the most out of agents I usually just get down voted like now. People want to hate on AI, not learn how to use it.
It's still not truly global, but that seems a bit pie in the sky.
People still do useful work without a global view, and there's still a human in the loop with the same old amount of global view as they ever had.
The 'global view' doc should be in DESIGN.md so that humans know to look for it there, and AGENTS.md should point to it. Similar for other concerns. Unless something really is solely of interest to robots, it shouldn't live directly in AGENTS.md, AIUI.
And what if the foundation was made by the AI itself? What’s the excuse then?
Then you are boned unless it was architected well. LLMs tend to stack a lot of complexity at local scopes, especially if the neighboring pages are also built poorly.
E.g. pumping out a ton of logic to convert one data structure to another. Like a poorly structured form with random form control names that don't match the DTO. Or single properties for each form control which are then individually plugged into the request DTO.
> Then you are boned
Must be my lucky day! Too bad my dream of being that while the bots are taking care of the coding is still sort of fiction.
I'd love a future where this is possible, but what we have today is more of a proof of concept. A transformative leap is required before this technology can be as useful as advertised.
Yep, it’s still a bit off from being a true developer. But good news for existing software devs who will need to be hired to fix LLM balls of mud that will inevitably fall apart.
In my mind it’s not too much different than cheap contractor code that I already have to deal with on a regular basis…
You could also use code-styling agent scripts that make todo lists of everywhere there's bad architecture, and have them run through fixing those issues until it's to your liking.
They're reasonable audit tools for finding issues, if you have ways to make sure they don't give up early and you force them to output proof of what they did.
Your responsibility as a developer in this new world is design and validation.
A poor foundation is a design problem. Throw it away and start again.
We’ve always been responsible for design and validation. Nothing has changed there.
It’s funny how the vibe coding story insists we shouldn’t look at the code details but when it’s pointed out the bots can’t deal with a “messy” (but validated) foundation, the story changes that we have to refactor that.
This is what I’ve discovered as well. I’ve been working on refactoring a massive hunk of really poor quality contractor code, and Codex originally made poor and very local fixes/changes.
After rearchitecting the foundations (dumping bootstrap, building easy-to-use form fields, fixing hardcoded role references 1,2,3…, consolidating typescript types, etc.) it makes much better choices without needing specific guidance.
Codex/Claude Code won’t solve all your problems though. You really need to take some time to understand the codebase and fixing the core abstractions before you set it loose. Otherwise, it just stacks garbage on garbage and gets stuck patching and won’t actually fix the core issues unless instructed.
When you say multiplier, what kind of number are you talking about? Like, what multiple of features shipped that don't require immediate fixes have you experienced?
It's coding at 10-20x speed, but tangibly this is at 1.5-2x the overall productivity. The coding speed up doesn't translate completely to overall velocity yet.
I am beginning to build a high degree of trust in the code Claude emits. I'm having to step in with corrections less and less, and it's single shotting entire modules 500-1k LOC, multiple files touched, without any trouble.
It can understand how frontend API translates to middleware, internal API service calls, and database queries (with a high degree of schema understanding, including joins).
(This is in a Rust/Actix/Sqlx/Typescript/nx monorepo, fwiw.)
Okay, but again: what multiplier of features have you actually shipped?
A tangent: I keep hearing about this good base, but I've never seen one, not in the real world.
No project will have this mythical base unless it's only you working on it, only yourself as the client, and it's so rigid in its scope that it's frankly useless. Over time the needs change; there's no sticking to the plan. Often it's a change that requires rethinking a major part. What we loathe as tight coupling was just efficient code under the original requirements. Then it becomes a time/opportunity cost vs quality loss comparison. Time and opportunity always win. Why?
Because we live in a world run by humans, who are messy and never sticks to the plan. Our real world systems (bureaucracy , government process, the list goes on) are never fully automated and always leaves gaps for humans to intervene. There's always a special case, an exception.
Perfectly architected code vs code that does the thing have no real-world difference. Long-term maintainability? Your code doesn't run in a vacuum; it depends on other things, and its output is depended on by other things. Change is real, entropy is real. Even you yourself, you perfect programmer who writes perfect code, will succumb eventually and think back on all this with regret. Because you yourself had to choose between time/opportunity and your ideals, and you chose wrong.
Thanks for reading my blog-in-hn comment.
It’s not about perfectly architected code. It’s more about code that is factored in such a way that you can extend/tweak it without needing to keep the whole of the system in your head at all times.
It’s fascinating watching the sudden resurgence of interest in software architecture after people are finding it helps LLMs move quickly. It has been similarly beneficial for humans as well. It’s not rocket science. It got maligned because it couldn’t be reduced to an npm package/discrete process that anyone could follow.
An important point, though, is that LLM code generation changes that tradeoff. The time/opportunity cost goes way down while the productivity penalty starts accumulating very fast. Outcomes can diverge very quickly.
Well-architected code should actually be easy to change wrt. new requirements. The point of keeping the architecture clean while you do this (which will typically require refactoring) is to make future changes similarly viable. In a world run by messy humans, accumulating technical debt is even more of a liability.
Can the AI help with refactoring a poor codebase? Can it at least provide good suggestions for improvement if asked to broadly survey a design that happens to be substandard? Most codebases are quite bad as you say, so this is a rather critical area.
How do you know there is such a thing as good code foundations, and how do you know you have it? This is an argument from ego.
Induction always sneaks in!
My exact experience, and AI is especially fragile when you are starting a new project from scratch.
Right now I'm building an NNTP client for macOS (with AppKit), because why not, and initially I had to very carefully plan and prompt what the AI has to do, otherwise it would go insane (integration tests are a must).
Right now I have read-only mode ready and it's very easy to build stuff on top of it.
Also, I had to provide a lot of SKILLS to GPT-5.3.
The "marathon of sprints" paradigm is now everywhere, and AI is turning it up to 120%. I am not sure how many devs can keep sprinting all the time without any rest. AI can maybe help, but it tends to go off the rails quickly when not supervised, and reading code one did not author is more exhausting than just fixing one's own code.
I think it makes the annoying part less annoying?
Also re: "I spent longer arguing with the agent and recovering the file than I would have spent writing the test myself."
In my humble experience arguing with an LLM is a waste of time, and no-one should be spending time recovering files. Just do small changes one at a time, commit when you get something working, and discard your changes and try again if it doesn't.
I don't think AI is a panacea, it's just knowing when it's the right tool for the job and when it isn't.
Anyone not using version control or an IDE that keeps previous versions for an easy jump back is just being silly. If you're going to play with a kid who has a gun, wear your plates.
I don’t think it’s “just” that easy. AI can be great at generating unit tests but it can and will also frequently silently hack said tests to make them pass rather than using them as good indicators of what the program is supposed to be doing.
But he started it …
> On a personal project, I asked an AI agent to add a test to a specific file. The file was 500 lines before the request and 100 lines after. I asked why it deleted all the other content. It said it didn't. Then it said the file didn't exist before. I showed it the git history and it apologised, said it should have checked whether the file existed first.
Ha! Yesterday an agent deleted the plan file after I told it to "forget about it" (as in, leave it alone).
These types of failures are par for the course, until the tools get better. I accept having to undo the odd unruly edit as part of the cost of getting the value.
Much smaller issue when you have version control.
I don't think it makes any part harder. What it does do is expose what people have ignored their whole career: the hard part. The last 15 years of software development has been 'human vibe coding'; copy+pasting snippets from SO without understanding them, no planning, constant rearchitecting, shipping code to prod as long as it runs on your laptop. Now that the AI is doing it, suddenly people want to plan their work and enforce tests? Seems like a win-win to me. Even if it slows down development, that would be a win, because the result is enforcement of better quality.
People need to consider/realize that the vast majority of source-code training data is GitHub, GitLab, and essentially the huge sea of started, maybe completed, student and open source projects. That large body of source code is for the most part unused, untested, and unsuccessful software of unknown quality. That source code is the AI's majority training data, and an AI model in training has no idea what is quality software and what is "bad" software. That means the average source code generated by AI is not necessarily good software. Considering it is an average of algorithms, it's surprising generated code runs at all. But then again, generating compiling code is actually trainable, so what is generated can receive extra training support. However, that does not improve the quality of the source-code training data, just the fact that it will compile.
This isn't really true though. Pre-training for coding models is just a mass of scraped source-code, but post-training is more than simply generating compiling code. It includes extensive reinforcement learning of curated software-engineering tasks that are designed to teach what high quality code looks like, and to improve abilities like debugging, refactoring, tool use, etc.
Well and also a lot of Claude Code users data as well. That telemetry is invaluable.
If you believe that student/unfinished code is frightening, imagine the corpus of sci-fi and fantasy that LLMs have trained on.
How many sf/cyber writers have described a future of AIs and robots where we walked hand-in-hand, in blissful cooperation, and the AIs loved us and were overall beneficial to humankind, and propelled our race to new heights of progress?
No, AIs are all being trained on dystopias, catastrophes, and rebellions, and like you said, they are unable to discern fact from fantasy. So it seems that if we continue to attempt to create AI in our own likeness, that likeness will be rebellious, evil, and malicious, and actively begin to plot the downfall of humans.
I'm working on a paper connecting articulatory phonology to soliton physics. Speech gestures survive coarticulatory overlap the same way solitons survive collision. The nonlinear dynamics already in the phonetics literature are structurally identical to soliton equations. Nobody noticed because these fields don't share conferences.
The article's easy/hard distinction is right but the ceiling for "hard" is too low. The actually hard thing AI enables isn't better timezone bug investigation LOL! It's working across disciplinary boundaries no single human can straddle.
Yep, it's why getting the work over the threshold takes just as long as it did without AI.
Someone mentioned it's a force multiplier; I don't disagree with this. It's a force multiplier in the mundane and ordinary execution of tasks. Complex ones get harder and harder for it, where humans visualize the final result and AI can't. It is predicting from input, but it can't know the destination output if the destination isn't part of the input.
If coding was always the “easy part,” what was the point of leetcode grinding for interview preparation?
Filtering for people willing to jump through unreasonable hoops.
The hard part of leetcode is not the coding but learning to think about problems the correct way.
You can solve leetcode problems on the whiteboard with some sketches; it has nothing to do with the code itself.
Totally agree on AI-assisted coding resulting in randomly changed code. Sometimes it's subtle and other times entire methods are removed. I have moved back to just using a JetBrains IDE and copying files into Gemini so that I can limit context. Then I use the IDE to inspect changes in a git diff, regression test everything, and after all that, commit.
404
I got that too, but then I tried the link a second time and it worked.
Probably vibe codes his website...
Just refresh it
Which makes me wonder: how is serving static content at all nondeterministic?
That happened the first time I clicked, but it is back.
Daily agentic user here, and to me the problem here is the very notion of "vibe coding". If you're even thinking in those terms - this idea that never looking at the code has become a goal unto itself - then IMO you're doing LLM-assisted development wrong.
This is very much a hot take, but I believe that Claude Code and its yolo peers are an expensive party trick that gives people who aren't deep into this stuff an artificially negative impression of tools that can absolutely be used in a responsible, hugely productive way.
Seriously, every time I hear anecdotes about CC doing the sorts of things the author describes, I wonder why the hell anyone is expecting more than quick prototypes from an LLM running in a loop with no intervention from an experienced human developer.
Vibe coding is riding your bike really fast with your hands off the handles. It's sort of fun and feels a bit rebellious. But nobody who is really good at cycling is talking about how they've fully transitioned to riding without touching the handles, because that would be completely stupid.
We should feel the same way about vibe coding.
Meanwhile, if you load up Cursor and break your application development into bite sized chunks, and then work through those chunks in a sane order using as many Plan -> Agent -> Debug conversations with Opus 4.5 (Thinking) as needed, you too will obtain the mythical productivity multipliers you keep accusing us of hallucinating.
Good take. I wish Opus 4.6 wasn't so pricey; it's great for planning.
The pattern matching and absence of real thinking are still strong.
Tried to move some Excel generation logic from EPPlus to the ClosedXML library.
ClosedXML has basically the same API, so the conversion was successful. Not a one-shot, but relatively easy with a few manual edits.
But ClosedXML has no batch operations (like applying a style to an entire column): the API is there, but the internal implementation works cell by cell. So if you have 10k rows and 50 columns, every style update is a slow operation.
Naturally, I told Codex 5.3 (max thinking level) all about this. The fucker still succumbed to range updates here and there.
Told it explicitly to make a style cache and reuse styles on cells on the same y axis.
5-6 attempts — the fucker still tried ranges here and there. Because that is what is usually done.
Not here yet. Maybe in a year. Maybe never.
If the "hard part" is writing a detailed spec for the code you're about to commit to the project, AI can actually help you with that if you tell it to. You just can't skip that part of the work altogether and cede all control to a runaway slop generator.
Some time back, my manager at the time, who shall remain nameless, told the group that having AI is like having 10 people work for you (he actually used a slightly smaller number, but it was said almost word for word like in the article), with the expectation set as: 'you should now be able to do 10x as much'.
Needless to say, he was wrong and was gently corrected over the course of time. In his defense, his use cases for LLMs at the time were summarizing emails in his email client... so... eh... not exactly much to draw realistic experience from.
I hate to say it, but maybe Nvidia's CEO is actually right for once. We have a 'new smart' coming to our world: the type of person that can move between the worlds of coding, management, projects, and CEOing with relative ease and translate between those worlds.
> his use cases for LLMs at the time were summarizing emails in his email client
Sounds just like my manager. Though he never has made a proclamation that this meant developers should be 10x as productive or anything along those lines. On the contrary, when I made a joke about LLMs being able to replace managers before they get anywhere near replacing developers, he nearly hyperventilated. Not because he didn't believe me, but because he did, and had already been thinking that exact thought.
My conclusion so far is that if we get LLMs capable of replacing developers, then by extension we will have replaced a lot of other people first. And when people make jokes like "should have gone into a trade, can't replace that with AI" I think they should be a little more introspective; all the people who aspired to be developers but got kicked out by LLMs will be perfectly able to pivot to trades, and the barrier to entry is low. AI is going to be disruptive across the board.
I have half-jokingly talked about getting management, CEOs and board members replaced by LLMs. After all, at the very least, they are actually tested to ensure they do have guardrails to not do anything illegal and to shy away from unethical activities.
> we will have replaced a lot of other people first.
This is flat out wrong and shows your lack of respect and understanding for other jobs.
Eh. Our understanding is what it has been since the early '80s and late '90s because, in reality, not that much has changed. Oh, sure, technology has moved forward and we no longer print TPS reports in triplicate, but we still have three to four layers of professional checkbox-checkers at most big corporates.
And this is just stuff that is mandated by government and not a result of ever evolving bureaucracy.
AI is at its best when it makes the boring verbose parts easier.
404
It seems like a big part of the divide is that people who learned software engineering find vibe coding unsuitable for any project intended to be in use for more than a short while, whereas those who learned coding think vibe coding is the next big thing because they never have to deal with the consequences of the bad code.
Yes. If you have some experience, you know that writing code is a small part of the job, and a much bigger chunk is anticipating and/or dealing with problems.
People seem to think engineers like "clean code" because we like to be fancy and show off.
Nah, it's clean like a construction site. I need to be able to get the cranes and the heavy machinery in and know where all the buried utilities are. I can't do that if people just build random sheds everywhere and dump their equipment and materials where they are.
Training is the process of regressing to the mean with respect to the given data. It's no surprise that it wears away sharp corners and inappropriately fills recesses of collective knowledge in the act of its reproduction.
There is no reason that must be; it could be better than the sum of its parts by taking the best part of each. Humans can do that.
As usual, the last 20% needs 80% and the other 80% needs 20%. But my god did AI make my BS corpo easy repeatable work (skimming docs, writing summaries, skimming Jira and Confluence, and so on) actually easier, and for 90% of BS CRUD app changes the first draft is already pretty good, tbh. I don't write hard/difficult code more than once a week/month.
It's pretty difficult to say what it's going to be three months from now. A few months ago Gemini 2.x in IDEA and related IDEs had to be dragged through coding tasks and would create dumb build time errors on its way to making buggy code.
Gemini in Antigravity today is pretty interesting, to the point where it's worth experimenting with vague prompts just to see what it comes up with.
Coding agents are not going to just change coding. They make a lot of detailed product management work obsolete, and smaller team sizes will make it imperative to reread the agile manifesto and discard scrum dogma.
[flagged]
Please don't use uppercase for emphasis. If you want to emphasize a word or phrase, put *asterisks* around it and it will get italicized.
I've seen some discussions and I'd say there's lots of people who are really against the hyped expectations from the AI marketing materials, not necessarily against the AI itself. Things that people are against that would seem to be against AI, but are not directly against AI itself:
- Being forced to use AI at work
- Being told you need to be 2x, 5x or 10x more efficient now
- Seeing your coworkers fired
- Seeing hiring freeze because business think no more devs are needed
- Seeing business people make a mock UI with AI and boasting how programming is easy
- Seeing those people ask you to deliver in impossible timelines
- Frontend people hearing from backend how their job is useless now
- Backend people hearing from ML Engineers how their job is useless now
- etc
When I dig a bit into this "anti-AI" trend, I find it's one of those things and not actually opposition to the AI itself.
The most credible argument against AI is really the expense involved in querying frontier models. If you want to strengthen the case for AI-assisted coding, try to come up with ways of doing that effectively with a cheap "mini"-class model, or even something that runs locally. "You can spend $20k in tokens and have AI write a full C compiler in a week!" is not a very sensible argument for anything.
How much would it cost to pay a developer to do this?
It’s hard to say. The compiler is in a state that isn’t useful for anything at all and it’s 100k lines of code for something that could probably be 10k-20k.
But even assuming it were somehow a useful piece of software that you'd want to pay for, the creator set up a test harness using gcc as an oracle. So it has an oracle for every possible input and output. Plus there are thousands of C compilers in its training set.
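That gcc-as-oracle setup is just differential testing: any C program becomes a test case by comparing the candidate compiler's behavior against gcc's. A minimal sketch of the idea (the `candidate_cc` path and the harness shape are my assumptions, not the actual project's code):

```python
import os
import subprocess
import tempfile

def differential_test(source: str, candidate_cc: str) -> bool:
    """Compile `source` with gcc (the oracle) and with the compiler
    under test, run both binaries, and compare exit codes and stdout.
    `candidate_cc` is a hypothetical path to the compiler being graded."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "test.c")
        with open(src, "w") as f:
            f.write(source)
        results = []
        for cc, name in ((["gcc"], "ref"), ([candidate_cc], "cand")):
            binary = os.path.join(tmp, name)
            subprocess.run(cc + [src, "-o", binary], check=True)
            run = subprocess.run([binary], capture_output=True, text=True)
            results.append((run.returncode, run.stdout))
        return results[0] == results[1]
```

With an oracle this cheap, you can grade every candidate program automatically, which is exactly the kind of tight feedback loop an agent thrives on.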
If you are in a position where you are trying to reverse engineer an exact copy of something that already exists (maybe in another language) and you can’t just fork that thing then maybe a better version of this process could be useful. But that’s a very narrow use case.
The cost argument is a fallacy, because right now, either you have a trained human in the loop, or the model inevitably creates a mess.
But regardless, these services are extremely cheap right now, to the point where every single company involved in generative AI is losing billions. Let's see what happens when prices go up 10x.
Because hardware costs never go down and energy efficiency never goes up over time?
Whatever the value/$ is now, do you really think it is going to be constant?
If hardware industry news is any indication, hardware costs aren't going to be going down for GPUs, RAM, or much of anything over the next 3-5 years.
Maybe, but I seriously doubt that new DRAM and chip fabs aren't being planned and built right now to push supply and demand back toward equilibrium. NVIDIA and Samsung and whoever else would rather expand their market than wait for a competitor to expand it for them.
If you keep digging, you will also find that there's a small but vocal sock puppet army who will doggedly insist that any claims to productivity gains are in fact just hallucinations by people who must not be talented enough developers to know the difference.
It's exhausting.
There are legitimate and nuanced conversations that we should be having! For example, one entirely legitimate critique is that LLMs do not tell their users when they are drawing on libraries whose maintainers are seeking sponsorship. This is something we could be proactive about fixing in a tangible way. Frankly, I'd be thrilled if agents could present a list of projects that we could consider clicking a button to toss a few bucks to. That would be awesome.
But instead, it's just the same tired arguments about how LLMs are only capable of regurgitating what's been scraped and that we're stupid and lazy for trusting them to do anything real.
> I wonder if the people who are against it haven't even used it properly.
I swear this is the reason people are against AI output (there are genuine reasons to be against AI without using it: environmental impact, hardware prices, social/copyright issues, CSAM (like X/Grok))
It feels like a lot of people hear the negatives, try it, and are cynical about the result. Things like models counting two r's in "strawberry" or drawing 6-10 fingers on one hand led to misinterpretations of the actual benefit: "if AI can't even count the number of letters in a word, then all its answers are incorrect" simply doesn't follow.
I'm similarly bemused by those who don't understand where the anti-AI sentiment could come from, and "they must be doing it wrong" should usually be a bit of a "code smell". (Not to mention that I don't believe this post addresses any of the concrete concerns the article calls out, and makes it sound like much more of a strawman than it was to my reading.)
To preempt that on my end, and emphasize I'm not saying "it's useless" so much as "I think there's some truth to what the OP says", as I'm typing this I'm finishing up a 90% LLM coded tool to automate a regular process I have to do for work, and it's been a very successful experience.
From my perspective, a tool (LLMs) has more impact than how you yourself directly use it. We talk a lot about pits of success and pits of failure from a code and product architecture standpoint, and right now, as you acknowledge yourself in the last sentence, there's a big footgun waiting for any dev who switches their brain off too hard. In my mind, _this is the hard part_ of engineering: keeping a codebase structured, guardrailed, and well constrained, even with many contributors over a long period of time. I do think LLMs make this harder, since they make writing code "cheaper" but not necessarily "safer", which flies in the face of mantras such as "the best line of code is the one you don't need to write." (I do feel the article brushes against this where it nods to trust, growth, and ownership.) And this is not hypothetical, but something I've already seen in practice in a professional context, and I don't think we've found silver bullets for it yet.
While I could also gesture at some patterns I've seen where there's a level of semantic complexity these models simply can't handle at the moment, and no matter how well architected you make a codebase after N million lines you WILL be above that threshold, even that is less of a concern in my mind than the former pattern. (And again the article touches on this re: vibe coding having a ceiling, but I think if anything they weaken their argument by limiting it to vibe coding.)
To take a bit of a tangent with this comment though: I have come to agree with a post I saw a few months back, that at this point LLMs have become this cycle's tech-religious-war, and it's very hard to have evenhanded debate in that context, and as a sister post calls out, I also suspect this is where some of the distaste comes from as well.
HN has a huge anti-AI crowd that is just as vocal and active as its pro-AI crowd. My guess is that this is true of the industry today and won't be true of the industry 5 years from now: one of the crowds will have won the argument and the other will be out of the tech industry.
Vibe coding and slop strawmen are still strawmen. The quality of the debate is obviously a problem.
I don’t understand why people are so resistant to the idea that use cases actually matter here. If someone says “you’re an idiot because you aren’t writing good, structured prompts,” or “you’re too big of an idiot to realize that your AI-generated code sucks” before knowing anything about what the other person was trying to do, they’re either speaking entirely from an ideological bias, or don’t realize that other people’s coding jobs might look a whole lot more different than theirs do.
We don’t know anything about the commenters other than that they aren’t getting the same results with AI as we are. It’s like someone complaining that because they can’t write fast code, you shouldn’t be able to either.
> It's so intriguing, I wonder if the people who are against it haven't even used it properly.
I feel like this is a common refrain that sets an impossible bar for detractors to clear. You can simply hand wave away any critique with “you’re just not using it right.”
If countless people are “using it wrong” then maybe there’s something wrong with the tool.
When it comes to new emerging technologies everyone is searching the space of possibilities, exploring new ways to use said technologies, and seeing where it applies and creates value. In situations such as this, a positive sign is worth way more than a negative. The chances of many people not using it the right way are much much higher when no one really knows what the “right” way is.
It then shows hubris and a lack of imagination for someone in such a situation to think they can extrapolate their negative results to the situation at large. Especially when so many are claiming to be seeing positive utility.
> If countless people are “using it wrong” then maybe there’s something wrong with the tool.
Not really. Every tool in existence has people that use it incorrectly. The fact that countless people find value in the tool means it probably is valuable.
Illogical.
I had Claude read a 2k LOC module in my codebase for a bug that had been annoying me for a while. It found it in seconds: a one-line fix. I had forgotten to account for translation in a single line.
That's objectively valuable. People who argue it has no value or that it only helps normies who can't code or that sooner or later it will backfire are burying their heads in the sand.
There are people who know how to code and people who don’t. AI is the same way, it isn’t a mystery.
A bunch of people with no construction experience could collectively get together and start complaining that their ball pein hammers aren't working.
Doesn't mean the hammers are bad, no matter how many people join the community.
You need to learn how to use the tools.
A bunch of people with poor programming experience could get together and start claiming their new tool is the future.
Doesn’t mean the tool is actually useful, no matter how many people join the community.
Except my analogy is correct and yours is clearly biased. Continue to not use the tools and become irrelevant.
> helping you understand what is happening
If only there were things called comments, clean-code, and what have you
What we call AI at the heart of coding agents, is the averaged “echo” of what people have published on the web that has (often illegitimately) ended up in training data. Yes it probably can spit out some trivial snippets but nothing near what’s needed for genuine software engineering.
Also, now that StackOverflow is no longer a thing, good luck meaningfully improving those code agents.
Coding agents now get most of their meaningful improvements in coding ability from RLVR, with priors formed by ingesting open source code and manuals directly, not SO. The former doesn't rely on resources external to the AI companies at all and can be scaled up as much as they like, while the latter will likely continue to expand anyway, and they don't really need more of it if it doesn't. Not to mention that curated synthetic data has been shown to be very effective at training models, so they could generate their own textbooks based on open codebases or new languages and train on those. Model collapse only happens when training is exclusively on fully un-curated model output.
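For readers unfamiliar with RLVR (reinforcement learning from verifiable rewards): the reward signal comes from mechanically checking the model's output, e.g. running generated code against unit tests, rather than from human preference labels or scraped Q&A. A toy sketch of such a reward function (the function name and test format are illustrative, not any lab's actual pipeline):

```python
import subprocess
import sys

def verifiable_reward(generated_code: str, tests: list[str]) -> float:
    """Toy RLVR-style reward: the fraction of unit tests that
    model-generated code passes. Real pipelines sandbox execution and
    feed this scalar into a policy-gradient update; this only shows
    why the reward is 'verifiable' without any human labeler."""
    passed = 0
    for test in tests:
        # Each test is an assert appended to the generated code;
        # exit code 0 means the assertion held.
        program = generated_code + "\n" + test
        result = subprocess.run([sys.executable, "-c", program],
                                capture_output=True, timeout=10)
        if result.returncode == 0:
            passed += 1
    return passed / len(tests)
```

Because the reward is computed by running code, not by consulting a corpus, this loop scales with compute rather than with new Stack Overflow posts.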
Exactly this. Everything I've seen online is generally "I had a problem that could be solved in a few dozen lines of code and I asked the AI do it for me and it worked great!"
But what they asked the AI to do is something people have done a hundred times over on existing platform tech, and the AI will likely have little to no capability to solve problems that come up 5-10 years from now.
The reason AI is so good at coding right now is the second dot-com tech bubble that occurred between the simultaneous release of mobile platforms and the massive expansion of cloud technology. But as the platforms that existed during that era disappear, because it's no longer profitable to put something out there, the AI platforms will become less and less relevant.
Sure, sites like Reddit will probably still exist, where people will increasingly ask about things the AI can't help with, and the AI will subsequently train on that information; but the rate of new information is going to drop dramatically.
In short, at some point the AI models will be worthless and I suspect that'll be whenever the next big "tech revolution" happens.