Ask HN: Am I the only one not using AI?
I've tried using various AI tools and models over the past couple of years, but honestly it feels like it gives me a false sense of confidence. Plus, the time I supposedly save building things gets eaten up debugging, correcting, improving the AI-generated slop.
Am I using the tools wrong or are others finding the same thing?

AI ruins your flow. That's the biggest problem. I sit here and wait for Claude to do something. Then I get distracted by social media.

No, these things don't actually work if you study human psychology:

* Switching to another work task (what, for like a minute?)
* Playing chess or something (sure, it's better than social media, but still a distraction)

But I do like AI tools that don't interfere with my flow, like GitHub Copilot, or even chatting with Claude / ChatGPT about a task I'm doing.

I started doing pushups between Claude Code responses. I started with 10 but now I rip ~50 like nothing. I'm getting a pull-up bar and trying to do the same: pull-ups until it completes, then prompt again; squats, pushups, etc. I'm getting stronger and better at code.

You are getting stronger. I very much doubt you are getting better at code.

stare out the window. look at clouds. wonder how they take the shapes they do. think about water and how it moves through time and space. how those water molecules were once in a bowl of rice or loaf of bread. how many other things has this water been in? what about the water in my body, right now? holy shit, i’ve been a cloud before… oh, claude’s done now. how does this thing work?

E-e-e-xactly. It took an embarrassingly long time for me to come to this conclusion also. There's something hypnotising about seeing it work, which is also distracting. I wonder if I've actually saved time overall, or whether, in an uninterrupted flow state, I would have done not just a better but also a quicker job.

"allowing AI actually increases completion time by 19%--AI tooling slowed developers down."

I just asked Cursor-GPT where a service was being called from; it's been at it for over 10 minutes and hasn't come up with an answer. Just constant grepping and reading and planning next moves. So aggravating.

You are not alone. After falling in love and hacking away with Claude for a few weeks, I'm now in the hangover phase, and barely using any AI at all. AI works well to build boilerplate code and solve easy problems, while confidently driving full-speed into the wall as soon as complexity increases. I also noticed that it makes me subtly lazier and dumber. I started thinking like a manager, at a higher level, believing I could ignore the details. It turns out I cannot, and the details came back to bite me quickly. So, no AI for me right now, but I'm keeping an eye out for the next gens.

You're definitely not alone. Social media amplifies the "AI is everywhere" narrative, but in reality? Most people are still shipping code the old-fashioned way. I'd estimate maybe 20% of devs have actually integrated AI into their daily workflow beyond occasional ChatGPT queries. The other 80% either tried it and bounced off the friction, or are waiting to see which tools actually stick. Not using AI doesn't mean you're falling behind - it means you're avoiding cargo-culting. The real skill is knowing when it's worth the context-switching cost and when grep + your brain is faster.

Whenever I try to use Claude for my programming work, it surprises me with how confidently it states wrong facts or analysis results. I spend a lot of my token quota just correcting it and having it generate an apology.

Just started a Claude Code experiment this week. I'm building a new NAS, but instead of using an appropriate off-the-shelf distro like TrueNAS, I just installed NixOS and I'm having Claude Code fully manage the entire operating system.
It's going pretty well so far. Initially it would reach for tools like dig that weren't available on the install, but after a "# memorize We're on NixOS, you need to try to do things the NixOS way first. Including temporarily installing tools via nix-shell to run individual commands." those issues went away and it's doing NixOS things.

From a clean NixOS command-line install, we've got containers and VMs handled. Reverse proxy with Cloudflare tunnels, with all endpoints automatically getting and renewing SSL certs. All the *arr stack tools and other homelab stuff you'd expect. Split-horizon DNS with unbound and pihole running internally. All of my configurations backed up in GitHub. I didn't even create the Cloudflare tunnels or the GitHub repos; I had Claude Code handle that via API and CLI tools. The last piece I'm waiting on to tie it all together is my actual data drives, which should be here tomorrow.

Is this a smart thing to do? Absolutely not. Tons of things could go wrong. But NixOS is fairly resilient and rollbacks are easy. I don't actually have anything in use on the NAS yet, and I've got my Synology limping along until I finish building this replacement. It's still an open question whether I'll use Claude Code like this to manage the NAS once I've migrated my data and my family has switched over. But I've had a very good experience so far.

I don't use it for large-scale code generation, but I do find it useful for small code snippets. For example, asking how to initialize a widget in Kendo UI with specific behavior. With snippets, I can just run the code and verify that it works with minimal effort. It's often more about reminding me of something I already knew rather than discovering something novel. I wouldn't trust it with anything novel. In general, I think of it as a better kind of search. The knowledge available on the internet is enormous, and LLMs are pretty good at finding and synthesizing it relative to a prompt. But that's a different task than generating its own ideas. I think of it like a highly efficient secretary. I wouldn't ask my secretary how to solve a problem, but I absolutely would ask if we have any records pertaining to the problem, and perhaps would also ask for a summary of those records.

“I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.”

― Douglas Adams

While I love Douglas Adams, and there's a hint of truth there, I don't think it works well here. By that logic, anything invented after 2005 would be abhorrent to me. Yet there's tons of new tech invented since then that I find/found "new and exciting." My dislike of GenAI stuff is based on practical, ethical, and economic concerns.

Practical: GenAI output is bland and untrustworthy. It also discourages thought and learning, IMO. Lest folks line up to tell me how wonderful it is for their learning: that may be true, but my observation is that that is not how the majority uses it. Once upon a time I thought the Internet/Web would be a revolution in learning for people. Fool me once...
Ethical: So many problems here, from the training data sets to people unleashing scraper bots that have been effectively DDoS'ing sites for going on a year (at least) now. If these are the kind of people who make up the industry building these tools, I want nothing to do with the tools.

Economic: Related to ethics, but somewhat separate. GenAI and other LLM/AI tools could benefit people. I acknowledge, for example, there's real promise in using various related tech to do better medical diagnostics. That would be wonderful. But the primary motivation of the companies pushing AI right now is to 1) get people hooked on the tools and jack up prices, 2) sell tech that can be used to lower wages or reduce employment, and 3) create another hype technology so they can stuff their pockets, and the coming crash be damned. Again, what is driving AI/LLM is not well-intentioned. Ignore that at your own peril. Probably everybody else's peril, too.

Adams no doubt knew people who were aghast at PCs or mobile phones because they were not around when they were younger. I get it. But, well, I wonder how Adams would feel about GenAI tools that spit out "write blah in the style of Douglas Adams" after being trained on all of his work.

I think you could make identical arguments about any new technology. Amazingly expensive computers were once not very practical; it's easy to point at any massive, world-changing tech and call it unethical. You can find these arguments about oil, railroads, 24-hour news, etc., and the same is true for economic incentives. Robber barons and railroad tycoons were not well-intentioned from this point of view. I don't think there's anything about AI that is inherently different from previous tech. And isn't that the entire point of the quote? I'm not trying to be dismissive of your point, just to pose a counterpoint.

My workflow is simple:

1) THINK hard about the problem by yourself.
2) Define rough sketches of function names, params, flow, etc.; adapt to your problem.
3) Iterate with any LLM and create an action plan. This is where you correct everything, before any code is written.
4) Send the plan to one of the CLI LLM thingies and attack the points one by one so you don't run out of context.

So far it has been working beautifully for real work stuff. Sometimes the models do drift, but if you are actually paying attention to the responses, you should be able to catch it early.

I've recently started to use it to help with writing tests. I'll write the code, then scaffold out the test scenarios that I want it to cover, give it my code and my scaffolding, and say fill it in. It's done pretty well and saves me a ton of time doing the part of the job that I hate the most. I go through and tweak probably 10% of the generated code, and typically about 1 out of 30 tests will fail, badly, and I'll have to rewrite it from scratch.
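To make "scaffold out the test scenarios" concrete, here's a minimal sketch of the kind of empty stubs I mean (pytest-style; the file name, test names, and the pricing module they describe are made up for illustration):

```python
# test_pricing.py -- hypothetical example of the scaffolding I hand over.
# I write the test names and docstrings; the LLM fills in the bodies
# against the real code it can see (here an imaginary pricing module).

def test_discount_applied_to_eligible_order():
    """An order over the threshold gets the percentage discount applied."""
    ...  # left empty on purpose; "fill it in" is the prompt


def test_discount_not_applied_below_threshold():
    """Orders under the threshold come back unchanged."""
    ...


def test_negative_total_raises_value_error():
    """Garbage input should fail loudly, not pass silently."""
    ...
```

The docstrings carry the intent; the model's only job is to translate them into setup and assertions against code it can already read.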
I found the same thing, so I don't bother with ai-gen code AT ALL. I found that the time wasted fixing up the slop was not worth it - it's more efficient to code it yourself, as shown by studies (e.g. the one referred to here: https://www.linkedin.com/pulse/vibe-coding-myth-when-feeling...). No, your vibe-coding is not more productive, unless your only metrics for productivity are commit counts, PR counts, deployment counts. I can commit, PR and deploy crap all day long and "score well" - and this is what people are clinging to with their ai-gen defenses.

I'm really sorry to inform you that your experienced "speed-up" is just a trick of the brain (remembering from an article written, iirc, by Gurwinder, but I'm having trouble finding it now) - you're actually going slower, and your brain is tricking you into thinking it's faster, because whilst the AI was "coding", you didn't have to, so it feels like more of a win than it actually is, considering the results.

I don't and I won't. My large clients do not care. The day they start to require any of that nonsense, I'll drop them as a client. Simple as that.

I have decided to be radical about AI and LLMs: I don't like them because they are a waste of time, and I would like them even less if they were this magical, world-changing technology people want us to believe. I am at a point in my career where concerns about productivity or how to operate in large-scale tech companies are the least of my problems, while I increasingly appreciate the artistic craft of programming and computers, especially at small scale to improve our lives rather than accumulate profit. So while I could admit LLMs have their uses, I want to consciously follow a path where human intelligence and well-being are of the utmost concern, and any attempt at creating intelligent machines is tantamount to blasphemy. Under this philosophy, seeing that all the talk about imminent AGI has led to creating spam and porn at large scale, I can only roll my eyes and remain on the sidelines while we continue down this idiotic path of resource extraction and devaluation of human ingenuity for the profit of the few.

I use LLMs every day for coding assistance the same way I used search engines in the past, and to that extent I do not see an issue. Why would you avoid that out of principle?

For the same reason Richard Stallman only uses free software. Sometimes it's good to have a moral imperative and stick to it. This is mine. I'm far enough in my career to know that avoiding coding assistance or LLM-assisted "search" won't make my life or craft worse in any way. Quite the opposite, in fact.

Shouldn't you refuse to use anything but pencil and paper by that logic? An abacus? No, not that, that's technology. Only your fingers? The godhead resides equally in the petal of a flower, the gears in an engine, the human-typed code on servers, as well as the machine-generated code on the very same servers.

There is an extreme difference between outsourcing physical difficulties to mechanical advantages, and outsourcing your brain to a large corporation that is stealing all of the IP that we have collectively created, as a society, and using it to replace our brains. It feels clever to make comments like yours right now, but in two years, when the order of control flow moves up two more steps and you are no longer needed at all, it'll be frustrating to look back and think "I wish I wouldn't have given money to them."

You might be right, as we don't know yet whether these tools actually enhance our craft or make it better in the long term.

> I'll drop them as a client

Not unless they drop you first.

I can't actually be bothered myself either... I did take a look at a Gemini result, but it was different from the immediate results under it, so it didn't leave a lot of confidence that it gets even the simplest things right.

> Am I using the tools wrong or are others finding the same thing?

Like any new tool, there is a learning curve. The curve is rather steep right now, with the horizon changing too quickly.
The right tool also matters a great deal; right now you can run a model at home on 32GB of VRAM that's objectively better than GPT-3.5 from 2023 or Grok 2.

> Plus, the time I supposedly save building things gets eaten up debugging, correcting, improving the AI-generated slop.

Those complaining about AI slop are almost certainly complaining about a lack of prompt engineering skills. Let me also explain the proper evolution here. In 2021, you would go to Stack Overflow, copy some of your code or ask a question, and hopefully someone helped you sometime. Then you'd get the help and probably paste their code in. In 2024, you would go to AI, copy some of your code, ask a question, and the AI responds quickly. The solution might be bad, buggy, and so you reprompt because your first prompt wasn't engineered well. You finally get good code and copy and paste. In 2025, why all this copy and paste? Why not use an agentic tool, where it does the copy and paste for you? It knows what to read and what to do. Also 2025: what if you have AI also orchestrating one level higher and verifying that it itself is doing a good job? One is passively receiving the answer and the other is actively reading and comparing multiple choices. If you were the type that would just copy-paste whatever came up first, then yeah, it's just quicker to do it that way.

You forgot 2023, where you'd get half-lucid unit tests.

In my company (big tech), the tools are integrated into our environment, most people use them, and actually 20-30% of the new code in prod is generated by AI. I was skeptical, and I don't like this new world, but it's happening. I don't know what the final form of this will be, how our jobs will be impacted, or how much more productive we really are with the tools. But it's not hype; these tools are here to stay and have changed the way we code. I don't think they will replace coders, but they will make the best programmers more efficient. As you said, it's easy to lose time with the generated slop, but someone who uses the tools wisely is more efficient.

I am not. Technically, that would mean adding a stochastic process driven by an ambiguous language to my deterministic workflow, and changing my job from deep thinking about problem solving to reviewing and testing code that changes more often than it should. Ethically, it abuses people, the environment, and copyright, and the people selling the product are lobbying governments to break the social contract in a bid to extract more wealth. It's a shit show.

Yes, you are using the tools wrong. These tools are _hard_ to use well. Very hard. Deceptively hard. So hard that smart engineers bounce off of them believing them to be all smoke and hype. But if you study the emerging patterns, experiment, and work through the difficulty and rough edges, you will come out the other side with a new skill that will have you moving faster than you believed possible. There are people who will think I'm either lying or delusional. It's fine. They just haven't made it through the fog yet. They'll get there eventually.

These AI tools are hyped as "look how easy and fast it is to build an app with no coding experience." But then I see posts like yours saying that the tools are so _hard_ to use that developers with advanced skills struggle to get them to do what they can already do faster and better and with more satisfaction on their own. Both "it's so easy" and "it's hard, but believe me, worth it someday" are completely unconvincing arguments to me.
I'd rather just do my job well than spend all my time chasing someone's overhyped fantasy down a rabbit hole.

You are conflating two different things. There are "vibe coding" tools like Lovable that let you build a prototype-level app with no coding experience. They're easy and fun, but I probably wouldn't want a novice (or anyone, really) using them within an enterprise codebase. Then there are tools like Claude Code which, when used by a skilled practitioner, can be used to accelerate real SWE work in enterprise production codebases. You are of course free to bury your head in the sand and ignore both categories of tools under the same "overhyped fantasy" umbrella, but I think you're doing so at your own professional peril. Just my opinion, though.

Every single time I try to use it for research or learning, it ends up spitting out subtly invalid code. Results range from imaginary APIs that don't exist to straight-up invalid syntax, not to mention outdated info, contradictory reasoning, and flat-out wrong explanations. Maybe spending $200/mo or whatever to access the top-of-the-line models will mitigate some of that, but I'd rather come up with the solution and gain the understanding myself, then spend the money on something worthwhile.

It's a mixed bag. It depends on your problem domain, the problem you are trying to solve (within that domain), the context you provide the LLM, and the output it generates (you are using libraries to coerce the output into predictable (JSON) structures, right? See the sketch at the end of the thread). What's more, based on what you are trying to do, the LLM you are using might have sufficient training data, but not necessarily so (resulting in possible hallucinations/confabulations). So, there's that. Also, LLMs are not deterministic; they can (and will) generate a different response every time you call them (even if the context you provided is the same). So, yeah... sometimes these things really deliver and other times it's just... meh!

Yep, agreed.
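To illustrate the structured-output point above: a minimal sketch, assuming Pydantic for validation. The TicketTriage model, its fields, and the call_llm helper are all made up for illustration, not any particular library's API:

```python
# Hypothetical sketch: coercing an LLM reply into a predictable JSON structure.
# `call_llm` stands in for whichever client you actually use; the schema is invented.
import json

from pydantic import BaseModel, ValidationError


class TicketTriage(BaseModel):
    category: str
    severity: int       # 1 (low) to 5 (critical)
    needs_human: bool


def triage(ticket_text: str, call_llm) -> TicketTriage | None:
    prompt = (
        "Classify this support ticket. Reply with JSON only, matching "
        '{"category": str, "severity": int, "needs_human": bool}.\n\n'
        + ticket_text
    )
    raw = call_llm(prompt)
    try:
        return TicketTriage.model_validate(json.loads(raw))
    except (json.JSONDecodeError, ValidationError):
        return None  # non-deterministic output: retry, repair, or fall back
```

Even with a schema, the non-determinism doesn't go away; you just get a cheap, mechanical check that tells you when the output drifted instead of finding out downstream.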