Ask HN: To what extend have you stopped or limited your use of AI?
Hi HN, I'm a researcher trying to understand the ways in which you have limited or stopped using AI tools.
Knowledge work is evolving, and I'm trying to understand the lived experiences of how you are actually working. There's plenty of content out there in the genre of AI for "X", using tools etc. - but I'm curious to learn whether you adopted AI in some area of your work and have since chosen to stop. What was the context? What did or did not work?

I stopped using AI for code. Everything it gives me requires manual checking and editing to work, and now I have the overhead of not having the deep understanding I would otherwise have had if I had figured things out myself from the start. I also stopped using AI as a sort of replacement for quick Google searches in general, but I didn't go back to using Google because now I feel those results are too shallow. I've been trying to figure things out from primary sources as much as possible and thinking for myself, since it seems to me the value going forward is having depth and breadth over the competition, which you only get by doing things the very old and slow way.

AI is like a calculator. We don't degrade because we use it. We just spend more time thinking, not doing routine calculations.

> AI is like a calculator.

...that confidently tells you "1+1=2.71828182845904523536"

I installed Claude Code (an agentic AI that runs on your machine and can browse and edit code, run the build, and use Unix commands). This has largely taken me out of the loop. I give it detailed tasks like I would a junior engineer, and we discuss approaches and align on direction, priorities, and goals, and it then goes off for literally hours iterating on the code. I have done about 3 months' worth of extremely complex engineering work in about a week since I started doing this. It is a step change from trying to use the chat interface and copy/pasting snippets. Once it's going it writes code like a staff engineer. There are some obscure bugs it can run into that need my 20 years of experience to unblock, or to unwind when it goes down a rabbit hole. But it has accelerated my development 500x, and while it's iterating I'm not filling my mind with code and syntax and so on; I'm more like a tech lead or manager now. I'm in another room playing with my dog.

This. I have started testing Copilot for fun; my wife needs a web-based project that is written in PHP and has not been maintained for a while now. I asked Copilot (Agent mode) to translate it to Rust just for the fun of it, to see how far it would get - I expected nothing out of it. I broke down the tasks into manageable chunks, directed it in design choices, and asked it to use some specific frameworks. So far it has written 40k lines of Rust on its own, and it keeps track of what functionality is missing compared to the original project. It was impressive seeing it iterate alone for 30m+ at a time. I'm no programmer, more of a systems/cloud engineer, so a rewrite like this would likely have cost me >2 years of work and still ended up useless for all intents and purposes. I'm pretty sure the end result won't work on the first try, and I'll need to fix stuff manually or direct Copilot to fix it, but after two weeks of 1-2h of iterating at night, I have 90% of something that would have required someone full time for at least a couple of years. The two things I found most valuable (also in other things, like shorter bash and python scripts):

1. Syntax boilerplate: if your task is specific enough, it normally gets it right 99.99% of the time, and my brain can look at the actual logic rather than the commas, brackets and (in python's case) spaces.
2. Documentation: I spend 95% less time looking at documentation for something. I don't need to comb the entire language/package/class for the specific things; it normally gets it right, and worst case I can ask it to do a refactor to the most modern standards for a specific library version X.

Are you exaggerating about "literally hours"? If true, that implies you're making better use of CC than I am. For me the record is maybe 10 mins. Do you write extensive functional/technical instructions? E.g. how large is your prompt (incl. task descriptions in files the prompt refers to)? Curious to know.

I have been giving it about the same amount of context as a large GitHub/Jira/whatever issue. But I'm giving it a goal with a fairly distant milestone where it needs to hit 5-10 smaller milestones, and even for an advanced AI, given the system languages I'm mixing in this large project, there are build errors it powers through on its own (fixing the building, linking, or syntax, or using the APIs for the versions of all the things I'm using), and there are some combinations of tech (windowing, webviews, etc.) and some architectures that require it to try multiple approaches to succeed. Sometimes it will ask something like "how am I doing so far, should I continue", or "can you run this visual thing and report back". But there can be 30-60 mins between it doing that. It also has to scan a lot of files and find the things it needs to fill its own context. Like "let me examine how you implemented that on Mac and Windows and come up with a solution for Linux". Other times, while it's reasoning, if I notice it's going down a rabbit hole I'll point it out and set it in the right direction without interrupting it, and when it moves on to the next thought it'll incorporate my suggestion, but it didn't really stop or need another big prompt or goal. There are plenty of times where I do need to step in and baby it with small goals through something tricky or obscure. But even in those cases I don't have to write any code directly; I just ask it to do the xyz small change and hand it back for me to test.

I haven't limited my AI use. In fact, it has increased. It still feels experimental, and I often use it even when it does not save time, simply to avoid effortful thinking. That concerns me. I believe we are heading toward a world where AI offers easy mental shortcuts for nearly everything, similar to how cheap carbs became widespread in our diets. I do not yet know how I will deal with that. For now, I am just a kid in a candy store, enjoying the novelty.

Interesting - what are the kinds of tasks that make up 'effortful thinking'? Do you feel like you're now putting in "effort" towards other kinds of thinking / work? I am curious if using AI has changed the fundamental ways in which you view "effort" and "value" from pursuing a piece of work. Are there new kinds of challenges that come up when you're using some new AI tools? I find the analogy to candy particularly interesting, the default comparison being that "too much of it is bad for you". Do you feel that you are putting on "cognitive weight" as a result of using AI?

What did not work? Many things appear to work at first, right. Most of the time, using AI seems great, until one spends a lot of time working out lots of important details. A bunch of prompts later... Yeah. Sometimes it is nice to begin with something, even if it is wrong. AI is great for that. Funny how much we can write in response to errors! Out it comes! Like that fire hose trope.
In that vein: proposal templates and other basic creation tasks can start with a boost. Oh, a surprising one was distilling complex ideas into simple, direct language! And for code, I like getting a fragment, function, whatever, all populated and ready for me to just start working with. Bonus for languages I want to learn more about, or just learn. There are traps here. You have to run it with that in mind. Trust, but verify. What did not work: really counting on the things. And like most everyone, I suppose, I will easily say I know better. Really, I do, but... [Insert in here.] Filtering of various kinds. I may add to this later.

I personally do not use LLMs (knowingly) at all, largely for environmental reasons. The improvements are, to me, not worth the atrocious energy use. I will happily pay 20% or whatever more in both money and time. I also have no interest in technology that impedes my skill development. I do not want to use anything that makes me a worse writer over time. YMMV, I am answering the OP, not evangelizing. Counterarguments will be ignored.

You don't owe anyone a back-and-forth discussion anyway, so the last line comes off as redundant.

You are, of course, right. But my experience with comments in response to the environmental impacts of LLMs here has been so negative that it feels appropriate nonetheless.

What do you mean by energy here? Personal energy (i.e. time, effort, up-skilling, etc.) or energy as a physics concept (i.e. literal energy usage in watts, the production of which will harm the environment)?

> YMMV, I am answering the OP, not evangelizing. Counterarguments will be ignored.

Reminds me of the Monty Python Argument sketch.

No it doesn't

My only negative thing about AI is that currently it's hard to do it locally; you rely on a third party who now suddenly has access to a large portion of the code. That is really a downgrade from before AI, when I had absolutely everything locally except Google search. But other than that, the bottleneck is with me, not with the AI. I think I can get a lot more out of it by getting more experienced at offloading work to AI.
It's just so good how I can just let it run bash commands on my system and figure out my problems. Too good of a utility to pass up on.

For me, it's increasing and increasing by the day. I have no interest in limiting or stopping it, and every day I'm working to improve my workflows and interactions with AI. And I say that as a software engineer with 20-something years of experience; I'm not a new kid on the block.

I did not stop. I do limit my use today, compared to a few months ago. Most of that is having successfully mapped out the use cases that make sense; I find myself doing less seeking. Where it is a net gain, go; otherwise, why bother?

What are the kinds of use cases for which it makes sense to you? And how do you evaluate the nature of the gain before deciding whether or not using AI will be worth it?

My other comment on this discussion partially answers your questions. I don't evaluate the nature of the gain before deciding whether AI will be worth it. Fact is, I do not know prior to some use. I suppose the better question, given the above, is how I decide to try it. I wait until I have both time and a purpose to gamble with. When both become available, I will make an earnest attempt. I may make several. The process is not a whole lot different than it has always been for me with new tools and tech. One simply has to give it a go to understand the value. Use cases are in my other comment.

Using gen AI for anything artistic (illustrations, music, video, creative writing) is a dead end. The results are soulless and bland. People notice immediately. Code completions are fine. Driving code through chat is a complete waste of time (never saves time for me; always ends up taking longer). Agentic coding (where the LLM works autonomously for half an hour) still holds some promise, but my employer isn't ready for that. Research/queries only for very low-stakes/established things (e.g., how do I achieve X in git).

I don't use AI at all, primarily because I believe it's harmful, and I am quite mindful of things. I've observed colleagues who have used it extensively. I've often been a late adopter of things that carry unspecified risk, and AI was already on par with Pandora's box in my estimation when the weights were first released; I am usually perceptually pretty far ahead of the curve naturally (and accurately so). Objectively, I've found these colleagues' attitude, mental alacrity, work product, and abstract reasoning skills have degraded significantly relative to their prior, pre-AI work. They tried harder, got more actual work done, and were able to converse easily and quickly before. Now it's "let me get back to you", and you get emails that have quite clearly been put through an LLM, with no real reasoning happening. What is worse is that it has happened in ways they largely do not notice, and when objective observations are pointed out, they don't take kindly to the feedback, despite it not being an issue with them but with their AI use and the perceptual blindspots it takes advantage of. Many seem to be adopting destructive behaviors common to junkies who have addiction problems. I think, given sufficient time, this trend will be recognized; but not before it causes significant adverse effects.

This is super interesting; could you share some examples? Plenty of philosophers are with you in that technology doesn't just exist as a "tool" but actively affects the ways in which we perceive and relate to the world, and understand ourselves.
What are some ways in which you have seen the perceptual abilities of coworkers erode over time? An efficiency-oriented logic makes us think that we're getting the work done "faster", and it "feels" like faster time to market, but in reality you experience a slowdown and a decline in quality... PS: my own dependence on Wispr (a speech-to-text dictation tool) changed the way I write / interact with computers - my over-reliance meant I didn't proofread the title, and the "EXTEND" sticks out like a sore thumb...

Although I do read a lot of philosophy books, I came to this point of view from a different direction. I took a course a while back taught by a retired military professor on communications, and it was eye-opening. He covered what you would expect, but with a slant towards 5GW, irregular warfare, and political warfare, and heavily referenced Gershanek as a supplemental text, which is published by Navy Press.
(https://www.usmcu.edu/Portals/218/Political%20Warfare_web.pd...)

Communications holds a privileged position that is tied strongly to and influences our individual psychology and identity. Reflected appraisal is how we adopt culture from our parents, and it can be manipulated to distort that in ways that are harmful, if you understand the mechanics; and distortions cause psychological stress (the basis for torture), which can be used for malign influence, destructive interference with core identity, compulsion, or torture. This, along with other structures, elements, and clustering, can cause changes where, if you aren't mindful of your environment, you don't recognize it has happened; all you have is confusion as your identity/soul gets peeled back and hollowed out, and this is the basis for how cult programming and the related involuntary hypnosis work in practice. The same goes for POWs from the 1950s. There's quite a lot of material on this if you know where to look.

> Some ways in which you have seen the perceptual abilities of coworkers erode over time?

This is going to sound very subjective, but their overall cognitive speed has decreased dramatically. When you learn a skill to the point where it's automatic, you can get a good flow going from a to b to c, pivot to e, etc. with no delays; they struggle with each step/connection, each reasoning portion. Almost like there's interference, but it's persistent and consistent; and they either don't notice, or they get defensive. When they need to make a determination or design decision, they will miss the pivots and not account for things, which leads to significant mistakes that would never have happened before. The solutions they come up with are for the most part no longer creative. They used to take functional structures they had collected and knew well that worked, and repurpose them, or apply them in ways that were quite creative towards a problem that they defined. Now they largely don't; and the definitions of the problems they define are only slightly better than the LLM's at this point; they used to be much better. A lot of due diligence is also no longer being done. When asked about specific things, instead of being able to answer, they get confused, sometimes even incoherent - behaviors that seem very dementia-like - but these are guys almost fresh out of college in their mid-20s, and they aren't on drugs (we are all tested regularly). There are ways people can be blinded, where they will adopt a misleading stance based upon structure (without any reasoning), even very intelligent people. I'm of the opinion that the inconsistency of the LLM's responses, which are treated as communication, is gradually damaging people. Incidentally, people who have had a lot of exposure have also stopped taking on the more difficult or challenging tasks.

I see AI the same way we see calculators: they don't make us worse at math, they just offload repetitive computation. The core question is not "are we degrading," but rather: are we thinking better with better tools? Personally, I use AI only to reduce boilerplate and explore alternatives; the decision-making and abstraction stay on me. If someone starts thinking less because of tools, the problem isn't the tool, it's how it's used.

This is a misplaced circular fallacy, but to each their own. I value my life, and by extension my mind, quite highly. Those who seem to use these tools become dumber in ways they do not notice.
In much the same vein, I become smarter in relative retrospect just by holding to my guns and shielding my exposure. If you use a tool whose primary consequence of use is that you become damaged and lessened each time you use it, and this happens in most cases in a way where you cannot recognize it happening, how do you ever stop? If you cannot know how to safely use it, and you cannot recognize the mechanism or issue, what is left? If it alters your ability to perceive things, you certainly can't decide something if you don't recognize the need to decide. If the factors required for that decision to come about are outside your perception, where the connections for a correct decision no longer exist, there isn't anything you can do. You take the old argument that it's just a tool, saying the choice is with the person, who is responsible, not the tool, and yet the person doesn't, or more likely cannot, notice or recognize the damage happening. It's a very rare person who is capable of introspection at such a subtle degree. There is also no informed consent about the danger, so as for all those children being force-fed this stuff as GenAI, when the data finally is in; well, I don't want to think about a future like that, where there may be no future at all for them. The decision-making process requires things that you may not have anymore, and while you may continue to think falsely that you do and are still capable of that, you've been blinded, and when that happens, you've definitionally entered a state of delusion. Quite a lot of delusional people don't realize they've gone off the deep end; it's a perceptual deficit. Who knows, maybe it will go so far as delirium as the debasement progresses and you unravel as a sentient person. We all have psychological blindspots, and there is one blindspot above all others that we have no defense against, called distorted reflected appraisal. There are some things where the issue is directly with the tool, not how it's used.

You make some very good points, but I would argue that many if not all of the people you observe "without introspection" have enough of it to know they're getting dumber, but they don't care, because they never liked the art of writing code itself or problem solving; they just liked getting a paycheck, and are relieved that, with far less effort, they can still have output their managers don't fire them over. They see the same thing as you but are overjoyed to power down their brain and spend more time thinking about their personal hobbies and sports and 90 Day Fiancé instead of real problem solving or code.

Writing. I've tried for years to build writing tools with AI. I think for the most part it doesn't work well, and the models have become worse (more unnatural) since GPT-3, with the exception of GPT-4.5 and Gemini 1.5 Flash. There are bits you can delegate to AI: writing punchy intro paragraphs. Brainstorming titles. Starting off dialogue in a certain style, but it can't sustain it for very long. Or dialogue as another person - you often don't want two characters with similar language. Writing is thinking. You can rubber-duck it for ideas. And it does bounce back some good ones. But you can't expect it to do the heavy work. Lately, I've been reversing the dynamic - getting AI to generate the bullet points while I write the document. The last straw was when I got it to summarize a doc, and then got it to do work based off the doc it wrote. It would get half the work wrong.
I tried it for writing, and while the main LLMs do a decent job of vomiting out somewhat wordy but essentially okay text if you want some kind of generic content on a specific subject, there's always a distinctly generative feel to it, at least in my impression. The real problems emerge when you ask for technical or data-rich writing. The little invented or "mistaken" details are just too frequent for it to be useful, unless you do enough editing that you almost might as well write what you wanted yourself. Given the above, it's useful as hell for generating templates and usable starters for your own work when you're feeling stuck, and that's mainly it for me.

I don't use it at all. I'm not delegating my thinking to a machine that can't think. Learned helplessness as a service isn't a thing I want, and I worry that long term it will make me think less deeply in ways I can't predict.

I stopped using ChatGPT because, even though it's faster, I find its answers and ability to do tasks very unreliable. For instance, I had a series of references I wanted it to quickly put into MLA format. It deleted 10/100, then made up 10 real-sounding ones. If it can't help with formatting, then I can't trust it with bigger tasks. I use DeepSeek both locally and online more frequently and find it more trustworthy, but it still frequently makes things up. Maybe because it's Chinese, I've found it better at translation and somewhat more nuanced.

Haven't tried it yet. I hear it's having some impact!

The main reason I'm slowly using less and less AI is price. I'm relying on the free tiers, but I know that's not gonna last forever. I don't want to pay for top-notch AI, just like I don't pay for top-notch kernels (i.e., Linux), top-notch version control systems (i.e., git), and so on.

If anything, it's really going the other way. Free stuff keeps catching up with the most expensive pay-for stuff from a year or two ago. If I lost access to all the hosted AI today, I don't think I'd really feel like I'm missing out. I'd also disagree that Linux is the top-notch kernel. It might be the most universal one because of drivers and licensing, and that makes it my personal favorite because it "just works" with pretty much no fuss (technical or social), but there are a number of kernels out there with better features.

Not at all. I'm an attorney. Fast law is bad law. If you're using LLMs (stop calling it AI ffs -- only morons parrot the marketing hype) to do law, you're just asking to get slapped sooner or later. Like the moron that is My Pillow Mike Lindell's lawyer. See here: https://ia801706.us.archive.org/34/items/gov.uscourts.cod.21...

> LLMs (stop calling it AI ffs -- only morons parrot the marketing hype)

Researchers have called much less intelligent things AI since 1956. Before there were GPTs, there were RNNs and CNNs. AI is the field of study.

Oh, my mistake. Lying is fine as long as other people are doing it. Good logic.

I rarely call it AI myself, and for similar reasons. The LLM models are something. To me they look like knowledge holograms. Or maybe just a huge pattern-inference machine. And they are ODD. Like a fragment of smarts. Dangerous, and for sure not something we can rely on.

It's not lying. It's a term with historical precedent. It's an aspiration to mechanize thought. Are we there yet? It's a continuum. Excellent logic.

None. I try to use AI more every day. I'm also a type 1 diabetic, and this is like asking me to what extent I have stopped or limited my use of insulin.
AI and insulin (to different extents) make my life better in significant ways. Why would I stop or limit that?

I haven't stopped using AI, but I use it less than I did a few months ago. Now I mostly turn to it when I'm stuck or need inspiration. Using it less actually made me more efficient.

I neither stopped nor limited it. That's because I use it in moderation when it makes sense (at least to me) and not with religious fervor.

> AI is like a calculator. We don't degrade because we use it. We just spend more time thinking, not doing routine calculations.

Hard disagree. I trust my calculator implicitly to be correct +/- rounding. I distrust AI to the point that I have to confirm every little "routine" thing it's thinking. At least they've started putting in links/references, so that makes it easier.

I use AI in Google search results because I have not bothered to turn it off and it's just there. Otherwise I have avoided AI. I'm thinking about limiting the extent to which I use it for coding. Just to stay sharp. Need to exercise my brain more.

Extent.
The word you're looking for is extent.
Sorry. Word nerd.

You are correct! Sadly I can't go back and edit. If this were an LLM, I would just start a new chat without the typo. A new context forgives the sins of the past. Funny that I'm exploring the impacts of over-dependence/reliance on AI tools and made a mistake because of that very thing. Will certainly proofread after using speech-to-text! And yes, extent*

I've been reading John Evelyn's diary, among others from the 17th century. There is absolutely no sense there that there is a correct spelling of a word, as writing is correctly seen as speaking written down. Correct spelling is a modern invention of limited value (but helpful for AI).

That was mean. I'm sorry... But the point stands.

Not really a "word nerd" situation. It's a really obvious typo. Might as well have said, "tee hee, I'm, like, autistic for words".

The more I use something, the more I become both a hater and a lover. I use AI a LOT, and I have a love-hate relationship with it. Therefore, it's actually something I value daily. If I only love something or only hate something, it's shallow and meh.

Tell me more about this love-hate relationship! What was the moment when you fell in love? And what are the moments that culminate in hate? What are the contexts in which you use AI that make the love persist?

"I've been using it continuously without restrictions, trying to find ways to make AI smarter because it really helps improve efficiency!"