Issues with AI: toxic dependencies

10 min read
  • on Fri 06 February 2026

Tags: opinion rambling ai

This is the first part of a series of essays on so-called "AI", because I cannot manage to split my focus across all the issues in a single blog post. This one focuses on our relationship with our tools and what we produce, particularly in the context of coding, since that is one of my areas of interest that allows me to eat.

Giving up agency

The first and most immediate consequence of people using LLMs and the associated tooling to produce code or related artefacts is that it creates a dependency between the company selling you access to frontier models and the production of those artefacts (be it code, commit messages, Jira tickets, whatever).

The feedback loop is very tight, and while you can certainly use local models for some tasks, most people offloading work to LLMs today do it using Copilot, Claude Code, or whatever else is either a product of a trillion-dollar company, a non-profitable billion-dollar company, or a doomed startup. To some extent, this has long been the case for many people doing business, using software they have limited control over - either because it actually works for them, or because it is mandated from above. I will argue, though, that in the Linux or the broader FOSS community this has been less prevalent.

Today, this is changing, and I can see many people in the cultural and/or political FOSS environment being swayed by the appeal of "efficiency", "less drudge work", etc. I will put aside the "I do not want to be against progress" argument for this post, in part because I have an older blog post briefly mentioning it, and also because that will be the theme of a longer essay later.

Some people are now selling hand-written code as a thing of the past, because you will just send a few sentences to Anthropic, which will answer with a fully fleshed-out project. Well, that is terrifying, but not for the reasons usually assumed.

I learnt to code using vim, and I have been using vim or neovim for decades now. Part of the reason is that it is a solid piece of software: it does not ask me to renew my licenses, it does not exfiltrate my code, it is tailored to my needs, and it is very predictable. If tomorrow I lose access to the US giants and need to go back to coding from a solar-powered ThinkPad x61, I can. I will certainly lose some nice things (LSPs in particular are a bit heavy for older machines), but nothing of astonishing value will be lost.

It is not that I enjoy closing braces manually or similar mundane tasks - there is of course a plugin for that anyway - it is that my relationship to code is one of control. I decide what the code does, and, interpreter or compiler bugs aside, it will do what I tell it to do.

In contrast, offloading most of the creation process to some coal-guzzling GPUs owned by a tech corp eager to monetize everything feels quite repulsive, and very much the opposite of any definition of freedom. Of course, most people are now used to SaaS offerings, cloud-based services, web apps, anything that drives down the initial cost of things and turns it into a perpetual subscription instead. Sure, you can pretend to be an "overseer", an "architect", a "herder", a "100x developer", but what you actually are is a person trapped in a toxic relationship with companies pretending to sell you the future.

I do not plan to entrust Anthropic, Microslop, OpenAI, MechaHitlerCorp or any of the others with the stewardship of programming in the future, and it boggles the mind that anybody would think this is a good idea.

Communication breakdown

One thing I notice is that the back-and-forth with LLMs usually comes at the expense of interacting either with someone, or, more usually, with something that someone wrote with a purpose. It also, and even more tragically, prevents interaction with oneself, as in many cases the important part is not producing an artefact, but thinking about it enough to be able to produce it.

Now that the result pages of search engines are overflowing with generated slop, finding information has gotten more difficult, so of course there is profit for LLM companies in selling both the poison - letting people generate plausible text on a topic by the truckload - and the cure - using the same product to try to get to the useful parts of the dataset through chatbot interactions.

The information retrieval skills of the general population have never been great, but people involved in software development were usually at least decent at it. What I notice today is that this is starting to become an issue: while LLMs can regularly produce correct fact-shaped replies that do not appear in search engines, they will also regularly produce absolute garbage replies with confidence, all while being anthropomorphized with stuff like "I am thinking...", "checking...", "yes, that's it!", giving even more of an appearance of confidence in the answer. But when you check against reality by going to the documentation, you notice that the "PhD-level" benchmark darling Gemini still has MariaDB and MySQL too semantically close in its weights to avoid conflating the two.

If someone wrote a blog post 10 years ago on a topic, that information has a pretty high chance of being outdated, but the important part is that it was probably a cohesive piece written with purpose at the time, and you can get information from that. An LLM answer has none of that: it will produce a statistically plausible piece of text that may contain relevant facts as a byproduct.

The demise of Stack Overflow is a slightly tangential topic, because it had started going the way of the dodo for a lot of reasons even before the start of the DRAM-hogging craze, but the fall accelerated quite a bit after that. What the LLMs promise is that you do not have to interact with humans to do things, or even to learn how to do things. Humans are complicated, humans have feelings, humans need to eat, humans need to be paid.

You need a barely-good-enough logo and you do not know how to use paint? Ask an LLM

You need to write a React frontend and do not want to lose your sanity? Ask an LLM

You need to translate this into English and you are not confident? Ask an LLM

You need to create an ad for your bus network and do not want to bother taking one (1) picture? Ask an LLM

You need to rewrite this project in Go and it would take you too much time? Ask an LLM

You need to create a Jira ticket from this commit? Ask an LLM

You need to plan your vacation in Spain? Ask an LLM

You need to know a detail of French labor law? Ask an LLM

(all of these are from experience, by the way, and if you are a colleague stumbling upon this, know that it is not meant as a personal attack)

Interacting in a closed loop with a piece of software designed to be sycophantic can be very damaging, and I do not look forward to seeing the state of online communities in a few years. I am guessing that is part of the plan: making things so miserable for everyone that we have to use their tools to access information.

Training is always an issue

One of the common rebuttals to "why would you give them so much power over you" is that there are "open" models which can run locally. Yes, there are, but most if not all of the usable ones are produced by similarly morally bankrupt companies, like Meta (which has already been an enabler of various genocides).

The breadth of data pilfering required to train a decent LLM is hardly achievable by an individual, which is why there are now swarms of bots scouring the web and DDoSing us, searching for any new bit of data that might not be in the previous datasets.

Therefore, if your backup plan is "I will just run an 'open' model", well, your model still needs to be updated to stay useful, because platforms change, APIs break, software evolves, and knowledge shifts. But yes, indeed, it will be useful for a while, provided you can scrounge up enough compute and memory to run it at an acceptable speed without burning down your house. The question is: will you stomach the speed and the results after being spoon-fed the proprietary ones, sold at a loss, for months or years?

Additionally, what those companies get out of your usage is very detailed statistics on ways of prompting, at scale, and I do not believe them for one second when they say those interactions, or parts of them, do not become part of their training set. That is something the most open models cannot easily replicate, which further widens the gap.

P.S.: I do not want to know that you fed this 10-minute read into your favourite racist text slushie machine to get a summary, and if you email me about that I will make fun of you on Mastodon - yes, that happened the last time I got onto Hacker News, so consider this fair warning.

If you have remarks or suggestions concerning this article, please by all means contact me.