Almost no matter what you do for a living, you create a timestamped log of your actions. The actions themselves, and sometimes their results, are written down, along with their date and time. But the high-level thought process that leads to action usually stays in your head; it’s not logged automatically (unless you disclose it in an email).
If you use AI, that may change. Perhaps not completely, but parts of your higher-level thinking may now leak out to a machine that writes them down, date and time attached. Why that could happen, and what you could do about it, are the topics of this text.
Let’s say you were an engineer in the tech industry decades ago. Perhaps you used email once in a while. Often, you spoke to people in the real world. You worked on a project on your own machine. When you (or your team) were done, the new work would be pushed out (compiled, deployed, shipped to customers, etc.). The paper trail was mostly the work itself; some timestamps did exist, but by and large, the work spoke for itself.
Then, at some point, version control became the dominant paradigm. Eventually, git and GitHub won the battle. Now, every single action you take, in great detail, is written down for all to see.
If you’re not a programmer, the details may look different, but the idea is the same: your actions are written down in detail, but not your high-level thoughts. Your actions can be defended, if that time comes. Your high-level thoughts about them, not so much. But in the old days, your high-level thoughts stayed in your head.
Now this might be changing.
If you use an AI agent for a project, with you providing direction and the AI generating most of the output, but also providing feedback and criticizing the approach when needed, the picture is quite different. There is another will in that room with you.
The human + AI pair is sometimes described as a “centaur”: half human, half horse (or, in this case, half machine). This is more like a partnership. You do not give the AI guidance at a low level; you don’t tell it how to move the mouse. Plug enough information sources into it, and it can operate over a much wider range. And then your guidance can and should be at a higher level.
It has to be at a higher level. Current models (February 2026), if given enough information, can make broad decisions. But they work better, are more useful, produce better results, if they understand the whole context. You have to tell them more, to get more out of them. They may raise objections, and you want to keep them on track. So then you go ahead and you open your big mouth:
The “Human” in the above exchange was you, assuming responsibility for a decision. You use an Enterprise account with an AI provider. Your employer is paying for that account. Your employer can access the entire history of all your conversations.
If you still want the best results, you still have to communicate with the AI at a high level, like an adult. But if the log of all your conversations, including poorly thought-out declarations of intent, seems problematic, there are a few things you can do about it.
Open a personal account with the AI provider. You own it, you pay for it, and it shortens the list of entities with access to your thoughts. There are still a few issues:
- you pay for that account
- it may not be plugged into the same data sources as the corporate account (your AI is more blind, so it’s slightly dumber)
- it may not be able to generate artifacts into the corporate repositories, data stores, etc. (you copy/paste all things)
- you may not be allowed access to such an account when using corporate laptops
- the logs of your conversations are still owned by the AI provider
- you may literally be forbidden to do so, by various rules and regulations
On the flip side, clearly, your personal account can still be used for brainstorming. You could generate a detailed plan this way, and have the corporate centaur execute the plan. Results may vary, since the personal account may still lack access to the relevant data sources.
The personal AI account is an imperfect, narrow-scope, but decent, low-effort solution. What else is out there?
This is called local inference. You could run your own AI model and AI harness, maybe even at home. You do need technical skills, and it’s not cheap. But it’s definitely doable.
The technical resources for local inference can be boiled down to: a mediocre GPU chip, with a large amount of fast RAM plugged into it. If that sounds like a foreign language, let me break it down for you: a top-shelf Mac Mini can sort of do it. Better yet, a top-shelf Mac Studio can definitely do it. See this page for details.
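As a rough sizing sketch (my own back-of-the-envelope numbers, not taken from the page above): the memory a model needs is roughly its parameter count times the bytes per weight after quantization, plus some margin for the KV cache and activations. The 20% overhead below is an assumption; real usage varies with context length and runtime.

```python
def vram_estimate_gib(params_billions: float, bits_per_weight: int,
                      overhead: float = 1.2) -> float:
    """Rough VRAM / unified-memory estimate for local inference.

    overhead=1.2 is an assumed ~20% margin for the KV cache and
    activations; actual usage depends on context length and runtime.
    """
    bytes_needed = params_billions * 1e9 * (bits_per_weight / 8) * overhead
    return bytes_needed / 2**30  # convert bytes to GiB

# A 70B-parameter model quantized to 4 bits per weight:
print(round(vram_estimate_gib(70, 4), 1))  # → 39.1
```

This is why machines with large pools of fast unified memory, like a high-end Mac Studio, are attractive here: a mid-size quantized model fits comfortably, while the same model would not fit on a typical consumer GPU with 16 or 24 GB of VRAM.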
If you want to pursue this path, I have a pretty detailed HOWTO here:
https://github.com/FlorinAndrei/local-inference-docs
You will not run the absolute top models this way. You will have to buy the hardware. You will have to maintain the whole thing. But it will provide an okay alternative.
This might also be useful if you can’t afford a paid AI account anymore. Depending on where the economy goes in the future, this may or may not become the case.
Or, if you do use a paid AI account, but you run out of tokens with it, local inference provides an unlimited supply of tokens. The quality will be somewhat lower (the models are smaller), but not catastrophically so. For certain applications, there are open-weights models that perform pretty well at home. See the project linked above.
If you own the inference hardware, then there is no AI provider with access to your logs. But does that mean your logs are entirely safe? Depending on what you do, the answer might still be negative. So here’s another solution:
Keep your thoughts to yourself.
If you persuade people for a living, then you already know this: you can work with other entities that possess their own mind and their own will, and you can still keep your high-level thoughts to yourself. In this case, it might be a lot easier for you to simply assume the AI is yet another partner: powerful and useful, but still a partner who does not need to know all your inner thoughts. So, you compartmentalize.
If you’re not a professional persuader, this might be harder to do. But it’s still probably doable. Use the corporate AI account with a provider you do not control, but be careful what you type into the prompt. You will be Dr. Jekyll to some, Mr. Hyde to others, Prof. Moriarty to yet others, and Capt. Obvious to others still. Inevitably, this will lower the performance of the human/AI centaur.
Even if you’re used to showing different sides to different people, the issue remains that AI is more powerful when it knows more about your plans.
In the late 1700s, English philosopher Jeremy Bentham proposed plans for the most efficient prison: a system where the largest number of inmates could be watched by the smallest number of officers. The idea was to build the prison in a circular layout, with the officers at the center, giving them low-effort, discreet access to all the cells.
By not knowing when they were being watched, but potentially being watched at any given moment, it was thought the inmates would stay on their best behavior all the time. Bentham called it the Panopticon. The name is Greek: pan-optes, or all-seeing.
It is sometimes useful to keep this proposal in mind when considering the directions modern technology is taking.
