/Deslop
tahigichigi.substack.com

Just don't use LLMs to generate text you want other humans to read. Think and then write. If it isn't worth your effort, it certainly isn't worth your audience's.
It comes down to this for me as well. In the same way I never open auto-generated mails, I see no reason to read text other people have had an LLM write for them.
What can be nice is to write down what you want to say badly, a rough scenario or clumsily written sentences, and then ask the LLM to reformulate it into proper, nicely written text. But even then, there's a good chance the stylistic issues described in the article will show up despite your having carefully crafted the content.
No, the output is normalized garbage. Don't do this. Delete it instead. If it's worth saying, you'll spend the time to edit your original text.
The best is when you use a speech-to-text app like Whispr Flow and just ramble to the AI about an idea or an experience, get your thoughts out, and it returns a silhouette of an insight or article.
So when people say they never get a good output, it's because they're trying to go from
thought > article
instead of
thought > exploration > direction > structure > outline > article
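The staged pipeline above can be sketched as a chain of prompts. This is a minimal, hypothetical sketch: `ask` is a stand-in for whatever LLM call you actually use (API, CLI, or chat window) and is stubbed here so the structure runs on its own.

```python
def ask(prompt: str) -> str:
    # Stub standing in for a real model call; a real implementation
    # would send the prompt to your model of choice and return its reply.
    return f"[model response to: {prompt[:40]}...]"

def thought_to_article(thought: str) -> str:
    # Each stage feeds the previous stage's output back in,
    # rather than jumping straight from thought to article.
    exploration = ask(f"Explore this idea with me, ask questions: {thought}")
    direction = ask(f"From this exploration, pick the strongest angle:\n{exploration}")
    structure = ask(f"Propose a structure for a piece arguing:\n{direction}")
    outline = ask(f"Expand the structure into a detailed outline:\n{structure}")
    article = ask(f"Draft the article from this outline, in my voice:\n{outline}")
    return article

draft = thought_to_article("why standups feel useless on small teams")
print(draft)
```

The point of the sketch is the shape, not the prompts: each intermediate artifact gives you a place to steer before the model commits to a draft.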
Yes, I've had great results with a similar workflow.
I record myself rambling out loud, and import the audio into NotebookLM.
Then I use this system prompt in NotebookLM chat:
> Write in my style, with my voice, in first person. Answer questions in my own words, using quotes from my recordings. You can combine multiple quotes. Edit the quotes for length and clarity. Fix speech disfluencies and remove fillers. Do not put quotation marks around the quotes. Do not use an ellipsis to indicate omitted words in quotes.
Then chat with "yourself." The replies will match your style and will be source-grounded. In fact, the replies automatically get footnotes pointing to specific quotes in your raw transcripts.
I also like brainstorming by generating Audio Overviews, Slide Decks, and Reports in NotebookLM. The Audio Overviews don't sound like AI writing. The Slide Decks and Reports do sound like AI writing, if you use the defaults, but you can use custom prompts.
This workflow may not save me time, but it helps me get started, or get unstuck. It helps me stop procrastinating and manage my emotions. I consider it assistive technology for ADHD.
I would also point to a human-generated (and maintained) list:
which is also very useful for building your writing skills and avoiding these kinds of issues.
not as catchy though is it
> The elephant in the room is that we’re all using AI to write but none of us wants to feel like we’re reading AI generated content.
My initial reaction to the first half of this sentence was "Uhh, no?", but then I realized it's on Substack, so it's probably more typical for that particular type of writer (writing to post, not writing to be read). I don't even let it write documentation or other technical things anymore, because it kept getting small details wrong or subtly injecting meaning that isn't there.
The main problem for me isn't even the eye-roll-inducing phrases from the article (though they don't help); it's that LLMs tend to subtly but meaningfully alter content, causing the effect of the text to be (at best slightly) misaligned with the effect I intended. It's sort of an uncanny valley for text.
Along with the problems above, manual writing also serves as a sort of "proof of work" establishing the credibility and meaning of an article: if you didn't bother taking the time to write it, why should I spend my time reading it?
Had the same thought reading this. I haven't found a place for LLMs in my writing and I'm sure many people have the same experience.
I'm sure it's great for pumping out SEO corporate blogposts. How many articles are out there already on the "hidden costs of micromanagement", to take an example from this post, and how many people actually read them? For original writing, if you don't have enough to say or can't be bothered to put your thoughts into coherent language, that's not something AI can truly help with, in my experience. The result will be vague, wordy, and inconsistent. No amount of patching over, the kind of "deslopification" this post proposes, will salvage something that minimal work has been put into.
Indeed. I have never used an LLM to write. And coding agents are terrible at writing documentation: it's just bullet points with no context, unnecessary icons, and nothing that's possible to understand. There's no flow to the text, no actual reasoning (only confusing comments about changes made during development that are absolutely irrelevant to the final work), and yet it's somehow too long.
The elephant in the room is that AI is allowing developers who previously half-assed their work to now quarter-ass it.
"Please write me some documentation for this code. Don't just give me a list of bullet points. Make sure you include some context. Don't include any icons. Make sure the text flows well and that there's actual reasoning. Don't include comments about changes made during development that are irrelevant to the final work. Try to keep it concise while respecting these rules."
I think many of the criticisms of LLMs come from shallow use of it. People just say "write some documentation" and then aren't happy with the result. But in many cases, you can fix the things you don't like with more precise prompting. You can also iterate a few rounds to improve the output instead of just accepting the first answer. I'm not saying LLMs are flawless. Just that there's a middle ground between "the documentation it produced was terrible" and "the documentation it produced was exactly how I would have written it".
Believe me, I've tried. By the time I get the documentation to be the way I want it, I am no longer faster than if I had just written it myself, with a much more annoying process along the way. Models have a place (e.g. fixing formatting, or filling out, say, sample JSON returns), but for almost anything actually core content related, I still find them lacking.
I guarantee if you give me your prompt and the output you got I can fix it and get you a 10x better output in less than 5 minutes.
DM me on substack if you don't wanna post it here, I'm honestly happy to help wherever I can.
I won't share work-related stuff for obvious reasons, but feel free to post an example of some LLM-generated (technical) article or report of yours. (I also doubt you'd be able to understand, in 5 minutes, the subtle differences I take issue with in LLM output in the first place.)
But are you gaining a meaningful amount of time, and are your coworkers that thorough?
Honestly, I just don't read the documentation three of my coworkers put out anymore (33% of my team). I already spend way too much time fixing the small coding issues I find in their PRs to also read their tests and docs. It's not their fault: some of them are pretty new, the others have always taken time to understand things, and their output was generally below average in quality (their people/soft skills are great, and they have other qualities that balance the team).
Why not write it yourself?
Sure, but that's part of my point. It gives a facade of attention to detail (on the part of the dev) where there was none.
OP here. You're absolutely right!
Most people drop a one line prompt like "write amazing article on climate change. make no mistakes" and wonder why it's unreadable.
Just like writing manually, it's an iterative approach and you're not gonna get it right the first, second or third time. But over time you'll get how the model thinks.
The irony is that people talk about being lazy for using LLMs but they're too lazy to even write a detailed prompt.
I have tried using them, both for technical documentation (think README.md) and for more expository material (think wiki articles), and bounced off of them pretty quickly. They're too verbose and focus on the wrong things for the former, where the output is intended to get people up to speed quickly, and they suffer from the things I mentioned above for the latter, forcing me to rewrite a lot and causing more frustration than just writing it myself in the first place.
That's without even mentioning the personal advantages you get from distilling notes, structuring and writing things yourself, which you get even if nobody ever reads what you write.
> > The elephant in the room is that we’re all using AI to write but none of us wants to feel like we’re reading AI generated content.
Reminds me of a quote from St. Augustine's autobiography, "Confessions":
"I have known many men who wished to deceive, but none who wished to be deceived."
This article itself feels LLM written.
It is also an advertisement for a magic prompt that makes an LLM edit text to look less LLM-y.
This anon gets it
this sentence feels LLM written
Your response feels LLM written
We wrote the paper on deslopping LLMs and their outputs: https://arxiv.org/abs/2510.15061
Wow. We are not worthy.
What would you say are the top 2 red flags missing from the piece? Would love to know
>The elephant in the room is that we’re all using AI to write
This makes me sick.
so... anyone who's not a native English speaker, reading all these overpuffed texts, is cooked? The nuances are lost on me most of the time
And even if the style is (or isn't) LLM-ish, that doesn't tell you whether the (even filtered) content makes sense, is correct, or is BS.
Style does matter, sure...
https://hbr.org/1982/05/what-do-you-mean-you-dont-like-my-st...
As they say, "Bait used to be believable."
what's unbelievable about it?
Please try and follow this advice, because there's nothing more annoying than some comic book guy wannabe moaning about AI tells while I'm trying to enjoy the discussion.
You just need to use this list as a prompt and instruct the LLM to avoid this kind of slop. If you want to be serious about it, you can even use some of these slop detectors and iterate through a loop until the top three detectors rate your text as "very likely human."
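The loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: `SLOP_LIST`, `rewrite`, and `score_human` represent your slop list, your LLM call, and the averaged "likely human" score from whatever detectors you trust; they're stubbed so the control flow is runnable on its own.

```python
SLOP_LIST = "avoid: 'delve', 'the elephant in the room', 'in today's fast-paced world'"

def rewrite(text: str) -> str:
    # Stand-in for an LLM call that takes SLOP_LIST as an instruction;
    # here it just removes one known slop word.
    return text.replace("delve", "dig")

def score_human(text: str) -> float:
    # Stand-in for the top three detectors' average "very likely human" score.
    return 0.5 if "delve" in text else 0.95

def deslop(text: str, threshold: float = 0.9, max_rounds: int = 5) -> str:
    # Iterate until the detectors are satisfied, or give up after max_rounds.
    for _ in range(max_rounds):
        if score_human(text) >= threshold:
            break
        text = rewrite(text)
    return text

result = deslop("Let's delve into the topic.")
print(result)
```

The `max_rounds` cap matters in practice: detector scores are noisy, and without it the loop can churn forever on borderline text.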
I'm surprised that there is not a "skill" for that attached to the article.
There is a pdf with a deslop prompt at the end of the article. Prob the skill definition you’re looking for.
there's a downloadable prompt at the bottom that'll kill any slop dead. Will replace with a Notion link to keep it updatable
There’s a really cool technique Andrew Ng nicknamed reflection, where you take the AI output and feed it back in, asking the model to look at it - reflect on it - in light of some other information.
Getting the writing from your model, then following up with "here's what you wrote, here are some samples of how I write, can you redo that to match?" makes its writing much less sloppy.
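That reflection step is mostly just prompt assembly. A minimal sketch, assuming `ask` stands in for your actual LLM call (stubbed here so the structure runs standalone):

```python
def ask(prompt: str) -> str:
    # Stub standing in for a real model call.
    return f"[rewritten draft based on prompt of {len(prompt)} chars]"

def reflect(draft: str, samples: list[str]) -> str:
    # Feed the model's own output back, alongside samples of your writing,
    # and ask it to redo the draft in your style.
    prompt = (
        "Here's what you wrote:\n" + draft
        + "\n\nHere are some samples of how I write:\n" + "\n---\n".join(samples)
        + "\n\nCan you redo the draft to match my style?"
    )
    return ask(prompt)

result = reflect(
    "The elephant in the room is...",
    ["I write short. Plain words. No filler."],
)
print(result)
```

The more (and more varied) samples you include, the better the match tends to be, since the model is imitating the samples rather than guessing at "your style" in the abstract.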
tone of voice match is similar but different. That's why I made toneofvoice.app
AI can copy 90% of your tone of voice but still use em dashes and corrective antithesis.
Ideally you'll have both /deslop and /soundlikeme (coming soon)
Just seems like the author could have said "write the damn thing yourself" and been done with it.
It will definitely help, but also some people, especially in marketing/sales, were writing like that before LLMs. So you should not only write the thing yourself, but also learn some good writing style.
This. Back in the day, the Cluetrain Manifesto said that corporate writing sounds "literally inhuman". Human beings don't talk the way corporations do. And people want human connection, not the impersonal touch of a corporation. So learn to write like a human being, not like a corporate drone.
but if I did you wouldn't be writing about it
wow mea culpa I guess