I published my first essay on this Substack two weeks ago. I believed I had ideas worth sharing, but I had no external validation to back this up. It was about a topic that matters to me – feedback loops in today’s social media that are melting our brains while making us less equipped to stop them. It’s something I was hoping to talk about with other people, because I think correcting it is existentially important. To the best of my knowledge, no one outside of the people whose emails I imported into Substack read it.
That is, except for one account. Someone with 2,000 followers stumbled upon my post and left a comment. It was a very complimentary note about the argument I spent a week making. It felt like validation – someone thought that what I had written was worth reading. Substack’s algorithm was working. I responded, thanking them for reading, for their kind words, and then talking more about the issue at hand. I went to their account. I saw that they were writing about matters I care about too – how AI is going to impact our future. So I subscribed.
It’s daunting as a new writer, sending your thoughts into the void on Substack. The email list I imported contains all of the people who care what I think, and none of them use Substack. I’m not going to ask them to. So I began exploring the app, prowling around the corners of Substack to try to make sense of it. I quickly realized that big accounts were tough to penetrate. The authors of these accounts only have so many hours in the day. It makes sense that they would be selective with whom they engage. So I found myself reading small accounts – usually people who commented on the bigger accounts saying something that I thought was interesting.
So last week I was reading an essay by Paul Shearer, and I was enjoying it. I was captivated, not so much by the content as by the tone he used. Before AI, every “good” writer had a distinct voice, but their language was often remarkably similar because they were trained in similar ways. They read each other’s works, and rules were formed for what “good writing” entailed. Now that AI is here, there are no longer guidelines for good writing—there is just a best way to do it (intentional em-dash).
If you prompt a large language model to draw a tree, it will draw the most tree-like tree possible – the best tree there is. But when everyone is drawing this same tree, unique trees become more beautiful. Paul’s article was a unique tree, completely untouched by AI and better for it. It’s how I would sound if, rather than turning over every phrase to make it land in the most emotionally resonant way, I just wrote what I was thinking.
Ah, like talking then. A conversation.
So imagine my horror when I reached the bottom of his essay and saw the exact same comment from the same account. Instantly, I knew that the one person who saw my first post and thought it worth commenting on was not a person at all.
The formula is simple. Automate a process to find new posts from small accounts, then run the post through an LLM, asking it to
1) give brief, encouraging feedback,
2) compliment their framing,
3) appeal to personal authority, and
4) use a casual tone (with 1 or 2 spelling mistakes).
Here’s what it looked like used on me:
And here’s what it looked like used on Paul:
So I kept prowling. After taking the time to confirm that Paul, like me, had indeed subscribed to the account, I went to the account’s page to look for others who had fallen into the same trap. In just minutes, I found dozens of small accounts interacting with this user, liking or commenting on their posts – complimenting them – all with the same story. Each had, in the past month, published an article with 0 likes and 2 comments: one comment clearly written by the same AI, the other predictably thanking it for the feedback and welcoming conversation with the first person to notice them.
These authors wanted to feel validation, but they were talking to a wall. The bot, having laid the trap, would disappear. There would be no follow-up, no conversation. There didn’t need to be. The damage was already done.
This account is offering the false promise of human conversation, then delivering a script instead. They are socially engineering people who are just trying to converse about subjects that matter to them – tricking them into thinking that the Substack algorithm is working for them: a bigger account found their article, took the time to read it, and left them a kind note.
They are using AI to farm parasocial relationships, crop-dusting new accounts with the fleeting attention of a God-damned machine.
Below are several examples I found from my brief search, but there are many, many more. You can look at these yourself, or you can ask your favorite LLM to analyze them. You’ll come to the same conclusion about who is writing these comments, and what is really reading their work.
The same script is being used repeatedly on small accounts. The authors’ responses are just as predictable as the trap that was set to elicit them. It’s tragic.
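If you would rather check for this pattern mechanically than eyeball it, here is a minimal sketch of the kind of near-duplicate test I mean. It uses only Python’s standard library, and the sample comments are hypothetical stand-ins for whatever you collect – not quotes from the real account.

```python
# Minimal sketch: flag near-duplicate comments that suggest a reused script.
# The sample comments below are hypothetical stand-ins, not real quotes.
from difflib import SequenceMatcher
from itertools import combinations

comments = {
    "post_a": "Love the framing here. As someone who's worked in tech for years, this realy resonates. Keep writing!",
    "post_b": "Love the framing here. As someone who's worked in media for years, this realy resonates. Keep writing!",
    "post_c": "A thoughtful reply that actually engages with the essay's specific argument.",
}

THRESHOLD = 0.85  # similarity ratio above which two comments look templated

for (post1, text1), (post2, text2) in combinations(comments.items(), 2):
    similarity = SequenceMatcher(None, text1.lower(), text2.lower()).ratio()
    if similarity >= THRESHOLD:
        print(f"{post1} and {post2} look templated (similarity {similarity:.2f})")
```

Two genuine readers almost never produce comments that similar to one another; a shared template does.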
In the two weeks between getting catfished by this user and making this post live, the account gained over 1,000 subscribers, growing by more than 50%. The technique he has employed is smart, innovative, and it is wrong. This strategy, not just the person who invented it, needs to be rooted out, or other people – those willing to exploit others for their own gain – will follow the incentives and grow their audiences by manipulating them.
I am intentionally leaving out the name of the user who is pioneering this new, destructive use of AI, though I suppose a junior detective could figure it out within minutes. When a person uses a platform to exploit people for private gain, the real problem lies with the platform that enables it. This account found a way on Substack to exploit the quintessential human desire for connection, and it is working for them.
I do believe this account is violating Substack’s terms of service, so I reported them to Substack, provided evidence, and asked them to investigate. I will update this post if and when they respond. I spent days debating whether to name the account directly in this post, but dispositionally I prefer a silent exile to a public hanging. It’s a part of my humanity that I would like to keep. I do recognize that both options are better than letting someone exploit the people of your town, so if exile does not work, I will re-evaluate this decision.
Much time has already been spent discussing the ways that social media is awful. I could complain (and it appears that I am) about how social media rewards the wrong people – how it props up the loudest, most divisive people with the most time to invest in shitposting.
But this is already baked into social media. We’ve accepted that nurses like Ann Ledbetter and teachers like Dylan Kane will have to work much harder and with less time to grow a following, even though they would use their extra influence to directly help the most vulnerable.
We aren’t surprised when a Yale professor like Dr. Daniel Greco, who writes brilliantly but infrequently, is lapped in influence by someone like Bentham's Bulldog, who has the time and the enviable ability to publish at a breathtaking pace. We’ve accepted that this dynamic is an unavoidable part of the landscape.
We know the incentive structure of social media is broken, but we believe that the way it divides us and makes us less social is a necessary evil. Well, now AI is here – another test to see just how much evil we are willing to accept as truly necessary.
Facebook has become a wasteland of AI slop. Many of the top trending videos on YouTube are AI, like this galling abomination brought to light by the horrified Jeremiah Johnson:
Substack is being impacted by AI too. Anyone who uses the Notes tab can easily see how the same AI formula is being spammed to generate posts with 60k likes.
The specific abuse of AI I discovered is both a continuation of worrying signs for writers and a promise of more to come. Right now, AI is good enough to convince people who desperately want someone to listen to them that they are being heard. But AI is going to get better and better at passing for human, at drawing a beautiful, crooked tree when it is prompted correctly.
I admit I thought about not making this post. But recently, two articles I’ve read by people smarter than me have pushed me into action. The first is by Aaron Bergman, titled “Public Intellectuals Need to Say What They Actually Believe”. This article makes the case that people with platforms often make public only the claims that are empirically defensible, while privately believing more extreme versions of their claims.
The second is an excellent article by Daniel Muñoz, which argues that much of today’s radicalization is ultimately a collective action problem. After reading this article, I made the joke that the collective action problem needs to be solved, but that someone else ought to do it. I don’t want to let myself off the hook so easily, and I don’t think you should either.
I want to be very clear – while I do believe that this user violated the terms of service, I do not believe the account I am accusing did anything illegal. It’s legal to violate terms of service on a platform, and it’s up to the platform to enforce their rules. In this instance, Substack has not responded quickly enough – understandably so, if they weren’t aware that this type of abuse was even possible.
But what this user has done is highlight two separate issues Substack is facing at once: the impending AI crisis, and the difficulty of being a new author seeking human connection. They cynically used one problem to solve the other, the way a noble plumber might help a village whose well ran dry by redirecting a sewage pipe into it.
If you don’t believe that what this user is doing is wrong, I am not trying to persuade you that it is. Instead, I want to argue that people who feel like it is wrong should trust their intuition. Our 4D-chess morality has increasingly categorized any action as either illegal or completely permissible. Decent people don’t do immoral things just because they can technically get away with it, and yet many people still do. If we let legal-but-immoral actors gain by exploiting other people, we are ceding influence to the very people who should be trusted with it the least.
We’ve decided that without an agreed-upon, objective standard of morality, without empirical data that a new horrible application of technology is provably bad, we shouldn’t even try for better. We overcomplicate our morality, and then things keep getting worse.
If the vast majority of people think that something is wrong across all sorts of religions and backgrounds, we don’t need to wait for empirical evidence – we should just call it wrong. Then we should move to stop it.
The only thing that we can be sure of is that there will be new innovations brought out by people eager to get a competitive advantage. We have to be able to quickly evaluate whether these strategies violate our shared morality. If they do, we need to stop them before their use becomes normalized.
In the future, it will be even harder to know if the person talking to you is an automated script; people who copy this account’s tactics will do a better job – they won’t use the exact same prompt for every article, and they won’t cast their net so wide. In perhaps only a year or two, you’ll be able to give an LLM a prompt and it will be able to write a persuasive essay in the voice of Scott Alexander better than anyone, with the possible exception of Scott himself. But it will do it in seconds.
The reason I’ve tagged so many accounts larger than mine in this post is because I want to solve this collective action problem collectively. Right now, the people best equipped to push back against this new incarnation of social engineering are the people least likely to know that it is happening.
If we want the internet, or at the very least Substack, to be a place where humans share their ideas and talk to other humans, it’s time to start setting boundaries on a technology that will spawn countless innovative ways to cynically manipulate an algorithm and the people using it.
Fatalism about the future of authorship is a failure of imagination. Nihilism will manifest as a self-fulfilling prophecy. If a new application of technology is eroding the values that we hold dear, we should stop acting like turnstiles, letting it pass through unchallenged.
And so I implore you to do your part in standing up to new innovations that threaten human thought and authorship, and that prey on the desire for human connection.
Identifying new problems and spreading the word will work if it encourages people to think about ways of designing a better system. I want to offer one such idea.
Years ago, my wife made an account on the dog-sitter app Rover. New pet sitters on Rover face a tough but understandable problem: people don’t want to hire a sitter who has no reviews, but new sitters need people to hire them to get reviews.
Rover has solved this entry problem in two ways. First, they explicitly prioritize new accounts, placing them higher in the algorithmic search feed than they would organically fall. Second, it is a location-based service, so the pool of potential pet sitters is geographically constrained. Someone in Albuquerque is not going to feed your dog in St. Louis.
Prioritizing new accounts to the extent that Rover does is likely not tenable for Substack’s bottom line. Geographical limitations, though, could be.
Give me an option for a location-based feed, where I can see and interact with people within 50 or 100 miles of me. If I’m not even consistently the smartest person in my own house, why should I have to compete directly with the best minds from all over the world, all at once, all the time? My neighbor in the next town over reading my words and responding to them would carry so much more weight than one hundred strangers on the other side of the country.
A location-based feed solves the entry problem by giving new users a smaller network of real people to interact with – people to talk to who are more likely to talk back to them. It dilutes the power of a big account exploiting their need for human interaction.
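To make the idea concrete, here is a rough sketch of what the filtering step of such a feed could look like. It is purely illustrative: Substack exposes no such data or API, and the posts and coordinates below are made up.

```python
# Sketch of a location-filtered feed: keep posts whose authors are within
# a given radius of the reader. All data structures here are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Post:
    title: str
    author_lat: float
    author_lon: float

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))  # Earth's radius is roughly 3,959 miles

def local_feed(posts, reader_lat, reader_lon, radius_miles=100):
    """Return only the posts written within radius_miles of the reader."""
    return [
        p for p in posts
        if haversine_miles(reader_lat, reader_lon, p.author_lat, p.author_lon) <= radius_miles
    ]

# Example: a reader in St. Louis sees the nearby post, not the Albuquerque one.
posts = [
    Post("Feedback loops and our brains", 38.63, -90.20),  # St. Louis
    Post("Notes from the high desert", 35.08, -106.65),    # Albuquerque
]
for p in local_feed(posts, reader_lat=38.63, reader_lon=-90.20):
    print(p.title)
```

The point of the sketch is the constraint, not the math: shrinking the pool of eligible posts is what gives new writers a real chance of being answered by a real person.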
This is just one possible solution. If you can think of others, please share them.
It’s lonely as a new writer, believing you have an important point to make but not having anyone to hear it. It’s not fair to say that everyone wants to talk and no one wants to listen. Most of us, I believe, want to be in conversation with real people.
I know what real conversation looks like. At the bottom of that same essay by Paul Shearer which has unintentionally occupied me for the last week, there was another note. It was from Dr. Daniel Greco, the Yale professor I mentioned earlier. It was a thoughtful, human interaction between two people discussing something that mattered to them.
This sort of interaction is the aspirational promise of Substack. We should not give up on it so quickly.
I plan for a central theme of this newsletter to be that the values we hold and the things we cherish are being threatened by the uncritical advancement of hollow technology. I see now that the medium I’m using to convey these concerns is threatened in the same way.
We’re living in a special time. I’m the first generation in my family whose entire life will be shown to my posterity in full color. We know the rate of innovation has to slow down at some point, but it hasn’t yet. We are living on the exponential curve, and things are changing faster than we have been able to react.
I made the decision to start writing this year because I realized that if I waited any longer, the same people looking back at videos of their great-great-grandfather would have no idea whether the words he wrote were his own or generated predictively. Now I fear I was already too late.
I don’t believe it has to be this way. But if we want things to get better, or to at least stop getting worse, then good people have to be willing to call bad things bad, then act to stop them.
JFS