Yesterday, Alexandra Alter at The New York Times broke the news that Hachette has canceled a book, Shy Girl by Mia Ballard, because the author used LLMs to write the book:
“Hachette remains committed to protecting original creative expression and storytelling,” a Hachette spokeswoman said. She added that Hachette requires all submissions to be original to the authors, and asks authors to disclose to the company whether they are using A.I. during the writing process.
The timeline of this story seems to be that the book was originally self-published and sold decently. There were accusations of AI use—and the author admitted to stealing a painting without credit for the self-pub cover—but the book was picked up by Hachette UK and published in November of 2025. The book continued to draw accusations of LLM use, as well as criticisms that it was poorly written and filled with errors, including a nearly 3-hour YouTube review of the book titled “i’m pretty sure this book is ai slop” that has been viewed 1.2 million times (!). (Imagine how many human-authored books could have been read with all that time…) Despite these accusations, Hachette US was going to publish an American edition this spring. Now, the US edition has been canceled and the UK edition discontinued.
It is unclear exactly how AI use was determined. The NYT says Hachette did so after “conducting a thorough and lengthy review of the text.” Jane Friedman said that “it looks like they only canceled it after the New York Times queried them about it and presented evidence of AI use.” And I will note that the author seems to have basically confessed in a YouTube comment, although she blames it on a friend who edited the book and “changed a lot of the wording” and then claims she didn’t have time to do a final pass herself to check what was changed. (Authors: Read your book before you publish it. You are responsible for edits you accept!) Others have pointed out how the author’s interviews have all the usual LLM tells. The NYT reported that Ballard reiterated the claim that “an acquaintance she hired to edit the self-published version of the novel had used A.I.” and that her “name is ruined for something I didn’t even personally do.” She says she is pursuing legal action.
If you want to do a deep dive, read this Reddit thread and watch the two-hour-and-forty-minute YouTube video.
Some people have asked why this book was canceled at all, even if it was LLM-written or just slop. Lots of bad books are published, right? First, let me note that more than a few published books have been partially or entirely written by LLMs. But those books have all disclosed the fact both to the reader and to the publisher. LLM use is not entirely banned from publishing. In general, the safe thing to do is to disclose any LLM use beyond basic things like spellcheck or research.
The published books that have used LLMs used them in thoughtful, artistic ways, not simply to generate a book the author didn’t “have time” to write or even read before it was uploaded. What we’re talking about in this case is something closer to traditional copy-and-paste plagiarism than anything creative.
The issue here seems to be that the author lied about it and signed a contract attesting to sole authorship. The reason to cancel this book is similar to the reason you cancel a traditionally plagiarized book. There are ethical reasons, sure, but also just plain business ones. It hurts your brand and reputation to publish slop—and it perhaps invites legal headaches given recent court rulings—and those things still matter in publishing. Readers might barely notice what colophon is on a book’s spine, but authors care, and publishers’ reputations affect how books are covered and sold.
Like many, I’ve been thinking and writing about AI and art a lot over the past few years. And I’ve been particularly worried about the expected flood of slop that threatens to clog up all parts of publishing. So, here are some scattered thoughts on the first book that has been publicly canceled over LLM usage.
We all knew a scandal like this would hit sooner or later. Every editor I know has been crossing their fingers that the first big scandal wouldn’t be one of their books or a book on their imprint. What I’ve read of the book was, in my view, bad. But I have to give a point to the AI boosters here. This book got criticism and suspicions of AI use from readers…but clearly some people also bought it and enjoyed it. It sold 1,800 copies in the UK. This is not like the AI romance author story I wrote about last month, where an author was seemingly spamming the Kindle store with hundreds of books that individually weren’t getting much readership but collectively could grift some income. Shy Girl sold copies and tricked at least some readers.
Does this mean LLMs can write artistically now or compete with great or even just good authors? I wouldn’t go that far. The book seems bad. Sloppy. But we must acknowledge that lots of human slop sells too. Perhaps we can say that LLM slop has achieved human slop levels.
Many may wonder how a book this poorly written and also probably AI-generated—again, there were rumors and viral videos about how bad this book was—could have gone through the vetting process and layers of gatekeepers in publishing. Well, it probably didn’t. It was self-pubbed and seemingly sold without an agent. When a US publisher acquires an already published book, it is rare to do much editing. So the question is: How much editing did Hachette UK do? I have no inside knowledge in this case. My suspicion is not much editing was done. Probably a copyedit and proofs for typos. Perhaps nothing more. Again, maybe I’m wrong. But it seems hard to believe a book with this many issues went through a rigorous vetting process.
This is not to say there aren’t many great self-pub books. There are. But big publishing is a business and many publishers have been grabbing bad self-pub books (and fan fics) because they were selling. The incentive in such cases is to take over the sales as quickly as possible, or else you risk the book’s customers running out by the time your edition hits the marketplace. This scandal shows the risks to that approach.
In general, agents and editors may need to pay attention (if they are not already) to how LLMs are developing and what tics and tells LLM text displays.
For four years now, people have been insisting everyone will “get used to it” or “get over it” and just all accept LLM use in any and all scenarios. Others seem confident LLMs are a bubble and will disappear. I think both camps are naive. With a technology like this, you should expect to see a lot of different reactions. I expect many readers to remain adamantly against LLMs in the art they consume (even if they use them for other things like boring work emails). I would expect another group to not care. And probably a not insubstantial number will actively seek out LLM art and primarily consume it. Although I suspect most of those will spend their money on access to better models to prompt their own works and not on purchasing books someone else prompted.
The segment of readers who want human work—which, yes, might include artistic uses of LLMs as opposed to low-quality slop—are the readers that publishers will need to keep. That’s going to be their business. The slop will always be cheaper elsewhere. The layers of vetting and editing are the advantages of traditional publishing. They provide, or are supposed to provide, a certain level of quality control. A level of trust. Publishers may need to be a lot more careful now. I mentioned in my last piece that over four million books were published last year, and most of those were self-published books. The big increase in books is almost certainly because of AI. Traditional publishing’s survival may depend on creating a safe island in the sea of slop.
Everyone hates blurbing. It is an annoying, awkward, and time-consuming process for everyone involved. It will get worse.
The blurbers of this canceled book likely feel embarrassed and angry. The publisher’s failure can almost be understood by pure profit motives, but blurbing is doing a favor for someone. You don’t get paid to blurb. You put your name on someone else’s work, attesting to its quality. Will authors become even more selective about blurbing? Even less willing to blurb a stranger, and more likely to blurb only friends or colleagues whom they know well enough to trust?
This would be a sad outcome. I know some assume that blurbs are all connections and cronyism, but that isn’t the case. I’ve blurbed authors I’ve never met, and similarly my books have blurbs from authors who had no reason to blurb except that I wrote them a fan letter about how much I loved their work.
In general, I fear everything may get even harder for emerging authors. (Not that it won’t cause problems for established ones too.) If there is a tsunami of slop in every agent’s query inbox, every lit mag’s slush pile, and every editor’s submission stack, then those gatekeepers will have no choice but to figure out a way to drastically filter the flood. Likely, this will mean leaning even more on connections and vetting systems.
Here is a bit of practical advice. Agents are the first filtering system in traditional publishing, and probably the ones hit hardest with slop. One thing I’ve heard is that agents are seeing obviously AI-generated query letters even when the book itself doesn’t seem to be AI (at least at a glance). Those authors are being passed on. Why risk working with an author who clearly uses AI for query letters? Are they someone you can trust? I get why authors might turn to LLMs for query letters even if they avoid them in their books. Queries seem like a crass and uncreative part of publishing. Still, I’d advise not risking it.
For a more hopeful read on how we human authors can navigate a world of robot slop, read this recent piece:
My new novel Metallic Realms is out in stores! Reviews have called the book “brilliant” (Esquire), “riveting” (Publishers Weekly), “hilariously clever” (Elle), “a total blast” (Chicago Tribune), “unrelentingly smart and inventive” (Locus), and “just plain wonderful” (Booklist). My previous books are the science fiction noir novel The Body Scout and the genre-bending story collection Upright Beasts. If you enjoy this newsletter, perhaps you’ll enjoy one or more of those books too.