- The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
In this discussion, with a grand total of 44 WP:!VOTEs in favor compared to two !votes in opposition, there is clear, overwhelming support for adopting Chaotic Enby (talk · contribs)'s proposed amendments to WP:NEWLLM. Closing this under WP:SNOW.
Wikipedia's LLM policy architecture has only really been articulated in the past year. Prior proposals for an immediate, all-encompassing community guideline on LLMs have failed due to the standard issues of addressing complex, large-scale problems all at once: people, even those who broadly agreed with the goals of those proposals, found specific issues with certain parts and critiqued them as too vague or too specific. Consensus has existed on the idea of change, but not on the implementation of change. With this in mind, the community has endorsed Chaotic Enby's proposal in large part due to its enhanced clarity and directness. Participants generally believed that the proposal was good enough to serve as, at the very least, a foundation for future LLM policy. It targets blatantly problematic uses of LLMs, while still giving leeway for what are seen as decent uses (mainly WP:COPYEDITING and translation). This was deemed concise enough to adopt as a guideline in the face of increasing issues relating to LLM use on Wikipedia. It was also applauded for including a segment seeking to ward off baseless or malicious accusations of LLM use against editors whose writing style may resemble that of many LLMs.
The few who didn't support largely argued that the wording was excessively strict. One user viewed it as excessively harsh on light forms of LLM use in drafting, while another opposed on the basis that it wouldn't do much in a world where LLMs are becoming so prominent, preferring instead to educate people on acceptable LLM use on-wiki and to mandate the disclosure of LLM use. Supporters of the proposal argued that people already lied about LLM usage (this was in response to WidgetKid's statement that people will just be incentivized to lie under this guideline). In general, the dispute revolved around the inherent difficulty of leaving the judgement of what is LLM-generated to fallible humans, and whether this meant that we should accommodate LLM usage more or whether it should serve as justification for restricting it further.
While this policy has garnered approval for adoption, it's important to realize that a handful of editors based their support largely on it being a good placeholder guideline, and they either implied or openly stated that they hoped to use it as a stepping stone towards a total LLM ban on Wikipedia. Whether or not the community is willing to go to the lengths of a blanket ban is unclear, and ultimately, it will require a great deal of discourse between fellow Wikipedians. However, these new guidelines are likely to serve as a good base on which to develop future LLM policy on the English Wikipedia (and by proxy the broader WMF), which, in a world where such technology is becoming ever more prevalent and seemingly hard to control, is believed by the community to be its greatest strength.
Signed,
— Knightoftheswords 02:37, 20 March 2026 (UTC) (non-admin closure)[reply]
Should we replace the current text of the guideline Wikipedia:Writing articles with large language models with the proposal at Wikipedia:Writing articles with large language models/March 2026 proposal? Chaotic Enby (talk · contribs) 14:27, 15 March 2026 (UTC)[reply]
The December 2025 RfC on replacing WP:NEWLLM with a draft guideline found consensus for better guidelines along the lines of and/or in the spirit of this draft, and a weak consensus for that specific proposal, which wasn't enough for guideline promotion. The closure encouraged further improvements on clarity before a future discussion.
This new proposal focuses on addressing these pitfalls with a more concise guideline in the spirit of WP:NEWLLM, centered on prohibiting the use of LLMs to generate new content wholesale. It aims to limit the kind of large-scale LLM abuse that has become commonplace, notably at ANI, where the effort required for large-scale disruption is minimal compared to the effort needed to clean up and verify every sentence of generated text, placing an unfair burden on volunteers. Additionally, it will prevent shifting the blame to the tool used when "accidentally" including hallucinated sources or other policy-violating content. Many newer editors do not realize that LLMs present these risks, and having a clear, unambiguous policy will help avoid such mistakes.
On the other hand, great care should be taken when writing such a guideline. While the community has proved to be reliable at identifying LLM-written text, there is always the risk of editors being unfairly accused and sanctioned based on their writing style alone, which the guideline explicitly warns against.
This guideline only focuses on using LLMs to generate new article content, and should in essence be viewed as an expansion of the current WP:NEWLLM to existing articles. More specialized, constructive use cases do exist, from using them as research assistants to citation-formatting tools, and this guideline does not aim to restrict any of these use cases. Nor does it deal with translation of existing articles (for which Wikipedia:LLM-assisted translation exists) or communication between editors.
- Support as proposer. Chaotic Enby (talk · contribs) 14:27, 15 March 2026 (UTC)[reply]
- Support. This is definitely clearer than the current wording. In previous discussions, I and fellow editors have pointed out that blanket bans would also prohibit experienced editors from using AI after careful review, but after considering the disproportionate time and effort needed to clean up after AI-generated content, I've changed my mind. I still think that thoroughly verified, policy-compliant AI-generated content can be useful, but this is and still would be allowed by WP:IAR, and I believe the proposed wording's not mentioning this can discourage inexperienced editors from using AI. Kovcszaln6 (talk) 15:03, 15 March 2026 (UTC)[reply]
- Support, hopefully it is clear and easy to understand (all the more important given the people disposed to use LLMs), while conceding to major concerns people have had about an 'LLM ban' re content. The idea behind the first bit was to be ideologically quiet and focus on reducing disruption, and not to lock us into a position. If/when LLMs can write a 'perfect' article, the first sentence would become incorrect, and the guideline would need to be changed (basically kicking the can down the road re the ideological question surrounding LLM use). If people want to write some essays on the main ideological positions, I think that'd be helpful to inform future discussions. Kowal2701 (talk, contribs) 15:11, 15 March 2026 (UTC)[reply]
- Support, sorely needed. We really need to catch up to the massive issue that is AI use before it's too late. CoconutOctopus talk 15:28, 15 March 2026 (UTC)[reply]
- Support - Simple, clear, and minimally impactful on positive use cases. This finally looks like a serious guideline. -- LWG talk (VOPOV) 15:26, 15 March 2026 (UTC)[reply]
- Support – Per Kovcszaln6, this proposal threads the needle between "there is no acceptable use of LLMs" and the current state of affairs. We need a more sophisticated guideline than the current one in place, per my "Not yet" !vote at the October 2025 RfC; this proposal is still concise, but more hashed out. That's what I said I was looking for in an LLM guideline, so this is an easy support. mdm.bla 15:42, 15 March 2026 (UTC)[reply]
- Oppose as written. The proposal is trying to address a real problem, but the text as drafted goes too far, treating all LLM-assisted drafting as though it's the same thing, and it isn't. Nobody is disputing that LLM-generated slop is a serious issue. Patrollers are drowning in unsourced, inaccurate, carelessly produced material, and the community is right to push back hard against that. But the proposed wording doesn't distinguish between that kind of abuse and a much narrower workflow where an experienced editor identifies the sources, constrains the task, checks the output against the sources, and takes full responsibility for what gets published. Lumping those together doesn't help anyone. "Use of LLMs to generate or rewrite article content is prohibited" is just too blunt. It would catch not only wholesale machine-written garbage but also source-led drafting assistance that produces perfectly policy-compliant text after human review. Wikipedia has never normally regulated tools in the abstract; it regulates outcomes and editorial responsibility. The question should be whether the material is verifiable and accurately reflects the sources, not what software touched the prose along the way. A blanket prohibition is broader than the actual problem and would ban some genuinely beneficial uses along with the harmful ones. I'll be transparent about my own practice here: I've used LLMs as a drafting aid within a fairly constrained workflow. I locate and assess the sources myself, feed them to the model with a specific prompt, and then review and copyedit the result before publication. The model isn't deciding what the sources say or what belongs in the article; I am. It speeds up drafting, it doesn't replace editorial judgement. I've published a good number of articles (and a number of "Good Articles") this way and I don't think the encyclopaedia gains anything by declaring that work prohibited per se, especially when the end product is indistinguishable from (or better than) what I'd have written longhand, just slower. What I'd actually support is language aimed at the real mischief: prohibiting wholesale, inadequately reviewed, or source-detached use of LLMs to create article text, while leaving room for limited source-based assistance under full human verification. That protects patrollers from the flood without forbidding careful uses that improve productivity and still produce good work. Esculenta (talk) 15:52, 15 March 2026 (UTC)[reply]
- I consider your editing to be just about the best case scenario for LLM workflows, and if it were possible to permit you while stopping people who erroneously think they do what you do, I would, because I think your edits are a net benefit to the wiki. With that said, it only took me a few minutes to find the kinds of issues typical of AI in your articles (inappropriately generalizing or expanding on claims from sources, uncomfortably close paraphrasing, use of interpretive words like "especially" or "only rarely" that aren't necessarily supported by the source, etc). -- LWG talk (VOPOV) 16:08, 17 March 2026 (UTC)[reply]
- Support. While I appreciate WP:BABY concerns, the sheer imbalance between the effort required to generate and to clean up AI slop has to be addressed, and this is a good way to do it. JustARandomSquid (talk) 16:05, 15 March 2026 (UTC)[reply]
- Support as an obvious improvement on NEWLLM. I'm still a bit worried that the path we're taking on these guidelines is incentivizing lying (especially with the caveats that if text is good, then the guideline/policy won't be enforced, so what does it matter?) and that it's hard to properly enforce. I'd rather prioritize guidelines that focus on enforcement, responsibility for one's own edits, and making it clear that, under pre-existing policies, editors are required to label LLM-generated text. But, again, clear improvement. GreenLipstickLesbian💌🧸 16:19, 15 March 2026 (UTC)[reply]
- Support: The #1 thing we can do to reduce the amount of LLM-generated content is to clearly tell editors - which now includes agents [1] - not to add it. Esculenta's suggestion for a rule that allows for reviewed use is not meaningfully different than the status quo, which is obviously not working. Experience at AfC, NPP, AINB, ANI, GAN (I have a recent example), etc. has shown that almost no one reviews LLM-generated content sufficiently for content policy compliance, but most (mix of experience and AGF here) think that they do. NicheSports (talk) 16:34, 15 March 2026 (UTC)[reply]
- I remember in HS, a girl in my class did a really cool experiment for a presentation - she got everybody who showed up to class early (about half of us) to agree that when she gave her presentation later, she would ask us some questions, and we would all answer "A" - no matter what the question was. "Which of these two squares is bigger - A) an obviously small square or B) an obviously big square?" Just by being incredibly confident, we managed to convince the second half of the class to also answer A, even though it was obviously incorrect. I think about that experiment a lot in terms of reviewing LLM content - why, yes, sure, it's possible to review it competently, but it doesn't matter how smart you think you are, it's incredibly easy to fall for a confidently told lie. GreenLipstickLesbian💌🧸 16:47, 15 March 2026 (UTC)[reply]
- Support, there are quibbles I could raise but it's clear that the AI guidelines we have now, although they're much better than the zero AI guidelines we had throughout most of 2023-2025, are insufficient. Gnomingstuff (talk) 17:12, 15 March 2026 (UTC)[reply]
- Support Some thoughts as follows. 1: If you care about LLMs and want to see them succeed (not me!), you should attempt to restrict AI content being added to Wikipedia, which often serves as LLM training data. 2: We should be nurturing the writing skills of humans if we want to achieve long-term goals, and this proposal will help with that. JuxtaposedJacob (talk) | :) | he/him | 17:32, 15 March 2026 (UTC)[reply]
- support i think this is sufficiently strong and clear. ... sawyer * any/all * talk 17:52, 15 March 2026 (UTC)[reply]
- Support A good improvement over the current iteration. The strong wording is good for instructing new/inexperienced editors not to use LLMs, as it is unlikely they can use them in compliance with the policies/guidelines. As this is only a guideline, if there are any exceptions experienced editors find that were not covered, there is also no problem, as they can use their judgment and overrule the guideline for their use case. Jumpytoo Talk 19:13, 15 March 2026 (UTC)[reply]
- Support. I think there's still space for expansion of this guideline, but this is 100% an improvement over what we have currently. GearsDatapack (talk | contribs) 19:24, 15 March 2026 (UTC)[reply]
- Support – I'm still worried about LLMs being (mis)used to rewrite the Second Cold War. Recently, someone else tried to add content in an attempt to make the topic more multilateral, but it took a recently-closed RfC discussion and some more cleanups to undo the mess. Furthermore, I had to nominate most of the articles about individual Survivor winners, primarily due to notability and content issues. The thought of LLMs being used to rewrite almost every article, create new articles, or resurrect deleted articles doesn't sit well with me. Well, at least I can see proposed exceptions written there. George Ho (talk) 19:27, 15 March 2026 (UTC)[reply]
- Support. The proposed text is clean, understandable and helpful. Does an excellent job addressing and building off of the work of the previous guideline drafts and the RFC discussions. 🌸wasianpower🌸 (talk • contribs) 21:00, 15 March 2026 (UTC)[reply]
- Support I would like to see a framework for dealing with editors who are just dumping AI slop. Some things that I think would help, in no particular order:
- Use of AI is (high-risk? censurable?): Editors are 100% responsible for the content they write.
- Why? Removes the defense of "I had the AI help" or "The AI made a mistake", and prevents people with low English ability from trying to use AI as a shield.
- Editors that use AI tools must have a high degree of English proficiency: AI is a tool for the skilled, not a crutch for the unable.
- Why? See my talk page, I have an editor now who writes at a 10th grade level but is able to create graduate level work in their articles. If you don't fully understand the material that you are writing then it's already a violation of Competence is Required.
- Admins may block or topic ban if any of the following behaviors are found: Citation hallucinations, mass article creation, or factual errors. This should be determined based on the output and quality of the material being evaluated not just AI detection tools.
- As an aside, I do like to try and recreate a suspected AI article with some AI models, and every now and then I'll get one that's like 70%+ the same.
- I'm sure some will disagree and this might not be the best way forward but it's just something I've been thinking about today after cleaning up AI slop ¯\_(ツ)_/¯ Dr vulpes (Talk) 22:45, 15 March 2026 (UTC)[reply]
- Use of AI is (high-risk? censurable?): Editors are 100% responsible for the content they write.
- 100% heck yes. This looks perfect. Thank you. Toadspike [Talk] 22:48, 15 March 2026 (UTC)[reply]
- Support Excellent expansion on the current guideline. nil nz 23:07, 15 March 2026 (UTC)[reply]
- Support, the succinct pointers towards some of the issues are appreciated and may help with those asking why. CMD (talk) 05:52, 16 March 2026 (UTC)[reply]
- Support – This seems like it meets the brief. Much better than what we have now, and a good foundation for future iterations. Yours, &c. RGloucester — ☎ 09:51, 16 March 2026 (UTC)[reply]
- Support – this covers the basics. ClaudineChionh (she/her · talk · email · global) 10:25, 16 March 2026 (UTC)[reply]
- Support Not perfect, but we need something that puts the onus squarely on the user submitting the article/draft. WeirdNAnnoyed (talk) 11:21, 16 March 2026 (UTC)[reply]
- Support - obvious improvement compared to the current version. sapphaline (talk) 12:02, 16 March 2026 (UTC)[reply]
- Support, as a natural extension of WP:NEWLLM. NEWLLM has made AFC much easier in terms of simply declining drafts, in my opinion, which then forces the user to rewrite drafts. This proposal means it's not just new contributors who face this. It's less about sanctions, it's more about giving a clear reason to just remove LLM wording. I just wish the proposal said at the end that we don't mind articles in less than perfect English, which is the overwhelming reason given at AFC as to why LLM is used. ChrysGalley (talk) 18:04, 16 March 2026 (UTC)[reply]
- Support. Improves on the existing text of WP:NEWLLM by serving as a more detailed, but still succinct, explanation of what we do/don't want and why. I believe the second paragraph of the proposed text provides a sufficient carve-out to allow thoughtful editors with a strong understanding of LLMs' strengths and weaknesses to utilize human-reviewed LLM text, while still keeping the gate closed against unreviewed or low-quality output. ModernDayTrilobite (talk • contribs) 18:20, 16 March 2026 (UTC)[reply]
- I think this is an improvement over the existing guideline, but I do still think there are a number of legitimate uses of LLMs not covered by this proposal. I'd prefer something to make it clear that editors are responsible for always checking any newly introduced facts or reframings against the sources, rather than forbidding any use of LLMs that isn't to rephrase human-generated text. I've experimented with workflows for using LLMs to gather sources and generate (parts of) an article and then edit for phrasing and check it myself against the sources, and while I personally found the checking stage frustrating enough I'd rather write it from scratch, I don't expect this process to introduce more errors than good old fashioned human incompetence. That being said, we need something to help with the flood of bad LLM content, and I'm not sure how a "but you must fact check it" rule would be possible to enforce without the reviewer manually checking every potentially bad LLM edit, which is getting increasingly difficult given the prevalence of LLMs. So I guess this is a support in practice and a grump in principle. Rusalkii (talk) 19:01, 16 March 2026 (UTC)[reply]
- Support. I agree that this is an improvement, highlighting the ways to use LLMs without harming the verifiability of Wikipedia. Ajheindel (talk) 21:10, 16 March 2026 (UTC)[reply]
- Support It's definitely an improvement over the narrow current text. Something has to be done against the ever increasing addition of AI text into articles/drafts and the current guideline isn't doing enough, imo. HurricaneZetaC 21:13, 16 March 2026 (UTC)[reply]
- Support. Broadening the current guideline is more than justified. Apocheir (talk) 23:17, 16 March 2026 (UTC)[reply]
- Support given the imbalance between the effort required to generate AI slop and the effort required to review it, we don't have much choice. ©Geni (talk) 23:49, 16 March 2026 (UTC)[reply]
- Oppose as written. I am concerned that this guideline isn't dealing with the realities of an AI-augmented world. I don't think editors using LLMs to write or heavily edit articles is going to go away, but only become more commonplace. A fool with a tool is still a fool.

Per GreenLipstickLesbian, banning its use just means we're asking people to lie about their usage. I would prefer we aim to educate people on how best to use these tools and hold editors accountable for their outputs. Require that people disclose their use. Per Esculenta, there is a big difference between AI slop and using these tools in a constrained, reviewed, responsible manner. WidgetKid Converse 15:41, 17 March 2026 (UTC)[reply]
- People already lie about their usage. ©Geni (talk) 17:20, 17 March 2026 (UTC)[reply]
- And prohibition incentivizes (or at least amplifies) that lying. nub :) 17:24, 17 March 2026 (UTC)[reply]
- Nope. At the moment people already have an incentive to lie because it delays things coming to a head. A lack of prohibition just allows them to waste further time once caught. ©Geni (talk) 17:48, 17 March 2026 (UTC)[reply]
- What if we assumed positive intent and treated the editors like any other editor who is having issues writing good content? Sure there will be repeat offenders who don't get it and need to be dealt with, but there is also likely to be plenty of good content created as well. WidgetKid Converse 18:19, 17 March 2026 (UTC)[reply]
- We are well beyond speculation at this point. You can follow the cases coming through WP:AN/I. ©Geni (talk) 18:27, 17 March 2026 (UTC)[reply]
- People lie because they want to avoid consequences, or because they don't realize they're lying; when the conversation shifts from "why are you inserting hoaxes/original research/misinformation in mainspace" to "why are you using an LLM in mainspace", it's much easier for somebody to say "No, I wasn't using an LLM/I don't recall using an LLM" and prolong the thread because, unless the use is obvious, how do you prove them wrong? And maybe the other editor honestly believes that they didn't use an LLM either, people aren't always technologically literate enough to know what exactly is an LLM and what isn't. And, ultimately, like COI, it's hard to prove conclusively one way or the other through a Wikipedia talk page. The entire focus on LLMs, in a way, is annoying, because it's hiding the fact that we, as a community, aren't very good at dealing with good faith editors who chronically insert subtle misinformation and original research in articles. If we were, then most of the LLM abuse cases would have been dealt with under existing policies. GreenLipstickLesbian💌🧸 20:30, 17 March 2026 (UTC)[reply]
- Further rant: reluctance to sanction good faith editors means that, often, to sanction a good faith editor, many people seem to have to convince themselves that the editor isn't acting in good faith; that they have a conflict of interest, that they're intentionally being disruptive, that they're a vandal, etc. Or, in this particular case, that somebody using an LLM is being negligent, or deceitful, or even that the actual act of using an LLM is inherently malicious. Which isn't very good - and, don't get me wrong, I don't want good faith editors sanctioned when at all possible, either, but to deal with it you have to a) address why it's happening (which, if you're lying to yourself about the motives of the other party, isn't gonna happen) and b) if attempts at explaining the problematic editing don't work, be ready and willing to sanction good faith editors. GreenLipstickLesbian💌🧸 20:42, 17 March 2026 (UTC)[reply]
- In my experience outright lying is pretty rare. Usually people just dodge the question in various ways. Gnomingstuff (talk) 21:14, 17 March 2026 (UTC)[reply]
- Support. Our current guideline is two sentences long and only covers writing articles from scratch, which is obviously not sufficient. This guideline is a step in the right direction. I do have issues with this new version, however. It outright proscribes the use of content generated by LLMs except for copyediting and translation. This will inevitably be ignored by editors who feel inclined to use LLMs anyway. I've seen enough AI usage in my RC patrolling and at ANI to know that there are people who will use AI and deny that they do so for as long as they can, apparently believing that their "thorough" review of the output makes the content not "LLM generated" anymore. I've seen this happen in the real world as well, especially in education circles. Human review of LLM output is far too error-prone in my experience, and I support the guideline's idea of taking suggestions from an LLM rather than wholesale taking its output and reviewing it. These issues with the guideline could be solved by changing the wording to say that LLMs "should not" be used to generate article content and that, if they are, it should be disclosed. It should also be clear that all content in an edit is the sole responsibility of the editor. For what it's worth, I've never used an LLM to generate content in my edits. I just think we need to be realistic about what LLMs mean for Wikipedia. Alas, this is much better than what we currently have, so I support. IsCat (talk) 17:28, 17 March 2026 (UTC)[reply]
- To clarify, it would be for the best if nobody used LLMs for article content. Unlike our other guidelines, however, this bans the addition of certain content based on how the content was produced rather than any real issues with the edit. Under this guideline, it would be acceptable to revert an edit that conforms to every other PAG just because it was LLM generated. The guideline does say that the text's compliance with content policies should be the foremost consideration, but at the top in bolded text is a blanket ban on LLM-generated content. A blanket ban is good in theory as the vast majority of people do not have the experience with English, the proper use of LLMs, and Wikipedia's content policies to use LLMs to generate article content constructively. The thing is, some will ignore both discouragement and bans if they have the perhaps foolish belief that they are skilled enough to use LLMs correctly. If we have required disclosure, we can perhaps get a portion of those people to admit to it rather than be in fear that they will be blocked if they disclose it. I'd think it would be best if we have a checkbox on the edit summary screen to ask if the edit is LLM generated, which would allow for us to track and monitor LLM usage while encouraging disclosure. I understand this would be a technical change which would require WMF approval and is thus unlikely, though. IsCat (talk) 17:58, 17 March 2026 (UTC)[reply]
- The checkbox idea came up in at least one of the previous RfCs -- don't remember which one, apologies -- but there was some opposition from people who were concerned having a checkbox for AI would encourage more people to use it. (which personally I don't buy, but it was a pretty common stance) Gnomingstuff (talk) 21:16, 17 March 2026 (UTC)[reply]
- Support, long overdue. High-speed low-effort generation necessitates high-speed low-effort removal. Thebiguglyalien (talk) 00:05, 18 March 2026 (UTC)[reply]
- Support mainly because it would replace "should not be used" by "is prohibited" and extends the scope of that prohibition to any addition of content. The rest is unimportant. ~ ToBeFree (talk) 00:37, 18 March 2026 (UTC)[reply]
- Support for all the same reasons as the last proposal, but without any reservation regarding the wording. Let's finally get this done. Choucas0 🐦⬛ 17:47, 18 March 2026 (UTC)[reply]
- Support. For all the reasons editors above have explained, this is far more representative of how the community should be dealing with LLM content. Our previous attempts to strengthen our guidelines on LLM content have all closed with something along the lines of "there is support for something like this, but this version isn't good enough." Well, now we have a version that is good enough. More than good enough, I think this is excellent. It's delightfully succinct, it's neutral, and the carve-outs it has for acceptable LLM usage are fair. MEN KISSING (she/they) T - C - Email me! 02:38, 19 March 2026 (UTC)[reply]
- Strongly support. This has been a nagging problem, and we need a resolution as LLMs continue to get used and won't stop getting used anytime soon. Rhinocratt 07:18, 19 March 2026 (UTC)[reply]
- Support. I really like the fact the guideline is phrased as a prohibition against the usage of LLMs rather than a prohibition of LLM-generated content's presence in articles, thus avoiding the concern about the guideline potentially encouraging people to delete any content they think is likely LLM-generated simply on the basis of their suspicion rather than any actual evaluation of whether the text is actually an issue. ―Maltazarian (talk • investigate) 10:30, 19 March 2026 (UTC)[reply]
- Support It is an improvement, but Wikipedia should just ban all LLM use in the article space. If you want to use a spelling checker, use a spelling checker (better yet, why isn't this already integrated in the various editors as standard?). Make a bright red line and publicize it widely with banners and a press release. Rolluik (talk) 15:45, 19 March 2026 (UTC)[reply]
- Support as a clear improvement over WP:NEWLLM. That's much better than the LLM guidelines we have right now. Cicada1010 (talk) 21:15, 19 March 2026 (UTC)[reply]
- Support and I'm absolutely thrilled to see the near-unanimous support for this expansion. This is one step closer to the blanket ban on LLMs that Wikipedia desperately needs. Athanelar (talk) 21:25, 19 March 2026 (UTC)[reply]
- Support. Short and sweet, and I think very much in line with general community preferences. No doubt we will have a lot of little changes as time goes on, but the core message is sound. Andrew Gray (talk) 22:26, 19 March 2026 (UTC)[reply]
- Support, very happy to see this getting nearly unanimous support. This would move us closer to the policies and guidelines we need. --Gurkubondinn (talk) 00:45, 20 March 2026 (UTC)[reply]
- Support I am not convinced that this goes far enough, but I am willing to sign on with it. Stepwise Continuous Dysfunction (talk) 00:54, 20 March 2026 (UTC)[reply]
- Support especially with the last paragraph of the proposal:
Some editors may have similar writing styles to LLMs. More evidence than just stylistic or linguistic signs is needed to justify sanctions, and it is best to consider the text's compliance with core content policies and recent edits by the editor in question.
This paragraph is essential to establish a baseline of evidence required to accuse an editor of using an LLM, without veering into casting aspersions. It also acknowledges that people can happen to write similarly to LLMs, but those individuals shouldn't be punished for being false positives to this guideline. Overall, this is a more fleshed out guideline compared to the current one. Gramix13 (talk) 01:47, 20 March 2026 (UTC)[reply]
- Support, huge improvement over the current one. It takes less than a minute to generate an entire article of AI slop but it takes hours to clean it up. SecretSpectre (talk) 02:30, 20 March 2026 (UTC)[reply]
- This is an extremely minor nitpick, but should the clause "use of LLMs to generate or rewrite article content is prohibited" be preceded by "the", as seen below? I feel like the clause flows better that way, but maybe that's just me. mdm.bla 15:29, 15 March 2026 (UTC)[reply]
− For this reason, use of LLMs to generate or rewrite article content is prohibited,
+ For this reason, the use of LLMs to generate or rewrite article content is prohibited,
- I think you're right, it doesn't change meaning so I'll add it now Kowal2701 (talk, contribs) 15:32, 15 March 2026 (UTC)[reply]
- Thanks to both of you! Chaotic Enby (talk · contribs) 15:33, 15 March 2026 (UTC)[reply]
- I would like to see the guideline require editors to disclose in the edit summary when an LLM is used for any of the listed exceptions, and would support this proposal if this disclosure requirement were added. — Newslinger talk 15:56, 15 March 2026 (UTC)[reply]
- I agree, but I think that would have to be a separate proposal around WP:LLMDISCLOSE and WP:PLAGIARISM. I haven't seen it discussed much with broad agreement, and am concerned that adding more to this guideline right now makes it more likely that there will be something for someone to take issue with. Kowal2701 (talk, contribs) 16:14, 15 March 2026 (UTC)[reply]
- Seeing as this is becoming the place to suggest improvements, I'll opine that, if I were writing this, the sentence "Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it isn't supported by the sources cited." should read "Editors are responsible if LLMs go beyond what they ask of them and change the meaning of the text such that it isn't supported by the sources cited." JustARandomSquid (talk) 16:05, 15 March 2026 (UTC)[reply]
- I think it may be too late to make such a change, but this can probably be done via normal editing processes if this gets promoted? Kowal2701 (talk, contribs) 17:11, 15 March 2026 (UTC)[reply]
- @JustARandomSquid I prefer the meaning and wording of the first proposal - just my 2 cents. JuxtaposedJacob (talk) | :) | he/him | 17:35, 15 March 2026 (UTC)[reply]
- If "refinement" is intended to mean WP:Basic copyediting, as the link would imply, then the term "basic copyediting" should be used instead, as it has a more narrow and clear meaning. If not, then the link should be removed from the word "refinement" and placed into an aside like "... refinement, such as basic copyediting, and to ...". fifteen thousand two hundred twenty four (talk) 16:52, 15 March 2026 (UTC)[reply]
- Would this work? "Editors are permitted to use LLMs to suggest refinements to their own writing (i.e. copyedit), and to incorporate some of them after human review, provided the LLM doesn't introduce content of its own." Kowal2701 (talk, contribs) 17:11, 15 March 2026 (UTC)[reply]
- If we change anything, I'd suggest the change "refinements" -> "basic copyedits", which simply aligns the wording with the existing, underlying link. NicheSports (talk) 17:19, 15 March 2026 (UTC)[reply]
- Of the two options this is the one I would prefer as well, it's clearer and leaves less room for overly creative interpretations. fifteen thousand two hundred twenty four (talk) 18:47, 15 March 2026 (UTC)[reply]
- Agreed, but tbh on second thought it may be best to leave minor tweaks re wording to normal editing after the RfC, just so everyone's !voting on the same thing. Kowal2701 (talk, contribs) 19:15, 15 March 2026 (UTC)[reply]
There appears to be a contradiction between these statements:
Editors should not use an LLM to add content to Wikipedia, whether creating a new article or editing an existing one. Do not use an LLM as the primary author of a new article or a major expansion of an existing article, even if you plan to edit the output later.
Editors should not:
Paste raw or lightly edited LLM output as a new article or as a draft intended to become an article.
Paste raw or lightly edited LLM output into existing articles as new or expanded prose.
Paste raw or lightly edited LLM output as new discussions or replies to existing discussions.
There is a big difference between not using an LLM to create new article content and not using raw or lightly edited content. WidgetKid Converse 15:28, 17 March 2026 (UTC)[reply]
- @WidgetKid that's from the old December proposal, the version being discussed here is linked at the top of the page and is from March. fifteen thousand two hundred twenty four (talk) 16:06, 17 March 2026 (UTC)[reply]
- Thank you @Fifteen thousand two hundred twenty four. Facepalm WidgetKid Converse 18:05, 17 March 2026 (UTC)[reply]
- I would put this statement into its own section for emphasis: "Do not use LLMs to write comments or replies in discussions." Discussing with AI-generated nonsense is awful. WidgetKid Converse 15:28, 17 March 2026 (UTC)[reply]
- I encourage everyone here to help out in dealing with LLM-generated content. You can help with requests at WP:LLMN, patrol new articles at WP:AfC and WP:NPP, warn and report editors who insert LLM content before they can do more damage, or check the thousands of articles flagged as likely AI. Thebiguglyalien (talk) 00:11, 18 March 2026 (UTC)[reply]
- Glad to see this survived the previous NC close. I was worried that it might kill the initiative, so I'm pleased to see it return stronger than ever. Also, given the survey so far, I'd recommend a snow close; I'm only refraining because I closed the previous RfC, and perceptions of involvement and that sort of nonsense. CNC (talk) 19:46, 18 March 2026 (UTC)[reply]
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.