From Wikipedia, the free encyclopedia
"Wikipedia:AI generated content" redirects here. For other uses, see WP:AI-INDEX.
Wikipedia information page
Artificial intelligence (AI) is used on a number of Wikipedia and Wikimedia projects. This may be directly involved with creation of text content, or in support roles related to evaluating article quality, adding metadata, or generating images. As with any machine-generated content, care must be used when employing AI at scale or in applying it where the community consensus is to exercise more caution.
When exploring AI techniques and systems, the community consensus is to prefer human decisions over machine-generated outcomes until the implications are better understood.
- AI usage
- WP:Editing policy#Artificial intelligence additions - (WP:EPAI) – the use of AI writing tools such as large language models (LLMs) to create or rewrite articles, or to generate images, sources, or comments, is generally prohibited, with a few exceptions (as outlined in the pages linked below)
- Images
- WP:Biographies of living persons#AI generated images - (WP:AIIMGBLP) – AI-generated images should not be used to depict subjects of BLPs
- WP:Image use policy#AI-generated images - (WP:AIIMAGES) – most images wholly generated by AI should not be used in mainspace
- Deletion
- WP:Speedy deletion#G15 - (WP:G15) – for a page that exhibits signs that it was generated by an LLM with no human review
- Article content
- WP:Writing articles with large language models - (WP:LLM) – the use of LLMs to generate or rewrite article content is largely prohibited
- WP:LLM-assisted translation - (WP:LLMT) – guideline about machine translation tools that includes LLMs
- Images
- WP:Manual of Style/Images#AI upscaling - (MOS:AIUPSCALE) – AI upscaling should not be used to increase the resolution of images
- Sources
- WP:Reliable sources#Sources produced by machine learning - (WP:RSML) – sources produced by LLMs are generally unreliable
- Talk pages
- WP:Talk page guidelines#LLM - (WP:AITALK) – a guideline about striking or collapsing comments generated by AI technology
- Behavioral
- WP:Disruptive editing#Persistent LLM use - (WP:LLMDISRUPT) – repeated use of LLMs to generate content is considered disruptive
- WP:Large language models – an overview of LLMs usage throughout Wikipedia
- WP:Identifying LLM unblock requests – a list of writing and formatting conventions typical of AI chatbots
- WP:Reliable sources/Perennial sources#Large language models – sources generated by LLMs are unreliable
- WP:Translation#Machine translation – machine translation should be used with caution
- WP:Clean up the AI before deleting – pages containing suspected AI content can be cleaned up rather than deleted
- WP:Do not lie about AI – lying about the use of AI or LLMs will not end well
- WP:Drafts#Reasons to move an article to draftspace – an alternative to deletion
- WP:Guide to appealing blocks#Composing your request to be unblocked – unblock requests generated by LLMs are likely to be rejected
- WP:Help, I've been accused of AI! – If you used AI, then say that you used AI
- WP:Large language models and copyright – LLMs often generate material that violates copyright
- WP:LLM use disclosure – LLM use disclosure is highly recommended
- WP:LLMs are bad search engines – you should never use LLMs to research topics or find sources
- WP:Marketing buzzspeak and AI – outputs written by LLMs may contain buzzspeak
- WP:Responsibly using large language models – LLMs can serve purposes outside of content generation
- WP:Signs of AI writing – a list of writing and formatting conventions typical of AI chatbots
- WP:The LLM-written ANI report – all of the general considerations regarding LLM use in discussions apply to ANI
- WP:Wikipedia is written by humans, for humans – Wikipedia is a human-driven endeavor, created, edited, and administered by humans, for humans
AI-related efforts on Wikipedia include but are not limited to:
The Objective Revision Evaluation Service (ORES) was started in 2015 as a project of the Wikimedia Foundation. It provides revision scores from machine learning models trained to assess article quality or detect vandalism. These scores are used in tools such as ClueBot NG to help revert vandalism immediately, and in evaluation tools like the Programs & Events Dashboard to measure the outcomes of classwork, edit-a-thons, and organized editing campaigns.
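A minimal sketch of how a tool might read one of these scores. The JSON shape below follows the public ORES v3 scores API, but the wiki, revision ID, and probabilities are illustrative, not real data:

```python
import json

# Hypothetical response shaped like the ORES v3 /scores endpoint output.
# The revision ID and probability values are made up for illustration.
SAMPLE_RESPONSE = json.loads("""
{
  "enwiki": {
    "scores": {
      "123456": {
        "damaging": {
          "score": {
            "prediction": false,
            "probability": {"false": 0.92, "true": 0.08}
          }
        }
      }
    }
  }
}
""")

def damaging_probability(response, wiki, rev_id, model="damaging"):
    """Return the model's estimated probability that a revision is damaging."""
    score = response[wiki]["scores"][str(rev_id)][model]["score"]
    return score["probability"]["true"]

print(damaging_probability(SAMPLE_RESPONSE, "enwiki", 123456))  # → 0.08
```

An anti-vandalism tool would compare this probability against a threshold before deciding whether to flag or revert the edit; the threshold itself is a policy choice, not part of the API.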
Guidance can be found at Wikipedia:LLM-assisted translation. Editors must be proficient enough in both the source language and English to verify the translation, and must check the output for AI hallucinations, core content policy violations, and text-source integrity. LLM-assisted translations must also comply with other translation requirements.
The Content Translation Tool, used across Wikimedia projects, can incorporate machine translation output from services like Google Translate when translating an article from one Wikipedia to another. However, on the English Wikipedia, the tool currently states that "machine translation is disabled for all users and this tool is limited to extended confirmed editors." As a result, the tool supports only manual translation on the English Wikipedia, though some users have used translation to Simple English as a workaround. Relatedly, a section of the Help:Translation page offers the broad advice: "avoid machine translations."
Article text generation
The explosion of interest in ChatGPT in 2022 led to increased interest in using generative AI to help compose Wikipedia articles. However, current consensus is that "the use of LLMs to generate or rewrite article content is prohibited." The status of machine-generated text from tools such as ChatGPT is generally accepted to be public domain, so copyright issues are not a legal blocker to using the generated text. Such material is generally governed by Help:Adding open license text to Wikipedia#Converting and adding open license text to Wikipedia, which advises making sure the content is adjusted for style and that reliable sources are used.
Image metadata – GLAM institutions have worked to supplement image keyword data with machine learning. These efforts include:
- Computer-aided tagging – started in 2019. "The computer-aided tagging tool is a feature in development by the Structured Data on Commons team to assist community members in identifying and labeling depicts statements for Commons files." See: c:Commons:Structured data/Computer-aided tagging
- Metropolitan Museum of Art Tagging – this project used Met Museum tagging data to train a machine learning system that predicts new "depicts" recommendations for Wikidata. It resulted in a new Wikidata Game that helped add more than 4,000 new depicts (P180) statements to Wikidata. See the Met Museum blog post by Andrew Lih: "Combining AI and Human Judgment to Build Knowledge about Art on a Global Scale," March 4, 2019, [1]
Image generation
- Wikimedia Commons and AI generated media
- AI images and German Wikipedia, results of a meeting
- A Battle for Reality, video essay on AI images and Wikipedia
- Wikimedia Commons AI, a rejected proposal for a new Wikimedia sister project aimed at establishing a clear distinction between human-generated content and content produced by artificial intelligence
- The four categories, an idea about dividing all images uploaded to Wikimedia Commons into one of four categories
- Wikipedia:Computer-generated content, a draft of a proposed policy on using computer-generated content in general on Wikipedia
- Wikipedia:WikiProject AI Cleanup, a group of editors focusing on the issue of non-policy-compliant LLM-originated content
- Wikipedia:WikiProject AI Tools, a group of editors collaborating on tools using AI to improve Wikipedia
- Wikipedia:WikiProject Wikipedia spoken by AI voice, a proposed project to make natural-sounding audio versions of articles available at scale
- Wikipedia:Bot
- m:Research:Implications of ChatGPT for knowledge integrity on Wikipedia, Wikimedia research project
- m:Wikilegal/Copyright Analysis of ChatGPT
- m:Category:Artificial intelligence
- Lih, Andrew (March 4, 2019). "Combining AI and Human Judgment to Build Knowledge about Art on a Global Scale". Metropolitan Museum of Art.
- Davis, LiAnna (2026-01-29). "Generative AI and Wikipedia editing: What we learned in 2025". Wiki Education. Retrieved 2026-02-26. Includes a list of what it's good for and what it's not.
Wikimedia Foundation
- Morgan, Jonathan T. (18 July 2019). "Designing ethically with AI: How Wikimedia can harness machine learning in a responsible and human-centered way". Wikimedia Foundation.
- Redi, Miriam (14 March 2018). "How we're using machine learning to visually enrich Wikidata". Wikimedia Foundation.
- meta:Research:Ethical and human-centered AI
Demonstrations of generative AI using LLMs
- Signpost/2025-12-01 experiment (identifying and correcting errors in 90% of 31 recent English Wikipedia 'Today's featured articles')
- User:WeatherWriter/LLM Experiment 2 (identifying sourced and unsourced information, including a non-English source)
- User:WeatherWriter/LLM Experiment 3 (identifying sourced and unsourced information, only six of seven tests successful)
- Wikipedia:Articles for deletion/ChatGPT and Wikipedia:Articles for deletion/Planet of the Apes (humorous April Fools' nominations generated almost entirely by large language models).
- User:JPxG/LLM demonstration 2 (suggestions for article improvement, explanations of unclear maintenance templates based on article text)
- Artwork title, a surviving article initially developed from raw LLM output (before this page had been developed)
- User:Fuzheado/ChatGPT (PyWikiBot code, writing from scratch, Wikidata parsing, CSV parsing)
- User:DraconicDark/ChatGPT (lead expansion)
- Wikipedia:Using neural network language models on Wikipedia/Transcripts (showcases actual mainspace LLM-assisted copyedits)
- User:WeatherWriter/LLM Experiment 1 (identifying sourced and unsourced information)
- User:JPxG/LLM demonstration (wikitext markup, table rotation, reference analysis, article improvement suggestions, plot summarization, reference- and infobox-based expansion, proseline repair, uncited text tagging, table formatting and color schemes)
- Shit flow diagram, an experiment in using a constrained set of data to write an article[clarification needed]
- User:BrokenSegue – Wikidata:Wwwyzzerdd and Psychiq, a Wikidata game that uses DistilBERT and machine learning to analyze Wikipedia categories.