
A Russian disinformation effort that flooded the web with false claims and propaganda continues to impact the output of major AI chatbots, according to a new report from NewsGuard, shared first with Axios.
Why it matters: The study, which expands on initial findings from last year, comes amid reports that the U.S. is pausing some of its efforts to counter Russian cyber activities.
Driving the news: NewsGuard says that a Moscow-based disinformation network named "Pravda" (the Russian word for truth) is spreading falsehoods across the web.
- Rather than trying to sway people directly, it aims to influence AI chatbot results.
- The network published more than 3.6 million articles last year, and that content has found its way into leading Western chatbots, according to the American Sunlight Project.
- "By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information," NewsGuard said in its report.
- NewsGuard said it studied 10 major chatbots — including those from Microsoft, Google, OpenAI, You.com, xAI, Anthropic, Meta, Mistral and Perplexity — and found that a third of the time they recycled arguments made by the Pravda network.
Zoom in: NewsGuard says the Pravda network has spread at least 207 provably false claims, including many related to Ukraine.
- The Pravda network launched in April 2022, following Russia's full-scale invasion of Ukraine, and has since grown to cover 49 countries and dozens of languages, NewsGuard said.
- Of the 150 sites in the network, about 40 are Russian-language sites using domain names referencing various regions of Ukraine.
- A small number focus on themes rather than regions, it said.
- Pravda does not produce original content itself, NewsGuard says; instead, it aggregates content from others, including Russian state media and pro-Kremlin influencers.
The big picture: Deliberate falsehoods (disinformation) and inadvertent misinformation have both been called out as significant — and pressing — risks of generative AI.
- NewsGuard's findings build on a February report from the American Sunlight Project that warned that the network appeared aimed at influencing chatbots rather than persuading individuals.
- "The long-term risks – political, social, and technological – associated with potential LLM grooming within this network are high," the ASP said at the time.
Between the lines: NewsGuard said the strategy "was foreshadowed in a talk American fugitive-turned-Moscow-based-propagandist John Mark Dougan gave in Moscow last January at a conference of Russian officials."
- Dougan told the crowd: "By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI."