"Grok, Is This True?" Analyzing LLM-Powered Fact-Checking on Social Media

osf.io

3 points by ytpete a month ago · 2 comments

ytpeteOP a month ago

Some highlights from the abstract:

- Analyzes fact-check requests on X (Grok and Perplexity)

- "exposure to LLM fact-checks meaningfully shifts belief accuracy" comparable to the degree observed in studies of professional fact-checking

- 54.5% of Grok ratings and 57.7% of Perplexity ratings agreed with human fact-checkers ("significantly lower than the inter-fact-checker agreement rate of 64.0%"). But "API-access versions of Grok had higher agreement with fact-checkers"

- "Responses to Grok fact-checks are polarized by partisanship when model identity is disclosed, whereas responses to Perplexity are not"

- "Users requesting fact-checks from Grok are much more likely to be Republican than Democratic, while the opposite is true for fact-check requests from Perplexity – indicating emerging polarization in attitudes toward specific AI models."

- "posts from Republican-leaning accounts are more likely to be rated as inaccurate by both LLMs"

- Grok and Perplexity "strongly disagree" (one rates a claim as true and the other as false) 13.6% of the time
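For readers curious what these headline percentages measure, here is a minimal sketch (using hypothetical labels, not the paper's data) of how a pairwise agreement rate and a "strong disagreement" rate between two raters can be computed:

```python
# Hypothetical verdicts from two raters on the same five claims.
# Labels and values are illustrative only, not taken from the paper.
grok =       ["true", "false", "true",  "false", "mixed"]
perplexity = ["true", "true",  "false", "false", "mixed"]

def agreement_rate(a, b):
    """Fraction of items where both raters assign the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def strong_disagreement_rate(a, b):
    """Fraction of items where one rater says 'true' and the other 'false'."""
    return sum({x, y} == {"true", "false"} for x, y in zip(a, b)) / len(a)

print(agreement_rate(grok, perplexity))            # 0.6
print(strong_disagreement_rate(grok, perplexity))  # 0.4
```

The paper's figures (54.5% / 57.7% agreement, 13.6% strong disagreement) presumably come from computations of this general shape over its labeled dataset, though its exact label scheme and matching rules are not reproduced here.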

stephenr a month ago

Trusting spicy autocomplete to fact check something is already a bonkers concept.

Trusting a spicy autocomplete created with the explicit purpose of promoting the views of a white nationalist man-child who thinks he's Tony Stark in real life is batshit crazy.

You might as well trust a magic 8 ball to tell if something is true.
