LLM-powered, grounded fact-checking
Introducing Vera, our virtual assistant for fact-checking that automatically retrieves, analyzes, and verifies claims and texts using trusted sources. Vera makes fact-checking transparent and accurate by comparing each input claim with trustworthy information from the web (or your knowledge base!), checking its veracity, and justifying its verdict through a detailed analysis grounded in the retrieved evidence.
Integrate Vera to assist your decision-making with real-time fact-checking!
How it works
Babelscape Vera performance evaluation
Vera was evaluated against several fact-checking competitors on a daily-updated dataset of human-verified claims. The dataset contains the 1,600 most recently published claims - 800 in Italian and 800 in English - each labeled as either True or False by reputable fact-checking agencies.
Vera’s predicted labels were compared to the human fact-checking results and benchmarked against other systems. As shown in the plot, Vera outperforms the top competitor by 7.2 F1 points on the Italian dataset and 8.7 on the English one.
These results highlight Vera’s superior accuracy in fact verification and its effectiveness in identifying misinformation with high precision.
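To make the benchmark concrete, the sketch below shows how binary F1, the metric reported above, is computed from a system's verdicts against human-assigned True/False labels. This is an illustration of the metric only, not Babelscape's evaluation code, and the toy claims are invented.

```python
# Illustrative sketch: binary F1 over True/False claim verdicts.
# Here we treat detecting a "False" claim as the positive class,
# an assumption for illustration.

def f1_score(gold, predicted, positive="False"):
    """F1 of the `positive` label given gold and predicted label lists."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, predicted) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, predicted) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: five human-verified claims and a system's verdicts.
gold      = ["True", "False", "False", "True", "False"]
predicted = ["True", "False", "True",  "True", "False"]
print(f1_score(gold, predicted))  # → 0.8
```

A gap of 7.2-8.7 F1 points on this scale means the system both flags more of the actually false claims (recall) and raises fewer false alarms (precision).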
Use cases
In an age where misinformation spreads rapidly, tools like Vera are crucial for ensuring information accuracy. Vera verifies statements by retrieving relevant sources from the web and proprietary databases, assessing their truthfulness, and classifying claims as true, false, neutral, or controversial. It also provides a detailed analysis of the supporting sources. As large language models (LLMs) blur the line between fact and fiction, Vera empowers users to make informed decisions and promotes trust in reliable information.
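The four-way verdict described above can be sketched as a simple aggregation over the stances of retrieved evidence. The rule below is an illustrative assumption, not Vera's actual algorithm, and the `Evidence` type and example URLs are hypothetical.

```python
# Hypothetical sketch of stance aggregation into the four verdict labels
# mentioned above: true, false, neutral, controversial.

from dataclasses import dataclass

@dataclass
class Evidence:
    source_url: str
    stance: str  # "supports", "refutes", or "unrelated" (assumed taxonomy)

def aggregate_verdict(evidence: list[Evidence]) -> str:
    supports = sum(e.stance == "supports" for e in evidence)
    refutes = sum(e.stance == "refutes" for e in evidence)
    if supports and refutes:
        return "controversial"   # trusted sources disagree with each other
    if supports:
        return "true"
    if refutes:
        return "false"
    return "neutral"             # no retrieved source takes a stance

evidence = [
    Evidence("https://example.org/article-a", "supports"),
    Evidence("https://example.org/article-b", "refutes"),
]
print(aggregate_verdict(evidence))  # → controversial
```

In a real deployment the stance of each source would itself come from the LLM's analysis of the retrieved text, and the verdict would be accompanied by the per-source explanations the product describes.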