EU Researchers Built Trustworthy AI Tools to Fight Disinformation — Here's What They Found

Verifying information has always been difficult, and generative AI and deepfakes have made it significantly harder. A European research initiative called vera.ai, funded under the EU's HORIZON program, spent years building AI tools specifically designed to help media professionals detect manipulation, analyze content, and track disinformation campaigns — and the results are now publicly available.
The problem vera.ai set out to solve is structural. Disinformation is multimodal — it combines text, images, video, and audio in ways that overwhelm traditional verification tools. At the same time, thorough analysis requires time and expertise that most newsrooms don't have in abundance.
"While false information spreads rapidly, thorough analysis requires time and expertise," explained project coordinator Akis Papadopoulos. "Accessible and robust solutions remain limited."
What vera.ai Built
The project produced a suite of AI tools across several functions:
- Deepfake and manipulated content detection — identifying synthetic media across formats
- Content analysis and enhancement — processing and interpreting multimodal content at scale
- Disinformation narrative tracking — measuring the spread and impact of coordinated campaigns
- Intelligent verification assistant — a chatbot-driven tool to support journalists in real-time fact-checking workflows
A key design principle was "fact-checker-in-the-loop" methodology — ensuring that human experts provided continuous feedback throughout development to maintain usability and scientific rigor.
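In practice, a fact-checker-in-the-loop design means routing uncertain automated verdicts to a human expert and feeding their corrections back into development. The sketch below is a minimal, hypothetical illustration of that pattern; the names (`ReviewQueue`, `triage`, `human_review`) are illustrative and not part of any vera.ai API.

```python
# Hypothetical sketch of a "fact-checker-in-the-loop" review queue.
# All names here are illustrative, not vera.ai's actual tooling.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Route low-confidence model verdicts to a human fact-checker."""
    threshold: float = 0.9
    feedback: list = field(default_factory=list)  # (item, corrected_label) pairs

    def triage(self, item, model_label, confidence, human_review):
        # High-confidence verdicts pass through automatically.
        if confidence >= self.threshold:
            return model_label
        # Otherwise a fact-checker decides; their correction is kept
        # as training signal for the next model iteration.
        corrected = human_review(item, model_label)
        self.feedback.append((item, corrected))
        return corrected

# Example: a reviewer overturns an uncertain "authentic" verdict.
queue = ReviewQueue(threshold=0.9)
verdict = queue.triage(
    item="clip_001.mp4",
    model_label="authentic",
    confidence=0.62,
    human_review=lambda item, label: "manipulated",
)
print(verdict)  # manipulated
```

The design choice worth noting is that human corrections are not discarded after each decision: accumulating them in `feedback` is what lets expert judgment shape subsequent versions of the tool, which is the point of the methodology.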
Key Takeaways
- The project validated its tools against real-world cases provided by media partners — a reminder that AI tools built without practitioner input rarely survive contact with operational reality
- Three tools are now publicly accessible: the Fake News Debunker plugin, Truly Media, and the Database of Known Fakes
- The broader application of this work extends beyond journalism into platform governance, regulatory compliance under the EU's Digital Services Act, and enterprise brand safety — any organization concerned about information integrity can draw lessons from this approach
As generative AI capabilities continue to democratize synthetic media creation, the tools and frameworks developed by projects like vera.ai will become increasingly relevant for marketing, communications, and legal teams managing brand reputation at scale.
🔗 Read the full article on CORDIS / European Commission
