How Journalists Are Using AI Tools in 2026—And What They’re Getting Wrong
A new report from Press Gazette surveyed journalists across major newsrooms about their AI tool usage in 2026. The findings are revealing—not just about what journalists are doing with AI, but about where the risks are being underestimated.
What Journalists Are Actually Using AI For
The most common uses are transcription, first-draft generation, research summarization, and headline testing. These are productivity applications—AI doing the time-consuming parts of journalism so reporters can focus on reporting.
Less common but growing: AI tools for data analysis, source identification, and audience engagement optimization. These applications are more consequential and more complex.
Where the Risks Are Emerging
Press Gazette identified three recurring failure modes:
- Accuracy drift: Journalists who rely on AI-generated summaries without checking primary sources introduce factual errors at a higher rate than those who verify against the originals.
- Voice homogenization: News organizations using AI for first-draft generation are producing content that sounds increasingly similar across outlets.
- Over-reliance in breaking news: AI tools trained on past data perform poorly in fast-moving situations where the context is new.
Key Takeaways
- AI tool adoption in journalism is widespread and accelerating—transcription and first-draft generation are now standard in many newsrooms.
- The failure modes aren’t about AI being bad—they’re about workflows that don’t include sufficient human verification.
- Organizations building AI into editorial workflows need verification checkpoints, not just deployment playbooks.
🔗 Read the full article on Press Gazette