Fact-Checkers Battle AI-Generated Hoaxes with Fewer Tools, Greater Stakes
Jerusalem, 7 August 2025 (TPS-IL) -- (Adnkronos) -- Artificial intelligence is radically transforming the disinformation landscape. David Puente, deputy editor and head of the fact-checking project at the online newspaper “Open,” tells Adnkronos that fact-checkers today face a much more powerful, faster, and often invisible enemy.
“The real challenge is that AI can create content that’s difficult to trace,” he explains. “Artificially generated images have no recognizable primary source, making it nearly impossible to trace ‘patient zero.’” Whereas in the past a hoax could be unmasked by tracing a photo or video back to its source, now the content is synthetic and has often already gone viral by the time alarm bells ring about its veracity.
The wars in Gaza and Ukraine are the perfect battleground for hoaxers and propagandists: “It’s one thing when Chef Rubio circulated images of children killed in Syria, passing them off as Palestinians, or when Moscow released videos purporting to show Russian flags flying from buildings in Odessa that were actually filmed in Khabarovsk, in the Russian Far East. It’s another thing entirely to contend with images generated out of thin air.”
True, in some AI-generated videos people have six fingers, shrivel up when they leave the center of the scene, or wear clothes printed with invented writing and numbers. “But the quality improves every week, and almost everyone watches videos on small phone screens, without noticing the details that those seeking authenticity focus on,” adds Puente, who sees no solution even in the hoped-for regulations that would require watermarks, the ‘stamps’ on images, to certify their synthetic origin.
“Those stamps can be covered, blurred, or cropped. And not even metadata is a guarantee: a video created with OpenAI’s Sora or Google’s Veo carries ‘traces’ that point to its origin, but if that video is re-saved by recording the screen, all traceability is lost,” he said.
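To make that fragility concrete, here is a minimal sketch in Python using the Pillow library. The file names are hypothetical and this is only an illustration of the general point: provenance metadata lives alongside the pixels, so any step that rewrites the pixels from scratch produces a file that carries none of it, and re-encoding a copy is the file-level analogue of recording the screen.

    # Illustrative sketch: read whatever metadata a file carries, then show
    # that a plain re-save produces a copy with none of it. File names are
    # hypothetical; requires Pillow (pip install Pillow).
    from PIL import Image

    def list_metadata(path: str) -> dict:
        """Collect EXIF tags and any extra info chunks the file carries."""
        img = Image.open(path)
        meta = {f"exif:{tag}": value for tag, value in img.getexif().items()}
        meta.update({f"info:{key}": value for key, value in img.info.items()
                     if key != "exif"})
        return meta

    original = "generated_frame.png"  # hypothetical still from an AI video
    print(len(list_metadata(original)), "metadata entries in the original")

    # Re-encoding writes fresh pixels and drops every provenance field,
    # much as pointing a screen recorder at the video would.
    Image.open(original).convert("RGB").save("resaved.jpg", quality=90)
    print(len(list_metadata("resaved.jpg")), "metadata entries after re-save")

The same logic explains the screen-recording loophole Puente describes: the capture tool sees only rendered pixels, never the original file, so even robust embedded credentials (such as C2PA manifests) do not survive the round trip.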
The foundation created by the late Yevgeny Prigozhin, the head of Wagner, released false dossiers targeting Ukraine, claiming that Kiev was genetically selecting soldiers to breed new generations of fighters to deploy against the Russians. “A complete hoax, but the original video, shot by a gynecologist filming his patients, had a watermark, which was later covered. It was also edited to change the aspect ratio of the footage, making it even more difficult to trace the source.”
Community Notes: Moderation or Echo Chamber?
Then there is Community Notes, the tool already adopted by X and now by Facebook and Instagram, which allows anyone who registers with specific credentials to flag content and contribute to collective moderation.
“The problem,” Puente explains, “is that sometimes the note is taken as valid even before being voted on by registered users. A domino effect is triggered: if it confirms what people already believe, it’s immediately considered true, even if it’s wrong.” And so a small army of self-styled debunkers ends up fueling the confusion.
A recent case involves a photo that Grok (Elon Musk’s chatbot) claimed showed a Yazidi girl in Syria in 2014, when in fact it was a recent image from Gaza. The error not only went viral, it inverted the usual pattern: the manipulation lay in the debunking, not in the original content.
The ease with which realistic photos and videos can be generated today has raised the bar. “It’s not just the image quality,” Puente observes, “but the lack of historical references: if an AI photo depicts an injured child, but that child doesn’t exist, how can you verify it?”
Added to this is a technical difficulty: the online tools available for detecting artificially generated content often yield flawed results. “A scanned photo of me from 2006 was identified as 100% AI-generated. The same goes for the Baggio sticker from the ’94 World Cup!” It’s a problem of pixels, of blurry and therefore unreliable backgrounds, but above all of prodigious machines that still have many limitations.
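The pixel problem he describes can be made concrete with a toy example, not any real detector, assuming only numpy and Pillow: a classifier that scores images by how little fine detail they retain will flag old scans and heavily compressed prints, which have lost exactly that detail, as “synthetic.”

    # Toy heuristic (not a real product) showing why pixel-statistics
    # detectors misfire: it measures how much of an image's spectral
    # energy sits outside the lowest frequencies. Blurry scans score low
    # and get mislabeled. File name and threshold are hypothetical.
    import numpy as np
    from PIL import Image

    def high_freq_ratio(path: str) -> float:
        """Share of spectral energy outside the central low-frequency band."""
        gray = np.asarray(Image.open(path).convert("L"), dtype=float)
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
        h, w = spectrum.shape
        ch, cw = h // 2, w // 2
        low = spectrum[ch - h // 8: ch + h // 8, cw - w // 8: cw + w // 8].sum()
        return 1.0 - low / spectrum.sum()

    score = high_freq_ratio("scanned_photo_2006.jpg")
    # A soft, over-smooth scan falls under the cutoff and is called AI.
    print("synthetic" if score < 0.35 else "authentic", f"(ratio={score:.2f})")

Real detectors are far more sophisticated, but to the extent they lean on low-level statistical cues like these, a soft 2006 scan or a reprinted World Cup sticker can trip them in the same way.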
The dilemma doesn’t spare even the traditionally most reliable sources.
“There have been cases where photo agencies have circulated incorrect material. A video attributed to bombings in Pakistan, actually filmed in Palestine, ended up on the news.” Another example: “After October 7th, several people in the Middle East sent me images of a mass grave. According to them, it showed the Israeli army dumping Palestinian bodies there. Instead, it dated back to the Syrian conflict. Imagine what would have happened if it had been published with that attribution. The truth is that in certain cases you have to have the courage to come second, to pause for a moment and conduct further checks. That is often something news outlets, which must always stay on top of the story, don’t like.”
One of the most dangerous aspects of AI applied to information is the illusion of impartiality. “What would an AI trained in Russia say about the Ukrainians? That Zelensky is a cocaine addict and the soldiers are all Nazis.” Cultural and political biases are transmitted to the algorithms, with chatbots producing content that seems neutral but is in fact deeply biased. “One minute they call me pro-Pal, the next pro-Zionist. But fact-checkers are human too; they make mistakes, and unfortunately they are labeled according to the convenience of the moment.”
Despite the challenges, the work of fact-checkers remains crucial. “There are more and more of us, even though in an ideal world our profession wouldn’t exist. We’ve structured ourselves into international networks, we collaborate with colleagues. We help each other to trace the origin of hoaxes and try to slow their spread. We establish ethical rules, we create standard verification procedures to minimize errors. But I don’t want people to tell me ‘I trust you.’ I shouldn’t trust myself too much either, or I’d fall into the same trap.”