Oren Etzioni, founder of TrueMedia.org.
Generative AI tools have made it easier to create fake images, videos, and audio. That sparked concern that this busy election year would be disrupted by realistic disinformation. The barrage of AI deepfakes didn't happen. An AI researcher explains why and what's to come.
Oren Etzioni has studied artificial intelligence and worked on the technology for well over a decade, so when he saw the huge election cycle of 2024 coming, he got ready.
India, Indonesia, and the US were just some of the populous nations sending citizens to the ballot box. Generative AI had been unleashed upon the world about a year earlier, and there were major concerns about a potential wave of AI-powered disinformation disrupting the democratic process.
“We’re going into the jungle without bug spray,” Etzioni recalled thinking at the time.
He responded by starting TrueMedia.org, a nonprofit that uses AI-detection technologies to help people determine whether online videos, images, and audio are real or fake.
The group launched an early beta version of its service in April so it would be ready for a barrage of realistic AI deepfakes and other misleading online content.
In the end, the barrage never came.
“It really wasn’t nearly as bad as we thought,” Etzioni said. “That was good news, period.”
He’s still slightly mystified by this, although he has theories.
First, you don’t need AI to lie during elections.
“Out-and-out lies and conspiracy theories were prevalent, but they weren’t always accompanied by synthetic media,” Etzioni said.
Second, he suspects that generative AI technology is not quite there yet, particularly when it comes to deepfake videos.
“Some of the most egregious videos that are truly realistic — those are still pretty hard to create,” Etzioni said. “There’s another lap to go before people can generate what they want easily and have it look the way they want. Awareness of how to do this may not have penetrated the dark corners of the internet yet.”
One thing he’s sure of: High-end AI video-generation capabilities will come. This might happen during the next major election cycle or the one after that, but it’s coming.
With that in mind, Etzioni shared lessons from TrueMedia's first go-round this year:
- Democracies are still not prepared for the worst-case scenario when it comes to AI deepfakes.
- There's no purely technical solution for this looming problem, and AI will need regulation. Social media has an important role to play.
- TrueMedia achieves roughly 90% accuracy, although people asked for more. It will be impossible to be 100% accurate, so there's room for human analysts.
- It's not always scalable to have humans at the end checking every decision, so humans only get involved in edge cases, such as when users question a decision made by TrueMedia's technology.
The group plans to publish research on its AI deepfake detection efforts, and it’s working on potential licensing deals.
“There’s a lot of interest in our AI models that have been tuned based on the flurry of uploads and deepfakes,” Etzioni said. “We hope to license those to entities that are mission-oriented.”