Sam Altman’s OpenAI is adding watermarks to AI images but acknowledged they could be easily removed.
Win McNamee and Didem Mente/Anadolu Agency via Getty Images
- OpenAI says it’s adding new digital watermarks to DALL-E 3 images.
- The company acknowledged the solution wasn’t perfect as it could be easily removed.
- There has been increasing concern about the spread of AI-generated misinformation amid elections.
OpenAI says it’s adding new digital watermarks to DALL-E 3 images.
In a blog post published Tuesday, the company said watermarks from the Coalition for Content Provenance and Authenticity, or C2PA, would be added to AI-generated images. OpenAI said it believed adopting the standard would help increase public trust in digital information.
Users can check whether an image was generated by AI using sites like Content Credentials Verify. The company noted that some media organizations are also adopting the standard to verify the source of content.
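For readers who want to look for the credentials themselves, here is a minimal sketch of how a C2PA manifest can be spotted in a JPEG. It assumes the C2PA spec’s embedding scheme (JUMBF boxes carried in JPEG APP11 marker segments) and uses a crude byte-level heuristic rather than the official c2pa tooling; the file path is hypothetical.

```python
import struct
import sys

def has_c2pa_jpeg(path):
    """Heuristic: scan a JPEG's APP11 segments for a C2PA (JUMBF) manifest.

    C2PA embeds its manifest store in JUMBF boxes carried in APP11
    (0xFFEB) marker segments, so b"c2pa" appearing inside one of
    those segments is a strong hint Content Credentials are present.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":        # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:            # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:             # SOS: compressed image data starts here
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 carrying JUMBF
            return True
        i += 2 + length                # skip marker (2 bytes) plus segment
    return False

if __name__ == "__main__":
    print(has_c2pa_jpeg(sys.argv[1]))  # e.g. python check_c2pa.py image.jpg
```

Sites like Content Credentials Verify go further, parsing the full manifest and cryptographically validating it against its signing certificate; this sketch only detects that a manifest appears to be present.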
However, OpenAI acknowledged the method wasn’t a perfect solution, saying the watermark could be easily removed.
There has been increasing concern about the spread of misinformation — particularly AI-generated audio, images, and videos — amid upcoming elections.
As billions head to the polls this year, voters have already run into issues with AI-generated content, including robocalls impersonating Joe Biden and fake video ads of Rishi Sunak. Explicit deepfakes of Taylor Swift also made headlines last month, prompting international condemnation and legislative action.
Fellow tech company Meta has also signaled it’s preparing to crack down on the spread of AI-generated content. The company said on Tuesday it planned to attach labels to AI-generated images on Facebook, Instagram, and Threads.
Not a ‘silver bullet’
OpenAI knows the plans aren’t perfect.
The company said C2PA metadata is already included in images produced with the web version of DALL-E 3 and plans to extend this to mobile users by February 12.
However, OpenAI said the metadata was not “a silver bullet to address issues of provenance,” noting it could easily be removed either accidentally or intentionally.
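That fragility is easy to demonstrate. The sketch below, a minimal illustration assuming Pillow and the hypothetical file names shown, re-encodes an image the way a typical upload pipeline might; because most image libraries do not carry unrecognized application segments through a save, the C2PA data is silently dropped.

```python
from PIL import Image  # pip install pillow

# Re-encoding an image with a common library drops application
# segments it does not recognize, including the APP11/JUMBF data
# that C2PA manifests live in. Both file names are hypothetical.
img = Image.open("dalle3_image.jpg")
img.save("re_encoded.jpg", quality=95)  # metadata is not carried over

# Using has_c2pa_jpeg from the earlier sketch:
#   has_c2pa_jpeg("dalle3_image.jpg")  -> True  (credentials present)
#   has_c2pa_jpeg("re_encoded.jpg")    -> False (credentials stripped)
```

Screenshots have the same effect, since a screenshot is a brand-new image carrying none of the original’s metadata.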
Flagging AI-generated content has proved difficult across the board. Studies have found that most forms of digital watermarking are rife with weaknesses that malicious actors can easily exploit.
Early attempts to build systems that detect AI-written text have also proved relatively fruitless. OpenAI quietly took down its own AI text classifier over accuracy concerns.
Representatives for OpenAI did not immediately respond to a request for comment from Business Insider, made outside normal working hours.