Highlights:
- OpenAI has emphasized that it is fully aware that deepfakes are likely to become a more significant problem as technology advances. This underscores the critical importance of developing effective detection tools.
- The C2PA standard aims to incorporate a “nutritional label” for content that has been altered, spanning still images, text, audio, or video. This label is expected to specify when and how the content was altered, potentially alleviating concerns raised by politicians advocating for increased legislation around AI.
As concerns escalate regarding the spread of deepfake images potentially impacting this year’s numerous elections, OpenAI has unveiled a new tool designed to identify whether an image was generated using its own DALL-E AI image generator.
Regrettably, OpenAI’s deepfake detector works only on images generated by its own generative AI. Nonetheless, it marks a significant step forward. For years, deepfake images, particularly manipulated sexual photos, have been a pervasive problem on the internet. Concerns persist that as image generators advance, the web, and social media platforms in particular, will be inundated with increasingly offensive fakes.
OpenAI has emphasized that it is fully aware that deepfakes are likely to become a more significant problem as technology advances, which underscores the critical importance of developing effective detection tools. OpenAI’s deepfake detector uses AI to predict whether an image was generated by DALL-E rather than captured or created elsewhere. The company reported a detection accuracy rate of over 98% for identifying images generated with DALL-E. However, accuracy dropped sharply, to between 5% and 10%, when analyzing pictures produced by other companies’ image generators.
Progress in this domain is expected to be gradual, which is why OpenAI has joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), a coalition that brings together various companies to advance technical standards for digital content provenance. Google LLC recently joined the board alongside Adobe Inc., Meta Platforms Inc., Intel Corp., Microsoft Corp., Sony Corp. and OpenAI.
The C2PA standard aims to incorporate a “nutritional label” for content that has been altered, spanning still images, text, audio, or video. This label is expected to specify when and how the content was altered, potentially alleviating concerns raised by politicians advocating for increased legislation around AI. The EU and the UK seem to be leading in this area compared to the U.S.
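To give a sense of what such a “nutritional label” can carry, here is a simplified, hypothetical sketch of the kind of provenance record the C2PA assertion model describes. The tool name and timestamps are invented for illustration, and this is not a complete or validated C2PA manifest, which in practice also includes cryptographic signatures binding the record to the content:

```json
{
  "claim_generator": "ExampleImageTool/1.0",
  "assertions": [
    {
      "label": "c2pa.actions",
      "data": {
        "actions": [
          {
            "action": "c2pa.created",
            "when": "2024-05-07T12:00:00Z",
            "digitalSourceType": "trainedAlgorithmicMedia"
          },
          {
            "action": "c2pa.edited",
            "when": "2024-05-07T12:30:00Z"
          }
        ]
      }
    }
  ]
}
```

A viewer that understands the standard could surface this record to show that the image was produced by an AI model and later edited, and when each step happened.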
“Our efforts around provenance are just one part of a broader industry effort – many of our peer research labs and generative AI companies are also advancing research in this area,” stated OpenAI in a blog post about its new detector. “We commend these endeavors — the industry must collaborate and share insights to enhance our understanding and continue to promote transparency online.”