Highlights:
- Google has announced that YouTube users can request the removal of videos containing realistic-looking synthetic depictions of individuals as part of the platform’s new policy.
- Meta Platforms Inc., Google’s primary rival in the digital advertising market, has unveiled a comparable policy for Instagram and Facebook, set to take effect next year.
Google LLC will now require users to disclose whether the videos they upload to YouTube contain content generated with realistic-looking artificial intelligence tools.
In a recent blog post, Jennifer Flannery O’Connor and Emily Moxley, product management executives at YouTube, detailed the new policy. Set to roll out in the coming months, it will apply to all videos featuring realistic-looking synthetic content, covering both footage generated by AI and real clips that have been digitally altered.
Once the policy takes effect, YouTube’s video upload tool will include new options that let users specify whether a video contains synthetic content. Google warns that creators may face penalties if they fail to make that disclosure.
Flannery O’Connor and Moxley posted, “Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.”
When a user marks a newly uploaded video as containing synthetic content, YouTube will include a disclosure in the video’s description. If the synthetic content concerns a sensitive topic, the Google unit will embed a more prominent disclosure directly in the video player. Moxley and Flannery O’Connor added, “Some synthetic media, regardless of whether it’s labeled, will be removed from our platform if it violates our Community Guidelines.”
Google is introducing the policy alongside a mechanism for requesting the removal of AI-generated content: YouTube users can ask the platform to take down videos containing realistic-looking synthetic depictions of individuals. Music labels can likewise request the removal of videos that imitate an artist’s voice.
Google is working to address the potential risks posed by synthetic content on YouTube, including efforts related to its Dream Screen generative AI tool. Introduced in September, Dream Screen allows YouTube users to create background images for videos using natural language instructions. Google will add disclosures to content generated with Dream Screen.
The company plans to build guardrails into future additions to YouTube’s generative AI feature set. When Dream Screen was announced in September, Google outlined plans for an AI-powered video remixing feature, and the search giant is also developing a machine-learning capability that generates new video clips from text prompts.
The unveiling of YouTube’s synthetic content policy follows Google’s introduction of disclosure rules for AI-generated political ads a few months ago. Under these rules, political organizations must “prominently disclose” the use of realistic-looking synthetic content in their ads. Meta Platforms Inc., Google’s primary rival in the digital advertising market, has unveiled a comparable policy for Instagram and Facebook, set to take effect next year.