YouTube has unveiled a host of measures and guidelines to combat AI-related misuse, particularly for AI-generated and deep fake music.
The decision comes amid mounting pressure to take action over AI-generated music and deep fake content, exemplified by songs like “Heart on My Sleeve,” featuring AI-generated vocals resembling Drake and the Weeknd without their permission.
Universal Music Group criticized the song for “infringing content created with generative AI.”
After some debate, the Grammys recently declared that such compositions won’t be considered for awards.
YouTube is updating its privacy complaint process to allow complaints about deep fakes, while specifying that not all such content will be removed; factors like parody and satire will play a role in its decisions.
Additionally, creators must now disclose when they use manipulated or synthetic content, including AI-generated material.
This is particularly emphasized for content discussing sensitive topics like elections and public health crises. YouTube has also warned that non-compliance with these guidelines could lead to content removal or suspension of advertising payments.
Moreover, a label identifying AI-generated content will be added to video descriptions, with more prominent labeling for sensitive topics.
The company has stated that AI-generated material violating existing content guidelines, such as synthetically created violent videos, will also be removed.
Here’s a summary of YouTube’s armory of tools to combat AI misuse:
- Disclosure requirements and content labels: Creators must disclose if their content includes altered or synthetic material that is realistic, particularly when using AI tools. This is crucial for content discussing sensitive topics like elections and public health crises. Non-compliance may lead to content removal or other penalties.
- Viewer information: YouTube plans to inform viewers about synthetic content through labels on the description panel and, for sensitive topics, more prominently on the video player.
- Content removal in high-risk cases: Synthetic media that poses a risk of harm, regardless of labeling, will be removed if it violates Community Guidelines. This includes content depicting realistic violence aimed to shock or disgust viewers.
- Privacy request process: YouTube will allow requests to remove AI-generated or altered content that simulates an identifiable individual’s face or voice. Decisions will be based on factors like parody or satire, the person’s identifiability, and the individual’s public status.
- Special provisions for music content: Music partners can request the removal of AI-generated music that mimics an artist’s unique voice. YouTube will evaluate factors like news reporting or critique relevance in these cases.
- AI in content moderation: YouTube employs a blend of AI technology and human review to enforce Community Guidelines. AI classifiers aid in detecting potential violations, while human reviewers confirm policy breaches. Generative AI is particularly useful in identifying and addressing novel forms of abuse.
- Building responsibility for AI tools: YouTube is focused on developing AI tools with built-in safeguards to prevent the generation of inappropriate content. They are also preparing for potential circumvention by bad actors through continuous improvement and dedicated threat detection teams.
This follows a broader industry trend toward flagging AI-generated content that isn’t intended as a genuine, trustworthy contribution to platforms.
Companies like Meta are also introducing requirements for political advertisers to disclose the use of AI in ads.
These moves result from a mixture of public and private pressure, with AI companies seeking to demonstrate a willingness to honor the commitments made in various voluntary frameworks.