YouTube will soon require creators to disclose whether a video was made with generative AI.
On Tuesday, the video streaming giant announced this and other updates to mitigate the misleading or harmful effects of generative AI.
“When creators upload content, we’ll have new options for them to choose from to show that it contains realistically altered or synthetic material,” said Jennifer Flannery O’Connor and Emily Moxley, vice presidents of product management at YouTube.
[Image: What YouTube's labels indicating AI-generated content will look like.]
Creators who fail to consistently disclose their use of AI may face penalties such as content removal or suspension from the YouTube Partner Program. The announcement also says that artists and creators will be able to request the removal of content (including music) that uses their likeness without consent.
The widespread use of generative AI has heightened the threat of deepfakes and disinformation, especially ahead of the upcoming presidential election. Both the public and private sectors have recognized the need to detect and prevent misuse of generative AI.
For example, President Biden's AI executive order specifically addresses the need to label or watermark AI-generated content. OpenAI is working on its own tool, a "provenance classifier," which detects whether an image was made with its DALL-E 3 generator. And just last week, Meta announced a new policy requiring political advertisers to disclose whether an ad uses generative AI.
On YouTube, when a creator uploads a video, they will be given the option to indicate whether it “contains realistically altered or synthetic material,” the blog post said. “For example, it could be an AI-generated video that realistically depicts an event that never happened, or content that shows someone saying or doing something they didn’t actually do.”
Labels informing viewers that a video contains AI-generated or altered content will be added to the description panel, and a "more prominent label" will be applied to content involving sensitive topics. Even appropriately labeled AI-generated content will be removed if it violates YouTube's Community Guidelines.
How will moderation of all this content be enforced? Through AI, of course. The same generative technology that can produce convincingly fake content can also help identify content that violates platform policies. YouTube says it will deploy generative AI to help contextualize and understand threats at scale.