YouTube has announced a new measure aimed at promoting transparency on its platform, specifically concerning content generated by artificial intelligence (AI). With the increasing use of generative AI in content creation, the video-sharing platform seeks to enable users to recognize whether the material they are consuming is authentic or not.
To achieve this, the platform has added a new tool in Creator Studio that requires creators to disclose to viewers when realistic content (content that viewers could easily mistake for a real person, place, or event) is created using altered or synthetic means, including generative AI. These disclosures will appear as labels in the expanded description or on the video player itself.
New Tool on YouTube to Label AI-Generated Content

[Image: The new tool in Creator Studio allows you to label AI-generated content. Source: YouTube.]
Cases Requiring Clarification of AI-Generated Content

In its official blog, the platform clarifies that creators will not be required to disclose the use of generative AI for productivity tasks, such as generating scripts or automatic subtitles, nor when synthetic media is unrealistic or the changes are inconsequential, such as color adjustments or lighting filters. Examples of content that does require disclosure include:
- Using the image of a realistic person: digitally altering content to replace an individual's face with another, or synthetically generating a person's voice to narrate a video.
- Altering images of real events or places: for example, making it appear as if a real building is on fire, or altering a real urban landscape to look different from reality.
- Generating realistic scenes: depicting a realistic representation of significant fictional events, such as a tornado advancing toward a real city.
[Image: Example of a label on the video player. Source: YouTube.]
How and When These Changes Will Be Seen

According to the company, a label will appear in the expanded description of videos, and for videos addressing more sensitive topics (such as health, news, elections, or finance), a more prominent label will also be displayed on the video player itself.
[Image: Example of a label in the expanded description. Source: YouTube.]
This measure will be rolled out across all YouTube surfaces and formats in the coming weeks, starting with the mobile app, then on PC, and finally on TV. The company notes that "while we want to give our community time to adapt to the new process and features, in the future, we will consider enforcement measures for creators who consistently choose not to disclose this information."
Moreover, YouTube is also working on an updated privacy process to allow individuals to request the removal of content generated by AI or other synthetic or altered content that simulates an identifiable individual, such as their face or voice.
Concerns Regarding AI Usage

While AI promises to revolutionize numerous aspects of our daily lives, from healthcare to traffic management, it has also raised a series of concerns due to its potential for malicious or harmful use.
One of the most critical points in this debate is the creation and spread of deepfakes: synthetic media produced with AI techniques that manipulate images and videos to create convincing false representations of real people. These videos, often distributed online, can impersonate the voice and appearance of public figures to spread misinformation or political propaganda, or even to perpetrate scams and extortion.
The fear of deepfakes has raised concerns about trust in digital media and the ability to discern between reality and fiction in an environment increasingly saturated with manipulated information. Additionally, AI has also been employed to produce inappropriate content, such as artificially generated pornographic images and videos, posing serious risks of exploitation and psychological harm, especially to minors.
Examples abound: images of Pope Francis wearing fashionable outfits surfaced last year, President Emmanuel Macron of France was depicted participating in protests, and images of Donald Trump being arrested circulated widely. None of these scenarios actually occurred.
These phenomena have led to a widespread call to action, both from governments and from technology companies and civil society, to address the ethical and security challenges posed by AI. The need to regulate and supervise the development and use of AI has become increasingly urgent.