China Rolls Out Law Requiring Platforms To Label AI-Generated Content As Such

As AI becomes more and more powerful, governments across the world are scrambling to adapt.

China has begun implementing a new law that requires social media companies to label AI-generated content as such. The law was passed in March this year, and China began rolling it out this week. Top Chinese social media sites have introduced features that ensure AI-generated content is labeled on their platforms.

The law requires explicit and implicit labels for AI-generated text, images, audio, video and other virtual content. Explicit markings must be clearly visible to users, while implicit identifiers, such as digital watermarks, should be embedded in the metadata.

WeChat has said content creators must proactively declare all AI-generated content upon publication. In a post, the company said that it “strictly prohibits the deletion, tampering, forgery, or concealment of AI labels added by the platform, as well as the use of AI to produce or spread false information, infringing content, or any illegal activities”. Douyin, the Chinese version of TikTok, said that it encourages creators to add visible labels to every AI-generated piece of content they post, and added that it also detects the source of every piece of content through its metadata. Weibo even gave users the option to flag AI-generated content that was not labeled. RedNote said that it would automatically add markers to AI-generated content that hadn’t been marked.

China isn’t the only country to have passed such a law, though it might be the first to enforce it. The EU’s Artificial Intelligence Act requires platforms and businesses to clearly disclose when users are interacting with AI systems such as chatbots, and to label any audio, images, video, or text created by generative AI so that consumers are not misled. Spain also introduced strict labeling rules for AI-generated content in 2024, aiming to clarify the origin and nature of digital material and combat misinformation online. Australia’s National AI Ethics Framework calls for transparency and labeling of AI-generated content, especially for high-risk applications, and the country is moving toward more enforceable disclosure mandates. Canada’s proposed Artificial Intelligence and Data Act (AIDA) is expected to soon require AI content labeling, especially in scenarios with a high impact on individuals’ rights.

The rollout of China’s AI-labeling law reflects a larger global trend: as generative AI blurs the line between authentic and synthetic media, transparency is becoming ever more crucial. Governments seem to be stepping in to maintain trust in digital ecosystems and are trying to ensure that consumers aren’t misled by AI-generated media. And with AI now producing lifelike pictures and videos in addition to text, such laws could be crucial to ensure that bad actors don’t misuse AI at the cost of unsuspecting consumers.

Posted in AI