
Fine-tune the AI labelling regulations framework


Two months ago, millions of Indians watched Finance Minister Nirmala Sitharaman speak about indirect tax reforms. At the same time, a video travelled across instant messaging platforms in which the Minister appeared to endorse an investment scheme promising “30x returns in seven days”. A resident of Roorkee (Uttarakhand) lost ₹66 lakh after viewing this viral video, which was later found to have been created using Artificial Intelligence (AI) tools.

The rapid rise of near-indistinguishable digital alterations demands urgent, multi-stakeholder action.

Although the government initially held that the existing framework was sufficient to tackle synthetic media, it has now introduced draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The amendments mandate that large social media platforms, designated Significant Social Media Intermediaries (SSMIs), clearly label synthetic or AI-generated media.

While the proposed rules mark a meaningful step forward, their real-world implementation will be complex and require engagement across multiple stakeholders.

Noble intent, ambiguous grouping

Synthetic media is defined as content that is artificially or algorithmically created, modified, or generated to appear authentic. However, this definition sweeps in a broad range of content created with computer-generated imagery or altered with editing software, which is not technically produced by generative AI, making labelling complicated. Given the volume of synthetic media and the fact that not all of it is problematic, the focus should be on harmful and/or misleading synthetic media. To put things in perspective, over 50% of all content on the Internet is now considered AI-generated, as per a recent report.

To fix accountability, the draft rules mandate that platforms introduce labels covering at least 10% of the visual area of synthetic videos, or the initial 10% of the duration of synthetic audio; but the rule's application to mixed media (say, real visuals with cloned audio) remains unclear.

Additionally, will a three-second disclaimer in a 30-second audio clip be effective, or will it be ignored like the fine print in advertisements? Similarly, will a three-minute disclaimer in a 30-minute video inform viewers or overwhelm them?
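The arithmetic behind these questions can be sketched in a few lines of Python. The function below is a hypothetical illustration, and it assumes (as the draft does not yet clarify) that the visual and audio thresholds apply independently to each track:

```python
# A minimal sketch of the draft rule's 10% thresholds, for illustration only.
# Assumption: the visual and audio thresholds apply independently per track;
# the draft does not yet settle mixed media (real visuals, cloned audio).

def minimum_label(video_area_px: float | None = None,
                  audio_duration_s: float | None = None) -> dict:
    """Return the minimum label obligations implied by the draft 10% rule."""
    obligations = {}
    if video_area_px is not None:
        # Labels must cover at least 10% of the visual area.
        obligations["label_area_px"] = 0.10 * video_area_px
    if audio_duration_s is not None:
        # The disclaimer must run through the initial 10% of the audio.
        obligations["disclaimer_seconds"] = 0.10 * audio_duration_s
    return obligations

print(minimum_label(audio_duration_s=30))    # {'disclaimer_seconds': 3.0}, i.e. 3 s of a 30 s clip
print(minimum_label(audio_duration_s=1800))  # {'disclaimer_seconds': 180.0}, i.e. 3 min of 30 min
```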

We are still in the development phase of AI, and any prescriptive mandate on labels risks being neither principle-based, nor future-proof, nor technology-neutral. In some cases, as with the 10% rule, it may not even meet the reasonable person test.

But it is not a question of labels alone. Watermarks promised by AI companies lack reliability. Within days of a large company releasing a text-to-video social media platform, with assurances that its videos would bear watermarks declaring them synthetic, tools emerged that could scrub these markings entirely.

Consequently, the framework needs fine-tuning and precise standards for each category. A tiered-labelling system that distinguishes between ‘fully AI-generated’, ‘AI-assisted’, and ‘AI-altered’ content may be more effective.
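As a sketch of what such a taxonomy could look like in a platform's codebase (the tier names are the ones proposed above; the one-line criteria are assumptions):

```python
from enum import Enum

# Illustrative tiers for the tiered-labelling system suggested above.
# The tier names come from the proposal; the criteria are assumptions.
class SyntheticTier(Enum):
    FULLY_AI_GENERATED = "fully AI-generated"  # produced end-to-end by a model
    AI_ASSISTED = "AI-assisted"                # human-made, with AI tooling
    AI_ALTERED = "AI-altered"                  # real content modified by AI

# A tiered label lets a platform scale disclosure to the degree of synthesis.
print(SyntheticTier.AI_ALTERED.value)  # "AI-altered"
```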

Graded compliance, targeted intervention

The proposed rules mandate intermediaries such as Facebook, Instagram, YouTube and X to analyse and label synthetic media; the scope must be broadened to include creators directly. Creators frequently employ AI for visual storytelling and to generate avatars and digital twins, but few inform their audiences of this use. Although certain videos exhibit clear indicators of manipulation, other synthetic media have now achieved such realism that viewers struggle to distinguish between human- and AI-created content.

Creators above a follower threshold should disclose AI use, much as SSMIs must; voluntary self-labelling can be promoted among smaller creators.

Graded compliance will acknowledge that professional creators hold significant influence and, therefore, owe transparency to their audiences. It can help creators not just gain and retain public trust but also keep up with evolving regulations.

Verification needs more hands on deck

Currently, the rules require SSMIs to ask users to label their content as synthetic. Platforms also have to deploy tools to verify the accuracy of such declarations. But synthetic media is multiplying faster than verification technology, and platforms have, so far, had limited success with labelling.

Most social media platforms adhere to Coalition for Content Provenance and Authenticity (C2PA) standards to identify and establish the origins of digital content. However, C2PA is still evolving, and labelling content under it is not yet the norm. Besides, it is challenging for social media platforms to detect AI-generated or algorithmically created content on their own. Ultimately, the platforms would require the assistance of third-party detection tools, which are only as reliable as their training data and accuracy.

So far, platforms have not refined their tools. An audit by Indicator, a publication that monitors digital deception, found that most failed to label AI content: only 30% of its 516 AI-generated test posts across Instagram, LinkedIn, Pinterest, TikTok and YouTube were correctly flagged. Google and Meta did not label content from their own AI tools, TikTok flagged only its in-app creations, and Pinterest, the top performer, effectively labelled just 55%.

As the focus shifts to providing credible information, the social media ecosystem should also rely on the discernment of independent information verifiers and auditors. This is especially critical for harmful, fraudulent and misleading content where the stakes are high. Such auditors can be trusted to close gaps in automated detection systems through human judgment, helping platforms become more resilient to deepfakes and protecting users.
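A layered verification pipeline of this kind might look as follows. This is a hypothetical sketch: the helper functions stand in for the user declaration, the C2PA provenance check, the third-party detector, and the independent human auditors described above, and the score thresholds are illustrative.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    SYNTHETIC = auto()
    AUTHENTIC = auto()
    NEEDS_REVIEW = auto()

@dataclass
class Post:
    media_path: str
    user_declared_synthetic: bool

def check_c2pa_manifest(path: str) -> bool | None:
    """Stub: a real version would parse C2PA provenance metadata.
    Returns True/False if a manifest settles the question, None if absent."""
    return None

def run_detector(path: str) -> float:
    """Stub: a real version would call a third-party detection model,
    which is only as reliable as its training data. Returns a 0-1 score."""
    return 0.5

def verify(post: Post) -> Verdict:
    # Layer 1: the user's own declaration, which the draft rules
    # require platforms to solicit.
    if post.user_declared_synthetic:
        return Verdict.SYNTHETIC
    # Layer 2: provenance metadata, where the C2PA chain is intact.
    manifest_says_ai = check_c2pa_manifest(post.media_path)
    if manifest_says_ai is not None:
        return Verdict.SYNTHETIC if manifest_says_ai else Verdict.AUTHENTIC
    # Layer 3: automated detection, trusted only at confident scores.
    score = run_detector(post.media_path)
    if score > 0.9:
        return Verdict.SYNTHETIC
    if score < 0.1:
        return Verdict.AUTHENTIC
    # Layer 4: human judgment closes the gap for uncertain, high-stakes content.
    return Verdict.NEEDS_REVIEW

print(verify(Post("clip.mp4", user_declared_synthetic=False)))  # Verdict.NEEDS_REVIEW
```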


The adage, “If it sounds too good to be true, it probably is”, will soon be codified into India’s IT laws. If the framework is implemented with nuance, users will no longer need to second-guess authenticity; the label will provide clarity.

Rakesh R. Dubbudu is President of the Trusted Information Alliance (TIA), a cross-industry collaborative effort dedicated to championing information integrity and safeguarding users online. Rajneil R. Kamath is Vice-President of the Trusted Information Alliance (TIA)

Published – November 13, 2025 12:08 am IST


