Govt. Proposes Mandatory Labelling of AI-Generated Content
Oct. 23, 2025

Why in the News?

  • The Ministry of Electronics and Information Technology has proposed amendments to the IT Rules, 2021, making it mandatory to label and declare AI-generated content on social media platforms to curb deepfakes and misinformation.

What’s in Today’s Article?

  • AI-Generated Content (Introduction, Rise in Deepfakes, Frauds, etc.)
  • News Summary (Key Provisions of Proposed Rules, Rationale Behind Proposal, Challenges, Future Outlook, etc.)

Rise of AI-Generated Content in India

  • India is witnessing a rapid surge in the use of artificial intelligence (AI) for content creation across social media, advertising, and entertainment.
  • However, this rise has also sparked concerns about synthetic content, especially deepfakes, which use AI to fabricate hyper-realistic images, audio, and videos.
  • These manipulations blur the line between reality and fiction, often being used for political propaganda, financial fraud, and reputational damage.
  • Deepfakes gained national attention in 2023, after a digitally altered video of an actor went viral, prompting widespread outrage and a strong government response.
  • Prime Minister Modi termed deepfakes a new “crisis,” emphasising the need for regulatory intervention.

News Summary

  • The draft amendment to the IT Rules, 2021, seeks to make it mandatory for creators and platforms to declare and label AI-generated content, including text, audio, video, and images, uploaded to the internet.
  • The proposal aims to enhance transparency, protect users from misinformation, and safeguard democratic discourse in the digital space.

Key Provisions of the Proposed Rules

  • Mandatory Self-Declaration by Creators:
    • Users uploading content on social media platforms like YouTube, Instagram, and X (formerly Twitter) must declare whether their content is synthetically generated.
  • Dual Labelling Mechanism:
    • Platforms must ensure that AI-generated content carries two visible markers:
      • An embedded label or watermark within the content itself, covering at least 10% of the visual display area, or the initial 10% of the audio's duration.
      • A platform-level label displayed wherever the content appears online.
  • Platform Accountability:
    • If users fail to make such declarations, social media companies will be responsible for proactively detecting and labelling AI-generated content using technical measures and automated tools.
  • Definition of Synthetic Content:
    • The draft defines synthetically generated information as “information artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears reasonably authentic or true.”
  • Consequences of Non-Compliance:
    • Platforms that fail to verify and label synthetic content may lose their legal immunity under Section 79 of the IT Act, which currently shields intermediaries from liability for third-party content.
  • Metadata and Identifier Requirement:
    • AI-generated material must be embedded with a unique metadata identifier that remains permanent and traceable, ensuring accountability in case of misuse.
  • Scope of Application:
    • The draft applies not only to popular social platforms but also to AI content creation tools such as OpenAI’s Sora and Google’s Gemini, which must implement built-in watermarking and labelling systems.
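In engineering terms, the dual-labelling and metadata requirements above would amount to a tagging step in a platform's upload pipeline. The sketch below is purely illustrative and not derived from the draft rules: the field names and the SHA-256-based identifier scheme are assumptions, shown only to make the idea of a "permanent, traceable identifier" concrete.

```python
import hashlib
import json

def tag_synthetic_content(content_bytes: bytes, creator_declared: bool) -> dict:
    """Attach a platform-level label and a traceable metadata identifier
    to synthetically generated content (illustrative sketch only)."""
    # Deriving the identifier from the content bytes makes it reproducible
    # and tamper-evident: altering the content changes the identifier.
    identifier = hashlib.sha256(content_bytes).hexdigest()
    return {
        "synthetic": True,
        "declared_by_creator": creator_declared,   # creator self-declaration
        "platform_label": "AI-generated content",  # platform-level visible label
        "metadata_id": identifier,                 # unique, traceable identifier
    }

record = tag_synthetic_content(b"example-video-bytes", creator_declared=True)
print(json.dumps(record, indent=2))
```

A real implementation would embed such metadata inside the media container itself (and pair it with a visible watermark), but the basic accountability logic is the same: every piece of synthetic content carries both a human-visible label and a machine-readable identifier.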

Rationale Behind the Proposal

  • According to IT Minister Ashwini Vaishnaw, the move addresses growing public concern about the misuse of synthetic content for manipulation. He stated:
    • “People are using prominent personalities’ images and creating deepfakes that affect personal lives and cause social misconceptions. In a democracy, users should know what is real and what is synthetic.”
  • The ministry’s explanatory note underlines that generative AI has the potential to create convincing falsehoods, leading to reputational harm, election interference, and financial fraud.
  • By mandating clear disclosure, the government seeks to empower users to make informed judgments about the authenticity of content.
  • This marks a significant shift in India’s digital governance approach. Earlier, the government relied on general impersonation and fraud provisions under the Information Technology Act, 2000, but the increasing sophistication of AI tools has necessitated specific regulatory safeguards.

International Context and Comparative Framework

  • India’s move aligns with global trends in AI regulation.
    • China introduced similar AI labelling laws in 2025, requiring clear visual tags and hidden watermarks on AI-generated media, from chatbot output to synthetic videos.
    • The European Union’s AI Act mandates transparency in AI interactions, requiring users to be notified when engaging with AI-generated content.
    • The United States is developing federal guidelines for content authenticity, while tech firms such as Meta, Google, and OpenAI have pledged voluntary watermarking standards.
  • India’s draft rules, therefore, position it among the early adopters of a legally binding framework to combat misinformation in the age of generative AI.

Implementation Challenges and Future Outlook

  • While the proposal has received broad support, experts caution about implementation complexity. Identifying AI-generated content in real time, especially across diverse formats and languages, requires advanced detection infrastructure.
  • Moreover, there is the challenge of balancing regulation with innovation: excessive compliance burdens could deter small creators and startups in India’s fast-growing AI ecosystem, projected to exceed $12 billion by 2030.
  • To address this, the government has invited public and industry feedback on the draft until November 6, 2025, signalling its intent to refine the framework before final notification.
  • If executed effectively, the labelling rule could become a global model for responsible AI governance, ensuring that technological progress does not come at the cost of truth and public trust.
