Regulating Synthetic Media - India Tightens IT Rules on AI-Generated Content
Feb. 11, 2026

Why in News?

  • The Union Government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
  • The changes strengthen the regulation of AI-generated (synthetic) content and drastically shorten takedown timelines for unlawful material.
  • The amendments (effective February 20, 2026) aim to curb the spread of non-consensual deepfakes, intimate imagery, and unlawful content, while reinforcing platform accountability under the IT Act, 2000.

What’s in Today’s Article?

  • Key Amendments at a Glance
  • Safe Harbour and Intermediary Liability
  • Administrative Changes
  • Trigger Events - The Global Deepfake Crisis
  • Governance and Constitutional Dimensions
  • Key Challenges
  • Way Forward
  • Conclusion

Key Amendments at a Glance:

  • Sharp reduction in removal timelines:
    • New timelines:
      • For content declared illegal by a court or the government, the takedown timeline is reduced to 3 hours (from the earlier 24–36 hours).
      • For non-consensual intimate imagery and deepfakes, the timeline is now 2 hours (earlier 24 hours); for other unlawful content, it drops from 36 to 3 hours.
    • Rationale: Earlier timelines were seen as ineffective in preventing virality. The government argues tech companies possess sufficient technical capacity for faster removal.
    • Concerns: Determining “illegality” within 2–3 hours is operationally difficult. Risk of over-censorship and precautionary takedowns. Increased compliance burden for intermediaries.
  • Mandatory Labelling of AI-Generated Content:
    • Legal definition of “Synthetically Generated Information (SGI)”: Audio, visual or audio-visual content artificially created, generated, modified, or altered using a computer resource in a way that makes it appear real or indistinguishable from authentic events or persons.
    • Important features:
      • AI-generated imagery must be labelled “prominently.”
      • The earlier proposal requiring 10% of image space to carry the label has been diluted.
      • Platforms must seek user disclosure for AI-generated content, proactively label content if disclosure is absent, and remove non-consensual deepfakes.
    • Exclusions: Routine editing and quality-enhancement tools (e.g., smartphone touch-ups) are excluded, narrowing the scope compared with the October 2025 draft.

Safe Harbour and Intermediary Liability:

  • What is Safe Harbour? Under Section 79 of the IT Act, 2000, intermediaries are protected from liability for user-generated content, provided they exercise “due diligence.”
  • Amendment impact:
    • If an intermediary knowingly permits, promotes, or fails to act against unlawful synthetic content, it may be deemed to have failed due diligence.
    • This may result in loss of safe harbour protection, significantly increasing regulatory pressure on platforms.

Administrative Changes:

  • The amendment partially rolls back an earlier rule that limited States to appointing only one officer for issuing takedown orders.
  • Now, States can designate multiple authorised officers, addressing the administrative needs of populous States.

Trigger Events - The Global Deepfake Crisis:

  • The urgency follows global controversies, including AI platforms generating non-consensual intimate images of women.
  • Such incidents raise privacy and gender-dignity concerns, threaten democratic discourse, and misrepresent real-world events.
  • This places the reform within a broader international debate on AI governance and platform accountability.

Governance and Constitutional Dimensions:

  • Article 19(1)(a) – Freedom of Speech: Overbroad or rushed takedowns may chill legitimate expression. Short timelines increase risk of defensive over-removal.
  • Article 21 – Right to Privacy and Dignity: Faster removal of non-consensual deepfakes strengthens protection of individual dignity.
  • Federal implications: Allowing multiple State officers enhances decentralised enforcement.

Key Challenges:

  • Determining illegality within 2–3 hours: Legal ambiguity persists, and law-enforcement communications may lack clarity.
  • Risk of over-censorship: Platforms may err on the side of removal, which could undermine free speech and digital innovation.
  • Compliance burden on Big Tech: Real-time moderation requires high-end AI tools and human review. Smaller platforms may struggle disproportionately.
  • Verification mechanisms: Ensuring the authenticity of user declarations and deploying “reasonable technical measures” without violating privacy.

Way Forward:

  • Clearer illegality standards: Develop structured guidance for platforms, and standardised digital takedown protocols.
  • Independent oversight mechanism: Appellate or review authority to check arbitrary takedowns.
  • Strengthening AI detection tools: Promote indigenous AI detection systems under India’s AI mission.
  • Harmonisation with Digital Personal Data Protection Act: Ensure consistency in privacy and consent standards.
  • Capacity building for States: Training authorised officers in cyber law and AI governance.

Conclusion:

  • India’s amended IT Rules reflect a decisive shift toward proactive regulation of AI-driven misinformation and digital harm.
  • Through these amendments, the government seeks to protect privacy, dignity, and public order in an era of rapidly advancing generative AI.
  • However, the reform also raises critical concerns; its long-term success will depend on calibrated enforcement, technological readiness, and institutional safeguards against overreach.
