Why in news?
- The Indian government has proposed draft rules mandating AI-generated content labelling on social media platforms like YouTube and Instagram to curb the spread of deepfakes and synthetic media.
- Under the amendments to the IT Rules, 2021, platforms must:
  - Ask users to declare whether uploaded content is AI-generated.
  - Ensure such content carries permanent labels or metadata identifiers.
  - For videos, display a label covering 10% of the screen area; for audio, play the label during the first 10% of the duration.
- The move follows rising concerns over digitally altered videos, such as the 2023 Rashmika Mandanna deepfake, which went viral and prompted Prime Minister Narendra Modi to warn that deepfakes could trigger a new national crisis.
What’s in Today’s Article?
- India’s Draft Rules for Labelling AI-Generated Content
- Existing AI Labelling Practices and India’s Stricter Proposal
- Bollywood Pushes Back as Deepfakes Threaten Celebrity Rights
- How Are Global Regulations Responding to the Deepfake Challenge?
India’s Draft Rules for Labelling AI-Generated Content
- The proposed IT Rules amendments require social media platforms to:
  - Ask users to declare if uploaded material is AI-generated or synthetic.
  - Use automated tools to verify such declarations.
  - Clearly label confirmed AI-generated content with a visible notice.
- Platforms that fail to comply risk losing legal immunity under the safe harbour provisions, making them liable for unverified or unlabelled synthetic content.
- The draft also defines “synthetically generated information” as content created, altered, or modified using computer algorithms in a way that appears authentic or real.
Existing AI Labelling Practices and India’s Stricter Proposal
- Platforms like Meta and Google already use AI content labels, asking creators to declare if content was made or modified using AI.
- Instagram tags such posts with an “AI Info” label, though enforcement is inconsistent.
- YouTube adds an “Altered or Synthetic Content” tag, explaining how AI influenced the video’s creation.
- Meta has also joined other tech firms under the Partnership on AI (PAI) to develop common identification standards and tools for detecting invisible AI markers, working alongside companies such as Google, OpenAI, Adobe, and Midjourney.
- However, these global measures are mostly reactive, triggered only after content is flagged or voluntarily declared.
- In contrast, India's proposed rules require companies to proactively verify and label AI-generated content using automated detection tools, even when users have not declared it as such.
Bollywood Pushes Back as Deepfakes Threaten Celebrity Rights
- The rise of AI-generated deepfakes has alarmed India's entertainment industry, prompting stars like Amitabh Bachchan, Aishwarya Rai, Akshay Kumar, and Hrithik Roshan to take legal action to protect their personality rights, i.e., their likeness, voice, and image.
- Experts note that India lacks explicit legal protection for personality rights, relying instead on a patchwork of existing laws for limited safeguards.
- The issue gained attention after the production company behind Raanjhanaa reportedly used AI to alter the film’s ending without the consent of its director and actors.
- This highlighted the urgent need for clearer legal frameworks against deepfakes in Indian entertainment.
How Are Global Regulations Responding to the Deepfake Challenge?
- Countries worldwide are introducing AI labelling and protection laws to curb the spread of deepfakes and synthetic media.
- European Union: The AI Act mandates that all AI-generated or altered content, including text, audio, video, and images, must be clearly labelled in a machine-readable format. Entities using AI for public-interest content must also disclose that it was artificially generated or altered.
- China: Its recently enforced AI labelling rules require visible markers for chatbot output, voice synthesis, face swaps, and scene editing, while hidden watermarks may be used for other AI content. Platforms must detect, label, and alert users about AI-generated material.
- Denmark: It proposes giving citizens copyright-style ownership over their own likeness, allowing them to demand removal of any digitally altered content created without consent, a pioneering move in personal image rights protection.