Context:
- Actors Abhishek Bachchan and Aishwarya Rai Bachchan have sued Google and YouTube in the Delhi High Court, accusing them of hosting AI-generated videos that depict them in fabricated — sometimes explicit — scenarios.
- They argue that such deepfakes violate their personality rights, damage their reputation, and threaten financial interests.
- The case also seeks safeguards to prevent these fake videos from being used to train future AI models.
- The issue underscores how generative AI blurs the line between real and fake, challenging existing legal protections over one’s identity — name, image, likeness, and voice.
- Personality rights, rooted in privacy, dignity, and commercial autonomy, were not designed for the scale and speed of AI manipulation.
- Deepfakes now fuel misinformation, extortion, and loss of public trust, revealing a growing need for stronger legal and ethical safeguards to prevent the misuse and commodification of human identity.
- This article highlights how the rise of generative AI and deepfakes is challenging traditional personality rights in India and across the world.
The Evolving Legal Landscape of Personality Rights in the Age of AI
- India’s Hybrid and Reactive Approach
- India follows a hybrid model, grounding personality rights in dignity and privacy under Article 21 while also treating identity as subject to property-like commercial control.
- Personality rights are not codified but recognised through landmark judgments:
- Amitabh Bachchan v. Rajat Nagi (2022): Restrained unauthorised use of Mr. Bachchan’s name, image, and voice, affirming his personality rights.
- Anil Kapoor v. Simply Life (2023): Restrained AI-driven misuse of Mr. Kapoor’s likeness, voice, and catchphrase “Jhakaas”.
- Arijit Singh v. Codible Ventures (2024): Protected his voice from AI replication.
- While the IT Act and the Intermediary Guidelines (IT Rules, 2021, as amended) address impersonation and deepfakes, enforcement remains difficult due to anonymity, cross-border data flows, and limited regulatory capacity.
- United States: Property-Based ‘Right of Publicity’
- The U.S. treats personality rights as commercial property that can be licensed or transferred.
- Key developments:
- Haelan Labs v. Topps (1953): First recognised a transferable “right of publicity” distinct from privacy rights.
- ELVIS Act, Tennessee (2024): Prohibits unauthorised AI cloning of voices and likenesses.
- Litigation against Character.AI highlights risks of AI chatbots inducing self-harm or impersonating therapists; courts have rejected First Amendment immunity claims.
- European Union: Dignity and Consent Framework
- The EU’s GDPR mandates explicit consent for processing personal and biometric data.
- The EU AI Act (2024) subjects deepfakes to transparency obligations, requiring that AI-generated or manipulated content be labelled and disclosed to prevent manipulation and deception.
- China: Stricter Consumer-Focused Enforcement
- Chinese courts emphasise preventing deception:
- A 2024 Beijing ruling held synthetic voices must not mislead consumers.
- Another case awarded damages to a voice actor whose AI-replicated voice was sold without consent — affirming voice as personality property.
- A Fragmented Global Framework
- AI’s borderless nature outpaces national laws, creating a fragmented regulatory mosaic.
- Scholars argue for expanded “extended personality rights” covering style, persona, and creative patterns to protect individuals from exploitative AI training and misuse.
The Human–AI Nexus: Ethical and Legal Concerns
- UNESCO’s Recommendation on the Ethics of AI (2021) provides a rights-based framework emphasising dignity, autonomy, and protection from exploitation.
- Scholars argue that personality rights must evolve to address AI-generated impersonation and manipulation.
- Gaps in India’s Legal Architecture
- Experts highlight India’s fragmented AI laws and call for:
- statutory definitions of AI,
- explicit classification of deepfakes as high-risk,
- clearer regulation of behavioural data processing.
- They also raise concerns about AI recreations of deceased artists, noting that Indian courts do not recognise personality rights as inheritable.
- Debates on AI Personhood
- Some scholars propose granting limited legal personhood to advanced AI systems so that accountability for their outputs can be clearly assigned.
- Critics, however, warn that granting AI legal personhood could dilute human rights protections and complicate liability frameworks.
- The tension lies between leveraging AI for innovation and preventing erosion of human autonomy and dignity.
- Need for Robust Indian Legislation
- The rise of deepfakes exposes structural gaps. India must:
- codify personality rights,
- mandate AI watermarking and model transparency,
- strengthen platform liability,
- enable cross-border regulatory cooperation.
- The government’s 2024 deepfake advisory is a first step, but inadequate on its own.
- Call for Global Harmonisation
- Adopting and aligning with UNESCO principles can help India prevent ethical deterioration, ensure accountability, and safeguard human identity in the age of generative AI.