Why in news?
The Ministry of Electronics and Information Technology (MeitY) submitted a status report to the Delhi High Court highlighting key concerns about deepfakes.
Deepfake technology creates realistic but fabricated videos, audio, and images by altering a person’s face, voice, or actions, which can mislead people and spread false information.
What’s in today’s article?
- Regulation of Deepfakes in India
- Background
- Government Report on Deepfakes: Key Concerns and Stakeholder Insights
Regulation of Deepfakes in India
- In India, there is no specific law that directly addresses deepfakes.
- However, existing provisions under the Information Technology Act (IT Act) and other laws can be used to address their misuse, such as defamation, impersonation, and copyright infringement.
- Existing Legal Framework:
- Information Technology Act, 2000:
- Section 66D: Penalizes cheating by impersonation using a computer resource, which could apply to deepfakes used for fraudulent impersonation.
- Section 66E: Addresses violation of privacy, which could be relevant if deepfakes are used to share private content.
- Sections 67, 67A, and 67B: Prohibit and punish the publication or transmission of obscene or sexually explicit material.
- Defamation Laws: Deepfakes used to spread misinformation or damage someone's reputation can be challenged under defamation laws.
- Copyright Act, 1957: Copyright holders can initiate legal proceedings against individuals who use copyrighted material without permission to create deepfakes; Section 51 specifies when copyright is infringed.
Background
- Several petitions have been filed in the Delhi High Court seeking regulation of deepfakes and AI-generated content.
- Three petitions have been filed:
- Rajat Sharma's Petition (Journalist, India TV Editor-in-Chief)
- Seeks regulation of deepfake technology.
- Requests blocking of public access to apps enabling deepfake creation.
- Argues that deepfakes pose a threat to society by spreading misinformation and disinformation, undermining public discourse and democracy.
- Chaitanya Rohilla's Petition (Lawyer)
- Calls for regulations on AI usage, addressing concerns over its unregulated deployment.
- Kanchan Nagar's Petition (Model)
- Seeks a ban on non-consensual commercial deepfakes.
- Advocates for fair compensation for original artists in commercial advertising.
- Formation of Committee
- In November 2024, the Delhi High Court directed the Centre to appoint members to a committee on the issue.
- MeitY announced the formation of the committee on November 20, 2024.
- The nine-member committee met stakeholders on January 21, 2025.
Government Report on Deepfakes: Key Concerns and Stakeholder Insights
- Rising Threats from Deepfakes
- Deepfakes targeting women during state elections.
- Increasing AI-generated scam content, particularly post-elections.
- Need for Better Enforcement Rather Than New Laws
- Lack of a uniform definition for "deepfake."
- Call for AI Content Regulation
- Stakeholders emphasized mandatory AI content disclosure, labeling standards, and grievance redressal mechanisms, focusing on malicious actors rather than creative uses of deepfake technology.
- Debate on Intermediary Liability
- MeitY's panel proposed mandatory compliance for intermediaries regarding deepfake content.
- Stakeholders cautioned against over-reliance on intermediary liability frameworks, advocating better investigative and enforcement mechanisms instead of new regulations.
- Intermediary liability frameworks determine the extent to which intermediaries can be held liable for content on their platforms.
- These frameworks range from holding intermediaries entirely responsible for content posted on their platforms to granting them complete immunity.
- A representative from X stressed the need to distinguish between deceptive and benign AI content.
- Challenges in Deepfake Detection
- The Data Analysis Unit (DAU), part of the Meta-supported Misinformation Combat Alliance, highlighted that audio-based deepfakes are harder to detect.
- Stakeholders underscored the need for collaboration and standard detection frameworks.
- Law Enforcement and Regulatory Actions
- The Indian Cyber Crime Coordination Centre (I4C) has been tasked with gathering data on deepfake-related cases from law enforcement agencies.
- Proposed solutions include awareness campaigns via platforms like YouTube.
- Ongoing Consultations and Next Steps
- MeitY has requested three more months from the Delhi High Court to complete its consultations.
- The ministry is yet to consult victims of deepfakes and is collaborating with the Ministry of Information and Broadcasting to gather their inputs.
- Discussions so far have focused more on reactive measures than on preventive solutions.