Why in news?
The Election Commission of India (ECI) has issued an advisory directing political parties to clearly label all AI-generated or altered videos, images, and audio clips shared online. The move follows growing concerns over the misuse of deepfakes and synthetically generated content that can distort facts, mislead voters, and disrupt the level playing field during elections.
The ECI warned that hyper-realistic AI content, especially that which falsely depicts political leaders making sensitive statements, poses a serious risk to electoral integrity and fair competition among candidates.
The advisory also specifies standards for label prominence and placement, aligning closely with the draft rules recently proposed by the IT Ministry to curb AI misinformation.
Calling such technology a “deep threat”, the ECI stressed that the ability of AI-generated content to convincingly mimic truth could mislead the public and undermine trust in the democratic process.
What’s in Today’s Article?
- ECI Tightens Rules on Deepfakes: Mandatory Labels, Watermarks, and Disclosure
- ECI’s Deepfake Guidelines Mirror Draft IT Rules on AI Content Labelling
- AI and Threat to Democracy
ECI Tightens Rules on Deepfakes: Mandatory Labels, Watermarks, and Disclosure
- The ECI has expanded its earlier directions on AI-generated and deepfake content to ensure greater transparency during election campaigns.
- Previously, the ECI had instructed parties to remove deepfake posts within three hours of detection and label altered media as “AI-Generated,” “Digitally Enhanced,” or “Synthetic Content.”
- Under the new advisory, the ECI now mandates that:
  - Labels or watermarks must cover at least 10% of the screen area for visuals, or occupy the first 10% of the duration for audio clips.
  - The label in video content must appear at the top of the screen.
  - Each AI-generated item must clearly name the creator or responsible entity in its metadata or caption.
  - Parties must maintain detailed internal records of all AI-generated materials, including creator identity and timestamps, for verification when required by the ECI.
- These stricter norms mark a decisive step to curb misleading synthetic media and preserve integrity in political communication.
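The prominence thresholds above reduce to simple arithmetic. As a minimal sketch, the hypothetical helper functions below (not part of any ECI or platform tool) check whether a label meets the advisory's two quantitative rules: a visual label must cover at least 10% of the screen area, and an audio disclosure must span at least the first 10% of the clip's duration.

```python
# Hypothetical compliance checks for the ECI advisory's prominence rules.
# Assumption: "10% of screen area" is read as label_width * label_height
# being at least 10% of frame_width * frame_height.

def label_area_ok(frame_w: int, frame_h: int, label_w: int, label_h: int) -> bool:
    """True if the label covers at least 10% of the total frame area."""
    return label_w * label_h >= 0.10 * frame_w * frame_h

def audio_disclosure_ok(total_secs: float, disclosure_end_secs: float) -> bool:
    """True if the disclosure runs through at least the first 10% of the clip."""
    return disclosure_end_secs >= 0.10 * total_secs

# Example: a 1920x1080 frame has 2,073,600 px, so the label needs >= 207,360 px.
print(label_area_ok(1920, 1080, 1920, 110))  # full-width banner 110 px tall -> True
print(audio_disclosure_ok(60.0, 6.0))        # 6 s disclosure in a 60 s clip -> True
```

A full-width banner on a 1080p frame would thus need to be at least 108 pixels tall, which is why the advisory's 10% floor makes labels hard to overlook.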
ECI’s Deepfake Guidelines Mirror Draft IT Rules on AI Content Labelling
- The ECI's latest advisory on AI content labelling aligns closely with the draft amendments to the Information Technology (IT) Rules, 2021, proposed by the Ministry of Electronics and IT earlier this week.
- While the IT Rules are still in the draft stage, they outline similar obligations for social media platforms, including:
  - Requiring users to declare if uploaded content is synthetically generated.
  - Mandating platforms to use automated tools or verification mechanisms to confirm the accuracy of these declarations.
  - Ensuring prominent AI labels or notices on all verified synthetic or AI-generated content.
- Non-compliance could result in loss of “safe harbour” protections, meaning platforms could be held legally liable for hosting unlabelled or misleading AI-generated content.
- The draft rules also define synthetically generated information as content that is artificially created, modified, or altered using computer resources in a way that appears authentic or real.
- Both the ECI and IT Ministry’s frameworks share the same goal — to combat deepfake misuse and preserve authenticity and trust in online and political communication.
AI and Threat to Democracy
- AI, while transformative, poses serious risks to democratic systems, particularly through deepfakes, misinformation, algorithmic bias, and voter manipulation.
- These technologies can distort truth, erode public trust, and undermine free and fair elections.
- Deepfakes and Political Disinformation
  - AI-generated deepfakes can create highly realistic fake videos or audio, spreading false narratives about political leaders.
  - During the 2023 Slovak elections, deepfake audio clips mimicking opposition leaders surfaced just days before voting, potentially swaying voter opinion.
- Manipulation of Public Opinion through AI Algorithms
  - AI-driven recommendation systems on platforms like Facebook, YouTube, and X can amplify polarising or false content, creating echo chambers that manipulate voter perception.
  - The Cambridge Analytica scandal showed how algorithmic profiling and data analytics were used to micro-target voters during the 2016 U.S. presidential election and the Brexit referendum.
- Loss of Authenticity in Political Discourse
  - AI tools can automate fake news creation, clone voices, and generate speeches, making it harder for citizens to distinguish between real and synthetic communication. This dilutes informed public debate.
  - In 2024, AI-generated robocalls mimicking U.S. President Joe Biden's voice urged voters to abstain from a primary election, a clear attempt at voter suppression.