As deepfake videos and AI-generated misinformation flood social media, India’s Ministry of Electronics and Information Technology (MeitY) is gearing up to introduce a dedicated Artificial Intelligence (AI) Act. This comprehensive legislation, inspired by the Information Technology (IT) Act, 2000, aims to create a robust framework for regulating synthetic content and preventing digital deception.
The move comes amid growing concerns over the misuse of AI tools that can seamlessly alter videos, images, and audio to spread falsehoods, manipulate public opinion, or harm individuals. Official sources indicate that the government will finalize draft rules after public consultations wrap up on November 6.
To shield these measures from legal challenge, however, a full-fledged AI Act will follow, ensuring the rules don’t overstep the bounds of their parent law. “Rules are secondary and cannot exceed the parent legislation,” explains cyber law expert Pavan Duggal.
“Without a dedicated AI law, provisions targeting deepfakes could face challenges in court.” Duggal emphasizes that the IT Act’s structure—starting with core principles and expanding via rules—serves as the ideal blueprint, allowing the new Act to evolve with rapid technological advancements.
At the heart of the proposed regulations is Rule 3(1), which places strict obligations on intermediaries like social media platforms and apps. Any service enabling users to create, modify, or share AI-generated content must embed clear identifiers (a rough sketch of what such labeling could look like in code follows the list):
Visible Labels: A permanent, non-removable marker stating the content is artificial, covering at least 10% of the screen for videos or images.
Audio Requirements: An audible disclaimer within the first 10% of a clip’s duration.
Metadata Embedding: Digital tags that survive editing or sharing.
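The draft rules describe outcomes, not implementations, so how a platform meets these thresholds is left to its engineers. Purely as a hedged illustration, the Python sketch below uses the Pillow imaging library to stamp a banner covering roughly 10% of an image’s area and to write a synthetic-content tag into its metadata; the function name, banner text, and metadata keys are assumptions made for the example, not anything prescribed in the draft.

# Hypothetical illustration only; the draft rules prescribe outcomes, not code.
from PIL import Image, ImageDraw, ImageFont
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path, dst_path):
    """Stamp a visible AI-disclosure banner and embed a metadata tag."""
    img = Image.open(src_path).convert("RGB")
    width, height = img.size

    # A full-width banner whose height is 10% of the image height covers
    # 10% of the total area, matching the proposed visibility threshold.
    banner_height = max(1, height // 10)
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, height - banner_height, width, height], fill=(0, 0, 0))
    draw.text((10, height - banner_height + 5), "AI-GENERATED CONTENT",
              fill=(255, 255, 255), font=ImageFont.load_default())

    # Machine-readable tag written as a PNG text chunk; real deployments would
    # likely adopt a provenance standard rather than ad-hoc keys like this one.
    meta = PngInfo()
    meta.add_text("synthetic-content", "true")
    img.save(dst_path, format="PNG", pnginfo=meta)

label_synthetic_image("generated.png", "generated_labeled.png")

Audio and video would need analogous treatment, for instance an audible disclosure spliced into the opening 10% of a clip, and producing metadata that survives re-encoding and re-sharing is considerably harder than a single PNG text chunk suggests.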
These labels aim to alert viewers instantly, curbing the viral spread of deceptive material. Failure to comply could invite penalties, mirroring the IT Act’s enforcement mechanisms.
The urgency stems from real-world incidents. From fabricated celebrity endorsements to political smear campaigns, deepfakes have exploded in accessibility. Tools powered by generative AI, once confined to experts, are now user-friendly apps downloadable on smartphones. MeitY officials highlight how this democratizes deception, eroding trust in online information.
Once enacted, the AI Act will provide a flexible foundation. “Just like the IT Act has been amended over decades to address email fraud, data privacy, and cybercrimes, the AI law can incorporate future rules on emerging threats like AI-driven scams or autonomous systems,” an official noted.
This isn’t India’s first brush with AI regulation. Earlier advisories urged platforms to flag deepfakes, but they lacked teeth. The new Act signals a proactive shift, aligning India with global efforts such as the EU’s AI Act and U.S. executive orders on synthetic media.

For tech giants and startups, compliance means investing in detection tools and watermarking tech.
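On the detection side, a first line of defence could be as simple as checking whether an incoming file already carries a disclosure tag before running heavier deepfake analysis. The snippet below is again only a sketch; it assumes the ad-hoc "synthetic-content" key from the earlier example rather than any tag named in the draft rules.

# Hypothetical first-pass check: absence of the tag does not prove a file is
# authentic; it only signals that deeper detection tooling is required.
from PIL import Image

def carries_disclosure_tag(path):
    img = Image.open(path)
    # PNG text chunks surface via the .text attribute; other formats expose
    # different metadata containers that a production system would also inspect.
    tags = getattr(img, "text", {}) or {}
    return tags.get("synthetic-content", "").lower() == "true"

print(carries_disclosure_tag("generated_labeled.png"))  # True for the file labeled above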
Users, meanwhile, gain a layer of protection in an era where “seeing is no longer believing.”

As consultations close, stakeholders from Big Tech to civil society are weighing in. The final AI Act could redefine digital authenticity in the world’s largest democracy, proving that in the battle against AI-fueled fakery, legislation might be the ultimate truth serum.
Naorem Mohen offers compelling insights on Artificial Intelligence and Cryptocurrencies. Explore his content on Twitter: @laimacha.