The Indian government has introduced significant new regulations targeting AI-generated and synthetic media, with a strong emphasis on combating deepfakes, misinformation, impersonation, and related harms.
On February 10, 2026, the Ministry of Electronics and Information Technology (MeitY) officially notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
These changes will take effect from February 20, 2026.

The amendments formally introduce the concept of “synthetically generated information” (SGI) into India’s digital regulatory framework. SGI is defined as any audio, visual, or audio-visual content that has been artificially created, generated, modified, or altered using computer algorithms or AI tools — in such a way that it appears realistic, authentic, or indistinguishable from genuine material depicting real people or events.
This broad definition clearly encompasses deepfake videos, AI-manipulated voices, face-swapped images, synthetic audio clips, and other forms of fabricated or heavily altered media that can easily deceive viewers.
Key obligations for social media platforms and intermediaries
Under the revised rules, major platforms, including Instagram, YouTube, Facebook, and X, face several new responsibilities.
Mandatory labelling
All SGI must be clearly and prominently labelled to indicate its synthetic or AI-generated nature. Labels should be visible and hard to miss. Where technically feasible, platforms must also embed persistent metadata or unique digital identifiers (provenance information) that allow the content’s origin to be traced. Once applied, these labels and metadata must not be removed, hidden, suppressed, or altered by users or by the platform itself.
User declarations
When uploading content, users will be required to declare whether the material has been created or modified using AI tools. Platforms are expected to implement reasonable verification mechanisms, including automated detection tools, to check the accuracy of these declarations.
Periodic reminders
Intermediaries must periodically (at least once every three months) remind users about the rules concerning synthetic content, the importance of accurate declarations, and the legal consequences of violations.
Failure to properly implement labelling, verification, or due diligence measures can result in platforms losing safe harbour protection under Indian law, exposing them to direct liability.
Sharper content removal timelines
The government has dramatically shortened the deadlines for acting on problematic content. For content flagged through lawful government orders or court directives, platforms must now remove or disable access within 3 hours, a steep reduction from the previous 36-hour window.
Other moderation response times have also been tightened in certain categories (for example, shrinking from 15 days to 7 days or from 24 hours to 12 hours, depending on the type and severity of the violation).
The 3-hour rule applies particularly to serious violations, including deepfakes used for fraud, harassment, defamation, election interference, non-consensual intimate imagery, child sexual abuse material, or other clearly unlawful purposes.
Why now? Rising concerns over misuse
The move comes against the backdrop of growing alarm over the malicious use of generative AI tools. Deepfakes and synthetic media have already been linked to financial scams, character assassination, political disinformation, revenge porn, morphed child abuse imagery, and impersonation of public figures.
By explicitly defining SGI, mandating transparency through labels, enforcing user accountability, and imposing ultra-fast takedown obligations, the government aims to make it much harder for misleading or harmful synthetic content to spread unchecked on Indian social media.
Platforms now have roughly ten days (until February 20, 2026) to update their systems, moderation workflows, upload interfaces, and user notification mechanisms to comply with the new regime.
This represents one of India’s most decisive regulatory steps yet to address the challenges posed by rapidly advancing AI technologies in the online space.
Naorem Mohen is the Editor of Signpost News. Explore his views and opinion on X: @laimacha.