Earlier this month, the government notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, further amending the IT Rules 2021, in a bid to regulate “synthetically generated information”. The intent is to tackle the rise of misinformation through the use of Artificial Intelligence (AI) deepfakes. The amendment raises certain concerns.

One of the most significant changes under the amended Rules is the compressing of deadlines in Rule 3(1)(d), requiring intermediaries to take down content within three hours (down from 36 hours) of receipt of a court order or of being notified by the government or a government agency. Other deadlines have also been made tighter — user complaints regarding content that is obscene, violative of privacy, harmful to children, or impersonating another person must be resolved within 36 hours (down from 72 hours). Content that prima facie depicts the partial nudity/nudity of the user must be taken down within two hours (down from 24 hours).
It is worth mentioning that these amendments have been introduced in the IT Rules 2021 under the guise of regulating the proliferation of AI content and deepfakes. However, these starkly tighter deadlines are not restricted to deepfake/AI content. Instead, they apply to all content hosted by social media intermediaries.
This approach precludes any meaningful review by humans and virtually automates the take-down process. Thus, in the name of requiring intermediaries to exercise “due diligence”, the 2026 amendments further incentivise censorship by intermediaries, without the attendant safeguards of a prior hearing or reasoned orders.
Interestingly, this proposal to shorten the deadlines was never part of the original proposed amendments on which public feedback was sought in 2025. Nor has the government published the stakeholder responses to the proposed amendments.
Hence, the sudden introduction of these new provisions in the 2026 amendments remains unexplained. This lack of public consultation is also visible in the new sub-clauses that require intermediaries to exercise due diligence when it comes to synthetically generated information to prohibit the depiction of an event or a person “in a manner that is likely to deceive”.
A similar approach has been adopted in the definition of synthetically generated information itself, which includes artificially or algorithmically generated information that appears to be “real, authentic, or true” or depicts an individual or event that is “likely to be perceived as indistinguishable” from the natural person or event. Any such content that is obscene, invasive of privacy, indecent, or vulgar must be taken down by intermediaries.
Both the definition of synthetically generated information and the accompanying due diligence obligations are vague, leaving it to the intermediaries to decide on and label such content. Furthermore, they contain no carve-out distinguishing content created for parody or satire from content intended to spread misinformation. The role of satire in promoting healthy democratic debate, and its protection under the free speech clause in Article 19(1)(a) of the Constitution, has been consistently acknowledged by courts across the country (even if not always implemented in practice).
Undoubtedly, the problem of deepfakes and misinformation is real. But, as noted by the Bombay High Court in the Kunal Kamra case (while striking down the 2023 IT Rules amendment establishing fact-checking units), using vague terms such as “fake”, “false”, or “misleading” leaves the matter to the unguided discretion of the fact-checking units. The 2026 amendments similarly lack a guiding principle on how to classify content as synthetically generated information and vest intermediaries with virtually untrammeled powers.
Finally, the amendments oblige significant social media intermediaries (such as YouTube, Meta, or Twitter) to take reasonable and proportionate technical measures to “verify” the correctness of user declarations regarding the use of AI content, failing which they would lose safe-harbour protections. This further pushes intermediaries to act as proactive censors and take down information that they consider to be wrongly labelled. As the final arbiters of what constitutes “synthetically generated information”, their commercial interests will always weigh in favour of preserving their safe-harbour protection.
There is general consensus that AI-generated misinformation is harmful. But it is not clear that it is more harmful or widespread than other sources of misinformation online, necessitating such a sledgehammer approach. Whether these amendments actually protect our security or violate our freedoms remains to be seen. I am not optimistic.
Vrinda Bhandari is a lawyer, specialising in technology and privacy, practising in Delhi. The views expressed are personal.
