New Delhi: The Ministry of Electronics and Information Technology (MeitY) on 10 February formally notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing synthetically generated information (SGI), including deepfakes, under India’s digital regulatory framework.
Under the amended rules, online intermediaries are required to ensure that content created, generated or altered using artificial intelligence is clearly identified for users. This includes AI-generated text, images, audio and video. Platforms must disclose such content either through visible labels, embedded metadata or other technical means, and inform users when material has been synthetically created or modified. The obligation applies irrespective of whether the content is user-generated or produced using AI tools integrated into the platform itself.
Compared with the draft rules released for public consultation, the final notification significantly narrows the scope of content that must be flagged. While the draft defined SGI broadly to include any content "artificially or algorithmically created, generated, modified or altered", the notified rules focus more sharply on synthetic content that is likely to mislead users, impersonate individuals or misrepresent facts. This shift reflects a harm-based approach aimed primarily at curbing deceptive deepfakes rather than routine digital edits.
A major change from the draft framework is the reduction in takedown timelines. Social media platforms and other intermediaries are now required to remove or disable access to unlawful content within three hours of receiving a government or court order, down from the 36-hour window proposed earlier. MeitY has said this is intended to enable faster action against harmful and misleading content, particularly deepfakes.
The amendments also offer greater flexibility on compliance. Earlier drafts had proposed stricter, visible labelling requirements and even discussed a 10 per cent prominence threshold for AI disclosures. The final rules drop such prescriptive design mandates, instead adopting a principle-based standard: disclosures must be "clear, prominent and visible", without specifying size, placement or format. Platforms are also required to ensure that labels or disclosures are not removed, obscured or manipulated in ways that could mislead users.
The notification follows sustained feedback from industry bodies, including the Internet and Mobile Association of India (IAMAI), Nasscom and the Business Software Alliance. These groups had warned that the draft rules were overly broad and risked covering legitimate or benign uses of AI alongside harmful deepfakes. IAMAI members include major technology companies such as Google, Meta, Amazon, Apple, Netflix, Jio and Airtel.
MeitY has clarified that the amended framework is technology-agnostic and applies uniformly across use cases, without carving out exemptions for specific AI tools. Enforcement will continue through existing mechanisms under the IT Rules, including grievance redressal systems and government takedown directions, with penalties applicable under the Information Technology Act for non-compliance.
Rohit Kumar, Founding Partner at the public policy firm The Quantum Hub (TQH), said: "The amended IT Rules mark a more calibrated approach to regulating AI-generated deepfakes. By narrowing the definition of synthetically generated information, easing overly prescriptive labelling requirements, and exempting legitimate uses like accessibility, the government has responded to key industry concerns – while still signalling a clear intent to tighten platform accountability. That said, the significantly compressed grievance timelines – such as the two- to three-hour takedown windows – will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections."