Centre Notifies IT Rules Amendment: 3-Hour Takedown Deadline for AI Content

Effective February 20, 2026, new rules mandate clear labels for AI media, faster illegal content takedowns, and the loss of safe harbor for non-compliant platforms.

The Indian government has notified amendments to the Information Technology (IT) Rules, 2021. Effective February 20, 2026, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 introduce sharply compressed timelines for content removal and mandate the labelling of AI-generated content, marking a shift in how digital platforms operate in India.

Understanding the IT Rules Amendment

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by the Ministry of Electronics and Information Technology, introduce a framework to tackle the proliferation of synthetically generated media. Issued under the Information Technology Act, 2000, these amendments aim to curb misinformation, protect user dignity, and strengthen platform accountability in an era dominated by generative artificial intelligence.

The amendments define “synthetically generated information” as audio, visual, or audio-visual content created or modified using computer resources in a manner that appears real or authentic. This legal definition brings clarity to what constitutes AI-generated content and places it within the regulatory ambit of the IT Act.

Need to Curb Deepfakes and Synthetic Content

The amendments come in response to growing concerns over the misuse of AI-generated content. India has witnessed several incidents where deepfakes have been used to create non-consensual intimate imagery, spread misinformation during elections, and facilitate financial fraud through voice cloning.

High-profile cases, such as manipulated videos of public figures and celebrities, have highlighted how digital platforms can be exploited to damage reputations and violate personal dignity. The rapid advancement of generative AI tools has made it easier for malicious actors to create photorealistic synthetic media that is indistinguishable from authentic content.

The unchecked spread of deepfakes poses threats to:

  • Public trust in digital media and authentic information
  • National security through fabricated statements attributed to public officials
  • Democratic discourse through election-related misinformation
  • Individual privacy and dignity, particularly for women and minors

These concerns necessitated a regulatory response from the Ministry of Electronics and Information Technology to establish accountability mechanisms under the Information Technology Act and protect citizens from the harms of synthetic media manipulation.

Key Features of the Amendment

Reduction in Takedown Timelines

The most significant change in the IT Rules Amendment is the compression of content removal timelines. Digital platforms must now act within 3 hours of receiving a valid takedown order from a court or government authority for content deemed illegal under laws relating to sovereignty, public order, or morality, down from the earlier 36-hour window.

For sensitive categories such as non-consensual intimate imagery and deepfakes featuring nudity, the compliance window is tighter at 2 hours. The government justifies this timeline by arguing that tech companies possess the technical capacity for faster action, and that delays enable harmful content to go viral.

Mandatory Labelling of AI-Generated Content

All AI-generated content must now be labelled “prominently” before being published on any platform. While the initial draft proposed that 10% of image space carry such labels, the final rules have adopted a more flexible “prominent and visible” standard following feedback from industry stakeholders.

Digital platforms are required to:

  • Seek user disclosure on whether uploaded content is synthetically generated
  • Deploy technical measures to verify if content is AI-generated
  • Proactively label content when disclosures are absent
  • Remove non-consensual deepfakes

The rules exclude routine editing tools such as smartphone camera touch-ups, colour correction, and noise reduction from the definition of synthetic media. This ensures that everyday digital enhancements do not trigger compliance burdens.

Impact on Digital Platforms and Safe Harbour Protection

Under Section 79 of the Information Technology Act, intermediaries enjoy “safe harbour” protection, meaning they are not held liable for user-generated content as long as they exercise due diligence. The IT Rules Amendment introduces a recalibration of this protection.

If a platform knowingly permits, promotes, or fails to act against unlawful synthetic content, it will be deemed to have failed in exercising due diligence. This triggers loss of safe harbour, exposing the platform to civil and criminal liability as a publisher of content.

Conversely, platforms that proactively remove harmful AI-generated content using automated tools will be protected from wrongful removal lawsuits. This mechanism is designed to encourage platforms to err on the side of caution when moderating synthetic media.

Compliance Burden and Operational Challenges

The Ministry of Electronics and Information Technology has given digital platforms ten days between the notification (February 10) and enforcement (February 20) to align their systems with the new mandates. This implementation window has raised concerns within the tech industry.

Determining the legality of content within 2 to 3 hours poses operational challenges. Platforms must now rely on automated moderation systems, which increases the risk of over-censorship. Legitimate content including satire, parody, and news material may be mistakenly flagged and removed to avoid penalties.

Administrative and Governance Dimensions

The IT Rules Amendment also introduces administrative changes. It rolls back an earlier rule that limited each state to appointing only one officer for issuing takedown orders. States can now designate multiple authorised officers, addressing the needs of populous states and decentralising enforcement.

For a takedown order to be valid under the Information Technology Act, it must come from an officer not below the rank of Joint Secretary (for central/state governments) or Deputy Inspector General of Police (for law enforcement). Every order must specify the legal basis, the nature of the unlawful act, and the URL or location of the content.

To ensure accountability, all takedown directions must be reviewed monthly by an officer not below the rank of Secretary, providing a check against arbitrary use of emergency removal powers.

Balancing Innovation with Constitutional Rights

The IT Rules Amendment sits at the intersection of technological innovation and constitutional freedoms. While faster removal of non-consensual deepfakes strengthens the Right to Privacy and dignity under Article 21, the compressed timelines raise concerns about Freedom of Speech under Article 19(1)(a).

Way Forward

For the IT Rules Amendment to succeed, India must develop:

  • Clearer Illegality Standards: Structured guidance for digital platforms on what constitutes prohibited synthetic content, along with standardised digital takedown protocols.
  • Independent Oversight Mechanism: An appellate or review authority to examine takedowns and protect against abuse of state power.
  • Strengthening AI Detection Tools: Investment in AI detection systems under India’s AI mission to support platform compliance.
  • Harmonisation with Data Protection Laws: Ensuring consistency between the IT Rules and the Digital Personal Data Protection Act regarding privacy and consent standards.
  • Capacity Building for States: Training authorised officers in cyber law and AI governance to ensure informed decision-making.

Conclusion

The IT Rules Amendment represents India’s regulatory stance on AI-generated content. By mandating labelling, embedding metadata, and reducing takedown timelines to 3 hours, the Ministry of Electronics and Information Technology has altered the responsibilities of digital platforms under the Information Technology Act.

These changes reflect a response to the global deepfake crisis and the erosion of digital trust. However, the success of this framework will depend on calibrated enforcement, technological readiness, and institutional safeguards against overreach. 



IT Rules Amendment 2026 FAQs

1. When do the new IT Rules amendments take effect in India? 

Ans. February 20, 2026.

2. What is the takedown deadline for illegal AI content under the new IT Rules? 

Ans. 3 hours for most content; 2 hours for non-consensual intimate imagery.

3. What is synthetically generated information under IT Rules 2021?

Ans. AI-created audio, visual, or audio-visual content appearing real or authentic.

4. Do platforms need to label AI-generated content in India?

Ans. Yes, all AI-generated content must be prominently labelled before publication.

5. What is safe harbour protection under Section 79 of IT Act? 

Ans. Immunity from liability for user-generated content when exercising due diligence.
