New AI Rules: The Complete Creator Guide

Effective Date: February 20, 2026

The Government of India has notified the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026. These rules introduce strict compliance measures for “Synthetically Generated Information” (SGI) to tackle deepfakes and misinformation. If you create content using AI, you are now legally obligated to follow these new norms.

✅ Safe (No Label Needed)
  • Color Correction: Grading footage or fixing lighting.
  • Noise Reduction: Cleaning up audio background noise.
  • Retouching: Removing minor blemishes or dust spots.
  • Animations: Cartoons or 3D art that clearly looks fake.
  • Accessibility: Text-to-speech for subtitles/reading.

🚨 Must Label “AI Generated”
  • Face Swapping: Putting one face on another body (Deepfakes).
  • Voice Cloning: AI mimicking a real person’s voice.
  • Realistic Video: AI clips (Sora/Veo) that look like real life.
  • Lip Syncing: AI altering lip movement to match new audio.
  • Event Fabrication: Showing a person doing something they never did.

1. The “Rapid Removal” Rule ⏱️

Platforms (like YouTube, Instagram, X) are now under immense pressure. The time they have to remove illegal content has been drastically cut.

  • Deepfakes / fake news: 3 hours (down from 36 hours)
  • Non-consensual intimate images (NCII): 2 hours (down from 24 hours)

Impact on You: Because platforms face liability if they miss these shortened windows, their automated systems will be extremely aggressive. If your video is flagged as a deepfake, it will likely be deleted immediately to stay within the legal limit.
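To get a concrete sense of how tight these windows are, here is a minimal Python sketch that computes a removal deadline from the time a report is received. The category names and the assumption that the clock starts at the moment of the report are illustrative only, not taken from the text of the rules.

```python
# Illustrative sketch: deadline maths for the shortened takedown windows,
# assuming the clock starts when the platform receives the report.
from datetime import datetime, timedelta, timezone

# Window lengths reflect the 3-hour and 2-hour limits described above.
TAKEDOWN_WINDOWS = {
    "deepfake_or_fake_news": timedelta(hours=3),
    "ncii": timedelta(hours=2),
}

def removal_deadline(reported_at: datetime, category: str) -> datetime:
    """Return the latest time by which the reported content must be removed."""
    return reported_at + TAKEDOWN_WINDOWS[category]

report_time = datetime(2026, 2, 20, 10, 0, tzinfo=timezone.utc)
print(removal_deadline(report_time, "deepfake_or_fake_news"))
# 2026-02-20 13:00:00+00:00
```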

2. Labels & Metadata (The “Check Box”) 🏷️

The new rules legally define “Synthetically Generated Information” (SGI) as any content created or altered by a computer in a way that appears authentic.

Mandatory Requirement:
All intermediaries (platforms) must ensure that AI-generated content is “prominently labelled”. They must also embed metadata (a digital fingerprint) into the file identifying it as synthetic. Tampering with or removing this label or metadata is illegal.
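For illustration only, here is a minimal Python sketch of what embedding such a label could look like for a PNG image, using Pillow’s text-chunk metadata. The rules do not prescribe a specific metadata format or key name; the “synthetic-content” key and the helper function names below are hypothetical, and platforms are likely to rely on richer provenance standards (such as C2PA) in practice.

```python
# Hypothetical example: embedding and reading a "synthetic content" flag
# in PNG metadata with Pillow. Key names are illustrative, not mandated.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str) -> None:
    """Save a copy of the PNG with a synthetic-content text chunk added."""
    img = Image.open(src_path)
    meta = PngInfo()
    # Preserve any existing text metadata so nothing is lost on re-save.
    for key, value in img.info.items():
        if isinstance(value, str):
            meta.add_text(key, value)
    # Add the synthetic-content declaration.
    meta.add_text("synthetic-content", "true")
    meta.add_text("generator", "AI model (declared by uploader)")
    img.save(dst_path, pnginfo=meta)

def is_labelled_synthetic(path: str) -> bool:
    """Check whether the PNG carries the synthetic-content flag."""
    return Image.open(path).info.get("synthetic-content") == "true"

if __name__ == "__main__":
    label_as_synthetic("ai_render.png", "ai_render_labelled.png")
    print(is_labelled_synthetic("ai_render_labelled.png"))  # True
```

Note that text chunks like these are easy to strip, which is exactly why the rules treat removing the label or metadata as an offence; production systems would pair visible labels with cryptographically signed provenance data.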

3. Are You Liable? (Important!) ⚖️

Yes. The rules shift responsibility to the user (uploader) as well.

⚠️ User Declaration:
When you upload a video, you will be asked to legally declare whether it contains AI-generated content. If you lie (e.g., upload a deepfake without checking the box) and the platform detects it, they can:
  • Suspend or terminate your account.
  • Disclose your identity to law enforcement in severe cases.

What about “Safe Harbour”?

Platforms (intermediaries) generally enjoy “Safe Harbour” protection under Section 79 of the IT Act, meaning they aren’t held liable for what users post. Under the 2026 amendment, however, if they fail to label AI content or miss the takedown deadlines, they lose this protection. This is why they will not hesitate to ban non-compliant users.

