New Delhi: The Ministry of Electronics and Information Technology (MeitY) has issued a directive requiring AI platforms to obtain approval before deploying any unreliable or under-tested artificial intelligence (AI) models in India. The advisory covers AI systems, generative AI models, large language models (LLMs), and algorithms that are currently under testing or deemed unreliable.
The advisory mandates explicit permission from MeitY before these models are made available to Indian users. The directive also stipulates that platforms must clearly label potential inaccuracies in model outputs and secure user consent, possibly through pop-ups. Non-compliance with the advisory may prompt legislative action.
According to the directive, AI providers must inform users about inherent biases or inaccuracies in model outputs, using consent pop-ups and clear labelling to indicate limitations. It also prohibits models from perpetuating discrimination or compromising election integrity, and flags the failure to embed metadata that identifies deepfakes and synthetic media.
This advisory appears to be a response to increasing instances of AI biases concerning race, gender, and other factors.






























































