New Delhi: Meta, the social media giant, recently unveiled a new policy aimed at combating misleading content on its platforms. Starting in 2024, advertisers will be required to disclose any digital alterations in their advertisements.
Under the new policy, advertisers must disclose instances where a social issue, electoral, or political advertisement contains a photorealistic image or video, or realistic-sounding audio, that has been digitally manipulated to depict a person saying or doing something they did not actually say or do.
Likewise, any image or video portraying a lifelike person who does not exist, an event that did not happen, or digitally altered footage of a genuine event must also be disclosed on the platform.
In an official statement from Meta, the company articulated, “We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI.”
The policy also covers manipulated media depicting a realistic event that allegedly occurred but that is not a true image, video, or audio recording of the event.
Once an advertiser discloses that the content has been digitally altered, Meta will append the requisite tags and information to the ad, which will also be reflected in the platform’s ad library.
Meta has clarified that advertisers are not obliged to disclose inconsequential or immaterial alterations, such as image resizing, cropping, colour correction, or sharpening, unless these changes are material to the claim, assertion, or issue raised in the ad.
Should advertisers fail to adhere to the new policy’s disclosure requirements, Meta reserves the right to reject their ads. Repeated non-compliance may lead to penalties against the advertiser.
The official announcement states, “As always, we remove content that violates our policies, whether it was created by AI or a person. Our independent fact-checking partners review and rate viral misinformation and we do not allow an ad to run if it’s rated as False, Altered, Partly False, or Missing Context.”
This development comes amidst global criticism of social media platforms for their role in the proliferation of misinformation and fake news, often facilitated by AI tools.
In India, the Ministry of Electronics and Information Technology (MeitY) recently issued an advisory to all social media platforms, reminding them of their legal obligations to promptly identify and remove misinformation.
This move was prompted by the viral spread of a deepfake video of the actor Rashmika Mandanna. Several Indian celebrities have since called for a dedicated policy and legal action to address such content.