
On Wednesday, the social media giant Meta unveiled a new policy requiring advertisers to disclose digital alterations in advertisements beginning in 2024, an attempt to curb false information on its platforms. The policy requires organizations to disclose whether they generate advertisements using AI or other digital methods. Meta announced that advertisers would have to provide notice if their fabricated or altered advertisements show real individuals saying or doing things they did not, or if they digitally create a realistic-looking person who does not exist.
The guideline also requires disclosure of any modified images, videos, or audio recordings of purportedly real events, even if they do not accurately represent those events. The announcement comes a month after Facebook's parent company said it would begin expanding advertisers' access to AI-powered advertising tools that can instantly create backgrounds, adjust images, and generate variations of ad copy in response to a simple text command. Meta's policy updates also include a ban on political marketers using its generative AI ad tools.
Meta will add the necessary tags and metadata to an advertisement whenever the advertiser discloses in the advertising flow that the material has been digitally created or altered. This information will also appear in the platform's ad library.
Furthermore, according to Meta, advertisers are not required to disclose digital modifications that are immaterial to the topic, claim, or assertion made in the advertisement. According to the Meta site, "This may include cropping, resizing, colour correction, or sharpening images unless such changes are consequential or material to the claim, assertion, or issue raised in the advertisement."
If advertisers do not disclose the details of their campaigns as required by the new policy, Meta will reject their ads. Advertisers who repeatedly fail to make these disclosures may face penalties.
This development comes amid global criticism of social media platforms over the use of artificial intelligence (AI) to spread disinformation and misinformation. In India, all social media platforms recently received an advisory from the Ministry of Electronics and Information Technology (MeitY), reminding them of their legal obligation to quickly identify and remove misinformation. The advisory followed a deepfake video of actor Rashmika Mandanna that went viral on the internet, after which several prominent Indians demanded a dedicated policy and legal action on the matter.
Since last month, Meta has barred its user-facing Meta AI virtual assistant from producing photo-realistic representations of public figures. The company's chief policy executive, Nick Clegg, stated that generative AI use in political advertising is "clearly an area where we need to update our rules."
Editorial: Sally (Anh) Ngo, November 14, 2023