Meta is working to detect and label AI-generated images from other companies on Facebook, Instagram, and Threads, as the company pushes to expose people and organizations that actively want to deceive others.
The company already labels AI images generated by its own systems. The hope is that the new technology, which is still being built, will create momentum for the industry to combat AI fakery. In a blog post written by senior executive Sir Nick Clegg, Meta says it intends to expand its labeling of AI fakes in the coming months.
However, Professor Soheil Feizi, director of the Reliable AI Lab at the University of Maryland, argues that the system could be easy to circumvent.
“They can train their detector to flag some images specifically generated by some specific models. But these detectors can be easily bypassed by some light processing on the images, and they can also have a high rate of false positives. So I don’t think it’s possible for a wide range of applications,” he told the BBC.
Meta acknowledged that its tool will not work for audio and video, despite these being the media on which much of the concern about AI fakery is focused. Instead, the company says it is asking users to label their own audio and video posts, and it may impose penalties if they fail to do so.
On Monday (5), Meta’s Oversight Board criticized the company for its policy on manipulated media, calling it incoherent, lacking persuasive justification, and inappropriately focused on how content was created.
The Oversight Board is financed by Meta but operates independently of the company. The criticism came in response to a decision regarding a video of US President Joe Biden. The video in question edited existing footage of the president with his granddaughter. Because it was not manipulated using artificial intelligence, the video was not removed. The board agreed that the video did not violate Meta’s current rules on fake media, but said the rules should be updated.
Additionally, the company is developing tools to identify invisible markers, such as metadata and watermarks embedded in image files, when they are used by other companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock in their AI image generators.
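Meta has not published how these detectors work, but one widely used cooperative signal is the IPTC “digital source type” value that some generators write into an image’s embedded metadata. The sketch below is an illustration of that general approach, not Meta’s actual tooling (the function and file name are hypothetical); it scans a file’s raw bytes for the marker, and it also illustrates Professor Feizi’s objection, since stripping the metadata defeats the check entirely.

```python
# Minimal sketch: look for the IPTC "trainedAlgorithmicMedia" digital source
# type that some AI image generators embed in an image's metadata. This only
# detects cooperative labeling; removing or rewriting metadata bypasses it.
from pathlib import Path

# Fragment of the IPTC NewsCodes URI used to mark AI-generated media, e.g.
# http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_labeled(image_path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI marker."""
    data = Path(image_path).read_bytes()
    return AI_MARKER in data

if __name__ == "__main__":
    print(looks_ai_labeled("example.jpg"))  # hypothetical file name
```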
“As the distinction between human and synthetic content becomes blurred, people want to know where the boundary lies. People are often encountering AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when the photorealistic content they’re seeing was created using AI,” said Clegg.
The executive also raised the idea of placing more prominent labels on digitally created or altered images, videos, or audio that create a particularly high risk of misleading the public on an important issue.
AI deepfakes have already entered the US presidential election cycle: robocalls featuring what is believed to be an AI-generated deepfake of President Joe Biden’s voice discouraged voters from turning out for the Democratic primary in New Hampshire.
Australia’s Nine News also faced criticism last week for altering an image of Victorian Animal Justice Party MP Georgie Purcell to expose her midriff and alter her chest in an image broadcast on the evening news. The network blamed automation in Adobe’s Photoshop, which includes AI image-editing tools.
*Cover photo: Mundissima/Shutterstock
Follow Adnews on Instagram and LinkedIn. #WhereTransformationHappens