Advances in artificial intelligence have made it increasingly difficult to distinguish real images from fake ones. This dystopian facet of modern technology has fueled a proliferation of fabricated sexually explicit images of women and girls on the internet.
Recently, X (formerly Twitter) struggled to contain the spread of fake explicit images of pop star Taylor Swift, leading Elon Musk’s social network to temporarily block users from searching for images of the American singer on the platform.
In view of this, the Oversight Board of Meta, the company that owns Instagram, Facebook, WhatsApp and Threads, is reevaluating how its platforms handled two specific cases. The first involves an AI-generated image of a naked woman, resembling an Indian public figure, posted on an Instagram account that shares only fake images of Indian women. The second involves a Facebook group dedicated to sharing AI creations, where a fake image showed a naked woman, resembling an American public figure, with a man touching her breast.
Initially, Meta removed only the image of the American woman, for violating its bullying and harassment policy, which prohibits “photoshops or derogatory sexualized drawings”. The image of the Indian woman was left up, and Meta reversed that decision only after the board intervened.
In an official statement, Meta acknowledged both cases and committed to implementing the board’s decisions.
While Mark Zuckerberg’s company continues to face internal difficulties blocking these derogatory images, some industry leaders are already calling for legislation that would criminalize the creation of harmful deepfakes (the term for this kind of fabricated media) and require technology companies to prevent such uses of their products.
*With information from Reuters
Follow Adnews on Instagram and LinkedIn. #WhereTransformationHappens