Meta's Oversight Board is reviewing how the company handles deepfake pornography, amid concerns that artificial intelligence is fueling a surge in fake explicit images used to harass women. On Tuesday, the board announced it will evaluate Meta's handling of two AI-generated explicit images of female public figures, one from the United States and one from India, to determine whether the company has effective policies and practices for addressing such content and whether it enforces them consistently around the world.
AI-generated pornography has emerged as a growing threat in recent months, ensnaring celebrities such as Taylor Swift, US high school students, and women around the world. Widely available generative AI tools have made these fake images faster, easier, and cheaper to create, and social media platforms allow them to spread rapidly.
According to Oversight Board Co-Chair Helle Thorning-Schmidt, deepfake pornography is a growing driver of gender-based harassment online, increasingly used to target, silence, and intimidate women, both on the internet and in real life.
Thorning-Schmidt, a former prime minister of Denmark, said Meta moderates content more quickly and effectively in some markets and languages than in others. By taking up one case from the US and one from India, the board aims to assess whether Meta is protecting all women around the world in a fair way.
Meta's Oversight Board is made up of experts in areas such as freedom of expression and human rights. Often described as a kind of Supreme Court for Meta, it allows users to appeal content decisions on the company's platforms. The board issues rulings on specific content moderation decisions and offers broader policy recommendations to Meta.
The board will review an AI-generated nude image resembling a public figure from India, shared on Instagram by an account known for sharing AI-generated images of Indian women.
A user reported the image as pornographic, but the report was automatically closed when Instagram did not review it within 48 hours. The user appealed Instagram's decision to leave the image up, but that report was also closed without review. After the Oversight Board notified Meta that it was taking up the case, the company acknowledged it had erred in leaving the image up and removed it for violating its bullying and harassment rules, according to the board.
In the second case, an AI-generated image of a nude woman being groped was shared in a Facebook group dedicated to AI creations. The image was created to resemble a well-known American public figure, as mentioned in the caption.
The same image had previously been posted by another user, prompting Meta's policy experts to determine that it violated the company's bullying and harassment rules, specifically the prohibition on "derogatory sexualized photoshop or drawings." The image was then added to a database that automatically detects and removes reposts of rule-breaking images, which is why the second user's post was taken down.
As part of its review, the Oversight Board is inviting the public to submit comments, which can be shared anonymously, on deepfake pornography, including how such content harms women and how Meta has responded to posts featuring AI-generated explicit images. The public comment period closes April 30.