Editor’s Note: This article was first featured in the “Reliable Sources” newsletter. If you want to stay updated on the changing media landscape, you can subscribe to the daily digest here.
Big Tech is currently working quickly to tackle the issue of A.I.-generated images flooding social media platforms. The goal is to prevent these machine-created images from adding to the misinformation circulating online.
TikTok announced that it will start labeling A.I.-generated content. Meta, the parent company of Instagram, Threads, and Facebook, also revealed its plan to label such content. YouTube has implemented rules requiring creators to disclose whether their videos are A.I.-created so they can be properly labeled. (Notably, Elon Musk's X has not announced any intention to label A.I.-generated content.)
With fewer than 200 days left until the crucial November election, and with the technology advancing rapidly, the three biggest social media companies have laid out their strategies to help their billions of users distinguish between content created by machines and by humans.
OpenAI, the creator of ChatGPT and DALL-E, announced plans to release a tool that can identify bot-generated images. They also revealed a collaboration with Microsoft to launch a $2 million fund to combat election-related deepfakes. These initiatives aim to prevent misinformation and protect democracy.
Silicon Valley's actions reflect a recognition that technology created by industry giants can pose a significant threat to the spread of accurate information and the integrity of democratic processes.
Apple issued a rare apology for its ad that depicted a massive hydraulic press smashing the tools of human creativity, like paint, books and music.
A.I.-generated imagery has already proven to be particularly deceptive. Just this week, A.I.-created images of pop star Katy Perry supposedly posing on the Met Gala red carpet in metallic and floral dresses fooled people into believing that the singer attended the annual event, when in fact she did not. The images were so realistic that Perry’s own mother believed them to be authentic.
“Didn’t know you went to the Met,” Perry’s mom texted the singer, according to a screenshot posted by Perry.
“lol, mom the AI got you too, BEWARE!” Perry replied.
While the viral image did not cause any serious harm, it is easy to see how a fake photo could potentially mislead voters, especially right before a significant election. This could lead to confusion and possibly even sway the outcome in favor of one candidate over another.
Despite numerous warnings from experts in the field, the government has not taken any action to put protective measures in place for the industry. As a result, Big Tech is left to regulate the technology on its own to prevent misuse by malicious actors. (What could possibly go wrong?)
Whether industry-led efforts can stop the harmful spread of deepfakes remains to be seen. Social media giants have strict rules against certain content, but they also have a history of enforcing those rules unevenly, allowing malicious content to spread before it is addressed.
That track record does not inspire much confidence as A.I.-generated images continue to flood the information space, particularly with a critical U.S. election on the horizon.
Editor's P.S.:
The article highlights the urgent need to address the proliferation of A.I.-generated images on social media, which threaten both the spread of accurate information and the integrity of democratic processes. Big Tech companies such as TikTok, Meta, and YouTube have announced plans to label A.I.-generated content, while OpenAI is developing tools to identify machine-generated images. However, the government's inaction has left the industry to self-regulate, raising concerns about the effectiveness of these measures. The viral A.I.-created images of Katy Perry illustrate the potential for deception, underscoring the need for stringent rules and consistent enforcement to combat the harmful spread of such content.