Biden Campaign Gears Up to Battle Election Deepfakes: Exclusive CNN Report

The Biden 2024 campaign is taking proactive measures to combat misleading AI-generated content, forming a specialized task force to strategize against potential disinformation risks. Through innovative legal approaches and the leveraging of existing laws, the campaign aims to safeguard the election process from the unregulated realm of AI.

President Joe Biden's 2024 campaign has formed a specialized task force to develop strategies for combating deceptive AI-generated images and videos that could potentially disrupt the election. Comprised of the campaign's leading legal experts and outside advisors, including a former senior legal advisor to the Department of Homeland Security, the task force is focused on drafting legal responses and creating innovative legal theories to combat potential disinformation efforts. They are examining potential actions President Biden could take in scenarios such as the emergence of a fake video depicting a state election official falsely stating that polls are closed, or an AI-generated image portraying Biden encouraging non-citizens to illegally vote by crossing the US border.

The goal is to create a "legal toolkit" that will enable the campaign to swiftly address any situation involving political misinformation, particularly AI-generated deepfakes: persuasive audio, video, or images created using artificial intelligence tools.

Arpit Garg, deputy general counsel for the Biden campaign, explained that the aim is to have a variety of resources readily available so that the campaign can effectively handle different scenarios. This will include templates and draft pleadings that can be used to file in US courts or with regulators internationally to combat foreign disinformation actors.

The campaign has recently launched an internal task force known as the "Social Media, AI, Mis/Disinformation (SAID) Legal Advisory Group," as part of a larger effort to combat various forms of disinformation, according to TJ Ducklo, a senior adviser to the Biden campaign who spoke with CNN.

The group, led by Garg and the campaign's general counsel Maury Riggan, along with outside volunteer experts, has already begun drafting legal theories and conducting research for future use, Garg mentioned. Their goal is to have enough prepared to conduct a campaign-wide tabletop exercise in the first half of 2024.

The struggle highlights the extensive legal gray area surrounding AI-generated political speech, and the challenges policymakers face in addressing the threat it may pose to the democratic process. In the absence of clear federal legislation or regulation, campaigns like Biden's are being compelled to take matters into their own hands, attempting to develop ways to counter images that may inaccurately depict candidates or others saying or doing things they never actually did.

Leveraging old laws to fight a new threat

Without a federal prohibition on political deepfakes, the Biden campaign's legal team is exploring potential strategies to leverage existing voter protection, copyright, and other regulations to urge or force social media and similar platforms to take down misleading content. Additionally, the campaign is examining the possibility of utilizing new disinformation laws in the European Union, particularly if a disinformation campaign originates from or is hosted on a platform based in the EU. The EU's Digital Services Act, which enforces stringent transparency and risk-reduction mandates on major tech platforms, could result in substantial fines for violations.

The group is drawing legal inspiration from a recent case involving a Florida man who was convicted under a Reconstruction-era law for spreading false claims on social media about voting. This law criminalizes conspiracies to deprive Americans of their constitutional rights and has been used in human trafficking cases in the past. They are also looking at a federal statute that makes it a misdemeanor for a government official to deny a person their constitutional right to vote. Garg mentioned that US election law currently prohibits campaigns from fraudulently misrepresenting other candidates or political parties, but whether this applies to AI-generated content is still up in the air. In June, Republicans on the Federal Election Commission blocked a move to clarify this, and the agency is still considering the matter without reaching a decision.

The Republican National Committee expressed concerns about the potential misuse of AI in political campaign communications, stating that a proposed oversight of political deepfakes would exceed the FEC's authority and raise constitutional concerns. On the other hand, the Democratic National Committee has called for the FEC to address intentionally misleading uses of AI, citing the technology's ability to fabricate hyperrealistic media that could mislead voters.

Lack of guardrails around AI

Despite growing concern about AI among members of Congress, US legislators are still in the early stages of addressing the issue and have yet to finalize any AI-related laws. Senate Majority Leader Chuck Schumer has organized a series of private hearings for lawmakers to become better informed about the technology and its consequences, covering areas such as the impact of AI on workers, intellectual property, and national security. These discussions are ongoing.

Schumer has indicated that he may move to expedite a bill focused on AI and elections before addressing the broader impacts of the technology. However, he has stressed the importance of a thorough process, noting that results may take months, not days or weeks. Another proposal in September, put forth by a bipartisan group of senators, aims to prohibit the deceptive use of AI in political campaigns, but has not seen progress. Without the prospect of clear regulations on the horizon, Biden's team must confront the threat head-on.

Since the 2018 midterm elections, some anti-disinformation campaigns have involved coordination with DNC officials. However, the rapid growth of advanced AI tools over the past year has made AI a unique factor in the 2024 race, according to Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, who spoke with CNN.

In response, tech companies like Meta, the parent company of Facebook and Instagram, have implemented restrictions and requirements for AI in political speech on their platforms. Meta recently announced that it will prohibit political advertisers from using the company's new artificial intelligence tools for generating text, backgrounds, and other marketing content. Additionally, any political advertiser utilizing deepfakes in ads on Facebook or Instagram will be required to disclose that fact.

AI technology raises concerns beyond creating fake video and audio. Darren Linvill, a professor at Clemson University's Media Forensic Hub, stated that AI can be utilized to generate large quantities of articles and online comments intended to either support or attack a political candidate.

In a report released on Thursday regarding potential threats leading up to the 2024 election, Meta's security team cautioned about the potential for AI to be utilized by malicious groups to produce "greater amounts of persuasive content." However, the report also expressed hope that advancements in AI could assist in identifying and thwarting coordinated disinformation campaigns.

The Meta report outlines the challenges faced by social media platforms in dealing with the deceptive manipulation of AI. It emphasizes that while foreign interference using AI-generated content is widely condemned, the use of AI by authentic political groups and domestic voices can blur the line between what is considered acceptable and what is not.

Meta pointed specifically to an advertisement released by the RNC in April that used AI to create deepfake images imagining a dystopian United States if Biden were reelected.