Millions of people are turning to platforms like TikTok and Instagram for a real-time understanding of the Israel-Hamas war, now in its first week. Trending search terms on TikTok, such as "graphic Israel footage" and "live stream in Israel right now," reflect the desire for frontline perspectives. Internet users want raw, unfiltered accounts of a brutal conflict in a desperate attempt to comprehend it.
They have found them: videos of Israeli children grappling with the harsh reality of death, and of Gazans left stunned amid the ruins of their destroyed homes. But this hunger for a closer look at the war has also opened the door to disinformation peddlers, conspiracy theorists, and propagandists, and regulators and researchers warn that these malign influences pose a serious threat to public understanding of the war.
One TikTok video, reviewed by CNN and viewed more than 300,000 times, pushed conspiracy theories about the origins of the Hamas attacks, falsely claiming they were orchestrated by the media. Another TikTok video, with more than 100,000 views, presented a clip from the video game "Arma 3" under the caption "The war of Israel." (Comments on the video indicated that some users had seen the same footage circulating during Russia's invasion of Ukraine.)
The problem is not limited to TikTok. On X (formerly known as Twitter), a post with more than 20,000 views was flagged as misleading by Reset, a London-based social media watchdog; it purported to show Israelis staging civilian deaths for cameras. Another X post, viewed 55,000 times, featured an antisemitic meme depicting Pepe the Frog, a cartoon co-opted by far-right white supremacists. On Instagram, a widely shared video captioned "imagine attending a music festival when Hamas parachutes in," which appeared to show parachutists dropping into a crowd, was debunked over the weekend: the footage actually showed unrelated skydivers in Egypt. (Instagram later labeled the video as false.)
European Union officials have warned several social media platforms, including TikTok, Meta-owned Facebook, and YouTube, about misleading or illegal content related to the war, reminding them that they could face substantial fines if found to have violated EU content moderation law. US and UK lawmakers have likewise urged the platforms to enforce their rules against offensive and unlawful content. Imran Ahmed, founder and CEO of the Center for Countering Digital Hate, told CNN that his organization has observed a surge in attempts to spread misinformation about the conflict since the violence in Israel began.
Ahmed warned that anyone relying on social media for information is highly exposed to misinformation. Actors ranging from foreign adversaries of the US to domestic extremists, internet trolls, and "engagement farmers," he said, are exploiting the social media battleground for personal or political gain.
"According to Dan Brahmy, CEO of Cyabra, an Israeli social media threat intelligence firm, individuals with malicious intentions have been strategically manipulating and deceiving others on social media platforms. In a video uploaded on LinkedIn, Brahmy emphasized the importance of not sharing content if its trustworthiness is questionable. It is essential to exercise caution and verify before spreading information."
Upticks in Islamophobic and antisemitic narratives
Graham Brookie, senior director of the Digital Forensic Research Lab at the Atlantic Council in Washington, DC, told CNN that his team has observed a similar trend: a surge of terrorist propaganda from official sources, along with graphic violence, misinformation and outright falsehoods, and hate speech, especially a rise in Islamophobic and antisemitic narratives.
According to Brookie, much of the most extreme content has circulated on Telegram, a messaging application that lacks comprehensive content moderation and whose format allows propaganda and graphic material to reach a large, devoted audience quickly and efficiently. But much as TikTok videos are frequently copied and reshared elsewhere, content posted on Telegram and other fringe sites can easily make its way onto mainstream social media platforms or draw in curious users from them. (Telegram did not respond to a request for comment.)
Schools in Israel, the United Kingdom, and the United States are advising parents to delete social media apps from their children's devices out of concern that Hamas will share disturbing videos of recently captured hostages. Graphic images of dead or injured people, including children, have already circulated widely on Facebook, Instagram, TikTok, and X.
In addition, the tech watchdog group Campaign for Accountability released a report on Thursday identifying several X accounts that shared apparent propaganda videos featuring Hamas symbols or that directed users to official Hamas websites. Earlier in the week, X drew criticism over unrelated videos presented as actual footage from the war and over a post by owner Elon Musk encouraging users to follow accounts that had previously shared misinformation. Musk's post has since been deleted, and the videos were labeled through X's Community Notes feature.
Some platforms are better equipped than others to handle these threats. Misinformation experts warn that widespread tech industry layoffs, including on some social media companies' ethics and safety teams, risk leaving platforms ill-prepared at a crucial moment. And much of the content about the war is spreading in Arabic and Hebrew, testing the platforms' capacity to moderate non-English content, where enforcement has historically been weaker than for English-language content.
Platforms' tools for communicating and sharing information have improved over the years, but they have never faced a test quite like this one, said Brian Fishman, co-founder of trust and safety platform Cinder and former leader of Facebook's counterterrorism efforts. Platforms with well-established teams will be pushed to their limits, Fishman said, and those without them will be pushed even further.
Linda Yaccarino, CEO of X, said in a letter to the European Commission on Wednesday that the platform has identified and removed numerous Hamas-affiliated accounts and is working with third-party organizations to prevent the spread of terrorist content. Yaccarino said the company has acted promptly and proactively to remove content that violates its policies, including violent speech, manipulated media, and graphic material. On Thursday, following its earlier warning about disinformation and illegal content related to the war, the European Commission formally opened an investigation into X.
Meta spokesperson Andy Stone said that after Hamas's initial attacks, the company established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to the rapidly evolving situation. Stone said Meta's teams are working around the clock to keep its platforms safe, take action on content that violates its policies or local law, and coordinate with third-party fact-checkers in the region to limit the spread of misinformation, and that these efforts will continue as the conflict unfolds.
YouTube has removed numerous videos since the start of the attack, and is actively monitoring for content that goes against its policies, such as hate speech, extremism, and graphic imagery. In addition, the platform primarily displays videos from mainstream news organizations when users search for war-related subjects.
Snapchat's misinformation team is closely monitoring content originating from the region to ensure adherence to the platform's community guidelines. These guidelines specifically prohibit misinformation, hate speech, terrorism, graphic violence, and extremism.
TikTok did not respond to a request for comment on this story.
Switch off the engagement-driven algorithms
Under the European Union's new Digital Services Act, large tech platforms are required to comply with content-related regulation, including curbing the spread of false information, addressing algorithmically recommended content that leads users down harmful paths, and protecting users' mental health. In today's polarized climate, however, platforms that moderate too aggressively risk user backlash and accusations of bias.
The platforms' algorithms and business models, which typically reward high-engagement content, can be exploited by bad actors who craft content to game that system, Ahmed said. Certain product choices can also shape users' perception of a news event, such as X's decisions to sell a subscription that includes a blue "verification" checkmark and algorithmically boosted post visibility, and to strip headlines from links to news articles.
Ahmed urged platforms to switch off their engagement-driven algorithms, arguing that with geopolitical instability and potential harm to Jews and Muslims at stake, now is the time to act. But even as social media companies try to filter out offensive content, users' persistent demand for raw, intimate accounts of the situation facing Israelis and Palestinians presents the platforms with a difficult dilemma.
"Platforms are facing a demand dynamic wherein users desire up-to-date and detailed content or information regarding events, such as terrorist attacks," stated Brookie.
That dynamic underscores the tension in social media business models and the central role companies play in tailoring their users' experiences. The same algorithms widely criticized elsewhere for promoting controversial, divisive, and provocative content may, in this context, be giving users exactly what they want.
Be very cautious about sharing
Ahmed and Brookie emphasized that emotional proximity to a situation may create a sense of intimacy, but it guarantees neither authenticity nor objectivity, a crucial distinction amid the current flood of misinformation on social media. Although social media can create the impression of presenting real, truthful information, Brookie said, individual stories and combat footage often lack the wider context and perspective that journalists, research organizations, and platform moderation teams use to build a more complete picture of a situation. Users, he argued, should be able to engage with the world and get the most up-to-date, accurate information about an event without having to sift through all of the worst content surrounding it on their own.
The messy information ecosystem is made worse by a culture, prevalent on social media, that encourages users to post about crises as a way of signaling their personal stance, regardless of how much they actually know. As a result, even well-intentioned users can end up spreading misleading information or amplifying emotionally charged content designed to grab attention or generate revenue.
Ahmed's advice is to be very cautious about sharing during major world events. Some people aim to manipulate and deceive, spreading falsehoods and stoking hatred. Sharing content you are not sure about doesn't help people; it can do real harm, contributing to a growing sense of mistrust in what people see online.