Meta accused of restricting pro-Palestine speech by human rights group

Meta, the social media giant, faces criticism from Human Rights Watch for allegedly restricting pro-Palestine speech. The nonprofit claims that Meta repeatedly removed or limited content supporting Palestine, even when it did not violate the company's rules. The accusations come amid ongoing scrutiny of Meta's moderation practices during the Israel-Hamas war.

Human Rights Watch criticized Meta for consistently removing or limiting content in support of Palestine or Palestinian human rights, even when it did not violate the company's rules. The report highlights cases where Meta removed "peaceful" pro-Palestine content despite no policy violations. The group also urges Meta to be more transparent about its policies and moderation decisions, especially with regard to government takedown requests and exceptions made for newsworthy content that violates its policies.

Human Rights Watch has called on Meta to allow protected expression, such as discussions about human rights abuses and political movements, on its platforms. The organization emphasized the importance of Meta consistently enforcing its policies for all users.

Although Human Rights Watch gave general descriptions of the removed content, it did not provide detailed information or screenshots of the hundreds of posts it claimed were removed or restricted.

The organization reported discovering over 1,000 pieces of pro-Palestine content that, according to them, did not breach Meta's guidelines but were still limited or deleted in October and November 2023.

Examples of this content included posts featuring images of injured or deceased individuals in Gaza hospitals, as well as comments expressing sentiments like "Free Palestine" and "Stop the Genocide." Additionally, the group cited an incident where a user attempted to post a comment consisting solely of a string of Palestinian flag emojis, only to receive a warning from Instagram indicating that her comment "may be hurtful to others."

The report does not claim that pro-Palestine users faced over-enforcement at higher rates than other groups.

Meta, which was criticized by its Oversight Board this week for unnecessarily removing two videos related to the Israel-Hamas war, said the Human Rights Watch report does not accurately represent its commitment to protecting speech related to the conflict.

Meta stated that the report failed to consider the challenges of enforcing its policies on a global scale during a rapidly evolving, highly divisive and intense conflict, one that has driven a surge in reported content. The company emphasized that its policies aim to provide a platform for all voices while maintaining safety. It admitted to making mistakes but denied any deliberate and systemic suppression of particular voices, and dismissed as misleading the claim that roughly 1,000 examples, out of the vast amount of content posted about the conflict, constitute evidence of systemic censorship.

Ongoing scrutiny of moderation around the war

Thursday's report is just the latest scrutiny that Meta and other social media companies have faced over their handling of content related to the Israel-Hamas war.

The criticism follows a decision by Meta's Oversight Board earlier this week to reverse the company's initial removal of two videos connected to the conflict, which the board said contained important information about human suffering on both sides. Earlier in the conflict, Meta and other platforms were faulted for not removing potentially harmful or misleading content, underscoring the delicate balance the company must strike: removing enough content quickly to prevent potential harm, without over-enforcing its rules in a way that restricts free expression.

The task is made more difficult by the fact that there is not always consensus on what constitutes harm. For instance, the Human Rights Watch report criticized Meta's removal of comments and posts containing the slogan "from the river to the sea, Palestine will be free." Some view the phrase as a call for peace and coexistence between Israelis and Palestinians; others see it as antisemitic, anti-Israel and potentially violent.

Furthermore, Human Rights Watch took issue with Meta's Dangerous Organizations and Individuals policy covering Hamas on the basis of the United States government's designation of the group as a terrorist organization, arguing that Meta should instead rely on "international human rights standards."

The report states that the US list includes political movements with armed wings, such as Hamas and the Popular Front for the Liberation of Palestine. It says enforcement of the policy effectively bans many posts that endorse major Palestinian political movements and suppresses discussion around Israel and Palestine, and it urges Meta to publish the full list of organizations covered by the policy.

Hamas has been responsible for significant, deadly violence against Israelis and opponents in Gaza. Its attack on October 7 resulted in the loss of more than 1,200 lives. Hamas is designated as a terrorist organization by the United States, the European Union, and Israel.

The report released on Thursday discusses Meta's Dangerous Organizations and Individuals policy, noting that while it understandably prohibits incitement to violence, it also imposes sweeping restrictions on vaguely defined categories of speech, such as "praise" and "support" of dangerous organizations, which the policy defines primarily by reference to the United States government's lists of designated terrorist organizations.

In August, Meta revised its Dangerous Organizations and Individuals policy to permit references to such groups and individuals within the context of social and political discourse. The company also announced plans to release an updated version of the policy in the first half of the following year, following a review of its definition of "praise" for dangerous organizations in September.

Human Rights Watch conducted its review by soliciting emails from Facebook and Instagram users containing screenshots and other evidence of their content being removed or restricted.

The reports came from users in more than 60 countries and in various languages, though primarily in English. Most of the messages expressed support for Palestine or Palestinians in a peaceful manner. Human Rights Watch excluded cases where claims of unjustified removal could not be substantiated or where the content could be interpreted as inciting violence, discrimination or hostility.

The report raised concerns about Meta's "heavy reliance" on automation for content moderation, citing examples such as the automatic removal of pro-Palestine comments and important videos related to the Israel-Hamas war. The Oversight Board also criticized the company's use of automated systems for moderation.

Meta restored the videos after the board selected them for review, ahead of this week's decision.

"We value both freedom of expression and the safety of our users," the company stated in a recent blog post.

In October, Meta informed CNN that it had set up a special operations center with a team of experts, including fluent Hebrew and Arabic speakers, to closely monitor and address the rapidly changing situation. Additionally, the company stated that it was collaborating with third-party fact checkers in the area.