Meta says it will end fact-checking: A boon or a setback for Palestine?

Critics warn that the shift could fuel hate speech and disinformation targeting marginalised communities, including Palestinians.

A December BBC investigation confirmed earlier findings of Facebook's severe restrictions on Palestinian news outlets' reach during Israel's war on Gaza. / Photo: AA

Meta will eliminate third-party fact-checkers and increase the visibility of political content on its platforms, Mark Zuckerberg announced on January 7.

The Meta CEO said this shift will result in “dramatically” reduced censorship on Facebook, Instagram, and Threads.

In recent years, his platform has faced mounting criticism for alleged political censorship, particularly concerning its handling of content about Israel’s ongoing war on Gaza.

Meta’s new policy mirrors the approach taken by Elon Musk on micro-blogging site X, which makes use of a “community notes” system to allow selected users to add contextual notes to posts they find misleading or lacking context.

However, experts worry that the “community notes” system could exacerbate the “already existing” hate speech and incitement targeting marginalised communities, including Palestinians, potentially fuelling an uptick in violent targeted attacks.

“Meta's paid ad system has a history of tolerating harmful content, including xenophobic, Islamophobic, and GBVO-related ads, which disproportionately harm vulnerable groups, including Palestinians, by spreading hatred and disinformation,” Palestinian digital rights defender Mona Shtaya tells TRT World, adding that these actions must be understood within the broader context.

“Meta's framing of regulation efforts as ‘censorship’ while championing free expression is a tactic to prioritise profits over safety, undermining the ability of marginalised communities to safely express their views or resist oppression on the platform.”

In December 2023, Human Rights Watch published a detailed report documenting more than 1,050 instances of takedowns and suppression of Palestine-related content posted on Instagram and Facebook, including about human rights abuses, between October and November 2023.

The tech giant received significant backlash for suppressing political debate about self-determination of the Palestinian people when it continually removed posts containing the slogan “From the river to the sea, Palestine will be free,” post-October 7, 2023.

In September 2024, Meta’s Oversight Board overturned this policy, ruling that the slogan does not inherently violate the company’s content guidelines.

Similarly, the Arabic term “shaheed,” meaning “martyr,” had previously been subject to a blanket ban. In July 2024, Meta lifted this ban, adopting a more nuanced approach that considers the context of its use.


“Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn’t have been,” Zuckerberg said on January 7.

Yet his announcement made no mention of these prior controversies, which generated headlines accusing the tech giant of political bias in favour of Israel.

Sidestepping any acknowledgement of the controversies surrounding Palestine-related content, Zuckerberg zeroed in on plans to lift restrictions on topics such as “immigration” as well as “gender” and “gender identity” during his announcement, in what critics view as his way of appeasing the Trump administration and its conservative policies.

“We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.”

“It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” the Meta CEO said after listing the topics the new policy would impact.

“The recent elections also feel like a cultural tipping point towards once again prioritising speech, so we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms,” he added.


Brendan Nyhan, a political scientist at Dartmouth College, described Meta’s policy shift as “a pattern of powerful people and institutions kowtowing to the president in a way that suggests they’re fearful of being targeted.”

In his statement, Zuckerberg also announced that Meta’s content moderation team would relocate from blue-state California to red-state Texas.

Meta has collaborated with dozens of fact-checking organisations globally, including 10 in the US, where the new rules will first be implemented.

Fact-checking partners have pushed back against Zuckerberg’s claims, noting that their role has only been to add context and information, while decisions to remove content have always rested with Meta.

Meta first introduced fact-checking in December 2016, following Trump’s election, to address criticism over the spread of “fake news” on its platforms.

Over the years, the tech giant has collaborated with more than 100 fact-checking partners operating in over 60 languages, including independent fact-checking organisations, news outlets, and non-governmental groups specialising in media literacy. The future of these partnerships now hangs in uncertainty.

