No more fact-checking on Facebook – what does it mean for you?
- Syeda Masooma
- Jan 08, 2025
Meta announced on Monday that it is scaling back its content moderation policies, including its US fact-checking programme, across all its platforms, including Facebook. The move appears to be a major shift towards conforming with the policies of president-elect Donald Trump, and comes just over a week before he steps back into the Oval Office.
But what does this step really mean for billions of users worldwide, especially as politics across the world becomes increasingly polarised? And what does it mean for the more than 50 million Meta users in Pakistan?
Also read: Meta abruptly ends US fact-checks ahead of Trump term
Hum News breaks down the possible consequences of the move.
What does the discontinuation mean and does Zuckerberg endorse it?
Meta is discontinuing professional fact-checking on all its platforms, including Facebook and Instagram, and replacing it with a system similar to that of Elon Musk’s X (formerly Twitter). X relies on ‘community notes’ as a way for users to fact-check one another. These notes are displayed under a post, or a tweet, offering alternative views or stating verified facts related to the claims made in the original content.
Meta further announced that it would reverse its 2021 policy of reducing political content across its platforms. The company will instead adopt a more personalised approach, allowing users greater control over the amount of political content they see on Facebook, Instagram, and Threads.
AFP currently works in 26 languages with Facebook’s fact-checking programme, under which Facebook pays to use fact-checks from around 80 organisations globally on its platform, as well as on WhatsApp and Instagram. Under that programme, content rated “false” is downgraded in news feeds so fewer people see it, and anyone who tries to share such a post is presented with an article explaining why it is misleading.
Meta CEO Mark Zuckerberg announced the move while acknowledging that it might lead to more harmful content on the platform. However, he also believes it is a necessary trade-off to preserve freedom of opinion.
“Fact checkers have been too politically biased and have destroyed more trust than they’ve created… What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far,” he stated.
Meta’s new Chief of Global Affairs Joel Kaplan concurred with Zuckerberg and told media that while Meta was well-intentioned in bringing fact checkers on board, it seems that there was “too much political bias in what they choose to fact check and how.”
Zuckerberg has been seen making efforts to reconcile with Trump since his election in November, including donating one million dollars to his inauguration fund. Trump has been a harsh critic of Meta and Zuckerberg for years, accusing the company of bias against him. The Republican was kicked off Facebook following the January 6, 2021, attack on the US Capitol, though the company restored his account in early 2023.
Hate content policy updated
While the broader media is debating the impact of the end of fact-checking, Meta has also quietly updated its hate content policy – and that does not bode well either. One of the changes in the new policy reads: “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird’.”
A report by CNN said that users are now allowed to, for example, refer to “women as household objects or property” or “transgender or non-binary people as ‘it,’” according to a section of the policy prohibiting such speech that was crossed out.
That said, the platform reiterated plans to continue to clamp down on “targeted bullying and harassment, as well as incitement of violence.”
What does it mean for the users?
Will this de facto opening up of speech lead to a more equitable platform where everyone can voice their thoughts and opinions without being forcibly silenced? Or will it lead to a never-before-seen surge in undesirable content and a plethora of mis- and disinformation?
For now, it’s anybody’s guess! While some users are hailing it as the right move towards freedom of speech, others – primarily the groups already suffering from social marginalisation – are afraid that it will lead to even more hatred.
Also read: OpenAI chief Sam Altman denies sister’s sexual abuse accusations
The same can be said of those who spend money on the platform. It could swing either way – for instance, if ‘freer’ speech draws more users to Meta and increases engagement, advertisers might feel inclined to pour more money into Facebook. Conversely, if discourse on the platform starts to lean one particular way – say, pro-China or pro-Palestine – without the platform clamping down on it, advertisers might decide the platform is not worth the investment.
What does it mean for Pakistan?
At the beginning of 2024, there were over 60 million Facebook users in Pakistan. With TikTok drawing away some users and the ban on Twitter/X sending others back to Facebook, the number is likely to remain roughly the same in 2025. The removal of fact-checking will also affect the information that Pakistani users consume on the platform.
According to some reports, Pakistan Tehreek-e-Insaf (PTI) spent north of Rs150 million on social media during the 2024 election campaign. With social media exerting this level of influence on local politics, replacing professional fact-checking with community members holding each other accountable is bound to have an impact – likely a negative one.
The government of Pakistan has already chosen to block access to X. The next ban might fall on Facebook and Instagram. Only time will tell.