Meta Removes Fact-Checkers, Pledges to Curb ‘Censorship’
Meta Revamps Content Moderation Policies, Scraps Fact-Checkers for Community Notes
Meta has announced a series of sweeping changes to its content moderation policies across Facebook and Instagram, replacing third-party fact-checkers with user-generated “community notes,” a system similar to the one employed by Elon Musk’s X. CEO Mark Zuckerberg made the announcement on Tuesday, describing the shift as a move to curb what he called excessive censorship and political bias.
“Fact checkers have been too politically biased and have destroyed more trust than they’ve created,” Zuckerberg said in a video statement. He argued that efforts to make the platforms more inclusive had instead led to the silencing of diverse opinions.
The policy overhaul, which comes just before President-elect Donald Trump takes office, signals Meta’s shift toward a more hands-off approach to content moderation. Zuckerberg acknowledged that the changes would likely result in an increase in harmful content on the platforms but maintained that the benefits of reducing accidental censorship outweighed the risks.
A Shift in Meta’s Leadership and Approach
The announcement follows several other significant moves by Meta that indicate a shift in its ideological stance. Trump ally and UFC CEO Dana White was recently appointed to Meta’s board, along with two other directors. The company also plans to donate $1 million to Trump’s inaugural fund and has expressed interest in actively participating in tech policy discussions under the new administration.
Joel Kaplan, Meta’s newly appointed chief global affairs officer and a prominent Republican, described the changes as a response to societal and political pressure over the past several years. He credited the incoming administration, which he characterized as a defender of free expression, with creating an environment conducive to such policy shifts.
Reversing Course on Fact-Checking
Meta’s decision to end its partnerships with third-party fact-checkers marks a stark departure from its prior content moderation practices. The fact-checking initiative, launched in 2016, aimed to combat misinformation after the company was accused of enabling foreign interference and disinformation during that year’s U.S. presidential election. In the years since, Meta added automated detection systems, safety teams, and the Oversight Board to address content moderation challenges.
Now, however, Zuckerberg has chosen to follow Musk’s lead, relying on community notes, user-written context labels appended to posts, in place of professional fact-checks. The automated systems that detect policy violations will also be scaled back to focus solely on illegal and high-severity content, such as terrorism, child exploitation, and fraud. Lower-severity issues will be reviewed only when users report them.
Balancing Free Expression and Harmful Content
Zuckerberg acknowledged the tradeoffs inherent in the new approach: fewer innocent posts and accounts will be removed by mistake, but the platforms will also catch less of the harmful content that does appear. In addition, Meta will roll back restrictions on sensitive topics such as immigration and gender identity and ease limits on political content in users’ feeds.
To address concerns about bias, Meta plans to relocate its trust and safety teams from California to Texas and other U.S. locations. “I think that will help us build trust to do this work in places where there is less concern about the bias of our teams,” Zuckerberg said.
The moderation changes reflect a broader ideological pivot within Meta’s leadership as the company seeks to align itself with the incoming administration’s emphasis on free speech. Critics, however, warn that the shift could lead to a resurgence of misinformation and harmful content on Meta’s platforms.