Meta will end its eight-year partnership with independent American fact-checking organizations, including PolitiFact, to identify false information and hoaxes on its platforms.
Meta’s content moderation approach will resemble X’s Community Notes model, Meta CEO Mark Zuckerberg said.
In a five-minute video posted Jan. 7, Zuckerberg cited the political environment after President-elect Donald Trump’s victory and a desire to return to “free expression” on Meta platforms, including Facebook, Instagram and Threads. The move is part of a multipronged plan to overhaul how Meta moderates content, one that also includes adjusting the company’s own content filter settings.
“We built a lot of complex systems to moderate content. But the problem with complex systems is they make mistakes,” he said. “Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.”
Zuckerberg, who met with Trump at his Mar-a-Lago resort after the election, said the “recent elections also feel like a cultural tipping point” toward prioritizing speech. “The fact-checkers have just been too politically biased and have destroyed more trust than they have created, especially in the U.S.,” Zuckerberg said.
Neil Brown, president of the Poynter Institute, the journalism nonprofit that owns PolitiFact, said Zuckerberg’s statement was disappointing. Meta sets its own tools and rules, he said, while PolitiFact and other fact-checking outlets offered independent review and showed their sources.
“It perpetuates a misunderstanding of its own program,” Brown said of Zuckerberg’s statement. “Facts are not censorship. Fact-checkers never censored anything. And Meta always held the cards. It’s time to quit invoking inflammatory and false language in describing the role of journalists and fact-checking.”
Meta’s decision affects contracts it has with 10 fact-checking partners in the U.S. Its website shows it has similar business arrangements with fact-checking organizations in about 119 countries, with extensive coverage in Europe, Brazil and India. But only the U.S. program was changed by Tuesday’s announcement. U.S. fact-checkers were in conversation throughout the day about how they might continue to help users navigate information on Meta and other social media platforms.
PolitiFact was one of the original news organizations that partnered with Facebook to launch the company’s Third Party Fact-Checking program in December 2016.
From the start, Meta determined the scope of its program. For example, it exempted politicians’ speech from fact-checking.
Zuckerberg repeated “censorship” concerns throughout his video, but Meta rarely removed content from its platforms, and when it did, the decision was Meta’s, not the fact-checking outlets’. The program’s third-party fact-checkers have never had the power to remove content from Meta’s platforms, a fact Meta makes clear in its online program description: “Fact-checkers do not remove content, accounts or Pages from Facebook,” it says, bolding the words for emphasis.
Through the program, Meta surfaces potential misinformation circulating on its platforms based on what it describes as community feedback and other signals, including how fast the content is spreading and how people are responding. It also uses keyword detection during major news events or when certain topics are trending.
“For example,” Meta wrote on its website, “we’ve used this feature to group content about COVID-19, global elections, natural disasters, conflicts and other events.”
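To make that description concrete, here is a minimal sketch of how such signals might be combined into a review queue. It is illustrative only: every name, weight and threshold below is invented for this example, and none of it reflects Meta’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical signals for a piece of content. All names, weights and
# thresholds are invented for illustration; this is not Meta's code.
@dataclass
class ContentSignals:
    user_reports: int          # community feedback, e.g. "false news" reports
    shares_per_hour: float     # how fast the content is spreading
    comment_skepticism: float  # 0..1 share of replies expressing doubt
    text: str                  # post text, used for keyword detection

TRENDING_KEYWORDS = {"election", "covid-19", "hurricane"}  # event keyword lists

def misinformation_score(c: ContentSignals) -> float:
    """Combine signals into a single review-priority score in [0, 1]."""
    score = 0.0
    score += min(c.user_reports / 100.0, 1.0) * 0.4      # community feedback
    score += min(c.shares_per_hour / 500.0, 1.0) * 0.3   # speed of spread
    score += c.comment_skepticism * 0.2                   # audience reaction
    if any(k in c.text.lower() for k in TRENDING_KEYWORDS):
        score += 0.1                                      # keyword detection
    return score

def surface_for_review(candidates: list[ContentSignals],
                       threshold: float = 0.5) -> list[ContentSignals]:
    """Queue high-scoring content for independent fact-checkers to consider."""
    return [c for c in candidates if misinformation_score(c) >= threshold]
```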
Fact-checkers review the information Meta surfaces and, using their own organizations’ editorial standards for what merits fact-checking, choose claims to check. Fact-checkers then produce original reporting using primary sources, interviews, public data and analysis. They use these fact checks to review and rate the accuracy of claims trending on Meta platforms.
Once third-party fact-checkers have fact-checked a piece of Meta content, Meta reduces the content’s distribution “so that fewer people see it.”
As Meta writes, “We notify people who previously shared the content or try to share it that the information is false, and apply a warning label that links to the fact-checker’s article, disproving the claim with original reporting.”
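Taken together, those steps describe a clear division of labor: the fact-checker supplies a rating and an article, and the platform decides what happens next. Here is a hedged sketch of that division, with hypothetical names throughout; the rating labels loosely echo the program’s published options, but nothing here is Meta’s implementation.

```python
from dataclasses import dataclass, field
from enum import Enum

class Rating(Enum):
    FALSE = "False"
    ALTERED = "Altered"
    PARTLY_FALSE = "Partly False"
    MISSING_CONTEXT = "Missing Context"

@dataclass
class Post:
    text: str
    sharers: list[str] = field(default_factory=list)
    distribution_multiplier: float = 1.0  # 1.0 = normal reach
    warning_label: str | None = None      # URL of a fact-check article, if any

def apply_rating(post: Post, rating: Rating, fact_check_url: str) -> None:
    """Platform-side response to a third-party rating (illustrative only).

    The fact-checker supplies only the rating and the article; demotion,
    labeling and notification are the platform's decisions. Nothing here
    removes the post: removal was never a fact-checker power.
    """
    if rating in (Rating.FALSE, Rating.ALTERED, Rating.PARTLY_FALSE):
        post.distribution_multiplier = 0.2   # "fewer people see it"
        post.warning_label = fact_check_url  # warning label links to the check
        for user in post.sharers:            # notify prior sharers
            print(f"Notify {user}: content you shared was rated "
                  f"{rating.value}. See {fact_check_url}")
```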
Meta also uses artificial intelligence to fan out fact checks, screening its own platforms for similar claims and attaching the third-party fact checks to the content based on those matches.
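That fan-out step is, in essence, near-duplicate matching. The sketch below illustrates the general technique with a toy bag-of-words similarity measure; a production system would use learned embeddings, and the function names here are invented for this example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def fan_out(checked_claim: str, new_posts: list[str],
            threshold: float = 0.8) -> list[str]:
    """Return new posts similar enough to inherit an existing fact check."""
    reference = embed(checked_claim)
    return [p for p in new_posts if cosine(embed(p), reference) >= threshold]
```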
The program includes an appeals process: people whose posts are flagged with fact checks may dispute the rating and, if the appeal has merit, correct the post and have the rating removed.
So far this week, PolitiFact has fact-checked Meta posts, including:
- A False claim from Threads that “22 states will not be certifying the election.”
- A False Jan. 4 claim from Facebook that “a second attack in New Orleans has been uncovered, police are searching.”
- A viral image that falsely claimed to show a child who was “found” by police in numerous locations, including Semmes, Alabama; Wayne County, West Virginia; and Baldwin County, Georgia.
In its description of the third-party fact-checking program, dated June 1, 2021, Meta touted its success:
“We know this program is working and people find value in the warning screens we apply to content after a fact-checking partner has rated it. We surveyed people who had seen these warning screens on-platform and found that 74% of people thought they saw the right amount or were open to seeing more false information labels, with 63% of people thinking they were applied fairly.”
This article was originally published by PolitiFact, which is part of the Poynter Institute. This story will be updated.