By Angela Fu
January 7, 2025

After eight years of working with professional journalists to flag misinformation on its platforms, Meta will turn that task over to users.

The tech giant — which owns Facebook, Instagram and Threads — announced Tuesday that in the United States, it will switch to a crowdsourced fact-checking model similar to X’s Community Notes system. Under such a system, participating users will be able to suggest notes to be displayed next to misleading posts.

“We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see,” chief global affairs officer Joel Kaplan wrote in a post announcing the change.

But experts are skeptical. Many fact-checkers say that X’s Community Notes program is ineffective, in some cases even furthering the spread of misinformation. And questions remain about how exactly Meta’s version will work — details that will ultimately determine its efficacy.


“Just because in the past, we’ve seen some success in Community Notes, that doesn’t necessarily mean that any system that Facebook happens to roll out is going to be successful,” said Jennifer Allen, a post-doctoral researcher at the University of Pennsylvania who has studied misinformation and crowdsourced fact-checking.

Academic research into X’s Community Notes and crowdsourced fact-checking is split on the model’s effectiveness. Much of it predates the changes made to X under Elon Musk, who acquired the company in 2022 and has since changed moderation policies and cut off access to certain data. “The research has not totally caught up with the current state of X,” Allen said.

But fact-checkers say that they’ve noticed misinformation go unchecked on X. Science Feedback, a France-based fact-checking organization that was part of Meta’s program, analyzed X posts from the 2024 European Parliament elections. It found that of the 894 tweets that professional fact-checkers identified as containing misinformation, only 11.7% had a Community Note attached.

A separate analysis by Poynter and Faked Up of Community Notes posted on Election Day in the U.S. found that only a small percentage of the notes were rated as helpful.

Meta shared in its announcement that its version of Community Notes will require people “with a range of perspectives” to agree on a note before it can be published, just as X’s program does. On X, notes are only made public if they get enough votes from people of different points of view.
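To make that mechanism concrete, the sketch below shows one simplified way a cross-perspective publication rule could work. This is an illustrative assumption, not Meta’s or X’s actual code: the `Rating` type, the `should_publish` function and the thresholds are invented for the example, and X’s published algorithm is considerably more sophisticated, inferring rater viewpoints via matrix factorization rather than taking them as given.

```python
# Hypothetical, simplified illustration of a cross-perspective consensus rule.
# Not Meta's or X's real code: X's production algorithm infers rater viewpoints
# from rating patterns via matrix factorization. Here, viewpoints are given.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_viewpoint: str  # assumed viewpoint group, e.g. "left" or "right"
    helpful: bool         # did this rater mark the note as helpful?

def should_publish(ratings: list[Rating],
                   min_ratings_per_group: int = 5,
                   min_helpful_share: float = 0.6) -> bool:
    """Publish only if every viewpoint group independently finds the note helpful."""
    groups: dict[str, list[bool]] = {}
    for r in ratings:
        groups.setdefault(r.rater_viewpoint, []).append(r.helpful)

    # Require agreement across at least two distinct perspectives.
    if len(groups) < 2:
        return False

    # Each group must supply enough ratings, and each must rate the note
    # helpful at a sufficient rate, for the note to go public.
    for votes in groups.values():
        if len(votes) < min_ratings_per_group:
            return False
        if sum(votes) / len(votes) < min_helpful_share:
            return False
    return True

# Example: helpful across both groups, so the note is published.
ratings = [Rating("left", True)] * 5 + [Rating("right", True)] * 4 + [Rating("right", False)]
print(should_publish(ratings))  # True
```

Even in this toy version, the trade-off the experts describe is visible: if the two groups never converge in their ratings, nothing gets published, regardless of how inaccurate the underlying post is.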

This requirement ostensibly reduces bias — one issue Meta claimed to have with its third-party fact-checking program — but experts say that in practice, it is difficult for users of different ideologies to reach a consensus. As a result, misinformation about politics or other controversial topics often goes unchecked on X.

“The facts don’t care about if there is a consensus about them or not,” said Maarten Schenk, co-founder and chief technology officer of Lead Stories, a U.S.-based fact-checking organization that operates in multiple languages. “The shape of the Earth remains exactly the same whether social media users can form a consensus about it or not.”

Schenk and other fact-checkers noted the discrepancy between Meta’s stated goal of reducing bias and its decision to switch to a system so vulnerable to political biases.

“There have been studies that show that users are far more likely to label and rate their political opponents than their co-partisans in the United States,” said Alexios Mantzarlis, the director of the Security, Trust and Safety Initiative at Cornell Tech and a former director of the International Fact-Checking Network. “So if the stated ambition is to reduce partisanship or introduce some unbiased approach to fact-checking, there is no evidence that this will do it.”

It is unclear who will be able to participate in Meta’s Community Notes program. For now, any user can join a waitlist to be notified whether they are eligible to participate once the program becomes available.

Those users may or may not have the expertise needed to evaluate potential misinformation. Additional issues arise if Meta allows contributors to remain anonymous, as X does. Anonymous users face no consequences for writing or voting for an inaccurate note, Schenk said, whereas journalists put their reputations on the line when they publish a fact-check. He added that anonymity also means less transparency.

“With fact-checkers, you know exactly who they are, who funds them, what their methodology is, which sources they use because they are required to disclose all of that,” Schenk said.

Crowdsourced fact-checking models do have some benefits, according to experts. Users may trust crowdsourced fact-checkers more than professional journalists, especially in the current political climate.

Meta’s third-party fact-checking program was not “particularly effective,” Allen said. The program was limited in scale, and it could take several days — well after a piece of misinformation went viral — for a fact-check to be published. Meta also exempted politicians from being fact-checked, further hampering the program’s efficacy.

A Community Notes approach could lead to broader coverage and faster fact-checks, Allen said, though that will depend on exactly how the new program is implemented.

For now, Meta is implementing changes to its fact-checking programs only in the U.S., where it has partnerships with 10 organizations. But fact-checkers outside the U.S. said they believe the decision has implications for everyone.

Several described the rhetoric Meta and CEO Mark Zuckerberg used in announcing the decision as dangerous. Meta’s announcement frames its third-party fact-checking program as a tool for censorship fraught with bias. But fact-checkers say it was Meta, not them, that decided whether to censor posts, and that the accusations of bias are untrue and run counter to the company’s previous messaging.

“Other (fact-checking) organizations might have a difficult time, particularly those working in difficult countries where they are under duress and under harassment from their own governments,” said Carlos Hernández-Echevarria, deputy director and public policy coordinator of Spanish fact-checking outlet Maldita.es. “And now the Meta CEO basically has validated all the harassment they have received through the years.”

Tai Nalon, executive director and co-founder of Brazilian fact-checking site Aos Fatos, pointed out that in a video announcing the change, Zuckerberg made reference to several other regions in the world. At one point, Zuckerberg says Meta will “work with President Trump to push back on governments around the world.”

“Although the decision is currently limited to the U.S., the attack expressed by Meta’s CEO, Mark Zuckerberg, regarding what he called ‘secret courts’ that promote censorship of the platform in Latin America — a false claim — indicates that Brazil is a key focus of the company’s concerns,” Nalon said.

In the same announcement outlining the changes to its U.S. fact-checking program, Meta said it would also make adjustments to its algorithm. One of those changes is a reversal of its 2021 decision to reduce the amount of political content users see.

Those changes to the algorithm may have a bigger impact on users’ feeds than any change to Meta’s fact-checking programs, according to Joshua Tucker, a professor of politics at New York University and co-director of the university’s Center for Social Media and Politics. Adjustments to Meta’s ranking algorithm could affect whether fact-checked content even reaches users.

Allen said she is more worried about Meta’s new leadership quietly implementing policy changes like adjustments to the algorithm than she is about the fact-checking changes. Kaplan, the author of Tuesday’s announcement and a longtime Republican lobbyist, was appointed to his position last week.

“There are different signals that they use to kind of gauge whether content is high quality or informative,” Allen said. “They might roll back some of those safeguards that are not as well-publicized as fact-checking but are maybe doing some of the heavy lifting of reducing the spread of harmful content.

“If there’s those changes that are happening under the hood under this new leadership, that’s something we wouldn’t always know about, but I think that could be a bigger, more meaningful shift in terms of what people are actually seeing.”

Angela Fu is a reporter for Poynter. She can be reached at afu@poynter.org or on Twitter @angelanfu.