Tuesday’s internet-shaking announcement that Meta was ending its third-party fact-checking program and replacing it with an untested crowdsourced solution was not too surprising.
I didn’t think it would come this soon, but tech leaders have been eyeing the innovative and cheap system since then-Twitter rolled out Birdwatch, now X’s Community Notes, in 2021. I’ve also been watching the platform closely and have spent countless hours digging through Community Notes data, enough to determine that crowdsourced fact-checking, in the form being proposed, doesn’t work.
Based on Meta’s press release, it appears Meta’s system will closely mimic X’s, but seemingly without the one redeeming quality of total transparency. So I can only guess that the same flaws that plague Community Notes will now replace the tried-and-tested fact-checking that was happening on Facebook and Instagram.
I’ve waded through enough of the endless fields of generative AI slop on both platforms to know that this change is a scary prospect.
Based on my three years of analyzing X’s Community Notes, here’s why Meta’s plan is doomed to fail:
- The algorithm used to pick which “fact checks” appear on posts requires agreement from “a range of perspectives.” In a hyperpolarized world, it’s nearly impossible to get two sides to agree on anything, let alone on facts that debunk political misinformation. On X, less than 9% of proposed notes end up with that agreement (a rough way to check this against X’s public data is sketched after this list). And very, very few of those address harmful political and health misinformation. The scale promised by crowdsourced fact-checking is a mirage.
- Many Community Notes, both proposed and published, contain misinformation themselves. And my analyses have found that users are very bad at flagging posts that are actually fact-checkable (they largely tag opinions or predictions) and use biased sources, or other X posts, to support their findings.
- While some research has shown that people trust crowdsourced fact checks and that the idea has promise, it’s still very much an experiment. An analysis I did with Alexios Mantzarlis, director of the Security, Trust & Safety Initiative at Cornell Tech, showed that Community Notes was ineffective on Election Day. It’s irresponsible to roll out a product like that, “over the next couple of months,” on platforms as massive as Facebook and Instagram.
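For anyone who wants to poke at the numbers themselves, X publishes the raw Community Notes data as downloadable TSV files. Here is a minimal sketch of how one might estimate the share of proposed notes that reach “Helpful” status; the file name and column values are assumptions about that public export, not a reproduction of my own analysis.

```python
# Rough sketch: estimate what share of proposed Community Notes are currently
# rated helpful, using X's public data download. The file name and the
# "currentStatus" values below are assumptions and may need adjusting to
# match the actual export.
import pandas as pd

# Assumed filename from the public Community Notes data download.
status = pd.read_csv("noteStatusHistory-00000.tsv", sep="\t")

# "currentStatus" is assumed to hold values such as CURRENTLY_RATED_HELPFUL,
# CURRENTLY_RATED_NOT_HELPFUL, or NEEDS_MORE_RATINGS.
total_notes = len(status)
helpful_notes = (status["currentStatus"] == "CURRENTLY_RATED_HELPFUL").sum()

print(f"{helpful_notes / total_notes:.1%} of proposed notes are currently rated helpful")
```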
Despite my criticism, I remain a big believer in crowdsourced fact-checking — but as one spoke in a real trust and safety program, which was how it was originally envisioned at Twitter.
If Meta is truly following X’s example, it will greatly exacerbate the misinformation problem on Facebook and Instagram. Take one look at your X feed today. Is it more factual than it was three years ago?
A crowdsourced fact-checking solution is only as effective as the platform, owners and developers behind it. And it appears Meta is more interested in “more speech” than it is in tackling misinformation.
Read more: Does crowdsourced fact-checking work? Experts are skeptical of Meta’s plan