On December 15, 2016, Facebook made an unexpected decision.
A mere month after CEO Mark Zuckerberg ridiculed the suggestion that fake news was endemic on his platform, the company announced it needed debunking help. So it turned to independent fact-checking organizations.
The premise was promising: Independent fact-checkers would get access to a dashboard on Facebook where they could see which posts users were flagging as potentially false. They would fact-check those posts and, if one proved false, its future reach in News Feed would be reduced, a fact check would appear under related articles and users who had shared it would be notified.
Facebook’s rapid public pivot likely followed a similarly accelerated internal process — which made for a rocky start to the fact-checking partnership.
“There wasn’t enough planning that went into the project when it was announced in December of 2016,” said Eugene Kiely, director of Factcheck.org — one of Facebook’s first partners — in an email. “The method of notifying fact-checkers of suspicious content was primitive and not particularly effective. We didn’t even get any funding for the project until mid-2017.”
“However, there have been tremendous improvements over time and more changes are in the works, so at this point, it is a very valuable and effective partnership.”
Since launching this project, Facebook has made it a cornerstone of its fight against misinformation. Both Zuckerberg and COO Sheryl Sandberg have mentioned it in Congressional testimony. It has expanded to 35 partners in 24 countries. And fact-checkers say it’s helped them find claims to check, with some estimates finding that there is less misinformation on the platform now than two years ago.
But there’s still much to learn about how Facebook’s fact-checking project has worked out in practice. (Disclosure: Being a signatory of the International Fact-Checking Network’s code of principles is a necessary condition for joining the project. IFCN Director Alexios Mantzarlis also helped launch the project.)
While the company has shared more details about the partnership, detailed data on its results have not yet materialized (despite our hopes). To get a better sense of how Facebook’s fight against misinformation is faring, we surveyed 19 of its fact-checking partners, analyzed some of the top Facebook stories of the year and reached out to more than 35 academics interested in Social Science One, a partnership that will give researchers access to Facebook data on the fact-checking initiative.
What we found is that, while fact-checkers generally agree the project has been a net positive, there’s still a lot of work to do. (This echoes a statement released by five fact-checking partners on Thursday.)
What fact-checkers think
The fact-checking organizations with access to Facebook’s fake-spotting dashboard are a varied bunch. They range from international newswires like Agence France-Presse to nongovernmental fact-checking organizations like Chequeado in Argentina.
In all, 19 of Facebook’s current fact-checking partners responded to our anonymous survey, just over half the total number. We do not presume that they are representative of the full group, but their responses provide a previously under-reported look at how fact-checkers view their work on the social network.
Responses indicate that fact-checkers have flagged tens of thousands of links to false or misleading content and are fairly satisfied with the relationship as a whole — but don’t think it has been a game-changer. And there’s a broad consensus among them that Facebook should share more information with the public.
The number of links flagged as false varies widely by fact-checker, ranging from fewer than 50 to more than 2,000. This is, in part, a reflection of how long each partnership has lasted, with some fact-checkers working with the tool since 2016 and others onboarded in the past few months.
If the numbers hold across all fact-checkers and not just those surveyed, then we estimate that anywhere between 30,000 and 40,000 links to false content — possibly many more — have been flagged as part of the partnership. While this figure is a tiny fraction of the total content shared on Facebook, it would provide plenty of data to measure how fact checks affect the spread of corresponding falsehoods on the platform.
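For the curious, the back-of-the-envelope math behind that estimate looks something like the following Python sketch. The per-partner counts are illustrative assumptions spanning the survey’s reported range (fewer than 50 to more than 2,000 links each); they are not the actual survey responses.

```python
# Back-of-envelope extrapolation of total flagged links across all partners.
# The per-partner counts below are ILLUSTRATIVE ASSUMPTIONS consistent with
# the survey's reported range (fewer than 50 to more than 2,000 links each);
# they are not the actual survey responses.
surveyed_counts = [
    40, 100, 200, 300, 500, 600, 700, 800, 900, 1000,
    1100, 1200, 1300, 1400, 1500, 1700, 1900, 2100, 2500,
]

total_partners = 35                  # partners in the program at the time
respondents = len(surveyed_counts)   # the 19 survey respondents

# Scale the surveyed total up to all partners, assuming respondents are
# roughly representative: the article's explicit "if the numbers hold" caveat.
surveyed_total = sum(surveyed_counts)
estimate = surveyed_total * total_partners / respondents

print(f"{respondents} respondents flagged ~{surveyed_total:,} links")
print(f"Extrapolated total across {total_partners} partners: ~{estimate:,.0f}")
```

With these assumed counts, the extrapolation lands at roughly 36,500 links, inside the 30,000-to-40,000 range above; different assumed distributions shift the figure, which is why the estimate is hedged with “possibly many more.”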
When asked why they joined the partnership, fact-checkers offered a variety of reasons. For many, it was an opportunity to meet audiences where they are and to curb misinformation in a way that aligned with their mission. The financial incentive was also attractive.
Judged by their own objectives, fact-checkers appear moderately satisfied with the partnership, rating it on average a 3.5 out of 5. If this were a Yelp review, the restaurant wouldn’t be a must-eat but also not somewhere you would risk food poisoning.
They appear similarly satisfied (3.5 out of 5) with the payment they receive from Facebook for their work — while precise amounts are not generally public and vary across partners based on the work done, Factcheck.org disclosed receiving a palindromic $188,881 from Facebook in fiscal year 2018.
Fact-checkers are less convinced that the partnership has helped their organizations find claims that they otherwise wouldn’t have surfaced as rapidly (3 out of 5). And they are uncertain about whether it has helped them reduce the reach of viral hoaxes (2.9 out of 5), which is a central plank of the social network’s communication about what the partnership should achieve.
The partners’ sharpest criticism remains that the company is not telling the public enough about how the partnership works. On average, agreement with the statement “Facebook provides enough information about this partnership with the public” was a measly 2.2 out of 5.
One fact-checker noted that Facebook “should do a better job in telling us and the public how they have used our work to punish bad actors on the platform.”
Others hope that Facebook will expand the partnership to WhatsApp, the encrypted messaging app it acquired in 2014. That platform has been dogged by misinformation around the world, particularly in Brazil, India and Nigeria.
“Fighting misinformation is an ever-evolving problem and takes a multi-pronged approach from across the industry,” said Meredith Carden, head of news integrity partnerships at Facebook, in an email to Poynter. “We are committed to fighting this through many tactics, and the work that third-party fact-checkers do is a valued and important piece of this effort — we love collaborating with them in our shared goal.”
Data, data, data
For most of the duration of their partnership with Facebook, the only figures fact-checkers could point to when it came to their impact were “80 percent” and “three days.”
The first is the average decrease in reach of a post once it’s flagged as false by a fact-checker (Facebook confirmed in an email to Poynter that the figure is still accurate). The second is how long that process takes on average. Both figures were obtained by BuzzFeed News in October 2017 from a leaked email.
That dearth of information has long dogged both the relationship between Facebook and its fact-checking partners and the public perception of the project.
This time last year, fact-checkers told Poynter they were concerned by the lack of transparency from Facebook about how their work has affected the spread of misinformation on the platform. At the Global Fact-Checking Summit in June, product manager Tessa Lyons promised the company would do better.
And recently, fact-checkers started getting personalized reports from Facebook that directly quantify their work.
In one such report, which Poynter obtained from one of the company’s fact-checking partners, Facebook lists several more detailed data points, including how many users received notifications for sharing false content, the proportion of users who decided not to share something once it was flagged as false and how many notifications pages received for posting misinformation.
The data relates to the work that each fact-checker submits via Facebook’s dashboard and offers a three-month view of how that work affected the spread of corresponding misinformation. (Poynter could not publish the contents of the report or who supplied it on the record.)
Still, not all fact-checkers have started receiving those reports. And no big-picture data quantifying how successful Facebook’s fact-checking project has been at limiting the spread of misinformation has been shared with the public.
A Stanford University study published in September found that user interactions with content flagged as fake news have fallen precipitously since December 2016 (other recent studies had similar findings). But according to rudimentary analyses from Poynter and BuzzFeed News, while individual fact checks do appear to limit the future reach of false posts, the aggregate picture isn’t quite as encouraging.
Facebook told Poynter in an email that it’s looking to share more statistics externally in the new year. A quick look at data from BuzzSumo, a social media metrics tool, revealed mixed results for 2018.
To see which stories were the most engaging on Facebook in 2018, we searched keywords for some of the top political events — including “Parkland” and “caravan” in the U.S., “atentado Bolsonaro” (to surface news about the Brazilian president-elect, who was stabbed during the campaign) and “gilets jaunes” (“yellow vests”) in France. We found that, while most posts in the top 10 came from mainstream news sites, misinformation, dubious satire and hyperpartisan content still broke through.
For example, the ninth most-engaging article about the migrant caravan in 2018 came from The Daily Wire and claimed that one-third of the migrants were sick with HIV, tuberculosis and chicken pox. (Snopes and PolitiFact both rated that mostly false.)
When searching for stories related to the high school shooting in Parkland, Florida, Poynter found that one false claim about survivor Emma Gonzalez broke the top 10, amassing nearly 500,000 engagements as of publication — despite PolitiFact rating it as false.
More recently, of the top 10 stories about the “yellow vests” protests in France, at least two were dubious. Adrien Sénécat, a journalist at Le Monde’s Les Décodeurs, told Poynter in an email that one of the stories was “misleading” satire and one article was republished from a hyperpartisan site that dabbles in conspiracy theories.
In Brazil, the top 10 stories about the stabbing of president-elect Jair Bolsonaro did not include blatant hoaxes, and one fact check from Boatos.org made the list.
These are rough observations, gleaned from a quick BuzzSumo search — but they indicate that misinformation with massive reach can still slip past Facebook’s fact-checking project. The prospect of a more systematic analysis of Facebook’s fact-checking partnership is on the horizon, but it has to wait for the time-intensive process of academic research.
That’s where Social Science One comes in. The project, which announced its partnership with Facebook in April, promises to publish more information about how fact-checking and misinformation function on the platform. Facebook will provide the data; academics will do the research.
This came after months of requests from the fact-checking and academic community, eager to understand if and how flagging false news on Facebook was having an effect.
Applications for Social Science One proposals closed in November. Poynter reached out to more than 35 academics interested in misinformation; those who responded and said they had submitted proposals declined to discuss them until the winners were announced.
Nate Persily, a professor at Stanford Law School who’s helping run the project, said the partnership will most likely announce the winners of this year’s request for proposals in January.
“We’re getting proposals from around the world,” he told Poynter. “This is both the beauty and the challenge of our endeavor here, which is that Facebook data, if analyzed, could answer some of the great questions of human society.”
Persily said the design of Social Science One is an answer to Facebook’s Cambridge Analytica problem, in which the private data of millions of users was used without their consent for political goals. Winning researchers will view Facebook data in a secure online dashboard, and then publish their findings — free from any NDAs or financial pressure, since Social Science One is funded by a variety of independent foundations.
“While it is inherently difficult to work with a company that is under more intense scrutiny than any other company in the world right now,” Persily said, “I have not seen them put up obstacles in our way that are motivated by image concerns.”
The way forward
When one of Facebook’s fact-checking partners, The Weekly Standard, flagged a ThinkProgress article as false in September, all hell broke loose. The dispute centered on a seemingly semantic question: How literally should people take ThinkProgress’ headline that Supreme Court nominee Brett Kavanaugh “said he would kill Roe v. Wade”?
But the debacle highlighted some important questions about the role of Facebook’s fact-checking project: What is it really for? Is it to clean up the junky viral hoaxes about sharks swimming up interstates? Or to target inaccurate information in all its guises?
Academic analysis of the tens of thousands of links that have already been flagged should at least be able to answer how the product has been used by fact-checkers to date — what content has been downgraded and to what extent. Fact-checkers see another reason to remain involved: Thanks to Facebook, they can do more work.
“The greatest benefit is having the resources to do more fact-checking,” Factcheck.org’s Kiely said. “In March, we hired a second person for the Facebook project, and at this point, we are churning out a lot of good stories that debunk misinformation on important subjects.”
The challenge now is improving the tool to weed out posts that have nothing to do with news claims, and to notify fact-checkers promptly during breaking news. Kiely said he’d like Facebook to improve its notification process so that misinformation about events like the 2020 election and mass shootings doesn’t go unchecked for long periods of time.
“We continue to add new defenses to our holistic approach, like the expansion of fact-checking to photos and videos, new techniques like similarity detection that increase the impact of fact-checking and improvements to our machine learning models that can help us detect more kinds of false content and bad actors more efficiently,” Carden said. “Still, we know this is a highly adversarial issue and will require a long-term investment to which we’re committed.”
Then there are concerns about the general ability of the project to scale to the sheer amount of misinformation on Facebook.
“I would like to see the tool continue to get more efficient at filtering the right kind of questionable items to us to fact-check,” said Derek Thomson, head of France 24’s Observers, which was among Facebook’s first non-U.S. fact-checking partners. “I have a concern about the scale of it. I think that we will always have a hard time dealing with the sheer volume of false and questionable information online, and we’re going to end up seeing armies of fact-checkers out there doing this work.”
To date, the best chance at getting an accurate picture of how fact-checking and misinformation operate on Facebook would appear to be Social Science One. And while the project has been slow to sift through proposals (Persily said it has been like “rocket speed” for an academic timetable), what next month’s winning researchers glean could change the future of the tech company’s fact-checking partnership.
“We want to make sure that we have the confidence of the public and the research community in ensuring that we do this the right way,” Persily said. “If we get it right, then it’s going to open all kinds of potential research out there. So we need to make sure that we do it right rather than doing it fast.”
In the meantime, Thomson said he’s looking forward to receiving the kinds of personalized data reports that other Facebook fact-checking partners have started to get. But until the company starts releasing project-wide data, it’s impossible to definitively measure the success of the partnership.
“Something I raise with Facebook every time we talk to them is that we would like to have a better sense of the impact that the tool is having on Facebook users,” he said. “I know it’s very hard to provide hard numbers for the number of people who are seeing related stories on an item that’s been flagged, but I would love to have an idea of the progression of the impact.”
When asked where he thinks Facebook’s fact-checking partnership will be in one year’s time, Kiely said that, for him, all eyes are on the 2020 U.S. election.
“This project would not exist if not for the flood of misinformation that circulated on Facebook during the 2016 campaign,” he said. “It would be foolish not to apply the lessons of the last two years to the 2020 campaign cycle.”