Not so many years ago, fact-checking went hand-in-hand with elections reporting and political journalism. With the rise of social media, though, fact-checkers have spent more and more time debunking online misinformation, viral memes and other hoax content.
That shift has raised an important question for those who analyze and follow the work of fact-checkers: Has online misinformation reduced the attention fact-checkers pay to elections and to government?
Professor Lucas Graves of the University of Wisconsin attempts to answer that question in his latest paper, “From Public Reason to Public Health: Professional Implications of the ‘Debunking Turn’ in the Global Fact-Checking Field,” written with colleagues Valérie Bélair-Gagnon and Rebekah Larsen.
“What practitioners call ‘debunking,’ once a minor focus, now dominates the agenda of leading outlets and accounts for the bulk of fact-checks produced worldwide, driven in part by commercial partnerships between fact-checkers and platform companies,” they wrote.
Graves is one of the foremost chroniclers of the fact-checking community, both in the United States and internationally. His 2016 book, “Deciding What’s True: The Rise of Political Fact-Checking in American Journalism,” charted the rise of U.S. fact-checkers, while his subsequent scholarship has looked at the growth of fact-checking worldwide and what it means for the globalization of journalism.
Graves had a recent online conversation with Angie Drobnic Holan, director of the International Fact-Checking Network, about his findings and their implications for the fact-checking community.
Angie Drobnic Holan: Thanks for talking with the IFCN, Lucas. I think we can all agree that online debunking has come to dominate fact-checking in a way it didn’t in our earlier years. How did you document this shift, and what years would you say it really got going?
Lucas Graves: The shift has been unmistakable to anyone who follows fact-checking, but it’s tricky to document because we don’t have perfect data. I think the most striking evidence comes from Thomas van Damme’s brilliant master’s thesis analyzing five years of worldwide ClaimReview data, which shows the spike beginning in 2017 and 2018, with debunking shooting up to about two-thirds of global output in 2020. That sample overrepresents partners in Meta’s Third-Party Fact-Checking Program, but it’s clear that 3PFC partners have driven much of the overall growth in fact-checking, especially in Asia. It says something that AFP only launched its fact-checking unit in 2017 and is the biggest fact-checker in the world today.
You see the same shift looking at how many long-standing political fact-checkers have evolved in recent years. For instance, by my quick unscientific count just now, only about 21 of the last 100 fact checks on PolitiFact focus on public figures. It’s the same story at Full Fact, Africa Check, and others — debunks dominate the day-to-day output, though that’s partly because they can be published much more quickly.
Holan: What did fact-checkers tell you about the pros and cons of platform partnerships? And did you get any sense of the roles that different platforms played, such as Meta vs. YouTube vs. TikTok?
Graves: Fact-checkers have a very nuanced view of these partnerships. Nobody is under any illusions about what motivates globe-spanning tech firms to work with them, and as a rule fact-checkers want more transparency about what the algorithms are doing with their work and what the long-term impacts are. But fact-checkers from very different parts of the world all mentioned how rewarding it is to debunk some hoax and know that it has an immediate, concrete effect in making that post less visible. That doesn’t happen when you check politicians or major media outlets.
Views definitely vary when it comes to the different platform companies. Some fact-checkers are enthusiastic about working with TikTok, which is more aggressive than Meta in taking down fakes. But TikTok is even less transparent, and the partnerships can seem closer to private consulting arrangements, where fact checks in some cases are never even published. As one person told me, though this is not in the paper, “Is it still journalism if it’s B2B?” — meaning a private, business-to-business service.
People also have strong views about the level of commitment different platforms are making. Obviously there’s been concern for a long time about YouTube’s relative inaction, which prompted the open letter last year. Just last week I heard again from a veteran fact-checker how much better it is to have “meaningful cooperation,” i.e., a working partnership, than even the huge grant that Google and YouTube came up with. Fact-checkers want to partner in ways that make their work more effective.
Holan: Do you have a sense of whether politicians are getting fact-checked less as online misinformation has grown? Or has the growth of the fact-checking field meant that more of everything gets checked?
Graves: This is a really important question but a hard one to answer, because the landscape has changed so profoundly. I think it’s unquestionable that, so far, revenue from the partnerships has helped to support other activities — checking public figures and also research, media literacy, policy work, etc. But some outlets that used to do political fact-checking have abandoned it, others have made it secondary, and this fuels the perception that debunking has pulled the field away from the work that used to define it.
I think the best way to say it is that the movement’s center of gravity has shifted. Look at how much the agenda at GlobalFact has changed over the last several years — there are relatively few conversations about strategies or impacts of political fact-checking. Not none, but far fewer than four or five years ago.
Holan: Do you sense the field of online debunking maturing or changing? Is fact-checkers’ work with platforms here to stay, or is it a season in the life of fact-checking that will eventually change?
Graves: It’s clear that fact-checkers see cause for concern in some of the recent program changes, and especially in what looks like a rhetorical shift from the tech companies. The fact-checkers who depend on these contracts are all thinking about ways to diversify. This is probably the single most important question the community is facing today, as you brought up in the final session at GlobalFact in Seoul.
As I’ve heard so often over the years, including from fact-checkers in the U.S., it’s deplorable that the shape of platforms’ disinformation programs around the world depends so much on partisan fights in Washington. You would hope the benefit of well-designed programs that rely on independent expertise to help identify potentially harmful fakes and falsehoods would be self-evident to the people who use search engines and social networks, and therefore to the companies themselves.
Holan: Talking with fact-checkers over the years and attending our many GlobalFact conferences, what is your sense of the fact-checking community’s maturity? Are we a fully mature field or are we still growing up and learning lessons? Or maybe both?
Graves: What strikes me most is the sincere commitment to building and defending the community that still comes through when I talk to fact-checkers. Growing pains are inevitable, and the disagreements get sharper as the stakes get higher. That’s interesting and important, because that’s where the different visions for what fact-checking can be come out. But nobody is doing this work for selfish reasons, and everyone I speak to seems to recognize that where views differ, those are good-faith differences about the best path forward. I’m very curious to see how the regional networks develop, and how the role of the IFCN evolves along with them.
Holan: What do you think of the rapid growth of research about misinformation? What are the key insights from this field that fact-checkers should keep in mind?
Graves: Honestly, it’s rare for published academic research on disinformation to have any immediate practical relevance for fact-checkers; the incentives just don’t line up. For instance, your survey earlier this year showed there’s a real desire for research on audiences for fact-checking and on different formats, two areas that haven’t been studied much. But even where there is relevant research, a lot of additional questions typically have to be answered before it can be translated into specific steps a fact-checker can take.
I’ll give you an example. At GlobalFact we saw a preview of an interesting article that’s just been published, finding some evidence that people are more receptive to fact checks that challenge their political beliefs when the source is listed as an AI. This is original research with practical implications — perhaps fact-checkers who rely on some form of AI in their work should highlight that. But does the effect hold up outside of the United States? Could it vary depending on the subject area? What happens as people learn more about the limits of AI? As the authors note, this is a new finding that only opens the door to future studies.
I think — and I know that a lot of fact-checkers think — that the most promising avenue is research partnerships where academics and practitioners work together to define the research questions and shape the study. A good recent example is the four-country study, carried out in partnership with three fact-checking outlets, that showed (once again) that fact-checking works to increase factual accuracy.