Fact-checking might help politicians determine which claims to drop from speeches.
That’s according to a currently unpublished study, in which Stanford University Ph.D. student Chloe Lim analyzed how fact-checking affected the rhetoric of presidential candidates in the 2012 and 2016 U.S. presidential elections.
She found that, once a fact-checking organization rated a politician’s claim false, that politician was 9.5 percent less likely to repeat the claim. The effect was especially pronounced in the 2016 campaign: Hillary Clinton was 14.5 percent less likely to repeat a debunked claim, and Donald Trump was 9.2 percent less likely.
Take that, post-truthers.
“This study shows that news organizations can affect candidate behavior and hold politicians accountable by evaluating the accuracy of what they say in public,” she told Poynter in an email.
In one example, Trump claimed twice in five speeches that Clinton wanted to cut Medicare and Social Security benefits. After being fact-checked in mid-October, he didn’t repeat the claim in any of the subsequent speeches Lim analyzed. The same held true for a cherry-picked Clinton claim that former President Barack Obama’s administration had created 15 million new private-sector jobs.
The study adds to the relatively sparse literature on the effect of fact checks on politicians, including one 2014 study from research duo Brendan Nyhan and Jason Reifler that found state legislators who were warned about the possibility of being fact-checked were less likely to receive a false rating from (Poynter-owned) PolitiFact.
“Effects of fact-checking have only been assessed anecdotally or in experiments on politicians holding lower-level offices such as U.S. state legislators,” said Lim, who has done research on American fact-checkers in the past. “This study focuses on the effects of fact-checking on presidential candidates.”
To measure the effect fact checks had on politicians, Lim gathered 374 speeches from campaign rallies, debates and conventions between August and Election Day: 67 from Clinton, 105 from Barack Obama, 77 from Mitt Romney and 125 from Trump. Focusing on that window packed the most speeches into the shortest time period, limiting the chance that other factors would confound her analysis of fact checks.
Then, she created a dataset of 292 fact-checked and 142 unchecked statements, the former derived from direct quotes covered by PolitiFact, FactCheck.org or The Washington Post Fact Checker. The latter were direct, fact-checkable quotes that fact-checkers didn’t cover.
“My hypothesis was that, if the downward trend in the probability of a fact-checked statement being made is an artifact of something that is unrelated to fact-checking, then we should be able to observe a similar trend among statements that were not fact-checked,” Lim said.
She paired each fact-checked statement with one or two unchecked ones that were similar in topic, frequency or score on ClaimBuster, a tool created by computer scientists at the University of Texas at Arlington that automatically flags fact-checkable claims. Then, Lim measured the difference in how often statements from each group were repeated in later speeches.
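In code, that matched-pair design looks roughly like the sketch below. To be clear, this is a minimal illustration under assumptions, not Lim’s actual analysis: the Statement fields, the 0.05 ClaimBuster-score tolerance and the closing two-proportion z-test (one standard way to test a gap between two rates, not necessarily the test the study used) are all choices made for the example.

```python
from dataclasses import dataclass
from math import sqrt
from statistics import NormalDist

@dataclass
class Statement:
    text: str
    topic: str
    claimbuster_score: float  # ClaimBuster's 0-1 "check-worthiness" score
    fact_checked: bool
    repeated_after: bool      # repeated in any later analyzed speech?

def match_controls(checked, unchecked, max_score_gap=0.05, k=2):
    """Pair each fact-checked statement with up to k unchecked statements
    on the same topic whose ClaimBuster scores are closest to its own."""
    pairs = []
    for fc in checked:
        candidates = sorted(
            (u for u in unchecked
             if u.topic == fc.topic
             and abs(u.claimbuster_score - fc.claimbuster_score) <= max_score_gap),
            key=lambda u: abs(u.claimbuster_score - fc.claimbuster_score))
        pairs.append((fc, candidates[:k]))
    return pairs

def repetition_counts(pairs):
    """Return (repeated, total) counts for the checked statements and for
    their matched controls, so the two repetition rates can be compared."""
    checked = [fc for fc, controls in pairs if controls]
    controls = [u for _, ctrls in pairs for u in ctrls]
    count = lambda grp: (sum(s.repeated_after for s in grp), len(grp))
    return count(checked), count(controls)

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two repetition rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))
```

A lower repetition rate among the checked statements, paired with a small p-value from a test like this, is the shape of the result the study reports.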
She found that fact-checked claims were much less likely to be repeated, and the difference was statistically significant. The study posits a few theories as to why.
First, fact-checkers track politicians who repeat false claims. A good example is The Fact Checker’s “Recidivism Watch” column.
“When candidates are repeatedly accused of lying and of refusing to correct their claims even after learning that these claims have been debunked by fact-checkers, they may lose support from voters,” the study reads. “In addition, candidates might worry that negative ratings from fact-checkers may cause donors or other political elites to withdraw their endorsements.”
Second, personality may play a role. Presidential candidates may fear being called a liar in public.
Third, it’s possible that candidates are genuinely unaware when they’re lying. In that case, they’d willingly correct themselves after fact-checkers point out their falsehoods.
Despite its optimistic findings for fact-checkers, there were some limitations to Lim’s study.
It didn’t parse out which outlets were most effective at limiting the repetition of false claims by presidential candidates, or which types of claims were least likely to be repeated, because the sample size was insufficient, particularly for The Fact Checker. It’s also possible that politicians repeated some fact-checked claims on social media or in speeches that Lim didn’t analyze.
So will the study’s findings hold up in the long term? A researcher who’s currently reviewing the article for a journal (and felt uncomfortable going on the record as such) told Poynter it’s a notoriously tough topic, but the fact that someone’s asking about it is more important than one study’s generalizability.