In just over four months, at least 50 different journalists or politicians have declared this a “post-truth” era.
The central premise of this argument: Voters have become so blinded by partisanship that they reject facts that contradict their own beliefs. To prove this, most analysts have pointed to polling or research that raises legitimate questions about the impact of fact-checking (most, but not all: one relied on anecdotes about his Uncle Lenny).
One of the most important pieces of research on the relationship between facts and partisan beliefs was published in 2010 by Brendan Nyhan, now at Dartmouth College, and Jason Reifler, now at the University of Exeter.
Among other things, Nyhan and Reifler found that “conservatives who received a correction telling them that Iraq did not have [Weapons of Mass Destruction] were more likely to believe that Iraq had WMD.” Rather than improving understanding, fact-checking reinforced the mistaken belief. Nyhan and Reifler dubbed this the “backfire effect.”
A new paper, however, suggests the “backfire effect” may be a very rare phenomenon. The study, presented at the annual meeting of the American Political Science Association this summer, was conducted by Ethan Porter at George Washington University and Thomas Wood at Ohio State University.
Porter and Wood showed 8,100 subjects corrections to claims made by political figures on 36 different topics. Only on one of the 36 issues (the misperception that WMD were found in Iraq) did they detect a backfire effect. Even then, a simpler phrasing of the same correction led to no backfire. They conclude that “by and large, citizens heed factual information, even when such information challenges their partisan and ideological commitments.” (Nyhan and Reifler have touched on these new findings in a literature review.)
Google Scholar notes that the 2010 Nyhan/Reifler study has been cited a healthy 462 times; the backfire effect has informed analysis of how journalists should correct misperceptions, including on these pages.
The existence of the “backfire effect” isn’t just a research opportunity for political scientists. It is a question that goes to the very heart of how public debate is conducted. Below is a transcript — edited for clarity — of a Q-and-A conducted with Porter and Wood about the findings of their study.
What led you to study the “backfire effect”?
Porter: There’s been this common perception that we are in a “post-truth” political era; that the American public is immune to factual information about politics given the intensity of their partisan commitments. The original backfire study has always stood out to us as the furthest end of this particular point of view.
Originally, when Tom and I designed this study, we anticipated identifying backfire effects across the political and ideological spectrum. We thought we would see which issues would prompt liberals to backfire and which would prompt conservatives to backfire. But as time went by and we conducted study after study, we found that no one was exhibiting backfire on any issue.
Wood: Ethan and I were in grad school when the Nyhan/Reifler paper came out, and for the first time we were reading this hugely provocative claim that factual interventions can compound factual ignorance, which has all sorts of implications for democracy and the marketplace of ideas. It was such an important claim that we were both drawn to do work in this area.
Do your findings offer any evidence that we inhabit a “post-truth” world?
Wood: For decades and decades — since at least “The American Voter” — we’ve been familiar with the idea that we are differentially responsive to facts. That is, American voters know which group of facts tends to cohere with their side’s political interests and which tends to cohere with their opponents’ interests.
We definitely see this in our sample’s responses. People are extra happy to adopt a factual correction when they have the opportunity to contradict the other team. And that’s a very reliable finding. We’re not suggesting that we cast aside decades of research on motivated reasoning and people seeking out facts that are consistent with their ideological commitments.
But we have definitely not found any consistent evidence of factual backfire despite months of work on thousands of subjects. By and large, folks across the political spectrum were happy to move, at least some of the way, consistently with a factual intervention.
Porter: I think one way to caricature the current consensus is that people are resistant to factual information and we live in a “post-truth world” where empirical claims have no validity. But Tom and I would be equally skeptical of the caricature that could result from our paper, which would view Americans as flawless, rational machines that just incorporate new facts magically.
That is not the case. There’s still differential responsiveness, people still have their political beliefs. It’s just that the picture may not be as dire as is commonly painted.
You chose to concentrate on correcting implicit information in political claims rather than explicit information. The difference between what a politician literally says and the context he or she may be alluding to sometimes makes fact-checking controversial. Why did you make this choice?
Wood: We had to be expansive and include corrections of implications because we wanted to test so many different issues. We found cases where the clear logical implication of a statement was to give a faulty understanding of some factual circumstance, even if the statement didn’t literally make a false claim.
Critics of the paper often brought to our attention the statement we used for Hillary Clinton on the level of gun violence. She talks about an “epidemic of gun violence” that “knows no boundaries.” This is clearly a strange way to talk about a time series that’s decreased by half over the last 20 years.
That said, we have separately tested literal corrections and there is no difference in factual responsiveness.
Porter: Just to get back to the genesis of the paper: When we were identifying issues around which we could design corrections, we did so in the supposition that we would find backfire. This amazing thing happened where we didn’t find it at all.
Wood: And we went to incredible lengths to find backfire. We were doing these experiments at the height of the presidential primaries and in the middle of the debates. We were taking the most controversial people and the most controversial statements.
We tested allusive statements, thinking that the easiest way to trigger backfire would be to correct a co-ideologue over an allusion rather than a literal statement. The fact that the pattern remained the same speaks to the elusiveness of the backfire effect.
What is it, then, about WMD in Iraq that may have generated the backfire effect in 2010?
Porter: The Iraq War may have been a singular event in American political life: an issue that generated intense feeling, endured for years, and was driven by an intensely partisan presidential administration in the midst of increasing political polarization.
Maybe that issue was just unusually or even uniquely capable of generating backfire. If that’s true then it just goes to show that backfire is not a generalizable behavioral or attitudinal phenomenon.
Wood: The first thing that I would say is that the same way Ethan and I encourage skepticism of our work, we should also have a modicum of skepticism about the original backfire effect.
We were able to replicate backfire with a convoluted survey item; we weren’t able to replicate backfire on this WMD issue with a simpler survey item. Perhaps a simpler question and a less idiosyncratic sample than undergraduate students wouldn’t have led to observing backfire even in 2004 on the issue of WMDs.
What, if any, lessons should fact-checkers draw from your work?
Wood: I would just say: Don’t place facts on too high a pedestal. They are only one component of the way average people come to hold political preferences. For instance, we didn’t see any differences in policy preferences between corrected and uncorrected groups.
So there will still be voters holding strange policy preferences that are in contravention of factual circumstances. We can make the factual intervention, and they’ll cohere with the facts, but they may still have the preferences they had beforehand. So there’s still a ton of work for fact-checkers to do in improving factual awareness. But they should know that that is not the end of the debate.
Porter: I think that’s a really important point. What our work shows is that people do accept new information, but we have no evidence that this then affects their downstream policy attitudes.
The authors of the original study shared your findings even if they didn’t align with theirs. To what extent were they involved in the design of the study and how did they react?
Porter: Nyhan and Reifler have been great. We read nightmare stories of senior scholars rejecting critiques or challenges made by junior scholars. They have been nothing but models of what you’d want a senior scholar to do when their prior work is challenged. They have been great. They weren’t involved at all in the design of the paper though — we sent it to them after we’d completed the study.
Wood: The headline is: They’ve been amazing. As soon as the paper was submitted to the panel that Nyhan was chairing at the American Political Science Association, it was accepted. He gave us fantastic notes, and he’s been incredibly supportive of the work. As you said, his 19,000 Twitter followers have been shown this paper. He has been a model of academic openness on this question.
Is it accurate to summarize your findings in the headline “Voters are resistant, but not immune, to factual correction”?
Wood: I think that’s half the finding. The other half is… I’m hesitant to use the word “unicorn,” but there’s something about backfire… I have seen so many videos of people making a soufflé and I have tried for years but I cannot get a soufflé to rise.
Backfire is not quite like that, but it does feel like something that you should write to your friends and be excited about when you observe it. Backfire is very unusual, and I don’t think it should be something that affects the way fact-checkers work.