A new study has found what many of us have always suspected to be true: We are more likely to accept correction from people we know than from strangers.
The study, conducted by researchers at Cornell, Northeastern and Hamad Bin Khalifa universities, looked at corrections made on Twitter between January 2012 and April 2014 to see how fact-checking is received by people with different social relationships.
The headline finding is that those who follow or are followed by people who correct their facts are more likely to accept the correction than those who are confronted by strangers.
The researchers ultimately isolated 229 “triplets” in which a person sharing a falsehood responded to a correction from a second tweeter. Corrections made by “friends” resulted in the person sharing the falsehood accepting the fact 73 percent of the time. Corrections made by strangers were accepted only 39 percent of the time.
Put simply: When we’re wrong on Twitter, we’re more likely to own up to it if someone we know corrects us.
“If there’s a common community, I think people are aware that (fact-checking) matters. If there isn’t a common community, then I think people are extra wary on Twitter,” Drew Margolin, an assistant professor at Cornell and one of the study’s authors, told Poynter. “It may (also) be the case that the high-profile nature of Twitter makes people shy away from admitting that they’re wrong.”
The study cites two earlier studies as being similar in scope: one by Adrien Friggeri, Lada Adamic, Dean Eckles and Justin Cheng about rumor cascades online, and one by Jieun Shin, Lian Jian, Kevin Driscoll and François Bar about corrections on Twitter during the 2012 U.S. election. The former found that rumor cascades (shares of fake memes and other misinformation) run deeper in social networks than ordinary reshares and can keep propagating even after being tagged as debunked, though the tag lowers the likelihood that they’re shared. The latter found that Twitter served as a useful conduit for spreading political rumors among similar groups of people, and that those rumors ultimately did not self-correct.
So what does the latest study mean for fact-checkers? Margolin said organizations should focus on making more human connections with their audiences in order to increase the likelihood that their work is well received. That could mean debunking hoaxes in private WhatsApp groups or holding face-to-face seminars with people in a specific coverage area (e.g. PolitiFact’s upcoming visits to cities like Mobile, Alabama, and Tulsa, Oklahoma).
“The idea that it’s actual people who could have a relationship with you, instead of just some sort of machine, is really important,” Margolin said. “That suggests, ‘What is the goal or intent of this correction? Who is behind this, why are they doing it?’”
Despite the positive conclusions of the study, there are a few notable limitations. For starters, it only analyzed interactions on Twitter — arguably one of the least personal social media platforms — which makes it harder to extrapolate the findings definitively. Additionally, the researchers had no way to tell whether someone was purposely ignoring a correction or simply hadn’t seen it, or how the correction affected their thinking about the subject later on.
"Rejection of a fact, the truth of a claim, was rare in a pure form and hard to meaningfully distinguish from the rejection of the social behavior being corrected,” the study reads.
“We only have cases where people say that they’re willing to state they’re wrong,” Margolin added. “We don’t really have a good model for, ‘What is my probability for sharing (fake memes) in general?’ It might be the case that I’m statistically less likely to share that fake meme again.”
He said the study’s conclusions are fairly generalizable, but that a future inquiry that could shed light on the effect of interpersonal fact-checking on social media would be an examination of a person’s tweeting habits over time after they’ve been corrected on a specific issue. While that would likely take months, if not years, it would help fact-checkers get a better sense of how community-specific corrections affect audience behavior — especially on a platform like Facebook, which keeps all its user data in one place.
Margolin said he’s also working on a study about what motivates people to share misinformation such as viral memes online.
“Caring about being accurate is not necessarily people's primary concern all the time,” he said. “If I can get a lot of likes sharing something that my friends think is cool, am I really going to think this is going to impact an election?”
As a larger point, Margolin sees this latest study as a starting point for determining to what extent fact-checking is effective in certain social contexts on specific platforms.
“The interesting question is, I think, we might have an overly ambitious view of how much fact-checking needs to accomplish,” he said. “If it gets people to think twice about spreading something … in a lot of ways that might be good enough — we don’t know.”