December 3, 2019

One big concern for fact-checkers is that we are shouting into the void. How do we know anyone is listening as we write our fact-checks?

The simple answer is to track whether anybody is shouting back. But after two weeks of researching this topic with the UK-based fact-checking organization Full Fact in their London office, I’ve found that it’s more complicated than that.

I applied for the 2019 IFCN Fellowship Program after a surreal incident happened in Kenya, in July 2019. I wrote a fact-check for PesaCheck about the amount of interest-free loans the Kenyan Ministry of Youth, Public Service and Gender Affairs had disbursed as part of their Women Enterprise Fund, which supports women-owned businesses.

At a public event, the cabinet secretary quoted an amount that turned out to be false (though it is unclear whether this was an error by the reporter who wrote the story). Soon after the fact-check was published, the Ministry tweeted a press release that clarified the issue and confirmed my article was correct.

That’s been the biggest response I’ve had to any of my fact-checks and it got me wondering about that void.

I came to Full Fact as an IFCN fellow to answer three broad questions: What makes fact-checking work? Is PesaCheck fact-checking effectively? What could be done to increase the impact of PesaCheck fact-checks?

These questions are broad in focus, so I broke them down into the actual steps to take before and after publication as we try to define the type of impact we want our fact-checking to have.

So, what did I find out?

Before clicking publish, there are a few things fact-checkers can tweak that may determine the responses, reactions and feedback a fact-check will get.

First, be deliberate about the impact goals to meet. Is it feedback from journalists or politicians? Retweets and shares on social media? Website views? Outlining these goals beforehand and identifying measurement indicators for each of them can take a huge weight off post-publication. It can be as simple as a spreadsheet that tracks the indicators each week or month. Over time, actionable trends will emerge.
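To make this concrete, here is a minimal sketch of the kind of tracker I mean, written in Python and assuming a simple CSV file. The indicator names and figures are hypothetical, not PesaCheck’s or Full Fact’s actual metrics.

    import csv
    import os
    from datetime import date

    # Hypothetical impact indicators -- illustrative names only, not an
    # actual PesaCheck or Full Fact taxonomy.
    INDICATORS = ["website_views", "social_shares",
                  "journalist_replies", "corrections_secured"]

    def log_week(path: str, week_of: date, counts: dict) -> None:
        """Append one week's indicator counts to the tracker CSV."""
        is_new = not os.path.exists(path)
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                # Write the header row the first time the file is used.
                writer.writerow(["week_of"] + INDICATORS)
            writer.writerow([week_of.isoformat()] +
                            [counts.get(k, 0) for k in INDICATORS])

    # Example: record one week's numbers; indicators not supplied default to 0.
    log_week("impact_tracker.csv", date(2019, 12, 2),
             {"website_views": 1840, "social_shares": 97, "journalist_replies": 2})

Something this small is enough to surface week-on-week trends once a month or two of data accumulates.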

Keep it simple. Delving into the research often means reading 500-page PDFs and scrolling through percentages and statistics, and it can be hard to sift through all that information to pick out what is important to readers and present it as simply as possible. But doing so is critical if people are to understand the fact-check and make informed decisions.

Headlines matter. For a fact-check, they need to be decisive, informative and clear. “Headlines should be engaging and get straight to the issue that people care about,” said Tom Phillips, Full Fact’s Editor. “In a way that headlines in other forms of journalism don’t necessarily have, [in the fact-checking world] they have to be as informative as possible within themselves.”

Instead of “Are 30% of children in Kenya suffering from malaria?” a better headline would be “No, it is not true that 30% of children in Kenya are suffering from malaria.”

Another thing to consider is rating systems. They’ve been a free-for-all: some fact-checking organizations have four categories under which their fact-checks can fall, such as true, mostly true, mostly false and false, while others have up to eight. The question is: rating system or no rating system?

Full Fact researcher Dr Dora-Olivia Vicol and research manager Amy Sippitt say that the jury is still out on rating scales. “This is an important area to understand, particularly given that other research has found that readers increasingly expect writers to adjudicate, and not simply provide a difference of opinion,” they wrote in a research paper about fact-checking the 2019 UK elections.

There’s no right answer. It is worth noting, though, that visuals can aid a fact-check but can confuse the reader if done badly.

OK, the fact-check has been published. Now what?

If the fact-check was about something that was misreported in the news, it is good to ask for a correction from the publication through its corrections submission process or by contacting the editorial team. However, do not overwhelm them with several messages a day; this could sour what should be a constructive, helpful relationship. Use your judgment on which misrepresentations are worth an editor’s note.

“Twitter is obviously a bit of a bubble,” said Ross Haig, the Head of Communications at Full Fact. “There’s a very distinct audience there so the way we approach it is not as a means to approach the widest possible audience. Instead, it’s a way for us to get quick timely bits of information out to a pool of primarily journalists and political audiences.”

Take a similar approach with statistics organizations when they publish data that is confusing and is therefore being misunderstood. With statistics it is easy to come to false conclusions, and approaching the source of the data can help clear up the confusion.

“One of the things we consistently see is that people are misusing statistics. Why is that? Sometimes it’s because the way they’re presented is confusing and it’s not surprising that someone would get it wrong,” said Beki Hill, a Policy Officer at Full Fact.

In other words, going after the root causes of misinformation is more effective than debunking individual claims one at a time.

It is good to check back in with audiences periodically. This can be as simple as adding a reader-satisfaction button at the end of articles, conducting user surveys through online forms, or holding focus groups with audience members to gauge the overall impression fact-checks are making on the public.

Finally, keep track of the impact goals and their measurement indicators. Designate a member of the team to be in charge of updating the tracker and producing a summarised report on progress made and areas for improvement. This creates an avenue for taking decisive action in impact areas. At Full Fact, Charlotte, the Operations Manager, keeps tabs on the impact numbers and makes sure they are consistently updated.
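Continuing the hypothetical tracker sketched earlier, the summarised report could start out as nothing more than a total of each indicator over the period:

    import csv
    from collections import defaultdict

    def summarise(path: str) -> dict:
        """Total each indicator column in the tracker CSV."""
        totals = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                for name, value in row.items():
                    if name != "week_of":  # skip the date column
                        totals[name] += int(value)
        return dict(totals)

    # Example: print the period's totals for the report.
    print(summarise("impact_tracker.csv"))

Again, this is only a sketch; the point is that whoever owns the tracker can produce the summary mechanically rather than by hand.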

Every step of the pre- and post-publication process counts towards how impactful a fact-check will be. Through a collaboration with Africa Check and Chequeado to research how fact-checks can be optimised for the greatest impact, Sippitt and Vicol are keen to find answers to some of the questions that have not yet been explored, and aim to offer fact-checkers more information and guidance on the issues surrounding impact.

“For us, research and impact tracking is seeking to look at what impacts we want to be having, what behaviours and systems we want to improve, and how we can be more effective in reaching those impacts,” said Sippitt. “It’s important that we learn about what’s not working well as well as what is working well, and that’s what drives our desire for evaluation,” she added.

Overall, there will always be things that cannot be quantified. For example, Full Fact received a phone call from a mother who was confused about whether or not to vaccinate her children. She told them that she ultimately decided to vaccinate because of a debunking article they had published on the subject. Unquantifiable impact can also come in the form of audience participation in the fact-checking process: during my time at their office, they received a political party’s leaflet on which an anonymous person had annotated the inaccuracies in biro.

My greatest takeaway? Corrections, not just fact-checking. As fact-checkers, we should not be shouting into the void, but actively trying to get citizens, journalists, politicians and others to shout back.
