With the flood of misinformation saturating social media, there are too many claims and not enough human fact-checkers to do the tedious work required. Can automation help with fact-checking?
Fact-checking organizations around the globe have been developing systems that automate or use artificial intelligence to streamline and accelerate elements of the fact-checking process. On Friday, GlobalFact 9, an annual fact-checking conference held this year in Oslo, Norway, hosted a panel to discuss the future of automated fact-checking.
The five-person panel was moderated by Lucas Graves, an associate professor at the University of Wisconsin and author of the book “Deciding What’s True: The Rise of Political Fact-Checking in American Journalism.”
“Everybody understands that in order to fight misinformation at scale, some kinds or some forms of automation are essential,” Graves said in his opening remarks. “And that’s why so many fact-checkers themselves have played a really important role in developing and deploying and trying out these tools and technologies since 2015. At the same time, it’s sort of past time for us to have a really clear understanding of what tools are actually available and usable today.”
Panelists described the projects they are working on, each attempting to automate different aspects of the fact-checking process, from finding a claim to checking it live on TV.
Bill Adair, Duke University professor and creator of PolitiFact, began by sharing strides his team at the Duke Reporters’ Lab has made in live fact-checking. The program they developed, called “Squash,” draws on a large database of fact checks tagged with ClaimReview markup to detect what a politician says in a video, match it to a previously published fact check, and display the relevant fact check onscreen.
“It sort of fulfills the dream that a lot of us have always had of instant fact-checking,” said Adair. “You’re watching a live event, somebody says something, and you serve up a related fact check.” Adair briefly highlighted some limits, citing voice-to-text errors and the program sometimes lacking a preexisting, matching fact check.
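The core matching step can be illustrated with a short sketch. This is not Duke’s Squash code; it assumes a toy in-memory list of fact checks and a crude word-overlap score, where the real system relies on speech-to-text and far more sophisticated matching against the ClaimReview database.

```python
# Illustrative sketch only -- not Duke's Squash system. A transcribed sentence
# is compared against a tiny in-memory store of published fact checks using a
# crude word-overlap score; the best match above a threshold is "displayed."

def tokenize(text):
    """Lowercase and split into a set of words for rough overlap scoring."""
    return set(text.lower().split())

def match_fact_check(transcript_sentence, fact_checks, threshold=0.5):
    """Return the stored fact check whose claim best overlaps the sentence,
    or None if nothing clears the similarity threshold."""
    spoken = tokenize(transcript_sentence)
    best, best_score = None, 0.0
    for fc in fact_checks:
        claim = tokenize(fc["claim"])
        score = len(spoken & claim) / len(spoken | claim)  # Jaccard similarity
        if score > best_score:
            best, best_score = fc, score
    return best if best_score >= threshold else None

fact_checks = [
    {"claim": "unemployment is at a fifty year low", "rating": "Mostly True",
     "url": "https://example.org/fact-check/unemployment"},  # placeholder entry
]

# A sentence as it might arrive from a speech-to-text transcript of a live event.
hit = match_fact_check("unemployment is at a fifty year low right now", fact_checks)
if hit:
    print(hit["rating"], hit["url"])  # in Squash, this is what appears onscreen
```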
Kate Wilkinson, senior product manager at United Kingdom-based Full Fact, and Pablo Fernández, executive director of Argentina-based Chequeado, discussed a three-part tool they developed that addresses several stages of the fact-checking process. Their collaborative work, funded by a grant from Google, allowed them to explore the role of machine learning and AI in fact-checking.
First, Wilkinson said, they created a claim detection tool, “which allows users to really select the media that they want to monitor. And on a 24-hour basis, it scrapes those sentences and displays just the sentences which contain a factual claim.” A fact-checker can then filter those sentences by who said them, and by the subject of the claim.
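As a rough illustration of that detection step, the sketch below uses a keyword heuristic in place of Full Fact’s actual machine-learned classifier, and the monitored sentences, speakers, and subjects are invented:

```python
# Hypothetical sketch of the claim-detection step -- not Full Fact's trained
# model. A keyword heuristic stands in for the classifier just to show the
# shape of the pipeline: monitored sentences in, flagged, filterable claims out.
import re

def looks_like_claim(sentence):
    """Flag sentences containing figures or comparative language."""
    return bool(re.search(r"\d|percent|more than|less than|doubled", sentence, re.I))

monitored = [  # invented examples of scraped sentences with speaker and subject
    {"speaker": "Minister A", "subject": "economy",
     "sentence": "Unemployment has fallen by 3 percent since last year."},
    {"speaker": "Minister B", "subject": "health",
     "sentence": "I believe our hospitals are the envy of the world."},
]

claims = [item for item in monitored if looks_like_claim(item["sentence"])]
economy_claims = [c for c in claims if c["subject"] == "economy"]  # filter by subject
print(economy_claims)  # only the sentence containing a checkable figure survives
```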
They also built a matching tool that searches for online repetitions of claims that have already been checked. But most exciting, Wilkinson said, is their stats-checking tool, which uses AI to find relevant statistics to fact-check a claim. “If we have a tool that lets us know which claims are more likely to be false based on data that we’re accessing, we can then prioritize our attention and maybe not spend time pursuing claims that we may then decide not to fact check because they’re true.”
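The stats-checking idea can be sketched in a few lines. The “official” figure and the claim below are placeholders of my own; a real tool would query live statistical sources and handle far messier language:

```python
# Sketch of the stats-checking idea under invented assumptions: the "official"
# figure below is a placeholder, and a real tool would query live statistical
# sources rather than a hard-coded dictionary.
import re

OFFICIAL_STATS = {"unemployment rate": 4.1}  # placeholder figure, in percent

def check_numeric_claim(sentence, stat_key, tolerance=0.5):
    """Compare the first number in a claim against a stored statistic."""
    m = re.search(r"(\d+(?:\.\d+)?)", sentence)
    if not m:
        return "no figure found"
    claimed = float(m.group(1))
    official = OFFICIAL_STATS[stat_key]
    if abs(claimed - official) <= tolerance:
        return "consistent with official data"
    return f"differs from official figure ({official})"

print(check_numeric_claim("Unemployment is now at 8 percent.", "unemployment rate"))
# -> "differs from official figure (4.1)": a hint the claim deserves attention
```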
When all three work in tandem, Wilkinson says it feels like “magic.”
Chequeado spearheaded the Spanish-language side of the program, Fernández said. “Around 500 million people speak Spanish, so it is a big audience that we can reach with automation.”
Chequeado has used the prototype internally to fact-check presidential speeches, but it is not yet available for public use.
Aos Fatos, a Brazilian fact-checking site, used automation to predict “misinformation flows on social media,” said Tai Nalon, Aos Fatos’ executive director, who was also on the panel. “We are able to systematize and organize a huge amount of data regarding not only political speech but also, during the pandemic, the COVID-19 misinformation flow.”
Since 2020, Aos Fatos has produced over 50 reports about the spread of misinformation through social media in Brazil. The company is also developing a transcription program that works efficiently in Portuguese.
The final member of the panel, Rubén Míguez, chief technology officer at Newtral, a Spanish fact-checking site, led the development of Claim Hunter, an AI tool that listens to and transcribes audio and detects statements worth checking.
Claim Hunter also monitors the Twitter accounts of 400 politicians and sends an alert whenever a factual statement is shared. “We are saving 90% time in political monitoring,” said Míguez, freeing reporters to publish more fact checks. Newtral has begun sharing this technology with other reporters and is testing it in other languages.
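The monitoring loop Míguez described might look roughly like the sketch below. It is not Newtral’s code: the account handles are invented, fetch_latest_tweets() is a hypothetical stand-in for however the real system reads posts, and the claim test is a toy heuristic in place of the actual detection model.

```python
# Rough sketch of the monitoring-and-alerting loop described for Claim Hunter.
# Not Newtral's code: the handles are invented, fetch_latest_tweets() is a
# hypothetical stand-in for however the real system reads posts, and the
# claim test is a toy heuristic in place of the actual detection model.
import re

WATCHED_ACCOUNTS = ["@politician_a", "@politician_b"]  # hypothetical handles

def fetch_latest_tweets(account):
    """Placeholder: a real system would pull posts via an API or scraper."""
    return ["Public debt has doubled in the last two years."]

def is_factual_statement(text):
    """Toy test for checkable statements: figures or comparative language."""
    return bool(re.search(r"\d|doubled|halved|percent", text, re.I))

def monitoring_pass():
    """One sweep over the watched accounts; alert on checkable statements."""
    for account in WATCHED_ACCOUNTS:
        for post in fetch_latest_tweets(account):
            if is_factual_statement(post):
                print(f"ALERT {account}: {post}")  # would notify a fact-checker

monitoring_pass()  # a production system would repeat this on a schedule
```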
The session closed with each panelist sharing their concerns and hopes for the future of automated fact-checking.
Wilkinson highlighted the importance of collaboration among fact-checkers to understand common challenges and share solutions. Nalon reiterated the need for long-term funding. Míguez pushed for more bilingual technology. Fernández cited the importance of user interface and functionality for people using this new technology in their day-to-day work. Adair emphasized the need for more fact checks and brought up the challenge of reaching people.
“In most of our countries, we have a large segment of people who are not getting fact checks, who don’t want fact-checking, (and) who are resistant to truth,” he said. “And we need to think about automated fact-checking that gets to them, too.”