June 23, 2020

Global Fact 7’s second day featured a series of panels focusing on the relationship between fact-checkers and technology.

The first panel looked at how working with technology companies has affected the business model of fact-checking organizations.

The second focused on a media tagging system that 160 outlets around the world have used to highlight the most important information in a fact-check.

The third panel was focused on how to improve user experience with fact-checking labels.

The fourth was a discussion about whether fact-checkers who assess political claims can also take on debunking online hoaxes.

Here are some Day Two highlights:


Risks to fact-checkers/fact-checking at Internet scale

 

Moderator: Phoebe Arnold | Full Fact, Researcher

Panelists:

  • Eric Mugendi | PesaCheck, Managing Editor
  • Govindraj Ethiraj | BOOM, Founder
  • Gülin Çavuş | Teyit, Editor-in-chief
  • Nishant Lalwani | Luminate, Managing Director
  • Tijana Cvjetićanin | Istinomjer, Fact-checking and Research Coordinator

More than money, the five panelists agreed, the sustainability of fact-checking on the internet comes down to increased transparency from social media companies. To develop new projects and search for new funders, fact-checkers first need to better understand what social media companies need and how fact-checking affects the information ecosystem.

Nishant Lalwani, managing director at global philanthropic organization Luminate, said fact-checking organizations around the world need more data from companies like Facebook and WhatsApp to study the efficacy of their work. Multiple panelists agreed and said that the public deserves more transparency from companies, too. More data on how fact-checking policies work would benefit many stakeholders.

“What kind of market strategy do they have when we’re talking about disinformation, when do they remove content, or what kind of leader do they choose to protect or attack?” asked Gülin Çavuş, editor-in-chief of Turkish fact-checking network Teyit.

Tijana Cvjetićanin, research coordinator at Istinomjer from Bosnia and Herzegovina, said a lack of transparency around Facebook’s Third-Party Fact-Checking Program has put some fact-checkers at risk of targeted harassment.

“A lot of actors perceive you as someone who has power, while you are not really that powerful,” Cvjetićanin said, adding that Facebook could do more to protect fact-checkers from harassment.

Facebook requires its fact-checking partners to be vetted and certified as signatories of the IFCN’s Code of Principles before applying. You can read about the whole process here. Since 2018, the Fact-checkers Legal Support Initiative has been available to support fact-checkers at risk.

All the panelists acknowledged that partnering with Facebook has been a net positive for the fact-checking community. PesaCheck Managing Editor Eric Mugendi said Facebook’s third-party program has enabled his organization to expand its work teaching media literacy.

India’s BOOM founder Govindraj Ethiraj said tech tools are an important way to scale fact-checks, but that fact-checkers shouldn’t lose sight of their mission to improve the quality of information on the internet.

“That’s really what matters at the end of the day, and everyone who can be a partner in our effort to do this is most welcome,” Ethiraj said.


 

ClaimReview and MediaReview — Expanding the hidden infrastructure of fact-checks

 

Speakers:

  • Bill Adair | Duke Reporters’ Lab, Director
  • Joel Luther | Duke Reporters’ Lab, ClaimReview/MediaReview Project Manager

The second panel of the day was focused on technology and how a tagging system called ClaimReview can help surface fact-checks in apps and through search engines like Google.

For about half an hour, Bill Adair, the founder of the Pulitzer Prize-winning website PolitiFact and Knight Professor of the Practice of Journalism and Public Policy at Duke University, explained how easy it is to install and start using the system.

“ClaimReview tells the platforms what a fact-check is all about,” he said while flipping through slides that showed, for example, how YouTube has just started to add fact-checks on some of its videos.
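Under the hood, ClaimReview is a snippet of schema.org structured data embedded in the fact-check page itself, which search engines then read to surface the verdict. As a rough illustration only (the claim, names, and URLs below are invented, and the exact fields an outlet fills in may vary), the markup looks something like this, here generated with Python:

```python
import json

# A minimal sketch of ClaimReview markup, using field names from the
# schema.org/ClaimReview vocabulary. All values below are hypothetical.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/checks/moon-cheese",  # the fact-check article
    "claimReviewed": "The moon is made of cheese.",               # the claim being checked
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "A. Politician"},   # who made the claim
        "datePublished": "2020-06-01",
    },
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,          # position on the outlet's own rating scale
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # the human-readable verdict shown to users
    },
}

# Embedded in the fact-check page as a JSON-LD script block:
jsonld = '<script type="application/ld+json">%s</script>' % json.dumps(claim_review)
print(jsonld)
```

Because the markup is just a small block of structured data added to an existing article page, an outlet can adopt it without changing how it writes or publishes fact-checks.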

In the second half of the panel, Joel Luther, also from the Duke Reporters’ Lab, introduced the audience to MediaReview, a tagging system that is currently being tested by U.S. fact-checkers and aims to flag false images and videos on apps and search engines, just as ClaimReview does.

“The first draft of MediaReview was released in October. Then, it was shared with IFCN’s signatories for comments,” said Luther.

“We are testing MediaReview in the United States this summer, and we plan to share it worldwide this summer too,” added Adair.

While ClaimReview already pops up in Google Search, making it easy for users to identify what has been fact-checked and see the final rating given by a fact-checking organization, Google hasn’t yet announced whether it will use MediaReview. The fact-checking community, however, sounded excited about the idea during the Q&A session.


 

Stop signs: Does the public understand the ways fact-checkers flag and present misleading video and images?

 

Moderator: Mark Stencel | Duke Reporters’ Lab, Co-Director

Panelists:

  • Claire Wardle | First Draft News, US Director
  • David Clinch | Storyful, Head, Global Strategic Partnerships
  • Sam Gregory | Witness, Program Director
  • Maarten Schenk | Lead Stories, Co-Founder & Fact-checker

Early in this panel, First Draft News’ Claire Wardle pointed out how little is known about this topic.

“We have so little research about what these labels do,” she said. “Those of us who design labels… we tend to trust our gut.” She emphasized that platforms like Twitter, which recently gained international attention for attaching labels to U.S. President Donald Trump’s tweets, could help in this effort by sharing their data with researchers.

Lead Stories co-founder Maarten Schenk welcomed the academic community’s help after walking through the trial-and-error process by which his organization tweaked its labels.

“We don’t really have academics on staff to do audience testing or research so any offers of help with that we would greatly appreciate it,” Schenk said.

He described Lead Stories’ process of adding labels to make its fact-checks easily digestible on social media. Schenk observed anecdotally that a simple red “FAKE” label made certain readers feel attacked, so Lead Stories softened its labels to give readers more context about why content was flagged. Now, atop some Lead Stories images, users see phrases like “Out of context” or “Photoshopped.”

“That’s often the difference between, ‘oh yeah, well I’ll look into that,’ versus, ‘you damn liberal fact-checker,’” Schenk said, imitating an imagined audience member’s response.

Wardle added that the limited research available has mostly been conducted in laboratory settings in the United States, leaving fact-checkers blind to the impact of labeling in the broader global context.

The panel also discussed the different ways technology platforms are automating the way they label misinformation. Sam Gregory, program director at the nonprofit media group Witness, argued these automated solutions often apply a United States context to the global community. This, he said, has detrimental effects on human rights groups, such as those documenting war crimes in Syria.

“One of the things that has been most disturbing is seeing the repeated ways in which that footage gets taken down by automated processes,” Gregory said. He advocated for journalists to consider how this type of automation impacts communities beyond the U.S.

David Clinch from Storyful argued “context is key,” saying it’s important to show the audience how fact-checkers vetted a piece of misinformation. He said technology companies and journalists need to have symbiotic relationships, each improving the work of the other.

The elephant in the room: Fact-checking vs verification

 

Moderator: Lucas Graves | University of Wisconsin, Associate Professor

Panelists:

  • Giovanni Zagni | Pagella Politica, Director
  • Katie Sanders | PolitiFact, Managing Editor
  • Fergus Bell | Fathm, Co-Founder and CEO
  • Sophie Nicholson | AFP, Deputy Head of Fact-Checking

The fourth panel raised a controversial question in the fact-checking community: Can the same fact-checking team assess the veracity of political content and debunk phony rumors, or should each organization have two specialized groups for these different tasks?

Giovanni Zagni, director of the Italian fact-checking organization Pagella Politica, started the conversation by sharing his clear-cut opinion. A few months ago, he launched Facta, a branch of Pagella Politica dedicated exclusively to online falsehoods. Now he sees huge differences in the way his two teams work.

“Political fact-checking and online debunking demand two very different kinds of training and two very different kinds of expertise, and they reach different audiences too,” he said.

Zagni emphasized that those who follow Pagella Politica are usually willing to engage with more complex explanations than those who follow Facta and only want to know whether an image is false.

Katie Sanders of PolitiFact (United States) and Sophie Nicholson (AFP), however, went in a different direction, under the moderation of professor Lucas Graves (University of Wisconsin-Madison). For both editors, political fact-checking and online debunking are merging.

“We don’t need to split our team. We are capable of doing both. We just have to prioritize,” said Sanders, who had browsed PolitiFact’s archives on the Wayback Machine and noticed that the site has been debunking content all along.

“We are seeing an increase in the overlap,” added Nicholson. “It is common to see a claim made by a politician becoming a meme.”

Nicholson’s team has 80 fact-checkers and is active in 14 languages.

Fergus Bell, from Fathm, brought balance to the panel. He introduced himself as a “verifier” and said he has learned a lot from the International Fact-Checking Network community. The goal, he suggested, is to see more blending between the two practices.

“We need to learn more from each other. We (verifiers) don’t have a visual database, as the ones you have been able to build. And we could teach you how to deal with visual mis/disinformation. We should find more ways to work together.”

Several interesting questions came from the audience about the impact of Facebook’s Third-Party Fact-Checking Program on the rise of online debunking. Speakers praised the initiative: they said it brought attention to the importance of online misinformation, helped fact-checking organizations hire more people and helped political fact-checks reach new audiences.

 

Harrison Mantas is a reporter for the International Fact-Checking Network covering fact-checking and misinformation. Reach him at hmantas@poynter.org or on Twitter at @HarrisonMantas.

Cristina Tardáguila is the associate director of the International Fact-Checking Network and the founder of Agência Lupa. She can be reached at ctardaguila@poynter.org.
