The problem with measuring fakery on smaller platforms
Misinformation in Twitter and Facebook feeds appeared to be somewhat contained during the recent U.S. midterm elections compared to the 2016 presidential campaign. But that doesn’t mean fakery has been banished from the internet.
In the lead-up to the midterms, Facebook groups, many of which remain hidden from public view, were responsible for seeding a variety of political conspiracies. In countries like Brazil and Nigeria, the encrypted messaging app WhatsApp is a primary vehicle for misinformation. And sites like Gab provide a safe haven for people who have been banned from larger, more mainstream networks.
Around the world, misinformers are migrating to private groups, chats and fringe sites to avoid detection by journalists and tech companies. So how can fact-checkers adapt to monitor them?
Maarten Schenk of the fact-checking project Lead Stories has developed a way to track what’s being shared the most on Gab. Last month, he built a script that automatically pulls public posts into his Trendolizer platform and measures what’s going viral by adding up the number of reposts, comments and upvotes on each one. While the site is basically one big echo chamber, tracking it can prove useful for seeing how hoaxes move to other platforms.
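If you want to try something similar, here’s a minimal sketch in Python of that kind of engagement-based trending score. It assumes the public posts have already been collected into a list; the field names are hypothetical and don’t correspond to Gab’s or Trendolizer’s actual schema.

```python
# Minimal sketch of an engagement-based trending score, assuming public posts
# have already been collected as dicts. Field names ("reposts", "comments",
# "upvotes", "url", "created_at") are hypothetical, not a real platform schema.
from datetime import datetime, timedelta, timezone


def trending(posts, window_hours=24, top_n=20):
    """Rank recent posts by total engagement (reposts + comments + upvotes)."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    recent = [p for p in posts if p["created_at"] >= cutoff]
    scored = [(p["reposts"] + p["comments"] + p["upvotes"], p["url"]) for p in recent]
    return sorted(scored, reverse=True)[:top_n]


if __name__ == "__main__":
    sample = [
        {"url": "https://example.com/a", "reposts": 12, "comments": 4,
         "upvotes": 30, "created_at": datetime.now(timezone.utc)},
        {"url": "https://example.com/b", "reposts": 2, "comments": 1,
         "upvotes": 5, "created_at": datetime.now(timezone.utc)},
    ]
    for score, url in trending(sample):
        print(score, url)
```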
RELATED ARTICLE: Here’s why fighting fake news is harder on WhatsApp than on Facebook
“I basically look for what’s trending, and usually the stuff on Gab isn’t worth debunking because it’s not getting much traction anyway,” Schenk told Daniel. “It’s only once it starts jumping into these Facebook groups that it starts to explode.”
Earlier this year, researchers at the Federal University of Minas Gerais in Brazil developed a method of monitoring content on WhatsApp during the Brazilian election. The system pulled data from public political groups and displayed it in an online dashboard that fact-checkers and journalists could access.
Fabrício Benevenuto, the associate professor behind the project, said he could see it being useful in future elections, such as February’s contest in Nigeria. But that method was still pretty limited, only pulling from 347 public groups. And until WhatsApp takes more direct action against misinformation, it’s unlikely that the status quo will change.
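As an illustration of the kind of aggregation such a dashboard might run, here’s a minimal sketch in Python that counts how many public groups a given link has been shared in. It assumes the group messages have already been collected as (group, text) pairs; this is not the UFMG team’s actual code.

```python
# Minimal sketch: count how many distinct public groups each link appears in,
# assuming messages have already been collected as (group_name, text) pairs.
# This is an illustration, not the UFMG monitoring tool itself.
import re
from collections import Counter

URL_RE = re.compile(r"https?://\S+")


def most_shared_links(messages, top_n=10):
    """Return the links shared in the most groups, with their group counts."""
    groups_per_link = {}
    for group, text in messages:
        for url in URL_RE.findall(text):
            groups_per_link.setdefault(url, set()).add(group)
    counts = Counter({url: len(groups) for url, groups in groups_per_link.items()})
    return counts.most_common(top_n)


if __name__ == "__main__":
    sample = [
        ("Group A", "Check this https://example.com/story"),
        ("Group B", "https://example.com/story is being shared everywhere"),
    ]
    print(most_shared_links(sample))
```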
So tell us: How are you monitoring smaller or more closed platforms for misinformation? Email factchecknet@poynter.org with your tips and we’ll compile them into a future article.
A new podcast about misinformation
There are a lot of big questions in the ongoing battle against misinformation. Starting today, we’re tackling them in audio form.
Over the next three weeks, the IFCN is releasing a limited-run podcast about fact-checking and fake news. In each of the three episodes, we talk to fact-checkers, journalists and experts around the world to try to answer one big question about the industry. Listen to the promo today and subscribe on Spotify, Stitcher, Google Play and TuneIn (Apple Podcasts coming soon!).
This is new
- WhatsApp is releasing "Share Joy, Not Rumours" anti-misinformation ads on Indian TV ahead of state elections.
- Last week, Daniel reported that Nigeria is the next battleground for election misinformation. This week, the president there had to deny that he was a clone. (Related: Is this Washington Post tweet a little too cute? And read this by media literacy pro Mike Caulfield on the limitations of fake news “recognition” strategies.)
- The European Union is asking the platforms for monthly reports on foreign disinformation campaigns.
Show and tell
- Here’s how Comprova used WhatsApp’s business API to fact-check misinformation about the Brazilian election.
- One strategy for studying the spread of misinformation on social media: Create your own deceptive bots.
- BuzzFeed News’ Craig Silverman has some tips for how to tell if a smartphone app could be fraudulent.
The Bad Place
- U.S. Vice President Mike Pence deleted a picture he tweeted of himself with a Florida deputy wearing a QAnon patch, saying he didn’t want to amplify the conspiracy theory.
- Twitter suspended an account with hundreds of thousands of followers that had for years purported to be Russian President Vladimir Putin.
- Foreign Policy published an in-depth piece about how WeChat has become a hub for Islamophobic conspiracy theories.
LOL
- Some guy is scamming Instagram users by saying he can sell them verification. Matt Navarra called him out.
- Behold: Vladimir Trump.
- The baby’s name was Abcde, not Abdce. OK.
A closer look
- What does France’s new law against misinformation actually say — and what does it mean for the rest of the EU? Alexander Damiano Ricci delved into the measure and its implications.
- Reuters published an investigation about how an Iranian agency used at least 70 websites to disseminate propaganda in a variety of countries — including the United States.
- The Conversation published a great analysis of how far-right conspiracy theories become memes and get covered by the mainstream media.
Help us improve this newsletter
We’re revamping The Week in Fact-Checking for 2019 — including the name. Tell us what you want it to be like by filling out this survey.
9 quick fact-checking links
- The New York Times fact-checked the new Broadway show, “The Lifespan of a Fact.”
- Forbes tested out the Annenberg Public Policy Center’s new media literacy video game, which is aimed at helping children tell real from fake news online.
- These viral images of Sydney during a thunderstorm are still fake.
- Full Fact is opening its live fact-checking tools to other journalists.
- The IFCN is looking for journalism researchers or professors with a strong understanding of fact-checking to serve as assessors for its code of principles.
- That Mars sunset picture is genuine, but it’s several years old.
- More than 20 fact-checkers around the world teamed up to fact-check the G20 summit in Buenos Aires.
- Happy 15th birthday, Factcheck.org! PolitiFact founder Bill Adair also wrote a tribute to the fact-checking site.
- On Twitter, Syracuse University professor Emily Thorson asks: “Thought experiment: if Facebook required users to pay a small amount of money (let’s say 1 cent) to post a link, would we see more or less misinformation posted?” What do you think?
Until next week,