May 30, 2019

Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here.

The Pelosi fake: Some basic facts

Fact-checkers and other players in the truth-telling business have been preoccupied in the past week by an altered video of Nancy Pelosi.

The story, originally covered in The Washington Post, involves a slowed-down video designed to make Pelosi, the speaker of the United States House of Representatives, appear to slur her words and struggle to speak. The implication is that she was either drunk or otherwise impaired.

The manipulated video, a smear posted to a Facebook group called Politics WatchDog, was spread millions of times across the platform and even tweeted by President Donald Trump’s personal lawyer, Rudy Giuliani. He later deleted the tweet but appeared to defend it in a subsequent one.

The effect of such fakes is hard to quantify. Pelosi is a powerful politician who can probably shrug it off, although, as BuzzFeed News noted, fakes like this will likely never go away; officials like her are targeted by them online worldwide. Moreover, the video’s most receptive audience will be people who want to believe it’s real, who don’t care and spread it anyway, or who simply don’t know any better.

The level of acrimony over the video, however, suggested this one was different. Why?

The video fed into a cauldron of issues already bubbling away at the intersection of politics, social media and misinformation: The virality of the meanest kinds of content, Facebook’s unsatisfying (to many) response, the video’s amplification in Trump world and the fact that it was so easy to make — and spread.

In other words, it’s a complex brew that is itself subject to misinformation. As such, we’re providing here some answers to basic questions about the episode.

Facebook has a partnership with fact-checkers to check stuff like this. Didn’t it work?

Actually, the system worked as it is supposed to. Under Facebook’s partnership with independent fact-checking sites, once a post is rated as false, its future distribution in News Feed is decreased, a fact check is appended below it and users who try to share it are warned that it has been debunked.

After five fact-checking sites confirmed that the video was manipulated, the post was labeled with a notice that there was “additional reporting” from fact-checkers, along with links to their articles.

(Related: How Facebook deals with misinformation, in one graphic)

One question is whether that happened quickly enough, and whether this video manipulation was so blatant that Facebook even needed to rely on the fact-checking community, whose processes necessarily take time, to make its warning. One potential way to address that, suggested by our former colleague Alexios Mantzarlis, would be a rapid-response team at Facebook that takes action quickly when posts reach a certain engagement velocity.
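To make the “engagement velocity” idea a bit more concrete, here is a minimal, hypothetical sketch of the kind of trigger Mantzarlis is describing: flag a post for rapid human review once its recent share rate crosses a threshold. The data structures, threshold and time window below are assumptions for illustration only, not anything Facebook has described.

```python
# Hypothetical sketch of an engagement-velocity trigger: if a post picks up
# shares faster than some threshold, queue it for rapid human review.
# The threshold, window and Post structure are illustrative assumptions.
from collections import deque
from dataclasses import dataclass, field
from time import time

SHARES_PER_MINUTE_THRESHOLD = 500  # placeholder value
WINDOW_SECONDS = 600               # only look at the last 10 minutes


@dataclass
class Post:
    post_id: str
    share_timestamps: deque = field(default_factory=deque)

    def record_share(self, ts: float | None = None) -> None:
        """Record one share event (defaults to the current time)."""
        self.share_timestamps.append(ts if ts is not None else time())

    def shares_per_minute(self, now: float | None = None) -> float:
        """Share rate over the recent window, in shares per minute."""
        now = now if now is not None else time()
        # Drop shares that fell outside the window.
        while self.share_timestamps and self.share_timestamps[0] < now - WINDOW_SECONDS:
            self.share_timestamps.popleft()
        return len(self.share_timestamps) / (WINDOW_SECONDS / 60)


def needs_rapid_review(post: Post) -> bool:
    """True if the post's share velocity crosses the review threshold."""
    return post.shares_per_minute() >= SHARES_PER_MINUTE_THRESHOLD
```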

“Additional reporting” seems like pretty vague language for such a blatant fake. What’s up with that? Why not just take it down?

Facebook’s answer to that came from Monika Bickert, vice president for product policy and counterterrorism, under questioning from CNN’s Anderson Cooper. She said the company deals with misinformation primarily through its partnership with fact-checkers.

“We think it’s important for people to make their own informed choice about what to believe,” she said.

Bickert made a distinction between this kind of content and a “riot or threat of violence,” which would require immediate removal. She also noted that the conversation on social media had turned to the fact that the video was manipulated — not what it says about Pelosi. Others see it differently. As The New York Times’ Charlie Warzel wrote this week, “the dominant political narrative of the past two days has focused squarely on Speaker Pelosi’s health.”

Pelosi, for her part, said Wednesday that the company was “lying to the public” by not taking down the video.

Regardless, there’s clearly a question here of whether Facebook should change the language it uses to label posts rated as false by fact-checkers. As Casey Newton wrote in his newsletter for The Verge on Tuesday, the company could have written a warning about the Pelosi fake in plain English: “This video has been distorted to change its meaning.”

Why is this even the platforms’ problem? If users post stuff that then goes viral, is that the platform’s responsibility?

There is an emerging debate now about the degree to which social media platforms should be regulated for content posted by third-party users. This is not likely to result in changes any time soon, given the divided government and wide disagreement over regulation. And, as Daniel wrote yesterday, any solutions should involve multiple stakeholders — not just media companies and Silicon Valley.

Meanwhile, Facebook’s efforts to remove fake accounts and its partnership with fact-checkers are among the actions the company has voluntarily taken to curb the spread of misinformation.

Is there a larger lesson in this whole episode for journalists and media companies? For example, should they replay the doctored video, even if it’s just for comparison purposes?

The “amplification” question is a good one, and we think this episode will be closely studied as an example of the tough questions the press faces in bringing attention to this kind of misinformation. The original Post story on the video may have broadened its reach, but at the same time, it heightened public awareness of the kinds of misinformation millions of people are exposed to.

The question is whether sunlight in cases like this is a disinfectant — or a propellant. At least on Twitter, early data points to the latter in this case.

. . . technology

  • Last week, Facebook published an update on how well it has been enforcing its community standards. In it, the tech company reported that it had removed more than 2 billion fake accounts between January and March. But BuzzFeed News wrote that there are still more active fake accounts on the platform than ever before.

  • Speaking of which, CNN reported on how an influence campaign used fake Facebook and Twitter accounts to push pro-Iranian talking points in the U.S. The operation even got letters published in several major newspapers.

  • Wired took stock of the current state of deepfake videos and why they’re still relatively easy to spot. But recent advances in the technology that powers deepfakes could make them easier to create, and fact-checking (or perhaps smarter cameras?) is offered as the solution.

. . . politics

  • Last month, we wrote in this newsletter that Sri Lanka’s shutdown of social media channels in an attempt to cut down on misinformation didn’t actually work. This week, Agence France-Presse wrote a story confirming that. But Indonesia made a similar move last week amid post-election unrest.
  • Parliamentary elections were held in the European Union last week, and they weren’t as rife with misinformation as some predicted. But the BBC still found numerous examples of false or misleading videos that gained wide reach on social media.
  • Also on the election front: 2020 U.S. presidential candidates don’t really know what to do about misinformation, Mother Jones reported. Candidates are now “forced to make a lightning-fast decision of whether or not to respond to a political smear,” it wrote.

. . . the future of news

  • In a comprehensive piece about how widespread internet access is changing the African continent, CNET’s Daniel Van Boom writes that there are also problems, including misinformation. Africa Check said it increasingly spends its time debunking false health information.

  • The BBC interviewed a Macedonian woman who says she was hired to create semi-plagiarized copies of articles originally published in right-wing publications in the U.S. “When I got the call and Marco explained what kind of news site it is, that’s the moment I realized I was going to work for fake news,” she told the network.

  • Turkish fact-checking site Teyit launched a WhatsApp sticker pack for its readers to use when calling out misinformation in their messaging groups. The idea is that stickers make it a little less combative to tell someone that they’re sharing bogus content. In Spain, Maldito Bulo has been using a similar sticker pack.

Each week, we analyze five of the top-performing fact checks on Facebook to see how their reach compared to the hoaxes they debunked. Read more about this week’s numbers, and how fact-checkers’ work stacked up to that altered Pelosi video, on Poynter.org.

  1. Teyit.org: “Photo allegedly shows Ekrem İmamoğlu drinking water during the month of Ramadan” (Fact: 29.9K engagements // Fake: 1.1K engagements)
  2. Factcheck.org: “Photo Shows Woodstock, Not a Trump Rally” (Fact: 23.7K engagements // Fake: 2.8K engagements)
  3. Full Fact: “There is no evidence to suggest Nigel Farage was a member of the National Front.” (Fact: 1.3K engagements // Fake: 2.1K engagements)
  4. Agence France-Presse: “No, journalist Nicholas Casey of the NYT does not appear in this image” (Fact: 899 engagements // Fake: 1.6K engagements)
  5. PolitiFact: “Viral video of Nancy Pelosi slowed down her speech” (Fact: 577 engagements // Fake: 88K engagements)

Last week, elections wrapped up in India. Misinformation has plagued the world’s largest democracy for the past year, including hoaxes about a terrorist attack, bogus voter fraud claims and even public lynch mobs.

But not all of it has been doom and gloom.

Boom Live debunked a video of a man throwing dollars down onto a street full of people, purportedly to celebrate the growth of markets following Prime Minister Narendra Modi’s reelection. According to the accompanying Facebook posts, the video was shot in Canada and the man in question was Gujarati, a member of an ethnic group from western India.

But that’s false, Boom reported — the video actually depicts a Detroit-based musician making it rain in Manhattan earlier this month.

What we liked: Election-related misinformation can seem really serious (and it is!), but there are also a lot of junk social media posts out there that play on lighter emotions. Boom debunked this one masterfully, doing a reverse image search on screenshots of the video and analyzing comments on Instagram posts to track down its origin.
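For readers curious about the mechanics, pulling still frames out of a suspect video is the usual first step before running a reverse image search on it. Below is a minimal Python sketch using OpenCV; the file name and one-frame-per-second sampling rate are placeholders, and this illustrates the general technique rather than Boom’s actual workflow.

```python
# Minimal sketch: sample still frames from a suspect video so they can be
# uploaded to a reverse image search engine (e.g. Google Images or TinEye).
# Assumes OpenCV is installed (pip install opencv-python); "suspect_video.mp4"
# and the one-frame-per-second sampling rate are illustrative placeholders.
import cv2


def extract_frames(video_path: str, every_n_seconds: float = 1.0) -> list[str]:
    """Save one frame every `every_n_seconds` and return the saved file names."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    step = max(1, int(round(fps * every_n_seconds)))

    saved, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            name = f"frame_{index:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1

    capture.release()
    return saved


if __name__ == "__main__":
    print(extract_frames("suspect_video.mp4"))
```

The saved JPEGs can then be uploaded to a reverse image search engine to look for earlier appearances of the footage, which is how mislabeled videos like this one are typically traced back to their source.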

  1. The Australian profiled Jessikka Aro, the Finnish journalist who has documented “fake news” coming out of a Russian troll factory in St. Petersburg, Russia. The trolls, it wrote, “were not pleased.” She regularly receives death threats, she told the paper.
  2. Twitter has started showing users more ads. And Craig Silverman at BuzzFeed News found that “one malicious campaign used false articles about Drake and the Weeknd to promote casinos.”
  3. Media Matters, which monitors misinformation from the American right, said this week that conservative and conspiracy sources have dominated abortion-related coverage on Facebook.
  4. Full Fact is hiring a head of product to help oversee the British fact-checker’s growing automated fact-checking team.
  5. Radio-Canada has launched a team focused on debunking misinformation.
  6. After a political scientist made up a Trump quote on Twitter to make a joke, several journalists fell for it. And the president himself noticed.
  7. A Washington Post columnist made a succinct argument for why the platforms need to do more to combat anti-vaccine misinformation: “The outbreak of misinformation online is facilitating literal outbreaks of disease.”
  8. Deutsche Welle profiled Congo Check and the work it’s doing to combat misinformation on social media.
  9. The Los Angeles Times has a fun Q-and-A with Daniel Dale, the Toronto Star fact-checker who’s been counting Donald Trump’s falsehoods.
  10. The International Grand Committee on Big Data, Privacy and Democracy had subpoenaed Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg to come to Ottawa to talk about the spread of disinformation and hate on their platforms. They didn’t show, prompting tough questions for the representatives who showed up instead.

Until next week,

Daniel and Susan
