‘Chasing ghosts’: Fact-checking ephemeral content
This week, Facebook CEO Mark Zuckerberg elaborated on what he calls the company’s “living room” strategy, the idea that the platform’s user experience could soon be more private, more closed and more “ephemeral” — posts that disappear after a certain amount of time.
The move toward ephemerality has been building since Snapchat started the disappearing act in the early part of this decade. At Facebook’s F8 developers conference Tuesday, Zuckerberg described it as part of the company’s move toward making the platform a more private and closed experience.
But what if those disappearing posts carry falsehoods, conspiracy theories or other kinds of misinformation?
Those who fight misinformation say they are concerned that the trend could actually make their jobs harder. A harmful social media post can do damage even if it lives for only a short time, and tracking it down to correct it can feel, for fact-checkers, like chasing ghosts.
“Taking the conversation to a more private and disappearing model means that journalists, researchers and law enforcement will have nothing to track down the truth or bring justice in a criminal situation,” said Aimee Rinehart, who works with the misinformation-fighting organization First Draft, via email. “While people can record a video or take a screenshot on their smartphones, we will now have to rely on the same person to want to share this with journalists, researchers and the authorities.”
Rinehart said toxic communities that spread conspiracies and false information are well-networked on closed platforms like 4chan and other forums, so they can still coordinate their message even if it is short-lived.
On the positive side, a private post might not have the viral potency of a public one, and thus would be less subject to algorithms that push it to as many people as possible and heighten whatever alarm it might generate.
Eric Goldman, co-director of the High Tech Law Institute at the Santa Clara University School of Law, said ephemeral content has a lot in common with traditional word-of-mouth content spread offline.
“Both can spread virally, but only in exceptional circumstances. Most times, they reach only limited audiences,” he said via email.
He agreed that ephemeral content, like word-of-mouth communications, can convey errors that are hard to correct, but said the limited audience can circumscribe the damage. “On balance,” he said, “I think Facebook’s move is promising because it breaks away from Facebook’s current model of rewarding sensationalist viral content.”
Virality, of course, is relative. A non-public or ephemeral post may not reach as many people around the world, but a particularly toxic one can spread through a community fairly quickly — especially if such a post is shared in a group.
On WhatsApp, that’s already happening. Misinformation spreads far and wide on the private messaging platform, which is encrypted end-to-end — meaning not even WhatsApp’s own staff can see what’s being shared where. The only way for fact-checkers to debunk hoaxes is by asking users to send them examples of potentially false information.
Fakery also spreads on more benign platforms, such as Snapchat. Last year, for example, someone doctored an image of a Miami Herald story to say a middle school was under threat of a shooting. The image, which spread among students on Snapchat, appeared a week after the school shooting at Marjory Stoneman Douglas High School in Parkland, Florida.
At MediaWise, a Poynter-run digital literacy project that aims to teach teenagers how to sort fact from fiction, editor Katy Byron said the trend is a particular concern for the audience she works with.
“These young minds are easily influenced by what they consume online. If they see misinformation re-posted on their friend’s story on Snapchat or Instagram and then it disappears — that really is no different than seeing it in a feed post they can refer to later,” she said via email. “At least in a feed post, you can comment or correct it somehow and that update is visible and traceable. But the bottom line is — both are bad.”
Meanwhile, there is also concern in the verification community about Zuckerberg’s emphasis on closed groups as a way to facilitate the more private experience for users. Such groups are often used by conspiracy theorists or other bad actors to shelter themselves from correction or fact-checking.
Tweeted NBC reporter Brandy Zadrozny: “Not to overreact, but this is terrifying. Facing criticism that Facebook allows people to organize and spread misinformation, dangerous conspiracy, and hateful ideology content in groups, Zuckerberg decides to lock it down even tighter.”
… technology
- Since launching its partnership with fact-checking sites in December 2016, Facebook has expanded the program around the world to combat the spread of misinformation. Last week, it added five new partners in the European Union ahead of next month’s parliamentary elections. That brings Facebook’s total fact-checking partners to 52 in 33 countries — more than quadruple the number at this time last year.
- Speaking of Facebook, the company has finally announced which researchers it will work with to study misinformation on the platform. In partnership with Social Science One and the Social Science Research Council, more than 60 researchers from 30 institutions around the world will be given access to previously unseen data about advertising, post sharing and URLs.
- Last week, we reported in this newsletter that cutting off social media platforms in Sri Lanka following several terrorist attacks on Easter didn’t actually limit the spread of misinformation. But a poll from Morning Consult and Politico found that the majority of American voters thought it was the right move.
… politics
- The Washington Post Fact Checker’s conclusion that President Trump had surpassed the 10,000 mark in false or misleading claims since becoming president has prompted some soul-searching in the media. CJR asked whether we are any better at calling out the president’s lies. Wrote The Post’s Margaret Sullivan: “At some point, and we’ve definitely arrived there, the number of presidential falsehoods overcomes the public’s ability to care.”
- Singapore is currently debating legislation that would punish people who post false information with “malicious intent” with fines of up to $740,000 and jail sentences of up to 10 years. In response, academics and legal experts have issued nine statements and letters opposing the bill.
- Democratic presidential candidate Pete Buttigieg this week became the target of a smear involving false allegations of sexual assault. A pair of right-wing provocateurs are being accused of attempting to recruit young Republican men to make the accusations against the South Bend mayor. USA Today has details from a Michigan man who says he was recruited for the scheme by Jacob Wohl and Jack Burkman.
… the future of news
- Fact-checking site Teyit tried to crowdsource one of its debunks during the recent election in Turkey. It learned that (a) experimenting with a new format during an election period isn’t a great idea, and (b) Twitter isn’t the best platform for crowdsourcing fact checks.
- In a study published by the Institute for the Future, researchers found that only 14.9% of American journalists surveyed said they had been trained on how to best report on misinformation. And 55% of journalists said they wanted more resources on how to cover it.
- This week, the IFCN launched a new project: a database of more than 500 websites that fact-checkers have labeled as misleading, false or otherwise unreliable. This is an ongoing effort, which we hope will be useful for journalists, researchers and advertisers alike. Learn more about our methodology here and suggest new websites for us to include here.
Each week, we analyze five of the top-performing fact checks on Facebook to see how their reach compares to that of the hoaxes they debunked. Read more about this week’s numbers, and why users are posting viral videos of themselves burning wood beams, here.
- Full Fact: “This image about vaccine ingredients is extremely misleading” (Fact: 1.9K engagements // Fake: 409 engagements)
- Agence France-Presse: “Notre-Dame: Why these videos of beams that do not burn are irrelevant” (Fact: 1.4K engagements // Fake: 55.4K engagements)
- Factcheck.org: “Viral Claim Blurs Marijuana, Gun Policies” (Fact: 1.2K engagements // Fake: 424 engagements)
- India Today Fact Check: “Viral claim comparing nominations of Rahul Gandhi and Modi is misleading” (Fact: 1.1K engagements // Fake: 33.6K engagements)
- Chequeado: “No, the photos of the owner of a bookstore hit because he refused to sell the CFK book are not true” (Fact: 769 engagements // Fake: 15.5K engagements)
The memes are all over social media: pictures of U.S. Rep. Alexandria Ocasio-Cortez (D-N.Y.) and cows, often farting ones.
A lot of these memes are harmless. Some are downright mean. Most are misleading. All are weird.
How did this get started? And what are the facts?
The Associated Press this week took the time to get to the bottom of it.
The notion that Ocasio-Cortez is somehow “anti-cow” in her pursuit of a climate change solution is a case in which a politician’s attempt to take on a serious issue got caught up in some unfortunate staff work. An information sheet about the Democrats’ “Green New Deal” apparently included language referring to cow farts, the AP wrote, as a lighthearted way to characterize the methane emitted by cattle, which contributes to climate change.
Memes are potent but difficult to check. Because they’re satire, or humor, checking them can feel overly literal and humorless. And they often have a kernel of truth — some misstep or slip of the tongue by a politician is twisted into an unending source of mockery. Ocasio-Cortez is a particular target for these kinds of attacks, given her presence on social media.
What we liked: The AP had fun with this one, for sure. But reporters Calvin Woodward and Seth Borenstein also interviewed some serious scholars — climate scientists, conservationists — to explain how methane from cows is a real environmental issue. The story also explained the political context: making jokes about cow farts is a way for some politicians to avoid a serious conversation about climate change.
- BuzzFeed News has a new YouTube show called “Trackback” in which reporters debunk and explain misinformation on the platform.
- Congratulations to Africa Check founder Peter Cunliffe-Jones — the IFCN’s new senior adviser!
- A bogus press release published on a site falsely claiming to belong to U.S. Rep. Adam Schiff (D-Calif.) actually originated in Russia, BuzzFeed News reported. But it tricked some media outlets.
- After a shooting at the University of North Carolina at Charlotte, several false claims made the rounds. WCNC, the local NBC affiliate, debunked some of them.
- The actress who played Marcia on The Brady Bunch is fighting back against anti-vaxxers who are using a clip from the show as a way to suggest that measles is no big deal, BuzzFeed reported.
- A CNBC story suggests that YouTube’s efforts to stem harmful content have come at a cost to parent Alphabet Inc.
- Africa Check has a new monthly podcast in which it debunks misinformation on WhatsApp. The name? “What’s Crap on WhatsApp?”
- How did the internet become what it is today? Law professor Jeff Kosseff has a new book on the legal shield that has enabled the user-generated internet to prosper. Read Susan’s review in The Washington Post.
- Populists around the world are significantly more likely to believe in conspiracy theories about vaccinations, global warming and the 9/11 terrorist attacks, according to a new study, the Guardian reported.
- Africa Check is calling for entries to its annual African Fact-Checking Awards.
That’s it for this week. Feel free to send feedback and suggestions to factchecknet@poynter.org.