When news broke on Monday that Notre Dame Cathedral was on fire, misinformation immediately started flooding social media. And French fact-checkers weren't far behind.
“When the fire started, I was at home,” said Samuel Laurent, editor of Les Décodeurs, a fact-checking project based at Le Monde newspaper. “I immediately started to look at Twitter because I know, in these cases, that’s where you’ll find the misinformation.”
“We’re kind of used to this.”
Les Décodeurs started debunking rumors about the origin of the fire (no, there’s no evidence it was an attack). CheckNews fired off answers to readers’ questions about the tragedy (no, the fire wasn’t started by a Yellow Vest protester). 20 Minutes debunked photos taken out of context (no, firefighters didn’t save a statue of the Virgin Mary).
And the French weren’t the only ones to jump on the big story.
In nearby Spain, Maldito Bulo published a similar roundup of viral rumors about the tragedy. The newly formed FactCheckEU alliance published a piece on the event, which was shared with other fact-checkers around the world. Even (Poynter-owned) PolitiFact jumped into the media scrum, debunking an Islamophobic hoax about the fire.
All of those fact checks racked up at least several hundred engagements on Facebook — and most of them got more reach than the hoaxes they debunked.
Below is a chart of other top fact checks since last Tuesday, ranked by how many likes, comments and shares they got on Facebook, according to data from BuzzSumo and CrowdTangle. Read more about our methodology here.
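For the curious, the ranking itself is simple arithmetic. Here is a minimal sketch in Python of how such a chart could be assembled, with invented numbers standing in for the exported BuzzSumo/CrowdTangle data (the figures and URLs below are placeholders, not real measurements); "engagement" here just means the sum of likes, comments and shares.

```python
# Toy sketch: rank fact checks by total Facebook engagement.
# The sample figures below are placeholders, not real BuzzSumo/CrowdTangle data.

from dataclasses import dataclass

@dataclass
class FactCheck:
    outlet: str
    url: str
    likes: int
    comments: int
    shares: int

    @property
    def engagement(self) -> int:
        # "Engagement" = likes + comments + shares, the simple sum
        # that social-listening tools typically report.
        return self.likes + self.comments + self.shares

fact_checks = [
    FactCheck("Les Décodeurs", "https://example.com/a", 1200, 300, 450),
    FactCheck("CheckNews", "https://example.com/b", 800, 150, 900),
    FactCheck("20 Minutes", "https://example.com/c", 400, 90, 120),
]

# Sort from most to least engaged, as in the chart.
for fc in sorted(fact_checks, key=lambda f: f.engagement, reverse=True):
    print(f"{fc.outlet}: {fc.engagement} total engagements")
```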
All in all, fact checks debunking hoaxes about the Notre Dame fire generally performed well on Facebook. And that’s good news, considering that misinformation regularly outperforms fact checks on the platform.
But fact-checkers still struggled to contain the spread of hoaxes on Monday. Why?
“Conspiracy-minded goons continue to twist real-time events into nefarious plots in the absence of any facts, and platforms’ viral sharing mechanics help their narratives dominate users’ attention while the truth is still being uncovered,” Casey Newton wrote in his newsletter for The Verge.
Ground zero for that battle is Twitter.
Of the hoaxes on BuzzFeed News' running list of misinformation about the Notre Dame fire, a format the outlet uses after most big news stories, all but one spread on Twitter rather than Facebook (although one hoax was about Facebook itself). One tweet, which purported to show video of a Yellow Vest protester inside the cathedral (it was just a firefighter), became the basis for several other viral hoaxes in other languages.
https://twitter.com/JaneLytv/status/1118191124630396929?ref_src=twsrc%5Etfw
Another baseless tweet claiming the fire was set deliberately was used as the basis for an Infowars story. Both have since been deleted.
But other hoaxes racked up thousands of likes and retweets, eventually surfacing on mainstream cable news shows in the U.S., BuzzFeed reported in a timeline. And Laurent said most of the conspiracies started on the American right.
“The first stories were that Muslims were cheering at the flames and the church burning, which was actually wrong,” he said. “It was not the French people that were sharing the first fake news — it was really the Americans and right-wing people trying to shape the discourse.”
Those kinds of Twitter-centric hoaxes are typical for breaking news situations, when gaps in information about an ongoing event are filled in by social media users. But for fact-checkers, they present a real problem.
Unlike Facebook, which partners with fact-checking outlets to debunk and decrease the reach of false content, Twitter doesn’t have a policy strictly aimed at decreasing the reach of false posts. Among the actions that the company does take is removing bogus accounts posing as news organizations.
But that policy can be gamed — and it isn’t applied uniformly.
BuzzFeed reported on Monday that imposter accounts for CNN and Fox News were used to publish bogus claims about the Notre Dame fire. They stayed online for a while because they had the word “parody” in their bios, and Twitter only removed them after BuzzFeed pointed them out. That’s a classic strategy used by some misinformers on Twitter.
Over the summer, I reported on how Twitter hasn’t been proactive about developing anti-misinformation policies that are essential during breaking news situations. Exhibit A is what happened after the school shooting in Parkland, Florida, when Miami Herald reporter Alex Harris was targeted by several imposter tweets that made it look like she was asking eyewitnesses for images of dead bodies.
When she reported it to Twitter, the company responded saying the posts didn’t violate its guidelines.
After the incident, Florida lawmakers called Twitter to Washington to explain how the platform was used to impersonate journalists. And that action didn’t even broach the question of reducing the spread of misinforming content — just enforcing rules that Twitter already has on the books.
Laurent said that, to him, the biggest problem on Twitter following news of the Notre Dame fire was the mix of hate speech with misinformation.
“If you read my account, you probably saw lots of guys saying, ‘We don’t believe you,’” he said. “One of the points of this story is that, if some people want to say this is a terrorist attack, I can — and you can’t tell me it’s otherwise … You can’t really expect them to be rational because they are not here for that.”
Facebook is undoubtedly a key driver of misinformation; it’s where hoaxes regularly get the most reach. And surfacing fact checks doesn’t always preclude the possibility of misinformation; a feature specifically designed to debunk bogus YouTube videos suggested content about 9/11 under videos about the Notre Dame fire.
But until Twitter develops at least a base-level way to enforce its policies and decrease the reach of misinforming posts (perhaps by amplifying work that’s already being done by journalists), bogus content will continue to inundate users following big breaking news events. And fact-checkers will continue to chase them.
“At this point, nothing beats humans,” David Carroll, an associate professor of media design at the New School in New York, told The Washington Post about the YouTube incident.
Regarding YouTube suggesting 9/11 videos, I wonder if imagery, rather than intent, may be to blame, which gives the company (or rather its algorithm) a perhaps unwarranted benefit of the doubt. What I mean is that, for me, seeing the collapse of Notre Dame's spire instantly evoked the indelible image of the World Trade Center towers falling, a textbook flashback. Were I inclined to relive those torturous moments, as many seem to do judging by the yearly news rebroadcasts and Twitter calls to "Never Forget," I might look up a video or two from that terrible day, raising the profile of those videos on YouTube and linking the two events in Google's algorithm. Multiply that by a few hundred (thousand?) views from the tragedy-porn addicts among us and you have a plausible explanation for why the suggestion began to appear.
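To make that hunch concrete, here is a toy sketch of the co-viewing signal described above. This is emphatically not YouTube's actual recommender, which is proprietary and far more elaborate; the session data and video names are invented. It only shows how a naive item-to-item co-occurrence count would start pairing two events once enough viewers watch both.

```python
from collections import Counter
from itertools import combinations

# Each session is the set of videos one (hypothetical) user watched.
sessions = [
    {"notre_dame_fire", "cathedral_history"},
    {"notre_dame_fire", "wtc_collapse"},   # viewers reminded of 9/11 ...
    {"notre_dame_fire", "wtc_collapse"},   # ... look up the old footage
    {"wtc_collapse", "never_forget_tribute"},
]

# Count how often each pair of videos appears in the same session.
co_views = Counter()
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        co_views[(a, b)] += 1

def suggest(video: str, top_n: int = 3) -> list[str]:
    """Rank other videos by how often they co-occur with `video`."""
    scores = Counter()
    for (a, b), n in co_views.items():
        if a == video:
            scores[b] += n
        elif b == video:
            scores[a] += n
    return [v for v, _ in scores.most_common(top_n)]

print(suggest("notre_dame_fire"))
# ['wtc_collapse', 'cathedral_history']: a few extra co-views are enough
# to push the 9/11 footage to the top of the suggestions.
```

If anything like this signal feeds the real system, then the comment's scenario holds: a burst of viewers toggling between the two tragedies would be enough to link them, no intent required.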