Talk about a PR strategy backfiring dramatically.
On Wednesday, Facebook announced an ambitious, privacy-conscious plan to give academics an enormous amount of data on its actions against misinformation.
The same day, the social network held an on-the-record shindig for media and tech reporters in New York City. "Look how we've changed" was the implied message. Here was a reformed, responsible and transparent platform ready to own up to its flaws in public.
It all went south when CNN reporter Oliver Darcy asked a relatively simple question: Why does Facebook allow InfoWars, a regular purveyor of hateful and damaging conspiracy theories, to use the platform as a channel to disseminate its odious misinformation?
"I guess just for being false that doesn't violate the community standards," was one answer from the recently appointed head of News Feed John Hegeman.
The reaction to Hegeman's words, as well as Facebook's later contortionist tweets, has been understandably caustic.
Darcy's question revealed that the new head of News Feed did not seem to have a good answer to a fundamental question about misinformation on his product.
For that reason, it was a good question. Two days later, it’s time for some new questions.
Facebook has already said it won't ban accounts that don't violate its community standards, and those standards explicitly state that content isn't removed from the social network merely for being false. Set aside for a moment whether this is a good policy.
Concentrate instead on what Facebook has said it will do about false content since its 2016 launch of a partnership with outside fact-checkers. (Disclosure: Poynter's International Fact-Checking Network helped shepherd this partnership into existence, but has not at any point received funding from Facebook.)
Facebook will downrank false content, annotate it with a fact check and demonetize the offending page.
So the question we should be asking Facebook is: How has this worked out with InfoWars? How often have fact-checkers flagged an InfoWars post as false? How many fewer people did those posts reach as a result? And can InfoWars still advertise and monetize on the platform?
Ironically, the very dataset Facebook announced Wednesday should contain answers to these questions. The answers will arrive at an academic pace, however.
In the meantime, tech and media reporters should push the conversation further by asking Facebook: Has the third-party fact-checking product significantly reduced the reach of those InfoWars posts with demonstrably false content? If not, what is Facebook's strategy to ensure that it does?