Factually is a newsletter about fact-checking and accountability journalism, from Poynter’s International Fact-Checking Network & the American Press Institute’s Accountability Project. Sign up here
Who should police speech online?
A new survey from the Knight Foundation and Gallup found that a majority of Americans (65%) want the internet to be a place of free expression. But eight in 10 people said they don’t trust big tech companies to make the right decisions about what content appears on their sites, and what should be removed.
At the same time, the survey’s respondents, especially conservatives, said they trust the government even less to make these decisions.
The fact that people doubt the government’s ability to regulate internet content hasn’t stopped some politicians from trying. This week, the Trump administration is proposing that Congress scale back a federal law, Section 230 of the Communications Decency Act of 1996, that shields social media companies from legal liability for harmful posts by third parties on their platforms.
The law protects the companies from liability, but at the same time encourages them to be responsible by holding them harmless when they moderate – or take down – content as they see fit, such as when a post violates the platform’s standards.
Repealing the law completely would change the internet as we know it, because platforms would be legally vulnerable whenever someone posted something on their sites that could lead to a lawsuit. The argument for congressional action is that the companies won’t change unless they can be held legally liable.
How does all this relate to misinformation? Without Section 230, a business might try to hold Yelp liable if someone posts a falsehood in a review that damages the business. If someone drinks a disinfectant and then dies because of a “fake cure” posted on Facebook, the person’s family might try to hold the platform responsible. If someone uses a falsehood on Twitter to incite violence that then injures people, Twitter might be held liable.
The Knight/Gallup survey also asked about Section 230. Those results were mixed. Fifty-four percent said the law has done more harm than good because it has not made the companies accountable. At the same time, almost two-thirds (66%) said they supported keeping the law as is.
The report noted that the Section 230 attitudes were “weakly held and subject to how the question is framed.” That makes sense. Even though it’s not very long, the law isn’t easy to grasp, much less form poll questions around.
Other results were clearer. Amid the coronavirus pandemic, 85% of Americans favored the removal of false or misleading health information from social media, and 81% supported removing intentionally misleading information on elections or other political issues.
This survey, which was conducted in December and March, is just a snapshot in time. But in the growing debate over “platform accountability” – how to ensure the social media companies act responsibly – the answers provide important data points as society looks for the right solutions.
– Susan Benkelman, API
. . . technology
- Facebook held an open competition, called the Deepfake Detection Challenge, to find algorithms that can spot AI-manipulated videos. The Verge reported that the results suggest “there is still lots of work to be done” before these automated systems will be able to identify deepfakes.
- The Verge’s James Vincent wrote that the contest’s winning algorithm spotted examples of deepfakes with an average accuracy of 65.18%. “That’s not bad,” he wrote, “but it’s not the sort of hit-rate you would want for any automated system.”
- Twitter said last week that it had removed more than 170,000 accounts tied to the Chinese government — 23,750 accounts that were “highly engaged” in spreading disinformation, and another 150,000 accounts dedicated to boosting those messages by retweeting and liking them.
- The platform said it also removed a smaller number of accounts connected to disinformation from Russia and Turkey.
. . . politics
- Fact-checkers rallied around Maria Ressa, CEO and executive director of Filipino news outlet Rappler, who was convicted of cyber libel in a Manila courtroom Monday.
- Ressa and former researcher-writer Rey Santos Jr. could face up to six years in prison for a 2012 article that tied Filipino businessman Wilfredo Keng to human trafficking and drug smuggling.
- The article was published before the current cyber libel law went into effect. However, the court considered a 2014 typo fix to be republication, and extended the statute of limitations from one year to 12.
- Posts on social media falsely accusing CNN of altering the racial identity of crime suspects in news photographs “have become a distinct subclass of misinformation,” Snopes’ David Mikkelson reported. PolitiFact’s Daniel Funke also debunked the accusation.
. . . science and health
- Public health officials recommending mask-wearing and other protections from the coronavirus have come under attack by people who see the virus as a “nothing burger,” as one epidemiologist put it to The Washington Post.
- Two associations of local health officials released a statement warning that “public health department officials and staff have been physically threatened and politically scapegoated,” The Post said.
This fact-check features a video of a protest in London against statue removals, in which a large group of men can be heard chanting, “we’re racist, and that’s the way we like it.” British fact-checking organization Full Fact reported that three news outlets included the video in their coverage, and that it had been shared widely on social media.
But if you listen carefully, as fact-checker Abbas Panjwani did, you can hear the audio skip in the middle, which he pointed out is a sign the video is a fake. The chant was dubbed in from a 2015 incident in which fans of the British football club Chelsea pushed a Black man off a crowded Paris metro train.
What we liked: This fact-check draws attention to how easy it is to misinform the public with a bit of audio manipulation. It also offers advice to listen carefully for audio flaws as a way to detect misinformation.
– Harrison Mantas, IFCN
- Global Fact 7 is next week! The virtual conference, running June 22-26, will feature 150 speakers from over 40 countries and will be live-streamed for the public. Check out the full schedule here.
- Writing in The Washington Post, Nieman Lab’s Laura Hazard Owen reviewed the new book by the paper’s fact-checkers about President Trump’s lies. Her piece contains some thoughtful insights on fact-checking in general.
- In Kenya’s Dadaab refugee camp, Abdullahi Mire aka “Corona Guy” is using the power of radio to fight COVID-19 misinformation.
- The New York Times wrote this week about how Russian trolls have concluded it is easier to amplify and spread conspiracy theories and divisive content from real Americans than create tales of their own.
- The Carnegie Endowment’s Partnership for Countering Influence Operations and Twitter announced two panel discussions July 9 looking at misinformation campaigns by state actors on the platform.
That’s it for this week! Feel free to send feedback and suggestions to factually@poynter.org. And if this newsletter was forwarded to you, or if you’re reading it on the web, you can subscribe here. Thanks for reading.