Newsrooms gear up to cover 2020 misinformation
With a little more than a year to go before the 2020 election, U.S. newsrooms are gearing up for what they expect will be a deluge of misinformation aimed at influencing, dividing and confusing voters.
The efforts fall, roughly, into two categories: covering misinformation as a beat to alert readers to hoaxes and trends in false information, and learning or improving verification skills to ensure that news stories don’t reproduce or amplify falsehoods.
In an example of the former, The Washington Post last month announced that it had assigned reporter Isaac Stanley-Becker to what it called a new “digital democracy” beat focused on “the largely unregulated and increasingly dominant role of the internet in driving U.S. politics.” One of his early pieces was a smart take on how Republicans who express concern about President Donald Trump’s interactions with Ukrainian President Volodymyr Zelensky are becoming targets of disinformation campaigns.
This summer, The New York Times, in a piece laying out its fact-checking operation and its plans to cover online disinformation, said it would again create a “tip line” readers can use to flag material they think is intended to mislead. A similar venture during the midterms drew about 4,000 submissions, the paper said.
And, of course, our newsletter co-author Daniel Funke recently launched a new misinformation beat for (Poynter-owned) PolitiFact, focusing on falsehoods from and about the 2020 candidates’ campaigns.
We saw similar efforts to bolster coverage of misinformation before the 2018 midterms, but the presidential cycle is expected to bring new methods and intensity to the manipulation attempts.
In the verification space, the nonprofit First Draft has launched a program it calls “Together Now” to train newsrooms across the country in responsible reporting on misinformation.
“We see examples every day of problematic content winding its way into news reports,” said Aimee Rinehart, First Draft’s director of partnerships and development. She said it’s no longer sufficient for newsrooms to have one specialized forensics team to spot fakes — all journalists have to be trained in skills like reverse-image searches.
“Disinformation actors are working double time to fool the night and weekend crews,” she said. “Newsrooms can no longer rely on a 9-to-5 news cycle.”
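To make Rinehart’s point concrete: the skill she names, reverse-image searching, boils down to asking whether a “new” photo already exists somewhere else. Here is a hedged sketch of the underlying idea using perceptual hashing; it assumes the third-party Pillow and ImageHash Python packages, and the file names are invented for illustration.

```python
# Sketch of the idea behind reverse-image matching: near-duplicate images
# produce similar perceptual hashes even after resizing or re-encoding.
# Assumes the third-party Pillow and ImageHash packages; the file names
# in the usage example are hypothetical.
from PIL import Image
import imagehash

def likely_same_image(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Return True if two images are probable near-duplicates."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # Hamming distance between hashes

# Usage: check whether a "breaking news" photo is recycled file footage.
# likely_same_image("viral_post.jpg", "archive_2017.jpg")
```

This is only the comparison step; a real reverse-image search also needs an index of known images to match against, which is what services like Google Images and TinEye provide.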
Storyful, a social media intelligence and news agency, is doing work in both categories – helping its newsroom partners report on misinformation and also identifying manipulations.
The company, owned by News Corp, recently launched a unit called Investigations by Storyful that will partner with news organizations to use its social media analysis – data about what people are doing and saying online – to identify trends, including problematic ones.
An example, said Darren Davidson, Storyful’s editor-in-chief, was a recent Wall Street Journal story exposing how people get around Facebook’s ban on selling guns in its marketplaces by purporting to sell only the gun cases. But the cases are posted at inflated prices, an indication that the listings have become “code” for the sale of the actual guns. After the Journal’s story, 15 senators called on Facebook to halt the practice.
Storyful will also work with newsrooms to help identify fake content or verify that photos and videos on social media are legitimate, which will be a particular need in the 2020 campaign, Davidson said in a phone interview.
“There’s a lot of concern in newsrooms about being gamed or conned as the election cycle ramps up,” he said.
Have you heard of other news organizations staffing up their misinformation teams? Let us know at factually@poynter.org.
. . . technology
- Facebook has a new problem in Brazil: false job offers. Over the course of a month, Agência Lupa (in Portuguese only) monitored 35 posts that carried fake hiring promises and drew more than 107,000 interactions. The posts promised users a job but required them to leave a comment to reach “the promised offer.” A bot would then contact commenters and send them an external link. Those who clicked on it were asked to re-enter their usernames and passwords, not realizing they had landed on a fake Facebook login page. Through this scheme, a great deal of personal data was stolen and some profiles were hijacked. (A minimal sketch of the kind of lookalike-domain check that catches this phishing flow appears after this list.)
- Anti-vaccination billboards full of misinformation are being funded through Facebook fundraisers, despite a crackdown by the platform on such practices, NBC News reported. Facebook closed one fundraiser after NBC asked about the practice, but several others remained active, the network’s story said.
- More on Facebook: BuzzFeed News’ Craig Silverman published an investigation into a U.S. company that scammed users with bogus free-trial offers for low-quality products and paid people to rent out their accounts for posting ads.
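The scam Lupa documented hinges on a login page hosted on a domain that merely resembles facebook.com. As a hedged illustration (not anything the fact-checkers describe using), here is a minimal Python sketch of a lookalike-domain check; it uses only the standard library, and the sample URLs are made up.

```python
# Minimal sketch: flag URLs that present themselves as Facebook but are
# hosted elsewhere -- the pattern behind the fake login pages Lupa found.
# Standard library only; the example URLs below are hypothetical.
from urllib.parse import urlparse

LEGITIMATE_HOSTS = {"facebook.com", "www.facebook.com", "m.facebook.com"}

def looks_like_facebook_phish(url: str) -> bool:
    """Return True if the URL's host invokes Facebook without being it."""
    host = (urlparse(url).hostname or "").lower()
    return "facebook" in host and host not in LEGITIMATE_HOSTS

# The lookalike host survives a casual glance but fails the check:
print(looks_like_facebook_phish("http://facebook-login.example.com/entrar"))  # True
print(looks_like_facebook_phish("https://www.facebook.com/login"))            # False
```

A real filter would also need to catch hosts that drop or misspell the brand name entirely, but even this crude test separates the two URLs above.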
. . . politics
- Democratic presidential candidate Elizabeth Warren ran a Facebook ad saying the company’s CEO Mark Zuckerberg had endorsed Donald Trump for re-election. He hadn’t, of course, but her point was to call Facebook’s bluff on its policy of not fact-checking political ads.
- Politico reported that the Democratic National Committee sent an urgent email to presidential campaigns the day before Tuesday’s debate warning them to stay on guard against foreign manipulation efforts and disinformation from Trump and his allies.
- Vice wrote about how the U.S. Census Bureau is working with technology companies to limit the spread of misinformation about the 2020 census. But some false reports have already gained traction online.
. . . the future of news
- ABC News mistakenly ran video of a 2017 shooting demonstration at a Kentucky gun range with a story about Turkish attacks in northern Syria. The network apologized for the report. The New York Times quoted First Draft’s Claire Wardle as saying such mistakes are “relatively easy” to avoid.
- If you’re counting on machine learning to help people automatically identify misinformation, we have some bad news. Two papers from MIT researchers found that, while machines are good at detecting when content is created by other machines, they’re pretty bad at determining whether something is true or false.
- The Globe and Mail in Toronto talked to several experts about why people fall for false information and share it. One researcher at New York University is conducting a brain imaging study to investigate why people older than 65 are six to seven times more likely to share false news than their younger counterparts.
Last Monday, when the Spanish Supreme Court sentenced former leaders of the Catalan independence movement to lengthy prison terms, the streets of Barcelona became the stage for violent protests. Inevitably, social media was rife with false information.
In about 24 hours, Maldita.es and Newtral, two fact-checking organizations based in Madrid, caught and managed to debunk at least eight pieces of misleading content that had gone viral.
It was false, for example, that a 5-year-old boy and a man who had a heart attack at El Prat Airport died because protesters wouldn’t let an ambulance get to them. It was also false that shares in the Spanish stock market fell as a result of the ruling, and that business owners threatened employees who thought about going on strike.
Spanish fact-checkers also flagged a series of unproven claims attributed to the presidential candidates. In a few weeks, the country will hold its second national election in six months.
What we liked: Maldita.es and Newtral not only worked quickly but also showed they can fact-check content in different contexts using different sources: the public health system, the stock market and the presidential campaigns. Maldita even gathered all its fact checks on one page, which made them easier to read and distribute.
- The first samba about fake news will be sung in February, during Rio de Janeiro’s Carnival, by São Clemente Samba School participants.
- The Washington Post Fact Checker has updated its tally of false or misleading claims from Trump: 13,435 over 993 days.
- First Draft has a new guide for verifying online information.
- As pro-democracy protests in Hong Kong continue, scholars warn that disinformation is worsening.
- Bellingcat wrote about a coordinated social media campaign aimed at distorting events in the Indonesian province of Papua.
- Writing for Poynter.org, Josie Hollingsworth of PolitiFact reported on how Spanish fact-checkers are preparing for the fourth national election in four years.
- Witness Media Lab held a workshop on deepfake videos in Brazil. Here are some takeaways.
- Could hiding likes on Facebook reduce the spread of misinformation? Research suggests it’s possible.
- Nearly two months after Daniel reported on how mass shooting threats were spreading in private messaging apps, Reuters wrote about how a similar rumor sparked fear in an Indiana community.
- Daniela Flamini of Poynter wrote about what misinformation researchers may find in the 32 million URLs Facebook has shared with them.
That’s it for this week! Feel free to send feedback and suggestions to factually@poynter.org. And if this email was forwarded to you, you can subscribe here.
Way back in 2016, the Poynter Institute published a fairly robust corrections policy.
https://www.poynter.org/archive/2016/submitting-a-correction-to-poynter-2/
The Poynter Institute has not followed this policy for either the email or the web version of this newsletter and has stonewalled when asked about it.
For example, the 2016 policy calls for the subsequent email newsletter to run a correction for any errors in a previous version. That makes sense. How else is one who receives the newsletter exclusively via email supposed to see the correction?
Yet the email version of the newsletter makes no mention of the error in the preceding edition.
Why?
And why the stonewalling on this?
Correction: The newsletter, contrary to what I wrote above, does mention the correction from last week.
That’s good for Poynter.
But the corrections policy still hasn’t been followed with regard to its stipulation that a correction disclose how the mistake was discovered.
I apologize for my error in misidentifying a problem with Poynter’s adherence to its corrections policy.