How misinformation makes money
Much has been written about how fake news websites and other sources make money from spreading misinformation. During the 2016 election in the United States, the practice even became a cottage industry.
Now a new study quantifies just how much misinformers are profiting from online advertising. Spoiler: It’s a lot.
On Monday, the nonprofit Global Disinformation Index published a study based on a sample of about 20,000 websites that (Poynter-owned) PolitiFact and others have found to publish misinformation. It found that ad technology companies funnel about $235 million annually to such sites by running ads on them.
“Our estimates show that ad tech and brands are unwittingly funding disinformation domains. These findings clearly demonstrate that this is a whole-of-industry problem that requires a whole-of-industry solution,” said Clare Melford, co-founder and executive director of the GDI, in a press release sent to Daniel.
The researchers found ads for big-name brands like Amazon, Office Max and Sprint on clickbait and misinforming sites like Addicting Info, RT and Twitchy. And Google led the pack in supporting them.
According to the GDI study, Google served ads on about 70% of the websites sampled. It also provided about 37% of their ad revenue, or $86 million annually. The next few companies didn’t even come close in their support for misinforming sources.
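For readers who want to sanity-check those numbers, here’s a quick back-of-the-envelope calculation in Python. The figures are the approximations cited above, not the study’s raw data:

```python
# Back-of-the-envelope check of the GDI figures cited above.
# These values are the article's approximations, not the study's raw data.
total_ad_spend = 235_000_000   # ~$235 million flowing annually to the sampled sites
google_share = 0.37            # Google's estimated share of that revenue

google_revenue = total_ad_spend * google_share
print(f"Google's estimated cut: ${google_revenue / 1_000_000:.0f} million per year")
# Prints roughly $87 million, consistent with the ~$86 million cited
```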
Part of the reason for Google’s outsized role is how easy the company makes it to monetize websites. Anyone with a website can apply to use AdSense and, if accepted, start placing ads on their site.
After the 2016 election, the company tried to rein that in a bit. In a statement to Reuters at the time, Google said it would restrict ads on sites that “misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property.” It has no rules explicitly against misinformation.
The GDI’s latest research shows that the company still has a long way to go in preventing the monetization of misinformation. And it might look to smaller ad companies for advice.
Last August, Revcontent, a “content recommendation network,” announced that it would start demonetizing individual pieces of content that had been fact-checked as false by at least two members of the International Fact-Checking Network (IFCN). That effort ensures that, even if a publisher is transparent about its identity and purpose, it can’t make money simply by publishing falsehoods.
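In policy terms, the rule Revcontent described is simple. Here’s a minimal sketch of how such a check could work — all names and data structures below are hypothetical, not Revcontent’s actual system:

```python
# Minimal sketch of a "demonetize after two independent fact-checks" rule,
# as Revcontent described it. All names and structures here are hypothetical.
from dataclasses import dataclass

@dataclass
class FactCheck:
    organization: str  # assumed to be a verified IFCN signatory
    rating: str        # the organization's verdict on the content

def should_demonetize(fact_checks: list[FactCheck]) -> bool:
    """Flag content for demonetization once at least two distinct
    fact-checking organizations have rated it false."""
    orgs_rating_false = {fc.organization for fc in fact_checks if fc.rating == "false"}
    return len(orgs_rating_false) >= 2

# One rating isn't enough; a second, independent check trips the rule.
checks = [FactCheck("Checker A", "false"), FactCheck("Checker B", "false")]
print(should_demonetize(checks))  # True
```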
Obviously, implementing a partnership between fact-checkers and Google would be harder and more complicated than with smaller ad tech companies. But the GDI’s latest research reveals that the company’s existing rules aren’t sufficient to prevent misinformers from profiting from falsehoods, in spite of Google’s other efforts to elevate fact-checking. And that could spell trouble going into the 2020 election.
. . . technology
- Facebook this week outlined its approach to handling content from politicians that breaks the platform’s community standards. It will be allowed to stay, said Nick Clegg, the company’s vice president of global affairs and communications, unless it poses some immediate danger or appears in a paid ad. From now on, he said, “we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.”
- Facebook, meanwhile, has taken down a page called “I Love America” that featured all kinds of “patriotic content” after it learned the page was run by Ukrainians. Judd Legum, who runs the newsletter Popular Information, first reported the Ukraine connection.
- Last summer, WhatsApp started limiting the number of groups users could forward messages to in an attempt to cut down on the virality of misinformation. Now, researchers have found that effort does slow the dissemination of falsehoods, but it doesn’t block them altogether.
. . . politics
- Thirty-one fact-checkers from 17 countries are working together this week to cover the 2019 United Nations General Assembly. Forty-three claims were verified on the first day, and only 13 of them were rated 100% true. Read Cristina’s piece about it here.
- Democrats are not ready for the “tsunami” of disinformation coming in the 2020 election, The Washington Post’s Greg Sargent wrote this week. “What we’re about to see in disinformation warfare is likely to make 2016 look tame,” he said.
- Twitter is making publicly available archives of tweets and media that it believes resulted from potentially state-backed information operations on the platform. Those who provide an email address can access datasets dating back to October 2018. The latest additions, from April, came from Spain, the United Arab Emirates and Egypt.
. . . the future of news
- A researcher from the University of Queensland in Australia said he has received funding from Facebook for research into how putting a “human in the loop” of artificial intelligence can help solve the misinformation problem. Here’s his account in The Conversation.
- Google’s “knowledge panels” – those no-click boxes that show up with search results – can surface false information, The Atlantic reported. “At their best, knowledge panels make life easier,” wrote Lora Kelley. “But at their worst, the algorithms that populate knowledge panels can pull bad content, spreading misinformation.”
- Research from Oxford shows that “junk news” published on Facebook in May, ahead of the European elections, got more shares, likes and comments than news from established media. The researchers attribute that success to the fact that junk news is not bound by ethics, logic or truth.
An important aspect of accountability journalism is following up on whether laws intended to make lives better are working as promised. Last week, Factcheck.org did just that with the U.S. “right to try” law that President Donald Trump signed last year.
The law is aimed at helping terminally ill patients gain access to experimental drugs that haven’t been approved by the government. Trump has said several times that the law has helped “a lot of people.”
This is a nuanced story. The law might have good intentions and might even help save lives in the future if drug developers can make it work. But there is no evidence that it has helped “a lot of people” just a year after its enactment, wrote Factcheck.org’s director Eugene Kiely.
What we liked: This is a story that could have been told any number of ways. Kiely’s treatment demonstrated how fact-checking can streamline a story by focusing directly on the claim. He started with the simple quote from Trump that the law is saving “a lot” of lives, then showed that there is a lack of proof to support the president’s assertion.
- The New York Times tackled the question of whether the book publishing industry needs more rigorous fact-checking.
- Southeast Asian nations are banding together to regulate Big Tech on issues including “fake news,” Reuters reported.
- The U.S. Army is warning soldiers to be aware of disinformation on social media posted by foreign agents. Nefarious actors, it said, might pose as senior military leaders.
- Two University of California-Berkeley professors argue in the Kansas City Star that breaking up Facebook would exacerbate the “fake news” problem.
- Data & Society created a taxonomy of manipulated videos ranging from cheapfakes to deepfakes.
- Speaking of deepfakes, Google has released a dataset of them to aid researchers working on detection methods.
- NBC News wrote about how anti-vaxxers are targeting mothers whose babies died unexpectedly and convincing them that vaccines are to blame.
- Fact-checking site Truth or Fiction was erroneously flagged as clickbait on Facebook.
- Axios asked misinformation reporters how they approach their beat.
- The BBC profiled Snopes in honor of its 25th anniversary.
That’s it for this week! Feel free to send feedback and suggestions to factually@poynter.org.