December 18, 2018

Fact-checking used to be a nerdy corner of journalism, a place for political junkies and detail-obsessed truth-seekers. No longer.

For the past few years, fact-checkers have been thrust to the center of the war for the future of the internet by the explosion of online misinformation and subsequent decisions by platforms and policy-makers. After a year of cross-national hoaxes in Europe, mob lynchings incited by WhatsApp rumors in India and attacks against fact-checkers in Brazil and the Philippines, the stakes for fact-checking have only gotten higher.

In 2019, we predict that fact-checkers will have to contend with the rise of government actions against misinformation around the world. They’ll see even more attempts to undermine their debunking efforts — particularly when it comes to videos. Technology companies will be coaxed into implementing more projects addressing the spread of misinformation on their platforms.

Our fact-checking predictions are not always right. But we promise to return to these at the end of the year and determine how outlandish the predictions turned out to be (see how well we did last year with our annotated 2018 predictions).

1. We’ll see more credibility scores deployed — and possibly misfiring

In 2018, more credibility score projects launched than we had time to keep up with.

NewsGuard, which draws upon a team of journalists to grade websites based on their transparency and accuracy, unveiled a browser extension that tells users whether or not a source is trustworthy. The Newseum, Freedom Forum Institute and Our.News partnered up this year to create a similar browser extension, which provides more context and related fact checks about specific websites. And VettNews basically does the same thing.

While these projects are, in theory, a good addition to the efforts combating misinformation, they have the potential to misfire. NewsGuard, for example, gave Al Jazeera the same grade it gave InfoWars — grades that the company hopes advertisers and tech platforms will use to blacklist suspect sites. Media Bias/Fact Check is a widely cited source for news stories and even studies about misinformation, despite the fact that its method is in no way scientific.

A lot of people want a quick-fix solution for misinformation. There’s a business incentive for figuring out a way to tell people what they should and shouldn’t read, even though online fakery is a dynamic, multidimensional problem. Expect to see more scores in 2019.

2. More platforms will take active measures to reduce the reach of misinforming content

A few days after Donald Trump’s election, Mark Zuckerberg famously said that “fake news on Facebook … is a very small amount of the content” and that suggesting it had any influence on voters was “a pretty crazy idea.” A mere month later, his company had partnered with fact-checkers in the United States to reduce the reach of fake news. As this program turns two years old, it has expanded to 23 countries and includes 49 partners. (Disclosure: Being a signatory of the International Fact-Checking Network’s code of principles is a necessary condition for joining the project. We also helped launch the project.)

While the results of this program have yet to be fully understood, Facebook has clearly committed to taking some action against demonstrably false content on its News Feed. Google has taken something of an inverse approach by highlighting fact checks in search. This approach, too, has yet to be evaluated rigorously and openly.

Other platforms have lagged behind. Twitter has stepped up the removal of false accounts, but has not put in place a more systemic response to virally false tweets. WhatsApp has been criticized for not doing much at all in complicated markets like Brazil and India, besides restricting forwarding and enabling access to its API. Both platforms will likely look at the lessons learned by their bigger cousins and roll out new product features in 2019.

3. Misinformers will continue to retreat to smaller groups and platforms where it’s harder to measure content

During the U.S. midterm elections, there probably was less clickbaity misinformation than during the 2016 presidential election. But more misleading, hyperpartisan content bubbled up in private Facebook groups — which served as a breeding ground for media-fueled conspiracy theories.

Over the past year, misinformers seem to have retreated to closed platforms in an effort to avoid detection by tech companies and journalists. During elections in Brazil, WhatsApp served as a primary vehicle for false news stories, memes and videos. Gab, a right-wing, Twitter-esque app where hate speech abounds, seems to have been instrumental in the radicalization of the man who shot and killed 11 people at the Tree of Life synagogue in Pittsburgh. Conspiracists regularly plot to bait the media with false information on anonymous forums like 4chan and 8chan.

Some researchers have had luck measuring the spread of content on these closed platforms, but there’s still no systematic way to measure virality there. And as tech companies increasingly crack down on the creation and distribution of fakery on their open platforms, misinformers will have even more incentive to migrate to closed ones. That will pose a big problem in 2019, when journalists and fact-checkers in Nigeria and India will be tasked with debunking misinformation on closed platforms ahead of major elections.

4. The EU will take center stage in the battle against online misinformation

For reasons of both policy and politics, the European Union is a must-follow in 2019.

The French law against the manipulation of information, approved in November, is being challenged by opposition parties. If it emerges unchanged, however, we should expect to see the first court rulings against false claims spread online “massively, deliberately, artificially or automatically.” How capably (or not) the judicial system of a major democracy grapples with online fakery will determine whether others follow suit (or have second thoughts).

At the continental level, the European Commission’s new action plan to combat disinformation will require monthly reports from the platforms and an early alert system for member states. Uncertainty looms over all these efforts. In addition, policy-makers will have to deal with European Parliamentary elections that risk offering the perfect conditions for a storm of pan-continental misinformation.

The European institutions have long attracted misperceptions: ahead of the 2016 Brexit referendum, only 58 percent of British voters surveyed by Ipsos correctly indicated that MEPs were directly elected.

Combined with the recent rise in cross-border online misinformation, this seems likely to result in a toxic mix. False stories have spread virally across member states, seeded by hyperpartisan actors and amplified by unwitting users or sloppy journalism. In recent months alone, false stories about a staged refugee drowning video (actually a documentary on the Greek exodus from Asia Minor), George Soros-funded credit cards to cover refugees’ travel costs (a composite of real stories in a false narrative) and a xenophobic photoshopped buzzer appeared in five or more EU countries within days or weeks of one another.

With Russia and refugees likely to be major topics in the clash between political forces at the EU level, actions against misinformation themselves will be interpreted through a political lens (with Eurosceptics and Russophiles less keen on them) — and might be halted or reversed after the May election.

5. Videos will become an even more fraught source of evidence

Deepfakes dominated the conversation about the future of misinformation in 2018. Yet it is unlikely that they will actually play a massive role in 2019. They are still very time-consuming to make, require more expertise than is generally understood and primarily target speakers for whom large video archives are available.

That doesn’t mean the credibility of video sources isn’t going to be under attack next year. The controversy over the doctored video of a White House press conference exchange that led to CNN reporter Jim Acosta’s temporary suspension shows that we are already in a “choose your own reality” crisis, BuzzFeed’s Charlie Warzel argued earlier this year.

Perhaps the greatest change will come when Adobe’s Project Cloak, an experimental tool that allows for seamless removal of elements from videos, eventually ships as a product to millions of customers. The challenges this might pose to debunkers trying to verify the location or authenticity of a video have yet to be fully understood. Journalists and fact-checkers have ways to counter the deterioration of videos as a source of evidence (see suggestions from folks at The Wall Street Journal and the Ethics and Governance of AI Initiative) — but training and public campaigns will have to follow as well.

Daniel Funke is a staff writer covering online misinformation for PolitiFact. He previously reported for Poynter as a fact-checking reporter and a Google News Lab…

Alexios Mantzarlis is a recovering fact-checker and tech worker with some experience in governments both national and international. He cares about online information quality as…
