The state of the internet is gloomy.
The formerly feted founder of the world’s largest social network was dragged in front of Congress after his company was accused of sabotaging democratic processes around the world. The biggest search engine’s video platform features conspiracy theorists. Microblogging is a smoldering dumpster fire ready to ignite with hatred and misinformation every time there’s breaking news. Messaging apps have been used to push hateful and false content largely undetected by fact-checkers.
Even Tim Berners-Lee, who knows a thing or two about the internet, having invented the World Wide Web, expressed real concern about the situation in a letter marking the web’s 28th birthday.
Around the world, public disaffection is being met by various forms of institutional intervention. Our information disorder, to use the term coined by Claire Wardle and Hossein Derakhshan at the Shorenstein Center, is a real and daunting problem.
Yet too often, suggested solutions conflate distinct challenges that need different solutions — or propose the right fix for the wrong problem.
We need to get a lot more specific about which problem we are actually trying to solve and ensure that the solutions are proportionate, relevant and measurable.
Take EUvsDisinfo, set up in 2015 by the European Union to counter Russian propaganda. An in-house fact-checking unit was never going to appear credible to Euroskeptics. It has since been the object of a complaint to the EU Ombudsman, sued by three publications and censured by the Dutch parliament.
It’s not just legislators, either. To simplify and reach a broader audience, journalists often wrongly use the shorthand “fake news” to refer to all unsavory online behavior (we would know, because we made that mistake ourselves).
Violent anti-Rohingya propaganda in Myanmar is a very serious problem — but calling it “fake news” puts it in the same league as false pictures of sharks swimming on flooded highways. The German NetzDG law targets criminal material, primarily hate speech, and yet coverage has regularly described it at least in part as a measure against “fake news.”
If we are to build effective solutions to information disorder, we must recognize its complexity. There are (at least!) 11 different things we’re actually concerned about when lamenting the parlous state of the online information ecosystem:
- The viral reach of misinformation.
- Increasingly sophisticated fabricated accounts and content.
- Low levels of digital literacy among social media users.
- Hate speech and trolling.
- The incentive structures of the major platforms as they relate to producing content.
- State-sponsored propaganda.
- Polarized online communities.
- Cambridge Analytica-style data skulduggery.
- The (lack of) financial incentives for quality information online.
- The monopolistic condition of search and social network platforms.
- Distrust in media organizations and the content they deliver.
I am not an expert on any but the first two of these buckets. These challenges are deeply intertwined, but it’s clear to me that they are very different beasts.
Polarization is a cultural and political challenge that requires a broader societal reckoning — that may or may not come. Countering state-sponsored propaganda is a matter for intelligence agencies and media transparency rules. Data privacy should be secured by statute and breaches punished with hefty fines. Competition authorities should investigate monopolistic situations. Hate speech is a matter for law enforcement, in those countries where it is illegal.
Viral fake news is related to all of the above. But it is at heart an engineering problem. And that seems like something we ought to be able to fix, or at least better control.
This is not, I recognize, a consensus position. At a recent journalism conference, the thoughtful media scholar Martin Moore pushed back on my “bucketification” of the problem:
“That suggests there are very specific responses that you can make which are going to resolve this. That’s, I think, just a fallacy. I think we have to start to recognize that the digital environment is a mess, particularly in the way it works economically.”
He suggested that tailored fixes are doomed to fail.
“It’s not going to work at all.”
I don’t mean to single out Moore on this. His holistic view is popular in much of Europe, as I learned during the deliberations of the EU’s high-level group on fake news.
Tasked with scoping the phenomenon of fake news, the group ended up discussing everything from data privacy to public funding for news media. While there were lots of good ideas in the report, trying to touch on everything meant we drilled down on nothing.
Though I recognize the need to reform the incentive structures of major platforms to help promote accurate information, I also think that an all-or-nothing approach will scuttle our capacity to actually fix some of the problems.
Put plainly: We could waste years discussing whether Facebook is fundamentally good or bad for democracy and not resolve anything. That won’t stop a single viral fake photo.
The way out has to be trying many targeted approaches, like the third-party fact-checking tool that Facebook has been expanding globally. (Disclosure: Being a signatory of the International Fact-Checking Network’s code of principles is a necessary condition for becoming a partner.)
It is far from perfect, as I’ve repeatedly noted — and it might well fail. At the same time, the tool is addressing a specific problem with a specific response. It has a specific goal that it can be judged against.
This isn’t a defense of any single solution. It’s a rallying cry for a sectoral approach. Researchers and journalists should scrutinize individual solutions to distinct problems. Funders and institutions should develop policies in the same way.
We need fewer wide-ranging discussions about how everything needs to change for anything to change and more concrete initiatives addressing the actual problems.
We need buckets.