Covering crazy conspiracy theories and bogus claims online gets clicks. But sometimes it might do more harm than good.
In the second episode of Poynter’s limited-run podcast about fact-checking and misinformation, we tried to figure out when debunking false narratives ends up giving them a larger audience.
“It was the effect of journalists kind of going by established best practices that allowed a lot of these manipulators to really hijack conversations in ways that were insidious,” Whitney Phillips of Syracuse University said on the show. “A lot of people got duped because they were being good at their job, basically.”
So how can journalists do better when reporting on conspiracy theories and fake news? In the episode, Ben Collins reflects on his experience covering extremism for NBC News, and Phillips debriefs us on her playbook to avoid amplifying misinformation.
Listen to the show below, or wherever you get your podcasts. And let us know what you think by emailing dfunke@poynter.org, tweeting @factchecknet or filling out this form.
Below is a transcript of the full episode, edited for clarity and brevity. Read more transcripts for other episodes of (Mis)informed here.
The Bump – 0:41
Daniel Funke: It sounds like a plot that’s ripped from the pages of some weird sci-fi novel.
QAnon is a conspiracy theory that claims U.S. officials aren’t really investigating Russian interference in the 2016 U.S. election. Instead, the theory goes, they’re secretly investigating Democrats like Barack Obama and Hillary Clinton for a variety of salacious crimes.
Back in October 2017, a 4chan user started posting about what they called “the storm.” That’s when adherents believe that top Democrats will all be sent to prison. The conspiracy went mainstream over the summer, when Trump supporters started wearing QAnon T-shirts to campaign rallies.
Weird, right? But that’s the norm for 4chan, where users can post anything they want, anonymously. Users regularly post racist, xenophobic and sexist hoaxes with the goal of getting reporters to cover them. And sometimes it works.
DF: Today on the show, we’ll hear from two people who have helped write the playbook for how to cover conspiracy theorists — including what not to do.
First, we’ll talk to Ben Collins, who covers misinformation and online extremism for NBC News. He has some advice for reporters who regularly talk to hoaxers online.
Then we’ll catch up with Whitney Phillips at Syracuse University. She published a report on how journalists can avoid getting tricked into amplifying misinformation and false narratives.
The Set – 3:17
DF: There’s no reason a reporter would want to be on 4chan. Racism, homophobia, sexism and xenophobia are the norm there. But for NBC News reporter Ben Collins, sites like 4chan are invaluable sources for his stories on misinformation and extremism.
I spoke to him about what it’s like to cover those stories, and how other reporters can avoid amplifying bogus narratives.
DF: Hey Ben, thanks for coming on the show. I really appreciate it.
Ben Collins: Hey, thanks for having me.
DF: Yeah, of course. So you cover what you call the dystopia beat, and I’m curious what that means exactly.
BC: Sure. So usually it’s disinformation and extremism, and basically how the internet is affecting real life. And I know a lot of the focus right now is on the bad stuff, because this month we’re averaging, like, two extremist events based on internet ideology. But there are some good parts of the dystopia, some funny things as well. It’s how technology has sort of infiltrated life in ways we weren’t expecting, and it would be funny if it weren’t so stupid and really terrible sometimes.
DF: Walk me through how you report on conspiracy theories. I know they come from a lot of different places, 4chan and 8chan being a few of them. So maybe talk about where these things bubble up, and why.
BC: Sure, and a lot of it’s in broad daylight; some of it, maybe the more pernicious stuff, is sort of hiding in group messages and more private platforms. But I think the most important thing we do know right now is that there aren’t enough checks on these. Like, we see tons of them on these platforms; we see tons of mostly racist, anti-immigrant conspiracy theories come through the same groups of people on Facebook and Twitter on a daily basis. And that’s the stuff that reaches your dad and your grandma, things like that. But they bubble up in these more closed spaces, or spaces that are just harder to access for regular people.
Places like 4chan are open spaces that are, you know, totally incomprehensible to most people. And then there are places like Discord, which are relatively closed message boards, where people trying to do harm organize. They realize they’re putting out disinformation and conspiracy theories that are wrong, or they’re trying to dupe people. And then once it gets to spaces like Facebook and Twitter, that’s when it gets credulously eaten up by stupid people.
RELATED ARTICLE: When and how to use 4chan to cover conspiracy theories
DF: Hey, let’s go into 4chan a little bit more because I think that’s the one that, you know, has captured a lot of attention. How does the platform work? What kind of people go there and coordinate some of these misinformation campaigns?
BC: It’s like Snapchat for ghouls that don’t leave the house, like, it disappears. I’m talking more specifically about the /pol board, P-O-L, which is a politics board on 4chan where most of the political alignment gets pushed through. It’s overtly white nationalist or white supremacist. It’s extremely anti-Semitic. It’s homophobic, transphobic. Look, if you can hate a group of people that has been persecuted at one point in time, that is the place to do it.
Everything on there is ostensibly anonymous. You can sort of jerry-rig a username, but it’s hackable; it’s not a particularly great system. So basically people go there, they troll anonymously, and most of the stuff on there is wrong or racist. That anonymity, in an ecosystem that doesn’t need facts to be true, just plausible deniability, works really well for far right-wing media. Places like Gateway Pundit, InfoWars and sometimes the Drudge Report will pick those rumors up and get them to pundits. All those rumors start on 4chan.
DF: How do you navigate covering these kinds of rumors and conspiracy theories? Because obviously it’s newsworthy when a rumor on 4chan makes the jump to InfoWars and then to Fox News, going up the media ecosystem. But you also don’t want to give more voice to people who are spreading anti-Semitic, racist things.
BC: It’s not perfect, because it’s journalists sort of working together to find out what works in keeping really bad, totally false conspiracy theories from getting to, say, the president. Like, we don’t know if we’re stamping something out or if we’re throwing fuel on the fire sometimes. And that’s scary, but through basically guess-and-check, we’ve gotten to the point where we’re pretty confident about when to step in.
And usually that’s when a medium-tier public figure starts to push it out in a way that can get to the president, or to another more high-profile public figure who can get onto television or something like that. We try to snuff it out before it gets to the point where, you know, the president can, I guess, send it out to his followers with plausible deniability.
DF: Let’s talk about the bad situations. What are some examples in which you’ve seen journalists or media companies amplify conspiracy theories, rumors or hoaxes from these darker corners of the internet and give them more breadth than they deserved, or amplify rumors to the point where media companies reported them as if they were true?
BC: Sure. I’ll do the fun both-sides thing here, because this sort of conspiracy festering happens on both the liberal and the conservative side. On the liberal side, it’s the Russian disinformation dashboards. So basically, Russia has an enormous, sprawling disinformation campaign that is ongoing in the United States. That is 100 percent true; it’s something that we know.
For a while, a bunch of websites were using these dashboards as, like, “Look at what the Russians are doing.” That’s not what the Russians are doing. That’s people who sometimes take Russian talking points at face value and sort of regurgitate them.
So that’s one thing. I think more scarily, because this has more public policy impact, in that thousands of people are going to the border because of it, migrant caravan conspiracy theories are everywhere. This is a moral panic that’s been fueled largely by fake memes and fake information. If you see it on Facebook, pretty much every bit of violence you see from this migrant caravan is really a photo of something else.
And the media has not done a good enough job of stamping that out, in part because I think they’re afraid of the political blowback of saying the president is working with tremendously bad information and a lot of scaremongering craziness on social media. I do want to say this is happening on all sides of the political spectrum, but one side is clearly, right now, considerably worse at containing it.
DF: Where do we go from here? What kind of advice would you have for other journalists who are trying to do their best and avoid spreading this stuff?
BC: Sure. Just stick to facts. I think that’s the most important thing, you know — find out the provenance of this stuff, find out where it comes from and tell people. If it’s anonymous, it’s anonymous, and it’s usually anonymous for a reason, especially if they’re spreading really fantastical tales about, you know, immigrants and other persecuted people. If it’s coming from an anonymous account or, you know, Freedom Patriot.eagle or whatever the hell fake news site, it’s important to tell people that’s where it came from.
And even more so if it came from somebody like InfoWars or Paul Joseph Watson or something like that — somebody who’s known to fearmonger and spread things without looking them up first. Always provide context about the messenger; that’s the number one thing to do. Because if they have some sort of political motive, then that’s 90 percent of the problem, right? So go back to where the info came from, really lay out the context around that thing, and call a spade a spade, too.
Call a conspiracy theorist a conspiracy theorist. If people are known for grift and, like, general weird internet behavior, say that, right next to the name. Every time. It’s very important, because that’s how you avoid carrying the fire onward when you’re trying to contain it. So that would be the most important thing. And also, if it’s just a guy on 4chan saying a thing and it really hasn’t gotten pickup yet, don’t give them the oxygen they want. That is not a useful thing. Wait till it has some sort of real life before you start, because what you think is debunking might actually just be fanning the flames a little bit.
The Spike – 11:00
DF: When a hoax jumps from 4chan to Facebook or Twitter, it becomes more newsworthy. When the president of the United States repeats one, even more so. But not all hoaxes need to be reported on.
In May, Whitney Phillips, an assistant professor of communication, culture and digital technologies at Syracuse University, published a report titled “The Oxygen of Amplification.” In it, she makes the case that by covering all misinformation, journalists can inadvertently make the problem worse.
DF: Hey Whitney, thanks so much for joining us. I really appreciate you being here.
Whitney Phillips: Thanks so much for having me.
DF: Today’s episode is all about amplification: how we can report on misinformation responsibly. And you’ve actually done a lot of good research on this; you authored a report called “The Oxygen of Amplification.” Would you mind walking us through that report and some of the things it found?
WP: So I think the best entry point into the report is why I decided to write it, because that encapsulates the issues I was trying, at least, to start articulating. During the 2016 election, of course, I was getting a lot of press requests about trolls and 4chan and all of that stuff. And the reason I did was because I had been studying trolling subculture, and basically internet ugliness, for the better part of 10 years.
And so I was fielding a lot of press requests and just sort of realized that I was noticing an increasing amount of anxiety in the voices of the journalists I was speaking with. This became even more apparent after the Charlottesville white supremacist rally, where there was, I think, a kind of reckoning, a greater awareness of the real-world implications of online behavior.
And so after that point in particular, the journalists I was working with, the ones interviewing me, just seemed so stressed out. I had one conversation with a particular journalist, and we were talking about the dangers, the ambivalence, of amplifying extremist content, because of course that would send it out to even more audiences and could potentially do more damage, even as it explained what the behaviors were and what they meant. In that back-and-forth, the reporter sort of mused that she wished someone could write a best practices guide for how journalists could do this, because she had not encountered one in her own newsroom and wasn’t aware of any newsrooms that had really structurally dealt with the problem of amplification, particularly in the context of white supremacist speech and other kinds of hate speech and media manipulation tactics. And I remember thinking, “Yeah, that sounds like a great idea — wait a second.”
DF: And the inspiration for this report I find really fascinating, right? It’s that nervousness about reporting on misinformation and the potential for amplifying it. What have been some missteps you’ve seen reporters take when they’re covering misinformation? What are some common errors they make in this area?
WP: So what reporters do when they’re doing their job well is to not editorialize, to talk about the news as it’s unfolding. But by not editorializing, by not coming right out and saying, “OK, so this person is a known liar, so you really shouldn’t trust anything that they say,” by not situating the interviewee in a way that would frankly editorialize and show the reporter’s hand, coverage would accidentally — again, totally inadvertently — lend credence to those sources.
It was the effect of journalists kind of going by established best practices that allowed a lot of these manipulators to really hijack conversations in ways that were insidious. A lot of people got duped because they were being good at their job, basically.
DF: And this whole thing puts journalists in a really tough spot, right? Like, it’s obviously their job to find things that are newsworthy and eye-catching online and report them out so that readers will come to the site and find interest in them. But at the same time, they have an obligation not to amplify false narratives from people with bad intentions. So where would you draw that line?
WP: I use this in the report as sort of a shining example of what to do and how to think about these kinds of stories, and that is the Tipping Point Criterion. If you are confronted with a particular behavior, or, you know, a certain meme, anything online that you might find, and that thing — that artifact or behavior — is only relevant to the people within that community, all reporting is going to do is take that narrative, or artifact or whatever, and make it more visible and more likely to become a real, genuine, authentically newsworthy story.
And this also applies to, you know, hoaxes or other false narratives. If the only people who care about those false narratives are a small handful of people in one localized spot online, there’s just no way to report on that in a way that isn’t going to make the story bigger. Now, the problem is that because of social media, everyday people and everyday manipulators have their own mechanisms for amplification. Of course, amplification happens so much faster, with much more drastic effects, when a journalist engages with something, but those manipulators still have their own kind of mini media ecosystems.
Sometimes the issue is that a particular story or controversy manages to pass the Tipping Point Criterion on its own; it really does become a sort of authentic, organic story emerging from a particular community, but it ends up impacting people outside of that community. It then becomes a trickier calculus, because on one hand the story is maybe newsworthy by a certain calculus: People outside the community are engaging with something.
And so it might trigger this idea that, well, as a reporter, I need to report on it. But if that artifact, that narrative, whatever it is, happens to be explicitly false or dehumanizing or might put someone in danger, then reporting on it — even though it’s past that Tipping Point Criterion — can still ultimately do more harm than good by kind of finalizing the amplification and making it spread even further, to even more audiences.
RELATED ARTICLE: How a far-right conspiracy theory went from 4chan to T-shirts
DF: Stepping back a bit, when we consider all these things you’re saying, what are a few things that fact-checkers and reporters should consider incorporating into their reporting process to ensure that they’re not amplifying some of this junk, and that they’re making more responsible reporting decisions?
WP: I think tethering the particular story to broader narratives. One of the big problems is reporting that essentially just points at something. The kinds of articles that can result in the most ambivalent outcomes, one way to put it, are the listicle-type articles, where the whole point of the article is to say, “Hey everybody, we just want you to know that this thing is happening,” and then you provide examples of whatever it is.
That can be really dangerous depending on, you know, what the story is and what the stakes happen to be, because it’s just sort of pure amplification, and it provides essentially a, I don’t know, a repository. Like, if you’re talking about a listicle of the most offensive examples of a particular meme, now it’s sort of Google search indexed.
But if you’re taking that same content and situating it within much broader cultural conversations, or tethering it, for example, to how and why certain information travels in certain ways on social media, or connecting it to issues of moderation practices or something that’s a deeper, bigger issue, I think that then becomes a different conversation. Because then you’re not just pointing; you’re helping people understand the ecosystem.
And in the same vein, I think it’s really helpful for reporters to step back and kind of go meta a little bit — to talk about cycles of amplification and essentially model thoughtful self-reflection. I think it’s helpful to sort of see the thought process on the published page, but also to model those kinds of reflection techniques for readers, because oftentimes, even now, amplification is something that is kind of regarded as synonymous with just being on the internet.
Everyday people — so not reporters, but everyday folks — retweet things, even to call attention to how bad something is. They comment on stuff in ways that, you know, will float that content to the top of an algorithm if enough people are engaging with it.
So getting people to think that commenting on something isn’t just an act of commenting — it’s also an act of amplification, and that makes content spread further. That doesn’t mean we shouldn’t comment. It doesn’t mean we shouldn’t retweet. It means we just need to take a moment and think about what we’re doing and how we all fit into those chains of amplification, and how we can maybe make more conscientious choices about the things we’re engaging with online.
This episode of (Mis)informed was produced by Vanya Tsvetkova, an interactive learning producer at Poynter’s News University. It was edited by Alexios Mantzarlis, with additional editing and creative direction from Alex Laughlin.