May 13, 2021

Day three of the United Facts of America festival started off with some lighthearted banter as PolitiFact executive director Aaron Sharockman informed Sen. Mark Warner, D-Va., that he was one of the more accurate members of Congress. Sharockman complimented Warner for having zero claims rated “Pants on Fire.”

“That feels like it should be a low bar. Unfortunately, I don’t think it is,” Warner said.

The two moved on to a conversation on social media platforms and the role the government can play in regulating issues around algorithmic amplification and offline harms. Warner acknowledged that social media companies have improved their content moderation since the 2016 election, but argued the U.S. government can’t rely on social media companies’ self-regulation.

“We need to put in rules of the road, and I think our failure to put in rules of the road is ceding traditional American leadership in an extraordinarily important media field,” Warner said. He suggested Congress could pass legislation enabling data portability so users could move their content between platforms, similar to how cell phone users can keep the same phone number when changing network providers. Warner also advocated for more transparency around how social media companies profit from user data, as well as regulating what he called clearly deceptive practices or “dark patterns.”
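
To make the number-portability analogy concrete, here is a minimal, hypothetical Python sketch of what a platform-neutral export bundle could look like. The `PortableProfile` schema and its field names are invented for illustration and do not correspond to any real API or to specific language in Warner’s proposal:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical schema, for illustration only.
@dataclass
class PortablePost:
    """One piece of user content in a platform-neutral form."""
    created_at: str                 # ISO 8601 timestamp
    text: str
    media_urls: list = field(default_factory=list)

@dataclass
class PortableProfile:
    """Hypothetical export bundle a user could carry between platforms,
    analogous to keeping a phone number when switching carriers."""
    user_handle: str
    contacts: list                  # handles the user follows
    posts: list                     # PortablePost entries

def export_profile(profile: PortableProfile) -> str:
    """Serialize the bundle to JSON so another platform could import it."""
    return json.dumps(asdict(profile), indent=2)

if __name__ == "__main__":
    bundle = PortableProfile(
        user_handle="@example",
        contacts=["@friend1", "@friend2"],
        posts=[PortablePost("2021-05-13T09:00:00Z", "Hello, new platform!")],
    )
    print(export_profile(bundle))
```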

Shifting to Section 230 of the Communications Decency Act — the 1996 law that shields platforms from liability for content posted to their websites — Warner argued the law needs to be updated to account for the role algorithms and company policies play in enabling offline harms.

“We’ve seen these companies go from simply a kind of listing of services or non-manipulated information to where some of the most sophisticated algorithms ever created tailor information to each of us,” Warner said. He gave the example of how Facebook was used in Myanmar to foment a genocide of Rohingya Muslims. “Our legislation would allow those claims to at least get into court and be litigated.”

Sharockman shifted the conversation to the gulf between Democrats and Republicans over how to regulate social media companies. Republicans have argued social media companies are censoring conservative speech, while Democrats have argued the companies are enabling offline harm.

Warner acknowledged the perception among some Republican lawmakers that social media companies have a political bias but countered that it’s a relatively small yet vocal group. Instead, he argued that social media companies’ only bias is toward user growth, borrowing the journalism maxim, “If it bleeds, it leads.”

“At the end of the day, the Facebooks of the world are trying to keep eyeballs,” Warner said. He added that more focus needs to shift to how social media companies operate and how they allow extreme content to fester and grow.


Behind the scenes of Facebook’s Third-Party Fact-Checking Program

Keren Goldshlager, news integrity partnerships lead at Facebook, and Justine Isola, Facebook’s head of misinformation policy, gave viewers a sneak peek into the inner workings of the company’s Third-Party Fact-Checking Program: how it operates, how it has evolved, and the interface fact-checkers use to evaluate content on the platform.

Isola said that the impetus behind creating the program in 2016 was to combine journalistic insights with technical scale.

“From early on, we really wanted to be very careful about who had the power to evaluate truth or falsity,” Isola said. “And we really wanted to make sure that we were bringing in professional journalists with the expertise to look into the facts and find out what was true and false.”

Goldshlager talked about how the program evolved from a handful of fact-checkers in the United States and Western Europe — including PolitiFact, which remains a partner today — to now include more than 80 fact-checking organizations working in more than 60 languages. She said this has helped the company get a more global perspective on the flow of false information.

Isola presented a graphic showing the ways users encounter fact-checked content in their news feeds. Fact-checked posts often appear with a grayed-out screen, an explanation of why the post was flagged and a link to the fact-checker’s article. Isola said that 95% of the time users encounter a fact-check label, they don’t go on to share or engage with that post.
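
As a rough illustration of that flow, and not Facebook’s actual implementation, a labeling step might attach an interstitial payload like the one in this Python sketch; the structure and field names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data model, for illustration only.
@dataclass
class FactCheckLabel:
    rating: str        # e.g. "False" or "Partly False"
    explanation: str   # why the post was flagged
    article_url: str   # link to the fact-checker's full article

def render_post(post_text: str, label: Optional[FactCheckLabel]) -> dict:
    """Unlabeled posts render normally; labeled posts are covered by a
    grayed-out interstitial that explains the flag and links out."""
    if label is None:
        return {"text": post_text, "overlay": None}
    return {
        "text": post_text,
        "overlay": {
            "style": "grayed_out",
            "headline": f"{label.rating} information",
            "explanation": label.explanation,
            "see_why_link": label.article_url,
        },
    }
```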

Sharockman noted that while posts get labeled, they rarely get removed, and asked Isola to elaborate on that policy. She said that while Facebook’s community standards allow it to remove violating content, the company doesn’t require everything users post to be true.

“We really want to make sure that we’re not interfering with debates between friends, in-jokes, in-arguments and get that balance right,” Isola said. “There’s really a tension between a company that is focused on free expression and also promoting a safe and authentic community.”

Sharockman shared a slide offering viewers a mock-up of the interface fact-checkers use to evaluate content on Facebook. Goldshlager said the interface is designed to give fact-checkers a sense of how viral a claim is and whether it’s part of a broader trend, to help them decide what to cover. She also said that Facebook uses its partners’ work to train its algorithms to find similar versions or copies of fact-checked content and reduce their distribution.
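
Facebook hasn’t published how this matching works, but the general technique of finding near-copies of an already-debunked claim at scale can be sketched with word shingling and Jaccard similarity. The following Python sketch is a minimal illustration, not the company’s method; the tokenizer, shingle size and threshold are all assumptions:

```python
import re

# Minimal near-duplicate detector; parameters are illustrative assumptions.
def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping word n-grams, ignoring case and punctuation."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_near_duplicate(candidate: str, debunked: str, threshold: float = 0.6) -> bool:
    """Flag a post as a likely copy of an already fact-checked claim."""
    return jaccard(shingles(candidate), shingles(debunked)) >= threshold

# A lightly reworded copy of a debunked claim still matches.
debunked = "A map shows wildfires stopping at the Canadian border, proving a conspiracy"
repost = "This map shows wildfires stopping at the Canadian border, proving a conspiracy!"
print(is_near_duplicate(repost, debunked))  # True (Jaccard ≈ 0.82)
```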

“The reason we do that again is because we want to use our fact-checking partners’ journalistic insights, while we take some of that technical work to look for that content at scale,” Goldshlager said. Sharockman stressed that fact-checkers’ role is simply to evaluate the veracity of the content rather than make decisions about how it’s moderated.

Isola summarized some of Facebook’s work around COVID-19, including its information center aimed at providing users with up-to-date information about the virus and treatments. She said that while fact-checkers have been pivotal in helping detect and remove some of the most harmful misinformation, the company has been focused on serving its users authoritative health information, including allowing users to ask legitimate questions about vaccines as a way to cut back on vaccine hesitancy.

Goldshlager addressed the company’s repeat-offender policy in reference to anti-vaccine superspreaders. She said the company levies penalties against repeat offenders based on the severity and frequency of their offenses, with the goal of getting users to correct their behavior rather than punishing them indefinitely.
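
The exact mechanics aren’t public, but a penalty system with those properties (weighted by severity, escalating with frequency, and expiring over time so users can return to good standing) might look something like this sketch, in which the lifetime, weights and tiers are invented for illustration:

```python
from datetime import datetime, timedelta

# Illustrative values only; Facebook's actual numbers are not public.
STRIKE_LIFETIME = timedelta(days=90)                  # strikes expire over time
SEVERITY_WEIGHT = {"low": 1, "medium": 2, "high": 3}  # severity scales the penalty

class OffenderRecord:
    """Tracks a user's strikes and maps them to escalating, non-permanent penalties."""

    def __init__(self):
        self.strikes = []  # (timestamp, severity) pairs

    def add_strike(self, severity: str, when: datetime = None):
        self.strikes.append((when or datetime.utcnow(), severity))

    def active_score(self, now: datetime = None) -> int:
        """Sum severity-weighted strikes that haven't yet expired."""
        now = now or datetime.utcnow()
        return sum(SEVERITY_WEIGHT[sev]
                   for ts, sev in self.strikes
                   if now - ts < STRIKE_LIFETIME)

    def penalty(self, now: datetime = None) -> str:
        """Escalating tiers; expiry means a corrected user returns to 'none'."""
        score = self.active_score(now)
        if score >= 6:
            return "distribution severely reduced"
        if score >= 3:
            return "reduced reach plus warning"
        if score >= 1:
            return "warning"
        return "none"
```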

Isola fielded an audience question about whether the platform intends to let its fact-checking partners check politicians’ speech. She said she had no updates on the policy, but reiterated the company’s position that politicians should be scrutinized directly by users, who have plenty of resources both on and off the platform to do so.


How a firefighter went viral debunking wildfire claims on TikTok

Wildland firefighter Michael Clark told PolitiFact’s Josie Hollingsworth that his viral reaction videos debunking false wildfire claims were born out of a sense of frustration. His first reaction video debunked a claim that a firefighting drone conducting prescribed burns was responsible for the fires.

“That one really kind of bugged me because it’s what I do for work,” Clark said. After seeing widespread acceptance in the comments section, he debunked a second video claiming that a map showing wildfires stopping at the Canadian border was proof of a government conspiracy.

“It really just made me really reach out and made me want to share my information saying, ‘Hey, here’s the facts. You’re looking at a U.S. map. You’re not looking at the whole thing,’” he said. Clark said people will make assumptions about things like fire because they don’t have the same background information as someone with his expertise. He said it’s incumbent on people like himself who see falsehoods about their fields to set the record straight and share what they know with the public.

The state of misinformation online

The last panel of the day focused on misinformation superspreaders and how social media companies are responding to the challenge. NBC News senior reporter Brandy Zadrozny echoed Sen. Warner’s sentiments from earlier in the day that the companies had gotten better but had room for improvement. She referenced comments from First Draft strategy and research lead Claire Wardle about how it’s hard to evaluate social media companies that are “grading their own homework.”

“We get a couple of news releases every once in a while saying, ‘Hey, you know, YouTube has taken off 30,000 COVID videos.’ Wow, that’s very exciting,” Zadrozny said sarcastically, “but we don’t know what that means.” She credited Facebook with some transparency in sharing statistics about viral content through products like CrowdTangle, but argued the company has to do more to crack down on misinformation spreading in easily accessible public groups, which has led to demonstrable examples of public harm.

Wall Street Journal tech reporter Jeff Horwitz said that the platforms have made a concerted effort to crack down on misinformation in areas like COVID-19, but they’ve struggled with heavily trafficked influencers and groups actively spreading disinformation.

“Misinformation actually feeds extremism, and it works together to grow an audience, and so I think that’s part of why it’s so popular,” Zadrozny said. She argued that Facebook’s algorithms keep people on the platform by feeding them more of what they want to see, which has the potential to create more extreme environments.

IFCN director Baybars Örsek said the platforms still have more work to do when it comes to combating misinformation outside the United States and in languages other than English. Though Facebook is a U.S.-based company, Örsek noted its global reach.

“I think we need to make sure that our demands as fact-checkers, as reporters, as scholars, we need to point out some concrete steps to demand a globally applicable approach from those tech companies to address misinformation and hate speech in our world,” Örsek said.

PolitiFact deputy editor Krishnan Anantharaman asked Örsek about recent efforts in Michigan to force fact-checkers to register with the state and to protect people who’ve been fact-checked. Örsek criticized the move as a clear violation of the First Amendment and noted that fact-checkers on Facebook cannot check political speech. He also acknowledged that much of the criticism of fact-checkers stems from a fundamental misunderstanding of the work they do.

“We all have a common understanding that fact-checkers share a common set of principles and values in holding the powerful in society accountable,” Örsek said. “The more we communicate better about this, there will be less room for confusion.”

Thursday’s schedule includes an interview with CNN’s Christiane Amanpour about trust and the news media in the U.S. and around the world.

