SARAJEVO, Bosnia and Herzegovina — In her opening remarks at the world’s largest fact-checking summit this week, International Fact-Checking Network director Angie Drobnic Holan thanked those who had made the conference possible. The first three names she listed: TikTok, Google and Meta.
Not even an hour later, Nobel Prize winner and keynote speaker Maria Ressa took to the same stage and blasted those very companies to loud applause: “Thank you Meta, TikTok for funding fact-checking, but I mean, frankly, you just wanted distance from actually doing it yourself.
“We’re frenemies.”
Time and time again during GlobalFact 11, IFCN’s annual fact-checking conference, fact-checkers criticized tech platforms onstage, all the while flanked by a large screen thanking the companies for their financial sponsorship of the event.
The conference was illustrative of the complicated relationship many fact-checkers have with tech companies. On the one hand, those companies often provide considerable monetary support to fact-checking organizations. They also host the monitoring tools fact-checkers use to track mis- and disinformation.
But fact-checkers say the companies do not do enough to stem the flow of that mis- and disinformation. Moderation decisions and algorithms from these companies are shrouded in secrecy, and fact-checkers struggle to do their jobs effectively under a barrage of falsehoods and harassment.
The stakes are high. Billions of people around the world use social media, which has become the site of mis- and disinformation campaigns. Fact-checkers worry that false information spread on social media can jeopardize elections and influence public policy.
During a panel with representatives from TikTok, moderator Tommaso Canetta, who is Pagella Politica and Facta.news’ deputy director, brought up Georgia’s new “foreign agent” law, which brands organizations that receive foreign funding, like NGOs and some media outlets, as foreign agents. Before the law was approved, disinformation about the bill thrived on TikTok, and Georgian fact-checkers faced heavy harassment.
Though TikTok T&S integrity and authenticity regional programme lead Lorenzo Andreozzi said the company is planning meetings with local civil society organizations and the Georgian electoral commission, FactCheck Georgia project manager Mariam Tsitsikashvili said the company’s efforts are too late.
“The law is already adopted. Damage is already done. I can assure you that TikTok definitely contributed to the erosion of democracy in Georgia,” Tsitsikashvili said. “Thank you if you’re seriously considering doing something right now in Georgia, but I’m not sure independent fact-checkers will be there when you make your decision and come to Georgia. There might not be independent voices left.”
TikTok and Meta
TikTok and Meta representatives held panels at GlobalFact Wednesday and Thursday, respectively, to answer questions and address concerns from the fact-checking community.
Both companies partner with fact-checkers to combat misinformation on their platforms. But fact-checkers say the tech giants could do more to address misinformation, criticizing the platforms’ policies as opaque and unevenly applied.
TikTok, for example, has a policy that centers mostly on removing misinformation instead of providing additional information and context, Canetta said. Such a policy raises freedom of speech issues. Fact-checkers also believe that giving users additional context is more effective at fighting and preventing misinformation than removing the offending content completely.
Andreozzi pointed out that the company also has enforcement mechanisms beyond removal, such as the labeling of unverified content. He said the company is doing more internal testing with labels but declined to provide additional details, including when users could expect the initiative to be rolled out.
During the panel with Meta, Faktograf executive director and moderator Ana Brakus pushed representatives to explain why the company is de-emphasizing news content on users’ pages at a time when misinformation is so rampant.
Meta public policy manager Lara Levet explained that decision stems from the company’s goal to allow users to control their own feeds.
“We’ve received quite a lot of feedback from our users on the kind of content that they do and don’t want to see more or less of. And it’s not quite news content, but it’s political content that at large, we have gotten feedback that people want to see less of,” Levet said. “Meta products are rooted in personalization, so if a user wants to see less political content, they have the user controls to do that.”
Several fact-checkers said they have seen journalistic organizations’ content get falsely flagged as misinformation on the companies’ platforms. Konkret24 journalist and fact-checker Gabriela Sieczkowska, for example, told Poynter that she has seen media from Palestinian fact-checkers mislabeled on Instagram as harmful content.
Belarusian fact-checkers have faced similar issues on TikTok, Canetta said during his panel with company representatives. Last year, the Belarus Investigative Center had several of its videos flagged on TikTok as disinformation or violence/terrorism because they contained content referencing the original disinformation that they were debunking.
Andreozzi apologized for any errors TikTok had made in mislabeling fact-checkers’ content and urged fact-checkers to report any errors.
“There is also a way to request a second review,” Andreozzi said. “You can appeal a decision that a moderator took, so the content goes back to us, and we can have a secondary assessment of the content.”
Financial ties
Complicating many fact-checking organizations’ relationships with tech platforms is their heavy reliance on the platforms for money. Polígrafo director of operations Filipe Pardal, for example, told Poynter that partnerships with platforms made up 85% of his organization’s revenue last year. This year, Polígrafo is trying to diversify its revenue streams. Pardal estimates that percentage is now 50 to 60%, and the eventual goal is 30%.
That breakdown in revenue sources is not uncommon. At a panel on funding independent fact-checking organizations Thursday, representatives from other outlets shared similar figures. Factly Media & Research founder and CEO Rakesh Dubbudu said 70% of his organization’s revenue comes from platform partnerships. At Pagella Politica and Facta.news, that figure is 60 to 65%, said director Giovanni Zagni.
Meedan CEO Edward Bice said he has noticed platforms, along with foundations, slow their funding in recent years.
“Funding in fact-checking was really easy five years ago, when everybody was deer-in-headlights around the Trump era,” Bice said. “The shifts that we’re seeing now with platforms are they are hesitant about funding fact-checking.”
The vast majority of the 136 fact-checking organizations who responded to the IFCN’s 2023 “State of Fact-Checking” survey stated that the biggest challenge they face is securing funding and becoming financially sustainable. Given the precarious state of a business model that is dependent on the platforms, many fact-checking organizations are seeking to diversify their revenue streams.
The advent of artificial intelligence has also made fact-checkers nervous. At the panels with Meta and TikTok, moderators asked representatives to affirm their commitment to their fact-checking programs and to working with humans in their fight against misinformation.
Both organizations said they already use AI to identify content that violates their policies. But at Meta, the “human job” of investigating claims and establishing what is true remains a task for fact-checkers, said Tom Bonsundy-O’Bryan, Meta’s head of misinformation policy for Europe, Middle East and Africa.
“We use AI on misinformation in a different way, as you know, to help surface content based on human signals, based on technology-driven signals that could be misinformation,” Bonsundy-O’Bryan said. “So the fact-checkers can then go and do 90% of the job of working out, is this misinfo or not? It is absolutely not, unequivocally, substituting for fact-checkers.”
TikTok representatives said they couldn’t guarantee that the company would always allow humans, not AI, to make the final decision when reviewing potentially problematic content. But Jakub Olek, the government relations and public policy director of the Nordics and Central Europe at TikTok, said current moderation procedures could provide a “hint” as to what fact-checkers can expect in the future.
“Ninety-eight percent of the content that is being removed before anyone sees it — it’s exactly because the AI is doing the moderation under clear situations, whether it’s violence, hate, nudity, etc.,” Olek said. “But whenever there’s this gray zone, those come to the human moderators, and they are moderating in local languages.”
A ‘stalemate’
Two years ago, more than 80 fact-checking organizations around the world sent YouTube an open letter demanding that the company take stronger action against misinformation on the platform.
The letter included four demands — increased transparency around YouTube’s moderation policy, applying more context to videos, harsher action against repeat offenders and more attention to misinformation in languages other than English.
Those demands have largely gone unanswered, said speakers on “The problem with YouTube and Fact-Checking” panel Thursday. Though European Fact-Checking Standards Network chair and panel moderator Carlos Hernández-Echevarría conceded that YouTube has been more forthcoming in speaking with fact-checkers and has made resources available, he said the company has not done enough to protect its users against misinformation.
“It is the common view of the whole stage that things haven’t really changed in terms of the user experience in YouTube,” Hernández-Echevarría said.
Representatives from YouTube were not present at GlobalFact 11 to take questions from fact-checkers. But even in the cases of TikTok and Meta, which did host panels, some attendees said the representatives from the tech companies didn’t share anything new.
“The situation has been some kind of stalemate, where parties agree on some things 100%,” Mongolian Fact-Checking Center senior fact-checker Bilguun Shinebayar told Poynter. “I expect tech companies like YouTube, TikTok and Meta to do more — that’s very obvious. But at the same time, they have their own incentives to retain the status quo.”
The discrepancy in values between fact-checkers and tech companies was apparent during panels with company representatives. On Facebook, for example, private content cannot be fact-checked. Brakus questioned this policy, pointing out that some closed groups can have tens of thousands of members: “They have become, in many, many cases, basically sources of disinformation, forums where people meet and discuss how to — especially in our case — attack and harass fact-checkers.”
Bonsundy-O’Bryan explained that one of Facebook’s core values is privacy, which sometimes conflicts with its value of safety. “We’ve tried to balance this, recognizing that in different countries, different societies — sometimes different sides of the same room of 10 people — you’ll have different views about, where do you draw that line between privacy and safety? We’ve drawn it in the place that public content should be eligible to be fact-checked, private content shouldn’t.”
Sieczkowska told Poynter that she finds it unacceptable that large companies like Meta don’t do more to fight misinformation. She said that in Poland, fact-checkers are vocal about the issues they face on the platform, but their concerns are often ignored.
“It’s about the safety of the users and their right to be well-informed and be safe in this community,” Sieczkowska said. “They trust Facebook, they trust Instagram. That’s why they use it. And they still are not protected enough.”