3 questions about a military anti-disinformation project
Last month, we asked who was going to lead the U.S. government’s war on disinformation. Now, an effort in one obscure corner of the federal bureaucracy appears to be taking shape.
At the end of August, the Defense Advanced Research Projects Agency (DARPA), an arm of the Department of Defense, announced that it was working on a project to detect and counter online disinformation. The initiative, called Semantic Forensics or “SemaFor,” is aimed at developing “technologies to automatically detect, attribute, and characterize falsified multi-modal media assets.”
In short: DARPA wants to use custom software to fight misinformation. And, for us, that raises three big questions, rooted in reporting 101: What, how and why?
1. What?
As Gizmodo noted in its piece on SemaFor, DARPA is an agency that has become known for its out-there technology. It once made a vacuum pump the size of a penny just because it could.
DARPA has essentially said it wants to develop three different algorithms. The first would identify manipulated media, the second would determine where the media came from and the third would somehow figure out whether the media were “generated or manipulated for malicious purposes.”
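To make that three-pronged design concrete, here is a purely illustrative Python sketch of how such a pipeline could fit together. Every name, field and threshold in it is our own invention for the sake of the example; DARPA has published goals, not code.

```python
from dataclasses import dataclass

# Illustrative only: a guess at how three SemaFor-style stages might
# compose. None of these names or thresholds come from DARPA.

@dataclass
class Analysis:
    manipulated: bool   # detection: was the asset falsified?
    likely_source: str  # attribution: where did it come from?
    malicious: bool     # characterization: was the intent malicious?

def detect(asset: bytes) -> float:
    """Stand-in for a detection model; would return a manipulation score in [0, 1]."""
    raise NotImplementedError("a trained model would go here")

def attribute(asset: bytes) -> str:
    """Stand-in for an attribution model; would return a suspected origin."""
    raise NotImplementedError("a trained model would go here")

def characterize(asset: bytes) -> float:
    """Stand-in for intent scoring; would return a malice score in [0, 1]."""
    raise NotImplementedError("a trained model would go here")

def analyze(asset: bytes, threshold: float = 0.5) -> Analysis:
    """Run the three stages in sequence and bundle the verdicts."""
    return Analysis(
        manipulated=detect(asset) > threshold,
        likely_source=attribute(asset),
        malicious=characterize(asset) > threshold,
    )
```

The hard problems, of course, all live behind those three stubs.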
The pursuit of a one-size-fits-all approach to countering online falsehoods is hardly unique. A cottage industry has sprung up around the idea that artificial intelligence and machine learning models can somehow be implemented to both identify and counter the spread of misinformation. Because, as academics and media critics are wont to observe, there just aren’t enough fact-checkers.
2. How?
The idea of an automated system that could somehow single-handedly deal with online misinformation sounds like a tantalizing proposition. But is it even possible?
As Gizmodo noted, many existing automated models aimed at limiting the spread of falsehoods are flawed. Startups like Factmata have raised millions of dollars in seed funding to pursue AI verification, but humans are still writing those programs, allowing bias to creep in — and misinformation is rarely black and white. Will Oremus covered this problem well in a piece about credibility scores for Slate in January.
Finally, the big platforms are the elephant in the room.
Without buy-in from Facebook, Twitter or YouTube — where a lot of misinformation is spread — how would DARPA even implement its three-pronged algorithm program? Sure, Twitter has a relatively open API (that’s why there is so much misinformation research about the platform), but Facebook’s is notoriously closed off. And it’s hard to imagine a world in which these companies would willingly give the Defense Department the keys to their products.
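Twitter’s openness is worth a concrete illustration. At the time of writing, anyone with a developer account could pull recent tweets through the standard v1.1 search endpoint, which is a big part of why misinformation research skews toward that platform. A minimal sketch, assuming you have registered an app and obtained a bearer token (the token and query below are placeholders):

```python
import requests

# Placeholder: obtained by registering a Twitter developer app.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"

# Twitter's standard v1.1 search endpoint.
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def search_tweets(query: str, count: int = 100) -> list:
    """Fetch recent tweets matching a query string."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": query, "count": count, "result_type": "recent"},
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()["statuses"]

# Example: survey what is being said about a circulating hoax.
for tweet in search_tweets("clint eastwood hoax"):
    print(tweet["user"]["screen_name"], tweet["text"][:80])
```

No comparable firehose exists for ordinary Facebook content, which is exactly the closed-off-ness researchers complain about.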
3. Why?
This is perhaps the most important question to ask about DARPA’s anti-disinformation project.
Spurred by increasing interest in online falsehoods, governments around the world have taken a variety of actions against misinformation. These actions range from bills outlawing the spread of hoaxes online to initiatives to bolster media literacy efforts.
From the outside, these efforts seem rooted in a genuine desire to promote more facts online. But critics of government anti-misinformation attempts often suspect censorship as an ulterior motive, and anecdotal evidence suggests they’re right in at least some cases. Take Egypt, for example, where mainstream journalists are regularly imprisoned on charges of violating a law that’s supposedly aimed at criminalizing the spread of “fake news.”
Despite its strong tradition of press freedom, the U.S. isn’t exempt from these discussions of media censorship. And journalists would do well to ask hard questions about how DARPA’s proposed systems could be weaponized as the agency continues to develop them.
. . . technology
- A report about disinformation from New York University is calling on tech companies to remove “provably false” information from their platforms. “They have to take responsibility for the way their sites are misused,” Paul M. Barrett, the professor who wrote the report, told The Washington Post. Here’s the report itself.
- Facebook and Instagram are rolling out a new feature to halt the spread of misinformation about vaccines. Users in the United States will get a pop-up window connecting them to the U.S. Centers for Disease Control and Prevention, CNN reported, while non-U.S. users will be connected to the World Health Organization.
- Speaking of vaccines, Pinterest has received a lot of praise for its handling of the issue. The Washington Post’s editorial page added its endorsement to the mix this week. The Verge’s Casey Newton wrote that the platform’s smaller size and lower profile have helped facilitate these decisions.
. . . politics
- The Washington Post Fact Checker has launched a guide to verifying campaign ads. In a way, this is a full-circle moment for fact-checking, which has roots in Brooks Jackson’s “ad police” checks for CNN in the 1990s.
- Thailand wants to launch a Fake News Center in November to combat online scams. According to the Bangkok Post, its government is “working to iron out a framework for fake news detection that is compatible with the practices upheld by the International Fact-Checking Network.” Thailand, however, hasn’t reached out to the IFCN yet.
- Fact-checkers in Indonesia had a tough August. Cristina talked to Ika Ningtyas, from Tempo, to understand how fact-checkers there could debunk stories about separatist protests and a planned new capital amid a sweeping internet shutdown.
. . . the future of news
- Data & Society is out with a new report by Joan Donovan and Brian Friedberg. Called “Source Hacking: Media Manipulation in Practice,” the report explains in detail how online manipulators often use specific techniques to hide the source of the false and problematic information they circulate, typically during breaking news events.
- In a Q&A, The New York Times’ Matthew Rosenberg described the tools and strategies he uses on his beat covering disinformation in politics.
- A new book by Richard Stengel, “Information Wars: How We Lost the Global Battle Against Disinformation & What We Can Do About It,” ought to be a wake-up call, wrote Washington Post columnist David Ignatius. “In the end, people will get the news media they deserve: If they consume false information, they’re certain to get more of it,” he said.
Early this week, while debunking yet another celebrity death hoax, Lead Stories noticed two sophisticated tricks being used to fool people and spread misinformation online quickly and in ways that are hard to control. Here is what happened.
On Monday, Lead Stories saw a YouTube video claiming Clint Eastwood had died. It was obviously false: the U.S. actor and director is alive and well. So fact-checkers started debunking it.
But while working on the debunk, Lead Stories’ team noticed the scammers had also embedded the video in a webpage whose share description carried a fake view count. When the page was shared on Facebook, the preview claimed “10M views” instead of showing the actual number, to make people believe the video had indeed racked up that many views. (A sketch of how that preview trick works appears after this item.)
Besides that, once users clicked on it on Facebook, they didn’t see a video, but rather an image linking to a site full of banners and an embedded video player. If users tried to watch the video by clicking on it, they would get a graphic warning after a few seconds and see an “uncover now” button. Hitting that button “invited” them to share content on Facebook.
But instead of sharing the URL of the page they were on, they would share one of several dozen identical pages promoting the Eastwood death hoax.
What we liked: It is just amazing what scammers will do to get people’s attention (and their clicks), and it is just great to see how fact-checkers around the world are able to reveal it. By Tuesday morning, Lead Stories had already flagged this post as false 120 times on Facebook. And while looking for new copies to flag on the original site, they stumbled on a second hoax (this time about actor Tom Cruise) that hadn’t even been promoted yet, of which they were able to pre-emptively flag 44 copies.
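As for the mechanics of that fake “10M views” preview: Facebook builds link previews from a page’s Open Graph meta tags, so whoever controls the page controls the preview text, claimed view counts included. Below is a minimal Python sketch (standard library only; the sample HTML snippet is our own) of the kind of tag extraction a fact-checker’s tooling might start with when auditing a suspicious link.

```python
from html.parser import HTMLParser

class OGTagParser(HTMLParser):
    """Collect Open Graph <meta property="og:..."> tags from a page.

    Facebook assembles its share preview (title, description, image)
    from these tags, which is how a hoax page can claim "10M views"
    regardless of the video's real count.
    """

    def __init__(self):
        super().__init__()
        self.og_tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.og_tags[prop] = attrs.get("content", "")

# Usage: feed the parser the raw HTML of a suspect page.
parser = OGTagParser()
parser.feed('<meta property="og:description" content="10M views">')
print(parser.og_tags)  # {'og:description': '10M views'}
```

Comparing what those tags claim against the real video’s statistics is a quick way to catch this trick.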
- Info Finder, from Africa Check, now has a dedicated editor. The site provides factual answers (based on publicly available sources) to some of the most frequently asked questions sent by users on 14 topics, including agriculture, crime, economy, education, health and migration, covering Kenya, Nigeria and South Africa. For now, it is available only in English; French is coming soon.
- Agência Lupa has launched “Verifica,” the first fact-checking podcast in Portuguese. It is a 20-minute production available every Wednesday on Apple Podcasts, Breaker, Castbox, Google Podcasts, Overcast, Pocket Casts, RadioPublic, Spotify and Stitcher. Here is episode one.
- With Hurricane Dorian slamming the Bahamas and the U.S. East Coast, the IFCN has created a quick guide for dispelling myths and hoaxes surrounding the storm. The Associated Press did one, too.
- Condé Nast’s New Yorker magazine will hire its subcontracted fact-checkers and editors as direct employees. Editorial staff said their subcontractor status encouraged them to work more and complain less in hopes of becoming full-fledged employees.
- For combining journalists and researchers, and for publishing more than 110 fact checks across multiple platforms in two years, the RMIT ABC Fact Check team won the Business of Higher Education Round Table Award in Brisbane, Australia.
- Canada is planning a coordinated attack on disinformation in an effort to protect this fall’s elections, according to Politico.
- Also from Politico: The head of the U.S. Federal Election Commission will hold a symposium Sept. 17 with officials from Google, Facebook and Twitter to talk about election disinformation.
- Writing in The Nation, Joan Walsh wondered whether, “in the post-Trump world, factual details don’t matter as much as gut feelings.” She was referring to former Vice President Joe Biden’s recent mistakes in recounting a story about a soldier in Afghanistan.
- The IFCN has launched an Instagram channel. Come join us there, too.
- Not tired of reading about misinformation? The Guardian offered a list of 10 books on the topic.
That’s it for this week! Feel free to send feedback and suggestions to factually@poynter.org.