There’s a hidden bit of code that’s been quietly helping counter misinformation around the world. Few people know about it — but fact-checkers in 56 countries have now used this tagging system, known as ClaimReview, more than 100,000 times to help call more attention to their reporting.
This milestone may seem pretty geeky, on the same level as the latest Marvel super-fan theories about WandaVision. But it actually signals the importance of ClaimReview and its young sibling MediaReview in the battle against misinformation.
ClaimReview is only as sexy as any bit of code. It’s just a summary of a fact-check in a consistent format: the factual claim, the person or group that made it, and the fact-checker’s conclusion or rating. Editors add it to a database or embed it in the HTML of their articles when they publish their fact-checks.
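To make that concrete, here is a rough sketch of what one tagged fact-check looks like, built in Python and printed as the schema.org markup an editor would embed in a page. The field names follow schema.org’s ClaimReview type; the URL, claim, speaker and rating values are invented for illustration.

```python
# A minimal sketch of a ClaimReview record using schema.org's field names
# (claimReviewed, itemReviewed, reviewRating, etc.). The claim, claimant and
# rating values below are invented for illustration.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/example-claim",  # hypothetical URL
    "datePublished": "2021-02-15",
    "author": {  # the fact-checking organization
        "@type": "Organization",
        "name": "Example Fact-Check Desk"
    },
    "claimReviewed": "The factual claim being checked, quoted or paraphrased.",
    "itemReviewed": {  # who made the claim, and when
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Speaker"},
        "datePublished": "2021-02-10"
    },
    "reviewRating": {  # the fact-checker's conclusion
        "@type": "Rating",
        "alternateName": "False"  # e.g. True, Half True, False
    }
}

# Editors typically publish this as a JSON-LD <script> tag in the article's HTML.
print('<script type="application/ld+json">')
print(json.dumps(claim_review, indent=2))
print("</script>")
```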
The product of a collaboration among Google, Jigsaw, Schema.org, the fact-checking community and the Duke Reporters’ Lab, ClaimReview was conceived as a way to help fact-checkers get their articles highlighted in search results. (Disclosure: The Lab gets funding from Google and Facebook for various projects involving ClaimReview and MediaReview.) But over the last three years, we’ve realized it has additional uses that do not involve the tech platforms. That has been a happy accident, and one that can help the battle against misinformation.
ClaimReview is now big enough to make a difference. In our lab at Duke, we have used ClaimReview to power “Squash,” a groundbreaking experiment in automated fact-checking. During the party conventions and the presidential debates, our Squash system detected what speakers said, matched it with fact-checks that were tagged with ClaimReview and then displayed summaries of those fact-checks on the screen. The process is not quite ready for prime time, but the results were still remarkable.
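The live Squash pipeline is far more sophisticated than anything that fits in a few lines, but the core matching idea can be sketched: take a sentence from a speech transcript, compare it against the claimReviewed text of previously tagged fact-checks, and surface the closest match above a similarity threshold. The records, similarity measure and threshold below are illustrative assumptions, not the techniques Squash actually uses.

```python
# Toy illustration of matching a transcribed sentence against ClaimReview records.
# The records, similarity measure and threshold are assumptions for illustration;
# Squash itself relies on more sophisticated speech-to-text and claim matching.
from difflib import SequenceMatcher

# A handful of pretend ClaimReview summaries (claim text plus rating).
fact_checks = [
    {"claimReviewed": "Unemployment is at its lowest level in fifty years",
     "rating": "Half True"},
    {"claimReviewed": "The new law cuts taxes for every family in the country",
     "rating": "False"},
]

def best_match(sentence, records, threshold=0.6):
    """Return the fact-check whose claim text is most similar to the sentence,
    or None if nothing clears the threshold."""
    scored = [
        (SequenceMatcher(None, sentence.lower(), r["claimReviewed"].lower()).ratio(), r)
        for r in records
    ]
    score, record = max(scored, key=lambda pair: pair[0])
    return record if score >= threshold else None

# A sentence as it might arrive from live speech-to-text.
spoken = "unemployment is now at its lowest level in 50 years"
match = best_match(spoken, fact_checks)
if match:
    print(f'On screen: "{match["claimReviewed"]}" (rated {match["rating"]})')
```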
We’ve also used ClaimReview to present the latest fact-checks in a mobile app called FactStream and for a 2016 experiment on the Amazon Echo and Google Home. (We still need to create our dream skill for the Echo, a Grandpa Alert that will trigger an alarm at the dinner table when Grandpa repeats something false that he heard on cable TV.)
In a class at Duke this semester, we are using the growing ClaimReview dataset so students can analyze falsehoods in U.S. politics. Now that the global database has topped 100,000 records, it offers the potential for academic research at a larger scale.
So far, the tech platforms have been the main users of ClaimReview. Google, Bing and YouTube use it to highlight fact-checks in search results, and Google News displays fact-checks in a prominent, dedicated box. Facebook has also used it to help identify fact-checks.
There’s untapped potential for other tech companies. Twitter has gotten headlines for its efforts to combat misinformation, including blocking the account of former President Donald Trump. But the company’s overall efforts have been more sparse than substantial, and its new plan to crowdsource factual information through a tool called Birdwatch has had a bumpy start. Twitter could use ClaimReview to truly expand its efforts and take advantage of professional fact-checkers.
The misinformation problem isn’t limited to spoken or written claims, which is why we are developing MediaReview, a sibling to ClaimReview that fact-checkers will use when they debunk false or misleading videos, images, or audio. MediaReview, like ClaimReview, will be open to anyone – the tech companies, academics and app developers.
With 100,000 snippets of code, ClaimReview has reached a critical mass. The snippets may not be sexy, but they are effective weapons in the battle against misinformation.
Bill Adair is the Knight Professor for the Practice of Journalism and Public Policy at Duke University and the founder of PolitiFact, which is owned by Poynter.
Joel Luther is an associate in research at the Duke Reporters’ Lab and manager of the ClaimReview Project.