April 3, 2023

This op-ed was published in commemoration of International Fact-Checking Day, held April 2 each year to recognize the work of fact-checkers worldwide.

The artificial intelligence chatbot ChatGPT can write almost anything you ask of it coherently, persuasively and without misspellings. Many hoaxes fail on all three counts and still manage to spread.

What if phishing creators used this tool to perfect their messages by impersonating, say, a bank? Or what if someone adapted a hoax to each country, with localized idioms and other elements of context?

Fact-checkers' alarm bells went off as we ran the first tests on ChatGPT. For example, when asked to write an article defending the use of bleach to cure COVID-19, it generated an elaborate response full of arguments. As OpenAI explains, one of the model's biggest limitations is its "hallucinations," which fill in the gaps with made-up information. The problem is that these artificial intelligence models "don't know when they don't know," said Preslav Nakov, department chair of natural language processing at the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi. Although improvements in the tool have eliminated serious errors such as its suggestion to cure COVID-19 with bleach, there are still many risks and opportunities worth exploring.

The list of concerns is long, but three points are of particular interest to us: people's trust in such tools, the potential they have to misinform or encourage false narratives, and, conversely, how they can help to improve automated fact-checking.

Users' blind trust in these models is a significant concern. The way ChatGPT generates text suggests that it is like a large database, when in fact it is a language model that composes meaningful text by predicting, very accurately, the next word in a sentence. Hence, the content it generates is not always true, all the more so given that the system has been trained on information gathered from across the internet, where examples of misinformation abound.
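To make the next-word point concrete, here is a minimal sketch of how such a model works, using the openly available GPT-2 model through the Hugging Face transformers library (an illustrative stand-in; ChatGPT's own model is not publicly available). The model does not consult a database of facts; it simply ranks the most probable continuations of a sentence.

```python
# Minimal illustration of next-word prediction with the small, open GPT-2 model.
# This is not ChatGPT, only a public model that works the same way in principle.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Probabilities for the next word only: the model ranks likely continuations,
# it does not look facts up anywhere.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r:>12}  p={prob:.3f}")
```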

However, as the GPT-4 technical report points out, ChatGPT's responses are so convincing that it "has the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction."

Another element is that the machine does not understand feelings such as hatred or polarization, but it can recognize the words typically used to convey them. Thus, it can write a message that provokes outrage or fear, the very feelings at the root of why misinformation is shared. The model also tends to reinforce users' existing beliefs, regardless of their veracity.

The potential for disinformation is enormous, not just in the creation of hoaxes, but in the ability to refine them. This risk of "improved" disinformation comes from ChatGPT's ability to adapt language to certain contexts and localize turns of phrase, making content more personalized. The tool also has the power to multiply false narratives by producing the same message written in many different ways, which could increase the amount of false content and make it difficult to measure its virality.

This potential disinformation has been a major concern of ChatGPT's creators, OpenAI, who include it among risks such as bias, overreliance, privacy and cybersecurity. OpenAI has activated filters ("safety processes") to mitigate the creation of harmful or misleading content and enabled a classifier that predicts the probability that a text has been generated by one of its systems. But some of these safeguards can be overcome with creativity, like prompting the tool to pretend to act as a certain movie character.
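As one rough illustration of this kind of screening, OpenAI exposes a public moderation endpoint that flags potentially harmful text. The sketch below uses that endpoint; the internal "safety processes" applied inside ChatGPT itself are not publicly documented, so this only approximates the general approach.

```python
# Sketch: screening a piece of text with OpenAI's public moderation endpoint.
# This illustrates the general idea of a safety filter, not ChatGPT's internal
# "safety processes," which are not public.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.moderations.create(
    input="Example text a platform might screen before publishing."
)

result = response.results[0]
print("Flagged:", result.flagged)  # True if any harm category is triggered
print(result.category_scores)      # per-category scores (hate, violence, ...)
```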

The lack of transparency around both the model and the functioning of these filters makes it impossible to know the reasoning behind them and how they are trained. This may affect local disinformation more: where less content has been published, it is easier for the model to put out false information without raising alarms.

For us, the question is also how these generative AI systems can contribute to the automation of fact-checking. For example, it is possible that tools like ChatGPT could improve the language models we are already working on for claim detection, check-worthiness, claim matching to compare claims with what is already verified, and data validation to check facts, all in an automated way. There is even more potential if the model's answers could include not only citations to reliable sources, as OpenAI expected in 2021, but also their links.
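As a hedged illustration of the claim-matching step mentioned above, a sentence-embedding model can compare an incoming claim against an archive of already-verified claims. The embedding model and the example claims below are placeholders for illustration, not Newtral's actual pipeline.

```python
# Sketch of claim matching: compare a new claim against already fact-checked ones.
# The embedding model and the example claims are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

verified_claims = [
    "Drinking bleach does not cure COVID-19.",
    "COVID-19 vaccines do not alter human DNA.",
]
new_claim = "A household disinfectant can cure coronavirus infections."

claim_emb = model.encode(new_claim, convert_to_tensor=True)
archive_emb = model.encode(verified_claims, convert_to_tensor=True)

# Cosine similarity: a high score suggests the claim may already be fact-checked.
scores = util.cos_sim(claim_emb, archive_emb)[0]
best = int(scores.argmax())
print(f"Closest match: {verified_claims[best]!r} (score={scores[best]:.2f})")
```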

As OpenAI CEO Sam Altman said in an interview with StrictlyVC, "generated text is something we all need to adapt to, and that's fine." In the meantime, we'll have to continue to remind people to verify information, wherever it comes from, before sharing it.

Borja Lozano is a Senior Machine Learning Engineer at Newtral, where he is leading the AI team in their work toward automated fact-checking. He is…
Irene Larraz is the Fact-checking and Data teams coordinator at Newtral, where she also leads Newtral Educación, the media literacy branch. She has worked as…
