September 12, 2023

Fact accuracy has been under assault for more than 20 years. It began when corporate owners reaped huge profits without reinvesting in newsrooms. The internet redefined audiences and reconfigured advertising, reducing newsroom staff by 26% between 2008 and 2020. Then Donald J. Trump emerged with his big and little lies and a cult-like MAGA following whose adherents dubbed journalists enemies of the people.

Now artificial intelligence may eradicate truth in our time. Not because of plagiarism. Not because of deepfakes. Not because of fewer writing jobs.

Journalism may succumb to AI hallucinations, outright fabrications and illogical deductions, delivered as effortlessly and believably as fact.

This is why newsrooms should temper their use of chatbots, hire more copy editors, emphasize fact-checking, establish “truth beats” and create or update guidelines about machine applications.

For starters, everyone should know the four types of artificial intelligence being used or under development:

  • Reactive AI, programmed for narrow tasks. It doesn’t learn from past interactions but can play chess, serve as a spam filter and analyze data sets.
  • Limited Memory AI, patterned after the human brain and able to learn from past inputs, as in self-driving cars, intelligent virtual assistants and chatbots.
  • Theory of Mind (General Intelligence) AI, under development to recognize and respond to emotions, with decision-making abilities equal to those of humans.
  • Self-Aware (Superintelligence) AI, theorized not only to recognize human emotions but also to conjure needs, desires and feelings of its own.

In other words, the smarter machines get, the more dangerous their hallucinations. It is one thing for a system to serve up a bad Netflix recommendation or cite non-existent references, and quite another for it to misdiagnose a mental condition or respond to a false military threat.

The new abnormal

According to the American Psychological Association, a “hallucination” is “a false sensory perception that has a compelling sense of reality despite the absence of an external stimulus.” Medically, a hallucination seems real but is not, arising from “a false perception of objects or events involving your senses: sight, sound, smell, touch and taste.”

The technical term is also often confused with delusion: something falsely believed or propagated, associated with “the act of tricking or deceiving someone.”

The IT education website Techopedia states that AI hallucinations occur when “a large language model like OpenAI’s GPT-4 or Google PaLM makes up false information or facts which aren’t based on real data or events.”

Risk analyst Avivah Litan states that AI-driven chatbots fabricate information concerning names, dates and historical events. Worse, they present these fabrications “with confidence and authority” that can lead to “serious consequences in some cases,” especially involving health and medicine.

Writing about AI’s impact in academe, Chronicle of Higher Education reporting fellow Maggie Hicks states that the use of ChatGPT could further erode trust in research, already at all-time lows. “Students and other novice researchers could also lose essential research skills and run into trouble in the classroom if they don’t understand ChatGPT’s many flaws.”

Hicks notes that scholars typically are pressed for time, feeling pressure to publish, and so might use chatbots for references. Peer reviewers rarely scrutinize sources before papers are published, meaning hallucinations may sully the scientific method.

That is precisely the situation in newsrooms. Reporters are pressed for time and paid for productivity. Outlets often fail to scrutinize sources in the absence of copy editors.

The GPT-4 Technical Report affirms these fears, noting that the profusion of false information — “either because of intentional disinformation, societal biases, or hallucinations — has the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction.” This can aid those who stand to gain from widespread distrust, a phenomenon known as the “liar’s dividend.”

As Poynter noted earlier this year, “Another element is that the machine does not understand feelings such as hatred or polarization, but it is able to recognize words that are generally uttered when these feelings are to be conveyed.” This can provoke outrage or fear, reinforcing belief systems without validity.

These hallucinations not only endanger society but also jeopardize our profession, as journalism is tasked with safeguarding the sanctity of fact. Thus, embracing AI without understanding its nature may deepen distrust of the news media.

Fluency over fact

As digital journalist Marina Adami notes at the Reuters Institute, the challenge for journalism with generative AI is “the factual mistakes ChatGPT often makes, sometimes even in public demos, as seems to have happened with both Google’s and Microsoft’s new AI-powered tools.” These tools point readers to references that do not exist.

In other words, they lie. Convincingly. Frequently. Fluently.

I address this in my media ethics syllabus, informing students that I will be fact-checking use of ChatGPT and other large language model-based chatbots, specifically looking for hallucinations. I am no longer impressed by fluid writing. I am suspicious.

The job of teaching journalism just got harder.

In the past, instructors deducted points or failed students because of misspelled names, fabrications and inventions. We would emphasize the legal and moral consequences of falsehood to impress upon aspiring reporters the necessity of fact accuracy. Now we rail against the machine, which disregards our remonstrations.

The mantra of “accuracy, accuracy, accuracy” is under assault. My research with Daniela V. Dimitrova, editor of Journalism & Mass Communication Quarterly, documents how people surrender cherished ideals for the ability to do things quickly and without effort. They gave up privacy in the late 1990s to browse the internet. Between 2005 and 2015, they sought affirmation rather than information, ending paid subscriptions and relying on social media for news. Now they are in the process of surrendering democracy, without which the Fourth Estate has no role.

What will result?

In “ChatGPT and the future of trust,” Janet Haven, executive director of the independent nonprofit research organization Data & Society, sees three possibilities:

  1. AI will be used in adversarial ways, undermining confidence in information environments.
  2. Uses and applications will foster “incredible advances in ways that truly benefit society while limiting harms.”
  3. The federal government will erect guardrails to protect “fundamental rights and freedoms over pure technical innovation.”

The lesson here concerns facts. Without them, what incredible beneficial social advances will arise? What benevolent government will protect the public’s rights and freedoms (a role of the Fourth Estate)? How can journalism survive in an increasingly adversarial, untrustworthy environment?

In the past two decades some 2,000 newspapers have closed, creating news deserts and fueling political divisions. Outlets operate in a sectarian society in which each side hates the other more than it loves its own. Newsrooms exacerbate those trends with niche journalism, appealing to progressives or conservatives at either end of the political spectrum because margins are too low in the glutted impartial middle.

At present, much of the public cannot distinguish between fact and fiction. That’s bad enough. Now prophesy how AI will alter the contaminated media environment with facile truths and delusional intelligence. During the Trump presidency, journalists were aghast when adviser Kellyanne Conway propounded “alternative facts” and attorney Rudy Giuliani proclaimed “truth isn’t truth.”

Chatbots embody those fallacies under the hyperbolic guise of machine learning, whose very definition deletes human reason. No engineering wizard lurks behind the existential curtain of our screens. Algorithms reign supreme. People are mere nodes.

There are no easy fixes. Media and digital literacy might offer long-term societal benefits. Currently, however, our public school systems are preoccupied with banning books, abolishing diversity programs, outlawing divisive topics and revising Civil War history while hypocritically embracing freedom of speech.

AI will add to these woes by affirming preexisting biases in some quarters and deepening marginalization. Thus, it is up to the news media to reclaim the role of the Fourth Estate from the emergent machines that poison truth.

Facts matter

Newsrooms need to temper their use of chatbots, hire copy editors, emphasize fact-checking, create “truth beats,” call out fabrications and disseminate knowledge in every article, podcast and post, especially when issues involve civics and history.

The need for copy editors is more acute than ever. Their ranks were culled in 2008 when digital journalism was on the rise. The Pew Research Center reported then that 42% of all newspapers and 67% of major ones were laying off copy editors to make room for “fresh, young blood” with new skills and aptitudes. Nevertheless, Pew noted that the loss of veterans weakened the editing process “and with it, a degree of the paper’s collective wisdom and judgment.”

Copy editors do not create content, so cuts often fall on them first. By 2013, nearly a third of the remaining positions had been eliminated.

In 2017, the Columbia Journalism Review reported staff cuts at The New York Times. “Copy editors are deeply valuable and important,” CJR noted. “They are the last check before a story reaches the public and the final line of defense against factual errors — the original ‘fact check,’ if you will.”

Copy editors have continued to lose jobs. In June 2023, the Los Angeles Times eliminated 74 newsroom positions, with a third of those coming from the news and copy editing desks. The Times lost revenue because “social media giants, including Facebook and Twitter, have scaled back the promotion of news articles.”

We are about to witness even deeper newsroom cuts across platforms as consultants persuade management to invest in AI. Only this time the damage may be catastrophic without the guardrails of the copy desk.

Truth relates to every newsroom beat, including the primary ones of politics, food, education, health, sports and entertainment. In the age of AI, we need “truth beats,” with reporters calling out human delusions and machine hallucinations. Truth-beat editors also should enforce fact-checking, oversee corrections and uphold ethical standards across the entire newsroom.

This is much like the Washington Post’s Standards desk, launched Dec. 9, 2022, “to protect the integrity of news reporting and support Post journalists.” Editors Meghan Ashford-Grooms and Carrie Camillo handle tasks associated with fact accuracy. Ashford-Grooms updates, explains and enforces existing newsroom policies and creates new ethical ones. Camillo oversees corrections policy as well as issues of language and taste.

Media outlets also should create or update guidelines about the use of artificial intelligence.

The Generative AI in the Newsroom project at Northwestern University has published an article about current AI policies and how to create or update guidelines. It analyzed standards from such outlets as Reuters, the Guardian and Wired.

Reuters maintains oversight to ensure “meaningful human involvement, and to develop and deploy AI products and use data in a manner that treats people fairly.” The Guardian states that the use of generative AI requires human oversight linked to a “specific benefit and the explicit permission of a senior editor.”

Wired does not publish stories with text generated by AI, except when it is the focus of a story. Its editors also state that they “will not publish text that is edited by AI” or use AI-generated images instead of stock photography.

The project also provides recommendations to create AI guidelines aligned with ethical standards and codes. Any such policy should be created by a diverse group and include a risk assessment component to foresee future challenges.

A truth or standards desk should be equipped with leading AI-detection tools. It also should rely on fact-checking sites, including Poynter’s PolitiFact, FactCheck.org, The Washington Post’s Fact Checker and Snopes.

One of the most comprehensive such sites is Poynter’s International Fact-Checking Network, launched in 2015 “to bring together the growing community of fact-checkers around the world and advocates of factual information in the global fight against misinformation.”
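What might such automated support look like in practice? Here is a minimal sketch, in Python, of one check a truth desk could script: confirming that a reference cited by a chatbot actually exists in Crossref, the public registry of scholarly works. The DOI below is hypothetical and the helper name is my own; only the Crossref REST API endpoint is real.

    import requests

    def doi_exists(doi: str) -> bool:
        """Return True if Crossref has a record for the given DOI."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # Hypothetical DOI pulled from a chatbot's answer, not a real citation.
    suspect_doi = "10.1234/example.5678"
    if doi_exists(suspect_doi):
        print("Citation found in Crossref; verify the details by hand.")
    else:
        print("No Crossref record found; possible hallucination.")

A lookup like this cannot confirm that a quotation or finding is accurate, but it can flag fabricated references in seconds, leaving the judgment calls to human editors.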

The fight is about to become a world war with AI hallucinations. Newsrooms must be equipped to defend fact before it falls prey to machines. The public good is at stake.

Michael Bugeja, a regular contributor at Poynter, is author of "Interpersonal Divide in the Age of the Machine" (Oxford Univ. Press) and "Living Media Ethics"…