November 9, 2023

As the journalism industry scrambles to create policies and adopt standards around generative artificial intelligence, I worry visual journalists will have little say in the matter.

There was a time when visual journalists were second-class citizens in some newsrooms. Their journalistic and creative opinions were not valued. Their contributions were considered secondary to those who wrote and spoke the words. Some visual journalists might argue we still haven’t reached parity, especially when it’s still common for editors to ask “Do we have an image to go with that story?” rather than how visuals could improve the story.

Before joining Poynter, I spent the majority of my nearly 30-year career as a graphic designer. Then I transitioned to leadership, where I managed teams of designers, illustrators, photographers and eventually, innovators and developers.

I watched visual storytelling take a back seat to reporting — what some journalists consider our core product. Our work was sometimes deemed supplemental, rather than central. When it came time to reduce staff counts, visual journalists took the biggest cuts (along with copy desks).

Just this week, the Pulitzer Prize Board finally opened up eligibility to digital news sites operated by broadcast and audio organizations, but made sure to stress, “Entries from these organizations should rely essentially on written journalism.”

While I hear many in the industry agonize over when or if to use generative AI to write stories or headlines, very few seem to discuss generative photos, illustrations, graphics or other visual elements. Where is the conversation about the explosion of tools like DALL-E 3, Midjourney, Stable Diffusion and Firefly? All of these tools make visual content creation incredibly easy. The only barrier to creating any image or illustration you want is a halfway decent prompt.

I have lived through the move from paste-up to digital design, saw print reduced to almost nothing, led a photo team that was forced to pivot to video and helped design many apps that were supposed to save us all. None of those changes moved at the speed of generative AI.

As cuts continue, visual positions disappear and outlets with limited resources start up, it’s possible, maybe even probable, that image generators will become the easiest path for news organizations to create visual elements.

And that should be a major concern for everyone.

Just a few weeks after my Poynter colleague Alex Mahadevan wrote that generative AI has not been a major factor in the flood of misinformation, Crikey, an Australian media outlet, published a story documenting how Adobe was selling AI-created images depicting the war between Israel and Hamas.

I searched Adobe Stock with the term “Israel-Hamas war” and quickly found several images available to license. Most were labeled “Generated with AI,” but at least one image that was labeled AI-generated in one crop was not labeled in a different crop. The images have the cinematic look that image generators manufacture for photography, but at a quick glance, or on mobile, it would be hard to immediately identify them as AI-created.

On a recent search of Adobe Stock, most AI-generated images were labeled (top), but at least one was not (bottom). (Screenshots/Adobe Stock)

In an interview, Adobe public relations manager Kevin Fu told me Adobe Stock requires all of the generative AI content submitted to the marketplace to be clearly labeled as such. “The images in question were categorized as generative AI and we want to reinforce our commitment to fighting misinformation,” he said.

Adobe also released this statement to multiple news outlets, including Poynter: “Adobe is committed to fighting misinformation, and via the Content Authenticity Initiative, we are working with publishers, camera manufacturers and other stakeholders to advance the adoption of Content Credentials, including in our own products. Content Credentials allows people to see vital context about how a piece of digital content was captured, created or edited including whether AI tools were used in the creation or editing of the digital content — increasing transparency about digital content and reducing the likelihood of widespread misinformation.”
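
Because Content Credentials travel with a file as signed metadata, a newsroom could, at least in principle, check an image programmatically before publishing it. Here is a minimal sketch of that idea in Python, assuming the open-source c2patool command-line utility is installed (it prints a file’s C2PA manifest as JSON); the “trainedAlgorithmicMedia” marker it looks for comes from the IPTC digital source type vocabulary and is my assumption about how a generator might label its output, not a detail confirmed by Adobe’s statement.

    import subprocess

    def appears_ai_generated(image_path: str) -> bool:
        """Return True if the file's Content Credentials mention AI generation."""
        # c2patool prints the manifest report as JSON; it exits with an error
        # if the file carries no Content Credentials at all.
        result = subprocess.run(
            ["c2patool", image_path],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # No credentials found (or the tool failed). Note that the absence
            # of a manifest is not proof the image is authentic.
            return False
        # Simple string check; a production workflow would parse the JSON and
        # inspect the manifest's assertions instead.
        return "trainedAlgorithmicMedia" in result.stdout

    if __name__ == "__main__":
        print(appears_ai_generated("example.jpg"))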

In the interview, Fu stressed that the content in the stock marketplace is fictional in nature. When asked about AI-created images being used for misinformation, Fu replied via email that Adobe understands the sensitivities surrounding a volatile political environment and is one of the technology companies that signed onto the White House’s voluntary AI commitments.

Fu also said Adobe is “working closely with members of the European Parliament to share our perspective on things like harm and bias and restoring trust online as they finalize the EU’s AI Act. And we are leading advocacy efforts in the UK, in Singapore, and India as they are all tackling the issues of AI on their own.”

While they probably didn’t envision images created by black-box algorithms, leaders from the National Press Photographers Association, Associated Press Photo Managers and The Kalish Visual Editing Workshop addressed similar concerns in 2018 when they noted related risks in stock photography.

“By not licensing images from a reputable visual journalist or other trusted contributors, one runs a far greater risk of obtaining an image that has been manipulated, has a caption that is inaccurate or misleading, or passes off a staged moment as genuine,” they wrote in an opinion piece for Poynter. “At a time when credibility is a news organization’s greatest asset, this cost is significantly higher than a budget for photography.”

Before the pandemic, I remember watching futurist Amy Webb talk about how software engineers were using deep learning to create realistic images of people. Now any number of image generators can dream up those people and put them in any scene you can describe — fitting any narrative you want.

Advancements in the technology and its applications are happening at an unprecedented pace. Less than a year ago, I was working with developers and innovators who were building tools that used machine learning to help produce data stories at scale. It felt like the first steps we needed to take on what I thought, at the time, was a long road to true artificial intelligence use.

In the blink of an eye, machine learning models feel a decade old. Overnight, AI went from something on the horizon to the main topic. This year’s Online News Association conference seemingly had more AI-related sessions than attendees.

In his thorough report on journalism and AI, researcher David Caswell said, “The development of generative AI has placed journalism at the cusp of significant change, variously equated to the iPhone moment, the birth of the internet and even the appearance of the printing press.”

We are absolutely not ready for it.

If we want to keep the public’s trust, we have to talk about how we will handle misinformation, manipulation, privacy, erosion of trust, ethics, legal implications, bias and unintended consequences. Those are not my thoughts. Those are the answers ChatGPT gave me when I asked it, “What are the dangers of using AI-produced images for news and journalism?”

The technology is evolving so fast that we may not even need to log in to access generative AI tools. Google’s new Pixel 8 lets you start manipulating photos right from your camera roll.

In response, news organizations are quickly creating and posting guidelines and policies. Here’s one recent excerpt:

Just a few weeks ago, the Miami Herald published a story using Firefly, Adobe’s generative AI illustration tool. The Herald’s AI visual policy states, “In situations where AI plays a significant role in the content creation process, transparency is vital. For instance, in a digital illustration where the central imagery (the center of interest) is entirely AI-generated, we will disclose the methodology or specify what parts were created by AI and what was not.”

(Screenshot/Miami Herald)

It’s encouraging that news organizations are putting thought and effort into deciding how we use, or don’t use, generative AI. It’s my hope that those conversations involve every aspect of journalism, not just reporting and writing.

I won’t pretend that I have any answers for the questions we’re all asking, but I know this has to be top of mind for visual storytellers. I want to hear your thoughts as I continue to research what this means for our industry. Over the next few months, I’ll be exploring this topic and hope to gather your insights, on the record or as background.

I want to hear from visual leaders, content creators and producers on what you think about generative AI and storytelling.

  • How do you think it will affect journalists who are creating visual elements?
  • Do you think jobs will disappear because of this technology?
  • What are your ethical concerns about image creation and manipulation?
  • What ways do you think generative AI can help news organizations?
  • Should we experiment more, or less, as guidelines and policies are being written?
  • How do we keep the public’s trust when almost anything can be faked?
  • What are we not thinking about, or what questions am I not asking?

If you want to be part of the conversation, I created a form where you can share your thoughts or schedule a time to chat with me. I want this to be a community-driven conversation we can share with leaders, executives and our audiences to help build an AI framework that can benefit us all.

Tony Elkins, a citizen of the Comanche Nation, is a faculty member at Poynter.
