We need to challenge the notions about automation that run through conversations about ChatGPT and other forms of “artificial intelligence,” among them that AI acts on its own and that it threatens to replace humans.
The results of a Google image search for “artificial intelligence” or “machine learning” are telling: lots of pictures of brains, robots, and humanoid figures. Anthropomorphizing automated technologies reveals our fascination with them, but it gets in the way of a meaningful understanding of how they work and how they affect us.
As an academic, formerly as a professor at UC Berkeley and now director of research at the nonprofit research institute Data & Society, I’ve devoted my career to studying the relationship between digital technology and society. I’m committed to using my research to nudge policymakers and experts of all stripes toward a more humane and human-centered approach to computing technology.
In fact, human labor plays an important role in AI tools. It’s human labor that trains these models, based on data produced by humans: We teach them what we know. And around the world, it’s on-call human workers who fix errors in the technology, respond when tools get stuck, moderate content, and even guide robots along city streets.
Take ChatGPT, which has captivated so many imaginations since its public launch in November. This chatbot responds to prompts with extraordinary fluidity — it gives immediate, plausible-sounding answers to questions, provides expert-sounding explanations, and can write longish texts with stylistic flourishes. But while its abilities might seem uncanny, the explanation is comparatively simple: What made ChatGPT possible is the global move of everyone and everything online — the mass digitization of everyday life, which produced the extraordinarily broad text corpus that is the internet. ChatGPT sucks up that text and uses the statistical patterns in it to predict plausible sequences of words.
Here is just some of what ChatGPT does not do: research, fact-checking, or copyediting at even a minimally adequate level. Indeed, ChatGPT is proof that finding “truth” is a lot trickier than having enough data and the right algorithm. Despite its abilities, ChatGPT is unlikely ever to come close to human capabilities: its technical design, like that of similar tools, lacks fundamentals such as common sense and symbolic reasoning. Scholars who are authorities in this area liken it to a parrot; they say its responses to prompts resemble “pastiche” or “glorified cut and paste.”
When you think of ChatGPT, don’t think of Shakespeare; think of autocomplete. Viewed in this light, ChatGPT doesn’t know anything at all.
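To make the autocomplete comparison concrete, here is a minimal sketch, assuming nothing about ChatGPT’s actual internals: a toy bigram model in Python that predicts the next word purely from word-pair counts in a tiny invented corpus. Real large language models are neural networks trained on billions of words, but the task is the same in kind: producing a statistically plausible continuation.

```python
# A toy next-word predictor: a bigram model built from a tiny corpus.
# It is deliberately crude; ChatGPT uses a vastly larger neural network,
# but the core task is the same: given the words so far, pick a
# statistically likely next word.
import random
from collections import defaultdict

# A made-up corpus, standing in for the internet-scale text ChatGPT trains on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Record which words have been observed following each word.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Extend `start` by repeatedly sampling an observed next word."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # no observed continuation; stop here
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Notice that the program never checks whether its output is true. It only reproduces patterns it has seen, which is why fluency should not be mistaken for knowledge.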
Some of the misunderstandings can be traced back to the language computer scientists have long used to describe this type of research. “Machine learning” and “intelligence,” for example, could more accurately be called “data mining” or “statistical optimization.” Those terms sound more like technical jargon, but they don’t carry the misleading connotations of references to “intelligence.”
But in the face of AI hype, journalism is also culpable, with headlines like this one from The New York Times: “Meet GPT-3. It Has Learned to Code (and Blog and Argue).” Articles meant to serve as correctives still fall into anthropomorphism, like a piece in Salon that said “AI chatbots can write, but can’t think.” Even the claim that ChatGPT can “write” is an exaggeration, an interpretation of the tool’s capabilities that inflates the reality and contributes to further misunderstandings and overstatements with real consequences.
If we find viable and valuable uses for it, ChatGPT could indeed be part of a broader shift in, and redelegation of, how journalism is done. What we need to avoid is using it to replace humans, which it does ineptly, flooding the internet with even more unreliable (but plausible-sounding) junk. CNET recently made the mistake of overestimating AI’s abilities, yielding not only a series of articles rife with factual errors but also a broader reckoning for the company and perhaps the industry at large.
One thing we learn from scholarship on the history of technology is that outrageous expectations are very often set for tech at its invention. Airplanes will bring about world peace. Movies will make schools obsolete. We have the advantage of being able to look to that history and see that, in fact, no technology is inevitable and that the march of progress (while relentless) takes many, many unexpected turns along the way.
Journalists are well-positioned to help fight the hype. Don’t let these tools dazzle you beyond reason. Don’t anthropomorphize them. Ask hard questions about what they’re purported to do.
That, of course, has always been the journalist’s job — and despite AI’s influence, it’s one that’s not going anywhere.