Chat Boom

AI in journalism is here to stay. Humans must ethically deploy it.

Leapfrogging AI chat capabilities have recently been competing with geopolitical strife and insurrection updates for front-page headlines. ChatGPT, OpenAI’s chatbot, even burnished the reputation of Microsoft’s Bing search engine, albeit briefly. And one tech journalist after another has described their often unnerving time spent with seemingly snappy or malcontented chatbots.

AI’s improved linguistic fluency has caused a stir in various fields, with its impact on the future of education dominating much of the early conversation. If professors are unable to distinguish an essay written by a bot from one written by a student, will there be a fundamental rethinking of expository writing on college campuses? Might it prompt a return to essay writing by hand? Would college students in 2023 be able to complete a paper under such circumstances?

The technology’s effect on professional writers is another tantalizing and uneasy question. BuzzFeed announced that it would outsource parts of its content creation to AI shortly after news of the breakthroughs. On the non-editorial side, The New York Times is eager to use AI for digital archiving purposes. And a gaggle of journalists opened recent articles with an AI-generated lede before retaking control with a wink, and perhaps some flop sweat.

The general verdict, for now, is that while ChatGPT and similar systems are impressive, their texts lack a human touch, reading like the Wikipedia entries that partly feed the bots’ predictive capabilities. That shouldn’t provide much reassurance, though, since a hallmark of AI advancement over the past five years has been the staggering speed of its improvement.

More immediately troubling is the technology’s potential to spout coherent, grammatically sound misinformation without the ethical concerns that, hopefully, still guide most journalists toward attempting accuracy. (When I asked ChatGPT if it was naive to think that modern journalists sought objectivity, its response included the observation that “the concept of objectivity in journalism has come under scrutiny in recent years…with some arguing that the pursuit of objectivity has been used to justify false equivalencies.”)

I’m not generally a tinfoil hat type, but I think we should be more worried about chatbots’ threat to journalism than the coverage I’ve read suggests we are. We’re already in a precarious moment for fact-based news reporting and dissemination. Audiences are already siloed. And journalists are already getting laid off en masse. New technology that could compound these problems must be studied and implemented with utmost care. 

How can newsrooms do that? 

First, let’s look at some of the concrete changes that advanced chatbots will bring to journalism outlets. Many of them are good! As Francesco Marconi and Alex Siegman wrote in the Columbia Journalism Review back in the simpler days of 2017, AI “will allow reporters to analyze data; identify patterns and trends from multiple sources; see things that the naked eye can’t see; turn data and spoken words into text; text into audio and video; understand sentiment; analyze scenes for objects, faces, text, or colors—and more.”

An intriguing example of this is AI’s potential to collect information from social media. If a reporter is covering a protest or even a traffic jam, for instance, considerable grunt work would be spared thanks to an AI algorithm acting as a superpowered trending hashtag feed or Waze report. 

Breaking news isn’t the only feature to benefit from increased efficiency. (In a future column, I’ll explore how chatbots may be a boon for expert commentators and opinion writers.) AI could analyze vast troves of public records and earnings reports, for example, a crucial but often tedious and time-consuming element of journalism. Then there’s automation’s ability to create digital archives of photos currently collecting dust and, as the Times has already done, produce a massive collection of cooking recipes published over the past several decades.

The downsides, and they could be disastrous, involve AI worsening two interwoven problems already plaguing the industry: the lightning-fast spread of misinformation and siloed media audiences.

Automated misinformation is less malicious than human-generated misinformation only in the sense that a chatbot has no intent; by the same token, it has no moral or ethical imperative to “write” honestly and inform readers. It is not a truth-seeker, as it’s incapable of wrestling with the truth. ChatGPT’s programmers acknowledge this on the platform’s main page, saying it “sometimes writes plausible-sounding but incorrect or nonsensical answers” and that fixing that is “complicated” because “during its training, there’s currently no source of truth” and “supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”

As Mashable’s Mike Mills put it after ChatGPT supplied him with a series of plainly inaccurate answers to rather simple questions, “Humanity has created a machine that can blurt out basic common sense, but when asked to be logical or factual, it’s on the low side of average.”

A chatbot’s inaccuracy becomes a bigger problem when integrated with a traditional journalistic apparatus. How are readers, or even journalists, to know when automated and human reporting meet and diverge? 

Will they even care? While AI-assisted collection of public records and witness reports is an asset in the newsroom, the data AI could collect on readers and their habits is a liability. I’ve often lamented the splintering of media audiences into ever-smaller niches of self-affirmation. The startlingly fast improvement of AI technology should cause serious concern about those silos shrinking further.

AI is already changing journalism, and it’s here to stay. Enjoying its substantial benefits while keeping its serious risks at bay will require a major transformation at media companies. This powerful technology demands rigorous oversight by human beings at the news outlets implementing it. The government must also build safeguards around AI’s development in the media, as it is slowly working to do with Big Tech’s sway over the sphere. Both of these protections, private and public, must come fast.
