AI in the newsroom: the hard sell

[Image: a robot hand reaching out. Friend or foe?]

It’s been fascinating to watch AI vendors like Microsoft try to sell their emerging products to industries like news publishing. Having come from tech startup-land, with both feet now firmly planted in nonprofit-news-land, I find myself wondering whether I have a unique perspective, or whether everyone is quietly thinking the same things I am and simply not saying them out loud.

It’s a strong, hard sell that reminds me a little of the fast-talking traveling salesman in The Music Man, who persuades the neighborhood to buy instruments and band uniforms before he skips town to avoid fulfilling his promise to give lessons. It makes sense: these vendors have billions of dollars of investment to justify. But in the case of news publishing, it feels like kicking an industry that is already struggling.

Four things are particularly notable:

As always, they call it “AI”, bringing to mind science fiction and superhero movies rather than anchoring the pitch in the products’ actual capabilities. It’s fun to think about C-3PO and Data; it’s less exciting to think of these products as a modern upgrade to Clippy.

Vendors are telling publishers that they’ve been late to adopt AI. They’re trying to create FOMO in the industry, but the truth is that these products, as currently advertised, whether as end-user tools or back-end APIs, are still not widespread in most industries. Newsrooms absolutely are using other, much older forms of AI, as part of the same everyday products as everyone else.

Very little thought has been put into the kinds of systemic biases that researchers like Dr. Joy Buolamwini and Timnit Gebru have warned about. These are real issues that could materially affect how stories are reported if these technologies did find their way deeply into newsrooms. But it’s clear that, at least publicly, vendors have little to say about them.

Vendors want to focus newsrooms on what AI can do for them, and not how they might cover AI’s wider societal impacts. The 19th’s publisher Amanda Zamora dove into this in an X thread yesterday, following a presentation on AI at the Online News Association conference that turned out to be more of a Microsoft sales event than a true discussion.

It’s not that there aren’t uses for these technologies, or that they can’t or won’t improve. Autocomplete is very useful, and there are some mundane tasks that LLMs can indeed speed up (as long as users carefully check the output afterwards). If vendors truly internalize and systematize the concerns raised by organizations like the Algorithmic Justice League, and if the teams building AI systems become more diverse and inclusive themselves, biases may at least be reduced, if not fully overcome.

But with any technology that appears at first glance to be magic, we must use a skeptical lens. How does it work? What are the real dangers? What are the advantages vs the drawbacks? What must a newsroom do to ethically use these products — and how might it cover them and their wider intersectional impact?

A sales pitch is not going to help with those things. Neither will FOMO, or a one-size-fits-all approach. When so much is at stake, as it is with true journalistic reporting, newsrooms must tread carefully and use all their powers of nuance, investigation, and thoughtfulness to determine the best path for them.
