An AI company set out to fix news deserts. Instead, it copied local journalists’ work

"Nota shut down its news sites after Axios and Poynter found dozens of plagiarized quotes, phrases and photos."

[Angela Fu in Poynter]

Repeat after me: AI cannot write journalism and should never be used in place of a journalist. I believe it can be a very useful tool — but it is a tool for humans.

So this whole initiative was misconceived:

“Artificial intelligence company Nota — whose clients include organizations like The Boston Globe and the Institute for Nonprofit News — is scrapping its network of local news sites after learning that they contained dozens of instances of plagiarism. […] The 11 sites — collectively called Nota News — launched in September as an effort to bring ‘bilingual local reporting and civic tools to underserved communities.’”

The deal here was that the company would identify news deserts: places that were unserved or underserved by real newsrooms. And then it would try to serve those areas with content created by an LLM-based system.

This was inevitably going to plagiarize existing journalism, because what other source could it possibly use? An agentic system can’t do the on-the-ground research and reporting involved in creating a story. It can gather data points (sports scores, city council votes, and the like) and assemble them into something that looks like news, rather than journalism. But it can’t provide context that someone hasn’t already written.

As the linked Poynter article points out:

“The articles were supposed to be based on publicly available civic information, such as press releases and videos of city council meetings. In reality, Poynter found more than 70 stories dating back to October that included reporting, writing and photography from local journalists without attribution.”

Someone had already written it: human journalists, whose work was subsequently incorporated without attribution. The eleven human editors who used the LLM tools to generate content apparently didn’t realize that this work had been drawn into the mix. Again: that was inevitably going to happen once the stories began not just to say what had happened but to explain why.

The AI hype cycle has created a bunch of really regrettable case studies that other organizations should learn from. This is one. There are more like it, where good intentions led to accidental plagiarism (or hallucinations). There are plenty of stories where organizations prematurely let people go because they incorrectly believed they could replace human initiative with software. And all of them come down to believing a science fiction version of what this technology does instead of the actual reality of it.

That’s understandable: the reality is shifting quickly, and the marketing machine is incredibly strong. But everyone needs to take a breath with AI and get themselves to a more nuanced understanding of what it is — and isn’t.

[Link]