AI is changing the style and substance of human writing, study finds

"Teams from Google and leading universities found that large-language models change the voice, tone and intended meaning of human authors."

[Jared Perlo at NBC News]

Repeat after me: an LLM cannot replace a human’s skill, judgment, or taste.

Another exhibit for the jury:

“The research team found that users who heavily relied on large language models (LLMs) produced responses that diverged significantly in meaning from the answers of participants who only partially relied on LLMs or avoided their use altogether, suggesting heavy AI use alters the substance of humans’ arguments in addition to changing writing style.”

Using an LLM might seem easy and fast, a neat way to push out more content with less effort. But the software really does change the substance of your writing in ways I would call objectively bad: it makes it less personal and less emotional, and it actively alters the underlying meaning in the process. It’s less human. Sure, ask it for feedback if you want, but don’t actually let it polish your writing itself.

For the record, my posts aren’t written or conceived with an LLM, although I know an increasing number of people who use one to write a first draft and then edit. I’m not a fan. The whole point of the web — its beauty — is that it’s unrelentingly human and diverse. There are lots of ways in which AI is eating away at that core (fewer search engine referrals, more automated content, more spam), but this is the most insidious: through people who believe they are writing the piece themselves but are actually handing over their creativity to the model.

[Link]