[Michael Kennedy and Isobel Cockerell in Coda Story]
Just one example among many of AI being used to take agency away from ordinary workers:
"The upshot was, it took away our ability to advocate for patients. We couldn’t point to a score and say, ‘This patient is too sick, I need to focus on them alone,’ because the numbers didn’t help us make that case anymore. They didn’t tell us if a patient was low, medium, or high need. They just gave patients a seemingly random score that nobody understood, on a scale of one to infinity.
We felt the system was designed to take decision-making power away from nurses at the bedside. Deny us the power to have a say in how much staffing we need."
The piece goes on to discuss the mass surveillance that AI enables. In a world where a patient's conversations with the healthcare workers attending to them are recorded, whether to feed an agent or otherwise, all kinds of abuses become possible. Not only does this remove agency from the experts who should be advocating for patients; consider also the effects in a state with adverse reproductive healthcare laws, for example.
This is the salient point:
"The reasoning for bringing in AI tools to monitor patients is always that it will make life easier for us, but in my experience, technology in healthcare rarely makes things better. It usually just speeds up the factory floor, squeezing more out of us, so they can ultimately hire fewer of us."
And this tends to be true regardless of the original intention. If a technology can be used to cut costs or squeeze more productivity out of a worker, absent any other constraints, it absolutely will be. In healthcare, as in many fields that depend on care, attention, and underlying humanity, that's not necessarily a good thing.
[Link]