"Cognitive surrender" leads AI users to abandon logical thinking, research finds
"Experiments show large majorities uncritically accepting “faulty” AI answers." Increasingly, incentive structures are asking them to.
I’m tired. Everyone’s tired. There are so many demands being made of us constantly that the output from an AI chatbot can seem like a godsend: rather than buckling down and doing yet more work, the machine can shortcut that for us.
Not so fast:
“Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this ‘demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism.’ In general, ‘fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation,’ they write.”
There are no shortcuts to doing great work, but used in this pressure-driven way, AI becomes little more than a shortcut machine: a way to reach the end goal faster without really scrutinizing the thinking it took to get there. It’s no wonder the study’s participants didn’t examine the answers they were given; in a world where AI allows people to be saddled with more tasks, they may not have had time to do anything else. Good enough; on to the next thing. Most people don’t want to cut corners, but under adverse circumstances, they will.
It may also be that they were rote learners, less adept at identifying the principles behind a solution. The people who bucked this trend were the ones who scored highly on tests of “fluid reasoning.” I have to admit that the term was new to me: fluid reasoners are better able to find the underlying principles and the links between topics and ideas in order to solve novel problems. The better people were at this kind of abstract thinking, the more likely they were to question the AI’s outputs.
That makes some sense to me. AI can’t reason particularly well: it outputs convincing-sounding responses, but the underlying principles behind them aren’t necessarily fully formed. If you’re used to accepting whatever looks right, perhaps because you were taught to memorize rather than to understand, it’s harder to discern whether this kind of superficially intelligible, highly confident answer is actually correct. Scratch the surface and try to follow the underlying logic, and it becomes clearer that the LLM doesn’t know what it’s talking about.
Managers who salivate over using AI to increase a team’s workload and productivity should consider this effect: the more you press people to use these systems, the more they may accept faulty reasoning from them. Hiring abstract thinkers (the people most likely to rise to become senior engineers and the like) will help, but you also need to give people the space, permission, and expectation to think for themselves.
[Link]