Political Pollsters Are Trying to Save Money by Polling AI Instead of Real People, and It’s Going About as Well as You’d Expect
"Obviously, any campaign that used only that AI-generated data would miss the mark — instead of looking at the views of real respondents, it would be looking at a funhouse mirror reflection of a demographic cooked up by a language model with no access to actual data."
Repeat after me: do not use AI for user research. That goes for software; it goes for news; and it goes, as it turns out, for political polling.
“In a white paper about the topic for the survey platform Verasight, data journalist G. Elliott Morris found, when comparing 1,500 ‘synthetic’ survey respondents and 1,500 real people, that large language models (LLMs) were overall very bad at reflecting the views of actual human respondents.”
It should be obvious that this is a terrible thing to do. As the article points out:
“Obviously, any campaign that used only that AI-generated data would miss the mark — instead of looking at the views of real respondents, it would be looking at a funhouse mirror reflection of a demographic cooked up by a language model with no access to actual data.”
There’s been a push in various quarters to use AI for all kinds of user research. You can see why it’s tempting: it’s much faster to ask a probabilistic model than an actual human being, and because LLMs respond in confident, human-sounding language, it’s easy to be duped into thinking there’s something more than mathematics going on behind the scenes.
I’ve always been against personas, where teams do research and synthesize it into amalgamated profiles. Those profiles are essentially fiction: it’s too easy for a team to impose its own biases and assumptions, and interviewing an amalgam you created is like asking questions of a short story you wrote. It’s nowhere near the same thing as doing research with an actual human being, who will always be quirkier, more surprising, and more interesting than an invented simulacrum. A fictional person is never going to buy your product or engage with your content; those are things real human beings do. But doing this with LLMs is far more disingenuous and self-deceiving than even that practice.
So in many ways, this finding is obvious. But it’s good to see it on paper, and now it’s something we can point to in order to demonstrate how ludicrous this practice actually is.
[Link]