I think this is right: AI companies, and particularly OpenAI, have a crisis of trust with the public. We simply don't believe a word they say when it comes to privacy and respecting our rights.
It's well earned. LLMs work by training on vast amounts of scraped data, some of which would ordinarily be commercially licensed. And the stories AI vendors have been peddling about the dangers of an AI future, great marketing though they are, have hardly endeared them to us. Not to mention the whole Sam Altman board kerfuffle.
I think Simon's conclusion is also right: local models are the way to overcome this, at least in part. Running an AI engine on your own hardware is far more trustworthy than sending your prompts to someone else's service. The issues with training data and bias remain, but at least you don't have to worry about whether your interactions are being leaked. #AI
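To make that concrete, here's a minimal sketch of what local inference looks like, using the llama-cpp-python library (one option among several; Ollama and LM Studio are others). It assumes you've installed the package and downloaded a GGUF model file; the path below is a hypothetical example:

```python
# A minimal sketch of fully local inference, assuming llama-cpp-python
# is installed (pip install llama-cpp-python) and a GGUF model file has
# been downloaded ahead of time. The model path is a hypothetical
# example; substitute your own.
from llama_cpp import Llama

# Loading the model reads a local file; no network access is involved.
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

# Inference runs entirely on your own hardware, so neither the prompt
# nor the completion ever leaves your machine.
response = llm(
    "Explain why local inference is more private than a hosted API.",
    max_tokens=128,
)

print(response["choices"][0]["text"])
```

Once the model file is on disk, you can unplug from the network entirely and this still works, which is the whole point.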
[Link]