The AI trust crisis

I think this is right: AI companies, and particularly OpenAI, have a crisis of trust with the public. We simply don't believe a word they say when it comes to privacy and respecting our rights.

It's well-earned. LLMs are built by training on vast amounts of scraped data, some of which would ordinarily be commercially licensed. And the stories AI vendors have been peddling about the dangers of an AI future - while great marketing - have hardly endeared them to us. Not to mention the whole Sam Altman board kerfuffle.

I think Simon's conclusion is also right: local models are the way to overcome this, at least in part. Running a model on your own hardware is far easier to trust than sending your data to someone else's service. The issues with training data and bias remain, but at least you don't have to worry about whether your interactions are being logged or leaked.
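
To make that concrete, here's a minimal sketch of what "on your own hardware" can look like, assuming a local Ollama server is running and a model (llama3 here, as a placeholder) has already been pulled; the prompt and completion never leave the machine:

```python
# Minimal sketch: query a locally running Ollama server.
# Assumes Ollama is installed (https://ollama.com) and a model
# has been pulled, e.g. `ollama pull llama3`.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's local HTTP API
    json={
        "model": "llama3",   # placeholder; any locally pulled model works
        "prompt": "Summarize why local LLMs help with privacy.",
        "stream": False,     # return a single JSON object, not a stream
    },
    timeout=120,
)

# Everything - prompt, inference, and output - stays on this machine.
print(response.json()["response"])
```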
