
Exploring AI, safely

I’ve been thinking about the risks and ethical issues around AI in the following buckets:

  • Source impacts: the ecosystem impact of generative models on the people who created the information they were trained on.
  • Truth and bias: the tendency of generative models to give the appearance of objectivity and truthfulness despite their well-documented biases and tendency to hallucinate.
  • Privacy and vendor trust: because the most-used AI models are provided as cloud services, users can end up sending copious amounts of sensitive information to service providers with unknown chain of custody or security stances.
  • Legal fallout: if an organization adopts an AI service today, what are the implications for it if some of the suits in progress against OpenAI et al. succeed?

At the same time, I’m hearing an increasing number of reports of AI being useful for various tasks, and I’ve been following Simon Willison’s exploratory work with interest.

My personal conclusions for the above buckets, such as they are, break down like this:

  • Source impacts: AI will, undoubtedly, make it harder for lots of people across disciplines and industries to make a living. This is already in progress, and continues a trend that was started by the internet itself (ask a professional photographer).
  • Truth and bias: There is no way to force an LLM to tell the truth or declare its bias, and attempts to build less-biased AI models have been controversial at best. Our best hope is probably well-curated source materials and, most of all, really great training and awareness for end-users. I would also never let generative AI produce content that sees the light of day outside of an organization (e.g., to write articles or to act as a support agent); it feels a bit safer as an internal tool that helps humans do their jobs.
  • Privacy and vendor trust: I’m inclined to try to use models on local machines, or cloud services that follow a well-documented and controllable trust model, particularly in an organizational context. There’s a whole set of trade-offs here, of course, and self-hosted servers are not necessarily safer. But I think the future of AI in sensitive contexts (which is most contexts) needs to be on-device or on home servers (see the sketch after this list). That doesn’t mean it will be, but I do think that’s a safer approach.
  • Legal fallout: I’m not a lawyer and I don’t know. Some but not all vendors have promised users legal indemnity. I assume that the cases will impact vendors more than downstream users — and maybe (hopefully?) change the way training material is obtained and structured to be more beneficial to authors — but I also don’t know that for sure. The answer feels like “wait and see”.
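
To make the "on-device or home server" idea a little more concrete, here's a minimal sketch of what that trust model looks like in practice. It assumes a model served locally with Ollama on its default localhost endpoint; the tool, model name, and endpoint are my assumptions for illustration, not a recommendation. The point is simply that the prompt and the response never leave the machine.

```python
# A minimal sketch of keeping data on your own hardware: send a prompt to a
# model served locally (here, Ollama on its default port; any self-hosted
# inference server with an HTTP API works the same way in principle).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint (assumed)


def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running model; nothing leaves this machine."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read().decode("utf-8"))
    return body["response"]


if __name__ == "__main__":
    # Hypothetical usage: sensitive text stays local instead of going to a cloud vendor.
    print(ask_local_model("Summarize the trade-offs of self-hosting an LLM."))
```

The same few lines pointed at a cloud vendor's endpoint would work just as well technically; the difference is entirely in who ends up holding the data, which is the whole point of the privacy bucket above.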

My biggest personal conclusion is, I don’t know! I’m trying not to be a blanket naysayer: I’ve been a natural early adopter my whole life, and I don’t plan to stop now. I recently wrote about how I’m using ChatGPT as a motivational writing partner. The older I get, the more problems I see with just about every technology, and I’d like to hold onto the excitement I felt about new tech when I was younger. On the other hand, the problems I see are really big problems, and ignoring those outright doesn’t feel useful either.

So it’s about taking a nimble but nuanced approach: pay attention to both the use cases and the issues around AI, keep watching organizational needs and the kinds of organic “shadow IT” uses that are popping up as people need them, and figure out where a comfortable line sits between ethics, privacy / legal needs, and utility.

At work, I’m going to need to determine an organizational stance on AI, jointly with various other stakeholders. That’s something that I’d like to share in public once we’re ready to roll it out. This post is very much not that — this space is always personal. But, as always, I wanted to share how I’m thinking about exploring.

I’d be curious to hear your thoughts.
