Fingerprinting AI to prevent spam

Lots of people have been worried about deepfakes for a while, but I think the bigger, more pressing challenge is detecting AI-generated text.

I’d love to be proven wrong on this hypothesis: the only real market for long-form AI text generation on the web is spam. There are other use cases, for sure, but the people who will be buying and deploying the tech in the short term want to generate huge volumes of content to trick people into looking at ads or buying ebooks.

Fingerprinting AI-generated content will allow it to be filtered from search engine results, email inboxes, store listings, and so on. While software providers might not want to remove this content entirely, it seems generally sensible to down-rank it relative to human-generated content. Fingerprinting will also be useful in educational settings, among other places, to prevent AI-generated plagiarism.

Ironically, the best way to do this might be through AI: what better way to identify neural net output than a neural net itself? While this might lead to false positives, I’m not going to lose a whole lot of sleep over de-ranking content that reads a lot like the output of a software model. The outcome is the same: poor-quality, mass-produced content is de-emphasized in favor of insightful creativity from real people.
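
To make the idea concrete, here’s a minimal sketch of a detect-and-demote pipeline. It uses a simple TF-IDF linear classifier as a stand-in for the neural net imagined above; the toy corpus, its labels, and the `rank_penalty` helper are all hypothetical, and a real system would need a large labeled corpus of human and machine text.

```python
# Sketch: train a classifier on labeled human/AI text, then use its
# confidence score to soft-demote likely machine-generated content.
# TF-IDF + logistic regression stands in for a neural net here; the
# toy corpus below is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "In conclusion, it is important to note that these factors are significant.",
    "Furthermore, the aforementioned benefits demonstrate substantial value.",
    "ok so I tried this on a whim last night and it sort of worked?",
    "my cat knocked the router off the shelf mid-deploy, classic",
]
labels = [1, 1, 0, 0]  # 1 = AI-generated, 0 = human-written (hypothetical labels)

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

def rank_penalty(text: str) -> float:
    """Multiplier for a ranking score: drifts toward 0.5 for likely-AI text."""
    p_ai = detector.predict_proba([text])[0][1]  # probability of the AI class
    return 1.0 - 0.5 * p_ai  # down-rank rather than remove outright
```

Note that the output is a soft demotion multiplier rather than a hard filter, which matches the point above: content that trips the detector gets de-emphasized, not deleted.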

I do think AI has lots of positive uses: for example, I’ve been using DALL-E in my own creative endeavors. It’s a great drafting tool and a way to stimulate ideas. Visual AI tools are avenues for creative expression in their own right. But spam is a problem, and the incentives to create high-volume content for commercial gain are not going away. Previously, creating it was human-limited; now it’s CPU-bound. That means any enterprising spammer with a cloud account can flood the internet with content as part of an arbitrage scheme. That’s the kind of thing we need to protect ourselves against.
