An AI capitalism primer

A clenched robot fist

Claire Anderson (hi Claire!) asked me to break down the economics of AI. How is it going to make money, and for whom?

In this post I’m not going to talk too much about how the technology works, or the claims of its vendors versus the actual limitations of the products. Baldur Bjarnason has written extensively about that, while Simon Willison writes about building tools with AI; I recommend both of their posts.

The important thing is that when we talk about AI today, we are mostly talking about generative AI. These are products that are capable of generating content: this could be text (for example, ChatGPT), images (for example, Midjourney), music, video, and so on.

Usually they do so in response to a simple text prompt. For example, in response to the prompt “Write a short limerick about Ben Werdmuller asking ChatGPT to write a short limerick about Ben Werdmuller”, ChatGPT instantly produced:

Ben Werdmuller pondered with glee,
“What would ChatGPT write about me?”
So he posed the request,
In a jest quite obsessed,
And chuckled at layers, level three!

Honestly, it’s pretty clever.

While a limerick isn’t particularly economically useful, you can ask these technologies to write code for you, find hidden patterns in data, highlight potential mistakes in boilerplate legal documents, and so on. (I’m personally aware of companies using it to do each of these things.)
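To make that concrete, here’s a minimal sketch of how a product might send a prompt like the limerick request above to a hosted model, using OpenAI’s Python SDK. The model name is an illustrative assumption, and vendors change their SDKs regularly, so treat this as a sketch rather than a recipe:

```python
# Minimal sketch: sending a prompt to a hosted generative model via the
# OpenAI Python SDK (v1+). The model name below is an illustrative
# assumption; check the vendor's current documentation for what's available.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # hypothetical choice; any chat-capable model works here
    messages=[
        {
            "role": "user",
            "content": (
                "Write a short limerick about Ben Werdmuller asking "
                "ChatGPT to write a short limerick about Ben Werdmuller"
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Every such call is metered and billed by the vendor, which is where the economics start to matter.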

Each of these AI products is powered by a large foundation model: a deep learning neural network that is trained on vast amounts of data. In essence, the neural network is a piece of software that ingests a huge amount of source material and finds patterns in it. Based on those patterns and the sheer amount of data involved, it can statistically decide what the outcome of a prompt should be. Each word of the limerick above is what the model decided was the most likely next piece of the output in response to my prompt.
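As a rough illustration of what “statistically decide” means, the toy sketch below picks a next word from a made-up probability distribution. The words and numbers are invented for the example; a real model computes probabilities over a vocabulary of tens of thousands of tokens, one step at a time, using billions of learned parameters:

```python
# Toy illustration of next-token prediction. These "probabilities" are
# invented; a real model derives them from patterns in its training data.
import random

next_token_probs = {
    "glee": 0.42,
    "delight": 0.21,
    "entropy": 0.15,
    "tea": 0.13,
    "cheese": 0.09,
}

# Greedy decoding: always take the single most likely continuation.
most_likely = max(next_token_probs, key=next_token_probs.get)

# Sampling: draw a continuation in proportion to its probability, which is
# why the same prompt can produce a different limerick on a different run.
sampled = random.choices(
    list(next_token_probs), weights=list(next_token_probs.values()), k=1
)[0]

print(most_likely, sampled)
```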

The models are what have been called stochastic parrots: their output is entirely probabilistic. This kind of AI isn’t intelligent, and these models have no understanding of what they’re saying. It’s a bit like a magic trick that’s really only possible because of the sheer amount of data that’s wrapped up in the training set.

And here’s the rub: the training set is a not insignificant percentage of everything that’s ever been published by a human. A huge portion of the web is there; it’s also been shown that entire libraries of pirated books have been involved. No royalties or license agreements have been paid for this content. The vast majority of it seems to have been simply scraped. Scraping publicly accessible content is not illegal (nor should it be); incorporating pirated books and licensed media clearly is.

Clearly if you’re sucking up everything people have published, you’re also sucking up the prejudices and systemic biases that are a part of modern life. Some vendors, like OpenAI, claim to be trying to reduce those biases in their training sets. Others, like Elon Musk’s X.AI, claim that reducing those biases is tantamount to training your model to lie. He claims to be building an “anti-woke” model in response to OpenAI’s “politically correct” bias mitigation, which is pretty on-brand for Musk.

In other words, vendors are competing on the quality, characteristics, and sometimes ideological slant of their models. They’re often closed-source, giving the vendor control over how the model is generated, tweaked, and used.

These models all require a lot of computing power, both to be trained and to produce their output. That makes it difficult to provide a service that offers generative AI to large numbers of people: it’s expensive and it draws a lot of power (and correspondingly has a large environmental footprint).

The San Francisco skyline, bathed in murky red light.

Between the closed nature of the models, and the computing power required to run them, it’s not easy to get started in AI without paying an existing vendor. If a tech company wants to add AI to a product, or if a new startup wants to offer an AI-powered product, it’s much more cost effective to piggyback on another vendor’s existing model than to develop or host one of their own. Even Microsoft decided to invest billions of dollars into OpenAI and build a tight partnership with the company rather than build its own capability.

The models learn from their users, so as more people have conversations with ChatGPT, for example, the model gets better and better. These are commonly called network effects: the more people that use the products, the better they get. The result is that they have even more of a moat between themselves and any competitors over time. This is also true if a product just uses a model behind the scenes. So if OpenAI’s technology is built into Microsoft Office — and it is! — its models get better every time someone uses them while they write a document or edit a spreadsheet. Each of those uses sends data straight back to OpenAI’s servers and is paid for through Microsoft’s partnership.

What’s been created is an odd situation where the models are trained on content we’ve all published, and improved with our questions and new content, and then it’s all wrapped up as a product and sold back to us. There’s certainly some proprietary invention and value in the training methodology and APIs that make it all work, but the underlying data being learned from belongs to us, not them. It wouldn’t work — at all — without our labor.

There’s a second valuable data source in the queries and information we send to the model. Vendors can learn what we want and need, and deep data about our businesses and personal lives, through what we share with AI models. It’s all information that can be used by third parties to sell to us more effectively.

Google’s version of generative AI allows it to answer direct questions from its search engine without pointing you to any external web pages in the process. Whereas we used to permit Google to scrape and index our published work because it would provide us with new audiences, it now continues to scrape our work in order to provide a generated answer to user queries. Websites are still presented underneath, but it’s expected that most users won’t click through. Why would you, when you already have your answer? This is the same dynamic as OpenAI’s ChatGPT: answers are provided without credit or access to the underlying sources.

Some independent publishers are fighting back by de-listing their content from Google entirely. As the blogger and storyteller Tracy Durnell wrote:

I didn’t sign up for Google to own the whole Internet. This isn’t a reasonable thing to put in a privacy policy, nor is it a reasonable thing for a company to do. I am not ok with this.

CodePen co-founder Chris Coyier was blunt:

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website.

For small publishers, the model is intolerably extractive. Technical writer Tom Johnson remarked:

With AI, where’s the reward for content creation? What will motivate individual content creators if they no longer are read, but rather feed their content into a massive AI machine?

Larger publishers agree. The New York Times recently banned the use of its content to train AI models. It had previously dropped out of a coalition led by IAC that was trying to jointly negotiate scraping terms with AI vendors, preferring to arrange its own deals on a case-by-case basis. A month earlier, the Associated Press had made its own deal to license its content to OpenAI, giving it a purported first-mover advantage. The terms of the deal are not public.

Questions about copyright — and specifically the unlicensed use of copyrighted material to produce a commercial product — persist. The Authors Guild has written an open letter asking AI vendors to license its members’ copyrighted work, which is perhaps a quixotic move: rigid licensing and legal action are likely closer to what’s needed to achieve their hoped-for outcome. Perhaps sensing the business risks inherent in using tools that depend on processing copyrighted work to function, Microsoft has promised to legally defend its customers from copyright claims arising from their use of its AI-powered tools.

Meanwhile, a federal court ruled that AI-generated content cannot, itself, be copyrighted. The US Copyright Office is soliciting comments as it re-evaluates relevant law, presumably encompassing the output of AI models and the processes involved in training them. It remains to be seen whether legislation will change to protect publishers or further enable AI vendors.

The ChatGPT homepage

So. Who’s making money from AI? It’s mostly the large vendors who have the ability to create giant models and provide API services around them. Those vendors are either backed by venture capital investment firms who hope to see an exponential return on their investment (OpenAI, Midjourney) or publicly-traded multinational tech companies (Google, Microsoft). OpenAI is actually very far from profitability — it lost $540M last year. To break even, the company will need to gain many more customers for its services while spending comparatively little on content to train its models with.

In the face of criticism, some venture capitalists and AI founders have latterly embraced an ideology called effective accelerationism, or e/acc, which advocates for technical and capitalistic progress at all costs, almost on a religious basis:

Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness.

In part, it espouses the idea that we’re on the verge of building an “artificial general intelligence” that’s as powerful as the human brain — and that we should, because allowing different kinds of consciousness to flourish is a general good. It’s a kooky, extreme idea that serves as marketing for existing AI products. In reality, remember, these models are not actually intelligent, and have no ability to reason. But if we’re serving some higher ideal of furthering consciousness on earth and beyond, matters like copyright law and the impact on the environment seem more trivial. It’s a way of re-framing the conversation away from author rights and away from considering the societal impact on vulnerable communities.

Which brings us to the question of who’s not making money from AI. The answer is people who publish the content and create the information that allow these models to function. Indeed, value is being extracted from these publishers — and the downstream users whose data is being fed into these machines — more than ever before. This, of course, disproportionately affects smaller publishers and underrepresented voices, who need their platforms, audiences, and revenues more than most to survive.

On the internet, the old adage is that if you’re not the customer, you’re the product being sold. When it comes to AI models, we’re all both the customer and the product being sold. We’re providing the raw ingredients and we’re paying for it to be returned to us, laundered for our convenience.
