
The map-reduce is not the territory

[Image: someone holding up an old-fashioned compass in Yosemite]

There are two ways to use GPS navigation in a car:

The first is to use the directions as gospel. The system has found the right path for you to take; you need to follow its directions if you’re going to get to your destination.

The second is as a kind of North Star. The navigation will always point you towards your destination, but you know that you don’t have to follow the directions: if you choose to take one street instead of another, or take a detour, the system will adapt and find another way to go.

In the first scenario, the computer tells you what to do. In the second, it’s there to advise you, but the decisions are yours.

The first may get you there faster, if the system’s model of the streets and traffic around you matches reality to an adequate degree. We’ve all discovered road closures or one-way streets that weren’t represented on-screen. By now, most people are familiar with the practical implications of Alfred Korzybski’s reminder that the map is not the territory, even if they haven’t encountered the work itself. The computer’s knowledge of the streets and their traffic is made of city plans, scans from specially equipped cars, road sensors, and data from the geolocated phones of people in the area. While this collection of data points is very often adequate, much is left out: the model is not the territory.

But even if the model were accurate, the second method may be the more satisfying. A GPS system doesn’t have whims; it can’t say, “that street looks interesting” and take a detour down it, or choose to take an ocean drive. The first method gets you there most efficiently, but the second allows you to make your own way and take creative risks without worrying about getting completely lost. Sometimes you do want to get completely lost — or, at least, I do — and then there’s no need to have the GPS switched on at all. The second way, you find new places; you discover new streets; you explore and learn about a neighborhood. You engage your emotions.

One way to think about the current crop of AI tools is as GPS for the mind. They can be used to provide complete instructions, or they can be used as a kind of North Star to glance at for suggestions when you need a helping hand. Their model isn’t always accurate, and therefore their suggestions aren’t always useful.

If you use AI for complete instructions — to tell you what to do, or to create a piece of work or a translation — you likely will get something that works, but it’ll be the blandest route possible. Prose will be prosaic; ideas will not be insightful. The results will be derived from the average of the mainstream. Sometimes you’ll need to adjust the output in the same way you need to drive around a closed road that GPS doesn’t know about. But the output will probably be something you can use and it’ll be more or less fine.

If you use AI as a kind of advisor, you’re still in control: your creativity has the wheel. You have agency and can take risks. What you’ll make as a human won’t be the average of a data corpus, so it’ll be inherently more interesting, and very likely more insightful. But you might find that a software agent can unstick you if you run into trouble, and gently show you a possible direction to go in. It’s a magic feather.

I worry about incentives. For many, these tools will be used to instruct or to replace our decision-making faculties, rather than as aids we can use while remaining in control. Software can be used to democratize and distribute power, or it can be used by the powerful to entrench their dominance and disenfranchise others. So it is with AI: the tools can aid creativity and augment agency, or they can be used to prescribe and control. I have no doubt that they will often be used for the latter.

There was a story that Google Maps intentionally routed people driving south from San Francisco onto US Route 101, an objectively terrible stretch of highway, leaving the parallel and far more pleasant Interstate 280 free for Google’s own employees. It’s kind of funny but not actually true, as far as I know; still, because Google Maps navigation is a black box, they could have done it without anyone realizing it was on purpose. Nobody would need to know. All the benefit would go to the owners of the system.

GPS isn’t only used by human drivers. Take a stroll around San Francisco or a few other major cities and you’ll notice fleets of driverless taxis, which use a combination of GPS, sensor arrays, and neural networks to make their way around city streets. Here, there’s no room for whim, because there’s no human to have whims. There’s just an integrated computer system, creating instructions and then following them.

Unlike many people, I’m not particularly worried about AI replacing people’s jobs, although employers will certainly try to use it to reduce their headcount. I’m more worried about it transforming jobs into roles without agency or space to be human. Imagine a world where performance reviews are conducted by software; where deviance from the norm is flagged electronically; and where hiring and firing can be performed without input from a human. Imagine models that can predict when unionization is about to occur in a workplace. All of this exists today, but in relatively experimental form. Capital needs predictability and scale; for most jobs, the incentives are not in favor of human diversity and intuition.

I also have some concerns about how this dehumanization may apply to life beyond work. I worry about how, as neural network models become more integrated into our lives and power more decisions that are made about us, we might find ourselves needing to conform to their expectations. Police departments and immigration controllers are already trying to use AI to make predictions about a person’s behavior; where these systems are in use, a person’s fate is largely in the hands of a neural network model, which in turn is subject to the biases of its creators and the underlying datasets it operates upon. Colleges may use AI to aid with admissions; schools may use it to grade. Mortgage providers may use AI to make lending decisions and decide who can buy a house. Again, all of this is already happening, at relatively low, experimental levels; it’s practically inevitable that these trends will continue.

I see the potential for this software-owned decision-making to lead to a more regimented society, where sitting outside the “norm” is even more of a liability. Consider Amazon’s scrapped automated hiring system for software developers, which automatically downgraded anyone it thought might be a woman.

Leaving aside questions of who sets those norms and what they are, I see the idea of a norm at all as oppressive in itself. A software engine makes choices based on proximity to what it considers to be ideal. Applying this kind of thinking to a human being inherently creates an incentive to become as “normal” as possible. This filtering creates in-groups and out-groups and essentially discards the groups the software considers to be unacceptable. If the software were a person or a political movement, we’d have a word for this kind of thinking.

Using AI to instruct and make decisions autonomously does not lead to more impartial decisions. Instead, it pushes accountability for bias down the stack from human decision-making to a software system that can’t and won’t take feedback, and is more likely to be erroneously cast as impartial, even when its heuristics are dangerously dystopian.

I like my GPS. I use it pretty much every time I drive. But it’s not going to make the final decision about which way I go.

I appreciate using AI software agents as a way to check my work or recommend changes. I like it when software tells me I’ve made a spelling mistake or added an errant comma.

I do not, under any circumstances, want them running our lives.
