I was asked last week about the ethics of social networks: what would need to change to create a more ethical ecosystem.
Targeted display advertising, of course, has a huge part to play. Facebook created a system designed to capture the attention of its users so that they could interact with advertising tailored to them, in order to manipulate them into an action or position. People buy advertising on Facebook to drive sales, but they also buy it to manufacture brand awareness and loyalty - and to manipulate users into adopting a political position. Facebook's machine was not originally built to manipulate, but its business model ratified its sociopathy.
The persuasive effect of its targeted advertising and engagement algorithms would have been diminished, however, if Facebook weren't so ubiquitous. In Q3 2018, it had 2.27 billion monthly active users. For context, an estimated 3.2 billion people will be online by the end of the year: Facebook's monthly active users represent 71% of everyone on the internet. In America, it's the site most commonly used to discover news - or, in other words, to learn about the world.
This is a dangerous responsibility to place in the hands of a single corporation with no meaningful competition. Yes, other social networks exist, but each serves a different purpose. Twitter is a kind of town hall zeitgeist Pandora's box full of wailing souls (sorry, a place that aims to "give everyone the power to create and share ideas and information instantly, without barriers"); Instagram (which, of course, is Facebook again) is the Vogue edition of everybody's life; Snapchat rests on its "mom don't read this" ephemerality. Facebook is designed, as its homepage used to proudly proclaim, to be a social utility that "reinforces connections to the people around you". Over time, it aims to make those social connections dependent on its service.
In a world where Facebook is a core part of life for billions of people, its policy and product decisions have an outsized effect on how its users see the world. Those decisions can certainly swing elections, but they have a measurable effect on public sentiment in other areas, too. It's not so much that the platform reflects global culture; because that culture is shared and discovered on the platform, the culture comes to reflect the platform. A bad actor with enough time and money can construct a viral message - or suite of messages - that can sweep across billions of people in less than a day. Facebook itself could engage in social engineering, with almost no oversight. There are few barriers; there is no real vaccine beyond a vain hope that Facebook will do the right thing.
But imagine a world where there isn't one Facebook, and we all participate in many social communities across many different platforms. Rather than one mega filter bubble, we engage with lots of bubbles, loosely joined - each controlled by a different entity, potentially in a different culture, with different priorities. In this world, the actions of any single bubble become less important. If each one is making different policy and product decisions, and is a logically separate network with its own codebase, userbase, and way of working, it becomes significantly harder for anyone to make a message ubiquitous. If each one has a different feed algorithm, a malicious campaign could infiltrate one network, but it would be much harder for it to infiltrate them all. In a healthy market, even discovering all the different communities that a user participates in could become a difficult task.
Arguably, this would require a different funding model to become the norm in Silicon Valley. Venture capital has enabled many businesses to get off the ground with the capitalization they need; it is not always the bad guy. But it also inherently encourages startups to aim towards monopoly. Venture capital funds want their investments to grow at an exponential rate. VCs want to return 3X the value of each fund inside 10 years (typically) - and because most startups fail, they're looking to invest in businesses that will return around 37X their original investment. That usually looks like owning a particular market or market segment, and that's what tends to find its way into pitch decks. "This is a $100 billion market." Subtext: "we have the potential to capture all that". In turn, targeted advertising became popular as a way for startups to make revenue, because asking customers for money creates sign-up friction and reduces growth.
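The jump from a 3X fund target to a ~37X per-company target follows from portfolio math. A minimal back-of-the-envelope sketch, under assumptions not spelled out in the text (equal-sized checks, and a single winner producing essentially all of the fund's returns):

```python
def required_winner_multiple(fund_target, num_investments, num_winners=1):
    """Multiple each winning investment must return, assuming the fund
    writes equal-sized checks and every other investment returns zero."""
    return fund_target * num_investments / num_winners

# A fund targeting 3X across a dozen equal investments, with one winner
# carrying the whole portfolio:
print(required_winner_multiple(3, 12))  # → 36.0, in the ballpark of the ~37X cited
```

Under these (hypothetical) assumptions, a 12-company fund chasing 3X needs its winner to return roughly 36X, which is why VCs filter for companies that could plausibly own an entire market.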
So, almost accidentally, venture capital creates Facebook-style businesses that aim to grow as big as possible without regard to the social cost. It does not easily support marketplaces with lots of different service providers competing for the same market over a sustained period. And businesses in Silicon Valley have a hard time getting up and running without it, because the cost of living here is so high. Because of the sheer density of people who have experience building technology businesses here, as well as high-end technical talent and a general culture of helpfulness, Silicon Valley is still the best place to start this kind of business. So, VC it is, until we find some other instrument to viably fund tech companies. (Some obvious contenders are out: ICOs have rightly been slapped down by the SEC, and revenue-sharing investment only really works for very small amounts of investment.)
Okay, so how about we just break Facebook up, and set a precedent for future businesses, just as regulators tried to do with Microsoft in the nineties? After all, its impact is even more catastrophic than Microsoft's, and its actions are even more brazenly monopolistic. Everything else aside, consider its use of a VPN app it acquired to identify apps whose usage threatened Facebook's own, so that it could proactively acquire them and shut them down.
American antitrust law has ostensibly been designed to protect consumers rather than competition. As Wired reported a few years ago:
Under current U.S. law, being a "monopoly" is not illegal; nor is trying to best one’s competitors through lower prices, better customer service, greater efficiency, or more rapid innovation. Consumers benefit when Apple disrupts the market with iPhones and iPads, even if this means RIM sells fewer BlackBerries or that Microsoft licenses fewer desktop operating systems. Antitrust law only springs into action against a monopoly when it destroys the ability of another company to enter the market and compete.
The key question, of course, is whether a particular monopoly is harming consumers – or merely harming its competitors for the benefit of those consumers.
With any lens except the most superficial, Facebook fails this test. Yes, its product is free and available to anyone. But we pay with our data and privacy - and ultimately, with our democracy. Facebook's dominance has adversely affected entire industries, swung elections, and fueled genocides. In the latter case, this hasn't happened in the United States - at least, not so far - and perhaps this is one of the reasons why it has escaped serious repercussions. Its effects have been felt in different ways all over the world, and various governments have had to deal with them piecemeal. There is no jurisdiction big enough to cover its full impact. Facebook is, in some ways, more powerful than the government of any nation.
There's one thought that gives me hope. Anyone who has watched Facebook closely knows that it didn't grow through brilliant strategy and genius maneuvering. Its growth curve closely maps to the growth of the internet; it happened to be in the right place at the right time, and managed to not screw it up enough to drive people away. As people joined the internet for the first time, they needed a place to go, and Facebook was it. (The same is true of Instagram, which closely maps to the growth in smartphone camera usage.) As the internet became saturated in developed nations, Facebook's growth curve slowed, and it now needs to bring more people online in developing nations if it wants to continue dominating new markets.
That means two things: Facebook will almost inevitably stagnate, and it is possible for it to be outmaneuvered.
In particular, as new computing paradigms take hold - smart speakers, ambient computing, other devices beyond laptops and smartphones - another platform, or set of platforms, can more easily take its place. When that shift happens, it is our responsibility to find a way to replace Facebook ethically, instead of with yet another monopolistic gatekeeper.
There is work to do. It won't be easy, and the outcome is far from inevitable. But the internet is no longer about code being slung from dorm rooms and garages. It's about democracy, it's deadly serious, and it needs to be treated as such.