
My Oxford

VICE has a long-running series where British writers bring photographers back to their hometowns, which I stumbled into this morning via Metafilter. It's stunning, and while I barely recognized the Edinburgh entry at all, there was another piece that unexpectedly took my breath away.

I'm from Oxford as much as I'm from anywhere, but I've never read a piece that captured my experience of the city. Instead, it's always the opulent, ancient buildings of the university, the famous writers like CS Lewis and JRR Tolkien, or the plummy, upper middle class concerns of the North Oxford set. An outsider could be forgiven for thinking that the city was all cream teas and tennis.

Not so much. As Nell Frizzell writes:

Say its name and people will think of spires, books, bicycles, punting, philosophers and meadows. Few will think of cheap European lager, samosas, hardware shops, GCSEs, underage drinking, the number 3a bus or going twos on a roll-up beside a mental health hospital. They may not even think of the car factory, the warehouses on Botley Road, Powell's timber merchants, the registry office in Clarendon Shopping Centre, The Star pub, plantain sandwiches or Fred's Discount Store. But that's the Oxford I grew up in.

Me too. Nell's description of east Oxford is spot on, although we moved in different social circles; there was no cocaine in my world, even if we also gathered at exactly the same pub. All of these places are my places too, and the things she cares about in her hometown are things I care about also. In a life where I've lived in multiple countries and never quite found myself fitting in, including in the place I grew up, that's an incredible rarity.

I'm very glad I moved away - living in a variety of places has been right for me, and I expect I'll continue to move around. Having no nationality and no religion means that the pull to travel and exist in different contexts is strong. And Oxford really does have some deep problems. But that doesn't mean I don't miss it, too.

Because of the images that Oxford conjures in the minds of people who have never been there (and even some who have), I find it hard to explain where I came from. I can immediately taste the samosas and smell the beer-stained floorboards, but it's hard to convey. Now, at least, I have something I can point to; a description I actually recognize.

If you're interested in a realer Britain, the whole series is worth reading.


Open APIs and the Facebook Trash Fire

The New York Times report on Facebook's ongoing data sharing relationships is quite something. The gist is that even though the company claimed - to users and to governments around the world - that its data sharing relationships had been terminated in 2015, many were still active into this year. Moreover, these relationships were established in such a way as to hide the extent of the data sharing from users, possibly in contravention of GDPR and of the company's reporting responsibilities to the FTC:

“This is just giving third parties permission to harvest data without you being informed of it or giving consent to it,” said David Vladeck, who formerly ran the F.T.C.’s consumer protection bureau. “I don’t understand how this unconsented-to data harvesting can at all be justified under the consent decree.”

The company's own press release response to the reporting attempts to sugarcoat the facts, but essentially agrees that this happened. Data was shared with third parties during the period when the company declared that this wasn't happening, and often without user permission or understanding.
Back to the NYT article to make the implications clear:

The social network allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

In September 2007, I flew out to Silicon Valley to participate in something called the Data Sharing Summit, organized by Marc Canter. At the time, I was working on Elgg, and we believed strongly in establishing open APIs so that people wouldn't be siloed into their social networks and web services. I met people there who have remained friends for the rest of my career. And all of us wanted open access to APIs so that users could move their data around, and so that startups wouldn't face as high a barrier to entry into the market.

That was an ongoing meme in the industry ten years ago: open data, open APIs. It's one that clearly informed Facebook's design decisions. I certainly bought into it. And to some extent I still do, although I'd now prefer to go several steps further and architect systems with no central point of control or data storage at all. But such systems - whether centralized or decentralized - need to center around giving control to the user. Even at the Data Sharing Summit, we quickly realized that data control was a more meaningful notion than data ownership. Who gets to say what can happen to my data? And who gets to see it?

Establishing behind-the-scenes reciprocal data sharing agreements with partners breaks the implicit trust contract that a service has with its users.

Facebook clued us in to how much power it held in 2011, when it introduced its timeline feature. I managed to give this fairly asinine quote to the New York Times back then:

“We’ve all been dropping status updates and photos into a void,” said Ben Werdmuller, the chief technology officer at Latakoo, a video service. “We knew we were sharing this much, of course, but it’s weird to realize they’ve been keeping this information and can serve it up for anyone to see.”

Mr. Werdmuller, who lives in Berkeley, Calif., said the experience of browsing through his social history on Facebook, complete with pictures of old flames, was emotionally evocative — not unlike unearthing an old yearbook or a shoebox filled with photographs and letters.

My point had actually not so much been about "old flames" as about relationships: it became clear that Facebook understood everyone you had a relationship with, not just the people you had added as a friend. Few pieces dove into the real implications of having all that data in one place, because at the time it seemed like the stuff of dystopian science fiction. Some of us were harping on about it, but it was so far outside of mainstream discourse that it sounded crazy. But here we are, in 2018, and we've manifested the panopticon.

In the same way that the timeline made the implications of posting on Facebook clear, this year's revelations represent another sea change in our collective understanding. Last time - and every time there has been this kind of perspective shift - the Overton window has shifted and we've collectively adjusted our expectations to incorporate it. I worry that by next election, we'll be fairly used to the idea of extensive private surveillance (as a declared fact rather than ideological speculation), and the practice will continue. And then the next set of perspective shifts will be genuinely horrifying.

Questions left unanswered: what information is Facebook sharing with Palantir, or the security services? To what extent are undeclared data-sharing relationships used to deport people, or to identify individuals who should be closely monitored? Is it used to identify subversives? And beyond the effects of data sharing, given what we know about the chilling effects surveillance has on democracy, what effect on democratic discourse has the omnipresence of the social media feed already had - and to what extent is this intentional?

I'm done assuming good faith; I'm done assuming incompetence; I'm done assuming ignorance. I hope you are too.

 

Image: Elevation, section and plan of Jeremy Bentham's Panopticon penitentiary, drawn by Willey Reveley, 1791, from Wikipedia


Grou.ps 2.0 is Explode, kinda

I'm getting really strong déjà vu from Grou.ps 2.0 - an open source social networking platform that is now entirely implemented in JavaScript widgets.

From the Product Hunt thread:

I should also note that Grou.ps v2 is actually a set of GraphJS widgets Put on top of a gorgeous Bootstrap theme. That makes it super easy for developers to (a) come up with new templates, look & feel (b) port the social functionality to other digital assets they may have, like a mobile app, WordPress blog or website (but Not cryptocurrencies LOL)

GraphJS has been around for a little while, and is designed to be an embeddable set of widgets.

Explode was an embeddable JavaScript social network I made out of Elgg almost twelve years ago. Here's the TechCrunch article from the time:

A new open source cross-site social networking service called Explode launched today and looks like a very appealing alternative to the now Yahoo! owned MyBlogLog.  Built by UK open-source social network provider Curverider (whose primary product, Elgg, is similar to PeopleAggregator), Explode offers an embeddable widget that links out to users’ respective profile pages on any social network but allows commenting and befriending in one aggregated location.  I found Explode via Steve O’Hear’s The Social Web, one of my new favorite blogs.

What's old is truly new again. It's interesting to see people experiment with open social networks in 2018 - something I spent at least a decade of my life on. More power to them.


Unlock and Joint Ownership

Since August, I've been helping my friend Julien Genestoux at his startup Unlock.

Unlock is a protocol which enables creators to monetize their content with a few lines of code in a fully decentralized way. In the initial version, anyone can sell their work on the internet by adding two lines of code. Those are the only steps: create your content; add code; you're ready to accept payment. It's blockchain-based, so it's equally accessible to everyone in the world, both to buy and to sell.

It's really a decentralized protocol for access control. There are two elements to consider: a lock, and a set of keys. You place a lock on some content to protect it; anyone with a key for that particular lock can access it. Publishers can use the same lock for as many different items of content as they want, and anyone with an appropriate key can access all of it. Content could be an article, a video, a podcast, or a software application. It can also be a mailing list, which is on the roadmap for 2019.
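The lock-and-key model above can be sketched in a few lines of JavaScript. This is a hypothetical in-memory illustration of the concept, not the Unlock protocol's actual API; the function names (`createLock`, `purchaseKey`, `canAccess`) are mine.

```javascript
// Hypothetical sketch of the lock/key access-control model.
// In the real protocol, locks live on a blockchain and keys are
// purchased via on-chain transactions; this is an in-memory stand-in.
function createLock(price) {
  return { price, keys: new Set() };
}

function purchaseKey(lock, buyer) {
  // On-chain, this would transfer funds and mint a key for the buyer.
  lock.keys.add(buyer);
}

function canAccess(lock, visitor) {
  // One lock can protect any number of pieces of content;
  // a key holder can access all of them.
  return lock.keys.has(visitor);
}

const lock = createLock(5);
purchaseKey(lock, "alice");
console.log(canAccess(lock, "alice")); // true
console.log(canAccess(lock, "bob"));   // false
```

The point of the structure is that the access check is independent of any particular piece of content: the publisher reuses one lock across everything it protects.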

It's an open protocol at heart, which means it starts to get really interesting when other people begin to build on it. The initial Unlock code is a paywall; you can run our hosted version, or you can install the software and run your own. But you can also take the Unlock blockchain and structure and build something completely new. Over time, there will be more Unlock code and libraries that you can use as building blocks. Unlock, Inc doesn't need to be the central hub, and it doesn't need to own the blockchain. Unlike a service like Twitter, where the underlying company gets value by controlling access (and running ads), and therefore developers may get burned if they use it to underpin their products, Unlock the company is physically incapable of exerting central control over the Unlock Protocol.

I think what I've described is a good thing for the web - Unlock is the low-friction payments layer that should have been there from the very beginning - but much more is possible, and this isn't a "decentralize all the things!" argument. There are concrete benefits for businesses today. One thing I'm particularly excited about is that, because the blockchain is both transparent and decentralized, jointly-owned content becomes much more possible.

Two hypothetical examples:

Radiotopia is a podcast co-operative. Each podcast is wholly owned by its producer, but they raise money together and distribute funds as a stipend between them. Right now, they're fundraising using CommitChange; funds presumably pool to one central point - someone holds a bank account - and then are distributed by a human. But what if they could raise money by creating a lock that people purchase keys for, and the proceeds from that lock were automatically and transparently sent to every member of the Radiotopia network? They could still use CommitChange as a front end (particularly as it's based on the open source Houdini project), but their accounting and payments overhead would be dramatically lower. Each member of the network would also be able to trust that payments were made to them immediately and automatically. And for new networks - baby Radiotopias - creating a bundled content network becomes just a case of deciding to work together.
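The automatic, transparent split described here might look something like the sketch below. It's purely illustrative: the function name is hypothetical, and equal division among members is my assumption - a real lock could encode any split.

```javascript
// Hypothetical sketch: divide the proceeds of a key purchase equally
// among network members. On-chain, this would happen per transaction,
// transparently, with no custodial bank account in the middle.
function splitProceeds(amount, members) {
  const share = amount / members.length;
  return Object.fromEntries(members.map((member) => [member, share]));
}

const payout = splitProceeds(90, ["showA", "showB", "showC"]);
console.log(payout); // { showA: 30, showB: 30, showC: 30 }
```

Because every payment is divided at the moment it arrives, each member can verify their share without trusting a human bookkeeper.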

Project Facet is an open source project for collaborative journalism. Increasingly, in a world of budget cuts and changing business models, newsrooms need to collaborate to produce investigative reporting. Right now, they pool resources in informal ways, and produce separate stories based on the reporting. With the Unlock Protocol, they could collaborate on the substance of the stories themselves, and put them under a shared lock that automatically pools the revenue between the participating organizations. This would be much harder in a universe where you'd have a custodial bank account and an accountant who made payments; here it could be fully transparent, and fully automatic.

These are purely hypothetical, and non-exclusive; much more is possible. Even just a flexible paywall, or paid-for mailing lists, would be exciting on their own. The point is that we can think beyond how we've traditionally restricted access, and how we've transferred value. Personally, in my work, I'm most motivated by concrete human use cases - and Unlock illustrates how blockchain services have a lot of potential. This isn't an ICO, and it's not a speculative coin play. It's a way for creators to pool and share value, and make money from their work in a flexible way. And that's exciting to me.

The code is fully open; you can get involved here.

 

Photo by Francois Hurtaud on Unsplash


The Trolls from Olgino, the Saboteurs from Menlo Park

There's a lot in the news this morning about online influence campaigns conducted by the Internet Research Agency, a propaganda firm with close ties to the Russian government. Two reports were prepared for the Senate Intelligence Committee: one by an Austin-based security firm called New Knowledge, and the other by the Oxford Internet Institute's Computational Propaganda Project.

As of today, both are freely available online. Here's the full New Knowledge report; here's the full Oxford Internet Institute report.

This is the first time we've really heard about Instagram being used for an influence campaign, but it shouldn't be a surprise: if I say the word "influencer", it's probably the first platform that you think of. Like any decent digital advertising campaign, this one was cross-platform, recognizing that different demographics and communities engage on different sites. In a world where 44% of users aged 18 to 29 have logged out of the Facebook mothership, any campaign hoping to reach young people would have to include Instagram. And of course, that's why Facebook bought the service to begin with.

News stories continue to paint this as some kind of highly sophisticated propaganda program masterminded by the Russian government. And it does seem like the Russian government was involved in this influence campaign. But this is how modern digital campaigns are run. People have been building Facebook Pages to gain as many likes as possible since the feature was released, precisely so they can monetize their posts and potentially sell them on to people who need to reach a large audience quickly. Influencers - people who are paid to seed opinions online - will represent $10 billion in advertising spending by 2020.

It is, of course, deeply problematic that a foreign influence campaign was so widespread and successful in the 2016 election - I have no desire to downplay this, particularly in our current, dire, political environment. But I also think we're skimming the surface: because of America's place in the world, it's highly likely that there were many other parallel influence campaigns, both from foreign and domestic sources. And all of us are subject to an insidious kind of targeted marketing for all kinds of things - from soft drinks to capitalism itself - from all kinds of sources.

The Iowa Writers' Workshop is one of the most influential artistic hubs of the twentieth century. More than half of the creative writing programs founded after its creation were started by Iowa graduates; it helped spur the incredible creative boom in American literature over the next few decades. And its director, Paul Engle, funded it by convincing American institutions - like the CIA and the Rockefeller Foundation - that literature from an American, capitalist perspective would help fight communism. It could be argued that much of the literature that emerged from the Workshop's orbit was an influence campaign. More subtle and independent than the social media campaigns we see today, for sure, but with a similar intent: influence the opinions of the public in the service of a political goal.

And of course, Joseph Goebbels was heavily influenced in his approach by Edward Bernays, the American founder of modern public relations, who realized he could apply the principles of propaganda to marketing. Even today, that murderous legacy lives on: the Facebook misinformation campaigns around the genocide in Myanmar are its spiritual successor.

So political influence campaigns are not new, and they have the potential to do great harm. The Russian influence campaign is probably not even the most recent event in the long history of information warfare. While it's important to identify that this happened, and certainly to root out collusion with American politicians who may have illegally used this as a technique to win elections, I think it's also important to go a level deeper and untangle the transmission vector as well as this particular incident.

Every social network subsists on influence campaigns to different degrees. There's no doubt that Facebook's $415 billion market cap is fuelled by companies who want to influence the feed where half of adults - disproportionately from lower incomes - get their news. That's Facebook's economic engine; it's how it was designed to work. The same is true of Instagram, Twitter, and the rest, with the caveat that a feed with a lower population density is less valuable, and less able to have a measurable impact on the public discourse at large. There's one exception: while Twitter has significantly lower user numbers, it is heavily used by journalists and educators, who are then liable to share information gleaned there. Consider the number of news stories of the form, "here's what Trump tweeted today," which are then read by people who have never logged on to Twitter and wouldn't otherwise have seen the tweets.

The root cause of these misinformation campaigns is that people will do whatever they can to obtain, or hold onto, power. I don't think solving this is going to be possible during the entire remaining span of human civilization. So instead, let's think about how we can limit the "whatever they can" portion of the sentence. If people are going to use every means at their disposal to obtain power, how can we safety-check the tools we make in order to inhibit people from using them for this purpose?

Moving on from targeted advertising is a part of the answer. So is limiting the size of social networks: Facebook's 2.27 billion monthly active users are a disease vector for misinformation. As I've written before, its effective monopoly is directly harmful. Smaller communities, loosely joined, running different software and monetized in different ways, would make it much harder for a single campaign to spread to a wide audience. Influence campaigns would continue to run, but they would encounter community borders much more quickly.

A final piece is legislation. It's time for both privacy and transparency rules to be enacted around online advertising, and around user accounts. For their protection, users need to know if a message was posted by a human; they also need to know who placed an advertisement. And advertising for any kind of political topic in an election period should be banned outright, no matter who placed it, as it was in the UK. You can't have a democratic society without free and open debate - and you can't have free and open debate if one side is weighted with the force of millions of dollars, Facebook's market cap be damned.

 

Photo by Jakob Owens on Unsplash


Checking in on my social media fast

Three weeks ago, I decided to go dark on social media. No convoluted account deletion process; no backups. I just logged out everywhere, and deleted all my apps. It's one of the best things I've ever done.

I thought I'd check in with a quick breakdown: what worked, and what didn't. Here we go.

 

What worked

I haven't logged into Twitter, Facebook, or Instagram. I feel much calmer for it. I also feel better for not contributing to the Facebook machine. And I've gained 7 to 10 hours a week in time I'm not looking at my phone.

Crucially, I don't feel like I'm missing out or going to be forgotten, which were two of the things I was afraid of. I miss out on the hour-to-hour outrage but stay on top of the important news. Lots of people have reached out to me; I've reached out to others; I've had more non-work one-on-one email conversations than I've had in a decade. It's led to lunches, and meeting up with people for dinner that I haven't seen in ages - it's been genuinely great.

The way I use my phone when I am looking at my phone has changed, too. I'm reading a lot more news and long-form content. I treated myself to a New Yorker digital subscription, which has been nourishing. (I've also got subscriptions to the NYT, Washington Post, and WSJ, and I'm realizing that I'm missing a more international perspective. Recommendations needed!) I'm still thinking about this James Baldwin essay. I've started heavily using Pocket to save articles I might want to do something with later. Have you read the Laurie Penny blockchain cruise piece? You really have to.

And I'm blogging a lot more. For the first week or so, I felt compelled to write something every day. I'm definitely not doing that now, but not tweeting lets the thoughts bubble up until they're something a little more substantial. I've also branched out into writing things for other outlets; I'm hoping one will show up today. But the best part about blogging is that writing helps me order my thoughts and go deeper on topics I'm interested in. It also, for more personal subjects, helps me process.

 

What didn't work

I had to log back into LinkedIn. Of all the social networks, I'm sad that this is the one that proved indispensable. But it turns out I don't have a lot of people's email addresses, so when I needed to reach out to someone, I couldn't do it any other way. I've accepted my fate here for now, but I'm fairly uncomfortable with Microsoft being at the center of my professional relationships, so I'll need to figure something else out.

And I can't help it: I check Google Analytics for my blog. It's taken the place of hoping for interactions on my tweets, and the little realtime graph still provides enough of a dopamine rush to give me a hit. I need to wean myself away - perhaps by simply removing Google Analytics from my site. (Arguably, if I'm serious about decentralization and privacy, this is something I should do anyway - so I've just talked myself into it. It'll be gone today.)

I still spend far too much time looking at my phone. I thought about illustrating this piece with some stats, but I decided not to. They're embarrassing.

Finally: my blog is still mostly about tech. Or at least, it has been - but that's not the entirety of what I read and think about. So I'm trying to figure out if I want to have two outlets, or if anyone cares whether I digress from user privacy to talk about writing for Doctor Who or - and this might be a piece that happens soon - making pad thai for my mother. In some ways, I feel like I need to ask your permission to do this, which is sad, and I shouldn't. (So, again: I've just talked myself into not worrying about it.)

In other words: I haven't been bold enough. I could go further. So, I will.

 

Conclusions so far

This change has been more positive than expected. I'll probably keep it up in the new year, perhaps with some tweaks. Give it a try!


With RAD, podcasters can finally learn who's listening

NPR announced Remote Audio Data today: a technology standard for sending podcast audience analytics back to their publishers. Podcasting is one of the few truly decentralized publishing ecosystems left on the web, and it's a relief to see that this is as decentralized as it should be.

Moreover, it's exactly the role public media should be playing: they convened a group of interested parties and created an underlying open source layer that benefits everyone. One of the major issues in the podcast ecosystem is that nobody has good data about who's actually listening; most people use Apple's stats, look at their download numbers, and make inferences. This will change the game - and in a way that directly benefits podcast publishers rather than any single central gatekeeper.

What's not listed in the spec is a standard way to disclose to the listener that their analytics are being shared. This may fall afoul of GDPR and similar legislation if not handled properly; to be honest, I'd hope that any ethical podcast player would ask permission before sending this information, giving me the opportunity to tell it not to. Still, at least for the five minutes before everyone starts sending their listening data to be processed by Google Analytics, this is an order of magnitude better than using Apple as a clearinghouse.

Here's a quick technical overview of how it works:

While MP3 files mostly contain audio, they can also contain something called an ID3 tag for human-readable information like song title, album name, artist, and genre. RAD adds a JSON-encoded remoteAudioData field, which in turn contains two arrays: trackingUrls and events. It can also list a custom podcastId and episodeId. Events have an optional label and mandatory eventTime, expressed as hh:mm:ss.sss, and can have any number of other keys and values.

The example data from the spec looks like this:

{
 "remoteAudioData": {
   "podcastId":"510298",
   "episodeId":"497679856",
   "trackingUrls": [
     "https://tracking.publisher1.org/remote_audio_data",
     "https://tracking.publisher2.org/remote_audio_data",
      "https://tracking.publisherN.org/remote_audio_data"
    ],
   "events": [
     {
       "eventTime":"00:00:00.000",
       "label":"podcastDownload",
       "spId":"0",
       "creativeId":"0",
       "adPosition":"0",
       "eventNum":"0"
     },
     {
       "eventTime":"00:00:05.000",
       "label":"podcastStart",
       "spId":"0",
       "creativeId":"0",
       "adPosition":"0",
       "eventNum":"1"
     },
     {
       "eventTime":"00:05:00.000",
       "label":"breakStart",
       "spId":"123456",
       "creativeId":"1234567",
       "adPosition":"1",
       "eventNum":"2"
     },
     {
       "eventTime":"00:05:15.000",
       "label":"breakEnd",
       "spId":"123456",
       "creativeId":"1234567",
       "adPosition":"1",
       "eventNum":"3"
     }
   ]
 }
}

The podcast player sends a POST request to the URLs listed in trackingUrls, wrapped in a session ID and optionally containing the episodeId and podcastId. By default the player should send this at least once per hour, although the MP3 file can specify a different interval by including a submissionInterval parameter. The intention is that the podcast player stores events and can send them asynchronously, because podcasts are often listened to when there's no available internet connection. After a default of two weeks without sending, events are discarded.
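That store-and-forward behavior might be sketched like this. The spec defines the wire format rather than a client API, so the names here (`pendingEvents`, `shouldFlush`, `recordedAt`) are hypothetical; the two-week discard window and the hourly default interval come from the description above.

```javascript
// Hypothetical sketch of a RAD client's store-and-forward queue.
// Events are recorded locally while offline, flushed on an interval,
// and discarded after two weeks, per the spec's defaults.
const TWO_WEEKS_MS = 14 * 24 * 60 * 60 * 1000;
const DEFAULT_INTERVAL_MS = 60 * 60 * 1000; // hourly, unless submissionInterval says otherwise

function pendingEvents(queue, now) {
  // Drop events that have waited longer than two weeks without being sent.
  return queue.filter((event) => now - event.recordedAt <= TWO_WEEKS_MS);
}

function shouldFlush(lastSentAt, now, intervalMs = DEFAULT_INTERVAL_MS) {
  // Flush when at least one submission interval has elapsed.
  return now - lastSentAt >= intervalMs;
}

const now = Date.now();
const queue = [
  { label: "podcastStart", recordedAt: now - 1000 },                     // fresh
  { label: "breakStart", recordedAt: now - 15 * 24 * 60 * 60 * 1000 },   // stale
];
console.log(pendingEvents(queue, now).length); // 1
```

A real player would batch the surviving events into the audioSessions payload shown below and POST it to each tracking URL once connectivity returns.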

Here's an example JSON payload sent to a tracking URL, from the spec:

{
 "audioSessions": [
   {
     "podcastId": "510313",
     "episodeId": "525083696",
     "sessionId": "A489C3AD-04AA-4B5F-8289-4D3D2CFE4CFB",
     "events": [
       {
         "sponsorId": "0",
         "creativeId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "label": "podcastDownload",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:07+04:00"
       },
       {
         "sponsorId": "0",
         "creativeId": "0",
         "eventTime": "00:00:05.000",
         "adPosition": "0",
         "label": "podcastStart",
         "eventNum": "1",
         "timestamp": "2018-10-24T11:23:08+04:00"
       },
       {
         "sponsorId": "111128",
         "eventTime": "00:00:05.000",
         "adPosition": "1",
         "label": "breakStart",
         "creativeId": "1111132",
         "eventNum": "2",
         "timestamp": "2018-10-24T11:23:09+04:00"
       },
       {
         "label": "breakEnd",
         "sponsorId": "111128",
         "eventTime": "00:00:05.000",
         "adPosition": "1",
         "creativeId": "1111132",
         "eventNum": "3",
         "timestamp": "2018-10-24T11:23:10+04:00"
        }
     ]
   },
   {
     "podcastId": "510314",
     "episodeId": "525083697",
     "sessionId": "778A4569-4B06-469B-8686-519C3B43C31F",
     "events": [
       {
         "sponsorId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "creativeId": "0",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:11+04:00"
       }
     ]
    },
   {
     "podcastId": "510315",
     "episodeId": "525083698",
     "sessionId": "F825BE2B-9759-438A-A67E-9C2D54874B4F",
     "events": [
       {
         "sponsorId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "label": "podcastDownload",
         "creativeId": "0",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:12+04:00"
       }
     ]
   }
 ]
}

It's a very simple, smart solution. There's more information at the RAD homepage, and mobile developers can grab Android or iOS SDKs on GitHub.


Examining the degrees of Fortune 500 tech CEOs

One of my recurring regrets is that I stopped at a bachelor's degree. There are many times when I wonder if having an MBA - or just the deeper study that a master's or even a PhD would provide - would be useful.

There's also a recurring meme in the tech industry that you don't need university. The story of the kid who drops out of college to start a multi-billion dollar business is often repeated. I suspected it wasn't true, but I didn't know.

So I posed the question: assuming your goal is to be the CEO of a big tech company, and based on previous experience, what should you study?

This comes with a big asterisk: being the CEO of a big tech company is not currently my goal, and I believe in education for education's sake, rather than to meet a career need. Treating education as a purely vocational pursuit is how we get to phenomenally expensive courses that are tied to salary expectations, rather than treating the collective pursuit of human knowledge as a common good, regardless of its ability to lead to a well-paying position.

Still, I thought it was worth doing the research. I took the relevant tech companies in the 2017 Fortune 500 list, and looked at two datasets: their first CEOs, and their current CEOs. (Where a company is no longer independent, like Yahoo, I used the last person who was CEO when it was - in this case, Marissa Mayer. And because Eric Schmidt was CEO of Google before Larry Page, while not technically being a founder, he is listed here.) In both cases, I did my best to find their educational history: the degrees they undertook, at which level, and at which institutions. There are probably mistakes, but here's the whole dataset.

Here are some interesting highlights. I'm curious if there's more you've discovered from the data - let me know!

 

Founding CEOs

7 out of 38 CEOs dropped out of college: Bill Gates (Microsoft), Steve Jobs (Apple), Mark Zuckerberg (Facebook), Larry Ellison (Oracle), Michael Dell (Dell), William Morean (Jabil Circuit), and David Packard (HP). It's worth noting that Packard and Morean dropped out decades before the others. Packard later went back and got a Master's degree in Electrical Engineering from Stanford.

13 out of 38 CEOs have a Master's or higher degree. Electrical Engineering and Computer Science dominate the subjects taken, followed by Physics. None have an MBA, although Sandy Lerner, founder of Cisco, has a Master's in Econometrics.

5 have a doctorate, with Computer Science again being the most common.

 

Current CEOs

No current CEO of a Fortune 500 tech company is a college dropout: every one of them finished a degree. This stands to reason: while founders can effectively luck or hustle their way into the position, it's highly unlikely that a new CEO will be hired without one.

In this list, Business Administration vies with Computer Science for being the top subject taken at an undergraduate level.

22 out of 38 CEOs have a Master's or higher degree. 13 have an MBA (again, compared to zero founders); 2 more did Law. Computer Science is relatively rare at the postgraduate level - and no current CEOs have a doctorate.

 

Fields of Study Overall

At the Undergraduate level, Electrical Engineering and Computer Science take the top spots, with 13 and 7 CEOs respectively. These two subjects are, of course, highly related, and often taught together. Mathematics and Mechanical Engineering also rate highly.

While Business Administration and Law have a few takers (7 and 3 respectively), most non-science subjects are represented with just one CEO each.

MBAs are by far the most popular graduate degrees, unsurprisingly, with 13 CEOs represented. 9 CEOs have a Master's in Computer Science; 6 in Electrical Engineering; and 3 in Law. There are no humanities represented at the Master's level.

 

Institutions

More Fortune 500 tech industry CEOs went to Stanford than anywhere else. That's completely unsurprising, given Stanford's interrelated history with Silicon Valley. This is then followed in quick succession by the University of Michigan, UC Berkeley, MIT, the University of Texas, and Princeton.

The only two CEOs to go to Harvard - Bill Gates and Mark Zuckerberg - dropped out of it.

More interestingly, there are very few institutions represented outside of the United States. While 51% of founders of $1bn+ companies are immigrants, most of them went to school here. No foreign university has more than one individual CEO representing it, although India and Canada have three CEOs each. I was also surprised (but not disappointed) to see that the only UK institution represented is not Oxford or Cambridge, but Bradford Polytechnic (now the University of Bradford). Bradford is one of the most racially diverse cities in Britain.

 

Conclusions

Stay in school, kids. The dropout CEOs all happened to be in the right place at the right time for a moment in computing that will never be repeated. There may be new moments, but the advantages of having a degree far outweigh any chance that you'll be successful without one.

Founding CEOs are most likely to be computer scientists or electrical engineers, with a deep level of technical knowledge that allows them to connect their vision with the feasible realities. Hired CEOs are likely to be business professionals who have risen through the ranks and found their position more deliberately. Of course, the degree taken doesn't deal with the intangibles - there are a host of skills and personality traits needed to run a company successfully. I could give my opinions as to what they are, but they can't be quantified.

Obviously, overall, my data is incomplete. There's more to examine, I'm sure there are entries to correct, and I haven't touched interim CEOs (those who served between the first CEO and the current one). But I hope you'll agree it's an interesting exploration.

Should I go back to school? Maybe, eventually. An MBA or law degree probably would be useful. At the same time, I wish there were more CEOs with humanities degrees. What's clearer is that technical skills are an enormous boon - not a surprising finding, but it's interesting to see it reinforced. So it stands to reason that continuing to develop those skills, and keeping them sharp and up to date, is worth investing in.

Finally: AirTable made this much faster. I love it, and I'll sing its praises forever. If you haven't yet, give it a try.

 

Thanks to my sister Hannah Werdmuller for helping me to do the underlying research.


 

Persuading people to use ethical tech

I've been in the business of getting people to use ideologically-driven technology for most of my career (with one or two exceptions). Leaving out the less ideologically driven positions, it goes something like this:

Elgg: We needed to convince people that, if they're going to run an online community, they should use one that allows them to store their own data anywhere, embraces open standards, and can run in any web browser (which, at the height of Internet Explorer's reign, was a real consideration).

Latakoo: In a world where journalism is experiencing severe budget cuts, we needed to persuade newsrooms that they shouldn't buy technology with astronomically expensive licenses and then literally build it into the architecture of their buildings (when I first discovered that this was happening, it took a while for my jaw to return to the un-dropped position).

Known: We needed to convince people that, if they're going to run an online community-- oh, you get the idea.

Matter: We needed to convince investors that they should put their money into startups that were designed to have a positive social mission as well as perform well financially - and that media was a sound sector to put money into to begin with.

Unlock: We need to persuade people that they should sell their work online through an open platform with no middleman, rather than a traditional payment processor or gateway.

That's a lot of ice skating uphill!

So how do you go about selling these ideas?

One of the most common ideas I've heard from other startup founders is "educating the market". If people only knew how important web standards were, or if they only knew more about privacy, or about identity, they would jump on board the better solution we've made for them in droves. We know what's best for them; once they're smarter, they'll know what's best for them too - and it's us!

Needless to say, it rarely works.

The truth comes down to this: people have stuff to do. Everyone has their own motivations and needs, and they probably don't have time to think about the issues that you hold dear to your heart. Your needs - your worries about how technology is built and used, in this case - are not their needs. And the only way to persuade people to use a product is for it to meet their deeply-held, unmet needs.

If you have limited resources, you're probably not going to pull the market to you. But if you understand the space well and understand people well, you can make a strong hypothesis about whether the market is going to come to you at some point. If you think the market is going to want what you're building two or three years out, and you can demonstrate why this is the case (i.e., it's a hypothesis founded on research, not just a hunch) - then that's a good time to start working on a product.

Which is why, although many of us spent decades crowing about the need for web products that don't spy on you, it took the aftermath of the 2016 election for many people to come around. Most people aren't there yet, but the market is changing, and tech companies will change their policies to match. The era of tracking won't come to an end because of activist developers like me - it'll come to an end because we failed, and Facebook's ludicrous policies (which, to be clear, aren't really different to the policies of many tech companies) reached their damaging logical conclusion, allowing everyone to see the full implications.

So if an ideology-first approach usually fails, how did we persuade people?

The truth is, it wasn't about the ideology at all. Elgg worked because people needed to customize community spaces and we provided the only platform at the time that would let them. Latakoo worked because it allowed journalists to send video footage faster and more conveniently than any other solution. Known didn't work because we allowed the ideology to become its selling point, when we should have concentrated on allowing people to build cross-device, multi-media communities quickly and easily (the good news is that because it's open source, there's still time for it). Unlock will work if it's the easiest and most profitable way for people to make money from their work online.

You can (and should) build a tool ethically; unless you're building for a small, niche audience, you can't make ethics be the whole tool. Having deep knowledge of, and caring deeply about, the platform doesn't absolve you from the core things you need to do when you're building any product. Which, first and foremost, is this: make something that people want. Scratch their itch, not yours. Know exactly who you're building for. And make them the ultimate referee of what your product is.


 

The Facebook emails

I still need to read the documents unsealed by British Parliament for myself, but they seem pretty revealing.

From the Parliamentary summary itself:

Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.

[...] It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.

[...] Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.

In the New York Times:

Emails and other internal Facebook documents released by a British parliamentary committee on Wednesday show how the social media giant gave favored companies like Airbnb, Lyft and Netflix special access to users’ data.

In Forbes:

In one 2013 email from Facebook's director of platform partnerships Konstantinos Papamiltiadis, the executive tells staff that “apps that don’t spend” will have their permissions revoked.

“Communicate to the rest that they need to spend on NEKO $250k a year to maintain access to the data,” he wrote. NEKO is an acronym used at Facebook to describe app install ads, according to The Wall Street Journal.

Meanwhile, the email cache reveals that Facebook shut down Vine's access to the Facebook friends API on the day it was released. Justin Osofsky, VP for Global Operations and Corporate Development, wrote Mark Zuckerberg at the time:

Twitter launched Vine today which lets you shoot multiple short video segments to make one single, 6-second video. As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision.

Zuckerberg's reply:

Yup, go for it.

Purely coincidentally, I'm sure, Facebook changed this policy yesterday. As TechCrunch reported:

Facebook will now freely allow developers to build competitors to its features upon its own platform. Today Facebook announced it will drop Platform Policy section 4.1, which stipulates “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

That policy felt pretty disingenuous given how aggressively Facebook has replicated everyone else’s core functionality, from Snapchat to Twitter and beyond. Facebook had previously enforced the policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.

It will be interesting to follow the repercussions of this release. My hope is that we'll finally see some action from the US government in the new year. In the meantime, it's ludicrous that it took action from the UK - and legislation from the EU - to bring some of this to light.

 


 

It chops you into pieces

I've long said that there are two sectors I will never work in: banking and the military.

The reason I wouldn't want to work on technology for the military is hopefully obvious: I don't want my work to contribute towards killing people, in any capacity. I haven't stood behind any modern war we've been involved in, and it's bad enough that my tax money is being used to support those efforts, as opposed to creating a strong social safety net and supporting important infrastructure like schools and public healthcare. Building software to kill people would make me a murderer, and I have no interest in being more complicit than I already am.

Banking might be less obvious. For one thing, many modern banks have actually invested in arms manufacturers and traders. But modern banks are part of a system that forces people into poverty and is fueling unprecedented inequality. We need banks, but I think very few today are ethical.

One effect of rising inequality is that, to protect yourself, you can find yourself softening your morals. Wouldn't it be nice to have one of those big company salaries? It's easy to think that way - and who could blame you, when you live in a place where you can still struggle on a six figure salary? Some companies are paying half a million dollars a year to the right technical leaders. Some are paying much more than that. Wouldn't that make everything better? To not have to worry about money all the time? To feel successful?

Tying success into money is a trap, of course. Everyone's definition of success is different; mine is about the impact I make, and not the money I bring home. For others, it might be about getting to live in a certain kind of house, or having a particular family dynamic. But when you're rubbing shoulders with millionaires and billionaires when you walk down the street, not having that kind of wealth can get under your skin. What's wrong with you that you don't have a two million dollar home? That person you're friends with has the kind of comfortable lifestyle you could only dream of. Does the fact that you don't mean you're not good enough?

It's as if there are invisible billboards yelling at you to be rich. In fact, there literally are billboards on the freeway into San Francisco advertising the benefits of retiring early. And at the feet of those billboards are people who have been displaced from their homes because of personal tragedy or financial misfortune or simply not having happened to have been born into the right sort of family, who can't have the safety net or routes up that they desperately need. Around the corner from them, a place where you can get eggs on toast for $12, where people in designer hoodies discuss how to minimize their tax burden when they exercise their options.

Not being wealthy is not a value judgment at all - it just means you made different choices, with different priorities. Treating wealth as a measure of worth is a pretty appalling way to think. Consider teachers, nurses, care workers: people at the core of society, without whom nothing could function. They're not earning six figure sums and generous stock options. Perhaps they should be, but they simply don't. There's a whole cultural history of jobs associated with women being valued lower in monetary terms, but there's also this simple fact: a job that pays more money is not more valuable. We value goods in a marketplace in terms of what people are willing to pay for them, but that value is numeric and arbitrary, and doesn't relate to something or someone's meaningful value. A price tag is not social or human worth. The libertarian ideal of the invisible hand of the market, which is baked into the heart of modern American culture, is nothing more than patriarchal fascism.

Your values matter. Everyone has to do what they need to in order to survive, but nobody needs to suspend their moral compass in order to bring in hundreds of thousands of dollars to feed an economic machine that forces people onto the streets for non-compliance. It is not a failure of skills or worth to turn a position down because it does not fit your ethics or life goals. Conversely, it's not in any way glorious to take a high-value position with questionable ethics because the money and market prestige are meaningful. Success at capitalism doesn't automatically absolve anyone of moral responsibility, and doesn't mean that a person is better by any other metric. And it is not possible to become rich without someone else being adversely affected.

But those aren't the messages. Just as fashion magazines constantly broadcast the benefits of skinniness, and just as damagingly, the tech industry has upheld wealth as a virtue. It's not virtuous. Virtue would be a world where everyone has the opportunity and the outcome of living a comfortable life, where everyone is empowered with information, and nobody is effectively put to death because somebody else wants to be rich. Virtue is your core humanity, and not your market cap.


 

Facebook's monopoly is harming consumers

I was asked last week about the ethics of social networks: what would need to change to create a more ethical ecosystem.

Targeted display advertising, of course, has a huge part to play. Facebook created a system designed to capture the attention of its users so that they could interact with advertising that was tailored for them in order to manipulate them into an action or position. People buy advertising on Facebook to drive sales, but they also buy it to manufacture brand awareness and loyalty - and to manipulate users into adopting a political position. Facebook's machine was not originally built to manipulate, but its business model ratified its sociopathy.

The persuasive effect of its targeted advertising and engagement algorithms would have been diminished, however, if Facebook wasn't completely ubiquitous. In Q3 2018, it had 2.27 billion monthly active users. For context, there will be an estimated 3.2 billion people online by the end of the year: Facebook's monthly active users represent 71% of the internet. In America, it's the site most commonly used to discover news, or in other words, to learn about the world.

This is a dangerous responsibility to place in the hands of a single corporation with no meaningful competition. Yes, other social networks exist, but each serves a different purpose. Twitter is a kind of town hall zeitgeist Pandora's box full of wailing souls (sorry, a place that aims to "give everyone the power to create and share ideas and information instantly, without barriers"); Instagram (which, of course, is Facebook again) is the Vogue edition of everybody's life; Snapchat rests on its "mom don't read this" ephemerality. Facebook is designed, as its homepage used to proudly proclaim, to be a social utility that "reinforces connections to the people around you". Over time, it aims to make those social connections dependent on its service.

In a world where Facebook is a core part of life for billions of people, its policy and product decisions have an outsized effect on how its users see the world. Those decisions can certainly swing elections, but they have a measurable effect on public sentiment in other areas, too. It's not so much that the platform is reflective of the global culture; because that culture is shared and discovered on the platform, the culture reflects it. A bad actor with enough time and money can construct a viral message - or suite of messages - that can sweep across billions of people in less than a day. Facebook itself could engage in social engineering, with almost no oversight. There are few barriers; there is no real vaccine beyond a vain hope that Facebook will do the right thing.

But imagine a world where there isn't one Facebook, and we all participate in many social communities across many different platforms. Rather than one mega filter bubble, we engage with lots of bubbles, loosely joined - all controlled by a different entity, potentially in a different culture, with different priorities. In this world, the actions of a single one of these bubbles become less important. If each one is making different policy and product decisions, and is a logically separate network with its own codebase, userbase, and way of working, it becomes significantly harder for anyone to make a message ubiquitous. If each one has a different feed algorithm, while a malicious campaign could infiltrate one network, it would be much harder for it to infiltrate them all. In a healthy market, even discovering all the different communities that a user participates in could become a difficult task.

Arguably, this would require a different funding model to become the norm in Silicon Valley. Venture capital has enabled many businesses to get off the ground with the capitalization they need; it is not always the bad guy. But it also inherently encourages startups to aim towards monopoly. Venture capital funds want their investments to grow at an exponential rate. VCs want to return 3X the value of each fund inside 10 years (typically) - and because most startups fail, they're looking to invest in businesses that will return around 37X their original investment. That usually looks like owning a particular market or market segment, and that's what tends to find its way into pitch decks. "This is a $100 billion market." Subtext: "we have the potential to capture all that". In turn, targeted advertising became popular as a way for startups to make revenue because asking customers for money creates sign-up friction and reduces growth.
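To make that 3X/37X arithmetic concrete, here's a back-of-the-envelope sketch. Every number here is made up for illustration - fund size, check count, and how many winners a fund expects all vary wildly in practice:

```python
# Illustrative-only portfolio math, not a model of any real fund.
fund_size = 100_000_000          # a hypothetical $100M fund
target_multiple = 3              # the fund wants to return ~3X to its investors
num_investments = 25             # equal-sized checks, for simplicity
check_size = fund_size / num_investments

# Assume the power-law outcome common to venture portfolios: a couple of
# winners return nearly everything, and the rest roughly go to zero.
expected_winners = 2
required_return_per_winner = (fund_size * target_multiple) / expected_winners
multiple_per_winner = required_return_per_winner / check_size

print(f"Each winner must return roughly {multiple_per_winner:.1f}X its check")
```

Under these toy assumptions, each winner has to return about 37X its check - which is why a pitch that credibly promises anything less than owning a huge market is a hard sell.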

So accidentally, venture capital creates Facebook-style businesses that aim to grow as big as possible without regard to the social cost. It does not easily support marketplaces with lots of different service providers competing with each other for the same market over a sustained period. And businesses in Silicon Valley have a hard time getting up and running without it, because the cost of living here is so incredibly expensive. Because of the sheer density of people who have experience building technology businesses here, as well as high-end technical talent and a general culture of helpfulness, Silicon Valley is still the best place to start this kind of business. So, VC it is, until we find some other instrument to viably fund tech companies. (Some obvious contenders are out: ICOs have rightly been slapped down by the SEC, and revenue sharing investment only really works for very small amounts of investment.)

Okay, so how about we just break Facebook up, and set a precedent for future businesses, just like we did with Microsoft in the nineties? After all, its impact is even more catastrophic than Microsoft's, and its actions are even more brazenly monopolistic. Everything else aside, consider its use of a VPN app it acquired to identify apps whose usage was threatening Facebook's, so that it could proactively acquire them and shut them down.

American anti-trust law ostensibly exists to protect consumers, rather than competition. As Wired reported a few years ago:

Under current U.S. law, being a "monopoly" is not illegal; nor is trying to best one’s competitors through lower prices, better customer service, greater efficiency, or more rapid innovation. Consumers benefit when Apple disrupts the market with iPhones and iPads, even if this means RIM sells fewer BlackBerries or that Microsoft licenses fewer desktop operating systems. Antitrust law only springs into action against a monopoly when it destroys the ability of another company to enter the market and compete.

The key question, of course, is whether a particular monopoly is harming consumers – or merely harming its competitors for the benefit of those consumers.

With any lens except the most superficial, Facebook fails this test. Yes, its product is free and available to anyone. But we pay with our data and privacy - and ultimately, with our democracy. Facebook's dominance has adversely affected entire industries, swung elections, and fuelled genocides. In the latter case, this hasn't been in the United States - at least, not so far - and perhaps this is one of the reasons why it's escaped serious repercussions. Its effects have been felt in different ways all over the world, and various governments have had to deal with them piecemeal. There is no jurisdiction big enough to cover its full impact. Facebook is, in some ways, more powerful than the government of any nation.

There's one thought that gives me hope. Anyone who has watched Facebook closely knows that it didn't grow through brilliant strategy and genius maneuvering. Its growth curve closely maps to the growth of the internet; it happened to be in the right place at the right time, and managed to not screw it up enough to drive people away. As people joined the internet for the first time, they needed a place to go, and Facebook was it. (The same is true of Instagram, which closely maps to the growth in smartphone camera usage.) As the internet became saturated in developed nations, Facebook's growth curve slowed, and it now needs to bring more people online in developing nations if it wants to continue dominating new markets.

That means two things: Facebook will almost inevitably stagnate, and it is possible for it to be outmaneuvered.

In particular, as new computing paradigms take hold - smart speakers, ambient computing, other devices beyond laptops and smartphones - another platform, or set of platforms, can more easily take its place. When this change inevitably happens, it is our responsibility to find a way to replace it ethically, instead of with yet another monopolistic gatekeeper.

There is work to do. It won't be easy, and the outcome is far from inevitable. But the internet is no longer about code being slung from dorm rooms and garages. It's about democracy, it's deadly serious, and it needs to be treated as such.

 

Photo by JD Lasica, shared on Wikipedia under a CC BY 2.0 license.


 

The unexpected question I ask myself every year

Okay, but seriously, how can I get to work on Doctor Who?

It's a dumb question, but my love for this show runs deep - I've been watching it since I was five years old at least. As a non-aggressive, third culture kid who couldn't fit in no matter how he tried, growing up in Britain in the eighties and nineties, the idea of an alien pacifist who solved problems through intelligence, kindness and empathy appealed to me. It still does. It's brilliant. The best show on TV, by far.

I love it. I love watching it. I love reading the books. I dream complete adventures, sometimes. For real.

I don't need to work on it.

Oh, but I do.

I want to play in that universe. I want to take my knowledge of its 55 years of television, and my deep feeling for the character and the whole ethos of the production, and help to build that world. I want to make things that resonate for children in the same way it resonated for me.

It's not about Daleks or Cybermen or reversing the polarity of the neutron flow. It's about the fundamental humanity of telling stories that teach empathy and peace. It's about an action show where the heroes wield understanding and intuition instead of weapons. It's about an institution that genuinely transcends time and space, after 55 years, in a way that its original creators could never have understood. It's a through line to my life and how I see the world.

It's obviously a pipe dream. Still, every year, I ask myself: "am I any closer to working on Doctor Who?"

Every year, the answer is "no".

It's not like I've been working hard to take my life in that direction. I write, for sure; I've had science fiction stories published. But I work in technology - at the intersection of media and tech, for sure, but still on the side of building software and businesses. There was a time when the show was cast aside, and enthusiasts were welcomed to participate - if not with open arms, then with a markedly lower bar than today, when it's one of the hottest shows on TV.

Someone I went to school with did end up working on the show; her dad, Michael Pickwoad, was the production designer for a time. He worked on TARDIS interiors for Matt Smith and Peter Capaldi, among other things. His daughter worked on it with him for a little bit, and was even name-checked in one episode, when her soul was sucked into the internet through the wifi.

I felt a pang of envy for a moment, but mostly I thought it was cool.

What would you even need to do to work on the show? Should I be focusing more on writing fiction? Should I try and write for something else first? Could I maybe find my way into an advisory position, helping the writers to better understand Silicon Valley? (Because, listen, Kerblam! was a good episode, but the ending ruined the parable. Did Amazon ask you to change it?) I don't understand how this industry works; I don't know where to even begin. The show isn't really even for me, anymore; I'm not the six year old watching Peter Davison on BBC1 while I sit cross-legged on the floor. I'm a grown-ass, middle aged man. And who am I to think I can even stand shoulder to shoulder with the people who do this incredible work? People like Malorie Blackman and Vinay Patel, who wrote this year's standout stories?

Like I said: it's a pipe dream. I'm fine. I don't need to be a part of this. I can just enjoy it. I can.

But.

The year is closing out. We're all preparing to turn over new leaves. A new calendar on the wall means a fresh start. There's so much to look forward to, it feels like the world is finally turning a corner, and I'm working on amazing things.

Just ... look. I just need to ask one question. I can't stop myself, as stupid as it is.

Am I any closer to working on Doctor Who?


 

Asking permission to be heard is an idea that needs to die

I remember reading about Tavi Gevinson when she was just starting out; a wunderkind blogger. Now her media company is winding down - but at least it's winding down on her terms.

Her goodbye letter is beautiful:

In one way, this is not my decision, because digital media has become an increasingly difficult business, and Rookie in its current form is no longer financially sustainable. And in another way, it is my decision—to not do the things that might make it financially sustainable, like selling it to new owners, taking money from investors, or asking readers for donations or subscriptions. And in yet another way, it doesn’t feel like I’m deciding not to do all that, because I have explored all of these options, and am unable to proceed with any of them.

This was what I wanted to help solve. It was my job, but it went far further than that. I was up late at night while writers turned entrepreneurs cried on my shoulder; sometimes, I cried with them. I felt every setback and every problem, always wondering if there was more that I could do. And I did this as just one part of a bigger team, which in turn was just one part of a bigger movement, all understanding the importance of media, all invested in new ways to pay for it.

It is so far from being solved.

And yes, we're talking about a fairly privileged fashion blogger from New York City. But we're also not. This experience is wider and deeper than just this one publication. And even when we are focused back in on Rookie, Gevinson's unique perspective, alongside the unique perspectives of all the previously-unpublished young women writers she supported, is worth preserving.

I want these voices - of women, of people of color, of anyone with a new perspective or an insight or just words that make you feel anything at all - to be sustainable. The world needs them. Our tapestry of culture is better for their presence. Ensuring we have a thriving media has never just been about direct journalism and reporting the news (although those things are vitally important); if we accept that media is the connective tissue of society, the way we learn about the wider world, we must also accept that a vibrancy of diverse voices is central to that understanding. Not just in terms of who gets reported on and whose stories are told, but who gets to make media to begin with.

The most compelling business model for media companies in America is to be propped up by a billionaire. Overwhelmingly, they lose money, and depend on wealthy benefactors to survive - sometimes through acquisition, and other times simply through donation. There are other, incrementally devolved versions of this: in a patronage model, publications ask rich people to pay so that everyone can read. Subscription models create gated communities of information. Kickstarters ask wealthy people to benevolently pay for something to come into existence. In all of these models, young media companies, outlets for voices, must in some way contort themselves to be appealing to rich people, who are predominantly white, straight men, even if those people are not its core audience.

So, then there's this. This quote is so real, and it viscerally conjures up so many feelings for me:

One woman venture capitalist told us, after hearing my very nervous pitch, “I hate to say this because I hate that it’s true, but men who come in here pitch the company they’re going to build, while women pitch the company they’ve already built.” The men could sound delusional, but they could also sound visionary; women felt the need to show their work, to prove themselves.

Women have been told "no" so many times that they don't dare to discuss the true vision for what they want to build. I've certainly been in funding discussions where the true vision was obvious but unspoken; allowing it to flourish required creating a very safe space, and one that I'm inherently less qualified to provide. An ambition unspoken is not an ambition unconceived.

Public media, as with public art, has a measurable impact on everybody's quality of life. There should be public money available for both, as there is in other countries. The reliance on wealthy individuals to graciously provide is perverse. But here we are.

Given the stranglehold that rich people and their agents have on culture, we should empower more diverse people to be able to deploy that money where it's needed. Venture capital firms should aim to have more diverse partners, to make safer spaces for more diverse founders - not just for social reasons, but because immigrants founded over half the billion dollar startups in the world and companies run by women perform better.

We can't, though, accept this status quo. Ensuring that more funding goes towards supporting diverse voices means overhauling the system of funding itself. That means removing all of the funding gatekeepers. Why are they there to begin with?

My ideal would be an independent pool of money that predominantly comes from public funds, but I recognize that this is a very European-style idea that is unlikely to gain traction in the US.

Ultimately, though, it's a half-measure. A solution to a world where the decisions over whose voices get to be heard and who gets to be distributed are related to wealth is simply this: the wealth needs to be more evenly distributed. So many of our diversity problems are because society is unequal. Income inequality continues to grow in the United States, as it does in many other places - and of course, the people who hold an increasing majority of the world's income are white men.

A world where everyone has money in their pocket to support what they care about is one where many different types of voices are supported. That, it seems to me, is the real problem to solve. We have to remove the stranglehold of a very small number of rich people on all of society; a stranglehold that was originally established through racism, colonialism, and oppression. We're building a world where everyone else, and every endeavor that they do not directly control, must go back to them to ask permission. Everything else - problems in our political system, problems with media business models, productivity, economic diversity, crime - can be drawn back to this. We urgently need, coherently, together, and in a way that embraces our ideological diversity and differences of contexts and backgrounds, to find a way to empower everyone to live.

 

Photo by Rita Morais on Unsplash


 

Teaching mission-driven founders

I was privileged to participate in Warwick Business School's Entrepreneurial Finance class yesterday. A group of students, some UK-based and some remote, convened in San Francisco to learn more about how startups raise money. My aim was to give them some storytelling ideas. I wanted to drive home that human-centered narratives are the best way to turn all the interrelated complexities of a business into an easily-digestible form - and that the best startup businesses are human-centered from the outset.

Teaching accelerator classes was one of the best parts of Matter, and I miss it. I was the Director of Investments on the west coast, which meant that in addition to sourcing, evaluating, selecting and investing in the teams that would participate in the Matter accelerator, I taught workshops in the program about fundraising, and held office hours sessions about every aspect of running a startup. It meant I got to use every part of my working skills for the first time outside of being a startup founder: often I would be asked about fundraising strategy and database technologies in the same meeting. For a short while I was also the acting Director of Program, which meant that I taught a wider range of workshops. Because Matter's startups (and therefore founders) are mission-driven, it was fulfilling, meaningful work.

I sometimes dream about being able to take everything I've learned from the Silicon Valley ecosystem and bring it back to Europe; maybe have feet in both worlds, and help European startups find footholds over here. So yesterday's session was fulfilling for me in multiple ways, and I'm looking forward to doing it again.


 

Keeping it small is okay too

I think lifestyle businesses are massively underrated.

In contrast to a venture-funded business, whose aim is to build as much value as quickly as possible, a lifestyle business is intended to allow its owners to maintain a certain level of income - and no more. Growth is nice, but it's not a core aim. In a VC business, you want to achieve a valuation worth billions of dollars that can be realized for you and your investors. In a lifestyle business, you want to do something meaningful that allows you to live well on an ongoing basis.

Imagine you want to create a hundred billion dollar business - a VC-powered behemoth that will be the next Silicon Valley decacorn. Where do you start?

It's not by saying "this product is for everyone". It's not even "this product is for millennials", or "this product is for people with cars". These are insanely broad categories that don't allow you to tailor your business towards the real, nuanced needs of your customers. Sure, they're very large demographic buckets, but they lack definition. It's that definition - that deep understanding of the people you want to be your initial power users - that will allow you to win. You need to have insights about those people that nobody else does, which will, in turn, allow you to serve their needs better than anybody else can.

So, you start with a small group that can sustain the early stages of your business, you serve them well, and you build a deep, loyal base.

Now, imagine you want to create a lifestyle business that will hopefully bring in $20,000 a month. That number may still seem quite high, but living costs are higher than they should be. In my neck of the woods, families earning $117,000 qualify as low income. This leads to horrible inequalities and is one of the most important domestic social justice issues of our time, but that's a topic for another post. Right now, that's the baseline. Assuming reasonable overhead costs, my $20K number might net you a take-home pay of around $10,000 a month, of which San Francisco rent might steal 40%. It's not going to make you a millionaire, but it's certainly more than most people earn, and it allows you to build a financial cushion to get through lean times.
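The back-of-envelope math above can be sketched out. The 50% overhead rate is my own inference from the gap between the $20K revenue and ~$10K take-home figures in the paragraph; none of this is real accounting.

```python
# Rough sketch of the lifestyle-business arithmetic above.
# The 50% overhead rate is an assumption inferred from the post's
# $20K-revenue / ~$10K-take-home figures.
monthly_revenue = 20_000
overhead_rate = 0.50                                # assumed "reasonable overhead"
take_home = monthly_revenue * (1 - overhead_rate)   # ~$10,000/month
sf_rent = take_home * 0.40                          # rent "steals 40%"
cushion = take_home - sf_rent                       # left to save or spend

print(f"take-home: ${take_home:,.0f}, rent: ${sf_rent:,.0f}, "
      f"cushion: ${cushion:,.0f}/month")
```

Even under these optimistic assumptions, the cushion is what makes the "get through lean times" part of the model work.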

What do you need to do?

To make that kind of money, you need to find a small group whose needs are not being met, and serve those needs better than anyone else. To do that, you need to get to know them better than anyone else, and - you get the idea.

Billion dollar decacorns and lifestyle businesses have a lot in common. More often than not, they come from the same kernel, even if they make different trade-offs between profit and growth. The only real difference is, once you have that initial nucleus of a business built, the decisions you take next: do you keep it small and run it profitably for yourself, or do you take investment to grow it to the next level?

I have a lot of respect for the people who choose to keep it small. They might continue to grow their business organically, and end up making a billion dollars a year - the GIS firm Esri is one example of a technology company that was built exactly this way. Or they might just keep it small forever. My first job was for a local Oxford publisher that repeatedly turned down multi-million-pound offers. The owner, a man called John Rose, knew exactly what he wanted (and what he wanted to avoid). He became a local hub, well-known in the community he served, but never wanted to expand beyond that.

There are thousands of tiny businesses that work this way, providing value to their owners and their communities. We don't hear about them, because they don't fit into the rock star startup narrative. But they're there, ticking along, beloved by the people who need them.

 

Photo by rawpixel on Unsplash


 

Gefilte bubbles

My nuclear family - the one I grew up with - has four different accents. My mother's is somewhere between New England and California; my dad's is Dutch with some Swiss German and English inflections; my sister has traveled further down the road towards a Bay Area accent; and mine is just softened enough that most people think I'm from New Zealand. Thanksgiving, like Christmas, is for us a wholly appropriated holiday: not about genocide or holiness, respectively, but simply about being together as a family. Like magpies, we've taken the pieces that resonate with us, and left the rest.

Technically, I'm a Third Culture Kid: "persons raised in a culture other than their parents' or the culture of the country named on their passport (where they are legally considered native) for a significant part of their early development years". I'm not British, but I grew up there; I love the place, but I also did not assimilate.

I've never felt any particular belonging to the countries on my passports, either, which turns out to be a common characteristic among TCKs. Instead, our nationality and religion are found among shared values and the relationships we build. I've written about this before, although back then I didn't fully understand the meta-tribe to which I belonged. It's also part of the Jewish experience, and the experience of any group of people who has been forcibly moved throughout history. Yes, I'm a product of globalization, but that doesn't mean I'm also a product of privilege; migration for many, including my ancestors, has not been optional.

I was well into my thirties before I understood that my experience of culture was radically different to many other people's. It hadn't occurred to me that some people simply inherit norms: the practices of their communities become their practices, too; the way things were done becomes the way things are and will be done. If you live in this sort of cultural filter bubble, challenges to those well-established norms are threatening. We know that people prefer to consume news and information that confirms their existing beliefs; that's why misinformation can be so effective. The same confirmation bias also applies to how people choose to build relationships of all kinds with one another. It's at the heart of xenophobia and racism, at its most overt, but it also manifests in subtler ways.

I lost count of the number of people who told me I should give up my nationalities and become British, or who made fun of my name, or took issue with my lack of understanding of shared cultural norms. Food is just one example of something mundane that can be incredibly contentious: the dishes from your community carry the weight of love and history. When someone presents as being from your community - no visible differences; more or less the same accent, even if they mispronounce a word here and there - but doesn't have any of that shared understanding, it simply doesn't compute.

I'm fascinated by this survey of Third Culture Kid marriages. The TCK blogger Third Culture Mama received 130 responses from TCKs and their spouses, in an effort to discover how cross-cultural relationships can thrive. It's the first time I've seen anything like this, and I found some of the qualitative responses to be unexpectedly comforting. For example:

When multiple cultures are involved it’s easy to idealize your own culture and how you were brought up. But if you can set it aside to listen to another point of view and another way of doing things, you realize there isn’t only one right way. As a couple you need to decide to say “this is how WE do things. This is what WE believe.” Not “this is what she did. Or this is how my family did it growing up.” There is great validity in understanding both of your pasts and how you were raised. But you need to move on from there and choose a path that you go down together. Doing this takes humility, love, and a desire to do right more than to be right. Listen to one another.

Particularly in startup-land, but in the media in general, there is a glut of how-to articles that assume what worked for the author will work for you. It's a great idea to read other peoples' experiences and learn from them, but you can't apply them directly: you have to forge your own path. Rather than take someone else's pattern verbatim and throw yourself into it, you need to build something that is nurturing and right for you. That's true in relationships, and it's true in business. Over half of all billion dollar startups were founded by immigrants, and I think this mindset is one of the reasons why. As an immigrant, you don't have the luxury of following patterns; you have to weave your own from first principles. You can't make assumptions about how people will behave; you have to study them. Taking this outside perspective is a path to success for everyone.

Another response:

Ask questions, let them cook food from their childhood, look at pictures, learn key phrases in their language. Understand that we’re constantly fighting against this dichotomy of wanting to venture off, but also wanting a place to belong. Realize that we approach emotional intimacy and relationships very differently.

For me, the relationships that have worked are the ones where we've made the space to create our own culture together. I'm drawn to outsiders and people who are willing to question established norms, and over time, through trial and error, bad interactions and good, I've come to find slavish adherence to cultural norms in a person as threatening as some people find the opposite in me. I've decided that the edge of established culture is where the interesting work happens, and where some of the most interesting people can be found.

In other words, my filter bubble is my psychological safety zone. It's an emotional force field, just as it is for everyone. We all choose who we interact with, who we listen to, and the spaces that we inhabit. The important thing is not that we blow those bubbles to smithereens, but that we see them for what they are, and - just as those happily married TCKs have - let people in to help us grow and change them.

This weekend, children were shot with rubber bullets and tear gas at the US border with Mexico. The root of America's refusal to let them in is a fear of a disruption to those norms. It's in vain. Populations have been ebbing and flowing for as long as there have been people. America is changing, just as all countries are changing, as they always have and always will. And people like me - those of us with no nationality and no religion, but an allegiance to relationships and the cultures we create together - are growing in number. Selfishly, but also truthfully, I believe it's all for the better.

 

Photo by Elias Castillo on Unsplash


 

How machine learning can reinforce systemic racism

Over Thanksgiving, the Washington Post ran a profile of the babysitting startup Predictim:

So she turned to Predictim, an online service that uses “advanced artificial intelligence” to assess a babysitter’s personality, and aimed its scanners at one candidate’s thousands of Facebook, Twitter and Instagram posts.

The system offered an automated “risk rating” of the 24-year-old woman, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment — a 2 out of 5 — for bullying, harassment, being “disrespectful” and having a “bad attitude.”

Machine learning works by making predictions based on a giant corpus of existing data, which grows, is corrected, and becomes more accurate over time. If the algorithm's original picks are off, the user lets the software know, and this signal is incorporated back into the corpus. So to be any use at all, the system broadly depends on two important factors: the quality of the original data, and the quality of the aggregate user signal.
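As a toy illustration of that dependence on data quality (my own sketch, nothing to do with Predictim's actual model): a system that learns "risk" from historical labels will faithfully reproduce whatever bias those labels contain.

```python
# Toy sketch: a "risk" model trained on biased historical labels
# reproduces the labelers' bias in its predictions.
from collections import defaultdict

def train(records):
    """Learn P(flagged | group) from (group, was_flagged) history."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in records:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: f / t for g, (f, t) in counts.items()}

# Suppose the underlying behavior of both groups is identical, but
# human reviewers historically flagged group B twice as often.
history = ([("A", True)] * 10 + [("A", False)] * 90 +
           [("B", True)] * 20 + [("B", False)] * 80)

model = train(history)
# The model now scores any member of group B as twice as "risky" as
# any member of group A - it has learned the reviewers' bias, not
# anything true about individuals.
```

The same shape applies however sophisticated the model: garbage labels in, garbage (and discriminatory) risk scores out.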

In the case of Predictim, it needs to have a great corpus of data about a babysitter's social media posts and how those posts relate to their real-world behavior. Somehow, it needs to be able to find patterns in the way they use Instagram, say, and how that relates to whether they're a drug user or have gone to jail. Then, assuming Predictim has a user feedback component, the users need to accurately gauge whether the algorithm made a good decision. Whereas in many systems a data point might be reinforced by hundreds or thousands of users giving feedback, a babysitter presumably has comparatively few interactions with parents. So the quality of each instance of that parental feedback is really important.

It made me think of COMPAS, a commercial system that provides an assessment of how likely a criminal defendant is to recidivate. This tool is just one that courts are using to actually adjust their sentences, particularly with respect to parole. Unsurprisingly, when ProPublica analyzed the data, inaccuracies fell along racial lines:

Black defendants were also twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists.

It all comes down to that corpus of data. And when the underlying system of justice is fundamentally racist - as it is in the United States, and in most places - the data will be too. Any machine learning algorithm supported by that data will, in turn, make racist decisions. The biggest difference is that while we've come to understand that the human-powered justice system is beset with bias, that understanding with respect to artificial intelligence is not yet widespread. For many, in fact, the promise of artificial intelligence is specifically - and erroneously - that it is unbiased.

Do we think parents - particularly in the affluent, white-dominated San Francisco Bay Area communities where Predictim is likely to launch - are more or less likely to give positive feedback to babysitters from communities of color? Do we think the algorithm will mark down people who use language most often used in underrepresented communities in their social media posts?

Of course, this is before we even touch the Minority Report pre-crime implications of technologies like these: they aim to predict how we will act, vs how we have acted. The only possible outcome is that people whose behavior fits within a narrow set of norms will more easily find gainful employment, because the algorithms will be trained to support this behavior, while others find it harder to find jobs they might, in reality, be able to do better.

It also incentivizes a broad surveillance society and repaints the tracking of data about our actions as a social good. When knowledge about the very existence of surveillance creates a chilling effect on our actions, and knowledge about our actions can be used to influence democratic elections, this is a serious civil liberties issue.

Technology can have a part to play in building safer, fairer societies. But the rules it enforces must be built with care, empathy, and intelligence. There is an enormous role here not just for user researchers, but for sociologists, psychologists, criminal justice specialists, and representatives from the communities that will be most affected. Experts matter here. It's just one more reason that every team should incorporate people from a wide range of backgrounds: one way for a team to make better decisions on issues with societal implications is for them to be more inclusive.


 

I'm going dark on social media for the rest of 2018.

For a host of reasons, I've decided to go dark on social media for the remainder of 2018. If my experiment is successful beyond that time, I'll just keep it going.

Originally, I'd intended to do this just for the month of December, but as I sat around the Thanksgiving dinner table yesterday, surrounded by family and friends, I asked myself: "why not now?"

So, now is the time.

There are two reasons:

The first is that, ordinarily, if a company was found to be furthering an anti-semitic smear to protect itself from accusations that it had allowed illegal political advertising intended to influence an election, I probably wouldn't buy goods or services from that company. Particularly if they tried hard to hide that news. The fact that this company has ingrained itself in nearly every aspect of modern life doesn't mean it should be excused - in fact, it makes its actions exponentially more disturbing.

Similarly, other social networks have not exactly shown themselves to be exemplars. While I firmly believe that the web is a net positive for democracy which has provided opportunities for everyone to have a voice, social networking companies have largely shirked the responsibilities of the privileged positions they have found themselves in. We use them more than any other source to learn about the world - but they've chosen to serve us with algorithms that are optimized to maximize our engagement with display ads rather than nurture our curiosity and empathy. Emotive content tends to rise to the top, which has real effects: we're more divided than ever before in the west, and in countries like Myanmar, social networking has been an ingredient in genocide.

I don't want my engagement, or engagement in the content I contribute, to add value to this machine.

The second reason is that it doesn't make me feel good. Partially this is because of the emotive content the algorithms serve to me, which takes a real emotional toll. Partially it's because the relationships you maintain on social networks are shallow. In some cases, they are shadows of real, deeper relationships, but they don't serve those relationships well; posting feels like emotional labor, but has little of the emotional effect or intimacy of real communication. It's an 8-bit approximation of friendship where the conversations are performative because they're always in front of an audience.

One of the things that was stopping me from withdrawing from social media is a worry that people will forget about me. Many of my friends are overseas, and we don't see each other on a regular basis. But I've decided that this is manufactured FOMO; my really meaningful relationships will continue regardless of which social networks I happen to use. The idea that Facebook is an integral part of my friendships seems more toxic the more I think about it.

Finally, I'll admit it: I'm kind of depressed. Social networking has been shown to make people more so. Cutting it out for a while seems like an okay thing to try.

I removed all the social apps from my phone and replaced them with news sources and readers. So here's where to find me for the next little while:

I'm cutting out Facebook, Twitter, Instagram, LinkedIn, and Mastodon completely. (Mastodon doesn't suffer from the organizational issues I described above, but by aping commercial social networking services, it suffers from the same design flaws.) As of tonight, I won't be logging into those platforms on any device, and I won't receive comments, likes, reshares, etc, on any of them.

I will be posting regularly on my blog here at werd.io. If you use a feed reader (I use NewsBlur and Reeder together), I have an RSS feed. Yeah, we still have those in 2018. But if you don't, you can also get new posts in your email inbox by subscribing over here. I've set it up so you can just reply to any message and I'll get it immediately.

You can always email me at ben@benwerd.com, or text me on Signal at +1 (510) 283-3321.

I'm not removing any accounts for now - I'm simply logging out. If this experiment continues, I'll go so far as to remove my information.

Please do say hi using any of those methods. And if we find ourselves in the same city, let's hang out. I'm hoping that this experiment will lead to more, deeper relationships. But for now: this is why you're not going to see my posts in your usual feeds.


 

The Tech Correction

If I participated in public markets at all (I don't), I might have gleefully shorted Facebook's stock already. But it looks like it's too late for that to be as effective as it might have been.

As Fred Wilson pointed out today:

Apple is down almost 25% in the last two months.

Facebook is down about 40% since July.

Bitcoin is down about 80% from its highs last December.

Ethereum is down about 90% from its highs in January.

[...] But the thing to understand more broadly about what is going on right now is that big sophisticated investors are reducing their risk exposure across all asset classes and have been doing that for some time. The pace of the “risk off” trade is accelerating. Which means a flight to safety is going on. And when that is happening, you really need conviction to be buying.

For investors who aren't deeply connected to the industry, technology isn't the obvious bet it once was. The public markets are taking a beating in general - the downturn certainly isn't limited to technology - but the result is that money isn't free-flowing. Lots of people have recently raised new venture capital funds, and there's still a lot of money to deploy, but much of it will be deployed with increased caution.

While there are certainly hard times ahead for many companies, and with them, workers, I think there will be some silver linings. The rapid influx of cash has not made the tech industry a fun place to work or live; it's incentivized founders to create morally rudderless companies, and the deep underlying inequalities make it hard for anyone not drawing a startup salary to rent, let alone buy, a home. As one VC, who will remain nameless, put it to me when I first started as an investor: "you're idealistic now, but soon you'll realize that you're just moving money around for rich people."

I think there will be four effects:

1. Startups that could only exist through enormous funding rounds will dwindle, replaced by companies that more closely resemble traditional businesses - and companies run by incredibly driven founders who will make their vision a reality regardless of circumstances.

2. Alternative financing will become more mainstream, as venture capital refocuses back on seed stage and beyond, leaving a hole at the earliest stages.

3. We'll see fewer people move to Silicon Valley to make their fortunes; people will continue to move here because they love technology and its implications for improving peoples' lives.

4. We may see rents and home prices decrease. Bad news for existing owners, but great news for people who aren't already on the ladder. This, in turn, will mean more people are able to risk starting their own businesses, and we'll see an increase in really interesting new ventures a few years out. The higher the cost of living is, the less freedom people have to experiment.

Don't get me wrong: I'm not cheerleading for broad devaluation in my industry. But I think if there is going to be a correction, there is an opportunity for it to be more than financial: a way to rethink how businesses are made in the technology industry. It'll trim off our worst excesses by necessity, but the industry itself isn't going away. The question is, when the good times return - and they will return - will we repeat the excesses of this bull run, or will we have learned to make wiser choices?


 

Media for the people

Yesterday, in the afternoon, I collapsed. Everything seemed overwhelming and sad.

Today, I'm full of energy again, and I think there's only one kind of work that matters. The work of empowerment.

Broadly: How can we return to a functional democracy that works for everyone?

Narrowly: How can we make sure this administration is not able to follow its authoritarian instincts, how can we make sure they are nowhere near power in 2020, and how can we make sure this never happens again?

A huge amount of this is fixing the media. Not media companies - but the fabric of how we get our information and share with each other. I've been focused on this for my entire career: Elgg, Latakoo, Known, Medium, Matter and Unlock all deal with this central issue.

A convergence of financial incentives has created a situation where white supremacy and authoritarianism can travel across the globe in the blink of an eye - and can also travel faster than more nuanced ideas. Fascist propaganda led directly to modern advertising, and modern advertising has now led us right back to fascist propaganda, aided and abetted by people who saw the right to make a profit as more important than the social implications of their work.

I think this is the time to take more direct action, and to build institutions that don't just speak truth to power, but put power behind the truth. Stories are how we learn, but our actions define us.

Non-violent resistance is the only way to save democracy. But we need it in every corner of society, and in overwhelming numbers.

There are people out on the streets today, who have been fighting this fight for longer than any of us. How can we help them be more effective?

How can we help people who have never been political before in their lives to take a stand?

How can we best overcome our differences and come together in the name of democracy, freedom, and inclusion?

And how can we actively dismantle the apparatus of oppression?

It's time to create a new kind of media that presents a real alternative to the top-down structures that have so disserved us. One that is by the people, for the people, and does not depend on wealthy financial interests.

And with it, a new kind of democracy that is not just representative, but participative. For everyone, forever.

· Posts · Share this post

 

Gab and the decentralized web

As a proponent of the decentralized web, I've been thinking a lot about the aftermath of the domestic terrorism that was committed in Pittsburgh at the Tree of Life synagogue over the weekend, and how it specifically relates to the right-wing social network Gab.

In America, we're unfortunately used to mass shootings from right-wing extremists, who have committed more domestic acts of terror than any other group. We're also overfamiliar with ethnonationalists and racist isolationists, who feel particularly emboldened by the current President. Lest we forget, when fascists marched in the streets yelling "the Jews will not replace us", he announced that "you had very fine people on both sides". The messaging could not be more clear: the President is not an enemy of hate speech.

As the modern equivalent of the public square, social networking services have been under a lot of pressure to remove hate speech from their platforms. Initially, they did little; over time, however, they began to remove many of the worst offenders. Hence Gab, which was founded as a kind of refuge for people whose speech might otherwise be removed by the big platforms.

Gab claims it's a neutral free speech platform in the spirit of the First Amendment. (Never mind that the First Amendment protects you from the government curtailing your speech, rather than corporations enacting policies for private spaces that they own and control.) But anyone who has spent 30 seconds there knows this isn't quite right. This weekend's shooter chose to post there before committing his atrocity; afterwards, many other users proclaimed him to be a hero.

It's an online cesspit, home to some of the worst of humanity. These are people who refer to overt racism as "wrongthink", and mock people who are upset by it. As Huffington Post recently reported about its CEO, Andrew Torba:

[...] As Gab’s CEO, he has rooted for prominent racists, vilified minorities, fetishized “trad life” in which women stay at home with the kids, and fantasized about a second American civil war in which the right outguns the left.

Gab is gone for now - a victim of its service providers pulling the plug in the wake of the tragedy - but it'll be back. Rather than deplatforming, it claims, the way to fight this speech is with more speech. In my opinion, this is a trap that falsely sets up the two opposing sides as being equivalent. Bigotry is not an equal idea, but it's in their interests to paint it as such. While it's pretty easy to debate bigots on an equal platform and win, doing so unintentionally elevates their standing. Simply put, their ideas shouldn't be given oxygen. A right to freedom of speech is not the same as a right to be amplified.

I found this piece by an anonymous German student in Saxony instructive:

We also have to understand that allowing nationalist slogans to gain currency in the media and politics, allowing large neo-Nazi events to take place unimpeded and failing to prosecute hate crimes all contribute to embolden neo-Nazis. I see parallels with an era we thought was confined to the history books, the dark age before Hitler.

An often-repeated argument about deplatforming fascists is that we'll just drive them underground. In my opinion, this is great: when we're literally talking about Nazis, driving them underground is the right thing to do. Yes, you'll always have neo-Nazis somewhere. But the more they're exposed to the mainstream, the more their movement may gain steam. This isn't an academic problem, or a problem of optics: give Nazis power and people will die. These are people who want to create ethnostates; they want to prioritize people based on their ethnicity and background. These movements start in some very dark places, and often end in genocide.

When we talk about a decentralized social web, the framing is usually that it's one free from censorship; where everyone has a home. I broadly agree with that idea, but I also think the discussion must become more nuanced in the face of communities like Gab.

I agree wholeheartedly that the majority of our global discourse can't be trusted to a small handful of very large, monocultural companies that answer to their shareholders over the needs of the public. The need to make user profiles more valuable to advertisers has, for example, seen transgender users thrown off the platform for not using their deadnames. In a world where you need to be on social media to effectively participate in a community, that has had a meaningful effect on already vulnerable communities.

There's no doubt that this kind of unacceptable bigotry at the hands of surveillance capitalism would, indeed, be prevented by decentralization. But removing silos would also, at least in theory, enable and protect fascist movements, and give racists like this weekend's shooter a place to build unhindered community.

We must consider the implications of removing these gatekeepers very deeply - and certainly more deeply than we have so far.

A common argument is that the web is just a tool, oblivious to what people use it for. This is similar to the argument that was made about algorithms, until it became obvious that they were built by people and based on their assumptions and biases. Nothing created by people is unbiased; everything is in part derived from the context and assumptions of its creators. By being more aware of our context and the assumptions we're bringing to the table, we can hopefully make better decisions, and see potential problems with our ideas sooner. Even if there isn't a perfect solution, understanding the ethics of the situation allows us to make more informed decisions.

On one hand, by creating a robust decentralized web, we could create a way for extremist movements to thrive. On the other, by restricting hate speech, we could create overarching censorship that genuinely guts freedom of speech protections, which would undermine democracy itself by restricting who can be a part of the discourse. Is there a way to avoid the second without the first being an inevitability? And is it even possible, given the possible outcomes, to return to our cozy idea of the web as being a force for peace through knowledge?

These are complicated ethical questions. As builders of software on the modern internet, we have to know that there are potentially serious consequences to the design decisions we make. Facebook started as a prank by a college freshman and now has a measurable impact on genocide in Myanmar. While it's obvious to me that everyone having unhindered access to knowledge is a net positive that particularly empowers disadvantaged communities, and that social media has allowed us to have access to new voices and understand a wider array of lived experiences, it has also been used to spread hate, undermine elections, and disempower whole communities. Decentralizing the web will allow more people to share on their own terms, using their own voices; it will also remove many of the restrictions to the spread of hatred.

Wherever we end up, it's clear that President Trump is wrong about the alt-right: these aren't very fine people. These are some of the worst people in the world. Their ideology is abhorrent and anti-human; their messages are obscene.

No less than the future of democratic society is at stake. And a society where the alt-right wins won't be worth living in.

Given that, it's tempting to throw up our hands and say that we should ban them from speaking anywhere online. But if we do that, the consequence is that there has to be a mechanism for censorship built into the web, and that there must be single points of failure that can be removed to prevent any community from speaking. Who gets to control that? And who says we should get to have this power?

· Posts · Share this post

 

We fought wars to stop nationalists. Now the President is one

“Patriotism is when love of your own people comes first; nationalism, when hate for people other than your own comes first.” - Charles de Gaulle

“You know, they have a word. It sort of became old-fashioned. It’s called a nationalist. And I say really, we’re not supposed to use that word. You know what I am? I am a nationalist. Use that word.” - Donald J. Trump

I am not a nationalist. But I'll willingly use that word to describe Trump, his government, and the people who continue to support him.

This election in November is nothing less than a referendum on what America is, stands for, and should be in the future. Trump knew what he was doing when he described himself as a nationalist; it's part of a long-running push of the Overton window to the far right. And he was using it in the context of talking about a caravan of refugees who are reportedly hoping to enter America. It's his most baldly white supremacist statement yet.

He's not alone. Together with Britain, Brazil, Poland, and other countries around the world, Trump is part of a far-right resurgence. On the internet, and therefore in the backchannel to society, the neo-reactionary and alt-right movements are gaining steam.

Yes, many of the supporters are people who have been overlooked by a negligent Democratic Party. But in 1930s Germany, many of the supporters were people who were similarly feeling the effects of the Treaty of Versailles. This idea doesn't undermine the insidious evil of racism and bigotry. It doesn't override the cynicism and manipulation of the people who would use that discontent to fuel a movement based on hatred.

And we have to stop them. We have to.

I think of my great grandfather, who fled pogroms in Ukraine with his family. My grandfather, who was captured by the Nazis as a prisoner of war and is ashamed of denying his own Jewishness to survive. My other grandfather, who led the resistance against the Japanese in his part of Indonesia. My grandmother, who somehow shepherded her children through years in an internment camp, sometimes by gathering snails and cooking them, and sometimes by asking her twelve-year-old daughter to sneak through the sewers to find food. And my dad, who lived through it all and took care to make sure I understood why inclusion and fairness are so important.

The discussions we're having aren't arbitrary or academic. I think of my friends who are trans and being threatened with erasure; who are gay; who are people of color; who are of middle-eastern descent; who aren't part of this vision of a white picket fence America. This is about them. In some cases, it threatens their lives.

The first thing we need to do is vote in great numbers (although Trump is already discussing invalidating the election results). The second thing is to support groups that protect the civil liberties of these targeted communities; groups like the ACLU, the Southern Poverty Law Center, the Transgender Law Center, and Black Lives Matter.

But those are just the first steps. Then - no matter what happens - comes a bigger struggle. Equality, peace, inclusion and empathy are too important to let fall to a transient movement based on fear and hatred. It's not enough to resist; the goal must be to prevent. A peaceful society where everyone is welcome and there are opportunities for all people is what's at stake.

· Posts · Share this post

 

Desirability, Viability, Feasibility, Sustainability

Building a product as part of any kind of business is risky. Most new businesses fail, for a variety of reasons. Your job in the early stages is to mitigate those risks and navigate your company to a point where you're building something that people actually want, that can serve as the heart of a viable business, and which you can provide at scale with the team, resources, and time reasonably at your disposal.

One of the things I learned while investing at Matter was that your mindset matters more than anything. The founders who were most likely to succeed were able to identify their core assumptions, test those assumptions, be honest with themselves when they had it wrong, and act quickly to course-correct - based on imperfect information. Conversely, the founders most likely to fail were the ones who refused to face negative feedback and carried on with their vision. The former wanted to build a successful company; the latter wanted to pretend to be Steve Jobs.

De-risking a venture is all about continuously evaluating it through three distinct lenses:

Desirability: are you building something that meets a real user's needs? (Will the dog hunt?)

Viability: if you are successful, can your venture succeed as a profitable, growing business? (Will the dog eat the dog food?)

Feasibility: can you provide this service at scale with the team, time, and resources reasonably at your disposal? (Can we build the dog?)

Building a product through an iterative, human-centered process means putting on each one of these hats in turn. Is this product desirable, leaving aside viability and feasibility considerations? If not, what changes do you need in order to make it so? And then repeat for viability and feasibility.

This is at the heart of the design thinking process taught by Matter and others. It changed the way I think about building products forever.

I used to believe that if you just got the right smart people in a room, they could produce something great together. I wanted to build something and then put it out into the world. That's both a risky and egotistical strategy: it implies that you think you're so smart that you know what everybody wants. It's also often undertaken with a "scratch your own itch" mentality: build something to solve your own needs. As a result, the needs of wealthy San Franciscan millennials who went to Stanford are significantly overserved.

Market realities usually bring people back down to earth, but if you've spent a year developing a product, you've already burned a lot of time and resources. Conversely, if you're testing on day one, and day two, and day three, and so on, you don't need to wait to understand how people will react.

It's a great framework. There is, however, a missing lens.

I was pleased to see that Gartner has listed ethics and privacy as one of its ten key strategic technology trends for 2019. The world has changed, and market demands for technology products are very different to even three years ago. In the wake of countless data leaks and a compromised election, people are looking for more respectful software:

Technology development done in a vacuum with little or no consideration for societal impacts is therefore itself the catalyst for the accelerated concern about digital ethics and privacy that Gartner is here identifying rising into strategic view.

The human-centered design thinking process is correct. But it needs a fourth step that makes evaluating societal impact a core part of the process.

 

In addition to desirability, viability, and feasibility, I define the fourth step as follows:

Sustainability: does this venture have a non-negative social and environmental impact, and does it respect the human rights of the user?

Of course, it could easily be argued that "non-negative" should be "positive" here - and for mission-driven ventures it probably should be. Unfortunately, in our current climate, non-negative is such a step up from the status quo that I'm inclined to think that asking every new business to have a meaningfully positive impact is unrealistic. It would be nice if this wasn't the case. A positive impact also leads to questions like: how can we quantify our impact? Those are good questions to ask, but not necessarily core to the heart of every venture.

If you're confused about how "human rights" are defined, the UN Universal Declaration of Human Rights is a good resource. It was adopted in 1948, in the aftermath of the Second World War, and covers everything from equality through privacy, freedom from discrimination, and the right to a fair trial. There's also the European Convention on Human Rights, which covers similar ground while applying specifically to people in Europe. The purpose of including human rights in this context is to force questions like: are we discriminating against certain groups of people? And: can our platform be used to further genocide?
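As a purely illustrative sketch, the four lenses can be thought of as a recurring checklist that a venture runs against itself at every iteration. The class and field names below are hypothetical, invented for this example rather than drawn from Matter's or Gartner's materials:

```python
from dataclasses import dataclass

# Hypothetical "VentureCheck": a toy representation of the four lenses.
# Each field answers one de-risking question with a simple yes/no.
@dataclass
class VentureCheck:
    desirable: bool    # does it meet a real user's needs?
    viable: bool       # can it succeed as a profitable, growing business?
    feasible: bool     # can we build it with the team, time, and resources we have?
    sustainable: bool  # non-negative social/environmental impact; respects human rights?

    def unmet_lenses(self) -> list[str]:
        """Return the lenses that still need de-risking work."""
        lenses = {
            "desirability": self.desirable,
            "viability": self.viable,
            "feasibility": self.feasible,
            "sustainability": self.sustainable,
        }
        return [name for name, passed in lenses.items() if not passed]


# A venture that is desirable, viable, and sustainable, but not yet feasible:
check = VentureCheck(desirable=True, viable=True, feasible=False, sustainable=True)
print(check.unmet_lenses())  # ['feasibility']
```

In practice the answers are never clean booleans, of course - each lens is a spectrum of evidence gathered through testing - but the point of the framework is that all four get evaluated on every pass, not just the first three.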

The technology industry used to have the luxury of operating in a vacuum, without having to consider the broader societal context in which it operates. Its success means that its products are ingrained in every aspect of our lives. This brings new responsibilities, and the old days, when engineers and technologists could afford to be apolitical and apart from the world, are long gone. It's time that the ways in which we build products were brought up to date with our new reality.

· Posts · Share this post

 

Start with the spark, not the fire

It took me too long to realize I had my head in the clouds.

When I co-founded Known, I had a huge vision: a world where everyone had full control of their identity and content online. Anyone could create a stream of content anywhere - on a web host, on a device they kept in their living room, on their pick of services - and access it using whatever aspects of their identity they wanted to share. The whole web would become a collaborative canvas which would revolutionize business, creativity, and the internet itself. We wouldn't be beholden to these giant, centralized silos of data and value any longer.

It was an exciting vision - which led to a few obvious questions.

Like: where will you start?

How, exactly, do you get there?

How will you make money in the meantime?

Who is this for? No, not "content creators"; not "millennials"; certainly not "everyone". Who exactly will use this tomorrow? Two years from now?

We were lucky that Matter bought into our vision. Its accelerator changed how I think about building products, and literally changed my life (long before I joined the team). Part of the structure included a monthly venture design review, where we would pitch an experimental version of our venture, and a panel of experts (investors, founders, mentors) would give us their brutally honest feedback.

The first time we pitched our startup at a venture design review, we were eviscerated. We hadn't answered any of those questions. We did have working code, but we couldn't articulate who it was for, and how it connected to this bigger vision. It was the first time we received truly honest feedback, and it felt like a punch in the stomach.

It's not enough to have working code. It's not enough to have a vision. You've got to have a holistic, concrete understanding of your entire venture and the context it sits within.

Your vision can be a raging fire that might change the world. But you can't have a fire without a spark that takes hold.

So, I learned not to let go of that vision, but to take my head out of the clouds and bring myself down to earth. It's easy to have a big, romantic notion; it's much harder to put the actual nuts and bolts together to get a real venture off the ground. To do that effectively, you have to find the real people you want to serve, get to know them personally and gain genuinely unique insights about their needs, and then build the smallest possible thing that will meet those needs.

That smallest possible thing is probably embarrassing to you. It's almost certainly not the grand invention you had imagined. But as Paul Graham once wrote:

Don't be discouraged if what you produce initially is something other people dismiss as a toy. In fact, that's a good sign. That's probably why everyone else has been overlooking the idea. The first microcomputers were dismissed as toys. And the first planes, and the first cars. At this point, when someone comes to us with something that users like but that we could envision forum trolls dismissing as a toy, it makes us especially likely to invest.

Microcomputers, planes, and cars all started as something small for a very limited audience, but they've rewritten how all of human society works.

Conversely, take the Segway: a product its inventor dreamed would change how cities were designed. It had a grand vision but failed to understand its core users or create a strong hypothesis of how it would grow. Segways are now the domain of mall cops and goofy city tours. The company now makes the electric scooters used by startups like Lime and Bird, which were created with a concrete human use case in mind. But it originally started fire-first, rather than spark-first, and faltered.

You have to nail the spark before you can grow. I still speak to a lot of startups, and many of them fail to understand this. They want to go big first; the vision is the fun bit, and is the emotional core that drove them to found their venture to begin with. Nailing the spark, by contrast, is where a whimsical idea hits the road and becomes real work. But it might be the most important business lesson I ever learned.

· Posts · Share this post