Matter and Energy

Corey has made an announcement about Matter over on its Medium publication:

I’m proud of the impact we’ve made, the team we’ve built, and the people and organizations that we have transformed. But now it’s time for me to flare.

For the last two years I have tried to secure Matter’s future. While I succeeded at figuring out how to successfully expand Matter and its unique culture to NYC and to transform Matter into an organization that could continue to operate if I were hit by a bus, I have yet to successfully raise Matter Fund III and we’ve come to the end of our runway.

I've had the privilege of being on many sides of the Matter table. My third startup, Known, was funded by it, and I took part in the third accelerator class. I was a mentor and occasional advisor. And then I was asked to join the team and was the west coast Director of Investments. (I had an abstract ambition to one day come back as an LP, completing the set, but oh well.)

When I discovered Matter, it felt like an oasis: here was an accelerator that wanted to make the world more informed, empowered, and connected. An investment community that prized empathy and inclusivity. And rather than talking about this vaguely, or staying at 30,000 feet, the hands-on accelerator process was designed to give entrepreneurs the tools they needed to test their assumptions and succeed in the real world. It was Corey's process, and it worked - both for the entrepreneurs who took part in it, and for the media partners like KQED and the Associated Press who used it to improve their own internal innovation.

It changed my life.

Not just by teaching me the principles of human-centered venture design, although it did do that. Not just by taking a bet on my work, although I will always be grateful. But more than anything else, by introducing me to an amazing community of people who I'm proud to call my friends - and then allowing me to help build it.

It was meaningful work that allowed me to use every part of myself. It pushed me in ways I'd never been pushed before, and in the same way that participating in the accelerator transformed the way I'll build products and ventures forever, I am a much better person for having worked on that team. It was very far from just being a job. I worked and cried with founders late into the night. I helped some of the world's biggest media institutions work on their most existential problems. I helped bring Chelsea Manning to demo day. And I did it as part of a group of people who I still can't believe I got to be part of. Years later, I honestly still can't believe my luck.

I'm very grateful to Corey for founding this community, which I believe I'll be part of for the rest of my life. In the same way people talk about the PayPal Mafia, I think people will start talking about the Matter Underground in media circles: people who were part of the Matter community, changed by its values, and then went on to change media and technology for good. That also goes for the incredible people I worked with, which certainly includes Corey. I wish him the best in whatever he chooses to do next, and I can't wait to hear about it.

I'll finish by including this photo, which I stole from his post. It represents my single proudest moment in my professional career to date: the day that a group of people that I helped source, select and invest in walked through the garage door. Each one of them had a mission with the potential to create a more informed, inclusive, and empathetic society. And each of them was an amazing individual who I'm proud to know.

Corey's full post is here.


 

My Oxford

VICE has a long-running series where British writers bring photographers back to their hometowns, which I stumbled into this morning via Metafilter. It's stunning, and while I hardly recognized the Edinburgh entry at all, there was another piece that unexpectedly took my breath away.

I'm from Oxford as much as I'm from anywhere, but I've never read a piece that captured my experience of the city. Instead, it's always the opulent, ancient buildings of the university, the famous writers like CS Lewis and JRR Tolkien, or the plummy, upper middle class concerns of the North Oxford set. An outsider could be forgiven for thinking that the city was all cream teas and tennis.

Not so much. As Nell Frizzell writes:

Say its name and people will think of spires, books, bicycles, punting, philosophers and meadows. Few will think of cheap European lager, samosas, hardware shops, GCSEs, underage drinking, the number 3a bus or going twos on a roll-up beside a mental health hospital. They may not even think of the car factory, the warehouses on Botley Road, Powell's timber merchants, the registry office in Clarendon Shopping Centre, The Star pub, plantain sandwiches or Fred's Discount Store. But that's the Oxford I grew up in.

Me too. Nell's description of east Oxford is spot on, although we moved in different social circles; there was no cocaine in my world, even if we also gathered at exactly the same pub. All of these places are my places too, and the things she cares about in her hometown are things I care about also. In a life where I've lived in multiple countries and never quite found myself fitting in, including in the place I grew up, that's an incredible rarity.

I'm very glad I moved away - living in a variety of places has been right for me, and I expect I'll continue to move around. Having no nationality and no religion means that the pull to travel and exist in different contexts is strong. And Oxford really does have some deep problems. But that doesn't mean I don't miss it, too.

Because of the images that Oxford conjures in the minds of people who have never been there (and even some who have), I find it hard to explain where I came from. I can immediately taste the samosas and smell the beer-stained floorboards, but it's hard to convey. Now, at least, I have something I can point to; a description I actually recognize.

If you're interested in a realer Britain, the whole series is worth reading.


 

Open APIs and the Facebook Trash Fire

The New York Times report on Facebook's ongoing data sharing relationships is quite something. The gist is that even while it claimed that its data sharing relationships had been terminated in 2015 - to users and to governments around the world - many were still active into this year. Moreover, these relationships were established in such a way as to hide the extent of the data sharing from users, possibly in contravention of GDPR and its reporting responsibilities to the FTC:

“This is just giving third parties permission to harvest data without you being informed of it or giving consent to it,” said David Vladeck, who formerly ran the F.T.C.’s consumer protection bureau. “I don’t understand how this unconsented-to data harvesting can at all be justified under the consent decree.”

The company's own press release response to the reporting attempts to sugarcoat the facts, but essentially agrees that this happened. Data was shared with third parties during the period when the company declared that this wasn't happening, and often without user permission or understanding.
Back to the NYT article to make the implications clear:

The social network allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

In September 2007, I flew out to Silicon Valley to participate in something called the Data Sharing Summit, organized by Marc Canter. At the time, I was working on Elgg, and we believed strongly in establishing open APIs so that people wouldn't be siloed into their social networks and web services. I met people there who have remained friends for the rest of my career. And all of us wanted open access to APIs so that users could move their data around, and so that startups wouldn't have as high a barrier to entry into the market.

That was an ongoing meme in the industry ten years ago: open data, open APIs. It's one that has clearly informed Facebook's design and influential decisions. I certainly bought into it. And to some extent I still do, although I'd now prefer to go several steps further and architect systems with no central point of control or data storage at all. But such systems - whether centralized or decentralized - need to center around giving control to the user. Even at the Data Sharing Summit, we quickly realized that data control was a more meaningful notion than data ownership. Who gets to say what can happen to my data? And who gets to see it?

Establishing behind-the-scenes reciprocal data sharing agreements with partners breaks the implicit trust contract that a service has with its users.

Facebook clued us in to how much power it held in 2011, when it introduced its timeline feature. I managed to give this fairly asinine quote to the New York Times back then:

“We’ve all been dropping status updates and photos into a void,” said Ben Werdmuller, the chief technology officer at Latakoo, a video service. “We knew we were sharing this much, of course, but it’s weird to realize they’ve been keeping this information and can serve it up for anyone to see.”

Mr. Werdmuller, who lives in Berkeley, Calif., said the experience of browsing through his social history on Facebook, complete with pictures of old flames, was emotionally evocative — not unlike unearthing an old yearbook or a shoebox filled with photographs and letters.

My point had actually not so much been about "old flames" as about relationships: it became clear that Facebook understood everyone you had a relationship with, not just the people you had added as a friend. Few pieces dove into the real implications of having all that data in one place, because at the time it seemed like the stuff of dystopian science fiction. Some of us were harping on about it, but it was so far outside of mainstream discourse that it sounded crazy. But here we are, in 2018, and we've manifested the panopticon.

In the same way that the timeline made the implications of posting on Facebook clear, this year's revelations represent another sea change in our collective understanding. Last time - and every time there has been this kind of perspective shift - the Overton window has shifted and we've collectively adjusted our expectations to incorporate it. I worry that by next election, we'll be fairly used to the idea of extensive private surveillance (as a declared fact rather than ideological speculation), and the practice will continue. And then the next set of perspective shifts will be genuinely horrifying.

Questions left unanswered: what information is Facebook sharing with Palantir, or the security services? To what extent are undeclared data-sharing relationships used to deport people, or to identify individuals who should be closely monitored? Is it used to identify subversives? And beyond the effects of data sharing, given what we know about the chilling effects surveillance has on democracy, what effect on democratic discourse has the omnipresence of the social media feed already had - and to what extent is this intentional?

I'm done assuming good faith; I'm done assuming incompetence; I'm done assuming ignorance. I hope you are too.

 

Image: Elevation, section and plan of Jeremy Bentham's Panopticon penitentiary, drawn by Willey Reveley, 1791, from Wikipedia


 

Unlock and Joint Ownership

Since August, I've been helping my friend Julien Genestoux at his startup Unlock.

Unlock is a protocol which enables creators to monetize their content with a few lines of code in a fully decentralized way. In the initial version, anyone can sell their work on the internet by adding two lines of code. Those are the only steps: create your content; add code; you're ready to accept payment. It's blockchain-based, so it's equally accessible to everyone in the world, both to buy and to sell.

It's really a decentralized protocol for access control. There are two elements to consider: a lock, and a set of keys. You place a lock on some content to protect it; anyone with a key for that particular lock can access it. Publishers can use the same lock for as many different items of content as they want, and anyone with an appropriate key can access all of it. Content could be an article, a video, a podcast, or a software application. It can also be a mailing list, which is on the roadmap for 2019.
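To make the lock-and-key model concrete, here's a minimal sketch in Python. To be clear, none of these names come from the Unlock Protocol or its contracts; this is just hypothetical pseudologic for the access-control relationships described above:

```python
class Lock:
    """A lock protects any number of content items (articles, videos, podcasts...)."""
    def __init__(self, lock_id):
        self.lock_id = lock_id
        self.protected = set()

    def protect(self, content_id):
        self.protected.add(content_id)


class Key:
    """A key grants its holder access to everything behind one particular lock."""
    def __init__(self, lock, holder):
        self.lock = lock
        self.holder = holder


def can_access(key, lock, content_id):
    # A key opens a lock only if it was issued for that lock
    # and the content is actually protected by it.
    return key.lock is lock and content_id in lock.protected


# One lock can cover many items; one key unlocks all of them.
publication = Lock("my-publication")
publication.protect("article-1")
publication.protect("video-2")

alices_key = Key(publication, "alice")
```

One lock for many items, and one key for everything behind that lock, is the whole model; what the blockchain adds is that key ownership is publicly verifiable without asking Unlock, Inc.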

It's an open protocol at heart, which means it starts to get really interesting when other people begin to build on it. The initial Unlock code is a paywall; you can run our hosted version, or you can install the software and run your own. But you can also take the underlying Unlock blockchain structure and build something completely new. Over time, there will be more Unlock code and libraries that you can use as building blocks. Unlock, Inc doesn't need to be the central hub, and it doesn't need to own the blockchain. Unlike a service like Twitter, where the underlying company gets value by controlling access (and running ads), and therefore developers may get burned if they use it to underpin their products, Unlock the company is structurally incapable of exerting central control over the Unlock Protocol.

I think what I've described is a good thing for the web - Unlock is the low-friction payments layer that should have been there from the very beginning - but much more is possible, and this isn't a "decentralize all the things!" argument. There are concrete benefits for businesses today. One thing I'm particularly excited about is that, because the blockchain is both transparent and decentralized, jointly-owned content becomes much more possible.

Two hypothetical examples:

Radiotopia is a podcast co-operative. Each podcast is wholly owned by its producer, but they raise money together and distribute funds as a stipend between them. Right now, they're fundraising using CommitChange; funds presumably pool to one central point - someone holds a bank account - and then are distributed by a human. But what if they could raise money by creating a lock that people purchase keys for, and the proceeds from that lock were automatically and transparently sent to every member of the Radiotopia network? They could still use CommitChange as a front end (particularly as it's based on the open source Houdini project), but their accounting and payments overhead would be dramatically lower. Each member of the network would also be able to trust that payments were made to them immediately and automatically. And for new networks - baby Radiotopias - creating a bundled content network becomes just a case of deciding to work together.

Project Facet is an open source project for collaborative journalism. Increasingly, in a world of budget cuts and changing business models, newsrooms need to collaborate to produce investigative reporting. Right now, they pool resources in informal ways, and produce separate stories based on the reporting. With the Unlock Protocol, they could collaborate on the substance of the stories themselves, and put them under a shared lock that automatically pools the revenue between the participating organizations. This would be much harder in a universe where you'd have a custodial bank account and an accountant who made payments; here it could be fully transparent, and fully automatic.
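As a sketch of the accounting overhead this removes in both examples, here's roughly what a transparent, automatic split of lock proceeds looks like. The function and names are mine, not part of the protocol, and a real implementation would happen on-chain rather than in application code:

```python
from decimal import Decimal

def split_proceeds(total, members):
    """Divide lock proceeds evenly among members, transparently and automatically.
    Any indivisible remainder goes to the first member so nothing is lost to rounding."""
    share = (total / len(members)).quantize(Decimal("0.01"))
    payouts = {member: share for member in members}
    payouts[members[0]] += total - share * len(members)
    return payouts

# Three podcasts jointly behind one lock; a listener pays 100.00 for a key.
payouts = split_proceeds(Decimal("100.00"), ["podcast-a", "podcast-b", "podcast-c"])
```

The point isn't the arithmetic, which any accountant can do; it's that every member can verify the split without trusting whoever holds the bank account.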

These are purely hypothetical, and non-exclusive; much more is possible. Just a flexible paywall, or paid-for mailing lists, are exciting. The point is that we can think beyond how we've traditionally restricted access, and how we've transferred value. Personally, in my work, I'm most motivated by concrete human use cases - and Unlock illustrates how blockchain services have a lot of potential. This isn't an ICO, and it's not a speculative coin play. It's a way for creators to pool and share value, and make money from their work in a flexible way. And that's exciting to me.

The code is fully open; you can get involved here.

 

Photo by Francois Hurtaud on Unsplash


 

The Trolls from Olgino, the Saboteurs from Menlo Park

There's a lot in the news this morning about online influence campaigns conducted by the Internet Research Agency, a propaganda firm with close ties to the Russian government. Two reports were prepared for the Senate Intelligence Committee: one by an Austin-based security firm called New Knowledge, and the other by the Oxford Internet Institute's Computational Propaganda Project.

As of today, both are freely available online. Here's the full New Knowledge report; here's the full Oxford Internet Institute report.

This is the first time we've really heard about Instagram being used for an influence campaign, but it shouldn't be a surprise: if I say the word "influencer", it's probably the first platform that you think of. Like any decent digital advertising campaign, this one was cross-platform, recognizing that different demographics and communities engage on different sites. In a world where 44% of users aged 18 to 29 have logged out of the Facebook mothership, any campaign hoping to reach young people would have to include Instagram. And of course, that's why Facebook bought the service to begin with.

News stories continue to paint this as some kind of highly sophisticated propaganda program masterminded by the Russian government. And it does seem like the Russian government was involved in this influence campaign. But this is how modern digital campaigns are run. People have been building Facebook Pages to gain as many likes as possible since the feature was released, precisely so they can monetize their posts and potentially sell them on to people who need to reach a large audience quickly. Influencers - people who are paid to seed opinions online - will represent $10 billion in advertising spending by 2020.

It is, of course, deeply problematic that a foreign influence campaign was so widespread and successful in the 2016 election - I have no desire to downplay this, particularly in our current, dire, political environment. But I also think we're skimming the surface: because of America's place in the world, it's highly likely that there were many other parallel influence campaigns, both from foreign and domestic sources. And all of us are subject to an insidious kind of targeted marketing for all kinds of things - from soft drinks to capitalism itself - from all kinds of sources.

The Iowa Writers' Workshop is one of the most influential artistic hubs of the twentieth century. Over half of the creative writing programs founded after its creation were started by Iowa graduates; it helped spur the incredible creative boom in American literature over the next few decades. And its director, Paul Engle, funded it by convincing American institutions - like the CIA and the Rockefeller Foundation - that literature from an American, capitalist perspective would help fight communism. It could be argued that much of the literature that emerged from the Workshop's orbit was an influence campaign. More subtle and independent than the social media campaigns we see today, for sure, but with a similar intent: influence the opinions of the public in the service of a political goal.

And of course, Joseph Goebbels was heavily influenced in his approach by Edward Bernays, the American founder of modern public relations, who realized he could apply the principles of propaganda to marketing. Even today, that murderous legacy lives on: the Facebook misinformation campaigns around the genocide in Myanmar are its spiritual successor.

So political influence campaigns are not new, and they have the potential to do great harm. The Russian influence campaign is probably not even the most recent event in the long history of information warfare. While it's important to identify that this happened, and certainly to root out collusion with American politicians who may have illegally used this as a technique to win elections, I think it's also important to go a level deeper and untangle the transmission vector as well as this particular incident.

Every social network subsists on influence campaigns to different degrees. There's no doubt that Facebook's $415 billion market cap is fuelled by companies who want to influence the feed where half of adults - disproportionately from lower incomes - get their news. That's Facebook's economic engine; it's how it was designed to work. The same is true of Instagram, Twitter, etc etc, with the caveat that a feed with a lower population density is less valuable, and less able to have a measurable impact on the public discourse at large. There's one exception: while Twitter has significantly lower user numbers, it is heavily used by journalists and educators, who are then liable to share information gleaned there. Consider the number of news stories of the form, "here's what Trump tweeted today," which are then read by people who have never logged on to Twitter and wouldn't otherwise have seen the tweets.

The root cause of these misinformation campaigns is that people will do whatever they can to obtain, or hold onto, power. I don't think solving this is going to be possible during the entire remaining span of human civilization. So instead, let's think about how we can limit the "whatever they can" portion of the sentence. If people are going to use every means at their disposal to obtain power, how can we safety-check the tools we make in order to inhibit people from using them for this purpose?

Moving on from targeted advertising is a part of the answer. So is limiting the size of social networks: Facebook's 2.27 billion monthly active users are a disease vector for misinformation. As I've written before, its effective monopoly is directly harmful. Smaller communities, loosely joined, running different software and monetized in different ways, would make it much harder for a single campaign to spread to a wide audience. Influence campaigns would continue to run, but they would encounter community borders much more quickly.

A final piece is legislation. It's time for both privacy and transparency rules to be enacted around online advertising, and around user accounts. For their protection, users need to know if a message was posted by a human; they also need to know who placed an advertisement. And advertising for any kind of political topic in an election period should be banned outright, no matter who placed it, as it was in the UK. You can't have a democratic society without free and open debate - and you can't have free and open debate if one side is weighted with the force of millions of dollars, Facebook's market cap be damned.

 

Photo by Jakob Owens on Unsplash


 

Checking in on my social media fast

Three weeks ago, I decided to go dark on social media. No convoluted account deletion process; no backups. I just logged out everywhere, and deleted all my apps. It's one of the best things I've ever done.

I thought I'd check in with a quick breakdown: what worked, and what didn't. Here we go.

 

What worked

I haven't logged into Twitter, Facebook, or Instagram. I feel much calmer for it. I also feel better for not contributing to the Facebook machine. And I've gained 7 to 10 hours a week in time I'm not looking at my phone.

Crucially, I don't feel like I'm missing out or going to be forgotten, which were two of the things I was afraid of. I miss the hour-to-hour outrage but am on top of the important news. Lots of people have reached out to me; I've reached out to others; I've had the most non-work one on one email conversations in a decade. It's led to lunches, meeting up with people for dinner that I haven't seen in ages - it's been genuinely great.

The way I use my phone when I am looking at my phone has changed, too. I'm reading a lot more news and long-form content. I treated myself to a New Yorker digital subscription, which has been nourishing. (I've also got subscriptions to the NYT, Washington Post, and WSJ, and I'm realizing that I'm missing a more international perspective. Recommendations needed!) I'm still thinking about this James Baldwin essay. I've started heavily using Pocket to save articles I might want to do something with later. Have you read the Laurie Penny blockchain cruise piece? You really have to.

And I'm blogging a lot more. For the first week or so, I felt compelled to write something every day. I'm definitely not doing that now, but not tweeting lets the thoughts bubble up until they're something a little more substantial. I've also branched out into writing things for other outlets; I'm hoping one will show up today. But the best part about blogging is that writing helps me order my thoughts and go deeper on topics I'm interested in. It also, for more personal subjects, helps me process.

 

What didn't work

I had to log back into LinkedIn. Of all the social networks, I'm sad that this is the one that proved indispensable. But it turns out I don't have a lot of people's email addresses, so when I needed to reach out to someone, I couldn't do it any other way. I've accepted my fate here for now, but I'm fairly uncomfortable with Microsoft being at the center of my professional relationships, so I'll need to figure something else out.

And I can't help it: I check Google Analytics for my blog. It's taken the place of hoping for interactions on my tweets, and the little realtime graph still provides enough of a dopamine rush to give me a hit. I need to wean myself away - perhaps by simply removing Google Analytics from my site. (Arguably, if I'm serious about decentralization and privacy, this is something I should do anyway - so I've just talked myself into it. It'll be gone today.)

I still spend far too much time looking at my phone. I thought about illustrating this piece with some stats, but I decided not to. They're embarrassing.

Finally: my blog is still mostly about tech. Or at least, it has been - but that's not the entirety of what I read and think about. So I'm trying to figure out if I want to have two outlets, or if anyone cares whether I digress from user privacy to talk about writing for Doctor Who or - and this might be a piece that happens soon - making pad thai for my mother. In some ways, I feel like I need to ask your permission to do this, which is sad, and I shouldn't. (So, again: I've just talked myself into not worrying about it.)

In other words: I haven't been bold enough. I could go further. So, I will.

 

Conclusions so far

This change has been more positive than expected. I'll probably keep it up in the new year, perhaps with some tweaks. Give it a try!


 

With RAD, podcasters can finally learn who's listening

NPR announced Remote Audio Data today: a technology standard for sending podcast audience analytics back to their publishers. Podcasting is one of the few truly decentralized publishing ecosystems left on the web, and it's a relief to see that this is as decentralized as it should be.

Moreover, it's exactly the role public media should be playing: they convened a group of interested parties and created an underlying open source layer that benefits everyone. One of the major issues in the podcast ecosystem is that nobody has good data about who's actually listening; most people use Apple's stats, look at their download numbers, and make inferences. This will change the game - and in a way that directly benefits podcast publishers rather than any single central gatekeeper.

What's not listed in the spec is a standard way to disclose to the listener that their analytics are being shared. This may fall afoul of GDPR and similar legislation if not handled properly; to be honest, I'd hope that any ethical podcast player would ask permission to send this information, giving me the opportunity to tell it not to. Still, at least in the five minutes that everyone isn't sending their listening data to be processed by Google Analytics, this is an order of magnitude better than using Apple as a clearinghouse.

Here's a quick technical overview of how it works:

While MP3 files mostly contain audio, they can also contain something called an ID3 tag for human-readable information like song title, album name, artist, and genre. RAD adds a JSON-encoded remoteAudioData field, which in turn contains two arrays: trackingUrls and events. It can also list a custom podcastId and episodeId. Events have an optional label and mandatory eventTime, expressed as hh:mm:ss.sss, and can have any number of other keys and values.

The example data from the spec looks like this:

{
 "remoteAudioData": {
   "podcastId":"510298",
   "episodeId":"497679856",
   "trackingUrls": [
     "https://tracking.publisher1.org/remote_audio_data",
     "https://tracking.publisher2.org/remote_audio_data",
      "https://tracking.publisherN.org/remote_audio_data"
   ],
   "events": [
     {
       "eventTime":"00:00:00.000",
       "label":"podcastDownload",
       "spId":"0",
       "creativeId":"0",
       "adPosition":"0",
       "eventNum":"0"
     },
     {
       "eventTime":"00:00:05.000",
       "label":"podcastStart",
       "spId":"0",
       "creativeId":"0",
       "adPosition":"0",
       "eventNum":"1"
     },
     {
       "eventTime":"00:05:00.000",
       "label":"breakStart",
       "spId":"123456",
       "creativeId":"1234567",
       "adPosition":"1",
       "eventNum":"2"
     },
     {
       "eventTime":"00:05:15.000",
       "label":"breakEnd",
       "spId":"123456",
       "creativeId":"1234567",
       "adPosition":"1",
       "eventNum":"3"
     }
   ]
 }
}

The podcast player sends a POST request to the URLs listed in trackingUrls, wrapped in a session ID and optionally containing the episodeId and podcastId. By default the player should send this at least once per hour, although the MP3 file can specify a different duration by including a submissionInterval parameter. The intention is that the podcast player stores events and can send them asynchronously, because podcasts are often listened to when there's no available internet connection. After a default of two weeks without sending, events are discarded.
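A player-side sketch of that store-and-forward behavior, assuming the defaults above (hourly submission, two-week retention). The class and helper names are invented here, not taken from NPR's SDKs, and the payload shape is simplified relative to the spec's example below:

```python
import json

TWO_WEEKS = 14 * 24 * 3600  # seconds; the spec's default retention window


class RadEventQueue:
    """Stores RAD events while the listener is offline and flushes them periodically."""

    def __init__(self, submission_interval=3600):
        self.submission_interval = submission_interval  # default: at least hourly
        self.pending = []
        self.last_sent = 0.0

    def record(self, session_id, event, now):
        # Events are timestamped locally so stale ones can be expired later.
        self.pending.append({"sessionId": session_id, "event": event, "recorded": now})

    def flush(self, now):
        # Discard events that have waited longer than two weeks, per the spec default.
        self.pending = [e for e in self.pending if now - e["recorded"] < TWO_WEEKS]
        if now - self.last_sent < self.submission_interval or not self.pending:
            return None
        payload = json.dumps({"audioSessions": self.pending})
        self.pending = []
        self.last_sent = now
        # A real player would POST this payload to every URL in trackingUrls.
        return payload
```

The interesting design decision is that the queue, not the server, owns delivery timing: listening happens offline, so the player accumulates events and reports them whenever the submission interval has elapsed and a connection exists.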

Here's an example JSON payload sent to a tracking URL, from the spec:

{
 "audioSessions": [
   {
     "podcastId": "510313",
     "episodeId": "525083696",
     "sessionId": "A489C3AD-04AA-4B5F-8289-4D3D2CFE4CFB",
     "events": [
       {
         "sponsorId": "0",
         "creativeId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "label": "podcastDownload",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:07+04:00"
       },
       {
         "sponsorId": "0",
         "creativeId": "0",
         "eventTime": "00:00:05.000",
         "adPosition": "0",
         "label": "podcastStart",
         "eventNum": "1",
         "timestamp": "2018-10-24T11:23:08+04:00"
       },
       {
         "sponsorId": "111128",
         "eventTime": "00:00:05.000",
         "adPosition": "1",
         "label": "breakStart",
         "creativeId": "1111132",
         "eventNum": "2",
         "timestamp": "2018-10-24T11:23:09+04:00"
       },
       {
         "label": "breakEnd",
         "sponsorId": "111128",
         "eventTime": "00:00:05.000",
         "adPosition": "1",
         "creativeId": "1111132",
         "eventNum": "3",
         "timestamp": "2018-10-24T11:23:10+04:00"
        }
     ]
   },
   {
     "podcastId": "510314",
     "episodeId": "525083697",
     "sessionId": "778A4569-4B06-469B-8686-519C3B43C31F",
     "events": [
       {
         "sponsorId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "creativeId": "0",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:11+04:00"
       }
     ]
    },
   {
     "podcastId": "510315",
     "episodeId": "525083698",
     "sessionId": "F825BE2B-9759-438A-A67E-9C2D54874B4F",
     "events": [
       {
         "sponsorId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "label": "podcastDownload",
         "creativeId": "0",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:12+04:00"
       }
     ]
   }
 ]
}

It's a very simple, smart solution. There's more information at the RAD homepage, and mobile developers can grab Android or iOS SDKs on GitHub.


Persuading people to use ethical tech

I've been in the business of getting people to use ideologically-driven technology for most of my career (with one or two exceptions). Leaving out the less ideologically driven positions, it goes something like this:

Elgg: We needed to convince people that, if they're going to run an online community, they should use one that allows them to store their own data anywhere, embraces open standards, and can run in any web browser (which, at the height of Internet Explorer's reign, was a real consideration).

Latakoo: In a world where journalism is experiencing severe budget cuts, we needed to persuade newsrooms that they shouldn't buy technology with astronomically expensive licenses and then literally build it into the architecture of their buildings (when I first discovered that this was happening, it took a while for my jaw to return to the un-dropped position).

Known: We needed to convince people that, if they're going to run an online community - oh, you get the idea.

Matter: We needed to convince investors that they should put their money into startups that were designed to have a positive social mission as well as perform well financially - and that media was a sound sector to put money into to begin with.

Unlock: We need to persuade people that they should sell their work online through an open platform with no middleman, rather than a traditional payment processor or gateway.

That's a lot of ice skating uphill!

So how do you go about selling these ideas?

One of the most common ideas I've heard from other startup founders is "educating the market". If people only knew how important web standards were, or if they only knew more about privacy, or about identity, they would jump on board this better solution we've made for them in droves. We know what's best for them; once they're smarter, they'll know what's best for them too - and it's us!

Needless to say, it rarely works.

The truth comes down to this: people have stuff to do. Everyone has their own motivations and needs, and they probably don't have time to think about the issues that you hold dear to your heart. Your needs - your worries about how technology is built and used, in this case - are not their needs. And the only way to persuade people to use a product is for it to meet their deeply-held, unmet needs.

If you have limited resources, you're probably not going to pull the market to you. But if you understand the space well and understand people well, you can make a strong hypothesis about whether the market is going to come to you at some point. If you think the market is going to want what you're building two or three years out, and you can demonstrate why this is the case (i.e., it's a hypothesis founded on research, not just a hunch) - then that's a good time to start working on a product.

Which is why, although many of us spent decades crowing about the need for web products that don't spy on you, it's taken the aftermath of the 2016 election for many people to come around. Most people aren't there yet, but the market is changing, and tech companies will change their policies to match. The era of tracking won't come to an end because of activist developers like me - it'll come to an end because we failed, and Facebook's ludicrous policies (which, to be clear, aren't really different to the policies of many tech companies) reached their damaging logical conclusion, allowing everyone to see the full implications.

So if an ideology-first approach usually fails, how did we persuade people?

The truth is, it wasn't about the ideology at all. Elgg worked because people needed to customize community spaces and we provided the only platform at the time that would let them. Latakoo worked because it allowed journalists to send video footage faster and more conveniently than any other solution. Known didn't work because we allowed the ideology to become its selling point, when we should have concentrated on allowing people to build cross-device, multi-media communities quickly and easily (the good news is that because it's open source, there's still time for it). Unlock will work if it's the easiest and most profitable way for people to make money from their work online.

You can (and should) build a tool ethically; unless you're building for a small, niche audience, you can't make ethics the whole tool. Having deep knowledge of, and caring deeply about, the platform doesn't absolve you from the core things you need to do when you're building any product. Which, first and foremost, is this: make something that people want. Scratch their itch, not yours. Know exactly who you're building for. And make them the ultimate referee of what your product is.


The Facebook emails

I still need to read the documents unsealed by British Parliament for myself, but they seem pretty revealing.

From the Parliamentary summary itself:

Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.

[...] It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.

[...] Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.

In the New York Times:

Emails and other internal Facebook documents released by a British parliamentary committee on Wednesday show how the social media giant gave favored companies like Airbnb, Lyft and Netflix special access to users’ data.

In Forbes:

In one 2013 email from Facebook's director of platform partnerships Konstantinos Papamiltiadis, the executive tells staff that “apps that don’t spend” will have their permissions revoked.

“Communicate to the rest that they need to spend on NEKO $250k a year to maintain access to the data,” he wrote. NEKO is an acronym used at Facebook to describe app install ads, according to The Wall Street Journal.

Meanwhile, the email cache reveals that Facebook shut down Vine's access to the Facebook friends API on the day it was released. Justin Osofsky, VP for Global Operations and Corporate Development, wrote Mark Zuckerberg at the time:

Twitter launched Vine today which lets you shoot multiple short video segments to make one single, 6-second video. As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision.

Zuckerberg's reply:

Yup, go for it.

Purely coincidentally, I'm sure, Facebook changed this policy yesterday. As TechCrunch reported:

Facebook will now freely allow developers to build competitors to its features upon its own platform. Today Facebook announced it will drop Platform Policy section 4.1, which stipulates “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

That policy felt pretty disingenuous given how aggressively Facebook has replicated everyone else’s core functionality, from Snapchat to Twitter and beyond. Facebook had previously enforced the policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.

It will be interesting to follow the repercussions of this release. My hope is that we'll finally see some action from the US government in the new year. In the meantime, it's ludicrous that it took action from the UK - and legislation from the EU - to bring some of this to light.

 


Facebook's monopoly is harming consumers

I was asked last week about the ethics of social networks: what would need to change to create a more ethical ecosystem.

Targeted display advertising, of course, has a huge part to play. Facebook created a system designed to capture the attention of its users so that they could interact with advertising that was tailored for them in order to manipulate them into an action or position. People buy advertising on Facebook to drive sales, but they also buy it to manufacture brand awareness and loyalty - and to manipulate users into adopting a political position. Facebook's machine was not originally built to manipulate, but its business model ratified its sociopathy.

The persuasive effect of its targeted advertising and engagement algorithms would have been diminished, however, if Facebook wasn't completely ubiquitous. In Q3 2018, it had 2.27 billion monthly active users. For context, there will be an estimated 3.2 billion people online by the end of the year: Facebook's monthly active users represent 71% of the internet. In America, it's the site most commonly used to discover news, or in other words, to learn about the world.

This is a dangerous responsibility to place in the hands of a single corporation with no meaningful competition. Yes, other social networks exist, but each serves a different purpose. Twitter is a kind of town hall zeitgeist Pandora's box full of wailing souls (sorry, a place that aims to "give everyone the power to create and share ideas and information instantly, without barriers"); Instagram (which, of course, is Facebook again) is the Vogue edition of everybody's life; Snapchat rests on its "mom don't read this" ephemerality. Facebook is designed, as its homepage used to proudly proclaim, to be a social utility that "reinforces connections to the people around you". Over time, it aims to make those social connections dependent on its service.

In a world where Facebook is a core part of life for billions of people, its policy and product decisions have an outsized effect on how its users see the world. Those decisions can certainly swing elections, but they have a measurable effect on public sentiment in other areas, too. It's not so much that the platform is reflective of the global culture; because that culture is shared and discovered on the platform, the culture reflects it. A bad actor with enough time and money can construct a viral message - or suite of messages - that can sweep across billions of people in less than a day. Facebook itself could engage in social engineering, with almost no oversight. There are few barriers; there is no real vaccine beyond a vain hope that Facebook will do the right thing.

But imagine a world where there isn't one Facebook, and we all participate in many social communities across many different platforms. Rather than one mega filter bubble, we engage with lots of bubbles, loosely joined - all controlled by a different entity, potentially in a different culture, with different priorities. In this world, the actions of a single one of these bubbles become less important. If each one is making different policy and product decisions, and is a logically separate network with its own codebase, userbase, and way of working, it becomes significantly harder for anyone to make a message ubiquitous. If each one has a different feed algorithm, while a malicious campaign could infiltrate one network, it would be much harder for it to infiltrate them all. In a healthy market, even discovering all the different communities that a user participates in could become a difficult task.

Arguably, this would require a different funding model to become the norm in Silicon Valley. Venture capital has enabled many businesses to get off the ground with the capitalization they need; it is not always the bad guy. But it also inherently encourages startups to aim towards monopoly. Venture capital funds want their investments to grow at an exponential rate. VCs want to return 3X the value of each fund inside 10 years (typically) - and because most startups fail, they're looking to invest in businesses that will return around 37X their original investment. That usually looks like owning a particular market or market segment, and that's what tends to find its way into pitch decks. "This is a $100 billion market." Subtext: "we have the potential to capture all that". In turn, targeted advertising became popular as a way for startups to make revenue because asking customers for money creates sign-up friction and reduces growth.
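The 3X-fund, ~37X-winner relationship falls out of simple portfolio arithmetic. Here's a back-of-envelope version; the fund size, portfolio count, and number of winners are illustrative assumptions, not figures from any real fund:

```python
# A hypothetical $100M fund spread evenly across 25 startups.
fund_size = 100.0
num_investments = 25
check_size = fund_size / num_investments     # $4M per company

target_fund_multiple = 3.0                   # the fund wants to return 3x overall
winners = 2                                  # assume only 2 of 25 return anything

required_total = fund_size * target_fund_multiple     # $300M must come back
required_per_winner = required_total / winners        # $150M per winning company
required_multiple = required_per_winner / check_size  # 37.5x on each winning check
```

Because the losers return roughly nothing, the handful of winners have to carry the entire 3X target on their own - which is why every pitch has to look like it could own its whole market.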

So accidentally, venture capital creates Facebook-style businesses that aim to grow as big as possible without regard to the social cost. It does not easily support marketplaces with lots of different service providers competing with each other for the same market over a sustained period. And businesses in Silicon Valley have a hard time getting up and running without it, because the cost of living here is so incredibly high. Because of the sheer density of people who have experience building technology businesses here, as well as high-end technical talent and a general culture of helpfulness, Silicon Valley is still the best place to start this kind of business. So, VC it is, until we find some other instrument to viably fund tech companies. (Some obvious contenders are out: ICOs have rightly been slapped down by the SEC, and revenue sharing investment only really works for very small amounts of investment.)

Okay, so how about we just break Facebook up, and set a precedent for future businesses, just as we tried to do with Microsoft in the nineties? After all, its impact is even more catastrophic than Microsoft's, and its actions are even more brazenly monopolistic. Everything else aside, consider its use of a VPN app it acquired to identify apps whose usage was threatening Facebook's, so that it could proactively acquire them and shut them down.

American anti-trust law has ostensibly been set up to protect consumers, rather than competition. As Wired reported a few years ago:

Under current U.S. law, being a "monopoly" is not illegal; nor is trying to best one’s competitors through lower prices, better customer service, greater efficiency, or more rapid innovation. Consumers benefit when Apple disrupts the market with iPhones and iPads, even if this means RIM sells fewer BlackBerries or that Microsoft licenses fewer desktop operating systems. Antitrust law only springs into action against a monopoly when it destroys the ability of another company to enter the market and compete.

The key question, of course, is whether a particular monopoly is harming consumers – or merely harming its competitors for the benefit of those consumers.

With any lens except the most superficial, Facebook fails this test. Yes, its product is free and available to anyone. But we pay with our data and privacy - and ultimately, with our democracy. Facebook's dominance has adversely affected entire industries, swung elections, and fuelled genocides. In the latter case, this hasn't been in the United States - at least, not so far - and perhaps this is one of the reasons why it's escaped serious repercussions. Its effects have been felt in different ways all over the world, and various governments have had to deal with them piecemeal. There is no jurisdiction big enough to cover its full impact. Facebook is, in some ways, more powerful than the government of any nation.

There's one thought that gives me hope. Anyone who has watched Facebook closely knows that it didn't grow through brilliant strategy and genius maneuvering. Its growth curve closely maps to the growth of the internet; it happened to be in the right place at the right time, and managed to not screw it up enough to drive people away. As people joined the internet for the first time, they needed a place to go, and Facebook was it. (The same is true of Instagram, which closely maps to the growth in smartphone camera usage.) As the internet became saturated in developed nations, Facebook's growth curve slowed, and it now needs to bring more people online in developing nations if it wants to continue dominating new markets.

That means two things: Facebook will almost inevitably stagnate, and it is possible for it to be outmaneuvered.

In particular, as new computing paradigms take hold - smart speakers, ambient computing, other devices beyond laptops and smartphones - another platform, or set of platforms, can more easily take its place. When this change inevitably happens, it is our responsibility to find a way to replace it ethically, instead of with yet another monopolistic gatekeeper.

There is work to do. It won't be easy, and the outcome is far from inevitable. But the internet is no longer about code being slung from dorm rooms and garages. It's about democracy, it's deadly serious, and it needs to be treated as such.

 

Photo by JD Lasica, shared on Wikipedia under a CC BY 2.0 license.


The unexpected question I ask myself every year

Okay, but seriously, how can I get to work on Doctor Who?

It's a dumb question, but my love for this show runs deep - I've been watching it since I was five years old at least. As a non-aggressive, third culture kid who couldn't fit in no matter how he tried, growing up in Britain in the eighties and nineties, the idea of an alien pacifist who solved problems through intelligence, kindness and empathy appealed to me. It still does. It's brilliant. The best show on TV, by far.

I love it. I love watching it. I love reading the books. I dream complete adventures, sometimes. For real.

I don't need to work on it.

Oh, but I do.

I want to play in that universe. I want to take my knowledge of its 55 years of television, and my deep feeling for the character and the whole ethos of the production, and help to build that world. I want to make things that resonate for children in the same way it resonated for me.

It's not about Daleks or Cybermen or reversing the polarity of the neutron flow. It's about the fundamental humanity of telling stories that teach empathy and peace. It's about an action show where the heroes wield understanding and intuition instead of weapons. It's about an institution that genuinely transcends time and space, after 55 years, in a way that its original creators could never have understood. It's a through line to my life and how I see the world.

It's obviously a pipe dream. Still, every year, I ask myself: "am I any closer to working on Doctor Who?"

Every year, the answer is "no".

It's not like I've been working hard to take my life in that direction. I write, for sure; I've had science fiction stories published. But I work in technology - at the intersection of media and tech, for sure, but still on the side of building software and businesses. There was a time when the show was cast aside, and enthusiasts were welcomed to participate - if not with open arms, then with a markedly lower bar than today, when it's one of the hottest shows on TV.

Someone I went to school with did end up working on the show; her dad, Michael Pickwoad, was the production designer for a time. He worked on TARDIS interiors for Matt Smith and Peter Capaldi, among other things. His daughter worked on it with him for a little bit, and was even name-checked in one episode, when her soul was sucked into the internet through the wifi.

I felt a pang of envy for a moment, but mostly I thought it was cool.

What would you even need to do to work on the show? Should I be focusing more on writing fiction? Should I try and write for something else first? Could I maybe find my way into an advisory position, helping the writers to better understand Silicon Valley? (Because, listen, Kerblam! was a good episode, but the ending ruined the parable. Did Amazon ask you to change it?) I don't understand how this industry works; I don't know where to even begin. The show isn't really even for me, anymore; I'm not the six year old watching Peter Davison on BBC1 while I sit cross-legged on the floor. I'm a grown-ass, middle aged man. And who am I to think I can even stand shoulder to shoulder with the people who do this incredible work? People like Malorie Blackman and Vinay Patel, who wrote this year's standout stories?

Like I said: it's a pipe dream. I'm fine. I don't need to be a part of this. I can just enjoy it. I can.

But.

The year is closing out. We're all preparing to turn over new leaves. A new calendar on the wall means a fresh start. There's so much to look forward to, it feels like the world is finally turning a corner, and I'm working on amazing things.

Just ... look. I just need to ask one question. I can't stop myself, as stupid as it is.

Am I any closer to working on Doctor Who?


Asking permission to be heard is an idea that needs to die

I remember reading about Tavi Gevinson when she was just starting out; a wunderkind blogger. Now her media company is winding down - but at least it's winding down on her terms.

Her goodbye letter is beautiful:

In one way, this is not my decision, because digital media has become an increasingly difficult business, and Rookie in its current form is no longer financially sustainable. And in another way, it is my decision—to not do the things that might make it financially sustainable, like selling it to new owners, taking money from investors, or asking readers for donations or subscriptions. And in yet another way, it doesn’t feel like I’m deciding not to do all that, because I have explored all of these options, and am unable to proceed with any of them.

This was what I wanted to help solve. It was my job, but it went far further than that. I was up late at night while writers turned entrepreneurs cried on my shoulder; sometimes, I cried with them. I felt every setback and every problem, always wondering if there was more that I could do. And I did this as just one part of a bigger team, which in turn was just one part of a bigger movement, all understanding the importance of media, all invested in new ways to pay for it.

It is so far from being solved.

And yes, we're talking about a fairly privileged fashion blogger from New York City. But we're also not. This experience is wider and deeper than just this one publication. And even when we are focused back in on Rookie, Gevinson's unique perspective, alongside the unique perspectives of all the previously-unpublished young women writers she supported, is worth preserving.

I want these voices - of women, of people of color, of anyone with a new perspective or an insight or just words that make you feel anything at all - to be sustainable. The world needs them. Our tapestry of culture is better for their presence. Ensuring we have a thriving media has never just been about direct journalism and reporting the news (although those things are vitally important); if we accept that media is the connective tissue of society, the way we learn about the wider world, we must also accept that a vibrancy of diverse voices is central to that understanding. Not just in terms of who gets reported on and who stories are about, but who gets to make media to begin with.

The most compelling business model for media companies in America is to be propped up by a billionaire. Overwhelmingly, they lose money, and depend on wealthy benefactors to survive - sometimes through acquisition, and other times simply through donation. There are other, incrementally devolved versions of this: in a patronage model, publications ask rich people to pay so that everyone can read. Subscription models create gated communities of information. Kickstarters ask wealthy people to benevolently pay for something to come into existence. In all of these models, young media companies, outlets for voices, must in some way contort themselves to be appealing to rich people, who are predominantly white, straight men, even if those people are not its core audience.

So, then there's this. This quote is so real, and it viscerally conjures up so many feelings for me:

One woman venture capitalist told us, after hearing my very nervous pitch, “I hate to say this because I hate that it’s true, but men who come in here pitch the company they’re going to build, while women pitch the company they’ve already built.” The men could sound delusional, but they could also sound visionary; women felt the need to show their work, to prove themselves.

Women have been told "no" so many times that they don't dare to discuss the true vision for what they want to build. I've certainly been in funding discussions where the true vision was obvious but unspoken; allowing it to flourish required creating a very safe space, and one that I'm inherently less qualified to provide. An ambition unspoken is not an ambition unconceived.

Public media, as with public art, has a measurable impact on everybody's quality of life. There should be public money available for both, as there is in other countries. The reliance on wealthy individuals to graciously provide is perverse. But here we are.

Given the stranglehold that rich people and their agents have on culture, we should empower more diverse people to be able to deploy that money where it's needed. Venture capital firms should aim to have more diverse partners, to make safer spaces for more diverse founders - not just for social reasons, but because immigrants founded over half the billion dollar startups in the world and companies run by women perform better.

We can't, though, accept this status quo. Ensuring that more funding goes towards supporting diverse voices means overhauling the system of funding itself. That means removing all of the funding gatekeepers. Why are they there to begin with?

My ideal would be an independent pool of money that predominantly comes from public funds, but I recognize that this is a very European-style idea that is unlikely to gain traction in the US.

Ultimately, though, it's a half-measure. A solution to a world where the decisions over whose voices get to be heard and who gets to be distributed are related to wealth is simply this: the wealth needs to be more evenly distributed. So many of our diversity problems are because society is unequal. Income inequality continues to grow in the United States, as it does in many other places - and of course, the people who hold an increasing majority of the world's income are white men.

A world where everyone has money in their pocket to support what they care about is one where many different types of voices are supported. That, it seems to me, is the real problem to solve. We have to remove the stranglehold of a very small number of rich people on all of society; a stranglehold that was originally established through racism, colonialism, and oppression. We're building a world where everyone else, and every endeavor that they do not directly control, must go back to them to ask permission. Everything else - problems in our political system, problems with media business models, productivity, economic diversity, crime - can be drawn back to this. We urgently need, coherently, together, and in a way that embraces our ideological diversity and differences of contexts and backgrounds, to find a way to empower everyone to live.

 

Photo by Rita Morais on Unsplash


Gefilte bubbles

My nuclear family - the one I grew up with - has four different accents. My mother's is somewhere between New England and California; my dad's is Dutch with some Swiss German and English inflections; my sister has traveled further down the road towards a Bay Area accent; and mine is just softened enough that most people think I'm from New Zealand. Thanksgiving, like Christmas, is for us a wholly appropriated holiday: not about genocide or holiness, respectively, but simply about being together as a family. Like magpies, we've taken the pieces that resonate with us, and left the rest.

Technically, I'm a Third Culture Kid: "persons raised in a culture other than their parents' or the culture of the country named on their passport (where they are legally considered native) for a significant part of their early development years". I'm not British, but I grew up there; I love it there, but I also did not assimilate.

I've never felt any particular belonging to the countries on my passports, either, which turns out to be a common characteristic among TCKs. Instead, our nationality and religion is found among shared values and the relationships we build. I've written about this before, although back then I didn't fully understand the meta-tribe to which I belonged. It's also part of the Jewish experience, and the experience of any group of people who has been forcibly moved throughout history. Yes, I'm a product of globalization, but that doesn't mean I'm also a product of privilege; migration for many, including my ancestors, has not been optional.

I was well into my thirties before I understood that my experience of culture was radically different to many other people's. It hadn't occurred to me that some people simply inherit norms: the practices of their communities become their practices, too; the way things were done become the way things are and will be done. If you live in this sort of cultural filter bubble, challenges to those well-established norms are threatening. We know that people prefer to consume news and information that confirms their existing beliefs; that's why misinformation can be so effective. The same confirmation bias also applies to how people choose to build relationships of all kinds with one another. It's at the heart of xenophobia and racism, at its most overt, but it also manifests in subtler ways.

I lost count of the number of people who told me I should give up my nationalities and become British, or who made fun of my name, or took issue with my lack of understanding of shared cultural norms. Food is just one example of something mundane that can be incredibly contentious: the dishes from your community carry the weight of love and history. When someone presents as being from your community - no visible differences; more or less the same accent, even if they mispronounce a word here and there - but doesn't have any of that shared understanding, it simply doesn't compute.

I'm fascinated by this survey of Third Culture Kid marriages. The TCK blogger Third Culture Mama received 130 responses from TCKs and their spouses, in an effort to discover how cross-cultural relationships can thrive. It's the first time I've seen anything like this, and I found some of the qualitative responses to be unexpectedly comforting. For example:

When multiple cultures are involved it’s easy to idealize your own culture and how you were brought up. But if you can set it aside to listen to another point of view and another way of doing things, you realize there isn’t only one right way. As a couple you need to decide to say “this is how WE do things. This is what WE believe.” Not “this is what she did. Or this is how my family did it growing up.” There is great validity in understanding both of your pasts and how you were raised. But you need to move on from there and choose a path that you go down together. Doing this takes humility, love, and a desire to do right more than to be right. Listen to one another.

Particularly in startup-land, but in the media in general, there is a glut of how-to articles that assume what worked for the author will work for you. It's a great idea to read other people's experiences and learn from them, but you can't apply them directly: you have to forge your own path. Rather than take someone else's pattern verbatim and throw yourself into it, you need to build something that is nurturing and right for you. That's true in relationships, and it's true in business. Over half of all billion-dollar startups were founded by immigrants, and I think this mindset is one of the reasons why. As an immigrant, you don't have the luxury of following patterns; you have to weave your own from first principles. You can't make assumptions about how people will behave; you have to study them. Taking this outside perspective is a path to success for everyone.

Another response:

Ask questions, let them cook food from their childhood, look at pictures, learn key phrases in their language. Understand that we’re constantly fighting against this dichotomy of wanting to venture off, but also wanting a place to belong. Realize that we approach emotional intimacy and relationships very differently.

For me, the relationships that have worked are the ones where we've made the space to create our own culture together. I'm drawn to outsiders and people who are willing to question established norms, and over time, through trial and error, bad interactions and good, I've learned that I find slavish adherence to cultural norms in a person as threatening as some people find the opposite in me. I've decided that the edge of established culture is where the interesting work happens, and where some of the most interesting people can be found.

In other words, my filter bubble is my psychological safety zone. It's an emotional force field, just as it is for everyone. We all choose who we interact with, who we listen to, and the spaces that we inhabit. The important thing is not that we blow those bubbles to smithereens, but that we see them for what they are, and - just as those happily married TCKs have - let people in to help us grow and change them.

This weekend, children were shot with rubber bullets and tear gas at the US border with Mexico. The root of America's refusal to let them in is a fear of a disruption to those norms. It's in vain. Populations have been ebbing and flowing for as long as there have been people. America is changing, just as all countries are changing, as they always have and always will. And people like me - those of us with no nationality and no religion, but an allegiance to relationships and the cultures we create together - are growing in number. Selfishly, but also truthfully, I believe it's all for the better.

 

Photo by Elias Castillo on Unsplash

· Posts · Share this post

 

How machine learning can reinforce systemic racism

Over Thanksgiving, the Washington Post ran a profile of the babysitting startup Predictim:

So she turned to Predictim, an online service that uses “advanced artificial intelligence” to assess a babysitter’s personality, and aimed its scanners at one candidate’s thousands of Facebook, Twitter and Instagram posts.

The system offered an automated “risk rating” of the 24-year-old woman, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment — a 2 out of 5 — for bullying, harassment, being “disrespectful” and having a “bad attitude.”

Machine learning works by making predictions based on a giant corpus of existing data, which grows, is corrected, and becomes more accurate over time. If the algorithm's original picks are off, the user lets the software know, and this signal is incorporated back into the corpus. So to be any use at all, the system broadly depends on two important factors: the quality of the original data, and the quality of the aggregate user signal.
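That train-predict-correct loop can be sketched in a few lines. This is a toy illustration with invented names, not Predictim's actual system: the "model" here is just a per-feature label count, but the dependence on the original corpus and on the quality of user feedback is the same.

```python
# Toy sketch of the loop described above: train on a corpus, predict,
# incorporate user corrections back into the corpus. A real system uses
# far richer models, but shares the same two failure points: the quality
# of the original data and the quality of the feedback signal.

from collections import defaultdict


class FeedbackClassifier:
    def __init__(self):
        # corpus: feature -> {label: observation count}
        self.corpus = defaultdict(lambda: defaultdict(int))

    def train(self, feature, label):
        self.corpus[feature][label] += 1

    def predict(self, feature):
        labels = self.corpus.get(feature)
        if not labels:
            return None  # no data at all: the model can say nothing
        # pick the most frequently observed label for this feature
        return max(labels, key=labels.get)

    def feedback(self, feature, correct_label):
        # a user correction is just more training data
        self.train(feature, correct_label)


clf = FeedbackClassifier()
clf.train("posts_about_parties", "high_risk")  # one (possibly bad) data point
print(clf.predict("posts_about_parties"))      # only as good as the corpus

clf.feedback("posts_about_parties", "low_risk")
clf.feedback("posts_about_parties", "low_risk")
print(clf.predict("posts_about_parties"))      # corrections now outweigh the original
```

If the original data point is wrong and nobody corrects it, the wrong answer persists indefinitely; if only a handful of users ever give feedback, a single bad correction can flip the prediction.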

In the case of Predictim, it needs a great corpus of data about babysitters' social media posts and how they relate to real-world behavior. Somehow, it needs to be able to find patterns in the way someone uses Instagram, say, and how that relates to whether they're a drug user or have gone to jail. Then, assuming Predictim has a user feedback component, the users need to accurately gauge whether the algorithm made a good decision. Whereas in many systems a data point might be reinforced by hundreds or thousands of users giving feedback, a babysitter presumably has comparatively few interactions with parents. So the quality of each instance of that parental feedback is really important.

It made me think of COMPAS, a commercial system that assesses how likely a criminal defendant is to recidivate. It's one of several tools that courts are using to adjust actual sentences, particularly with respect to parole. Unsurprisingly, when ProPublica analyzed the data, inaccuracies fell along racial lines:

Black defendants were also twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists.

It all comes down to that corpus of data. And when the underlying system of justice is fundamentally racist - as it is in the United States, and in most places - the data will be too. Any machine learning algorithm supported by that data will, in turn, make racist decisions. The biggest difference is that while we've come to understand that the human-powered justice system is beset with bias, that understanding with respect to artificial intelligence is not yet widespread. For many, in fact, the promise of artificial intelligence is specifically - and erroneously - that it is unbiased.
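The disparity ProPublica found can be stated precisely. One of the measures they compared is the false positive rate per group: the share of defendants who did not reoffend but were still classified as high risk. A minimal sketch, using made-up illustrative counts rather than ProPublica's figures:

```python
# False positive rate per group: FP / (FP + TN), i.e. the fraction of
# people who did NOT reoffend but were still labeled high risk.
# The counts below are invented for illustration only.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    return false_positives / (false_positives + true_negatives)

groups = {
    # group: (false positives, true negatives) among non-reoffenders
    "group_a": (450, 550),
    "group_b": (230, 770),
}

for name, (fp, tn) in groups.items():
    print(f"{name}: false positive rate = {false_positive_rate(fp, tn):.0%}")
```

The point of the metric is that a model can look "accurate" overall while distributing its errors unevenly: aggregate accuracy says nothing about which group disproportionately bears the wrongly flagged cases.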

Do we think parents - particularly in the affluent, white-dominated San Francisco Bay Area communities where Predictim is likely to launch - are more or less likely to give positive feedback to babysitters from communities of color? Do we think the algorithm will mark down people who use language most often used in underrepresented communities in their social media posts?

Of course, this is before we even touch the Minority Report pre-crime implications of technologies like these: they aim to predict how we will act, vs how we have acted. The only possible outcome is that people whose behavior fits within a narrow set of norms will more easily find gainful employment, because the algorithms will be trained to support this behavior, while others find it harder to find jobs they might, in reality, be able to do better.

It also incentivizes a broad surveillance society and repaints the tracking of data about our actions as a social good. When knowledge about the very existence of surveillance creates a chilling effect on our actions, and knowledge about our actions can be used to influence democratic elections, this is a serious civil liberties issue.

Technology can have a part to play in building safer, fairer societies. But the rules it enforces must be built with care, empathy, and intelligence. There is an enormous part to play here not just for user researchers, but for sociologists, psychologists, criminal justice specialists, and representatives from the communities that will be most affected. Experts matter here. It's just one more reason that every team should incorporate people from a wide range of backgrounds: one way for a team to make better decisions on issues with societal implications is for it to be more inclusive.


 

I'm going dark on social media for the rest of 2018.

For a host of reasons, I've decided to go dark on social media for the remainder of 2018. If my experiment is successful beyond that time, I'll just keep it going.

Originally, I'd intended to do this just for the month of December, but as I sat around the Thanksgiving dinner table yesterday, surrounded by family and friends, I asked myself: "why not now?"

So, now is the time.

There are two reasons:

The first is that, ordinarily, if a company were found to be furthering an anti-semitic smear to protect itself from accusations that it had allowed illegal political advertising intended to influence an election, I probably wouldn't buy goods or services from that company. Particularly if it tried hard to hide that news. The fact that this company has ingrained itself in nearly every aspect of modern life doesn't mean it should be excused - in fact, it makes its actions exponentially more disturbing.

Similarly, other social networks have not exactly shown themselves to be exemplars. While I firmly believe that the web is a net positive for democracy which has provided opportunities for everyone to have a voice, social networking companies have largely shirked the responsibilities of the privileged positions they have found themselves in. We use them more than any other source to learn about the world - but they've chosen to serve us with algorithms that are optimized to maximize our engagement with display ads rather than nurture our curiosity and empathy. Emotive content tends to rise to the top, which has real effects: we're more divided than ever before in the west, and in countries like Myanmar, social networking has been an ingredient in genocide.

I don't want my engagement, or engagement in the content I contribute, to add value to this machine.

The second reason is that it doesn't make me feel good. Partially this is because of the emotive content the algorithms serve to me, which takes a real emotional toll. Partially it's because the relationships you maintain on social networks are shallow. In some cases, they are shadows of real, deeper relationships, but they don't serve those relationships well; posting feels like emotional labor, but has little of the emotional effect or intimacy of real communication. It's an 8-bit approximation of friendship where the conversations are performative because they're always in front of an audience.

One of the things that was stopping me from withdrawing from social media was a worry that people would forget about me. Many of my friends are overseas, and we don't see each other on a regular basis. But I've decided that this is manufactured FOMO; my really meaningful relationships will continue regardless of which social networks I happen to use. The idea that Facebook is an integral part of my friendships seems more toxic the more I think about it.

Finally, I'll admit it: I'm kind of depressed. Social networking has been shown to make people more so. Cutting it out for a while seems like an okay thing to try.

I've removed all the social apps from my phone and replaced them with news sources and readers. So here's where to find me for the next little while:

I'm cutting out Facebook, Twitter, Instagram, LinkedIn, and Mastodon completely. (Mastodon doesn't suffer from the organizational issues I described above, but by aping commercial social networking services, it suffers from the same design flaws.) As of tonight, I won't be logging into those platforms on any device, and I won't receive comments, likes, reshares, etc, on any of them.

I will be posting regularly on my blog here at werd.io. If you use a feed reader (I use NewsBlur and Reeder together), I have an RSS feed. Yeah, we still have those in 2018. But if you don't, you can also get new posts in your email inbox by subscribing over here. I've set it up so you can just reply to any message and I'll get it immediately.

You can always email me at ben@benwerd.com, or text me on Signal at +1 (510) 283-3321.

I'm not removing any accounts for now - I'm simply logging out. If this experiment continues, I'll go so far as to remove my information.

Please do say hi using any of those methods. And if we find ourselves in the same city, let's hang out. I'm hoping that this experiment will lead to more, deeper relationships. But for now: this is why you're not going to see my posts in your usual feeds.


 

Media for the people

Yesterday, in the afternoon, I collapsed. Everything seemed overwhelming and sad.

Today, I'm full of energy again, and I think there's only one kind of work that matters. The work of empowerment.

Broadly: How can we return to a functional democracy that works for everyone?

Narrowly: How can we make sure this administration is not able to follow its authoritarian instincts, how can we make sure they are nowhere near power in 2020, and how can we make sure this never happens again?

A huge amount of this is fixing the media. Not media companies - but the fabric of how we get our information and share with each other. I've been focused on this for my entire career: Elgg, Latakoo, Known, Medium, Matter and Unlock all deal with this central issue.

A convergence of financial incentives has created a situation where white supremacy and authoritarianism can travel across the globe in the blink of an eye - and can also travel faster than more nuanced ideas. Fascist propaganda led directly to modern advertising, and modern advertising has now led us right back to fascist propaganda, aided and abetted by people who saw the right to make a profit as more important than the social implications of their work.

I think this is the time to take more direct action, and to build institutions that don't just speak truth to power, but put power behind the truth. Stories are how we learn, but our actions define us.

Non-violent resistance is the only way to save democracy. But we need it in every corner of society, and in overwhelming numbers.

There are people out on the streets today, who have been fighting this fight for longer than any of us. How can we help them be more effective?

How can we help people who have never been political before in their lives to take a stand?

How can we best overcome our differences and come together in the name of democracy, freedom, and inclusion?

And how can we actively dismantle the apparatus of oppression?

It's time to create a new kind of media that presents a real alternative to the top-down structures that have so disserved us. One that is by the people, for the people, and does not depend on wealthy financial interests.

And with it, a new kind of democracy that is not just representative, but participative. For everyone, forever.


 

Gab and the decentralized web

As a proponent of the decentralized web, I've been thinking a lot about the aftermath of the domestic terrorism that was committed in Pittsburgh at the Tree of Life synagogue over the weekend, and how it specifically relates to the right-wing social network Gab.

In America, we're unfortunately used to mass shootings from right-wing extremists, who have committed more domestic acts of terror than any other group. We're also overfamiliar with ethnonationalists and racist isolationists, who feel particularly emboldened by the current President. Lest we forget, when fascists marched in the streets yelling "the Jews will not replace us", he announced that "you had very fine people on both sides". The messaging could not be more clear: the President is not an enemy of hate speech.

As the modern equivalent of the public square, social networking services have been under a lot of pressure to remove hate speech from their platforms. Initially, they did little; over time, however, they began to remove many of the worst offenders. Hence Gab, which was founded as a kind of refuge for people whose speech might otherwise be removed by the big platforms.

Gab claims it's a neutral free speech platform in the spirit of the First Amendment. (Never mind that the First Amendment protects you from the government curtailing your speech, rather than corporations enacting policies for private spaces that they own and control.) But anyone who has spent 30 seconds there knows this isn't quite right. This weekend's shooter chose to post there before committing his atrocity; afterwards, many other users proclaimed him to be a hero.

It's an online cesspit, home to some of the worst of humanity. These are people who refer to overt racism as "wrongthink" and mock people who are upset by it. As the Huffington Post recently reported about its CEO, Andrew Torba:

[...] As Gab’s CEO, he has rooted for prominent racists, vilified minorities, fetishized “trad life” in which women stay at home with the kids, and fantasized about a second American civil war in which the right outguns the left.

Gab is gone for now - a victim of its service providers pulling the plug in the wake of the tragedy - but it'll be back. The way to fight this speech, it claims, is not deplatforming but more speech. In my opinion, this is a trap that falsely sets up the two opposing sides as being equivalent. Bigotry is not an equal idea, but it's in their interests to paint it as such. While it's pretty easy to debate bigots on an equal platform and win, doing so unintentionally elevates their standing. Simply put, their ideas shouldn't be given oxygen. A right to freedom of speech is not the same as a right to be amplified.

I found this piece by an anonymous German student in Saxony instructive:

We also have to understand that allowing nationalist slogans to gain currency in the media and politics, allowing large neo-Nazi events to take place unimpeded and failing to prosecute hate crimes all contribute to embolden neo-Nazis. I see parallels with an era we thought was confined to the history books, the dark age before Hitler.

An often-repeated argument about deplatforming fascists is that we'll just drive them underground. In my opinion, this is great: when we're literally talking about Nazis, driving them underground is the right thing to do. Yes, you'll always have neo-Nazis somewhere. But the more they're exposed to the mainstream, the more their movement may gain steam. This isn't an academic problem, or a problem of optics: give Nazis power and people will die. These are people who want to create ethnostates; they want to prioritize people based on their ethnicity and background. These movements start in some very dark places, and often end in genocide.

When we talk about a decentralized social web, the framing is usually that it's one free from censorship; where everyone has a home. I broadly agree with that idea, but I also think the discussion must become more nuanced in the face of communities like Gab.

I agree wholeheartedly that the majority of our global discourse can't be trusted to a small handful of very large, monocultural companies that answer to their shareholders over the needs of the public. The need to make user profiles more valuable to advertisers has, for example, seen transgender users thrown off the platform for not using their deadnames. In a world where you need to be on social media to effectively participate in a community, that has had a meaningful effect on already vulnerable communities.

There's no doubt that this kind of unacceptable bigotry at the hands of surveillance capitalism would, indeed, be prevented by decentralization. But removing silos would also, at least in theory, enable and protect fascist movements, and give racists like this weekend's shooter a place to build unhindered community.

We must consider the implications of removing these gatekeepers very deeply - and certainly more deeply than we have been already.

A common argument is that the web is just a tool, oblivious to what people use it for. This is similar to the argument that was made about algorithms, until it became obvious that they were built by people and based on their assumptions and biases. Nothing created by people is unbiased; everything is in part derived from the context and assumptions of its creators. By being more aware of our context and the assumptions we're bringing to the table, we can hopefully make better decisions, and see potential problems with our ideas sooner. Even if there isn't a perfect solution, understanding the ethics of the situation allows us to make more informed decisions.

On one side, by creating a robust decentralized web, we could create a way for extremist movements to thrive. On another, by restricting hate speech, we could create overarching censorship that genuinely guts freedom of speech protections, which would undermine democracy itself by restricting who can be a part of the discourse. Is there a way to avoid the second without the first being an inevitability? And is it even possible, given the possible outcomes, to return to our cozy idea of the web as being a force for peace through knowledge?

These are complicated ethical questions. As builders of software on the modern internet, we have to know that there are potentially serious consequences to the design decisions we make. Facebook started as a prank by a college freshman and now has a measurable impact on genocide in Myanmar. While it's obvious to me that everyone having unhindered access to knowledge is a net positive that particularly empowers disadvantaged communities, and that social media has given us access to new voices and a wider array of lived experiences, it has also been used to spread hate, undermine elections, and disempower whole communities. Decentralizing the web will allow more people to share on their own terms, using their own voices; it will also remove many of the restrictions on the spread of hatred.

Wherever we end up, it's clear that President Trump is wrong about the alt-right: these aren't very fine people. These are some of the worst people in the world. Their ideology is abhorrent and anti-human; their messages are obscene.

No less than the future of democratic society is at stake. And a society where the alt-right wins won't be worth living in.

Given that, it's tempting to throw up our hands and say that we should ban them from speaking anywhere online. But if we do that, the consequence is that a mechanism for censorship has to be built into the web: single points of failure that could be removed to prevent any community from speaking. Who gets to control that? And who says we should get to have this power?


 

The day I realized I was going against the career grain

One of the most surreal professional experiences of my career was going to work for Medium. It was a decision I thought long and hard about, and was a sea change in the way I worked.

For my entire career, I'd gone against the grain. I bootstrapped an open source startup from Scotland, determined that I wouldn't move to Silicon Valley. I was the first employee at another one, based in Texas, that was determined to be Texan through and through. And then I finally founded a company in the San Francisco Bay Area, but was determined that it should be open source and decentralized (at a time when almost all investors were against the idea). In all these cases, while I had equity, I had a pretty low salary. In fact, I had never made much money at all, because I had put the highest priority on maintaining my social ideology.

So when I came to Medium, I immediately earned double the highest amount of money I'd ever made. Suddenly I was in this incredibly slick work environment, with empathetic, thoughtful people who were at the top of their skills. There were high-burn frills like kombucha on tap, but much more importantly, there were real benefits. Vacation was encouraged, there was parental leave, and I could spend thousands of dollars on my own education without drawing from my salary. (Side note: a lot of fancy tech company benefits are things that every employee in Europe is entitled to by law.)

Most strikingly, the people I worked with had mostly never worked in low-budget startups. If they'd been involved in small businesses at all, they had very quickly attracted millions of dollars in venture capital - but quite often, they'd come from companies like Google, and had enjoyed these kinds of salaries and benefits for their entire working lives.

Only then did I realize that for my entire career, by going against the grain and trying to build my own environments from scratch, I had made life incredibly hard for myself. Honestly, I thought that this was just how work was. But it turned out there was this world where, if I could accept not being my own boss and coming into an office building every day (which had both felt like psychological barriers, but in reality were very minor), I could make good money, go home at a normal time, take decent vacations without worrying so much about the budget, and be a healthier human being. What?!

In reality, I became incredibly anxious. Because I was working with people who had just had the luxury of focusing on their skills for their whole careers, I had really strong imposter syndrome. And everything was so slow, methodical, and ordered compared to the bouncing-off-the-walls chaos of an early-stage startup. I was still a little bit addicted to the adrenaline, and adapting was tougher than it should have been. This was the cushiest job I ever had, with some of the most genuinely amazing coworkers. I was a highly privileged technology worker, making really good money in a lovely environment - and I felt guilty for not being as happy as I felt I should have been.

Over time, it got easier. Matter offered me a job at the end of my first year, which I couldn't say no to. I think I wouldn't have done as well if I hadn't gone to Medium first: I had become a team player, and a much better employee. Had I stayed, I'm certain the unease would have continued to fade over time. I continued this growth trajectory at Matter; it was like losing an addiction to radical independence.

Honestly, I think that kind of radical independence is oversold. Being a founder - or frankly, even just a sole operator or consultant - is lonely, hard work, and the pay is bad. It's a bit sad that it took me over a decade to understand this. And while I don't want to downplay founding something, you should only do it if there's a foreseeable path to a point where you won't be in survival mode. (Real investment really helps, but it's not appropriate for every business, and not everyone can raise it.) Doing what regular people do - which is to get a job, potentially move to where the jobs are, pull a salary as part of a much larger organization, and build a financially stable future - is not at all a bad way to live. And I wish I could go back and tell my 25-year-old self about it.


 

It's time for a new branch of public media

President Lyndon B Johnson signed the Public Broadcasting Act in 1967, which established the Corporation for Public Broadcasting. Previously, an independent public broadcaster had been established through grants by the Ford Foundation, but Ford began to withdraw its support.

Here's what he said:

"It announces to the world that our nation wants more than just material wealth; our nation wants more than a 'chicken in every pot.' We in America have an appetite for excellence, too. While we work every day to produce new goods and to create new wealth, we want most of all to enrich man's spirit. That is the purpose of this act."

To this day, PBS and NPR carry balanced, factual programming, supported by listeners and underwriters rather than ads.

Meanwhile, C-SPAN was established in 1979 as an independent, non-profit entity. It was founded by cable operators, and gets its funding through carrier fees. It gets 6 cents per cable subscriber in the United States. Its coverage of America's political process is unprecedented.

Public broadcast media hasn't just had an effect on the education of the public and on elections. It's also had an effect on private media, acting as a bar for the kinds of high-quality content that audiences might expect. For example, NPR sets the bar for commercial podcasting.

If companies like Facebook and Twitter are media companies too - and they are - we haven't yet seen a non-commercial equivalent as we have for TV and radio. There's an argument that open projects like Mastodon have a similar spirit, but there's no major backing.

As more and more of us get our news and information from social media, there's a growing case for a public media equivalent. Just as NPR and PBS don't need to worry about which content will sell commercials, such a service wouldn't need to worry about promoting engagement to sell display ads.

In the same way that NPR and PBS have set the bar for factual content on radio and TV, an online service run in the public interest would set the bar for how content is delivered online. It would improve the ecosystem for everyone, as well as being directly informative.

History points to different ways this could be funded. The Ford Foundation could back it, in the same way they backed the original US public broadcasters. The Corporation for Public Broadcasting, or an organization like it, could back it. Or it could be created through contributions from service providers, as was done with C-SPAN.

It could also be established as a nonprofit fund that would back and underwrite promising storytelling platforms committed to operating in the public interest. A little bit of seed funding across multiple projects at first; then more funds to back the platforms that succeeded.

If we've learned anything from broadcasting (or Facebook!), it's that for-profit funding alone isn't enough to create a healthy media ecosystem. But any noncommercial service is going to need to find both financial and cultural backing.

I think it's one of the most important things we can be doing.

 

This piece was originally published as a Twitter thread.


 

It's time to get out of the way of artists making money on the internet

I'm spending some of my time trying to better understand how people who make creative work on the internet - writers, artists, musicians, indie developers - can build an audience and make a living from their work.

I have a lot of questions about how these creators can find people who their work resonates with. This is the opposite of founding a startup or a small business, for example: there you're finding the audience first, and building something that resonates with them. While some creative work is along those lines, more of it comes from a different creative space. The work is some function of the creator's need, with the feedback loop from the audience factoring into the mix as it grows.

Community-building, then, is a big question - particularly in the world of opaque social media algorithms that get in the way of talking directly to your followers. I'm calling it "community-building" because while promotion is a component, it's not the whole purpose, nor the overriding instinct. Finding kindred minds is a more immediate emotional need, even if the financial act of covering your bills is closer to the base level of Maslow's Hierarchy.

In the current ecosystem, community-building and compensation have been rolled up into one set of tools. By providing value over the top of facilitating transactions, platforms can attract creators. The more creators they attract, the larger the audience they bring with them, and the larger the cumulative profit they ultimately earn.

Medium does this well: by submitting work to the Partner Program, you're much more likely to be featured on the homepage and in its newsletters - and its payments are not insubstantial (here's my featured story Rules for Resters). Substack performs a similar trick for email newsletters (I subscribe to Daniel Ortberg). Patreon attempts to do it for every kind of creative work on every medium, which is a tricky balancing act (I back Hallie Bateman and Mastodon).

Everybody is more or less aligned here, and real money is being made, but this bundling makes it difficult to tailor your revenue or community-building tactics to your audience. One size has to fit all.

This may work for some creators; others, not so much. Every community and audience is different, and understanding their needs and desires is a core part of building a following, and a subscriber base. It's not about what you assume their needs and desires are; it's all about getting to know them as real people, and through this holistic understanding, developing unique insights about them. These insights can validate or invalidate your assumptions, but they can also take you in entirely new directions. (This principle applies to both artists and business founders, although, as I pointed out earlier, the starting point in this learning cycle is probably different.)

There's a clear benefit to making payments easier, and having a common gateway to do that, so that audience members don't have to enter their credit card details again and again. But that doesn't mean everything needs to be bundled. There's also a clear benefit to having community-building and payment tools made of small pieces, loosely joined, so that you can create the stack that makes the most sense for your own community, with tools that are tailored for them. One-size-fits-all services are the first step, and maybe the entry point. But this is the web, and more is possible.

Patreon et al don't just want to own the payment relationship between artists and their audiences; they want to own all aspects of that relationship. They want fans to visit their homepages instead of the artists' own. Ultimately, they want to own the way artists communicate with the world - making those communications subject to their own rules.

By establishing open standards for one-click, peer-to-peer payment that can then integrate with multiple tools, artists can potentially be better served. They can meet their audiences where they're at. They can make money without adhering to anyone else's rules. And they can more quickly reach a point where they're covering their costs through the work that they love.
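Some of this plumbing already exists. The `rel="payment"` link relation is registered with IANA precisely so that a page can point machine-readably at a place where its author accepts money, and any tool can discover it without asking a platform's permission. Here's a minimal sketch of that discovery step in Python; the page markup and URLs are invented for illustration:

```python
from html.parser import HTMLParser


class PaymentLinkParser(HTMLParser):
    """Collects href targets from <a> and <link> tags marked rel="payment"."""

    def __init__(self):
        super().__init__()
        self.payment_links = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            attrs = dict(attrs)
            # rel can hold multiple space-separated relations
            rels = (attrs.get("rel") or "").split()
            if "payment" in rels and attrs.get("href"):
                self.payment_links.append(attrs["href"])


# Hypothetical artist homepage markup:
page = """
<html><head>
  <link rel="payment" href="https://pay.example.com/artist">
</head><body>
  <a rel="payment" href="https://tips.example.com/artist">Tip me</a>
</body></html>
"""

parser = PaymentLinkParser()
parser.feed(page)
print(parser.payment_links)
```

Any browser, feed reader, or fan-community tool could run this same discovery against an artist's own homepage - no intermediary platform required.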

This open source, decentralized world is coming. It's great news for anyone who wants to see a diverse cultural landscape where anyone can make money on their own terms, without regard for language, borders, or what someone at a desk in San Francisco thinks would be nice to promote. And it will change everything.

· Posts · Share this post

 

Making work in the Trump era

Honestly, most days, I feel paralyzed. I feel like there's so much happening, that we're literally descending into fascism on a global scale, and that I don't know if anything I do can possibly be impactful enough. I also feel that while it would be easy to block it all out and carry on as normal, to put politics aside and live my life as if none of this was going on, to do so would be complicity.

I have the privilege to set everything aside, as a white male in Silicon Valley. But if I did that, I would feel the weight of my ancestors - people who fled pogroms in Ukraine, who fought for social justice in 1930s America, who fought the Nazis in Europe, who led the resistance against the Japanese in Indonesia - weighing down on me. And I would feel the weight of my friends of color, my LGBTQIA friends, my immigrant friends. It would be an entirely selfish act. And even selfishly, the result would be a world that I simply don't want to live in: a restrictive, brutal, theist society built around the supremacy of a narrow, arbitrary demographic.

If you are not vocally political in the current era, your inaction is tacit support for the current regime and its bigoted value system. End of story.

I know I'm not alone.

But I also know there's work to be done.

I'm vocal; I give a significant percentage of my income; I march. But I also need to pay my rent and cover these donations to begin with.

I've already made myself one pact: while I work in tech, an industry that has undeniably been part of the problem, I will only work on mission-driven problems at its intersection with democracy. I've turned down large salaries at companies you can name, because I want to be able to feel like I'm part of the solution and not the problem. It means I'll probably never be a millionaire. I can live with that.

The second, newer pact, is to work hard at the work I do, to the exclusion of distractions. This is not something I've been good at, but it's a skill I need to rebuild. Like many of us, I've been glued to social media, simultaneously addicted to and exhausted by every new development. And honestly, I have to break out of it.

Although raising and maintaining awareness is vital, sitting and typing outraged tweets on social media is masturbatory, and benefits the very platforms that were a large part of creating this current situation. Taking a step back and using my voice to amplify others who might not enjoy the same privileges, while also taking more calculated moves to have impact where it counts, is more important.

· Posts · Share this post

 

Bad news: there's no solution to false information online

For the last couple of years, fake news has been towards the top of my agenda. As an investor for Matter, it was one of the lenses I used to source and select startups in the seventh and eighth cohorts. As a citizen, disinformation and misinformation influenced how I thought about the 2016 US election. And as a technologist who has been involved in building social networks for 15 years, it has been an area of grave concern.

Yesterday marked the first day of Misinfocon in Washington DC; while I'm unfortunately unable to attend, I'm grateful that hundreds of people who are much smarter than me have congregated to talk about these issues. They're difficult and there's no push-button answer. From time to time I've seen pitches from people who purport to solve them outright, and people have phoned me to ask for a solution. So far, I've always disappointed them: I'm convinced that the only workable solution is a holistic approach that provides more context.

Of course, "fake news" is a terrible term, one that's being used to further undermine trust in the press. When we use it, we're really talking about three things:

Propaganda: systematic propagation of information or ideas in order to encourage or instil a particular attitude or response. In other words: weaponized information to achieve a change of mindset in its audience. The information doesn't have to be incorrect, but it might be.

Misinformation: spreading incorrect information, for any reason. Misinformation isn't necessarily malicious; people can be wrong for a variety of reasons. I'm wrong all the time, and you are too.

Disinformation: disseminating deliberately false information, especially when supplied by a government or its agent to a foreign power or to the media, with the intention of influencing the policies of those who receive it.

None of them are new, and certainly none of them were newly introduced in the 2016 election. 220 years ago, John Adams had some sharp words in response to Condorcet's comments about journalism:

Writing in the section where the French philosopher predicted that a free press would advance knowledge and create a more informed public, Adams scoffed. “There has been more new error propagated by the press in the last ten years than in an hundred years before 1798,” he wrote at the time.

Condorcet's thoughts on journalism inspired the establishment of authors' rights in France during the French revolution. In particular, the right to be identified as an author was developed not to reward the inventors of creative work, but so that authors and publishers of subversive political pamphlets at the time could be identified and held responsible. It's clear that these conversations have been going on for a long time.

Still, trust in the media is at an all-time low. 66% of Americans say the news media don't do a good job of separating facts from opinion; only 33% feel positively about them. As Brooke Binkowski, Managing Editor of Snopes, put it to Backchannel in 2016:

The misinformation crisis, according to Binkowski, stems from something more pernicious. In the past, the sources of accurate information were recognizable enough that phony news was relatively easy for a discerning reader to identify and discredit. The problem, Binkowski believes, is that the public has lost faith in the media broadly — therefore no media outlet is considered credible any longer.

Credibility is key. In the face of this lack of trust, a good option would be to go back to the readers, understand their needs deeply, and adjust your offerings to take that into account. It's something that Matter helped local news publishers in the US to do recently with Open Matter to great success, and there's more of this from Matter to come. But this is still a minority response. As Jack Shafer wrote in Politico last year:

But criticize them and ask them to justify what they do and how they do it? They go all whiny and preachy, wrap themselves in the First Amendment and proclaim that they’re essential to democracy. I won’t dispute that journalists are crucial to a free society, but just because something is true doesn’t make it persuasive.

So what would be more persuasive?

How can trust be regained by the media, and how could the web become more credible?

There are a few ways to approach the problem: from a bottom-up, user driven perspective; from the perspective of the publishers; from the perspective of the social networks used to disseminate information; and from the perspective of the web as a platform itself.

Users

From a user perspective, one issue is that modern readers put far more trust in individuals than they do in brand names. It's been found that users trust organic content produced by people they trust 50% more than other types of media. Platforms like Purple and Substack allow journalists to create their own personal paid subscription channels, leveraging this increased trust. A more traditional publisher brand could create a set of Purple channels for each business, for example.

Publishers

From a publisher perspective, transparency is key: in response to an earlier version of this post, Jarrod Dicker, the CEO of Po.et, pointed out that transparency of effort could be helpful. Here, journalists could show exactly how the sausage was made. As he put it, "here are the ingredients". Buzzfeed is dabbling in these waters with Follow This, a Netflix documentary following the production of a single story each episode.

Publishers have also often fallen into the trap of writing highly emotive, opinion-driven articles in order to increase their pageviews. Often this is driven by incentives inside the organization for journalists to hit a certain popularity level with their pieces. While the tactic may help the bottom line in the short term, it comes at the expense of longer-term profits: those opinion pieces erode trust in the publisher as a source of information, and because the content is optimized for pageviews, it results in shallower content overall.

Social networks

From a social network perspective, fixing the news feed is one obvious way to make swift improvements. Today's feeds are designed to maximize engagement by showing users exactly what will keep them on the platform for longer, rather than a reverse chronological list of content produced by the people and pages they've subscribed to. Unfortunately, this prioritizes highly emotive content over factual pieces, and the algorithm becomes more and more optimized for this over time. The "angry" reacji is by far the most popular reaction on Facebook - a fact that illustrates this emotional power law. As the Pew Research Center pointed out:

Between Feb. 24, 2016 – when Facebook first gave its users the option of clicking on the “angry” reaction, as well as the emotional reactions “love,” “sad,” “haha” and “wow” – and Election Day, the congressional Facebook audience used the “angry” button in response to lawmakers’ posts a total of 3.6 million times. But during the same amount of time following the election, that number increased more than threefold, to nearly 14 million. The trend toward using the “angry” reaction continued during the last three months of 2017.

Inside sources tell me that this trend has continued. Targeted display advertising both encourages the platforms to maximize revenue in this way, and encourages publishers to write that highly emotive, clickbaity content, undermining their own trust in order to make short-term revenue. So much misinformation is simply clickbait that has been optimized for revenue past the need to tell any kind of truth.

It's vital to understand these dynamics from a human perspective: simply applying a technological or a statistical lens won't provide the insights needed to create real change. Why do users share more emotive content? Who are they? What are their frustrations and desires, and how does this change in different geographies and demographics? My friend Padmini Ray Murray rightly pointed out to me that ethnographies of use are vital here.

It's similarly important to understand how bots and paid trolls can influence opinion across a social network. Twitter has been hard at work suspending millions of bots, while Facebook heavily restricted its API to reduce automatic posting. According to the NATO Stratcom Center of Excellence:

The goal is permanent unrest and chaos within an enemy state. Achieving that through information operations rather than military engagement is a preferred way to win. [...] "This was where you first saw the troll factories running the shifts of people whose task is using social media to micro-target people on specific messaging and spreading fake news. And then in different countries, they tend to look at where the vulnerability is. Is it minority, is it migration, is it corruption, is it social inequality. And then you go and exploit it. And increasingly the shift is towards the robotisation of the trolling."

Information warfare campaigns between nations are made possible by vulnerabilities in social networking platforms. Building these platforms stopped being a simple game of growing a user base long ago; they are now theaters of war. Twitter's long-standing abuse problem is now an information warfare problem. Preventing anyone from gaming these platforms for such purposes should be a priority - but as the conflicts become more serious, platform changes increasingly become a matter of foreign policy. It would be naïve to assume that the big platforms are not already working with governments, for better or worse.

The web as a platform

Then there's the web as a platform itself: a peaceful, decentralized network of human knowledge and creativity, designed and maintained for everyone in the world. A user-based solution requires behavior change; a social network solution requires every company to improve its behavior, potentially at the expense of its bottom line. What can be done on the level of the web itself, and the browsers that interpret it, to create a healthier information landscape?

One often-touted solution is to maintain a list of trustworthy journalistic sources, perhaps by rating newsroom processes. Of course, the effect here is direct censorship. Whitelisting publishers means that new publications are almost impossible to establish. That's particularly pernicious because incumbent newsrooms are disproportionately white and male: do we really want to prevent women and people of color from publishing? Furthermore, these publications are often legacy news organizations whose perceived trust derives from their historical control over the means of distribution. The fact that a company had a license to broadcast when few were available, or owned a printing press when publishing was prohibitively expensive for most people, should not automatically impart trust. Rich people are not inherently more trustworthy, and "approved news" is a regressive idea.

Similarly, accreditation would put most news startups out of business. Imagine a world where you need to pay thousands of dollars to be evaluated by a central body, or web browsers and search engines around the world would disadvantage you in comparison to people who had shelled out the money. The process would be subject to ideological bias from the accrediting body, and the need for funds would mean that only founders from privileged backgrounds could participate.

I recently joined the W3C Credible Web Community Group and attended the second day of its meeting in San Francisco, and was impressed with the nuance of thought and bias towards action. Representatives from Twitter, Facebook, Google, Mozilla, Snopes, and the W3C were all in attendance, discussing openly and directly collaborating on how their platforms could help build a more credible web. I'm looking forward to continuing to participate.

It's clearly impossible for the web as a platform to objectively report that a stated fact is true or false. This would require a central authority of truth - let's call it MiniTrue for short. It may, however, be possible for our browsers and social platforms to show us the conversation around an article or component fact. Currently, links on the web are contextless: if I link to the Mozilla Information Trust Initiative, there's no definitive way for browsers, search engines or social platforms to know whether I agree or disagree with what is said within (for the record, I'm very much in agreement - but a software application would need some non-deterministic fuzzy NLP AI magic to work that out from this text).

Imagine, instead, if I could highlight a stated fact I disagree with in an article, and annotate it by linking that exact segment from my website, from a post on a social network, from an annotations platform, or from a dedicated rating site like Tribeworthy. As a first step, it could be enough to link to the page as a whole. Browsers could then find backlinks to that segment or page and help me understand the conversation around it from everywhere on the web. There's no censoring body, and decentralized technologies work well enough today that we wouldn't need to trust any single company to host all of these backlinks. Each browser could then use its own algorithms to figure out which backlinks to display and how best to make sense of the information, making space for them to find a competitive advantage around providing context.
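The W3C Web Annotation Data Model already standardizes much of this vocabulary: an annotation can target an exact text segment of any page via a selector, can state its motivation, and can itself be hosted anywhere on the web. A minimal sketch of what disputing one quoted passage might look like - the article URL, quoted text, and annotation body are invented for illustration:

```python
import json

# A minimal Web Annotation questioning one exact passage of an article.
# TextQuoteSelector pins the annotation to the quoted text itself, so it
# survives even if the page's markup changes around it.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "questioning",
    "body": {
        "type": "TextualBody",
        "value": "This figure is contradicted by the agency's own report.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://news.example.com/article",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "unemployment fell to 2%",
            "prefix": "the report claims that ",
            "suffix": " last quarter",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the format is an open standard rather than a platform feature, any browser or aggregator could collect annotations like this from across the web and surface the conversation around a given passage, each with its own ranking algorithm.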

Startups

I've come to the conclusion that startups alone can't provide the solutions we need. They do, however, have a part to play. For example:

A startup publication could produce more fact-based, journalistic content from underrepresented perspectives and show that it can be viable by tapping into latent demand. eg, The Establishment.

A startup could help publications rebuild trust by bringing audiences more deeply into the process. eg, Hearken.

A startup could help to build a data ecosystem for trust online, and sell its services to publications, browsers, and search engines alike. eg, Factmata and Meedan.

A startup could establish a new business model that prioritizes something other than raw engagement. eg, Paytime and Purple.

But startups aren't the solution alone, and no one startup can be the entire solution. This is a problem that can only be solved holistically, with every stakeholder in the ecosystem slowly moving in the right direction.

It's a long road

These potential technology solutions aren't enough on their own: fake news is primarily a social problem. But ecosystem players can help.

Users can be wiser about what they share and why - and can call out bad information when they see it. Those with the means can provide patronage to high-quality news sources.

Publishers can prioritize their own longer term well-being by producing fact-based, deeper content and optimizing for trust with their audience.

Social networks can find new business models that aren't incentivized to promote clickbait.

And by empowering readers with the ability to fact check for themselves and understand the conversational context around a story, while continuing to support the web as an open platform where anyone can publish, we can help create a web that disarms the people who seek to misinform us by separating us from the context we need.

These are small steps - but together, taken as a whole, steps in the right direction.

 

Thank you to Jarrod Dicker and Padmini Ray Murray for commenting on an earlier version of this post.

· Posts · Share this post

 

Building an Instant Life Plan and telling your personal story

The last couple of months have been full of decision points for me, both personally and professionally. Everything has been on the table, and everything has been in potential flux.

Having worked in early stage startups pretty much continuously since 2003, it's possibly been less stressful for me than this level of uncertainty might be for others. Still, going forward, I would like to be more intentional about how I'm building my personal life. And while this might come across as a little pathological - have I jumped the Silicon Valley shark? - it seems like some of the tools we use to quickly understand businesses might work here, too. I typically don't like imposing frameworks on my personal life because you lose serendipity, and the experiences worth having are usually precluded by adding too much structure. I think humans are meant to freestyle; living by too many sets of rules closes you off to new possibilities.

Conversely, having guiding principles, and treating them as a kind of living document, could be helpful. It's the same thing I've advised so many startups to do: building a rigid business plan destroys your ability to be agile, but writing out the elements of your business forces you to describe and understand them. The Stanford d.School style Instant Business Plan, where the elements are literally Post-Its that can be swapped and changed, is a far better north star than a one-shot document. I think the same approach could work well for a life plan: a paper document where changeability is an intrinsic part of the format, but you are nonetheless forced to express your ideas concretely.

Why Post-Its rather than a document or a personal wiki? Post-Its force you to summarize your thoughts succinctly, and can easily and tangibly be replaced and moved around. Other options carry the risk of being too verbose (which is counter to the goal of creating an easy-to-follow north star) or unchangeable (which is counter to the goal of creating a living document that changes as you learn more and test your ideas).

Here's what it could look like, as a rough version 0.1. It's inspired both by the Stanford d.School Instant Business Plan, and a similar document used for startups at Matter. Don't give yourself more than 90 minutes to put this together:

 

Hi! I'm [halfsheet Post-It]
An elevator pitch of you, that doesn't focus on what you do for a living (that will come next). It's what we call a POV statement, which contains a description, a need and a unique insight. Example: Hi! I'm Ben. I'm a creative third culture kid who loves technology and social justice, but whose first love is writing. I need a way to stay creative, maintain work/life balance, and do meaningful work that also allows me to live a comfortable life.

I believe the world is [no more than three regular Post-Its]
Three things you think are happening in the world. This is a way to express your beliefs. Example: Experiencing unprecedented inequality that is harming every aspect of society; In the early stages of an internet-driven social revolution; Moving beyond arbitrary national borders. How would you test if these trends are real?

I make money by [halfsheet Post-It]
Here's where you get to describe what you do for a living. Example: Providing consulting and support to mission-driven early-stage technology companies and mission-driven incumbent industries, both from a strategic and technological perspective. Sometimes I write code but it isn't my primary value.

My employers are [no more than three halfsheet Post-Its]
Who typically gives you money? As a category, not a specific company. Example: Early-stage, mission-driven investment firms who need an ex-founder with both technological and analytical skills to help source and select their investments; early stage startups who need a manager with an open web or business strategy background; "legacy" or "incumbent" large organizations like universities and media companies who need an advisor with technical or startup experience.

My key work skills are [no more than three regular Post-Its]
Which skills are core drivers of your employment? Example: Full-stack web development and technical architecture; Trained in design thinking facilitation and processes for both ventures and products; Experienced startup founder who has lived every mistake.

My key personal attributes are [no more than three regular Post-Its]
What aspects of your personality or the way you act are you proud of? What do you think other people respect you for? Example: Bias towards kindness rather than personal enrichment; Writing and storytelling; Collaborative rather than competitive.

My key lifestyle risks are [three regular Post-Its]
What are the things that keep you up at night about your lifestyle? Specifically, in the following three areas:
Happiness: Risks to your ability to be a happy human (this is different for everybody)
Viability: Your financial risks
Feasibility: Risks to your ability to achieve the lifestyle you want with the time, geographies, and resources at your disposal
Example: Happiness: I don't have time to spend being social or taking care of my health; Viability: I need a minimum base salary of around $120,000 to cover my costs in the San Francisco Bay Area; Feasibility: It might not be possible to maintain the quality of life I enjoyed in Europe without a significantly higher salary.

My key work risks are [three regular Post-Its]
What are the things that keep you up at night about work or your ability to find it? Specifically, in the following three areas:
Workability: Risks to your ability to have a satisfying work life (this is different for everybody)
Viability: Risks to your value in the employment marketplace
Feasibility: Process or ecosystem risks to finding the employment you want with the time and resources realistically at your disposal
Example: Workability: I am seen as largely a developer; Viability: I don't have experience working in a large tech giant in a management role, or equivalent; Feasibility: Most jobs are filled within a network and I'm not sure I have the connections I need to get to the jobs I might want.

Risks parking lot
As you figure out what your key risks are in each area, you should keep track of the ones that don't quite make the cut. It's useful to understand what they are, but as your life plan evolves over time, you might want to swap them out and bring them back into the key risks area.

Above all, to be successful, I need to [three regular Post-Its]
The definition of success varies for everyone. Some people are money-driven; some people prioritize other goals. What are the things you need to achieve to be successful? Specifically, in the following three categories:
Happiness: Your ability to be a happy human with the work and personal lives you want
Viability: Your ability to earn money and cover your costs
Feasibility: Your ability to practically achieve the things listed in happiness and viability with the time and resources realistically at your disposal
Example: Happiness: Regularly spend time with inspiring, mission-driven, kind people at work and in my life while taking care of my health; Viability: Get a job that comfortably covers my San Francisco Bay Area costs on a recurring basis; Feasibility: Gain marketable skills (MBA? CPA?) to add to my existing technology and business experience.

My key next steps are [three regular Post-Its]
This is what everything has culminated in. Based on the risks and the primary needs expressed above, what are the concrete next steps in the three key areas? Spending more time doing research or thinking doesn't count. It's got to be an action you can take immediately. Again, these are in the following categories:
Happiness: Your ability to be a happy human with the work and personal lives you want
Viability: Your ability to earn money and cover your costs
Feasibility: Your ability to practically achieve the things listed in happiness and viability with the time and resources realistically at your disposal
Example: Happiness: Set clearer boundaries and set aside time to spend with friends and exercising. Viability: Identify and remove any unnecessary recurring expenses. Feasibility: Sign up to do some pre-CPA accounting courses, to allow you to better analyze startup businesses.

 

 

Finally, there's one more thing: get feedback. Once you've put this together, find someone you trust - or better yet, multiple people - and talk them through it. The best possible scenario is if a few friends all do this for themselves, give each other feedback, and then iterate.

Good luck! And please give me feedback. It would be fun to turn this into a framework for solidifying life decisions and more concretely describing the choices and challenges you have, in order to make them easier to deal with, one task at a time.

· Posts · Share this post

 

Reflecting on a hard left turn career change

Over the last eighteen months I’ve helped source, interview, select and invest in 24 startups. As Director of Investments for Matter Ventures in San Francisco, twelve of those were my direct responsibility; with the other twelve, I supported my counterpart Josh Lucido in New York City.

Matter is - and continues to be - the best thing I’ve ever done.

The learning curve was immediate and intense, but I had been advising startups and analyzing the space for well over a decade. I had co-founded two, and was the first employee at a third. I’d also run a few things that weren’t technically startups but could have been: an online magazine in 1994 that found itself on the cover CD for “real” paper magazines, and a social media site that was getting a million pageviews a day in 2002. As an engineer, obviously I’ve built a lot of software - but more than that, I’ve spent every day of my career thinking about, researching, executing and advising on strategy. I love technology, and I love thinking about how to make it better.

But of course, technology isn’t worth anything unless it’s helping someone. The best technology pushes society forward and empowers people with new opportunities. Building new tech for yourself is fun, but it’s not a profession. And it’s just not very satisfying - at least, for me.

It’s been a privilege to get to know hundreds of people who are building ventures to solve real problems for real people. I invested in some, and wished I had room to invest in others. I gave feedback to many more. Most importantly, I was there on the ground with the ventures we did invest in, helping with everything from fundraising strategy to database normalization. Rather than just writing code, or working on financial documentation, it’s felt like I’ve been able to use every facet of my skills to do this work. It feels good, and meaningful. And although I think it takes years to truly ease into this kind of work, I’m proud of the work I’ve done.

I doubt I’ll ever be an engineer again - at least, not solely. (My role at Matter is my first job since being a barista in college that hasn’t involved writing code in some capacity, but I’ve actually only ever had two pure engineering roles.) I’m certain that I will found my own venture again, and use what I’ve learned to create something that stands the test of time. But for now, I’m delighted to play a supporting role. Investing turns out to be one of the most satisfying things I’ve ever done (for all kinds of reasons that don’t involve money), and whatever happens in my career, I want to keep doing it.



Building trust in media through financial transparency: it's time to declare LPs

One simple thing that media entities could do to improve trust is to publicly declare exactly who finances them, and for those financiers in turn to declare their backers. This would hold true for privately-owned companies, trusts, crowdfunded publications, and new kinds of media companies operating on the blockchain and funded through ICOs.

VC-funded media companies - like Facebook, which is a media company - would declare which entities own how much of them. As it happens, Facebook is publicly traded, so it must already do this. But it's rare for VC firms to disclose their Limited Partners (the people and organizations who put money into them), so we have no idea who might ultimately have an interest in the firms on a private media company's cap table.

This is important because LPs decide which funds to invest in based on their goals and strategy. It's clear that an LP's financial interests may be represented through a fund that they invest in, but it's equally plausible for their political and other strategic interests to be represented as well.

To be specific, we know that socially-minded LPs invest in double bottom line impact funds that strive to make measurable societal change as well as a financial return. It seems reasonable, then, that other LPs might seek to promote very different agendas. In the current climate, imagine what a Kremlin-connected Russian oligarch might want to achieve as an LP in a US fund. Or a multinational oil company, the NRA, or In-Q-Tel.

The same goes for crowdfunded ventures. What happens if a contributor to a blockchain-powered media startup is the Chinese government, for example? Or organized criminals? It would be hard to tell from the blockchain itself, but understanding who made significant contributions to a publisher is an important part of assessing its trustworthiness.
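To make the pseudonymity problem concrete, here's a minimal sketch in Python. The ledger data and wallet addresses are entirely hypothetical; on a real chain you would read contributions from transaction logs. The point it illustrates: all a public ledger recovers is an address and an amount, never who controls the address.

```python
from collections import defaultdict

# Hypothetical, hard-coded ledger of token-sale contributions.
# Each entry is (pseudonymous wallet address, amount contributed).
contributions = [
    ("0x9f3a", 50_000),
    ("0x4b21", 1_200),
    ("0x9f3a", 75_000),
    ("0x77de", 300),
]

def significant_contributors(ledger, threshold):
    """Total contributions per address; return those at or above threshold."""
    totals = defaultdict(int)
    for address, amount in ledger:
        totals[address] += amount
    return {addr: amt for addr, amt in totals.items() if amt >= threshold}

# We can flag which addresses made significant contributions...
print(significant_contributors(contributions, 10_000))
# ...but the result is still just addresses: identity stays opaque,
# which is exactly the transparency gap described above.
```

Aggregating by address is easy; linking an address to a government, a company, or a criminal network requires off-chain disclosure, which is why voluntary declaration matters.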

While it's fairly easy to figure out which venture firms have invested in a media company, those same firms usually have a duty of privacy to their LPs, so it's rare that we get to know who they are. We know that media is the bedrock of democracy. In order to determine who is shaping the stories we hear that inform how we act as an electorate, I think we need to start following the money - and wearing our influences on our sleeves.

(For what it's worth, Matter Ventures, the media startup accelerator that I work at, publicly declares its partners on its homepage.)


Email me: ben@werd.io

Signal me: benwerd.01

Werd I/O © Ben Werdmuller. The text (without images) of this site is licensed under CC BY-NC-SA 4.0.