With RAD, podcasters can finally learn who's listening

NPR announced Remote Audio Data today: a technology standard for sending podcast audience analytics back to their publishers. Podcasting is one of the few truly decentralized publishing ecosystems left on the web, and it's a relief to see that this is as decentralized as it should be.

Moreover, it's exactly the role public media should be playing: they convened a group of interested parties and created an underlying open source layer that benefits everyone. One of the major issues in the podcast ecosystem is that nobody has good data about who's actually listening; most people use Apple's stats, look at their download numbers, and make inferences. This will change the game - and in a way that directly benefits podcast publishers rather than any single central gatekeeper.

What's not listed in the spec is a standard way to disclose to the listener that their analytics are being shared. This may fall afoul of GDPR and similar legislation if not handled properly; to be honest, I'd hope that any ethical podcast player would ask permission to send this information, giving me the opportunity to tell it not to. Still, at least in the five minutes that everyone isn't sending their listening data to be processed by Google Analytics, this is an order of magnitude better than using Apple as a clearinghouse.

Here's a quick technical overview of how it works:

While MP3 files mostly contain audio, they can also contain something called an ID3 tag for human-readable information like song title, album name, artist, and genre. RAD adds a JSON-encoded remoteAudioData field, which in turn contains two arrays: trackingUrls and events. It can also list a custom podcastId and episodeId. Events have an optional label and mandatory eventTime, expressed as hh:mm:ss.sss, and can have any number of other keys and values.

The example data from the spec looks like this:

{
 "remoteAudioData": {
   "podcastId":"510298",
   "episodeId":"497679856",
   "trackingUrls": [
     "https://tracking.publisher1.org/remote_audio_data",
     "https://tracking.publisher2.org/remote_audio_data",
     "https://tracking.publisherN.org/remote_audio_data"
   ],
   "events": [
     {
       "eventTime":"00:00:00.000",
       "label":"podcastDownload",
       "spId":"0",
       "creativeId":"0",
       "adPosition":"0",
       "eventNum":"0"
     },
     {
       "eventTime":"00:00:05.000",
       "label":"podcastStart",
       "spId":"0",
       "creativeId":"0",
       "adPosition":"0",
       "eventNum":"1"
     },
     {
       "eventTime":"00:05:00.000",
       "label":"breakStart",
       "spId":"123456",
       "creativeId":"1234567",
       "adPosition":"1",
       "eventNum":"2"
     },
     {
       "eventTime":"00:05:15.000",
       "label":"breakEnd",
       "spId":"123456",
       "creativeId":"1234567",
       "adPosition":"1",
       "eventNum":"3"
     }
   ]
 }
}
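To make that structure concrete, here's a minimal Python sketch - mine, not part of the spec or its SDKs - that parses a remoteAudioData blob and works out which events a player has passed at a given playback position:

```python
import json

# A trimmed-down remoteAudioData payload, with field names taken
# from the spec example above.
RAD_JSON = """
{
  "remoteAudioData": {
    "podcastId": "510298",
    "episodeId": "497679856",
    "trackingUrls": ["https://tracking.publisher1.org/remote_audio_data"],
    "events": [
      {"eventTime": "00:00:00.000", "label": "podcastDownload", "eventNum": "0"},
      {"eventTime": "00:00:05.000", "label": "podcastStart", "eventNum": "1"},
      {"eventTime": "00:05:00.000", "label": "breakStart", "eventNum": "2"}
    ]
  }
}
"""

def parse_event_time(hhmmss: str) -> float:
    """Convert an hh:mm:ss.sss eventTime into seconds."""
    hours, minutes, seconds = hhmmss.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def events_fired(rad: dict, position_seconds: float) -> list:
    """Return the labels of events whose eventTime has been reached."""
    events = rad["remoteAudioData"]["events"]
    return [e.get("label") for e in events
            if parse_event_time(e["eventTime"]) <= position_seconds]

rad = json.loads(RAD_JSON)
# By the ten-second mark, the download and start events have both fired.
print(events_fired(rad, 10.0))
```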

The podcast player sends a POST request to the URLs listed in trackingUrls, wrapped in a session ID and optionally containing the episodeId and podcastId. By default the player should send this at least once per hour, although the MP3 file can specify a different interval by including a submissionInterval parameter. The intention is that the podcast player stores events and can send them asynchronously, because podcasts are often listened to when there's no available internet connection. After a default of two weeks without being sent, events are discarded.

Here's an example JSON payload sent to a tracking URL, from the spec:

{
 "audioSessions": [
   {
     "podcastId": "510313",
     "episodeId": "525083696",
     "sessionId": "A489C3AD-04AA-4B5F-8289-4D3D2CFE4CFB",
     "events": [
       {
         "sponsorId": "0",
         "creativeId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "label": "podcastDownload",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:07+04:00"
       },
       {
         "sponsorId": "0",
         "creativeId": "0",
         "eventTime": "00:00:05.000",
         "adPosition": "0",
         "label": "podcastStart",
         "eventNum": "1",
         "timestamp": "2018-10-24T11:23:08+04:00"
       },
       {
         "sponsorId": "111128",
         "eventTime": "00:00:05.000",
         "adPosition": "1",
         "label": "breakStart",
         "creativeId": "1111132",
         "eventNum": "2",
         "timestamp": "2018-10-24T11:23:09+04:00"
       },
       {
         "label": "breakEnd",
         "sponsorId": "111128",
         "eventTime": "00:00:05.000",
         "adPosition": "1",
         "creativeId": "1111132",
         "eventNum": "3",
         "timestamp": "2018-10-24T11:23:10+04:00"
        }
     ]
   },
   {
     "podcastId": "510314",
     "episodeId": "525083697",
     "sessionId": "778A4569-4B06-469B-8686-519C3B43C31F",
     "events": [
       {
         "sponsorId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "creativeId": "0",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:11+04:00"
       }
     ]
    },
   {
     "podcastId": "510315",
     "episodeId": "525083698",
     "sessionId": "F825BE2B-9759-438A-A67E-9C2D54874B4F",
     "events": [
       {
         "sponsorId": "0",
         "eventTime": "00:00:00.000",
         "adPosition": "0",
         "label": "podcastDownload",
         "creativeId": "0",
         "eventNum": "0",
         "timestamp": "2018-10-24T11:23:12+04:00"
       }
     ]
   }
 ]
}
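The batching and expiry rules are easy to sketch, too. This is an illustrative Python fragment - the function and variable names are my own, not from the RAD SDKs - that assembles an audioSessions report while discarding events older than the two-week default:

```python
import json
from datetime import datetime, timedelta, timezone

EXPIRY = timedelta(weeks=2)  # spec default: discard unsent events after two weeks

def build_report(session_id, podcast_id, episode_id, stored_events, now=None):
    """Build an audioSessions payload, dropping events older than EXPIRY.

    stored_events: dicts with an ISO-8601 'timestamp' key, recorded when
    the player observed each event during playback.
    """
    now = now or datetime.now(timezone.utc)
    fresh = [e for e in stored_events
             if now - datetime.fromisoformat(e["timestamp"]) < EXPIRY]
    return {
        "audioSessions": [{
            "podcastId": podcast_id,
            "episodeId": episode_id,
            "sessionId": session_id,
            "events": fresh,
        }]
    }

events = [
    {"label": "podcastStart", "eventNum": "1",
     "timestamp": "2018-10-24T11:23:08+04:00"},
    {"label": "breakStart", "eventNum": "2",
     "timestamp": "2018-11-20T09:00:00+04:00"},
]
report = build_report("A489C3AD-04AA-4B5F-8289-4D3D2CFE4CFB",
                      "510313", "525083696", events,
                      now=datetime(2018, 11, 21, tzinfo=timezone.utc))
# The October event is past the two-week window and gets dropped.
body = json.dumps(report)
print(len(report["audioSessions"][0]["events"]))
```

A real player would then POST that JSON body to each URL in trackingUrls and clear the stored events once the submission succeeds.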

It's a very simple, smart solution. There's more information at the RAD homepage, and mobile developers can grab Android or iOS SDKs on GitHub.


Persuading people to use ethical tech

I've been in the business of getting people to use ideologically driven technology for most of my career (with one or two exceptions). Leaving out the less ideologically driven positions, it goes something like this:

Elgg: We needed to convince people that, if they're going to run an online community, they should use one that allows them to store their own data anywhere, embraces open standards, and can run in any web browser (which, at the height of Internet Explorer's reign, was a real consideration).

Latakoo: In a world where journalism is experiencing severe budget cuts, we needed to persuade newsrooms that they shouldn't buy technology with astronomically expensive licenses and then literally build it into the architecture of their buildings (when I first discovered that this was happening, it took a while for my jaw to return to the un-dropped position).

Known: We needed to convince people that, if they're going to run an online community-- oh, you get the idea.

Matter: We needed to convince investors that they should put their money into startups that were designed to have a positive social mission as well as perform well financially - and that media was a sound sector to put money into to begin with.

Unlock: We need to persuade people that they should sell their work online through an open platform with no middleman, rather than a traditional payment processor or gateway.

That's a lot of ice skating uphill!

So how do you go about selling these ideas?

One of the most common ideas I've heard from other startup founders is "educating the market". If people only knew how important web standards were, or if they only knew more about privacy, or about identity, they would absolutely jump on board this better solution we've made for them in droves. We know what's best for them; once they're smarter, they'll know what's best for them too - and it's us!

Needless to say, it rarely works.

The truth comes down to this: people have stuff to do. Everyone has their own motivations and needs, and they probably don't have time to think about the issues that you hold dear to your heart. Your needs - your worries about how technology is built and used, in this case - are not their needs. And the only way to persuade people to use a product is for it to meet their deeply held, unmet needs.

If you have limited resources, you're probably not going to pull the market to you. But if you understand the space well and understand people well, you can make a strong hypothesis about whether the market is going to come to you at some point. If you think the market is going to want what you're building two or three years out, and you can demonstrate why this is the case (i.e., it's a hypothesis founded on research, not just a hunch) - then that's a good time to start working on a product.

Which is why, while many of us spent decades crowing about the need for web products that don't spy on you, it's taken the aftermath of the 2016 election for many people to come around. Most people aren't there yet, but the market is changing, and tech companies will change their policies to match. The era of tracking won't come to an end because of activist developers like me - it'll come to an end because we failed, and Facebook's ludicrous policies (which, to be clear, aren't really different to the policies of many tech companies) reached their damaging logical conclusion, allowing everyone to see the full implications.

So if an ideology-first approach usually fails, how did we persuade people?

The truth is, it wasn't about the ideology at all. Elgg worked because people needed to customize community spaces and we provided the only platform at the time that would let them. Latakoo worked because it allowed journalists to send video footage faster and more conveniently than any other solution. Known didn't work because we allowed the ideology to become its selling point, when we should have concentrated on allowing people to build cross-device, multi-media communities quickly and easily (the good news is that because it's open source, there's still time for it). Unlock will work if it's the easiest and most profitable way for people to make money from their work online.

You can (and should) build a tool ethically; unless you're building for a small, niche audience, you can't make ethics be the whole tool. Having deep knowledge of, and caring deeply about, the platform doesn't absolve you from the core things you need to do when you're building any product. Which, first and foremost, is this: make something that people want. Scratch their itch, not yours. Know exactly who you're building for. And make them the ultimate referee of what your product is.


The Facebook emails

I still need to read the documents unsealed by British Parliament for myself, but they seem pretty revealing.

From the Parliamentary summary itself:

Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.

[...] It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.

[...] Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.

In the New York Times:

Emails and other internal Facebook documents released by a British parliamentary committee on Wednesday show how the social media giant gave favored companies like Airbnb, Lyft and Netflix special access to users’ data.

In Forbes:

In one 2013 email from Facebook's director of platform partnerships Konstantinos Papamiltiadis, the executive tells staff that “apps that don’t spend” will have their permissions revoked.

“Communicate to the rest that they need to spend on NEKO $250k a year to maintain access to the data,” he wrote. NEKO is an acronym used at Facebook to describe app install ads, according to The Wall Street Journal.

Meanwhile, the email cache reveals that Facebook shut down Vine's access to the Facebook friends API on the day it was released. Justin Osofsky, VP for Global Operations and Corporate Development, wrote Mark Zuckerberg at the time:

Twitter launched Vine today which lets you shoot multiple short video segments to make one single, 6-second video. As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision.

Zuckerberg's reply:

Yup, go for it.

Purely coincidentally, I'm sure, Facebook changed this policy yesterday. As TechCrunch reported:

Facebook will now freely allow developers to build competitors to its features upon its own platform. Today Facebook announced it will drop Platform Policy section 4.1, which stipulates “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

That policy felt pretty disingenuous given how aggressively Facebook has replicated everyone else’s core functionality, from Snapchat to Twitter and beyond. Facebook had previously enforced the policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.

It will be interesting to follow the repercussions of this release. My hope is that we'll finally see some action from the US government in the new year. In the meantime, it's ludicrous that it took action from the UK - and legislation from the EU - to bring some of this to light.

 


Facebook's monopoly is harming consumers

I was asked last week about the ethics of social networks: what would need to change to create a more ethical ecosystem.

Targeted display advertising, of course, has a huge part to play. Facebook created a system designed to capture the attention of its users so that they could interact with advertising that was tailored for them in order to manipulate them into an action or position. People buy advertising on Facebook to drive sales, but they also buy it to manufacture brand awareness and loyalty - and to manipulate users into adopting a political position. Facebook's machine was not originally built to manipulate, but its business model ratified its sociopathy.

The persuasive effect of its targeted advertising and engagement algorithms would have been diminished, however, if Facebook wasn't completely ubiquitous. In Q3 2018, it had 2.27 billion monthly active users. For context, there will be an estimated 3.2 billion people online by the end of the year: Facebook's monthly active users represent 71% of the internet. In America, it's the site most commonly used to discover news, or in other words, to learn about the world.

This is a dangerous responsibility to place in the hands of a single corporation with no meaningful competition. Yes, other social networks exist, but each serves a different purpose. Twitter is a kind of town hall zeitgeist Pandora's box full of wailing souls (sorry, a place that aims to "give everyone the power to create and share ideas and information instantly, without barriers"); Instagram (which, of course, is Facebook again) is the Vogue edition of everybody's life; Snapchat rests on its "mom don't read this" ephemerality. Facebook is designed, as its homepage used to proudly proclaim, to be a social utility that "reinforces connections to the people around you". Over time, it aims to make those social connections dependent on its service.

In a world where Facebook is a core part of life for billions of people, its policy and product decisions have an outsized effect on how its users see the world. Those decisions can certainly swing elections, but they have a measurable effect on public sentiment in other areas, too. It's not so much that the platform is reflective of the global culture; because that culture is shared and discovered on the platform, the culture reflects it. A bad actor with enough time and money can construct a viral message - or suite of messages - that can sweep across billions of people in less than a day. Facebook itself could engage in social engineering, with almost no oversight. There are few barriers; there is no real vaccine beyond a vain hope that Facebook will do the right thing.

But imagine a world where there isn't one Facebook, and we all participate in many social communities across many different platforms. Rather than one mega filter bubble, we engage with lots of bubbles, loosely joined - all controlled by a different entity, potentially in a different culture, with different priorities. In this world, the actions of a single one of these bubbles become less important. If each one is making different policy and product decisions, and is a logically separate network with its own codebase, userbase, and way of working, it becomes significantly harder for anyone to make a message ubiquitous. If each one has a different feed algorithm, while a malicious campaign could infiltrate one network, it would be much harder for it to infiltrate them all. In a healthy market, even discovering all the different communities that a user participates in could become a difficult task.

Arguably, this would require a different funding model to become the norm in Silicon Valley. Venture capital has enabled many businesses to get off the ground with the capitalization they need; it is not always the bad guy. But it also inherently encourages startups to aim towards monopoly. Venture capital funds want their investments to grow at an exponential rate. VCs want to return 3X the value of each fund inside 10 years (typically) - and because most startups fail, they're looking to invest in businesses that will return around 37X their original investment. That usually looks like owning a particular market or market segment, and that's what tends to find its way into pitch decks. "This is a $100 billion market." Subtext: "we have the potential to capture all that". In turn, targeted advertising became popular as a way for startups to make revenue because asking customers for money creates sign-up friction and reduces growth.

So accidentally, venture capital creates Facebook-style businesses that aim to grow as big as possible without regard to the social cost. It does not easily support marketplaces with lots of different service providers competing with each other for the same market over a sustained period. And businesses in Silicon Valley have a hard time getting up and running without it, because the cost of living here is so incredibly expensive. Because of the sheer density of people who have experience building technology businesses here, as well as high-end technical talent and a general culture of helpfulness, Silicon Valley is still the best place to start this kind of business. So, VC it is, until we find some other instrument to viably fund tech companies. (Some obvious contenders are out: ICOs have rightly been slapped down by the SEC, and revenue sharing investment only really works for very small amounts of investment.)

Okay, so how about we just break Facebook up, and set a precedent for future businesses, just like we did with Microsoft in the nineties? After all, its impact is even more catastrophic than Microsoft's, and its actions are even more brazenly monopolistic. Everything else aside, consider its use of a VPN app it acquired to identify apps whose usage was threatening Facebook's, so that it could proactively acquire them and shut them down.

American anti-trust law has been set ostensibly to protect consumers, rather than competition. As Wired reported a few years ago:

Under current U.S. law, being a "monopoly" is not illegal; nor is trying to best one’s competitors through lower prices, better customer service, greater efficiency, or more rapid innovation. Consumers benefit when Apple disrupts the market with iPhones and iPads, even if this means RIM sells fewer BlackBerries or that Microsoft licenses fewer desktop operating systems. Antitrust law only springs into action against a monopoly when it destroys the ability of another company to enter the market and compete.

The key question, of course, is whether a particular monopoly is harming consumers – or merely harming its competitors for the benefit of those consumers.

With any lens except the most superficial, Facebook fails this test. Yes, its product is free and available to anyone. But we pay with our data and privacy - and ultimately, with our democracy. Facebook's dominance has adversely affected entire industries, swung elections, and fuelled genocides. In the latter case, this hasn't been in the United States - at least, not so far - and perhaps this is one of the reasons why it's escaped serious repercussions. Its effects have been felt in different ways all over the world, and various governments have had to deal with them piecemeal. There is no jurisdiction big enough to cover its full impact. Facebook is, in some ways, more powerful than the government of any nation.

There's one thought that gives me hope. Anyone who has watched Facebook closely knows that it didn't grow through brilliant strategy and genius maneuvering. Its growth curve closely maps to the growth of the internet; it happened to be in the right place at the right time, and managed to not screw it up enough to drive people away. As people joined the internet for the first time, they needed a place to go, and Facebook was it. (The same is true of Instagram, which closely maps to the growth in smartphone camera usage.) As the internet became saturated in developed nations, Facebook's growth curve slowed, and it now needs to bring more people online in developing nations if it wants to continue dominating new markets.

That means two things: Facebook will almost inevitably stagnate, and it is possible for it to be outmaneuvered.

In particular, as new computing paradigms take hold - smart speakers, ambient computing, other devices beyond laptops and smartphones - another platform, or set of platforms, can more easily take its place. When this change inevitably happens, it is our responsibility to find a way to replace it ethically, instead of with yet another monopolistic gatekeeper.

There is work to do. It won't be easy, and the outcome is far from inevitable. But the internet is no longer about code being slung from dorm rooms and garages. It's about democracy, it's deadly serious, and it needs to be treated as such.

 

Photo by JD Lasica, shared on Wikipedia under a CC BY 2.0 license.


The unexpected question I ask myself every year

Okay, but seriously, how can I get to work on Doctor Who?

It's a dumb question, but my love for this show runs deep - I've been watching it since I was five years old at least. As a non-aggressive, third culture kid who couldn't fit in no matter how he tried, growing up in Britain in the eighties and nineties, the idea of an alien pacifist who solved problems through intelligence, kindness and empathy appealed to me. It still does. It's brilliant. The best show on TV, by far.

I love it. I love watching it. I love reading the books. I dream complete adventures, sometimes. For real.

I don't need to work on it.

Oh, but I do.

I want to play in that universe. I want to take my knowledge of its 55 years of television, and my deep feeling for the character and the whole ethos of the production, and help to build that world. I want to make things that resonate for children in the same way it resonated for me.

It's not about Daleks or Cybermen or reversing the polarity of the neutron flow. It's about the fundamental humanity of telling stories that teach empathy and peace. It's about an action show where the heroes wield understanding and intuition instead of weapons. It's about an institution that genuinely transcends time and space, after 55 years, in a way that its original creators could never have understood. It's a through line to my life and how I see the world.

It's obviously a pipe dream. Still, every year, I ask myself: "am I any closer to working on Doctor Who?"

Every year, the answer is "no".

It's not like I've been working hard to take my life in that direction. I write, for sure; I've had science fiction stories published. But I work in technology - at the intersection of media and tech, for sure, but still on the side of building software and businesses. There was a time when the show was cast aside, and enthusiasts were welcomed to participate - if not with open arms, then with a markedly lower bar than today, when it's one of the hottest shows on TV.

Someone I went to school with did end up working on the show; her dad, Michael Pickwoad, was the production designer for a time. He worked on TARDIS interiors for Matt Smith and Peter Capaldi, among other things. His daughter worked on it with him for a little bit, and was even name-checked in one episode, when her soul was sucked into the internet through the wifi.

I felt a pang of envy for a moment, but mostly I thought it was cool.

What would you even need to do to work on the show? Should I be focusing more on writing fiction? Should I try and write for something else first? Could I maybe find my way into an advisory position, helping the writers to better understand Silicon Valley? (Because, listen, Kerblam! was a good episode, but the ending ruined the parable. Did Amazon ask you to change it?) I don't understand how this industry works; I don't know where to even begin. The show isn't really even for me, anymore; I'm not the six year old watching Peter Davison on BBC1 while I sit cross-legged on the floor. I'm a grown-ass, middle-aged man. And who am I to think I can even stand shoulder to shoulder with the people who do this incredible work? People like Malorie Blackman and Vinay Patel, who wrote this year's standout stories?

Like I said: it's a pipe dream. I'm fine. I don't need to be a part of this. I can just enjoy it. I can.

But.

The year is closing out. We're all preparing to turn over new leaves. A new calendar on the wall means a fresh start. There's so much to look forward to, it feels like the world is finally turning a corner, and I'm working on amazing things.

Just ... look. I just need to ask one question. I can't stop myself, as stupid as it is.

Am I any closer to working on Doctor Who?


Asking permission to be heard is an idea that needs to die

I remember reading about Tavi Gevinson when she was just starting out; a wunderkind blogger. Now her media company is winding down - but at least it's winding down on her terms.

Her goodbye letter is beautiful:

In one way, this is not my decision, because digital media has become an increasingly difficult business, and Rookie in its current form is no longer financially sustainable. And in another way, it is my decision—to not do the things that might make it financially sustainable, like selling it to new owners, taking money from investors, or asking readers for donations or subscriptions. And in yet another way, it doesn’t feel like I’m deciding not to do all that, because I have explored all of these options, and am unable to proceed with any of them.

This was what I wanted to help solve. It was my job, but it went far further than that. I was up late at night while writers turned entrepreneurs cried on my shoulder; sometimes, I cried with them. I felt every setback and every problem, always wondering if there was more that I could do. And I did this as just one part of a bigger team, which in turn was just one part of a bigger movement, all understanding the importance of media, all invested in new ways to pay for it.

It is so far from being solved.

And yes, we're talking about a fairly privileged fashion blogger from New York City. But we're also not. This experience is wider and deeper than just this one publication. And even when we are focused back in on Rookie, Gevinson's unique perspective, alongside the unique perspectives of all the previously-unpublished young women writers she supported, is worth preserving.

I want these voices - of women, of people of color, of anyone with a new perspective or an insight or just words that make you feel anything at all - to be sustainable. The world needs them. Our tapestry of culture is better for their presence. Ensuring we have a thriving media has never just been about direct journalism and reporting the news (although those things are vitally important); if we accept that media is the connective tissue of society, the way we learn about the wider world, we must also accept that a vibrancy of diverse voices is central to that understanding. Not just in terms of who gets reported on and who stories are about, but who gets to make media to begin with.

The most compelling business model for media companies in America is to be propped up by a billionaire. Overwhelmingly, they lose money, and depend on wealthy benefactors to survive - sometimes through acquisition, and other times simply through donation. There are other, incrementally devolved versions of this: in a patronage model, publications ask rich people to pay so that everyone can read. Subscription models create gated communities of information. Kickstarters ask wealthy people to benevolently pay for something to come into existence. In all of these models, young media companies, outlets for voices, must in some way contort themselves to be appealing to rich people, who are predominantly white, straight men, even if those people are not its core audience.

So, then there's this. This quote is so real, and it viscerally conjures up so many feelings for me:

One woman venture capitalist told us, after hearing my very nervous pitch, “I hate to say this because I hate that it’s true, but men who come in here pitch the company they’re going to build, while women pitch the company they’ve already built.” The men could sound delusional, but they could also sound visionary; women felt the need to show their work, to prove themselves.

Women have been told "no" so many times that they don't dare to discuss the true vision for what they want to build. I've certainly been in funding discussions where the true vision was obvious but unspoken; allowing it to flourish required creating a very safe space, and one that I'm inherently less qualified to provide. An ambition unspoken is not an ambition unconceived.

Public media, as with public art, has a measurable impact on everybody's quality of life. There should be public money available for both, as there is in other countries. The reliance on wealthy individuals to graciously provide is perverse. But here we are.

Given the stranglehold that rich people and their agents have on culture, we should empower more diverse people to be able to deploy that money where it's needed. Venture capital firms should aim to have more diverse partners, to make safer spaces for more diverse founders - not just for social reasons, but because immigrants founded over half the billion-dollar startups in the world and companies run by women perform better.

We can't, though, accept this status quo. Ensuring that more funding goes towards supporting diverse voices means overhauling the system of funding itself. That means removing all of the funding gatekeepers. Why are they there to begin with?

My ideal would be an independent pool of money that predominantly comes from public funds, but I recognize that this is a very European-style idea that is unlikely to gain traction in the US.

Ultimately, though, it's a half-measure. In a world where decisions over whose voices get heard and whose work gets distributed are tied to wealth, the solution is simply this: the wealth needs to be more evenly distributed. So many of our diversity problems exist because society is unequal. Income inequality continues to grow in the United States, as it does in many other places - and of course, the people who hold an increasing majority of the world's income are white men.

A world where everyone has money in their pocket to support what they care about is one where many different types of voices are supported. That, it seems to me, is the real problem to solve. We have to remove the stranglehold of a very small number of rich people on all of society; a stranglehold that was originally established through racism, colonialism, and oppression. We're building a world where everyone else, and every endeavor that they do not directly control, must go back to them to ask permission. Everything else - problems in our political system, problems with media business models, productivity, economic diversity, crime - can be drawn back to this. We urgently need, coherently, together, and in a way that embraces our ideological diversity and differences of contexts and backgrounds, to find a way to empower everyone to live.

 

Photo by Rita Morais on Unsplash

· Posts · Share this post

 

Gefilte bubbles

My nuclear family - the one I grew up with - has four different accents. My mother's is somewhere between New England and California; my dad's is Dutch with some Swiss German and English inflections; my sister has traveled further down the road towards a Bay Area accent; and mine is just softened enough that most people think I'm from New Zealand. Thanksgiving, like Christmas, is for us a wholly appropriated holiday: not about genocide or holiness, respectively, but simply about being together as a family. Like magpies, we've taken the pieces that resonate with us, and left the rest.

Technically, I'm a Third Culture Kid: "persons raised in a culture other than their parents' or the culture of the country named on their passport (where they are legally considered native) for a significant part of their early development years". I'm not British, but I grew up there; I love it, but I also did not assimilate.

I've never felt any particular belonging to the countries on my passports, either, which turns out to be a common characteristic among TCKs. Instead, our nationality and religion is found among shared values and the relationships we build. I've written about this before, although back then I didn't fully understand the meta-tribe to which I belonged. It's also part of the Jewish experience, and the experience of any group of people who has been forcibly moved throughout history. Yes, I'm a product of globalization, but that doesn't mean I'm also a product of privilege; migration for many, including my ancestors, has not been optional.

I was well into my thirties before I understood that my experience of culture was radically different to many other peoples'. It hadn't occurred to me that some people simply inherit norms: the practices of their communities become their practices, too; the way things were done become the way things are and will be done. If you live in this sort of cultural filter bubble, challenges to those well-established norms are threatening. We know that people prefer to consume news and information that confirms their existing beliefs; that's why misinformation can be so effective. The same confirmation bias also applies to how people choose to build relationships of all kinds with one another. It's at the heart of xenophobia and racism, at its most overt, but it also manifests in subtler ways.

I lost count of the number of people who told me I should give up my nationalities and become British, or who made fun of my name, or took issue with my lack of understanding of shared cultural norms. Food is just one example of something mundane that can be incredibly contentious: the dishes from your community carry the weight of love and history. When someone presents as being from your community - no visible differences; more or less the same accent, even if they mispronounce a word here and there - but doesn't have any of that shared understanding, it simply doesn't compute.

I'm fascinated by this survey of Third Culture Kid marriages. The TCK blogger Third Culture Mama received 130 responses from TCKs and their spouses, in an effort to discover how cross-cultural relationships can thrive. It's the first time I've seen anything like this, and I found some of the qualitative responses to be unexpectedly comforting. For example:

When multiple cultures are involved it’s easy to idealize your own culture and how you were brought up. But if you can set it aside to listen to another point of view and another way of doing things, you realize there isn’t only one right way. As a couple you need to decide to say “this is how WE do things. This is what WE believe.” Not “this is what she did. Or this is how my family did it growing up.” There is great validity in understanding both of your pasts and how you were raised. But you need to move on from there and choose a path that you go down together. Doing this takes humility, love, and a desire to do right more than to be right. Listen to one another.

Particularly in startup-land, but in the media in general, there is a glut of how-to articles that assume what worked for the author will work for you. It's a great idea to read other peoples' experiences and learn from them, but you can't apply them directly: you have to forge your own path. Rather than take someone else's pattern verbatim and throw yourself into it, you need to build something that is nurturing and right for you. That's true in relationships, and it's true in business. Over half of all billion dollar startups were founded by immigrants, and I think this mindset is one of the reasons why. As an immigrant, you don't have the luxury of following patterns; you have to weave your own from first principles. You can't make assumptions about how people will behave; you have to study them. Taking this outside perspective is a path to success for everyone.

Another response:

Ask questions, let them cook food from their childhood, look at pictures, learn key phrases in their language. Understand that we’re constantly fighting against this dichotomy of wanting to venture off, but also wanting a place to belong. Realize that we approach emotional intimacy and relationships very differently.

For me, the relationships that have worked are the ones where we've made the space to create our own culture together. I'm drawn to outsiders and people who are willing to question established norms, and over time, through trial and error, bad interactions and good, I've found that I find slavish adherence to cultural norms in a person as threatening as some people find the opposite in me. I've decided that the edge of established culture is where the interesting work happens, and where some of the most interesting people can be found.

In other words, my filter bubble is my psychological safety zone. It's an emotional force field, just as it is for everyone. We all choose who we interact with, who we listen to, and the spaces that we inhabit. The important thing is not that we blow those bubbles to smithereens, but that we see them for what they are, and - just as those happily married TCKs have - let people in to help us grow and change them.

This weekend, children were shot with rubber bullets and tear gas at the US border with Mexico. The root of America's refusal to let them in is a fear of a disruption to those norms. It's in vain. Populations have been ebbing and flowing for as long as there have been people. America is changing, just as all countries are changing, how they always have been, and how they always will. And people like me - those of us with no nationality and no religion, but an allegiance to relationships and the cultures we create together - are growing in number. Selfishly, but also truthfully, I believe it's all for the better.

 

Photo by Elias Castillo on Unsplash

· Posts · Share this post

 

How machine learning can reinforce systemic racism

Over Thanksgiving, the Washington Post ran a profile of the babysitting startup Predictim:

So she turned to Predictim, an online service that uses “advanced artificial intelligence” to assess a babysitter’s personality, and aimed its scanners at one candidate’s thousands of Facebook, Twitter and Instagram posts.

The system offered an automated “risk rating” of the 24-year-old woman, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment — a 2 out of 5 — for bullying, harassment, being “disrespectful” and having a “bad attitude.”

Machine learning works by making predictions based on a giant corpus of existing data, which grows, is corrected, and becomes more accurate over time. If the algorithm's original picks are off, the user lets the software know, and this signal is incorporated back into the corpus. So to be any use at all, the system broadly depends on two important factors: the quality of the original data, and the quality of the aggregate user signal.

In the case of Predictim, it needs to have a great corpus of data about a babysitter's social media posts and how it relates to their real-world activity. Somehow, it needs to be able to find patterns in the way they use Instagram, say, and how that relates to whether they're a drug user or have gone to jail. Then, assuming Predictim has a user feedback component, the users need to accurately gauge whether the algorithm made a good decision. Whereas in many systems a data point might be reinforced by hundreds or thousands of users giving feedback, presumably a babysitter has comparatively fewer interactions with parents. So the quality of each instance of that parental feedback is really important.
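To make the feedback-loop problem concrete, here's a toy sketch in Python. Everything in it is invented for illustration - the feature, the numbers, and the "model" are hypothetical, not Predictim's actual pipeline. It shows how a system trained on historical labels faithfully learns the raters' bias, even when the underlying risk is identical across groups:

```python
import random

random.seed(0)

# Toy corpus: each candidate has one feature (whether they use a
# community dialect in their posts) and a label assigned by past
# raters. True risk is 10% for everyone, but biased raters perceive
# dialect users as riskier and label them accordingly.
def biased_label(uses_dialect):
    perceived_risk = 0.1 + (0.3 if uses_dialect else 0.0)
    return random.random() < perceived_risk

corpus = [(d, biased_label(d)) for d in [True, False] * 5000]

# A naive "model": predict the observed risk rate for each feature
# value. Any statistical learner fit to these labels does the same
# thing in a more elaborate way.
def risk_rate(uses_dialect):
    labels = [label for d, label in corpus if d == uses_dialect]
    return sum(labels) / len(labels)

print(f"predicted risk, dialect users:     {risk_rate(True):.2f}")
print(f"predicted risk, non-dialect users: {risk_rate(False):.2f}")
```

The model reports dialect users as several times riskier, not because they are, but because the corpus says so. Parental feedback folded back into the corpus would then reinforce the gap rather than correct it.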

It made me think of COMPAS, a commercial system that assesses how likely a criminal defendant is to recidivate. It's just one of the tools courts are using to adjust their sentences, particularly with respect to parole. Unsurprisingly, when ProPublica analyzed the data, inaccuracies fell along racial lines:

Black defendants were also twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists.

It all comes down to that corpus of data. And when the underlying system of justice is fundamentally racist - as it is in the United States, and in most places - the data will be too. Any machine learning algorithm supported by that data will, in turn, make racist decisions. The biggest difference is that while we've come to understand that the human-powered justice system is beset with bias, that understanding with respect to artificial intelligence is not yet widespread. For many, in fact, the promise of artificial intelligence is specifically - and erroneously - that it is unbiased.

Do we think parents - particularly in the affluent, white-dominated San Francisco Bay Area communities where Predictim is likely to launch - are more or less likely to give positive feedback to babysitters from communities of color? Do we think the algorithm will mark down people who use language most often used in underrepresented communities in their social media posts?

Of course, this is before we even touch the Minority Report pre-crime implications of technologies like these: they aim to predict how we will act, vs how we have acted. The only possible outcome is that people whose behavior fits within a narrow set of norms will more easily find gainful employment, because the algorithms will be trained to support this behavior, while others find it harder to find jobs they might, in reality, be able to do better.

It also incentivizes a broad surveillance society and repaints the tracking of data about our actions as a social good. When knowledge about the very existence of surveillance creates a chilling effect on our actions, and knowledge about our actions can be used to influence democratic elections, this is a serious civil liberties issue.

Technology can have a part to play in building safer, fairer societies. But the rules they enforce must be built with care, empathy, and intelligence. There is an enormous part to play here not just for user researchers, but for sociologists, psychologists, criminal justice specialists, and representatives from the communities that will be most affected. Experts matter here. It's just one more reason that every team should incorporate people from a wide range of backgrounds: one way for a team to make better decisions on issues with societal implications is for them to be more inclusive.

· Posts · Share this post

 

I'm going dark on social media for the rest of 2018.

For a host of reasons, I've decided to go dark on social media for the remainder of 2018. If my experiment is successful beyond that time, I'll just keep it going.

Originally, I'd intended to do this just for the month of December, but as I sat around the Thanksgiving dinner table yesterday, surrounded by family and friends, I asked myself: "why not now?"

So, now is the time.

There are two reasons:

The first is that, ordinarily, if a company was found to be furthering an anti-semitic smear in order to protect itself from accusations that it had allowed illegal political advertising in order to influence an election, I probably wouldn't buy goods or services from that company. Particularly if they tried hard to hide that news. The fact that this company has ingrained itself in nearly every aspect of modern life doesn't mean it should be excused - in fact, it makes its actions exponentially more disturbing.

Similarly, other social networks have not exactly shown themselves to be exemplars. While I firmly believe that the web is a net positive for democracy which has provided opportunities for everyone to have a voice, social networking companies have largely shirked the responsibilities of the privileged positions they have found themselves in. We use them more than any other source to learn about the world - but they've chosen to serve us with algorithms that are optimized to maximize our engagement with display ads rather than nurture our curiosity and empathy. Emotive content tends to rise to the top, which has real effects: we're more divided than ever before in the west, and in countries like Myanmar, social networking has been an ingredient in genocide.

I don't want my engagement, or engagement in the content I contribute, to add value to this machine.

The second reason is that it doesn't make me feel good. Partially this is because of the emotive content the algorithms serve to me, which takes a real emotional toll. Partially it's because the relationships you maintain on social networks are shallow. In some cases, they are shadows of real, deeper relationships, but they don't serve those relationships well; posting feels like emotional labor, but has little of the emotional effect or intimacy of real communication. It's an 8-bit approximation of friendship where the conversations are performative because they're always in front of an audience.

One of the things that was stopping me from withdrawing from social media was a worry that people would forget about me. Many of my friends are overseas, and we don't see each other on a regular basis. But I've decided that this is manufactured FOMO; my really meaningful relationships will continue regardless of which social networks I happen to use. The idea that Facebook is an integral part of my friendships seems more toxic the more I think about it.

Finally, I'll admit it: I'm kind of depressed. Social networking has been shown to make people more so. Cutting it out for a while seems like an okay thing to try.

I removed all my social apps on my phone and replaced them with news sources and readers. So here's where to find me for the next little while:

I'm cutting out Facebook, Twitter, Instagram, LinkedIn, and Mastodon completely. (Mastodon doesn't suffer from the organizational issues I described above, but by aping commercial social networking services, it suffers from the same design flaws.) As of tonight, I won't be logging into those platforms on any device, and I won't receive comments, likes, reshares, etc, on any of them.

I will be posting regularly on my blog here at werd.io. If you use a feed reader (I use NewsBlur and Reeder together), I have an RSS feed. Yeah, we still have those in 2018. But if you don't, you can also get new posts in your email inbox by subscribing over here. I've set it up so you can just reply to any message and I'll get it immediately.

You can always email me at ben@benwerd.com, or text me on Signal at +1 (510) 283-3321.

I'm not removing any accounts for now - I'm simply logging out. If this experiment continues, I'll go so far as to remove my information.

Please do say hi using any of those methods. And if we find ourselves in the same city, let's hang out. I'm hoping that this experiment will lead to more, deeper relationships. But for now: this is why you're not going to see my posts in your usual feeds.

· Posts · Share this post

 

Media for the people

Yesterday, in the afternoon, I collapsed. Everything seemed overwhelming and sad.

Today, I'm full of energy again, and I think there's only one kind of work that matters. The work of empowerment.

Broadly: How can we return to a functional democracy that works for everyone?

Narrowly: How can we make sure this administration is not able to follow its authoritarian instincts, how can we make sure they are nowhere near power in 2020, and how can we make sure this never happens again?

A huge amount of this is fixing the media. Not media companies - but the fabric of how we get our information and share with each other. I've been focused on this for my entire career: Elgg, Latakoo, Known, Medium, Matter and Unlock all deal with this central issue.

A convergence of financial incentives has created a situation where white supremacy and authoritarianism can travel across the globe in the blink of an eye - and can also travel faster than more nuanced ideas. Fascist propaganda led directly to modern advertising, and modern advertising has now led us right back to fascist propaganda, aided and abetted by people who saw the right to make a profit as more important than the social implications of their work.

I think this is the time to take more direct action, and to build institutions that don't just speak truth to power, but put power behind the truth. Stories are how we learn, but our actions define us.

Non-violent resistance is the only way to save democracy. But we need it in every corner of society, and in overwhelming numbers.

There are people out on the streets today, who have been fighting this fight for longer than any of us. How can we help them be more effective?

How can we help people who have never been political before in their lives to take a stand?

How can we best overcome our differences and come together in the name of democracy, freedom, and inclusion?

And how can we actively dismantle the apparatus of oppression?

It's time to create a new kind of media that presents a real alternative to the top-down structures that have so disserved us. One that is by the people, for the people, and does not depend on wealthy financial interests.

And with it, a new kind of democracy that is not just representative, but participative. For everyone, forever.

· Posts · Share this post

 

Gab and the decentralized web

As a proponent of the decentralized web, I've been thinking a lot about the aftermath of the domestic terrorism that was committed in Pittsburgh at the Tree of Life synagogue over the weekend, and how it specifically relates to the right-wing social network Gab.

In America, we're unfortunately used to mass shootings from right-wing extremists, who have committed more domestic acts of terror than any other group. We're also overfamiliar with ethnonationalists and racist isolationists, who feel particularly emboldened by the current President. Lest we forget, when fascists marched in the streets yelling "the Jews will not replace us", he announced that "you had very fine people on both sides". The messaging could not be more clear: the President is not an enemy of hate speech.

As the modern equivalent of the public square, social networking services have been under a lot of pressure to remove hate speech from their platforms. Initially, they did little; over time, however, they began to remove many of the worst offenders. Hence Gab, which was founded as a kind of refuge for people whose speech might otherwise be removed by the big platforms.

Gab claims it's a neutral free speech platform in the spirit of the First Amendment. (Never mind that the First Amendment protects you from the government curtailing your speech, rather than corporations enacting policies for private spaces that they own and control.) But anyone who has spent 30 seconds there knows this isn't quite right. This weekend's shooter chose to post there before committing his atrocity; afterwards, many other users proclaimed him to be a hero.

It's an online cesspit, home to some of the worst of humanity. These are people who refer to overt racism as "wrongthink", and mock people who are upset by it. As Huffington Post recently reported about its CEO, Andrew Torba:

[...] As Gab’s CEO, he has rooted for prominent racists, vilified minorities, fetishized “trad life” in which women stay at home with the kids, and fantasized about a second American civil war in which the right outguns the left.

Gab is gone for now - a victim of its service providers pulling the plug in the wake of the tragedy - but it'll be back. Rather than deplatforming, the way to fight this speech, it claims, is with more speech. In my opinion, this is a trap that falsely sets up the two opposing sides here as being equivalent. Bigotry is not an equal idea, but it's in their interests to paint it as such. While it's pretty easy to debate bigots on an equal platform and win, doing so unintentionally elevates their standing. Simply put, their ideas shouldn't be given oxygen. A right to freedom of speech is not the same as a right to be amplified.

I found this piece by an anonymous German student in Saxony instructive:

We also have to understand that allowing nationalist slogans to gain currency in the media and politics, allowing large neo-Nazi events to take place unimpeded and failing to prosecute hate crimes all contribute to embolden neo-Nazis. I see parallels with an era we thought was confined to the history books, the dark age before Hitler.

An often-repeated argument about deplatforming fascists is that we'll just drive them underground. In my opinion, this is great: when we're literally talking about Nazis, driving them underground is the right thing to do. Yes, you'll always have neo-Nazis somewhere. But the more they're exposed to the mainstream, the more their movement may gain steam. This isn't an academic problem, or a problem of optics: give Nazis power and people will die. These are people who want to create ethnostates; they want to prioritize people based on their ethnicity and background. These movements start in some very dark places, and often end in genocide.

When we talk about a decentralized social web, the framing is usually that it's one free from censorship; where everyone has a home. I broadly agree with that idea, but I also think the discussion must become more nuanced in the face of communities like Gab.

I agree wholeheartedly that the majority of our global discourse can't be trusted to a small handful of very large, monocultural companies that answer to their shareholders over the needs of the public. The need to make user profiles more valuable to advertisers has, for example, seen transgender users thrown off the platform for not using their deadnames. In a world where you need to be on social media to effectively participate in a community, that has had a meaningful effect on already vulnerable communities.

There's no doubt that this kind of unacceptable bigotry at the hands of surveillance capitalism would, indeed, be prevented by decentralization. But removing silos would also, at least in theory, enable and protect fascist movements, and give racists like this weekend's shooter a place to build unhindered community.

We must consider the implications of removing these gatekeepers very deeply - and certainly more deeply than we have been already.

A common argument is that the web is just a tool, oblivious to what people use it for. This is similar to the argument that was made about algorithms, until it became obvious that they were built by people and based on their assumptions and biases. Nothing created by people is unbiased; everything is in part derived from the context and assumptions of its creators. By being more aware of our context and the assumptions we're bringing to the table, we can hopefully make better decisions, and see potential problems with our ideas sooner. Even if there isn't a perfect solution, understanding the ethics of the situation allows us to make more informed decisions.

On one side, by creating a robust decentralized web, we could create a way for extremist movements to thrive. On another, by restricting hate speech, we could create overarching censorship that genuinely guts freedom of speech protections, which would undermine democracy itself by restricting who can be a part of the discourse. Is there a way to avoid the second without the first being an inevitability? And is it even possible, given the possible outcomes, to return to our cozy idea of the web as being a force for peace through knowledge?

These are complicated ethical questions. As builders of software on the modern internet, we have to know that there are potentially serious consequences to the design decisions we make. Facebook started as a prank by a college freshman and now has a measurable impact on genocide in Myanmar. While it's obvious to me that everyone having unhindered access to knowledge is a net positive that particularly empowers disadvantaged communities, and that social media has allowed us to have access to new voices and understand a wider array of lived experiences, it has also been used to spread hate, undermine elections, and disempower whole communities. Decentralizing the web will allow more people to share on their own terms, using their own voices; it will also remove many of the restrictions to the spread of hatred.

Wherever we end up, it's clear that President Trump is wrong about the alt-right: these aren't very fine people. These are some of the worst people in the world. Their ideology is abhorrent and anti-human; their messages are obscene.

No less than the future of democratic society is at stake. And a society where the alt-right wins won't be worth living in.

Given that, it's tempting to throw up our hands and say that we should ban them from speaking anywhere online. But if we do that, the consequence is that there has to be a mechanism for censorship built into the web - single points of failure that could be used to prevent any community from speaking. Who gets to control that? And who says we should get to have this power?

· Posts · Share this post

 

The day I realized I was going against the career grain

One of the most surreal professional experiences of my career was going to work for Medium. It was a decision I thought long and hard about, and was a sea change in the way I worked.

For my entire career, I'd gone against the grain. I bootstrapped an open source startup from Scotland, determined that I wouldn't move to Silicon Valley. I was the first employee at another one, based in Texas, that was determined to be Texan through and through. And then I finally founded a company in the San Francisco Bay Area, but was determined that it should be open source and decentralized (at a time when almost all investors were against the idea). In all these cases, while I had equity, I had a pretty low salary. In fact, I had never made much money at all, because I had put the highest priority on maintaining my social ideology.

So when I came to Medium, I immediately earned double the highest amount of money I'd ever made. Suddenly I was in this incredibly slick work environment, with empathetic, thoughtful people who were at the top of their skills. There were high-burn frills like kombucha on tap, but much more importantly, there were real benefits. Vacation was encouraged, there was parental leave, and I could spend thousands of dollars on my own education without drawing from my salary. (Side note: a lot of fancy tech company benefits are things that every employee in Europe is entitled to by law.)

Most strikingly, the people I worked with had mostly never worked in low-budget startups. If they'd been involved in small businesses at all, they had very quickly attracted millions of dollars in venture capital - but quite often, they'd come from companies like Google, and had enjoyed these kinds of salaries and benefits for their entire working lives.

Only then did I realize that for my entire career, by going against the grain and trying to build my own environments from scratch, I had made life incredibly hard for myself. Honestly, I thought that this was just how work was. But it turned out there was this world where, if I could accept not being my own boss and coming into an office building every day (which had both felt like psychological barriers, but in reality were very minor), I could make good money, go home at a normal time, take decent vacations without worrying so much about the budget, and be a healthier human being. What?!

In reality, I became incredibly anxious. Because I was working with people who had just had the luxury of focusing on their skills for their whole careers, I had really strong imposter syndrome. And everything was so slow, methodical, and ordered compared to the bouncing-off-the-walls chaos of an early-stage startup. I was still a little bit addicted to the adrenaline, and adapting was tougher than it should have been. This was the cushiest job I ever had, with some of the most genuinely amazing coworkers. I was a highly privileged technology worker, making really good money in a lovely environment - and I felt guilty for not being as happy as I felt I should have been.

Over time, it got easier. Matter offered me a job at the end of my first year, which I couldn't say no to. I think I wouldn't have done as well if I hadn't gone to Medium first: I had become a team player, and a much better employee. Had I stayed, I'm certain the unease would have continued to fade over time. I continued this growth trajectory at Matter; it was like losing an addiction to radical independence.

Honestly, I think that kind of radical independence is oversold. Being a founder - or frankly, even just a sole operator or consultant - is lonely, hard work, and the pay is bad. It's a bit sad that it took me over a decade to understand this. And while I don't want to downplay founding something, you should only do it if there's a foreseeable path to a point where you won't be in survival mode. (Real investment really helps, but it's not appropriate for every business, and not everyone can raise it.) Doing what regular people do - which is to get a job, potentially move to where the jobs are, pull a salary as part of a much larger organization, and build a financially stable future - is not at all a bad way to live. And I wish I could go back and tell my 25-year-old self about it.

· Posts · Share this post

 

It's time for a new branch of public media

President Lyndon B Johnson signed the Public Broadcasting Act in 1967, which established the Corporation for Public Broadcasting. Previously, an independent public broadcaster had been established through grants by the Ford Foundation, but Ford began to withdraw its support.

Here's what he said:

"It announces to the world that our nation wants more than just material wealth; our nation wants more than a 'chicken in every pot.' We in America have an appetite for excellence, too. While we work every day to produce new goods and to create new wealth, we want most of all to enrich man's spirit. That is the purpose of this act."

To this day, PBS and NPR carry balanced, factual programming, supported by listeners and underwriters rather than ads.

Meanwhile, C-SPAN was established in 1979 as an independent, non-profit entity. It was founded by cable operators, and gets its funding through carrier fees. It gets 6 cents per cable subscriber in the United States. Its coverage of America's political process is unprecedented.

Public broadcast media hasn't just had an effect on the education of the public and on elections. It's also had an effect on private media, acting as a bar for the kinds of high-quality content that audiences might expect. For example, NPR sets the bar for commercial podcasting.

If companies like Facebook and Twitter are media companies too - and they are - we haven't yet seen a non-commercial equivalent as we have for TV and radio. There's an argument that open projects like Mastodon have a similar spirit, but there's no major backing.

As more and more of us get our news and information from social media, there's more of a call for a public media equivalent. Just as NPR and PBS don't need to worry about which content will sell commercials, a public social media service wouldn't need to worry about promoting engagement to sell display ads.

In the same way that NPR and PBS have set the bar for factual content on radio and TV, an online service run in the public interest would set the bar for how content is delivered online. It would improve the ecosystem for everyone, as well as being directly informative.

History points to different ways this could be funded. The Ford Foundation could back it, in the same way they backed the original US public broadcasters. The Corporation for Public Broadcasting, or an organization like it, could back it. Or it could be created through contributions from service providers, as was done with C-SPAN.

It could also be established as a nonprofit fund that would back and underwrite promising storytelling platforms that promised to be run in the public interest. A little bit of seed funding across multiple projects at first; then more funds to back the platforms that succeeded.

If we've learned anything from broadcasting (or Facebook!), it's that for-profit media alone isn't enough to create a healthy media ecosystem. But any noncommercial service is going to need to find both financial and cultural backing.

I think it's one of the most important things we can be doing.

 

This piece was originally published as a Twitter thread.

· Posts · Share this post

 

It's time to get out of the way of artists making money on the internet

I'm spending some of my time trying to better understand how people who make creative work on the internet - writers, artists, musicians, indie developers - can build an audience and make a living from their work.

I have a lot of questions about how these creators can find people who their work resonates with. This is the opposite of founding a startup or a small business, for example: there you're finding the audience first, and building something that resonates with them. While some creative work is along those lines, more of it comes from a different creative space. The work is some function of the creator's need, with the feedback loop from the audience factoring into the mix as it grows.

Community-building, then, is a big question - particularly in the world of opaque social media algorithms that get in the way of talking directly to your followers. I'm calling it "community-building" because while promotion is a component, it's not the whole purpose, nor the overriding instinct. Finding kindred minds is a more immediate emotional need, even if the financial act of covering your bills is closer to the base level of Maslow's Hierarchy.

In the current ecosystem, community-building and compensation have been rolled up into one set of tools. By providing value over the top of facilitating transactions, platforms can attract creators. The more creators they attract, the larger the audience they bring with them, and the larger the cumulative profit they ultimately earn.

Medium does this well: by submitting work to the Partner Program, you're much more likely to be featured on the homepage and in its newsletters - and its payments are not insubstantial (here's my featured story Rules for Resters). Substack performs a similar trick for email newsletters (I subscribe to Daniel Ortberg). Patreon attempts to do it for every kind of creative work on every medium, which is a tricky balancing act (I back Hallie Bateman and Mastodon).

Everybody is more or less aligned here, and real money is being made, but this bundling makes it difficult to tailor your revenue or community-building tactics to your audience. One size has to fit all.

This may work for some creators; others, not so much. Every community and audience is different, and understanding their needs and desires is a core part of building a following, and a subscriber base. It's not about what you assume their needs and desires are; it's all about getting to know them as real people, and through this holistic understanding, developing unique insights about them. These insights can validate or invalidate your assumptions, but they can also take you in entirely new directions. (This principle applies to both artists and business founders, although, as I pointed out earlier, the starting point in this learning cycle is probably different.)

There's a clear benefit to making payments easier, and having a common gateway to do that, so that audience members don't have to enter their credit card details again and again and again. But that doesn't mean that everything needs to be bundled. There's also a clear benefit to having the tools of community-building and taking payments made out of small pieces, loosely joined, so that you can create the stack that makes the most sense for your own community, with tools that are tailored for them. One size fits all services are the first step, and maybe the entry point. But this is the web, and more is possible.

Patreon et al don't just want to own the payment relationship between artists and their audiences; they want to own all aspects of that relationship. They want fans to visit their homepages instead of the artists' own. Ultimately, they want to own the way artists communicate with the world - making those communications subject to their own rules.

By establishing open standards for one-click, peer-to-peer payment that can then integrate with multiple tools, artists can potentially be better served. They can meet their audiences where they're at. They can make money without adhering to anyone else's rules. And they can more quickly reach a point where they're covering their costs through the work that they love.

This open source, decentralized world is coming. It's great news for anyone who wants to see a diverse cultural landscape where anyone can make money on their own terms, without regard for language, borders, or what someone at a desk in San Francisco thinks would be nice to promote. And it will change everything.

· Posts · Share this post

 

Making work in the Trump era

Honestly, most days, I feel paralyzed. I feel like there's so much happening, that we're literally descending into fascism on a global scale, and that I don't know if anything I do can possibly be impactful enough. I also feel that while it would be easy to block it all out and carry on as normal, to put politics aside and live my life as if none of this was going on, to do so would be complicity.

I have the privilege to set everything aside, as a white male in Silicon Valley. But if I did that, I would feel the weight of my ancestors - people who fled pogroms in Ukraine, who fought for social justice in 1930s America, who fought the Nazis in Europe, who led the resistance against the Japanese in Indonesia - weighing down on me. And I would feel the weight of my friends of color, my LGBTQIA friends, my immigrant friends. It would be an entirely selfish act. And even selfishly, the result would be a world that I simply don't want to live in: a restrictive, brutal, theist society built around the supremacy of a narrow, arbitrary demographic.

If you are not vocally political in the current era, your inaction is tacit support for the current regime and its bigoted value system. End of story.

I know I'm not alone.

But I also know there's work to be done.

I'm vocal; I give a significant percentage of my income; I march. But I also need to pay my rent and cover these donations to begin with.

I've already made myself one pact: while I work in tech, an industry that has undeniably been part of the problem, I will only work on mission-driven problems at the intersection with democracy. I've turned down large salaries at companies you can name, because I want to be able to feel like I'm part of the solution and not the problem. It means I'll probably never be a millionaire. I can live with that.

The second, newer pact, is to work hard at the work I do, to the exclusion of distractions. This is not something I've been good at, but it's a skill I need to rebuild. Like many of us, I've been glued to social media, simultaneously addicted to and exhausted by every new development. And honestly, I have to break out of it.

Although raising and maintaining awareness is vital, sitting and typing outraged tweets on social media is masturbatory, and benefits the very platforms that were a large part of creating this current situation. Taking a step back and using my voice to amplify others who might not enjoy the same privileges, while also taking more calculated moves to have impact where it counts, is more important.

· Posts · Share this post

 

Bad news: there's no solution to false information online

For the last couple of years, fake news has been towards the top of my agenda. As an investor for Matter, it was one of the lenses I used to source and select startups in the seventh and eighth cohorts. As a citizen, disinformation and misinformation influenced how I thought about the 2016 US election. And as a technologist who has been involved in building social networks for 15 years, it has been an area of grave concern.

Yesterday marked the first day of Misinfocon in Washington DC; while I'm unfortunately unable to attend, I'm grateful that hundreds of people who are much smarter than me have congregated to talk about these issues. They're difficult and there's no push-button answer. From time to time I've seen pitches from people who purport to solve them outright, and people have phoned me to ask for a solution. So far, I've always disappointed them: I'm convinced that the only workable solution is a holistic approach that provides more context.

Of course, "fake news" is a terrible term, one that's being used to further undermine trust in the press. When we use it, we're really talking about three things:

Propaganda: systematic propagation of information or ideas in order to encourage or instil a particular attitude or response. In other words: weaponized information to achieve a change of mindset in its audience. The information doesn't have to be incorrect, but it might be.

Misinformation: spreading incorrect information, for any reason. Misinformation isn't necessarily malicious; people can be wrong for a variety of reasons. I'm wrong all the time, and you are too.

Disinformation: disseminating deliberately false information, especially when supplied by a government or its agent to a foreign power or on the media with the intention of influencing policies of those who receive it.

None of them are new, and certainly none of them were newly introduced in the 2016 election. 220 years ago, John Adams had some sharp words in response to Condorcet's comments about journalism:

Writing in the section where the French philosopher predicted that a free press would advance knowledge and create a more informed public, Adams scoffed. “There has been more new error propagated by the press in the last ten years than in an hundred years before 1798,” he wrote at the time.

Condorcet's thoughts on journalism inspired the establishment of authors' rights in France during the French revolution. In particular, the right to be identified as an author was developed not to reward the inventors of creative work, but so that authors and publishers of subversive political pamphlets at the time could be identified and held responsible. It's clear that these conversations have been going on for a long time.

Still, trust in the media is at an all-time low. 66% of Americans say the news media don't do a good job of separating facts from opinion; only 33% feel positively about them. As Brooke Binkowski, Managing Editor of Snopes, put it to Backchannel in 2016:

The misinformation crisis, according to Binkowski, stems from something more pernicious. In the past, the sources of accurate information were recognizable enough that phony news was relatively easy for a discerning reader to identify and discredit. The problem, Binkowski believes, is that the public has lost faith in the media broadly — therefore no media outlet is considered credible any longer.

Credibility is key. In the face of this lack of trust, a good option would be to go back to the readers, understand their needs deeply, and adjust your offerings to take that into account. It's something that Matter helped local news publishers in the US to do recently with Open Matter to great success, and there's more of this from Matter to come. But this is still a minority response. As Jack Shafer wrote in Politico last year:

But criticize them and ask them to justify what they do and how they do it? They go all whiny and preachy, wrap themselves in the First Amendment and proclaim that they’re essential to democracy. I won’t dispute that journalists are crucial to a free society, but just because something is true doesn’t make it persuasive.

So what would be more persuasive?

How can trust be regained by the media, and how could the web become more credible?

There are a few ways to approach the problem: from a bottom-up, user driven perspective; from the perspective of the publishers; from the perspective of the social networks used to disseminate information; and from the perspective of the web as a platform itself.

Users

From a user perspective, one issue is that modern readers put far more trust in individuals than they do in brand names. It's been found that users trust organic content produced by people they trust 50% more than other types of media. Platforms like Purple and Substack allow journalists to create their own personal paid subscription channels, leveraging this increased trust. A more traditional publisher brand could create a set of Purple channels for each business, for example.

Publishers

From a publisher perspective, transparency is key: in response to an earlier version of this post, Jarrod Dicker, the CEO of Po.et, pointed out that transparency of effort could be helpful. Here, journalists could show exactly how the sausage was made. As he put it, "here are the ingredients". Buzzfeed is dabbling in these waters with Follow This, a Netflix documentary following the production of a single story each episode.

Publishers have also often fallen into the trap of writing highly emotive, opinion-driven articles in order to increase their pageview rate. Often, this is created by incentives inside the organization for journalists to hit a certain popularity level for their pieces. While this tactic may help the bottom line in the short term, it comes at the expense of longer term profits. Those opinion pieces erode trust in the publisher as a source of information, and because the content is optimized for pageviews, it results in shallower content overall.

Social networks

From a social network perspective, fixing the news feed is one obvious way to make swift improvements. Today's feeds are designed to maximize engagement by showing users exactly what will keep them on the platform for longer, rather than a reverse chronological list of content produced by the people and pages they've subscribed to. Unfortunately, this prioritizes highly emotive content over factual pieces, and the algorithm becomes more and more optimized for this over time. The "angry" reacji is by far the most popular reaction on Facebook - a fact that illustrates this emotional power law. As the Pew Research Center pointed out:

Between Feb. 24, 2016 – when Facebook first gave its users the option of clicking on the “angry” reaction, as well as the emotional reactions “love,” “sad,” “haha” and “wow” – and Election Day, the congressional Facebook audience used the “angry” button in response to lawmakers’ posts a total of 3.6 million times. But during the same amount of time following the election, that number increased more than threefold, to nearly 14 million. The trend toward using the “angry” reaction continued during the last three months of 2017.

Inside sources tell me that this trend has continued. Targeted display advertising both encourages the platforms to maximize revenue in this way, and encourages publishers to write that highly emotive, clickbaity content, undermining their own trust in order to make short-term revenue. So much misinformation is simply clickbait that has been optimized for revenue past the need to tell any kind of truth.

It's vital to understand these dynamics from a human perspective: simply applying a technological or a statistical lens won't provide the insights needed to create real change. Why do users share more emotive content? Who are they? What are their frustrations and desires, and how does this change in different geographies and demographics? My friend Padmini Ray Murray rightly pointed out to me that ethnographies of use are vital here.

It's similarly important to understand how bots and paid trolls can influence opinion across a social network. Twitter has been hard at work suspending millions of bots, while Facebook heavily restricted its API to reduce automatic posting. According to the NATO Stratcom Center of Excellence:

The goal is permanent unrest and chaos within an enemy state. Achieving that through information operations rather than military engagement is a preferred way to win. [...] "This was where you first saw the troll factories running the shifts of people whose task is using social media to micro-target people on specific messaging and spreading fake news. And then in different countries, they tend to look at where the vulnerability is. Is it minority, is it migration, is it corruption, is it social inequality. And then you go and exploit it. And increasingly the shift is towards the robotisation of the trolling."

Information warfare campaigns between nations are made possible by vulnerabilities in social networking platforms. Building these platforms long ago stopped being a simple game of growing a user base; they are now theaters of war. Twitter's long-standing abuse problem is now an information warfare problem. Preventing anyone from gaming them for such purposes should be a priority - but as these conflicts become more serious, platform changes increasingly become a matter of foreign policy. It would be naïve to assume that the big platforms are not already working with governments, for better or worse.

The web as a platform

Then there's the web as a platform itself: a peaceful, decentralized network of human knowledge and creativity, designed and maintained for everyone in the world. A user-based solution requires behavior change; a social network solution requires every company to improve its behavior, potentially at the expense of its bottom line. What can be done on the level of the web itself, and the browsers that interpret it, to create a healthier information landscape?

One often-touted solution is to maintain a list of trustworthy journalistic sources, perhaps by rating newsroom processes. Of course, the effect here is direct censorship. Whitelisting publishers means that new publications are almost impossible to establish. That's particularly pernicious because incumbent newsrooms are disproportionately white and male: do we really want to prevent women and people of color from publishing? Furthermore, these publications are often legacy news organizations whose perceived trust derives from their historical control over the means of distribution. The fact that a company had a license to broadcast when few were available, or owned a printing press when publishing was prohibitively expensive for most people, should not automatically impart trust. Rich people are not inherently more trustworthy, and "approved news" is a regressive idea.

Similarly, accreditation would put most news startups out of business. Imagine a world where you need to pay thousands of dollars to be evaluated by a central body, or web browsers and search engines around the world would disadvantage you in comparison to people who had shelled out the money. The process would be subject to ideological bias from the accrediting body, and the need for funds would mean that only founders from privileged backgrounds could participate.

I recently joined the W3C Credible Web Community Group and attended the second day of its meeting in San Francisco, and was impressed with the nuance of thought and bias towards action. Representatives from Twitter, Facebook, Google, Mozilla, Snopes, and the W3C were all in attendance, discussing openly and directly collaborating on how their platforms could help build a more credible web. I'm looking forward to continuing to participate.

It's clearly impossible for the web as a platform to objectively report that a stated fact is true or false. This would require a central authority of truth - let's call it MiniTrue for short. It may, however, be possible for our browsers and social platforms to show us the conversation around an article or component fact. Currently, links on the web are contextless: if I link to the Mozilla Information Trust Initiative, there's no definitive way for browsers, search engines or social platforms to know whether I agree or disagree with what is said within (for the record, I'm very much in agreement - but a software application would need some non-deterministic fuzzy NLP AI magic to work that out from this text).

Imagine, instead, if I could highlight a stated fact I disagree with in an article, and annotate it by linking that exact segment from my website, from a post on a social network, from an annotations platform, or from a dedicated rating site like Tribeworthy. As a first step, it could be enough to link to the page as a whole. Browsers could then find backlinks to that segment or page and help me understand the conversation around it from everywhere on the web. There's no censoring body, and decentralized technologies work well enough today that we wouldn't need to trust any single company to host all of these backlinks. Each browser could then use its own algorithms to figure out which backlinks to display and how best to make sense of the information, making space for them to find a competitive advantage around providing context.
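To make the idea concrete, here's a rough sketch of what such a machine-readable "I disagree with this segment" statement could look like, loosely modeled on the W3C Web Annotation data model. The URLs, the quoted segment, and the commentary are all invented for illustration; a real implementation would also need to settle where such annotations live and how browsers discover them.

```python
import json

# A minimal sketch of a decentralized disagreement annotation, loosely
# following the W3C Web Annotation vocabulary. Everything here is a
# placeholder: the source URL, the disputed quote, and the response.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "disagreeing",  # the spec's term for disputing content
    "body": {
        "type": "TextualBody",
        "value": "This figure is contradicted by the census data.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.com/article",  # the page being annotated
        "selector": {
            # Pins the annotation to the exact disputed segment, not
            # just the page as a whole.
            "type": "TextQuoteSelector",
            "exact": "the population doubled in 2016",
        },
    },
}

# Serialized like this, any browser, search engine, or annotation
# service could index the backlink and surface the conversation.
print(json.dumps(annotation, indent=2))
```

Because the format is just linked JSON, no single company needs to host the index: each browser or platform could crawl and weigh these annotations with its own algorithms, which is exactly the competitive space described above.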

Startups

I've come to the conclusion that startups alone can't provide the solutions we need. They do, however, have a part to play. For example:

A startup publication could produce more fact-based, journalistic content from underrepresented perspectives and show that it can be viable by tapping into latent demand. eg, The Establishment.

A startup could help publications rebuild trust by bringing audiences more deeply into the process. eg, Hearken.

A startup could help to build a data ecosystem for trust online, and sell its services to publications, browsers, and search engines alike. eg, Factmata and Meedan.

A startup could establish a new business model that prioritizes something other than raw engagement. eg, Paytime and Purple.

But startups aren't the solution alone, and no one startup can be the entire solution. This is a problem that can only be solved holistically, with every stakeholder in the ecosystem slowly moving in the right direction.

It's a long road

These potential technology solutions aren't enough on their own: fake news is primarily a social problem. But ecosystem players can help.

Users can be wiser about what they share and why - and can call out bad information when they see it. Those with the means can provide patronage to high quality news sources.

Publishers can prioritize their own longer term well-being by producing fact-based, deeper content and optimizing for trust with their audience.

Social networks can find new business models that aren't incentivized to promote clickbait.

And by empowering readers with the ability to fact check for themselves and understand the conversational context around a story, while continuing to support the web as an open platform where anyone can publish, we can help create a web that disarms the people who seek to misinform us by separating us from the context we need.

These are small steps - but together, taken as a whole, steps in the right direction.

 

Thank you to Jarrod Dicker and Padmini Ray Murray for commenting on an earlier version of this post.

· Posts · Share this post

 

Building an Instant Life Plan and telling your personal story

The last couple of months have been full of decision points for me, both personally and professionally. Everything has been on the table, and everything has been in potential flux.

Having worked in early stage startups pretty much continuously since 2003, it's possibly been less stressful for me than this level of uncertainty might be for others. Still, going forward, I would like to be more intentional about how I'm building my personal life. And while this might come across as a little pathological - have I jumped the Silicon Valley shark? - it seems like some of the tools we use to quickly understand businesses might work here, too. I typically don't like imposing frameworks on my personal life because you lose serendipity, and the experiences worth having are usually precluded by adding too much structure. I think humans are meant to freestyle; living by too many sets of rules closes you off to new possibilities.

Conversely, having guiding principles, and treating them as a kind of living document, could be helpful. It's the same thing I've advised so many startups to do: building a rigid business plan destroys your ability to be agile, but writing out the elements of your business forces you to describe and understand them. The Stanford d.School style Instant Business Plan, where the elements are literally Post-Its that can be swapped and changed, is a far better north star than a one-shot document. I think the same approach could work well for a life plan: a paper document where changeability is an intrinsic part of the format, but you are nonetheless forced to express your ideas concretely.

Why Post-Its rather than a document or a personal wiki? Post-Its force you to summarize your thoughts succinctly, and can easily and tangibly be replaced and moved around. Other options carry the risk of being too verbose (which is counter to the goal of creating an easy-to-follow north star) or unchangeable (which is counter to the goal of creating a living document that changes as you learn more and test your ideas).

Here's what it could look like, as a rough version 0.1. It's inspired both by the Stanford d.School Instant Business Plan, and a similar document used for startups at Matter. Don't give yourself more than 90 minutes to put this together:

 

Hi! I'm [halfsheet Post-It]
An elevator pitch of you, that doesn't focus on what you do for a living (that will come next). It's what we call a POV statement, which contains a description, a need and a unique insight. Example: Hi! I'm Ben. I'm a creative third culture kid who loves technology and social justice, but whose first love is writing. I need a way to stay creative, maintain work/life balance, and do meaningful work that also allows me to live a comfortable life.

I believe the world is [no more than three regular Post-Its]
Three things you think are happening in the world. This is a way to express your beliefs. Example: Experiencing unprecedented inequality that is harming every aspect of society; In the early stages of an internet-driven social revolution; Moving beyond arbitrary national borders. How would you test if these trends are real?

I make money by [halfsheet Post-It]
Here's where you get to describe what you do for a living. Example: Providing consulting and support to mission-driven early-stage technology companies and mission-driven incumbent industries, both from a strategic and technological perspective. Sometimes I write code but it isn't my primary value.

My employers are [no more than three halfsheet Post-Its]
Who typically gives you money? As a category, not a specific company. Example: Early-stage, mission-driven investment firms who need an ex-founder with both technological and analytical skills to help source and select their investments; early stage startups who need a manager with an open web or business strategy background; "legacy" or "incumbent" large organizations like universities and media companies who need an advisor with technical or startup experience.

My key work skills are [no more than three regular Post-Its]
Which skills are core drivers of your employment? Example: Full-stack web development and technical architecture; Trained in design thinking facilitation and processes for both ventures and products; Experienced startup founder who has lived every mistake.

My key personal attributes are [no more than three regular Post-Its]
What aspects of your personality or the way you act are you proud of? What do you think other people respect you for? Example: Bias towards kindness rather than personal enrichment; Writing and storytelling; Collaborative rather than competitive.

My key lifestyle risks are [three regular Post-Its]
What are the things that keep you up at night about your lifestyle? Specifically, in the following three areas:
Happiness: Risks to your ability to be a happy human (this is different for everybody)
Viability: Your financial risks
Feasibility: Risks to your ability to achieve the lifestyle you want with the time, geographies, and resources at your disposal
Example: Happiness: I don't have time to spend being social or taking care of my health; Viability: I need a minimum base salary of around $120,000 to cover my costs in the San Francisco Bay Area; Feasibility: It might not be possible to maintain the quality of life I enjoyed in Europe without a significantly higher salary.

My key work risks are [three regular Post-Its]
What are the things that keep you up at night about work or your ability to find it? Specifically, in the following three areas:
Workability: Risks to your ability to have a satisfying work life (this is different for everybody)
Viability: Risks to your value in the employment marketplace
Feasibility: Process or ecosystem risks to your finding the employment you want with the time and resources realistically at your disposal
Example: Workability: I am seen as largely a developer; Viability: I don't have experience working in a large tech giant in a management role, or equivalent; Feasibility: Most jobs are filled within a network and I'm not sure I have the connections I need to get to the jobs I might want.

Risks parking lot
As you figure out what your key risks are in each area, you should keep track of the ones that don't quite make the cut. It's useful to understand what they are, but as your life plan evolves over time, you might want to swap them out and bring them back into the key risks area.

Above all, to be successful, I need to [three regular Post-Its]
The definition of success varies for everyone. Some people are money-driven; some people prioritize other goals. What are the things you need to achieve to be successful? Specifically, in the following three categories:
Happiness: Your ability to be a happy human with the work and personal lives you want
Viability: Your ability to earn money and cover your costs
Feasibility: Your ability to practically achieve the things listed in happiness and viability with the time and resources realistically at your disposal
Example: Happiness: Regularly spend time with inspiring, mission-driven, kind people at work and in my life while taking care of my health; Viability: Get a job that comfortably covers my San Francisco Bay Area costs on a recurring basis; Feasibility: Gain marketable skills (MBA? CPA?) to add to my existing technology and business experience.

My key next steps are [three regular Post-Its]
This is what everything has culminated in. Based on the risks and the primary needs expressed above, what are the concrete next steps in the three key areas? Spending more time doing research or thinking doesn't count. It's got to be an action you can take immediately. Again, these are in the following categories:
Happiness: Your ability to be a happy human with the work and personal lives you want
Viability: Your ability to earn money and cover your costs
Feasibility: Your ability to practically achieve the things listed in happiness and viability with the time and resources realistically at your disposal
Example: Happiness: Set clearer boundaries and set aside time to spend with friends and exercising. Viability: Identify and remove any unnecessary recurring expenses. Feasibility: Sign up to do some pre-CPA accounting courses, to allow you to better analyze startup businesses.

Finally, there's one more thing: get feedback. Once you've put this together, find someone you trust - or better yet, multiple people - and talk them through it. The best possible scenario is if a few friends all do this for themselves, give each other feedback, and then iterate.

Good luck! And please give me feedback. It would be fun to turn this into a framework for solidifying life decisions and more concretely describing the choices and challenges you have, in order to make them easier to deal with, one task at a time.

· Posts · Share this post

 

Reflecting on a hard left turn career change

Over the last eighteen months I’ve helped source, interview, select and invest in 24 startups. As Director of Investments for Matter Ventures in San Francisco, twelve of those were my direct responsibility; for the other twelve, I supported my counterpart Josh Lucido in New York City.

Matter is - and continues to be - the best thing I’ve ever done.

The learning curve was immediate and intense, but I had been advising startups and analyzing the space for well over a decade. I had co-founded two, and was the first employee at a third. I’d also run a few things that weren’t technically startups but could have been: an online magazine in 1994 that found itself on the cover CDs of “real” paper magazines, and a social media site that was getting a million pageviews a day in 2002. As an engineer, obviously I’ve built a lot of software - but more than that, I’ve spent every day of my career thinking about, researching, executing and advising on strategy. I love technology, and I love thinking about how to make it better.

But of course, technology isn’t worth anything unless it’s helping someone. The best technology pushes society forward and empowers people with new opportunities. Building new tech for yourself is fun, but it’s not a profession. And it’s just not very satisfying - at least, for me.

It’s been a privilege to get to know hundreds of people who are building ventures to solve real problems for real people. I invested in some, and wished I had room to invest in others. I gave feedback to many more. Most importantly, I was there on the ground with the Ventures we did invest in, helping with everything from fundraising strategy to database normalization. Rather than just writing code, or working on financial documentation, it’s felt like I’ve been able to use every facet of my skills to do this work. It feels good, and meaningful. And although I think it takes years to truly ease into this kind of work, I’m proud of the work I’ve done.

I doubt I’ll ever be an engineer again - at least, not solely. (My role at Matter is my first job since being a barista in college that hasn’t involved writing code in some capacity, but I’ve actually only ever had two pure engineering roles.) I’m certain that I will found my own venture again, and use what I’ve learned to create something that stands the test of time. But for now, I’m delighted to be supportive. Investing turns out to be one of the most satisfying things I’ve ever done (for all kinds of reasons that don’t involve money), and whatever happens in my career, I want to keep doing it.


 

Building trust in media through financial transparency: it's time to declare LPs

One simple thing that media entities could do to improve trust is to publicly declare exactly who finances them, and then in turn declare their backers. This would hold true for privately-owned companies, trusts, crowdfunded publications, and new kinds of media companies operating on the blockchain and funded with ICOs.

VC-funded media companies - like Facebook, which is a media company - would declare which entities own how much of them. As it happens, Facebook is publicly-traded, so must already do this. But it's rare for VC firms to talk about their Limited Partners - the people and organizations who put money into them. We have no idea who might have an interest in the organizations on Facebook's cap table.

This is important because LPs decide which funds to invest in based on their goals and strategy. It's clear that an LP's financial interests may be represented through a fund that they invest in, but it's equally plausible for their political and other strategic interests to be represented as well.

To be specific, we know that socially-minded LPs invest in double bottom line impact funds that strive to make measurable societal change as well as a financial return. It seems reasonable, then, that some LPs might seek to promote significantly more conservative goals. In the current climate, imagine what a Kremlin-connected Russian oligarch might want to achieve as an LP in a US fund. Or a multinational oil company, the NRA, or In-Q-Tel.

The same goes for crowdfunded ventures. What happens if a contributor to a blockchain-powered media startup is the Chinese government, for example? Or organized criminals? It would be hard to tell from the blockchain itself, but understanding who made significant contributions to a publisher is an important part of assessing its trustworthiness.

While it's fairly easy to figure out which venture firms have invested in a media company, those same firms usually have a duty of privacy to their LPs, so it's rare that we get to know who they are. We know that media is the bedrock of democracy. In order to determine who is shaping the stories we hear that inform how we act as an electorate, I think we need to start following the money - and wearing our influences on our sleeves.

(For what it's worth, Matter Ventures, the media startup accelerator that I work at, publicly declares its partners on its homepage.)


 

What you're proud of

I've always struggled with resumés.

The paper, career-orientated version of my life is one-dimensional at best. Here's what it looks like, more or less:

Built one of the first local classifieds websites. Graduated with an honors degree in Computer Science. Worked in educational technology at the University of Edinburgh. Co-founded a startup and an influential open source community. Worked for the Saïd Business School at the University of Oxford. Was CTO at Latakoo, a video transfer startup for newsrooms. Became Geek in Residence at the Edinburgh Festivals. Co-founded a startup and an open source publishing platform. Worked in engineering at Medium. Became Director of Investments (San Francisco) at Matter Ventures.

I'm proud of those things, for sure, but none of this really describes who I am. Even if I added clubs, programs, or volunteering, it would remain a very transactional list. I don't think the people who know me best would even recognize me in it. Where is the human behind the jobs?

That's what I wonder every time I look at a LinkedIn profile or receive a resumé as part of a hiring process.

Traditional resumés also do a grave disservice to people who have had a more eclectic journey. It's often seen as negative if you've tried a bunch of things that aren't quite a linear career progression. I don't think that's the owner's fault: everyone walks their own journey, which is a combination of luck, opportunities, creativity, and highly emotional decisions that are a product of their circumstances. But those factors, that underlying humanity, are completely lost on the page.

I wish resumés told a story. I want to know the narrative of a person. The why is often more important than the where. Not why did I take this job?, but why do I make the decisions I do? What motivates me?

And most of all: what am I really proud of? For me, it runs the gamut:

I'm proud of moving to California to be closer to my mother when she got sick, and having to be kicked out of the ICU because I wouldn't leave her side. I'm proud of building an online community that was a safe space for teenagers to come out. I'm proud of not being money-driven. I'm proud of financially supporting social justice organizations like Planned Parenthood and the SPLC. I'm proud of a short story I wrote a couple of years ago. I'm proud of cooking my Oma's Indonesian recipes and helping them live on. I'm proud of refusing to fall into the trap of traditional masculinity. I'm proud of always working mission-driven jobs. I'm proud of my fundamental belief that everybody is connected. I'm proud of my terrible puns.

All of these things are much more me. They don't fit on a resumé, but they also don't fit on a social media profile. They're also not just things I've made or organized; some are just characteristics, positions, or actions. But, together with the work I've done and other things I've made, they form a more three-dimensional picture.

I wish there was a place where I could read the story of a person. Everybody's journey is so different and beautiful; each one leads to who we are. It would be the anti-LinkedIn. And because you wouldn't "engage with brands", it would be the anti-Facebook, too. Instead, it would be a record of the beauty and diversity of humanity, and a thing to point to when someone asks, "who are you?"


 

Becoming more interested in ICOs

I started looking at blockchain from a position of extreme skepticism. Over time, mostly thanks to friends like Julien Genestoux and the amazing team over at DADA, I've come to a better understanding.

I've always been interested in decentralization as a general topic, of course - the original vision of Elgg had federation at its core, which is something I experimented with in Known as well. I'm also an active Mastodon supporter. It just took me a lot longer than it should have to see blockchain's potential to actually bring those ideas about - mostly because of the very broey, Wall Street veneer of that scene. I don't need to be associated with the modern-day Gordon Gekkos of the world; that's not what I went into technology to do.

What I did go into technology to do is empower people. I want to connect people together and amplify underrepresented communities. I want to help people speak truth to power. And I want to help create a fairer, more peaceful world. Speak to many founders from the early era of the web and they'll say the same thing.

By decoupling communications from central, controlling authorities, decentralization has the potential to do that. For example, the drag community was kicked off Facebook en masse because they weren't using their government-sanctioned names; that couldn't happen in a decentralized system. On the other hand, it's almost impossible to flag problematic content in such a system, so it could also allow marginalized voices to become even more marginalized with no real recourse.

But ICOs are really interesting. There is a well documented demographic bias in venture capital: it's significantly easier for well-connected, upper middle class, straight white men to receive funding. That's because most funding comes via existing connections; reaching out to investors cold is frowned upon and rarely works. The result is that only people who have connections get funding (except at places like Matter and Backstage that explicitly have an open application policy).

ICOs might be a different story. They are (theoretically) legal crowdfunding mechanisms that allow anyone to raise money, potentially from anyone - without diluting ownership of the company. Assuming you can pull it off (which is likely also dependent on having the right connections), you could potentially raise tens of millions of dollars without having to prostrate yourself to Sand Hill Road. It's potentially very liberating.

But I need help understanding some of the mechanics - and I suspect the community in general does, too. 

In a traditional venture relationship, investors don't just bring money. They also bring expertise, connections, ideas, and sometimes even a shoulder to cry on. Your investors almost become like cofounders, and you build a relationship that lasts for many years.

In an ICO relationship, it seems to me that the incentive is for investors to dump their tokens almost immediately. You put your money into a presale, you wait for the price to go up, and then you immediately sell, because you don't know what's going to happen in the future. The good news is that you have your presale takings, but the potential for the post-ICO dump to irreversibly crash the price of your tokens seems high - which would effectively prevent you from being able to raise money in this way again. Not to mention the fact that you don't really have any kind of relationship with any of these investors. It's dumb, fickle money.

Equity is scary - you're giving away part of your company. But it also aligns investors with your mission. You're in the same boat: if you succeed, they succeed. At the extreme end, there's potential for certain kinds of investors to push you into unhealthy growth so they can see a return (sometimes employing toxic practices like installing their own HR team), but in general, I do believe that most investors are in it for the right reasons, and want to see companies succeed on their terms. I don't see an equivalent to the non-monetary side of the equation in the ICO world, and I worry that teams will suffer as a result.

But potentially I just don't understand. Just as my friends helped me get my head around blockchain, I'd love some help with this, too.


 

A day in the life of an engineer turned investor

When I talk to former colleagues about my life at Matter, the thing that surprises them most is how much of my day I spend talking to people. As an engineer, maybe I had three meetings a week; these days it's often eight a day. And I love it: as a former founder, I'm excited to meet with hundreds of people who are all working on things they care deeply about - and I'm excited to find them.

This is what yesterday looked like for me:

7am: I finished a blog post draft that will be published on Thursday. I'm excited about intelligent assistants and the shift to ambient computing, and I was able to back up my piece with sources from an internal investment trend document I wrote.

8am: Headed into work, listening to On the Media, my favorite podcast.

9am: Caught up with email. I'm still figuring out a process for this: I get more than I can really handle, and I don't feel good about sending one-line responses.

9:30am: A standup with the team, talking about the day, and any new developments.

10am: I welcomed a group of foreign journalists who were interested in Matter. We talked for an hour about new trends, how we think about products vs teams (hint: we invest in teams), and whether there's still a future for print.

11am and 11:30am: I jumped on the phone with some founders who wanted to learn more about Matter, and whether it would be a good fit for their companies.

12pm: More email, including outreach to some startups that I'm hoping will apply. There are a lot of people out there who don't think of themselves as working on a media startup, but who are exactly what we're looking for, and who could be substantially helped by the Matter program.

1pm: I joined in on a workshop with our Matter Eight teams, thinking about how to pin down the top-down trends that make their startups good investments. Key question: why is now the right time for this venture? Our Directors of Program are, frankly, geniuses at helping people think their way through these kinds of questions, and I'm always excited to learn from them.

2pm: I sat down with the CEOs of one of our portfolio companies to give them some feedback on how they're describing their venture to investors.

3pm: I spoke to another founder who didn't join Matter, but wanted to give me an update about where they were. It's always exciting to hear about how a team has progressed.

3:30pm: I took an audit of our application process on the web. Some applicants drop off while they're filling in the form, and I wanted to know where that might be happening. At the same time, I did some SEO work on the website. (SEO work follows me in every role, wherever I go.)

4pm: I have a personal goal of reaching out to at least five startups a day - so I spent more time doing research and uncovering both communities to visit and events to attend, as well as individual startups that I would love to see join the program.

5pm: Facilitated introductions for some portfolio founders who wanted to meet certain investors. I always do double blind introductions, asking the investors first if they want to connect. Then I turned to going over our applicants, reading through their decks, and doing some research on their markets and founders.

7pm: I went home to eat.

8pm: I caught up on my RSS subscriptions, reading about the various industries and founders I'm interested in.

There's no time for coding anymore - but there's a lot to do, and I couldn't be happier to support these amazing founders. If that's you, applications are open now.


 

How we run the Matter application process using Typeform, AirTable, Zapier and Slack

Applications for Matter Nine are open. It's my job - together with my New York City counterpart, Josh Lucido - to run the process, source candidates, and find the twelve teams that will walk through our San Francisco garage door on August 13.

We get many hundreds of applications for every class, which almost all arrive via our website. The trick is to ensure that everyone is handled fairly, robustly, and with transparency internally to the team. Nothing happens based on a whim, and nobody can fall through the cracks.

Inspired by Nick Grossman's piece about how Union Square Ventures ran their analyst application process, I thought it might be interesting to show off how we're using a collection of tools to drive our Matter accelerator application process.

The application form

The entire application to our accelerator takes place on a single form. We don't ask for a video, although we do want to see links to external resources like your website - and we definitely want to see a deck.

We've used Typeform to power our application form for years. The interface is both simple and pleasant to use. For a while, we had it embedded on our site, but a few users reported that the embed didn't work well on mobile devices, so I decided to link directly to the form instead.

Although the form is designed to be quick to fill in, we ask for a lot of information that will be useful to us as we make our decisions. (It's early stage, so these answers are more than likely imperfect, and that's fine.) Do you know who your user is? Who is the team, and can you execute? What is the mission, and why is that important? What are the trends that make this the right time to start this venture? How do you think you'll make money? We also ask diversity and inclusion questions to help us track our progress on our goal to build a more diverse and inclusive kind of startup community.

All of this data is used to make decisions in the sourcing process individually. It's also used in aggregate to examine trends in the startups that apply to us, and to help us figure out where the gaps in our sourcing might be, as well as how to iterate our process.

So storing it in a way that can be analyzed easily is vital. I don't have time to write my own scripts, and the investments team shouldn't need to have a computer science degree or know how to code in order to do this.

Luckily, AirTable exists.

The database

AirTable looks like a spreadsheet (at least, by default), but is much more like a database. Datasets are split up into "bases", which each contain "tables". Each table in a base can reference each other. And while a traditional database might have field types like text and integers, AirTable adds file-sharing, images, tagging, spreadsheet-style formulae, and a lot more.

Our ecosystem base has two core tables: People and Companies. These contain all the people and all the companies in Matter's ecosystem; not just those who have come through the application process.

To that, we add Applications and Assessments. Almost every question from our form is represented here. For example, we use a tag field (technically a "multi-select") for the areas of focus for the venture, a text field for a link to the deck, and long-text for the qualitative questions.

Our form asks about each member of the team, and these are represented in the People table. Similarly, the startup itself is added to the Company table. Each Application links to a Company, which in turn links to several People. That way, if a company applies to several classes, we can easily see each of them, and see how the company has evolved from one application to the next.
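As a rough sketch of what that structure looks like programmatically (the table and field names here, and the placeholder record ID, are illustrative assumptions rather than our actual schema), creating a linked Application record via AirTable's REST API might look something like this:

```python
import json

# Sketch only: "Company", "Deck", and "Status" are assumed field names,
# and the record ID below is a placeholder.
def application_payload(company_record_id, deck_url, status="Inbox"):
    # In AirTable's REST API, linked-record fields take a list of record IDs,
    # which is how an Application points back at its Company.
    return {"fields": {
        "Company": [company_record_id],
        "Deck": deck_url,
        "Status": status,
    }}

payload = application_payload("recXXXXXXXXXXXXXX", "https://example.com/deck.pdf")
body = json.dumps(payload).encode("utf-8")
# POSTing this body to https://api.airtable.com/v0/<baseId>/Applications with
# an Authorization header would create the record (not executed here).
```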

Because AirTable allows us to view a table using a Kanban view, we can easily create a view that starts applications in Inbox and lets us drag them to Under Consideration, Invite for Pitch, and so on. It looks like this (I've hidden our actual applicants, and there are closer to 15 statuses in total):

[Screenshot: Kanban board of application statuses]

For every single startup that applies, we assess the applicant using a special set of questions that we also use in our Design Reviews throughout the program itself. The answers to these questions get stored in the Assessment table, which links to the Application table. AirTable lets us structure this as a form, which I keep linked from my browser bookmarks tab:

[Screenshot: the assessment form]

(This is a subset of the questions.)

To assess an incoming application, and at each stage of the application process, each reviewer's feedback is captured on the form, which is then recorded in AirTable. The investment team meets every week to decide who to advance through the process, based on the feedback.

Connecting Typeform to AirTable (and letting us know about it)

I built a Zapier zap to automatically translate incoming applications from Typeform into AirTable (as well as to notify us in a special investments-incoming channel in Slack).

It looks at the company in the application; if it doesn't already exist in AirTable, it builds a new entry in Companies. Otherwise, it updates the existing one.

It looks at each individual in the startup; if they don't already exist in AirTable, it builds new entries in our People table. Otherwise, it updates the existing ones.

And finally, it always creates a new Application entry, sets the Status to Inbox, and sends a summary of the information to Slack, so we're immediately notified that something new has come in.
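For illustration, here's that branching logic modeled in plain Python, with in-memory dicts standing in for the AirTable tables (the field names and the Slack summary format are assumptions, not the zap's actual configuration):

```python
# In-memory stand-ins for the AirTable Companies, People, and
# Applications tables.
companies, people, applications = {}, {}, []

def ingest_application(form):
    """Upsert the company and each founder, then file a new application."""
    # Upsert the company: create it if it's new, update it otherwise.
    company = companies.setdefault(form["company"], {"name": form["company"]})
    company.update(form.get("company_fields", {}))
    # Upsert each founder into the People table, keyed by email.
    for founder in form["founders"]:
        person = people.setdefault(founder["email"], {})
        person.update(founder)
    # Always create a fresh Application entry with Status = Inbox.
    applications.append({"company": form["company"], "status": "Inbox"})
    # The summary string that would be posted to Slack.
    return f"New application: {form['company']}"

msg = ingest_application({
    "company": "Example News Co",
    "founders": [{"email": "founder@example.com", "name": "Ada"}],
})
```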

In summary

We can now track every application for every company, including all our assessment notes, from a simple interface that also allows us to perform operations on the quantifiable information we capture. From this, we could theoretically create live dashboards that chart our process; we can (and do) also create static summaries of how our applications pool breaks down across themes, stages, team skills, intersectional diversity and inclusion statistics, and more.

I wish some of these steps were easier (for example, if AirTable's own forms were prettier, we might not need Zapier at all). And there are definitely things we could improve. Still, it's a robust system that allows us to run a very competitive application process in a data-driven way with a small team.

In the future, this structure will allow us to add new interfaces - for example, why not apply to Matter with a conversational chatbot? - that talk to this AirTable back-end. We can also easily perform experiments with the application process to make it more streamlined, brand application forms for specific events or partnerships, or better support certain communities.

In particular, I've been incredibly impressed with AirTable, and I've started recommending it to everyone. I'd love to hear your experiences.

And of course: Applications are open. Join Matter Nine today.


 

Blogging and newsletters

I'm doing a lot more writing on my own blog this year. Writing has always helped me think through a problem space and socialize ideas; it's a good way to get feedback on something you're thinking about early. Unlike an article or a project, a blog is deliberately imperfect, and it works best if you do it regularly.

I'm not a lone blogger. There's something nice about reading my news feed away from the noise of social media (and on a reverse-chronological stream rather than somebody's algorithm), and I find myself learning about things I never would have otherwise discovered. It's a struggle - social media absolutely is addictive - but it's really worth it.

Here's my reading stack:

I use NewsBlur to power my subscriptions. It's $36 a year, and absolutely worth it. One great feature is that it allows you to subscribe to email newsletters: you can create a Gmail filter to forward your newsletters to a special address, and essentially keep them confined to a special inbox.

Because the reader ecosystem is pretty open, a few native apps are available. I've settled on Reeder for my Mac and iPhone; it's slick and gets out of my way. (The mobile app is $4.99 and the desktop app is $9.99.)

I'm trying my best to only follow individuals for now, but I expect I'll start adding some particularly insightful corporate / startup blogs over time. I'd love to hear recommendations.

Finally, I've noticed that Fred Wilson (who has blogged every single day for years) also allows readers to subscribe to his posts via email. If I'm going to continue to write on a regular basis, this seems like a pretty good idea; sadly the RSS ecosystem, as wonderful as it is, is very far from being mainstream at this point. He uses Feedblitz, and I'm thinking of giving that a try too. I'd love to hear whether you'd find that useful.


 

Decentralized paid subscriptions for independent publishers

A real problem that needs to be solved is making it easier to subscribe to independent publishers putting out great, regular content. Online magazines, blogs, podcasts, etc. Independence and autonomy are important, but discovery and ease of use are too.

RSS is a pretty ancient technology, but it's in far more use than you'd think. For example, every podcast runs on RSS. There are a lot of sites that use MRSS behind the scenes, to power portals like AOL News, and to ingest multimedia content in back-end systems. Readers are largely gone, but not the backbone technology.

What RSS is missing is authentication. Knowing who the user is would allow for more personalized experiences, and it would also allow publishers to add business models to monetize their distributed content.

So what if we added OAuth 2.0 as a really simple auth layer, so that content providers could accurately assess who was requesting a feed, podcast, etc?

Add three new tags to the RSS feed:

  • The URI of the OAuth endpoint
  • A human-readable URI where an authenticated user can pay to subscribe or manage their account
  • Whether this feed contains premium content or not (maybe a label for the content level - "free" / "subscribed")
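As a sketch, the extension might look like this inside a feed's channel element (the tag names here are hypothetical - the post proposes the capabilities, not a finished spec):

```xml
<channel>
  <title>Example Premium Podcast</title>
  <!-- Hypothetical extension tags; the names are illustrative only -->
  <authEndpoint>https://example.com/oauth/authorize</authEndpoint>
  <subscriptionUrl>https://example.com/subscribe</subscriptionUrl>
  <contentLevel>subscribed</contentLevel>
  <!-- items follow as usual -->
</channel>
```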

This way, a compatible feed reader / podcast client could tell a user if it's possible to subscribe to get premium content. They could auth the user (possibly allowing them to register with the publisher) and point to a subscription page.

From then on, the reader makes a signed request whenever it looks for the feed. The publisher is responsible for figuring out whether to serve premium content or not based on the user's identity.
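A minimal sketch of the reader side, assuming the publisher issues OAuth 2.0 access tokens and accepts them as Bearer credentials (the URL and token below are placeholders):

```python
import urllib.request

def build_feed_request(feed_url, access_token=None):
    """Build a (possibly authenticated) request for an RSS feed."""
    headers = {"User-Agent": "example-reader/1.0"}
    if access_token:
        # A signed request: the publisher can resolve the token to a
        # subscriber and decide whether to serve premium items.
        headers["Authorization"] = f"Bearer {access_token}"
    return urllib.request.Request(feed_url, headers=headers)

# An authenticated fetch would then be:
# urllib.request.urlopen(build_feed_request(url, token))
req = build_feed_request("https://example.com/feed.xml", "subscriber-token")
```

Anonymous readers simply omit the token and get the free tier; the publisher does the rest server-side.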

The publisher gets to decide which CMS to use, which payment provider to use, how much to charge, and so on - they retain full autonomy. If they want to use Stripe, fine. Bitcoin, whatever. The only major standardization point is authentication itself.

The market is then open to anyone who wants to create a hub for finding content. Publishers might pay the hub to promote their sites, or any number of other business models could emerge. But paid subscriptions are baked into apps and readers, and are totally under the publisher's control.

Everyone gets to have their own website and content model. Everyone gets to have a standard way of pointing to a built-in revenue model, and decide what that is.

Imagine if Apple News, Flipboard, Medium, and maybe even the Facebook news feed, as well as hundreds of independent apps, could all feed directly into independent publisher revenue streams.

Anyway, just a thought I've been having. Thought I'd share.

 

This piece was originally a tweetstorm.


Email me: ben@werd.io

Signal me: benwerd.01

Werd I/O © Ben Werdmuller. The text (without images) of this site is licensed under CC BY-NC-SA 4.0.