
Why it isn't rude to talk about politics (and I think we should be doing it more)

5 min read

It's often said that you shouldn't talk about politics, religion or money. I tend to think those are all part of the same thing: conversations about how the world is, and should be, organized. Anyone who's been watching the American electoral system warm up its engines will be in no doubt that your views and status in any one of those prongs affect the other two. And all are inseparable from the cognitive biases that your context in the world has given you.

So let's restate the maxim: it's rude to talk about the world.


The reason that's most often given is that people might disagree with you. It might start an argument, someone might be offended by your viewpoint, or you might be offended by a deeply-held position from someone else. As the thinking goes, we should try to avoid offending other people, and we shouldn't be starting arguments.

Living in a democracy, I take a different view. Each of us has a different context and different opinions, which we use to inform the votes we cast to elect the government we want, allowing us to help dictate how our communities should be organized. That's awesome, and a freedom we shouldn't take for granted. It's also the fundamental bedrock of being a democratic citizen.

I want to be better informed, so I can cast better votes and be a better citizen. Which means I want to hear different views that potentially challenge my own. If you define offence as a kind of shock at someone else's disregard for your own principles, I want to be offended. I want to know other people's deeply-held beliefs and principles, because they allow me to calibrate mine. I don't exist in a vacuum.

I think the world would be better if we used our freedoms and were more open with our beliefs. The challenge is that it is not always safe to do so. Middle-class politeness is one thing; for millions of Muslims in America, like communists and Jews before them, sharing their beliefs can be life-threatening. For a supposedly democratic nation, America is spectacularly good at stigmatizing entire groups of people.

I'd like to think that this is where the politeness principle comes from, as a kind of protection mechanism for more vulnerable members of our community. I don't think that's the case. I think it's much more to do with maintaining a cultural hegemony, and the harmful illusion that all citizens are united in their beliefs and principles.

Citizens don't all have the same beliefs and principles. This is part of the definition of democracy, and we should embrace it.

Citizens don't all have the same privileges and contexts. As a white, middle-class male, I have privileges that many people in this country are not afforded, and a very secure filter bubble to sit inside. I think it's my duty to listen and amplify beyond the walls of that bubble. Candidates for President of the United States are, in 2015, suggesting that we have "a Muslim problem" in terms that echo the Jewish Question from before the Second World War. Even if you don't believe in advocating for people in ways that don't directly affect you, this directly affects you. It's all about what kind of country we want to be living in. It's all about how it's organized.

It's also about what kind of world we want to be living in. I think it's also my duty, as a citizen of one of the wealthiest nations on earth, to listen and amplify beyond our border walls. Citizens of countries like Iran, Yemen and Burkina Faso are people too, with their own personal politics, religions, hopes and dreams.

We've been given this incredible freedom to talk and advocate, to assemble and discuss, and we should use it.

Yes, there will be arguments. It would be nice to say, "hey, we shouldn't get angry with each other," but these are issues that cut to the core of society. Tone policing these debates is in itself oppressive and undemocratic. And while I'd like to be able to say, "we should have a free and even exchange of ideas, and I won't think less of you for what you believe," that actually isn't true. If you believe that Muslims are second-class citizens, or that the Black Lives Matter movement isn't important, I do think a little worse of you, just as some of you will likely think worse of me for thinking socialism is an okay idea or for not believing in God. We can respect each other as citizens, and respect each other's right to hold opinions. We should still talk. And as dearly held as my beliefs are, I want to know when you think I'm wrong: it's how I learn.

What we shouldn't do is tell people that they should just accept what they're given, and take the world as it is. That's not what being in a democracy is all about, and it's what we do when we tell people to shut up about what they believe.


Is crowdfunding the answer in a post-ad universe?

3 min read


Albert Wenger of Union Square Ventures asks:

How then is journalism to be financed? As I wrote in 2014, I continue to believe that crowdfunding is the answer. Since then great progress has been made by Beaconreader, Kickstarter’s Journalism category, and also Patreon. Together the amounts are still small but it is early days. Apple’s decision to support these adblockers may well help accelerate the growth of crowdfunding and that would be a good thing – I don’t like slow page loads and distracting ads but I will happily support content creation directly (just highly unlikely to do so through micropayments while reading). All of this provides one more reason to support Universal Basic Income – a floor for every content creator and also more people who can participate in crowdfunding.

I've also heard Universal Basic Income argued for as a solution to funding open source projects. I'm not sure I buy it, so to speak - I think it's not fair to assume that content creators should live on a minimum safety net wage. I do strongly believe in a Universal Basic Income, but as a strong safety net that promotes economic growth rather than something designed to be relied on. For one thing, what happens if everyone falls back to a Universal Basic Income? Could the system withstand that, and would the correct incentives be in place?

I love the idea of crowdfunding content. This does seem to put incentives in the correct place. However, when systems like Patreon work well (and they often do), the line between crowdfunding and a subscription begins to blur. When you're paying me whenever I create content, with a monthly cap, and I create content on a regular basis, it starts to look a lot like it's just a monthly subscription. If you pick up enough monthly subscriptions, it starts to look like real money - a thousand people at $10 a month would lead to a very comfortable wage for a single content creator (even in San Francisco and New York).
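As a back-of-the-envelope sketch of that arithmetic (the numbers are the illustrative ones from the paragraph above):

    # Recurring crowdfunding as a de facto subscription.
    subscribers = 1000                  # patrons paying every month
    monthly_fee = 10                    # dollars per patron per month
    monthly_revenue = subscribers * monthly_fee
    annual_revenue = monthly_revenue * 12
    print(monthly_revenue, annual_revenue)  # 10000 120000

That's $120,000 a year for a single creator, before any platform fees - at which point the distinction between crowdfunding and a subscription business is mostly semantic.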

So remove the complexity: recurring crowdfunding is just a subscription with a social interface. Which is fine. For recurring content like news sources and shows, I think subscriptions are the future.

For individual content like movies, albums and books, campaigns begin to make a lot of sense. But crowdfunding isn't magic: funders won't necessarily show up. I've been told that I should really have 33% of my campaign contributions pre-confirmed before the campaign begins, and I suspect that's not possible for most unknown artists.

There needs to be a positive signal of quality. In the old days, there were PR campaigns, which were paid for by record labels and publishing companies. Unless we only want rich artists and established brands to make money making content, we need a great, reliable way to discover new, high-quality independent media. And then we need to be able to make an informed decision about whether we want to buy.

As great as Patreon, Kickstarter, Indiegogo and the others are, we're not there yet. Social media won't get us there alone - at least not as is. But there's money to be made, and I'm convinced that whoever unlocks discovery will unlock the future of content on the web.


Image: Crowdfunding by Rocío Lara


Meet the future of online commerce - and the future of the web.

3 min read


We're all used to content unbundling: very few of us are loyal to magazines, blogs or personal websites any more. We consume content through our social feeds, clicking on articles that people we care about have recommended. Articles are atoms of content, flowing in a social stream, unbundled from their parent publications. Very few of us visit content homepages any more.

Products like Stripe Relay let vendors do the same with commerce. Suddenly you can get products in your social stream, which you can share and comment on, as well as buy right there. There's no need to visit a store homepage: you can find products anywhere on the web, and click to buy wherever you encounter them.

There's no point in vendors having apps: the app experience is handled by the social stream (be it Facebook, Twitter, or something more open). The homepage also becomes significantly less crucial to purchasing, just as it's become much less crucial to serving content. In fact, there's often no need to visit a standalone web page at all, except perhaps to learn more about the product. Even then, you can imagine this extended content traveling along the social stream with the main post, in the same way that Facebook's Instant Articles become part of their app.

It's no accident that Google and Twitter are creating an open source version of instant articles. Facebook's version is a proof of concept that shows the way: websites are not destinations any longer. The social stream has become a new kind of browser, where users navigate through social activities rather than thematic links.

Social streams used to be how we discovered content on web pages. Increasingly, the content will live in the stream itself.

A battle is raging over who will own this real estate, and Facebook is winning it hands down. However, that doesn't mean they'll win the war over how we discover information online - there's plenty of precedent in computing for the more open alternative eventually winning. And that's what Google and Twitter are betting on:

Another difference between the Google/Twitter plan and other mobile publishing projects is that Google and Twitter won’t host publishers’ content. Instead, the plan is to show readers cached Web pages — a “snapshot of [a] webpage,” in Google’s words — from publishers’ sites.

The language of the web will still be a crucial part of how we communicate. What's at stake is who controls how we learn about the world, and an open plan allows us to dictate how that content is consumed.

If Facebook is the Apple Mac of social feeds, Twitter has the potential to be the IBM PC. And that may, eventually, be how they succeed.

In the meantime, the web has turned a corner into a new era of social commerce and free-flowing content. There's no turning back the clock; platform owners need to embrace these dynamics and run fast.


"I'd like to introduce you to Elle": four September 11ths

11 min read

September 11, 2001

I was in Oxford, working for Daily Information. My dad actually came into the office to let me know that it had happened - I had been building a web app and had no idea. For the rest of the day I tried to reload news sites to learn more; the Guardian was the only one that consistently stayed up.

The terror of the event itself is obvious, but more than anything else, I remember being immediately hit by the overwhelming sadness of it. Thousands of people who had just gone to work that day, like we all had to, and were trapped in their office by something that had nothing to do with them. I remember waiting for the bus home that day, watching the faces in all the cars and buses that passed me almost in slow motion, thinking that it could have been any of us. I wondered what their lives were like; who they were going home to see. Each face was at once unknowable and subject to the same shared experiences we all have.

I was the only American among my friends, and so I was less distanced from it than them. I remember waiting to hear from my cousin who had been on the New York subway at the time. I'm kind of a stealth American (no accent), so nobody guarded what they said around me. They definitely had a different take, and among them, as well as more widely, there was a sense of "America deserved this". It's hard to accurately describe the anti-American resentment that still pervades liberal Britain, but it was very ugly that day. On Livejournal, someone I followed (and knew in real life) posted: "Burn, America, burn".

One thing I agreed with them on was that we couldn't be sure what the President would do. America had elected a wildcard, who had previously held the record for number of state executions. It seemed clear that he would declare war, and potentially use this as an excuse to erode freedoms and turn America into a different kind of country; we had enough distance to be having those discussions on day one.

There were so many questions in the days that followed. Nobody really understood what had happened, and the official Bush explanations were not considered trustworthy. People brought up the American-led Chilean coup of September 11, 1973, when Salvador Allende had been deposed and killed; had it been symbolically related to that? Al Qaeda seemed like it had come out of nowhere.

Meanwhile, the families of thousands of people were grieving.


September 11, 2002

I had an aisle to myself on the flight to California. The flight had been cheap, and it was obvious that if something were to happen on that day, it wouldn't be on a plane. Airport security at all levels was incredibly high; nobody could afford for there to be another attack.

I had graduated that summer. Earlier that year, my parents had moved back to California, mostly to take care of my grandmother. They were living in a small, agricultural town in the central valley, and I had decided to join them and help for a few months. This is what families do, I thought: when someone needs support, they band together and help them. Moreover, my Oma had brought her children through a Japanese internment camp in Indonesia, finding creative ways to keep them alive in horrifying circumstances. My dad is one of the youngest survivors of these camps, because of her. In turn, taking care of her at the end of her life was the right thing to do.

In contrast to the usual stereotype of California, the central valley is largely a conservative stronghold. When I first arrived, it was the kind of place where they only played country music on the radio and there was a flag on every house. Poorer communities are the ones that disproportionately fight our wars, and there was a collage in the local supermarket of everyone in the community who had joined the army and gone to fight in Afghanistan.

The central valley also has one of the largest Assyrian populations in the US, which would lead to some interesting perspectives a few years later, when the US invaded Iraq.

Our suspicions about Bush had proven to be correct, and the PATRIOT Act was in place. The implications seemed terrible, but these perspectives were strangely absent from the news. But there was the Internet, and conversations were happening all over the social web. (MetaFilter became my go-to place for intelligent, non-histrionic discussion.) I had started a comedy site the previous year, full of sarcastic personality tests and articles that were heavily influenced by both The Onion and Ben Brown's site. Conversations were beginning to happen on the forum there, too.

I flew back to Edinburgh after Christmas, and found a job in educational technology at the university. Dave Tosh and I shared a tiny office, and bonded over talking about politics. It wasn't long before we had laid the groundwork for Elgg.


September 11, 2011

I was sitting at the kitchen table I'm sitting at now. It had been my turn to move to California to support a family member; my mother was deeply ill and I had to be closer to her. I had left Elgg when she was diagnosed: there were disagreements about direction, and I was suddenly reminded how short and fragile life was.

My girlfriend had agreed that being here was important, and had come out with me, but had needed to go home for visa reasons. Eventually, after several more trips, she would decide that she didn't feel comfortable living in the US, or with marrying me. September was the first month I was by myself in my apartment, and I found myself without any friends, working remotely for latakoo in Austin.

Rather than settle in the valley, I had decided that the Bay Area was close enough. I didn't have a car, but you could BART to Dublin/Pleasanton, and be picked up from there. The valley itself had become more moderate over time, partially (I think) because of the influence of the new UC Merced campus, and the growth of CSU Stanislaus, closer to my parents. Certainly, you could hear more than country music on the radio, and the college radio station was both interesting and occasionally edgy.

I grew up in Oxford: a leafy university town just close enough to London. Maybe because of this, I picked Berkeley, another leafy university town, which is just close enough to San Francisco. (A train from Oxford to London takes 49 minutes; getting to San Francisco from Berkeley takes around 30.) My landlady is a Puerto Rican novelist who sometimes gave drum therapy sessions downstairs. If I look out through my kitchen window, I just see trees; the garden is dominated by a redwood that is just a little too close to the house. Squirrels, overweight from the nearby restaurants, often just sit and watch me, and I wonder what they're planning.

Yet, ask anyone who's just moved here what they notice first, and they'll bring up the homeless people. Inequality and social issues here are troublingly omnipresent. The American dream tells us that anyone can be anything, which means that when someone doesn't make it, or they fall through the cracks, it must be their fault somehow. It's confronting to see people in so much pain every day, but not as confronting as the day you realize you're walking right by them without thinking about it.

Countless people told me that they wouldn't have moved to the US; not even if a parent was dying. I began to question whether I had done the right thing, but I also silently judged them. You wouldn't move to another country to support your family? I asked but didn't ask them. I'm sorry your family has so little love.

I don't know if that was fair, but it felt like an appropriate response to the lack of understanding.


September 11, 2014

"I'm Ben; this is my co-founder Erin; and I'd like to introduce you to Elle." Click. Cue story.

We were on stage at the Folsom Street Foundry in San Francisco, at the tail end of our journey through Matter. Over five months, we had taken a simple idea - that individuals and communities deserve to own their own spaces on the Internet - and used design thinking techniques to make it a more focused product that addressed a concrete need. Elle was a construct: a student we had invented to make our story more relatable and create a shared understanding.

After a long health journey, my mother had finally begun to feel better that spring. 2013 had been the most stressful year of my life, by a long way; mostly for her, but also for my whole family in a support role. I had also lost the relationship I had once hoped I'd have for the rest of my life, and the financial pressures of working for a startup and living in an expensive part of the world had often reared their head. Compared to that year, 2014 felt like I had found all my luck at once.

Through Matter, and before that, through the indie web community, I felt like I had communities of friends. There were people I could call on to grab a beer or some dinner, and I was grateful for that; the first year of being in the Bay Area had been lonely. The turning point had been at the first XOXO, which had been a reminder that individual creativity was not just a vital part of my life, but was something that could flourish on its own. I met lovely people there, and at the sequel the next year.

California had given me opportunities that I wouldn't have had anywhere else. It's also, by far, the most beautiful place I've ever lived. Standing on that stage, telling the world what we had built, I felt grateful. I still feel grateful now. I'm lucky as hell.

I miss everyone I left behind a great deal, but any time I want to, I can climb in a metal tube, sit for eleven hours while it shoots through the sky, and go see them. After all the health problems and startup adventures, I finally went back for three weeks last December. Air travel is odd: the reality you step out into supplants the reality you left. Suddenly, California felt like a dream, and Edinburgh and Oxford were immediate and there, like I had never left. The first thing I did was the first thing anyone would have done: I went to the pub with my friends.

But I could just as easily have walked out into Iran, or Israel, or Egypt, or Iraq, or Afghanistan. Those are all realities too, and all just a sky-ride in a metal tube away. The only difference is circumstance.

Just as so many people couldn't understand why I felt the need to move to America, we have the same cognitive distance from the people who live in those places. They're outside our immediate understanding, but they are living their own human realities - and our own reality is distant to them. The truth is, though, that we're all people, governed by the same base needs. I mean, of course we are.

My hope for the web has always been that getting on a plane wouldn't be necessary to understand each other more clearly. My hope for Known was that, in a small way, we could help bridge that distance, by giving everyone a voice that they control.

I often think back to the people I watched from that bus stop. You can zoom out from there, to think about all the people in a country, and then a region, and then the world. Each one an individual, at once unknowable and subject to the same shared experiences we all have. We are all connected, both by technology and by humanity. Understanding each other is how we will all progress together.


Get over yourself: notes from a developer-founder-CEO

11 min read

Known, the company I founded with Erin Jo Richey, is the third startup I've been deeply involved in. The first created Elgg, the open source social networking platform; I was CTO. The second is latakoo, which helps video professionals at organizations like NBC News send video quickly and in the correct format without needing to worry about compression or codecs. Again, I was CTO. In both cases, I was heavily involved in all aspects of the business, but my primary role was tending product, infrastructure and engineering.

At Known, I still write code and tend servers, but my role is to put myself out of that job. Despite having worked closely with two CEOs over ten years, and having spent a lot of time with CEOs of other companies, I've learned a lot while I've been doing this. I've also had conversations with developers that have revealed some incorrect but commonly-held assumptions.

Here are some notes I've made. Some of these I knew before; some of these I've learned on the job. But they've all come up in conversation, so I thought I'd make a list for anyone else who arrives at being a business founder via the engineering route. We're still finding our way - Known is not, yet, a unicorn - but here's what I have so far.


The less I code, the better my business does.

I could spend my time building software all day long, but that's only a fraction of the story. There's a lot more to building a great product than writing code: you're going to need to talk to people, constantly, to empathize with the problems they actually have. (More on this in a second.) Most importantly, there's a lot more to building a great business than building a great product. You know how startup founders constantly, infuriatingly, talk about "hustling"? The language might be pure machismo, but the sentiment isn't bullshit.

When I'm sitting and coding, I'm not talking to people, I'm not selling, I'm not gaining insight and there's a real danger my business's wheels are spinning without gaining any traction.

The biggest mistake I made on Known was sitting down and building for the first six months of our life, as we went through the Matter program. If I could do it again, I would spend almost none of that time behind my screen.


Don't scratch your own itch.

In the open source world, there's a "scratch your own itch" mentality: build software to solve your own problems. It's true that you can gain insight into a problem that way. But you're probably not going to want to pay yourself for your own product, so you'd better be solving problems for a lot of other people, too. That means you need to learn what people's itches are, and most importantly, get over the idea that you know better than them.

Many developers, because they know computers better than their users, think they know problems better than them, too. The thing is, as a developer, your problems are very different indeed. You use computers dramatically differently to most people; you work in a different context to most people. The only way to gain insight is to talk to lots and lots of people, constantly.

If you care passionately about a problem, the challenge is then to accept it when it's not shared with enough people to be a viable business. A concrete example: we learned the hard way that people, generally, won't pay for an indie web product for individuals, and took too long to explore other business avenues. (Partially because I care dearly about that problem and solution.) A platform for lots of people to share resources in a private group, with tight integration with intranets and learning management systems? We're learning that this is more valuable, and more in need. We're investigating much more, and I'm certain we'll continue to evolve.


Pick the right market; make the right product. Make money.

Learning to ask people for money is the single hardest thing I've had to do. I'm getting better at it, in part thanks to the storytelling techniques we picked up at Matter.

Product-market fit is key. It can't be overstated how important this is.

Product-market fit means being in a good market with a product that can satisfy that market.

The problem you pick is directly related to how effectively you can sell - not just because you need to be solving real pain for people, but because different problems have different values. A "good market" is one that can support a business well, both in terms of growth and finance. Satisfy that market, and, well, you're in business.

We sell Known Pro for $10 a month: hardly a bank-breaking amount. Nonetheless, we've had plenty of feedback that it's much too expensive. That's partially because the problem we were solving wasn't painful enough, and partially because consumers are used to getting their applications for free, with ads to support them.

So part of "hustling" is about picking a really important problem for a valuable market and solving it well. Another part is making sure the people who can benefit from it know about it. The Field of Dreams fallacy - "if you build it, they will come" - takes a lot of work to avoid. I have a recurring task in Asana that tells me to reach out to new potential customers every day, multiple times a day, but sales is really about relationships, which takes time. Have conversations. Gain insight. See if you can solve their problems well. Social media is fun but virtually useless for this: you need to talk to people directly.

And here's something I've only latterly learned: point-blank ask people to pay. Be confident that what you're offering is valuable. If you've done your research, and built your product well, it is. (And if nobody says "yes", then it's time to go through that process again.)


Do things that don't scale in order to learn.

Startups need to do things that scale over time. It's better to design a refrigerator once and sell lots of them than to build bespoke refrigerators. But in the beginning, spending time solving individual problems, and holding people's hands, can give you insight that you can use to build those really scalable solutions.

Professional services like writing bespoke software are not a great way to run a startup - they're inherently unscalable - but they can be an interesting way to learn about which problems people find valuable. They're also a good way to bootstrap, in the early stages, as long as you don't become too dependent on them.


Be bloody-minded, but only about the right things.

Lots of people will tell you you're going to fail. You have to ignore those voices, while also knowing when you really are going to fail. That's why you keep talking to people, making prototypes, searching for that elusive product-market fit.

Choosing what to be bloody-minded about can be nuanced. For example:


Technology doesn't matter (except when it does).

Developers often fall down rabbit holes discussing the relative merits of operating systems and programming languages. Guess what: users don't care. Whether you use one framework or another isn't important to your bottom line - unless it will affect hiring or scalability later on. It's far better to use what you know.

But sometimes the technology you choose is integral to the problem. I care about the web, and figured that a responsive interface that works on any web browser would make us portable across platforms. This was flat-out wrong: we needed to build an app. We still need to build an app.

The entire Internet landscape has changed over the last six years, and we were building for an outdated version that doesn't really exist anymore. As technologists, we tend to fall in love with particular products or solutions. Customers don't really work that way, and we need to meet them where they're at.


Non-technical customers don't like options.

As a technical person, I like to customize my software. I want lots of options, and I always have: I remember changing my desktop fonts and colors as a teenager, or writing scripts for the chatrooms I used to join. So I wasn't prepared, when we started to have more conversations with real people, for how little they want that. Apple is right: things should just work. Options are complexity; software should just do the right things.

I think that's one reason why there's a movement towards smaller apps and services that just do one thing. You can focus on solving one thing well, without making it configurable within an inch of its life. If a user wants it to work a different way, they can choose a different app. That's totally not how I wish computers worked for people, but if there's one thing I've learned, it's this: what I want is irrelevant.



Run fast. Keep adjusting your direction. But run like the wind. You're never the only person in the race.


Investment isn't just not-evil: it's often crucial.

Bootstrapping is very hard for any business, but particularly tough if you're trying to launch a consumer product, which needs very wide exposure to gain traction and win in the marketplace. Unless you're independently wealthy or have an amazing network of people who are, you will need to find support. Money aside, the right investors become members of your team, helping you find success. Their insights and contacts will be invaluable.

But that means you have to have your story straight. Sarah Milstein puts it perfectly:

Entrepreneurs understandably get upset when VCs don’t grasp your business’s potential or tell you your idea is too complex. While those things happen, and they’re shitty, it’s not just that VCs are under-informed. It’s also that their LPs won’t support investments they don’t understand. Additionally, to keep attracting LP money, VCs need to put their money in startups that other investors will like down the road. VCs thus have little incentive to try to wrap their heads around your obscure idea, even if it’s possibly ground-breaking. VCs are money managers; they do not exist to throw dollars into almost any idea.

Keep it simple, stupid. Your ultra-cunning complicated mousetrap or niche technical concept may not be investable. You know you're doing something awesome, but the perception of your team, product, market and solution has to be that it has a strong chance of success. Yes, that rules some ventures out from large-scale investment and partially explains why the current Silicon Valley landscape looks like it does. So, find another way:


Be scrappy.

Don't be afraid of hacks or doing things "the wrong way". If you follow all the rules, or you're afraid of going off-road and trying something new, you'll fail. Beware of recipes (but definitely learn from other people's experiences).


Most of all: get over yourself, and get over why you fell in love with computers.

If empathy-building conversations and user testing tell you one thing, it's this: your assumptions are almost always wrong. So don't assume you have all the answers.

You probably got into computers well before most people. Those people have never known the computing environment you loved, and it's never coming back. You're building for them, because they're the customer: in many ways the hardest thing is to let go of what you love about computers, and completely embrace what other people need. A business is about serving customers. Serve them well by respecting their opinions and their needs. You are not the customer.

It's a hard lesson to learn, but the more I embrace it, the better I do.


Need a way to privately share and discuss resources with up to 200 people? Check out Known Pro or get in touch to learn about our enterprise services.


On the new web, get used to paying for subscriptions

4 min read

The Verge reports that YouTube is trying a new business model:

According to multiple sources, the world’s largest video-sharing site is preparing to launch its two separate subscription services before the end of 2015 — Music Key, which has been in beta since last November, and another unnamed service targeting YouTube’s premium content creators, which will come with a paywall. Taken together, YouTube will be a mix of free, ad-supported content and premium videos that sit behind a paywall.

At first glance, this seems like a brave new move for YouTube, which has been ad-supported since its inception. But it turns out that ads on the platform actually haven't been doing that well - and have been pulling down Google's Cost-Per-Click ad revenues as a whole.

However, during the company's earnings call on Thursday, Google's outgoing CFO Patrick Pichette dismissed mobile as the reason for the company's cost-per-click declines. Instead it is YouTube's fault. YouTube's skippable TrueView ads "currently monetize at lower rates than ad clicks on Google.com," Mr. Pichette said. He added that excluding TrueView ads -- which Google counts as ad clicks when people don't skip them -- the number of ad clicks on Google's own sites wouldn't have grown as much in the quarter but the average cost-per-click "would be healthy and growing year-over-year."

If Google's CPC ad revenue would otherwise be growing, it makes sense to switch YouTube to a different revenue model. Subscriptions are tough, but consumers have already shown that they're willing to pay to access music and entertainment services (think Spotify and Netflix).

But what if those revenues don't continue to climb? Back in May, Google confirmed that more searches take place on mobile than on desktop. That pattern continues all over the web: smartphones are fast becoming our primary computing devices, and you can already think of laptops and desktops as the minority.

Enter Apple, which is going to include native ad blocking in the next version of iOS:

Putting such “ad blockers” within reach of hundreds of millions of iPhone and iPad users threatens to disrupt the $70 billion annual mobile-marketing business, where many publishers and tech firms hope to generate far more revenue from a growing mobile audience. If fewer users see ads, publishers—and other players such as ad networks—will reap less revenue.

This is an obvious shot across the bow to Google, but it also serves another purpose. Media companies disproportionately depend on advertising for revenue. The same goes for consumer web apps: largely thanks to Google, it's very difficult to convince consumers to pay for software. They're used to getting high-quality apps like Gmail and Google Docs for free, in exchange for some promotional messages on the side. In a universe where web browsers block ads, the only path to revenue is to build your own app.

From Apple's perspective, this makes sense: it encourages more people to build native apps on their platform. The trouble is, users spend most of their time in just five apps - and most users don't download new apps at all. The idea of a smartphone user deftly flicking between hundreds of beautiful apps on their device is a myth. Media companies who create individual apps for their publications and networks are tilting at windmills and wasting their money.

Which brings us back to subscriptions. YouTube's experiment is important, because it's the first time a mass-market, ad-supported site - one that everybody uses - has switched to a subscription model. If it works, and users accept subscription fees as a way to receive content, more and more services will follow suit. I think this is healthy: it heralds a transition from a personalized advertising model that necessitates tracking your users to one that just takes money from people who find what you do valuable. You can even imagine Google providing a subscription mechanism that would allow long-tail sites with lower traffic to also see payment. (Google Contributor is another experiment in this direction.)

If it doesn't work, we can expect to see more native content ads: ads disguised as content, written on a bespoke basis. These are impossible to block, but they're fundamentally incompatible with long-tail sites with low traffic. They also violate the line between editorial and advertising.

Media companies find themselves in a tough spot. As Bloomberg wrote earlier this year:

This is the puzzle for companies built around publishing businesses that thrived in the 20th century. Ad revenue has proved ever harder to come by as reading moves online and mobile, but charging for digital content can drive readers away.

Something's got to give.


Photo by Moyan Brenn on Flickr.


What would it take to save #EdTech?

10 min read

Education has a software problem.

98% of higher education institutions have a Learning Management System: a software platform designed to support the administration of courses. Larger institutions often spend over a million dollars a year on them, once all costs have been factored in, but the majority of people who use them - from educators through to students - hate the experience. In fact, when we did our initial user research for Known, we couldn't find a single person in either of those groups who had anything nice to say about them.

That's because the LMS has been designed to support administration, not teaching and learning. Administrators like the way they can keep track of student accounts and course activity, as well as the ability to retain certain data for years, should they need it in the event of a lawsuit. Meanwhile, we were appalled to discover that students are most often locked out of their LMS course spaces as soon as the course is over, meaning they can't refer back to their previous discussions and feedback as they continue their journey towards graduation.

The simple reason is that educators aren't the customers, whereas administrators have buying power. From a vendor's perspective, it makes sense to aim software products at the latter group. However, it's a tough market: institutions have a very long sales cycle. They might hear about a product six months before they run a pilot, and then deploy a product the next year. And they'll all do it at the same time, to fit in with the academic calendar. At the time of writing, institutions are looking at software that they might consider for a pilot in Spring 2016. Very few products will make it to campus deployment.

There are only a few kinds of software vendors that can withstand these long cycles for such a narrow market. By necessity, they must have "runway" - the length of time a company can survive without additional revenue - to last this cycle for multiple institutions. It follows that these products must have high sticker prices; once they've made a sale, vendors cling to their customers for dear life, which leads to outrageous lock-in strategies and occasionally vicious infighting among vendors.

Why can't educators buy software?

If it would lower costs and prevent lock-in, why don't institutions typically allow on-demand educator purchasing? One reason is what I call the Microsoft Access effect. Until the advent of cloud technologies, it was common for any medium to large organization to have hundreds, or even thousands, of Access databases dotted around their network, supporting various micro-activities. (I saw this first-hand early in my career, as IT staff at the University of Oxford's Saïd Business School.) While it's great that any member of staff can create a database, the IT department is then expected to maintain and repair it. The avalanche of applications can quickly become overwhelming - and sometimes they can overlap significantly, leading to inefficient overspending and further maintenance nightmares. For these and a hundred other reasons, purchasing needs to be planned.

A second reason is that, in the Internet age, applications do interesting things with user data. A professor of behavioral economics, for example, isn't necessarily also going to be an expert in privacy policies and data ownership. Institutions need to be very careful with student data, because of legislation like FERPA and other factors that could leave them exposed to being sued or prosecuted. Therefore, for very real legal reasons, software and services need to be approved.

The higher education bubble?

Some startups have decided to overcome these barriers by declaring that they will disrupt universities themselves. These companies provide Massive Open Online Courses directly, most often without accreditation or any real oversight. I don't believe they mean badly: in theory an open market for education is a great idea. However, institutions provide innumerable protections and opportunities for students that for-profit, independent MOOCs cannot provide. MOOCs definitely have a place in the educational landscape, but they cannot replace schools and universities, as much as it is financially convenient to say that they will. Similarly, some talk of a "higher education bubble" out of frustration that they can't efficiently make a profit from institutions. If it's a bubble, it's one that's been around for well over a thousand years. Universities, in general, work.

However, as much as startups aren't universities, universities are also not startups. Some institutions have decided to overcome their software problem by trying to write software themselves. Sometimes it even works. The trouble is that effective software design does not adhere to the same principles as academic discussion or planning; you can't do it by committee. Institutions will often try and create standards, forgetting that a technology is only a standard if people are using it by bottom-up convention (otherwise it's just bureaucracy). Discussions about features can sometimes take years. User experience design falls somewhere towards the bottom of the priority list. The software often emerges, but it's rarely world class.

Open source to the rescue.

Open source software like WordPress has been a godsend in this environment, not least because educators don't need to have a budget to deploy it. With a little help, they can modify it to support their teaching. The problem is that most of these platforms aren't designed for them, because there's no way for revenue to flow to the developers. (Even when educators use specialist hosting providers like Reclaim Hosting - which I am a huge fan of - no revenue makes its way to the application developers in an open source model.) Instead, they take platforms like WordPress, modify them, and are saddled with the maintenance burden for the modifications, minus the budget. While this may support teaching in the short-term, there's little room for long-term strategy. The result, once again, can be poor user experience and security risks. Most importantly, educators run the risk of fitting their teaching around available technology, rather than using technology to support their pedagogy. Teaching and learning should be paramount.

As Audrey Watters recently pointed out, education has nowhere near enough criticism about the impact of technology on teaching.

So where does this leave us?

We have a tangle of problems, including but not limited to:

  • Educators can't acquire software to support their teaching
  • Startups and developers can't make money by selling software that supports teaching
  • Institutions aren't good at making software
  • Existing educational software costs a fortune, has bad user experience and doesn't support teaching

I am the co-founder and CEO of a startup that sells its product to higher education institutions. I have skin in this game. Nonetheless, let's remove "startups" from the equation. There is no obligation for educational institutions to support new businesses (although they certainly have a role in, for example, spinning research projects into ventures). Instead, we should think about the inability of developers to make a living building software that supports teaching. Just as educators need a salary, so do the developers who make tools to help them.

When we remove startups, we also remove an interest in "disrupting" institutions, and locking institutions into particular kinds of technologies or contracts. We also remove a need to produce cookie-cutter one-size-fits-all software in order to scale revenue independently of production costs. In teaching, one size never fits all.

We also know that institutions don't have a lot of budget, and certainly can't support the kind of market-leading salaries you might expect to see at a company like Google or Facebook. The best developers, unless they're particularly mission-driven, are not likely to look at universities first when they're looking for an employer. The kinds of infrastructure that institutions use probably also don't support the continuous deployment, fail forward model of software development that has made Silicon Valley so innovative.

So here's my big "what if".

What if institutions pooled their resources into a consortium, similar to the Open Education Consortium (or, perhaps, Apereo), specifically for supporting educators with software tools?

Such an organization might have the following rules:

LMS- and committee-free. The organization itself decides which software it will work on, based on the declared needs of member educators. Rather than a few large products, the organization builds lots of small, single-serving tools that do one thing well. Rather than trying to build standards ahead of time, compatibility between projects emerges over time by convention, with actual working code taking priority over bureaucracy.

Design driven. Educators are not software designers, but they need to be deeply involved in the process. Here, software is created through a design thinking process, with iterative user research and testing performed with both educators and students. The result is likely to be software that better meets their needs, released with an understanding that it is never finished, and instead will be rapidly improved during its use.

Fast. Release early, release often.

Open source. All software is maintained in a public repository and released under a very liberal license. (After all, the aim here is not to receive a return on investment in the form of revenue.) One can easily imagine students being encouraged to contribute to these projects as part of their courses.

A startup - but in the open. The organization is structured like a software company, with the same kinds of responsibilities. However, most communications take place on open channels, so that they can at least be read by students, educators and other organizations that want to learn from the model. The organization has autonomy from its member institutions, but reports to them. In some ways, these institutions are the VC investors of the organization (except there can never be a true "exit").

A mandate to experiment. The aim of the organization is not just to experiment with software, but also the models through which software can be created in an academic context. Ideally, the organization would also help institutions understand design thinking and iterative design.

There is no doubt that institutions have a lot to gain from innovative software that supports teaching on a deep level. I also think that well-made open source software that follows academic values rather than a pure profit motive could be broadly beneficial, in the same way that the Internet itself has turned out to be a pretty good thing for human civilization. As we know from public media, when products exist in the marketplace for reasons other than profit, it affects the whole market for the better. In other words, this kind of organization would be a public good as well as an academic one.

How would it be funded? Initially, through member institutions, perhaps on a sliding scale based on the institution's size and public / private status. I would hope that over time it would be considered worthy of federal government grants, or even international support. However, just as there's no point arguing about academic software standards on a mailing list for years, it's counter-productive to stall waiting for the perfect funding model. It's much more interesting to just get it moving and, finally, start building software that helps teachers and students learn.


The Internet is more alive than it's ever been. But it needs our help.

5 min read

Another day, another eulogy for the Internet:

It's an internet driven not by human beings, but by content, at all costs. And none of us — neither media professionals, nor readers — can stop it. Every single one of us is building it every single day.

Over the last decade, the Internet has been growing at a frenetic pace. Since Facebook launched, over two billion people have joined, tripling the number of people who are connected online.

When I joined the Internet for the first time, I was one of only 25 million users. Now, there are a little over 3 billion. Most of them never knew the Internet many of us remember fondly; for them, phones and Facebook are what it has always looked like. There is certainly no going back, because there isn't anything to return to. The Internet we have today is the most accessible it's ever been; more people are connected than ever before. To yearn for the old Internet is to yearn for an elitist network that only a few people could be part of.

This is also the fastest the Internet will ever grow, unless there's some unprecedented population explosion. And it's a problem for the content-driven Facebook Internet. These sites and services need to show growth, which is why Google is sending balloons into the upper atmosphere to get more people online, and why Facebook is creating specially-built planes. They need more people online and using their services; their models start to break if growth is static.

Eventually, Internet growth has to be static. We can pour more things onto the Internet - hey, let's all connect our smoke alarms and our doorknobs - but ultimately, Internet growth has to be tethered to global population.

It's impressive that Facebook and Google have both managed to reach this sort of scale. But what happens once we hit the population limit and connectivity is ubiquitous?

From Vox:

In particular, it requires the idea that making money on this new internet requires scale, and if you need to always keep scaling up, you can't alienate readers, particularly those who arrive from social channels. The Gawker of 2015 can't afford to be mean, for this reason. But the Gawker of 2005 couldn't afford not to be mean. What happens when these changes scrub away something seen as central to a site's voice?

In saying that content needs to be as broadly accessible as possible, you're saying that the potential audience for any piece must be 3.17 billion people and counting. It's also a serious problem for journalism or any kind of factual content: if you're creating something that needs to be as broadly accessible as possible, you can't be nuanced, quiet, or considered.

The central thesis that you need to have a potential audience of the entire Internet to make money on it is flat-out wrong. On a much larger Internet, it should theoretically be easier to find the 1,000 true fans you need to be profitable than ever before. And then ten thousand, and a million, and so on. There are a lot of people out there.

In a growth bubble (yes, let's call it that), everyone's out to grab turf. On an Internet where there's no-one left to join and everyone is connected, the only way you can compete is the old-fashioned way: with quality. With the old-media model - where content is licensed to geographic regions and monopoly broadcasters - necessarily jettisoned, content will have to fight on its own terms.

And here's where it gets interesting. It's absolutely true that websites as destinations are dead. You're not reading this piece because you follow my blog; you're either picking it up via social media or, if you're part of the indie web community and practically no-one else, because it's in your feed reader.

That's not a bad thing at all. It means we're no longer loyal readers: the theory is that if content is good, we'll read and share it, no matter where it's from. That's egalitarian and awesome. Anyone can create great content and have it be discovered, whether they're working for News International or an individual blogger in Iran.

The challenge is this: in practice, that's not how it works at all. The challenge on the Internet is not to give everyone a place to publish: thanks to WordPress, Known, the indie web community and hundreds of other projects, they have that. The challenge is letting people be heard.

It's not about owning content. On an Internet where everyone is connected, the prize is to own discovery. In the 21st century more than ever before, information is power. If you're the way everyone learns about the world, you hold all the cards.

Information is vital for democracy, but it's not just socially bad for one or two large players to own how we discover content on the Internet. It's also bad for business. A highly-controlled discovery layer on the Internet means that what was an open market is now effectively run by one or two companies' proprietary business rules. A more open Internet doesn't just lead to freedom: it leads to free trade. Whether you're an activist or a startup founder, a liberal or a libertarian, that should be an idea you can get behind.

The Internet is not dead: it's more alive than it's ever been. The challenge is to secure its future.


Market source: an open source ecosystem that pays

4 min read

Open source is a transformative model for building software. However, there are a few important problems with it, including but not limited to:

  1. "Libre" has become synonymous with "no recurring license", meaning it's hard for vendors to make money from open source software in a scalable way.
  2. As a result, "open source businesses" are few and far between, except for development shops that provide services on top of platforms that other people have built for free, and service businesses like Red Hat. (Red Hat is the only sizeable open source business.)
  3. Even if the cost to the end user is zero, the total cost to produce and support the software does not go down.
  4. There is a diversity problem in open source, because only a few kinds of people can afford to give their time for free, meaning that open source software misses out on a lot of potential contributions from talented people.

I believe that the core product produced by a business can never be open source. In Red Hat's case, it's services. In Automattic's case, it's Akismet and the ecosystem (WordPress itself is run by a non-profit entity). In Mozilla's case, it's arguably advertising. Even GitHub, which has enabled so much of today's open source ecosystem, is itself a closed-source platform. After all, they need to make money.

Nonetheless, having an open codebase is beneficial:

  1. It gives the userbase a much greater say in the direction of the software.
  2. It allows the software to be audited for security purposes.
  3. It allows the software to be adapted for environments and contexts that the original designers and architects did not consider.

So how can we retain the benefits of being open while allowing for scalable businesses?

One option I've been thinking about combines the mechanics of crowdfunding platforms like Patreon with an open source dynamic. I call it market source:

  1. End users pay a license fee to use the software. This could be as low as $1, depending on the kind of software and the dynamics of its audience. (For example, $1 is totally fair for a mobile app; an enterprise intranet platform might charge significantly more.)
  2. In return, users receive a higher level of support than they would from a free open source project, perhaps including a well-defined SLA where appropriate.
  3. Users also get access to the source code, as with any open source codebase. Participants are encouraged to file issues and pull requests.
  4. Accepted pull requests are rewarded with a share of the pool of license money. Rather than rewarding by volume of code committed - after all, some of the best commits remove code - this is decided by the project maintainers on a simple scale. Less-vital commits are rewarded with a smaller share of the pool than more important commits.
  5. Optionally: users can additionally place bounties on individual issues, such that any user with an accepted pull request that solves the issue also receives the bounty.
  6. The pool is divided up at the end of every month and automatically credited to each contributor's account. (One possible version of the arithmetic is sketched after this list.)
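
For concreteness, here's a minimal sketch of how that monthly division might be computed. This is a hypothetical illustration rather than a spec: the 1-3 weighting scale, the bounty handling, and every name in it are assumptions layered on top of the rules above.

```typescript
// Hypothetical sketch of the monthly "market source" payout. Assumes
// maintainers score each accepted pull request on a simple importance
// scale (say, 1 = minor, 3 = vital); all names here are invented.

interface AcceptedPullRequest {
  contributor: string;
  weight: number;   // maintainer-assigned importance, not lines of code
  bounty?: number;  // optional user-placed bounty on the issue it closes
}

function monthlyPayouts(
  licensePool: number, // total license fees collected this month
  merged: AcceptedPullRequest[],
): Map<string, number> {
  const totalWeight = merged.reduce((sum, pr) => sum + pr.weight, 0);
  const payouts = new Map<string, number>();

  for (const pr of merged) {
    // A contributor's share of the pool is proportional to the weight
    // of their accepted work, plus any bounty on the issues they closed.
    const share = totalWeight > 0 ? (pr.weight / totalWeight) * licensePool : 0;
    payouts.set(
      pr.contributor,
      (payouts.get(pr.contributor) ?? 0) + share + (pr.bounty ?? 0),
    );
  }
  return payouts;
}

// Example: a $1,000 pool split across three accepted pull requests.
// alice ends up with (5/6) * 1000 ≈ $833.33; bob with (1/6) * 1000 + 50 = $216.67.
const payouts = monthlyPayouts(1000, [
  { contributor: "alice", weight: 3 },
  { contributor: "bob", weight: 1, bounty: 50 },
  { contributor: "alice", weight: 2 },
]);
```

The important design decision is that the weight is assigned by maintainers rather than derived from diff size, which keeps the incentive pointed at quality rather than volume.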

For the first time, committers are guaranteed to be compensated for the unsolicited work they do on an open source project. Perhaps more importantly, funding is baked into the ecosystem: it becomes much easier for a project to bootstrap based on revenue, because it is understood by all stakeholders that money is a component.

The effect is that an open source project using this mechanism is a lot like a co-operative. Anyone can contribute, as long as they adhere to certain rules, and everyone receives a share of the revenue from the work they have contributed to.

These dynamics are not appropriate for every open source project. However, they create new incentives to participate in open source projects, and - were they to be successful - would create a way for new businesses to make more secure, open software without committing to giving away the value in their core product.


Two years of being on the #indieweb

2 min read

For the last two years, I haven't directly posted a single tweet on Twitter, a single post on Facebook or LinkedIn, or a photo on Flickr. Instead, I publish on my own site, and syndicate to those services.

If Flickr goes away, I keep all my photos. If Twitter pivots to another content model, I keep all my tweets. If I finally shut my Facebook profile, I get to keep everything I've posted there. And because my site is powered by Known, I can search across all of it, both by content and content type.

My site is Known site zero. It's hosted on my own server, using a MongoDB back-end. I'm also writing 750 words a day on a separate site - kept away from here because this site is mostly about technology, and those pieces are closer to streams of consciousness. Very shortly, though, I'll be able to syndicate from one Known site to another.

The indie web community has created a set of fantastic protocols (like webmention) and use patterns (like POSSE). I'm personally invested in making those technologies accessible to both non-technical and impatient users - partially because I'm very impatient myself.
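
For the curious, sending a webmention is a genuinely small protocol: discover the endpoint the target page advertises, then POST two URLs to it. Here's a deliberately simplified sketch - it only checks the HTTP Link header, where a full implementation would also parse the target's HTML for rel="webmention" links, and the URLs in the usage note are placeholders.

```typescript
// Simplified webmention sender: discover the target's endpoint via its
// HTTP Link header, then POST source and target as form-encoded data.
// A full implementation would also look for rel="webmention" in the
// target's HTML; this sketch assumes a runtime with a global fetch.

async function sendWebmention(source: string, target: string): Promise<void> {
  // Step 1: discovery - ask the target where its endpoint lives.
  const head = await fetch(target, { method: "HEAD" });
  const link = head.headers.get("link") ?? "";
  const match = link.match(/<([^>]+)>\s*;\s*rel="?([^"]*\s)?webmention/);
  if (!match) throw new Error("No webmention endpoint in Link header");
  const endpoint = new URL(match[1], target).toString(); // resolve relative URLs

  // Step 2: notification - tell the endpoint that source links to target.
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ source, target }).toString(),
  });
  if (!response.ok) throw new Error(`Webmention rejected: ${response.status}`);
}

// Usage, with placeholder URLs:
// await sendWebmention("https://example.com/my-reply", "https://example.org/a-post");
```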

This is a community that's been very good to me, and I find it really rewarding to participate. I'm looking forward to continuing to be a part of it as it goes from strength to strength.


Let's expand the Second Amendment to include encryption.

3 min read

The German media is up in arms today because both German politicians and journalists were surveilled by the United States. Meanwhile, Germany is being sued by Reporters Without Borders this week for intercepting email communications. Over in the UK, Amnesty International released a statement yesterday after learning that their communications had been illegally intercepted. (Prime Minister David Cameron also declared his intention to ban strong encryption this week.) France legalized mass surveillance in June.

Everyone, in other words, is spying on everyone else. This has profound democratic implications.

From Amnesty International's statement:

Mass surveillance is invasive and a dangerous overreach of government power into our private lives and freedom of expression. In specific circumstances it can also put lives at risk, be used to discredit people or interfere with investigations into human rights violations by governments.


We have good reasons to believe that the British government is interested in our work. Over the past few years we have investigated possible war crimes by UK and US forces in Iraq, Western government involvement in the CIA's torture scheme known as the extraordinary rendition programme, and the callous killing of civilians in US drone strikes in Pakistan: it was recently revealed that GCHQ may have provided assistance for US drone attacks.

It has been shown that widespread surveillance creates a chilling effect on journalism, free speech and dissent. Just the fact that you know you're being surveilled changes your behavior, and as the PEN American Center discovered, this includes journalism. Journalism, in turn, is vital for a healthy democracy. A voting population is only as effective as the information they act upon.

Today is July 3. It seems appropriate to revisit the Second Amendment to the Constitution of the United States, which was passed by Congress in one form and ratified by the States in another:

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed.

The Supreme Court has confirmed [PDF] that this has a historical link to the older right to bear arms in the English Bill of Rights: "That the Subjects which are Protestants may have Arms for their Defence suitable to their Conditions and as allowed by Law." The Supreme Court has also affirmed, multiple times, that the right to bear arms is an individual right.

In 2015, guns are useless at "preserving the security of a free state", and cause inordinate societal harm. Meanwhile, encryption is one of the most important tools we have for preserving democratic freedom - and we already subject it to export controls, treating it as a munition. It seems reasonable, and very relevant, to expand the definition of "arms" in the Second Amendment to include it. Let's take the effort that has been put into allowing individual citizens to own firearms, and finally direct it towards preserving democracy.

While this would protect the democratic rights of US citizens, it would not impact the global surveillance arms race in itself. It would be foolish to only consider the freedom of domestic citizens: Americans are not more important than anyone else. However, considering the prevalence of American Internet services, and the global influence of American policy as a whole, it would be a very good first step.


Just zoom out.

4 min read

Sometimes it's important to step out of your life for a while.

I spent the last week in Zürich, reconnecting with my Swiss family in the area. A long time ago, I named an open source software platform after a nearby town that my family made their home hundreds of years ago: Elgg. I hadn't been back to the town since I was a child, and visiting it stirred echoes of memories that gained new focus and perspective.

I grew up in the UK, more or less, and while I had some extended family there, mostly they were in Switzerland, the Netherlands and the United States. I've always been fairly in touch with my US family, but not nearly enough with my European cousins. Effectively meeting your family for the first time is surreal, and it happens to me in waves.

Growing up as an immigrant, and then having strong family ties to many places, means that everywhere feels like home and nowhere does. I often say that family is both my nationality and my religion. Moving to the US, where I've always been a citizen, feels no more like coming home than moving to Australia, say. Similarly, walking around Zürich felt at once completely alien and like somewhere I'd always been - just like San Francisco, or Amsterdam. My ancestors were textile traders there, centuries ago, and had a say in the running of the city; as a result, our name crops up here and there, in museums and on street corners.

So, Zürich is in the atoms of who I am. So is the Ukrainian town where another set of ancestors fled the Pogroms; so are the ancestors who boarded the Mayflower and settled in Plymouth; so are the thousands of people and places before and since. My dad is one of the youngest survivors of the Japanese concentration camps in Indonesia, who survived because of the intelligence and determination of my Oma. His dad, my Opa, was a prominent member of the resistance. My Grandpa translated Crime and Punishment into English and hid his Jewishness when he was captured as a prisoner of war by the Nazis. My great grandfather, after arriving in New England from Ukraine, was a union organizer, fighting for workers' rights. My Grandma was one of the kindest, calmest, wisest, most uniting forces I've ever known in my life, together with my mother, even in the face of wars, hardship, and an incurable disease.

All atoms. Entire universes in their own right, but also ingredients.

And so is Oxford, the city where I grew up. Pooh Sticks from the top of the rainbow bridge in the University Parks; the giant horse chestnut tree that in my hands became both a time machine and a spaceship; the Space Super Heroes that my friends and I became over the course of our entire childhoods. Waking up at 5am to finish drawing a comic book before I went to school. Going to an English version of the school prom with my friends, a drunken marquee on the sports field, our silk shirts subverting the expected uniform in primary colors. Tying up the phone lines to talk to people on Usenet and IRC when I got home every day, my open window air cooling my desktop chassis. My friends coming round on Friday nights to watch Friends, Red Dwarf and Father Ted; LAN parties on a Saturday.

In Edinburgh, turning my Usenet community into real-life friends. Absinthe in catacombs. Taking over my house on New Year's Eve, 1999, and having a week-long house party. Walking up Arthur's Seat at 2am in a misguided, vodka-fuelled attempt to watch the dawn. Hugging in heaps and climbing endless stairs to house parties in high-ceilinged tenements. Being bopped on the head with John Knox's britches at graduation. The tiny, ex-closet workspace we shared when we created Elgg, where the window didn't close properly and the smell of chips wafted upwards every lunchtime. And then, falling in love, which I can't begin to express in list form. The incredible people I have been lucky enough to have in my life; the incredible people who I have also lost.

And California too. We are all tapestries, universes, and ingredients. Works in progress.

If we hold a screen to our faces for too long, the world becomes obscured. Sometimes it's important to step out of your life for a while, so you can see it in its true perspective.


If we want open software to win, we need to get off our armchairs and compete.

9 min read

The reason Facebook dominates the Internet is that while we were busy having endless discussions about open protocols, about software licenses, about feed formats and about ownership, they were busy fucking making money.

David Weinberger writes in The Atlantic:

In the past I would have said that so long as this architecture endures, so will the transfer of values from that architecture to the systems that run on top of it. But while the Internet’s architecture is still in place, the values transfer may actually be stifled by the many layers that have been built on top of it.

In short, David worries that the Internet has been paved, going so far as to link to Joni Mitchell's Big Yellow Taxi as he does it. If the decentralized, open Internet is paradise, he implies, then Facebook is the parking lot.

While he goes on to argue, rightly, that the Internet isn't completely paved and that open culture is alive and well, the assumption that an open network will necessarily translate to open values is obviously flawed. I buy into those values completely, but technological determinism is a fallacy.

I've been using the commercial Internet (and building websites) since 1994 - which is a lifetime in comparison to some Internet users, but makes me a spring chicken in comparison to others. The danger for people like us is that we tend to think of the early web in glowing terms: every website felt like it was built by an intern and hosted in a closet somewhere (and may well have been), so the experience was gloriously egalitarian. Anyone could make a home on the web, whether you were a megacorporation or sole enthusiast, and it had an equal chance of gaining an audience. Many of us would like this web back.

Before Reddit, there were Usenet newsgroups. (I'll take a moment to let the Usenet faithful calm down again.) Every September, a new group of students would arrive at their respective universities and get online, without any understanding of the cultural mores that had come before. They would begin chatting on Usenet newsgroups, and long-standing users would groan inwardly as they quietly taught the new batch all about - remember this word? - netiquette.

In September 1993, AOL began offering Usenet access to its commercial subscribers. This moment became known as "the eternal September", because the annual influx of new Internet users became a constant trickle, and then a deluge. There was no going back, and the Internet culture that had existed before began to give way to a new culture, defined by the commercial users who were finding their way online.

"Eternal September" is a loaded, elitist term, used by people who wanted to keep the Internet for themselves. As early web users, rich with technostalgia and a warm regard for the way things were, we run the risk of carrying the torch of that elitism.

The central deal of the technology industry is this: keep making new shit. Innovate or die. You can be incredibly successful, making a cultural impact and/or personal wealth beyond your wildest dreams, but the moment you rest on your laurels, someone is going to eat your lunch. In fact, this is liable to happen even if you don't rest on your laurels. It's a geek-eat-geek world out there.

For many people, Facebook is the Internet. The average smartphone user checks it 14 times a day, which of course means that a lot of smartphone users check it far more than that. In the first quarter of this year, Facebook had 1.44 billion monthly active users. That means that almost 20% of the people on Earth don't just have a Facebook account: they check it regularly. In comparison, WordPress, which is probably the platform most used to run an independent personal website, powers around 75 million sites in total - but Apple's App Store has powered over 100 billion app downloads.

Are all those people wrong? Does the influx of people using Facebook as the center of their Internet experience represent a gargantuan eternal September? Or have apps just snuck up and eaten the web's lunch?

Back in 2011, I sat on a SXSW panel (yes, I'm that guy) about decentralized web identity with Blaine Cook and Christian Sandvig. While Blaine talked about open protocols including webfinger, and I talked about the ideas that would eventually underlie Known, Christian was noticeably contrarian. When presented with the central concepts around decentralized social networking, his stance was to ask, "why do we need this?" And: "why will this work?"

In The Atlantic, David Weinberger references Christian's paper "The Internet as the Anti-Television" (PDF), where he argues that the rise of CDNs and other technologies built to solve commercial distribution problems has meant that the egalitarian playing field we all remember on the web is gone forever. While services like CloudFlare allow more people than ever before to make use of a CDN, it requires some investment - as do domain names, secure certificates, and even hosting itself. (The visual and topical diversity of GeoCities and even MySpace, though roundly mocked, was very healthy in my opinion, but is gone for good.)

For most people, Facebook is faster, easier to use, and, crucially, free.

Rather than solving these essential user problems, the open web community disappeared up its own activity streams. Mailing list after mailing list filled with technical arguments, without any products or actual technical innovation to back them up. Worse, in many organizations, participating in these arguments was seen as productive work, rather than meaningless circling around the void. Very little software was shipped, and as a result, very little actual innovation took place.

Organizations who encourage endless discussion about web technologies are, in a very real way, promoting the death of the open web. The same is true for organizations that choose to snark about companies like Facebook and Google rather than understanding that users are actually empowered by their products. We need to meet people where they're at - something the open web community has been failing at abysmally. We are blindsided by technostalgia and have lost sight of innovation, and in doing so, we erase the agency of our own users.

"They can't possibly want this," we say, dismissively, remembering our early web and the way things used to be. Guess what: yes they fucking do.

This stopped being a game some time ago. Ubiquitous surveillance, diversity in publishing and freedom of the press are hardly niche issues: they're vital to global democracy. A world in which most of our news is delivered to us through a single provider (like Facebook), and where our every movement and intention can be tracked by an organization (like Google) is not where any of us should want to be. That's not inherently Facebook or Google's fault: as American corporations, they will continue to follow commercial opportunities. It's not a problem we can legislate or just code away. The issue is that there isn't enough of a commercial counterbalancing force, and it really matters.

Part of the problem is that respectful software - software that protects a user's privacy and gives them full control over their data - has become political. In particular, "open source" has become synonymous with "free of charge", and even tied up with anti-capitalism causes. This is a mistake: open source and libre software were never intended to be independent from cost. The result of tying up software that respects your privacy with the idea that software should come without cost is that it's much harder to make money from it.

If it's easier to make money by violating a user's autonomy than protecting it, guess which way the market will go?

A criticism I personally receive on a regular basis is that we're trying to make money with Known (which is an open source product using the Apache license). A common question is, "shouldn't an open source project be free from profit?"

My answer is, emphatically, no. The idea behind open source and libre software is that you can audit the code, to ensure that it's not doing something untoward behind your back, and that you can modify its function. Most crucially, if we as a company go bust, your data isn't held hostage. These are important, empowering values, and the idea that you shouldn't make money from products that support them is crazy.

More importantly, by divorcing open software from commercial forces, you actually remove some of the pressure to innovate. In a commercial software environment, discussing an open standard for three years without releasing any code would not be tolerated - or if it was, it would be because that standard was not significant to the company's bottom line, or because the company was so mismanaged that it was about to disappear without trace. (Special mention goes to the indie web community here, for specifically banning mailing lists and emphasizing shipping software.)

The web is no longer a movement: it's a market. There is no vanguard of super-users who are more qualified to say which products and technologies people should use, just as there should be no vanguard of people more qualified than others to make political decisions. Consumers will speak with their wallets, just as citizens speak with their votes.

If we want products that protect people's privacy and give people control over their data and identities - and we absolutely should - then we have to make them, ship them, and do it quickly so we can iterate, refine and make something that people really love and want to pay for. This isn't politics, it's innovation. The business models that promote surveillance and take control can be subverted: if we decide to compete, we can sneak up and eat their lunch.

Let's get to work.


Community is the most important part of open source (and most people get it wrong)

3 min read

This post by Bill Mills about power and communication in open source is great:

Being belittled and threatened and told to shut up as a matter of course when growing up is the experience of many; and it does not correlate to programming ability at all. It is not enough to simply not be overtly rude to contributors; the tone was set by someone else long before your first commit. What are we losing by hearing only the brash?

Bottom line: if you, either as a maintainer or as a community, are telling people to shut up then you're not open at all.

If you make opaque demands of people to test their legitimacy before participating then you're not open at all.

If you require that only certain kinds of people participate then you're not open at all.

The potential of open source is, much like the web, that anyone can participate. On Known, we're really keen to embrace not just developers, but designers, writers, QA testers - anyone who wants to chip in and create great software with us. That's not going to happen if we're unfriendly, or if we project the vibe that only certain kinds of people can play. Donating time and resources to an open project is a very generous act, and one that not everyone can afford. Frankly, as a community we should be grateful that anyone wants to take part.

As a project founder, a lot of that is about leading by example. That means being talkative and open. I get a lot of direct messages and emails from people, and I try to direct them to participate in the IRC channel and the mailing list instead - not just because it allows our conversations to be findable when people have similar questions in the future, but because every single message adds to the positive feedback loop. If there's public conversation going on, and it's friendly, then hopefully more people will feel comfortable taking part in it.

Like any positive communication, a lot of this is related to empathy. I'm pretty shy: what would make me feel welcome to participate in a community? Probably not abrupt messages, terse technical corrections or (as we see in many communities) name-calling. Further to that, explicitly marking the community as a safe space is important. We're one of the few communities to have an anti-harassment policy; I'm pleased to say that we've never had to invoke it. More communities should do this.

Which isn't to say that there isn't more that we can do. There is: we need better documentation, better user discussion spaces, a better showcase for people to show off what they've built on top of Known. We're working on it, but let us know what you think.

And please! Whether you're a writer, designer, illustrator, eager user, or a developer, we'd love for you to get involved.


10 things to consider about the future of web applications

3 min read

  1. Twitter - by far the social network that I use the most - is struggling to break 300 million monthly active users and is not hitting revenue targets. (Contrast with Facebook's 1.44 billion monthly actives.) Even investor Chris Sacca has warned that he's going to start making "suggestions".
  2. Instagram - still a newcomer in many people's eyes - is beginning to send re-engagement emails in response to flagging user growth.
  3. The 2016 US election is apparently going to be huge on Snapchat. Translation: Snapchat is over. The next generation of young users are already looking for something else. Snapchat was released in September 2011.
  4. The Document Object Model - core to how web pages are manipulated inside the browser - is slow, and may never catch up to native apps. We've known that responsiveness matters for engagement for over a decade.
  5. It's possible to build more responsive web apps by going around the DOM. But these JavaScript-based web apps are harder to parse and often can't be seen by search engines (unless you provide a fallback, which requires a lot of extra programming time).
  6. Push notifications - which are core to apps like Snapchat, and possibly the future of Internet applications - are not yet part of the open web platform. Browsers like Chrome are implementing them on a browser-by-browser basis (see the sketch after this list).
  7. Facebook has no HTML fallbacks, renders almost entirely in JavaScript and lives off push notifications. Twitter has HTML fallbacks, is very standards-based, uses push notifications but also SMS and email, and is generally a good player (with respect to the web, at least, although it's less good at important features like abuse management). Facebook is kicking Twitter's ass.
  8. The thing that may save Twitter? Periscope, a native live video app that is highly responsive.
  9. Users have stopped paying for apps, and instead opt for free apps that have in-app purchases, so they can try before they buy. We're a long way off having a payments standard for the web.
  10. There's no way to transcode video in a web browser, which means uploading video via the web is effectively impossible on most mobile connections. (Who wants to sit and wait for a 1GB file to upload, even on an LTE connection?) Meanwhile, the Web Audio API saves WAV files, rather than some other, more highly-compressed formats you may have heard of. Similarly, resampling images is difficult - possible, but clumsy, as the canvas sketch after this list shows. In other words, while the web has been optimized for consumption (albeit in a slower way than native apps, as we've seen), it has a long way to go when it comes to letting people produce content, particularly from mobile devices.
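
To make item 6 concrete, here's roughly what subscribing a page to push notifications looks like where the Push API exists. Treat it as a sketch under assumptions: the API is not standardized across browsers, and "/sw.js" is a placeholder path for a service worker script.

```typescript
// Sketch: subscribing a page to push notifications where the Push API
// exists (Chrome, at the time of writing). None of this is portable
// across browsers yet, and "/sw.js" is a placeholder service worker path.

async function subscribeToPush(): Promise<void> {
  if (!("serviceWorker" in navigator)) {
    console.log("No service worker support here; push is unavailable.");
    return;
  }

  // A service worker is what receives pushes when the page is closed -
  // that's the difference between push and mere polling.
  await navigator.serviceWorker.register("/sw.js");
  const registration = await navigator.serviceWorker.ready;

  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true, // Chrome requires each push to show a notification
  });

  // The endpoint is the vendor-run URL your server posts messages to.
  console.log("Push endpoint:", subscription.endpoint);
}
```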
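
And on item 10: resampling an image in the browser can be done with a canvas, it's just clumsy next to what native APIs offer. A rough sketch, assuming canvas.toBlob support; the JPEG quality and width cap are arbitrary choices for illustration.

```typescript
// Sketch: client-side image resampling with a canvas, so a phone photo
// can be shrunk before upload rather than pushed over a mobile connection
// at full size. Assumes canvas.toBlob support; the JPEG quality (0.8)
// is an arbitrary choice.

function resizeImage(file: File, maxWidth: number): Promise<Blob> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      URL.revokeObjectURL(img.src); // release the temporary object URL
      const scale = Math.min(1, maxWidth / img.width);
      const canvas = document.createElement("canvas");
      canvas.width = Math.round(img.width * scale);
      canvas.height = Math.round(img.height * scale);

      const ctx = canvas.getContext("2d");
      if (!ctx) return reject(new Error("No 2D canvas context"));
      ctx.drawImage(img, 0, 0, canvas.width, canvas.height);

      // Re-encoding as JPEG at reduced quality is where the bytes are saved.
      canvas.toBlob(
        (blob) => (blob ? resolve(blob) : reject(new Error("Encoding failed"))),
        "image/jpeg",
        0.8,
      );
    };
    img.onerror = () => reject(new Error("Could not decode image"));
    img.src = URL.createObjectURL(file);
  });
}
```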

What does all of this mean?

I don't mean to be pessimistic, but I think it's important to understand where users are at. The people making the web aren't always the people using it, and there's a serious danger that we find ourselves trying to remake the platform we all enjoyed when we first discovered it.

Instead, we need to make something new, and understand that if we're building applications to serve people, the experience is more important to our users than our principles.

All of these things can be solved. But while we're solving them at length, native app developers are going off and building experiences that may become the future of the Internet.