A long time ago on an #indieweb far far away …

Ben Werdmüller

Why WordPress's new Calypso interface is genius

3 min read

Matt Mullenweg just introduced a new management interface for WordPress:

Today we’re announcing something brand new, a new approach to WordPress, and open sourcing the code behind it. The project, codenamed Calypso, is the culmination of more than 20 months of work by dozens of the most talented engineers and designers I’ve had the pleasure of working with (127 contributors with over 26,000 commits!).

How does WordPress, a twelve-year-old server-side product, compete with new, beautiful publishing services like Medium? And how does Automattic grow its $1.16bn valuation?

One of the biggest problems with self-hosted software has been the technical barrier. By now, many users are comfortable with installing an application from cPanel or, maybe, FTPing files to some shared web space. But it's hard - and these approaches only really work with relatively old-school PHP-based software.

New, evented server-side platforms like Node allow you to build completely new kinds of experiences, but installing them is beyond the reach of most self-hosted users.

So, first, WordPress introduced a core API, using best practices from the modern web, making it far easier to publish third-party client applications.

And then they introduced Calypso: a completely new administration interface, based on Node and React. It's open source and works with any WordPress site, but it requires a WordPress.com account. It also uses the WordPress.com servers to power a new reader interface. Effectively, if you want to have a superior reading, writing and administration experience on WordPress, you need to use their service.
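To make that architecture a little more concrete, here's a minimal sketch of what a third-party client can do against the core API - just reading recent posts over HTTP and JSON. It's illustrative only: example.com stands in for any WordPress site with the REST API enabled, and Calypso itself is a far richer Node and React client built on the same idea.

```python
# Minimal sketch: a third-party client reading posts through the
# WordPress REST API. "example.com" is a placeholder for any site
# with the API enabled; Calypso is a much richer client built on the
# same principle of talking to WordPress purely through an API.
import requests

response = requests.get(
    "https://example.com/wp-json/wp/v2/posts",
    params={"per_page": 5},  # the five most recent posts
    timeout=10,
)
response.raise_for_status()

for post in response.json():
    # Each post arrives as JSON; titles come back as rendered HTML.
    print(post["id"], post["title"]["rendered"])
```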

In his post, Matt adds:

With core WordPress on the server and Calypso as a client I think we have a good chance to bring another 25% of the web onto open source, making the web a more open place, and people’s lives more free.

I think that has the potential to be true. The new interface is incredibly fast, beautiful, and functional - and you can continue to own your data on your own server, if you want to. But this is also a rebuttal to anyone who thinks that everything should sit on your own server. With this change, WordPress is now, at least in part, a centralized service - albeit one where you get to choose where your data is stored.

Or to put it another way: WordPress powers 25% of the web, and Calypso is a strong step in the direction of putting all of that under the control of services run by Automattic. I don't think that's bad at all: I want both WordPress and Automattic to be wildly successful, and I see this as a smart way to maintain and grow their position.

My expectation: we'll start to see more examples of this data-interface separation, where the logic and data will sit wherever you want, and the beautiful apps and interfaces will be powered by centralized services. Architecturally, it makes sense. And it's about time open source moved away from its limitations and built the best possible user interfaces it can.

Let in the refugees. How we respond to them is a reflection of who we are.

4 min read

George Packer in the New Yorker this week:

A lot of people in this country are disgracing themselves this week. They include politicians of both parties—though many more Republicans than Democrats—and all regions. Their motives vary: deep-seated bigotry, unreasoning fear, spinelessness, opportunism, or some unholy mix of them all.

They say you only really know the true nature of someone's character in a crisis. Similarly, you only know the character of your country when people are in need. For citizens of the US and the UK - the two major countries I've called "home" - there has been a lot to be ashamed of.

We have the threat of terrorism, now from ISIS, and atrocities being committed all over the world. We also have a stream of people who are fleeing those same atrocities, in a manner reminiscent of Jewish people fleeing the Nazis before and during the Second World War.

Back then, both the US and the UK turned down Jewish refugees, sending them back to their deaths in mainland Europe. There were numerous reasons, which you can now hear by watching the news, as they are parroted by today's politicians as reasons we shouldn't accept refugees from Syria and Iraq.

Of course, they maintain that this is a different debate. It's not. As Josh Zeitz writes in Politico:

In short, most of the elements that conservatives like David Frum cite as differentiating factors between now and then—fear of refugee violence, fear of their inability or desire to assimilate, concern over their economic dependence, suspicion of their ideological alienation and radicalism —were in fact central to the debate over admitting Jewish refugees in the 1930s.

Considering today's refugees in the same way does not diminish the plight of the Jews in the Second World War, or in any way lessen the horrors of the Holocaust. This week, the United States Holocaust Memorial Museum felt the need to release a statement:

Acutely aware of the consequences to Jews who were unable to flee Nazism, the United States Holocaust Memorial Museum looks with concern upon the current refugee crisis. While recognizing that security concerns must be fully addressed, we should not turn our backs on the thousands of legitimate refugees. 

The Museum calls on public figures and citizens to avoid condemning today’s refugees as a group. It is important to remember that many are fleeing because they have been targeted by the Assad regime and ISIS for persecution and in some cases elimination on the basis of their identity.

The humanitarian case is clear, but immigration is also a net economic benefit in both the US and the UK. This leaves racism and xenophobia as the largest reasons to reject these refugees.

If you want to find racist and xenophobic arguments, you often have to look no further than Facebook. Here are two:

I've seen arguments that the immigrants are all fighting-age men, and a secret army is somehow being sent to destroy the US from within, like the plot of a bad 1980s cold war action movie. When I responded with the actual UN demographics of registered refugees, I was told one can't trust the UN because of their treatment of Israel. So far, the logic on display is so loose that I haven't found an adequate way to respond.

I've also seen many arguments which agree with Ted Cruz that we should be screening for Christians. Ironically, Cruz's suggestion itself proves that Christians aren't necessarily more moral than anyone else. "There is no meaningful risk of Christians committing acts of terror," Cruz said, forgetting that the majority of domestic terrorist attacks since 9/11 have been committed by white Christians.

I believe it's important to stand up to these kinds of arguments. For many people, discussing politics online - or around the Thanksgiving table - is taboo. But words matter, and deeds matter. The plight of an entire group of people fleeing terror and death in part depends on us changing the minds of the population, and sending a signal to our representatives that racism and xenophobia will not be tolerated.

Shouting at each other isn't necessarily effective, although we've developed a culture of it (and sometimes voices need to shout to be heard). We need to sit down, particularly with our loved ones, and have reasoned, fact-based conversations that lead to mutual understanding.

Love has to win. People's lives are at stake.

Open issues: lessons learned building an open source business

16 min read

South Park


The first time I ever visited South Park, the tiny patch of grass in downtown San Francisco that the Matter garage would later back onto, Biz Stone bought me a coffee. We circled the park and talked about Elgg, our open source social networking product, and Twitter, the startup he was working on at the time.

The most important piece of advice he gave us was this: hold something back. It's fine to open source your code, to release an open product, but you've got to hold back the thing that will make you valuable.

This was the most important advice we received about Elgg. We ignored it completely.


Six years later: September 2014.

Erin and I stepped down from the Paley Center stage in New York, exhausted. Most accelerators have one demo day. Because Matter is so closely tied to both media and technology, it has two: one at the Folsom Street Foundry in San Francisco, in the heart of SoMa, and the other in New York, the city where most of America's media companies call home.

Known, we told an audience of media luminaries like Jeff Jarvis and industry investors, was a way for post-secondary students to save their coursework, notes and discussions on a site that they controlled. In a world where students are used to delightful apps and beautiful user experiences, the Learning Management Systems used by 93% of institutions are an abomination that actively hinders learning. Worse, when a course is over, all of the discussions and resources that were collaboratively made by the class are deleted forever. With Known, students can publish to their own site and syndicate to those platforms, allowing them to take control of their learning using a beautiful, mobile-first user interface.

Better yet, we told the audience, Known has an open source core. We know that one size doesn't fit all in education. With Known, every single feature has an API endpoint, and every single feature can be customized to fit both the needs of the institution and the student. The first pilot is happening right now, and we're getting great feedback.

Applause. Seven minutes later, we were done. This was day zero for our company: the next day, the hard work would begin.


Skip forward: September 2015.

I looked around the table at Garaje. Most of the alumni from Matter's third class were here, and had great stories to tell: Musey were thriving and building beautiful design apps; LocalData were helping to improve American cities; Louder were preparing their acquisition by Change.org. Over in New York, Stringr were delivering video to more and more news stations.

In some ways, Known was doing well. Our software was powering tens of thousands of websites. We had received great coverage at our launch, and continued to get fantastic feedback from educators all over the world. People were using Known to teach on five continents.

Yet at the same time, we didn't know how we were going to pay rent, and growth was linear. For a project, we were doing well. For a company, we weren't doing well - and there were still only two of us.

What went wrong?


First, you have to understand open source.

Open source is best defined by its four freedoms, which are inspired by Roosevelt's declaration of the four freedoms that every human should be able to enjoy. These dictate that you should be able to:

0. Run the program as you wish, for any purpose
1. Study how it works, and modify its function
2. Redistribute copies “so you can help your neighbor”
3. Distribute copies of your modified versions

The intention is that open source software is free as in speech: it grants you liberties over the code you run that you might not get with other products.

Unfortunately, the word "free" is overloaded: it has multiple possible meanings. In reality, open source has become synonymous with free as in beer: software that you can use without incurring any direct licensing costs.

Our strategy was to create an open core that people could freely distribute, and then layer premium services over the top. If you didn't want to worry about managing servers, we had an excellent SaaS product. If you didn't want to worry about managing APIs to third-party platforms, we offered Convoy. Finally, we wanted to provide access to a network of trusted consultants who could create customizations for institutional customers.

Our utopian vision was to have organic growth through sharing, leading to institutional customers. This didn't happen - at least, not as fast as we needed it to.


Second, you have to understand startups.

We have exact numbers internally, but a good rule of thumb in San Francisco is that, to break even, we need to bring in $10,000 per employee per month. This covers below-market-rate salaries, as well as all the overheads you incur when you're running a business (for example, taxes and moderate infrastructure costs). It doesn't cover some of the extra investment you really need to put into sales, marketing and product development.

To be relatively comfortable as a two-person company, we need to clear $240,000 per year. That's a tough ask for many businesses, which is one reason why investors are useful: they back your team and put money into your company, making a bet that you'll be profitable later on and will be able to pay them back and then some.

Consider, also, that most teams are not limited to two people. I've got a development and product management background; Erin is an analyst and user experience expert. We need to bring on a full-time technical lead and a front-end designer. I can't do justice to either my CEO job (sales! research! business development!) or my web development job, and Erin can't do justice to either her user experience or her front-end work. We also need redundancy on our staff, so that if one of us is sick or out doing sales work, the company can continue to be productive. As soon as you start talking about building a real team, those numbers explode.
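As a back-of-the-envelope check on those numbers, the arithmetic is simple enough to sketch. The $10,000-per-employee-per-month figure is the rule of thumb quoted above, not an exact internal number.

```python
# Back-of-the-envelope breakeven math, using the rule of thumb above:
# roughly $10,000 per employee per month covers salaries and overheads.
COST_PER_EMPLOYEE_PER_MONTH = 10_000  # rule-of-thumb figure, not exact

def annual_breakeven(employees: int) -> int:
    """Revenue needed per year just to break even."""
    return employees * COST_PER_EMPLOYEE_PER_MONTH * 12

print(annual_breakeven(2))  # 240000: the two-person figure above
print(annual_breakeven(4))  # 480000: add a technical lead and a designer
```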

I don't believe it's possible to start a consumer startup as a full-time endeavor without significant investment. Unlike business customers, only a tiny minority of consumers are willing to pay money. You need enough runway (the time left before your company runs out of money) to reach a mass-market audience, and to make sure you're solving a problem that they are willing to pay to have solved. Because it's so hard to get money from consumers, these businesses often make their money through advertising: reaching targeted, engaged audiences is absolutely a problem that advertisers will pay for a solution to.

Enterprise startups potentially require less investment, but the sales cycle - the time it takes to sell to an individual customer - is potentially much longer, and the total cost to acquire a single customer is much higher. You need to have enough money in the bank to make this work; investment is a useful vehicle to bring your company to the next stage of its development.

Investors protect their money by minimizing risk. In this context, open source is a liability: remember the free as in beer problem? By giving away the portion of your product that captures value, you're essentially devaluing your business to zero. Why would anybody invest in that? I'm sincerely grateful that Matter did invest in our team. In return, the least we can do is be a good steward of investor value.

That $240,000? It's a baseline. Biz was completely right: you need to hold back the thing that makes you valuable.


Feedback is a gift - and so is open source.

When they work well, open source communities are amazing things: collaborative groups of disparate people all agreeing to make software together for use by the commons. As a methodology, it's beautiful, and can showcase the best of humanity.

When you're building a product for sale, it's important that you've identified a problem that people will pay money to have solved for them, and that you're solving it well. That means talking to a lot of people, and both making and iterating a lot of rough prototypes. Your product has to be compelling, well-made and scalable. As it's concisely described in design thinking circles, you need to constantly be testing its desirability, feasibility and viability.

When your product is open source, you'll get a lot of feedback from the community. This is important to take on board, and the community is a hugely valuable part of your ecosystem - but at the same time, it's unlikely that open source community members are customers. It's possible that they're users; it's also possible that they're open source enthusiasts who are just happy to see another project join the movement.

Open source projects, as a whole, have famously bad usability. That's because their feedback loop is constrained to other developers. One recent example of this disconnect is a heated debate about using Slack vs Internet Relay Chat. To non-technical users, IRC is arcane and unfriendly (which also accurately describes many of the discussions that take place there), yet many open source maintainers couldn't understand the problem.

When you're building a compelling product, the license should be irrelevant. It should be compelling whether it's completely closed or released under the GPL: the license is how you distribute the product, not something that's inherent to the product itself.

Unfortunately, in the case of Known, I think a lot of people liked it because it was free and open source. This was a bad signal - and certainly not one that will lead to paying customers and a thriving business. (It's worth saying here that a consistent voice of real support has been the indie web community, alongside companies like Reclaim Hosting, which legitimately wants to see us succeed.)


I'm not Donald Trump, but ...

The biggest surprise I've had since starting Known is the amount of feedback complaining that we're trying to make money with it. Usually this comes with some kind of a complaint about startups and capitalism.

If you know me, you'll know that my politics err on the liberal side of liberal; Bernie Sanders and Elizabeth Warren are the US politicians who best describe the country I want to live in. I'm hardly a hardcore conservative capitalist. Nonetheless, I was taken aback to discover that we'd accidentally joined an anti-capitalist movement: we've been very open about being a business since the day we announced our existence.

In fact, I really wanted to show that it was possible to create a profitable, thriving business creating respectful software that gives users full control of their data. I think it's important.

Here are some real things I've heard about making money from open source:

  • We should have a universal basic income so people won't have to worry about how they'll make money.
    A universal basic income is not money from the sky; it's a proven way to create a real safety net, but it does rely on taxation. It doesn't work if everyone relies on a basic income, and the idea that you should have to live at the lowest possible income if you're going to build respectful software is both ridiculous and kind of offensive. Welfare is important, but not as a way to pay for open source software.
  • We should be striving to build a post-money society.
    I mean, to be fair, I'm a Star Trek fan too.
  • We should just build software for the love of it and not worry about making money.
    Most egregiously, we've heard this from people who literally take our free product and sell services around it.

All of these are obviously detached from reality.

This culture of anti-capitalism in open source is actively harmful. It's a reason why so few women (1.5%!) participate in open source projects, for example, and why people in disadvantaged communities are underrepresented. Having the ability to work on a project for free represents enormous privilege. At its best, open source can be a way for people to contribute to a global commons and freely exchange ideas; at its worst, it's exploitative and exclusionary.

It's devalued our time. I get personal requests on all channels on a daily basis - email, Twitter, Facebook, even unsolicited phone calls - asking for free help. (I no longer give free personal help, except on the mailing list, where it can be used to grow a commons of support information that everyone can use.) Sometimes these calls for free help come from people who are making money from our labor.

Open source doesn't need folk songs. It needs a way to fairly compensate the people who participate in it. I'm not at all against anti-capitalism - but it sure is hard to build a business on it.


But aren't there a lot of profitable open source businesses?


We've most often been compared to WordPress, which powers over 23% of the web. Automattic is valued at over $1.1bn, has a huge team worldwide, and is widely held as the poster child for open source businesses.

In reality, the WordPress open source project is held by a non-profit foundation. Automattic concentrates solely on hosted services.

Ghost, another project we've been compared to, is a non-profit entity in its entirety. It made a lot of its money by crowdfunding as a WordPress plugin, before switching to becoming a Node.js project. This technical change made it much harder to install, making their paid, hosted services an easy choice.

Ind.ie hasn't really launched Heartbeat, their distributed social network, but their project is significantly better-funded than Known. This is partially because they crowdfunded as a smartphone project, before choosing to shift their attention to a more focused problem.

Mozilla has a long history that stems from Netscape. Their success is not something that a new entrant to the market could replicate.

Red Hat is held up as a model open source business: its current market cap is $14.8bn, or roughly 2.8% of a Google. It provides professional services and support licensing around its Linux distributions.

Infrastructure is a more profitable place for open source to thrive: MongoDB, CoreOS and Docker are all examples of well-funded open source startups. Each one sells better support, trustability and reliability - which makes sense to pay for if you're building a business on top of their technologies.

For these businesses, open source allows them to build a bigger market for their products, which they can then capitalize on. It's a smart strategy that has very little to do with freedom, and everything to do with growth.


What about other funding methods?

BountySource, the crowdfunding platform for open source projects, is one oft-mentioned funding method. It's actually a pretty great idea that I think will work wonderfully for hobbyists, and will encourage developers on distributed projects to work on smaller bugs and features. I don't foresee it covering our costs.

Similarly, Patreon works very well for personal projects, and is redefining how some artists make their money.

We currently make a significant portion of our income through professional services, but this isn't sustainable for a number of reasons. As Tomasz Tunguz at Redpoint Ventures pointed out earlier this year in this excellent analysis:

The data suggests that customers are willing to pay 20%+ margins on price points of greater than $200,000. Less than that price point, the data shows it to be difficult to operate a professional services team at better than breakeven.

When you consider all of the overheads inherent to running a company, you would actually make more money just being a freelance developer. Professional services jobs are often one-offs, and while they sometimes lead to contracts, finding the next one can take just as much effort. It's not a great way to grow.

That also negates the common argument about making money by providing tertiary services like support and customization. These strategies add more risk to the business, and don't cumulatively add value. At lower price points, it's not even a lifestyle business: it's hand to mouth.


What's next?

None of this should be a downer. I want to open a real conversation about making money sustainably with respectful software. Between Elgg and Known, I've spent the majority of my career working on these issues. I think they're solvable, and I think the result will be a better software ecosystem.

Known isn't at all going away, and we continue to release new versions every single month. We're evaluating the services we provide around it, but we love how the community has rallied around it, and we love how it's being used. We expect it to live and breathe for a long time.

However, we're learning from companies like Automattic, and non-profits like the WordPress Foundation. We're thinking hard about how the project is supported. And it should go without saying that we're committed to building a valuable, growing business.

There's a strong movement around creating alternatives to software that tracks and spies on us. I think that's a fantastic thing. Building software is about empowering people to do things they previously couldn't. But a part of building empowering tools is to make sure they can be provided sustainably. If you're doing something good, you need to be able to keep doing it - and whether you like it or not, that means money.

We need to have a stronger conversation about money in open source, and about building healthy businesses on respectful software.



As either Milton Friedman or Alfred P. Sloan said: "the business of business is business". Build a healthy business; don't be led by ideology. You're not helping build a more open world if you're showing that being open is unsustainable or detrimental; show that you can do well.

And when you succeed, use the fruits of your labor to do good.

We'll be here, cheering for you.

Help me debug a personal project

1 min read

Working title: ten questions

A fully anonymous site where you answer 10 questions that attempt to pinpoint your beliefs on social and economic issues. These aren't essay questions, exactly, but neither are they multiple choice: the idea is to get you to explain your point of view without judgment. Each question is followed up with: "why?"

Once you've answered the 10 questions and given your reasons, you answer some simple demographic questions: your very rough location in the world, your religion, optionally your race and gender, and how you self-identify on the political spectrum.

When you're done, you're given a special link that allows you to come back and edit. No email addresses or passwords are ever taken, and server logs aren't kept.
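For what it's worth, the special link is the only piece with any real mechanism behind it. One plausible sketch (names and storage are illustrative, not a real implementation): mint a long random token when an interview is created, and make holding that token the only way to edit it - no email address, no password, nothing to log.

```python
# A plausible sketch of the "special link" idea: a long random token is
# the only credential ever issued - no email address, no password.
# Function and field names here are illustrative, not a real implementation.
import secrets

def create_interview() -> dict:
    return {
        "id": secrets.token_hex(8),               # public identifier
        "edit_token": secrets.token_urlsafe(32),  # unguessable edit credential
        "answers": [],
    }

def edit_url(base_url: str, interview: dict) -> str:
    # The link the respondent keeps; anyone holding it can edit the answers.
    return f"{base_url}/edit/{interview['id']}?token={interview['edit_token']}"

interview = create_interview()
print(edit_url("https://tenquestions.example", interview))
```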

The interviews are then made available for filtering by those demographics - or there's a random button that picks one out at random for viewing. Essays can be flagged, but will only be removed for illegal or irrelevant content.

My hope is to create a body of essays that will explain belief systems other than your own, in a calm, non-judgmental way.

Dumb? Irrelevant? Redundant? Let me know (or let me know how you'd iterate).

Untestable, unsafe and on the freeway: why cars need to be open source

5 min read

Our devices are working against us. 

Recently, we learned that Volkswagen was falsifying its mandatory E.P.A. emissions tests. Because each test has a set of characteristics that don’t accurately match real-world driving conditions, the internal software running in 11 million cars could deduce that an official test was taking place. Under these conditions, the cars would turn on enhanced emissions controls, allowing them to pass the tests but reducing mileage and other driver-friendly features. In the real world, the cars had better mileage and acceleration, but were spewing illegal levels of pollution.

This was far from an isolated incident. Less than a week later, it emerged that some Mercedes, BMW and Peugeot vehicles were using up to 50% more fuel than laboratory testing suggested. Meanwhile, on average, the gap between tested carbon emissions and real-world performance is 40% - up from 8% in 2001. And while Volkswagen’s software broke the law, detecting test conditions to cheat on results is a widespread practice that has become an open secret in the industry.

These practices extend far beyond cars: even our televisions are faking data. Recently, some Samsung TVs in Europe were found to use less electricity in laboratory tests than under real-world viewing conditions.

In each case, the software powering the device was unavailable to be tested. In a world where cars are heavily scrutinized to ensure passenger safety, examiners had very little way to determine what the software was doing at all. According to the Alliance of Automobile Manufacturers, providing access to this source code would create “serious threats to safety or security” (PDF). Even the Environmental Protection Agency agreed, arguing that opening the source code could lead to consumers hacking their cars to achieve better performance.

Anyone who’s worked in computer security will know that this is a spurious argument. Obscuring source code doesn't make software safer: on the contrary, more scrutiny allows manufacturers to find flaws more quickly.

Earlier this year, Wired demonstrated that hackers could remotely kill the brakes on a Jeep from a laptop in a house ten miles away from the vehicle. The same hackers had previously demonstrated that they could achieve complete control over a Ford Escape and a Toyota Prius: these vulnerabilities appear to be widespread and not limited to any particular manufacturer.

In the light of these exploits, it’s extremely likely that developers are already cheating tests by hacking their cars, without E.P.A. or manufacturer knowledge. Indeed, a cursory Google search reveals hackers talking about cheating on their emissions tests using Arduinos and other devices. To quote one participant in one of these forums: “I 100% believe that these tests are a complete joke/scam and therefore should be cheated with any and all available means.”

In a world where cars are increasingly driven by complex software, the only reliable way to test them is to inspect their source code. This is true following revelations that Volkswagen falsified their E.P.A. tests, and it will become increasingly crucial as autonomous vehicles become widespread.

McKinsey & Co predicts that autonomous vehicles will be widespread in around 15 years. The consequences of hacking your vehicle today are largely environmental: not something that should be discounted, but also not life-threatening except in aggregate. Once autonomous vehicles are commonplace, your car’s software algorithm will be the only thing keeping you from crashing into another family on the interstate. 

Autonomous vehicles will eliminate an enormous percentage of car accidents, and we should not fear their introduction. However, hackers will certainly attempt to alter the software running their vehicles in order to go faster, impress their friends, perform stunts, and so on. If the software infrastructure inside a vehicle is kept secret from regulators, only the manufacturer will have any way of determining if this has taken place.

More pressingly, it’s now become apparent that manufacturers can’t be trusted to protect our interests. Even if it is impossible to hack an autonomous vehicle – which would be hard to believe – we need to ensure that the algorithms and software that power these products are as rigorously testable as the steel and rubber we sit in.

Opening source code to scrutiny does not limit its owner’s intellectual property rights. It also doesn’t prevent a manufacturer from making a profit or protecting their unique inventions. It does, however, allow us to trust their products. This is important for all connected devices, but cars are uniquely life-threatening when misused. 

Legislators should act to protect our safety. In the same way that seatbelts and other safety measures were made mandatory, the source code that powers modern vehicles should be made available both to regulators and the public. Security through obscurity is no security at all.

Help is available: we’ve been doing this in the software industry for decades. The open source community should help manufacturers build more open software while retaining their intellectual property. 

Open software is in the public interest, particularly when lives are at stake.

I am not a developer

4 min read

Since co-founding Known with Erin Richey eighteen months ago, one of my biggest professional challenges, both inwardly and outwardly, has been this:

I am not a developer.

I have development skills and was a startup CTO for a decade. Absolutely. I know how to architect a system and write code. I can smell when someone is trying to bullshit me about what their technology is and how it works. I keep on top of emerging technology and I enjoy having conversations about it.

But I am not a developer.

That's not my role. Nor should it be.

When we started Known, I became (once again) a co-founder, but also a CEO - a crucial position in any company. Among other things, the CEO is responsible for:

  • Setting strategy and vision
  • Building a nurturing company culture
  • Creating an amazing team
  • Making money

The last one probably should have come first. I think a good way of putting it is that my job is to make myself redundant - but until then, do everything that needs to be done.

If the company doesn't grow, I've failed. If the company runs out of money, I've failed. If we put out a shitty product, I've failed. If we lose momentum in the market and people stop thinking of us, I've failed.

Engineering is crucial. Design is crucial. Business development is crucial. Sales and marketing are - guess what? - crucial. You can't get by with one of those things alone.

It turns out that I still write code. Sometimes, I write a lot of code. But the more time I spend building product, the more time is taken away from doing the hard work of validating and selling it. Writing code is like spinning your wheels when you're building the wrong thing.

Validation is crucial. I'm not Steve Jobs. (For one thing, I don't have a huge team of engineers and designers whose work I can take credit for.) Figuring out product / market fit, pricing and your go-to-market strategy are not things you can hand-wave away between other things. It's a full-time profession.

If we fail, it's because of decisions I made. I've made many mistakes - as any founder does - but one of the most important was failing to have a technical co-founder. I thought that because I was the technical partner, we didn't need one. In fact, every technology startup needs a technical co-founder, even when the CEO is technical themselves.

If we succeed, it's because we've overcome this limitation and managed to grow an awesome team at a company with a nurturing culture and a killer vision. It will be because we've made something that people want and pay for in significant numbers, and have captured value while providing even more value to the people we serve. It won't be because we have great code, although great code will be a component. It won't be because we have great design, although great design will be a component. It won't be because we have an amazing sales and marketing strategy, although we'll need it.

I'm lucky. Through Matter, we got high-end training in design thinking and access to an incredible community, as well as the ability to pitch like a pro. Through my network of peers, I'm constantly inspired by other CEOs who are building businesses they're proud of, and I learn from them as much as possible. By being in the San Francisco Bay Area, I'm a part of a community of experts. My task is to draw all of this together - as well as my startup experience, and my experience building technology. My task is to build a successful company.

I am not a developer.

The whole Internet: much more than the web, apps, or IoT

4 min read

This morning, I woke up and checked my notifications on my phone. (I know, I know, it's a terrible habit.) I took a shower while listening to a Spotify playlist, got dressed, and put my Fitbit in my pocket. I made some breakfast and ate it in front of last night's Daily Show on my Apple TV. Then I opened my laptop, logged into Slack, launched my browser and checked my email.

I've spent a lot of time over the last decade advocating for the web as a platform. To be fair to me, as platforms go, it's a good one: an easy-to-use, interconnected mesh of friendly documents and applications that anyone can contribute to. Lately, though, I've realized that many of us have been advocating for the web to the exclusion of other platforms - and this is a huge mistake.

It's not about mobile, either. I love my iPhone 6 Plus, which in some ways is the best computing device I've ever owned (it's certainly the most accessible). Apps are fluid, beautiful, immediate and tactile. Notifications regularly remind me that I'm connected to a vast universe of information and conversations. But, no, mobile apps aren't the natural heir to the web.

Nor is it about the Internet of Things, or the dedicated devices in my home. My Apple TV is the only entertainment device I need. My Fitbit lets me know when I haven't been moving enough. I have an Air Quality Egg that attempts to tell me about air quality. My Emotiv EEG headset can tell me when I'm focused. But none of these things, either, are the future of the Internet.

I think this is obvious, but it's worth saying: no single platform is the future of the Internet. We've evolved from a world where we all sat down at desktop and laptop computers to one where the Internet is all around us. Software really has eaten the world.

What ubiquitous Internet means is that a mobile strategy or a web strategy alone isn't enough. To effectively solve a problem for people, you need a strategy that holistically considers the whole Internet, and the entire galaxy of devices at your disposal.

That doesn't mean you need to have a solution that works on every single device. Ubiquity doesn't have to mean saturation. Instead, the Internet has evolved to a point where you can consider the platforms that are most appropriate to the solution you're providing. In the old days, you needed to craft a solution for the web. Now, you can craft a solution for people, and choose what kinds of devices you will use to deliver it. It's even becoming feasible to create your own, completely new connected devices.

The opportunities are almost endless. Data is flowing everywhere. But as with mobile and the web in earlier eras of the Internet, there will be land grabs. When any device can talk to any device and any person, the perception will be that owning the protocols and the pipes is incredibly valuable. Of course, the real value on the Internet is that the pipes are open, and the protocols are open, and anyone can build a solution on the network.

For me, this is a huge mental shift, but one that's incredibly exciting. The web is just one part of a nutritious breakfast. We have to get used to building software that touches every part of our lives - not just the screens on our desks and in our pockets. The implications for media and art are enormous. And more than any other era of the Internet, the way we all live will be profoundly changed.

Why it isn't rude to talk about politics (and I think we should be doing it more)

5 min read

It's often said that you shouldn't talk about politics, religion or money. I tend to think those are all part of the same thing: conversations about how the world is, and should be, organized. Anyone who's been watching the American electoral system warm up its engines will be in no doubt that your views and status in any one of those prongs affect the other two. And all are inseparable from the cognitive biases that your context in the world has given you.

So let's restate the maxim: it's rude to talk about the world.


The reason that's most often given is that people might disagree with you. It might start an argument, someone might be offended by your viewpoint, or you might be offended by a deeply-held position from someone else. As the thinking goes, we should try to avoid offending other people, and we shouldn't be starting arguments.

Living in a democracy, I take a different view. Each of us has a different context and different opinions, which we use to inform the votes we cast to elect the government we want, allowing us to help dictate how our communities should be organized. That's awesome, and a freedom we shouldn't take for granted. It's also the fundamental bedrock of being a democratic citizen.

I want to be better informed, so I can cast better votes and be a better citizen. Which means I want to hear different views that potentially challenge my own. If you define offence as a kind of shock at someone else's disregard for your own principles, I want to be offended. I want to know other people's deeply-held beliefs and principles, because they allow me to calibrate mine. I don't exist in a vacuum.

I think the world would be better if we used our freedoms and were more open with our beliefs. The challenge is that it is not always safe to do so. Middle class politeness is one thing; for millions of Muslims in America, like communists and Jews before them, sharing their beliefs can be life-threatening. For a supposedly democratic nation, America is spectacularly good at stigmatizing entire groups of people.

I'd like to think that this is where the politeness principle comes from, as a kind of protection mechanism for more vulnerable members of our community. I don't think that's the case. I think it's much more to do with maintaining a cultural hegemony, and the harmful illusion that all citizens are united in their beliefs and principles.

Citizens don't all have the same beliefs and principles. This is part of the definition of democracy, and we should embrace it.

Citizens don't all have the same privileges and contexts. As a white, middle-class male, I have privileges that many people in this country are not afforded, and a very secure filter bubble to sit inside. I think it's my duty to listen and amplify beyond the walls of that bubble. Candidates for President of the United States are, in 2015, suggesting that we have "a Muslim problem" in terms that echo the Jewish Question from before the Second World War. Even if you don't believe in advocating for people in ways that don't directly affect you, this directly affects you. It's all about what kind of country we want to be living in. It's all about how it's organized.

It's also about what kind of world we want to be living in. I think it's also my duty, as a citizen of one of the wealthiest nations on earth, to listen and amplify beyond our border walls. Citizens of countries like Iran, Yemen and Burkina Faso are people too, with their own personal politics, religions, hopes and dreams.

We've been given this incredible freedom to talk and advocate, to assemble and discuss, and we should use it.

Yes, there will be arguments. It would be nice to say, "hey, we shouldn't get angry with each other," but these are issues that cut to the core of society. Tone policing these debates is in itself oppressive and undemocratic. And while I'd like to be able to say, "we should have a free and even exchange of ideas, and I won't think less of you for what you believe," that actually isn't true. If you believe that Muslims are second class citizens, or that the Black Lives Matter movement isn't important, I do think a little worse of you, just as some of you will likely think worse of me for thinking socialism is an okay idea or for not believing in God. We can respect each other as citizens, and have respect for our right to have opinions. We should still talk. And as dearly held as my beliefs are, I want to know when you think I'm wrong: it's how I learn.

What we shouldn't do is tell people that they should just accept what they're given, and take the world as it is. That's not what being in a democracy is all about, and it's what we do when we tell people to shut up about what they believe.

Is crowdfunding the answer in a post-ad universe?

3 min read


Albert Wenger of Union Square Ventures asks:

How then is journalism to be financed? As I wrote in 2014, I continue to believe that crowdfunding is the answer. Since then great progress has been made by Beaconreader, Kickstarter’s Journalism category, and also Patreon. Together the amounts are still small but it is early days. Apple’s decision to support these adblockers may well help accelerate the growth of crowdfunding and that would be a good thing – I don’t like slow page loads and distracting ads but I will happily support content creation directly (just highly unlikely to do so through micropayments while reading). All of this provides one more reason to support Universal Basic Income – a floor for every content creator and also more people who can participate in crowdfunding.

I've also heard Universal Basic Income argued for as a solution to funding open source projects. I'm not sure I buy it, so to speak - I think it's not fair to assume that content creators should live on a minimum safety net wage. I do strongly believe in a Universal Basic Income, but as a strong safety net that promotes economic growth rather than something designed to be relied on. For one thing, what happens if everyone falls back to a Universal Basic Income? Could the system withstand that, and would the correct incentives be in place?

I love the idea of crowdfunding content. This does seem to put incentives in the correct place. However, when systems like Patreon work well (and they often do), the line between crowdfunding and a subscription begins to blur. When you're paying me whenever I create content, with a monthly cap, and I create content on a regular basis, it starts to look a lot like a monthly subscription. If you pick up enough monthly subscriptions, it starts to look like real money - a thousand people at $10 a month works out to $120,000 a year, a very comfortable wage for a single content creator (even in San Francisco and New York).

So remove the complexity: recurring crowdfunding is just a subscription with a social interface. Which is fine. For recurring content like news sources and shows, I think subscriptions are the future.

For individual content like movies, albums and books, campaigns begin to make a lot of sense. But crowdfunding isn't magic: funders won't necessarily show up. I've been told that I should really have 33% of my campaign contributions pre-confirmed before the campaign begins, and I suspect that's not possible for most unknown artists.

There needs to be a positive signal of quality. In the old days, there were PR campaigns, which were paid for by record labels and publishing companies. Unless we only want rich artists and established brands to make money making content, we need a great, reliable way to discover new, high-quality independent media. And then we need to be able to make an informed decision about whether we want to buy.

As great as Patreon, Kickstarter, Indiegogo and the others are, we're not there yet. Social media won't get us there alone - at least not as is. But there's money to be made, and I'm convinced that whoever unlocks discovery will unlock the future of content on the web.


Image: Crowdfunding by Rocío Lara

Meet the future of online commerce - and the future of the web.

3 min read

Meet the future of online commerce:

We're all used to content unbundling: very few of us are loyal to magazines, blogs or personal websites any more. We consume content through our social feeds, clicking on articles that people we care about have recommended. Articles are atoms of content, flowing in a social stream, unbundled from their parent publications. Very few of us visit content homepages any more.

Products like Stripe Relay let vendors do the same with commerce. Suddenly you can get products in your social stream, which you can share and comment on, as well as buy right there. There's no need to visit a store homepage like Amazon.com. You can find products anywhere on the web, and click to buy wherever you encounter them.

There's no point in vendors having apps: the app experience is handled by the social stream (be it Facebook, Twitter, or something more open). The homepage also becomes significantly less crucial to purchasing, just as it's become much less crucial to serving content. In fact, there's often no need to visit a standalone web page at all, except perhaps to learn more about the product. Even then, you can imagine this extended content traveling along the social stream with the main post, in the same way that Facebook's Instant Articles become part of their app.

It's no accident that Google and Twitter are creating an open source version of instant articles. Facebook's version is a proof of concept that shows the way: websites are not destinations any longer. The social stream has become a new kind of browser, where users navigate through social activities rather than thematic links.

Social streams used to be how we discovered content on web pages. Increasingly, the content will live in the stream itself.

A battle is raging over who will own this real estate, and Facebook is winning it hands down. However, that doesn't mean they'll win the war over how we discover information online - there's plenty of precedent in computing for the more open alternative eventually winning. And that's what Google and Twitter are betting on:

Another difference between the Google/Twitter plan and other mobile publishing projects is that Google and Twitter won’t host publishers’ content. Instead, the plan is to show readers cached Web pages — a “snapshot of [a] webpage,” in Google’s words — from publishers’ sites.

The language of the web will still be a crucial part of how we communicate. What's at stake is who controls how we learn about the world, and an open plan allows us to dictate how that content is consumed.

If Facebook is the Apple Mac of social feeds, Twitter has the potential to be the IBM PC. And that may, eventually, be how they succeed.

In the meantime, the web has turned a corner into a new era of social commerce and free-flowing content. There's no turning back the clock; platform owners need to embrace these dynamics and run fast.

"I'd like to introduce you to Elle": four September 11ths

11 min read

September 11, 2001

I was in Oxford, working for Daily Information. My dad actually came into the office to let me know that it had happened - I had been building a web app and had no idea. For the rest of the day I tried to reload news sites to learn more; the Guardian was the only one that consistently stayed up.

The terror of the event itself is obvious, but more than anything else, I remember being immediately hit by the overwhelming sadness of it. Thousands of people who had just gone to work that day, like we all had to, and were trapped in their office by something that had nothing to do with them. I remember waiting for the bus home that day, watching the faces in all the cars and buses that passed me almost in slow motion, thinking that it could have been any of us. I wondered what their lives were like; who they were going home to see. Each face was at once unknowable and subject to the same shared experiences we all have.

I was the only American among my friends, and so I was less distanced from it than them. I remember waiting to hear from my cousin who had been on the New York subway at the time. I'm kind of a stealth American (no accent), so nobody guarded what they said around me. They definitely had a different take, and among them, as well as more widely, there was a sense of "America deserved this". It's hard to accurately describe the anti-American resentment that still pervades liberal Britain, but it was very ugly that day. On Livejournal, someone I followed (and knew in real life) posted: "Burn, America, burn".

One thing I agreed with them on was that we couldn't be sure what the President would do. America had elected a wildcard, who had previously held the record for number of state executions. It seemed clear that he would declare war, and potentially use this as an excuse to erode freedoms and turn America into a different kind of country; we had enough distance to be having those discussions on day one.

There were so many questions in the days that followed. Nobody really understood what had happened, and the official Bush explanations were not considered trustworthy. People brought up the American-led Chilean coup of September 11, 1973, when Salvador Allende had been deposed and killed; had it been symbolically related to that? Al Qaeda seemed like it had come out of nowhere.

Meanwhile, the families of thousands of people were grieving.


September 11, 2002

I had an aisle to myself on the flight to California. The flight had been cheap, and it was obvious that if something were to happen on that day, it wouldn't be on a plane. Airport security at all levels was incredibly high; nobody could afford for there to be another attack.

I had graduated that summer. Earlier that year, my parents had moved back to California, mostly to take care of my grandmother. They were living in a small, agricultural town in the central valley, and I had decided to join them and help for a few months. This is what families do, I thought: when someone needs support, they band together and help them. Moreover, my Oma had brought her children through a Japanese internment camp in Indonesia, finding creative ways to keep them alive in horrifying circumstances. My dad is one of the youngest survivors of these camps, because of her. In turn, taking care of her at the end of her life was the right thing to do.

In contrast to the usual stereotype of California, the central valley is largely a conservative stronghold. When I first arrived, it was the kind of place where they only played country music on the radio and there was a flag on every house. Poorer communities are the ones that disproportionately fight our wars, and there was a collage in the local supermarket of everyone in the community who had joined the army and gone to fight in Afghanistan.

The central valley also has one of the largest Assyrian populations in the US, which would lead to some interesting perspectives a few years later, when the US invaded Iraq.

Our suspicions about Bush had proven to be correct, and the PATRIOT Act was in place. The implications seemed terrible, but these perspectives seemed to be strangely absent on the news. But there was the Internet, and conversations were happening all over the social web. (MetaFilter became my go-to place for intelligent, non-histrionic discussion.) I had started a comedy site the previous year, full of sarcastic personality tests and articles that were heavily influenced by both The Onion and Ben Brown's Uber.nu. Conversations were beginning to happen on the forum there, too.

I flew back to Edinburgh after Christmas, and found a job in educational technology at the university. Dave Tosh and I shared a tiny office, and bonded over talking about politics. It wasn't long before we had laid the groundwork for Elgg.


September 11, 2011

I was sitting at the kitchen table I'm sitting at now. It had been my turn to move to California to support a family-member; my mother was deeply ill and I had to be closer to her. I had left Elgg when she was diagnosed: there were disagreements about direction, and I was suddenly reminded how short and fragile life was.

My girlfriend had agreed that being here was important, and had come out with me, but had needed to go home for visa reasons. Eventually, after several more trips, she would decide that she didn't feel comfortable living in the US, or with marrying me. September was the first month I was by myself in my apartment, and I found myself without any friends, working remotely for latakoo in Austin.

Rather than settle in the valley, I had decided that the Bay Area was close enough. I didn't have a car, but you could BART to Dublin/Pleasanton, and be picked up from there. The valley itself had become more moderate over time, partially (I think) because of the influence of the new UC Merced campus, and the growth of CSU Stanislaus, closer to my parents. Certainly, you could hear more than country music on the radio, and the college radio station was both interesting and occasionally edgy.

I grew up in Oxford: a leafy university town just close enough to London. Maybe because of this, I picked Berkeley, another leafy university town, which is just close enough to San Francisco. (A train from Oxford to London takes 49 minutes; getting to San Francisco from Berkeley takes around 30.) My landlady is a Puerto Rican novelist who sometimes gave drum therapy sessions downstairs. If I look out through my kitchen window, I just see trees; the garden is dominated by a redwood that is just a little too close to the house. Squirrels, overweight from the nearby restaurants, often just sit and watch me, and I wonder what they're planning.

Yet, ask anyone who's just moved here what they notice first, and they'll bring up the homeless people. Inequality and social issues here are troublingly omnipresent. The American dream tells us that anyone can be anything, which means that when someone doesn't make it, or they fall through the cracks, it must be their fault somehow. It's confronting to see people in so much pain every day, but not as confronting as the day you realize you're walking right by them without thinking about it.

Countless people told me that they wouldn't have moved to the US; not even if a parent was dying. I began to question whether I had done the right thing, but I also silently judged them. You wouldn't move to another country to support your family? I asked but didn't ask them. I'm sorry your family has so little love.

I don't know if that was fair, but it felt like an appropriate response to the lack of understanding.


September 11, 2014

"I'm Ben; this is my co-founder Erin; and I'd like to introduce you to Elle." Click. Cue story.

We were on stage at the Folsom Street Foundry in San Francisco, at the tail end of our journey through Matter. Over five months, we had taken a simple idea - that individuals and communities deserve to own their own spaces on the Internet - and used design thinking techniques to make it a more focused product that addressed a concrete need. Elle was a construct: a student we had invented to make our story more relatable and create a shared understanding.

After a long health journey, my mother had finally begun to feel better that spring. 2013 had been the most stressful year of my life, by a long way; mostly for her, but also for my whole family in a support role. I had also lost the relationship I had once hoped I'd have for the rest of my life, and the financial pressures of working for a startup and living in an expensive part of the world had often reared their heads. Compared to that year, 2014 felt like I had found all my luck at once.

Through Matter, and before that, through the indie web community, I felt like I had communities of friends. There were people I could call on to grab a beer or some dinner, and I was grateful for that; the first year of being in the Bay Area had been lonely. The turning point had been at the first XOXO, which had been a reminder that individual creativity was not just a vital part of my life, but was something that could flourish on its own. I met lovely people there, and at the sequel the next year.

California had given me opportunities that I wouldn't have had anywhere else. It's also, by far, the most beautiful place I've ever lived. Standing on that stage, telling the world what we had built, I felt grateful. I still feel grateful now. I'm lucky as hell.

I miss everyone I left behind a great deal, but any time I want to, I can climb in a metal tube, sit for eleven hours while it shoots through the sky, and go see them. After all the health problems and startup adventures, I finally went back for three weeks last December. Air travel is odd: the reality you step out into supplants the reality you left. Suddenly, California felt like a dream, and Edinburgh and Oxford were immediate and there, like I had never left. The first thing I did was the first thing anyone would have done: I went to the pub with my friends.

But I could just as easily have walked out into Iran, or Israel, or Egypt, or Iraq, or Afghanistan. Those are all realities too, and all just a sky-ride in a metal tube away. The only difference is circumstance.

Just as so many people couldn't understand why I felt the need to move to America, we have the same cognitive distance from the people who live in those places. They're outside our immediate understanding, but they are living their own human realities - and our own reality is distant to them. The truth is, though, that we're all people, governed by the same base needs. I mean, of course we are.

My hope for the web has always been that getting on a plane wouldn't be necessary to understand each other more clearly. My hope for Known was that, in a small way, we could help bridge that distance, by giving everyone a voice that they control.

I think back to the people I watched from that bus stop often. You can zoom out from there, to think about all the people in a country, and then a region, and then the world. Each one an individual, at once unknowable and subject to the same shared experiences we all have. We are all connected, both by technology and by humanity. Understanding each other is how we will all progress together.

Get over yourself: notes from a developer-founder-CEO

11 min read

Known, the company I founded with Erin Jo Richey, is the third startup I've been deeply involved in. The first created Elgg, the open source social networking platform; I was CTO. The second is latakoo, which helps video professionals at organizations like NBC News send video quickly and in the correct format without needing to worry about compression or codecs. Again, I was CTO. In both cases, I was heavily involved in all aspects of the business, but my primary role was tending product, infrastructure and engineering.

At Known, I still write code and tend servers, but my role is to put myself out of that job. Despite having worked closely with two CEOs over ten years, and having spent a lot of time with CEOs of other companies, I've learned a lot while I've been doing this. I've also had conversations with developers that have revealed some incorrect but commonly-held assumptions.

Here are some notes I've made. Some of these I knew before; some of these I've learned on the job. But they've all come up in conversation, so I thought I'd make a list for anyone else who arrives at being a business founder via the engineering route. We're still finding our way - Known is not, yet, a unicorn - but here's what I have so far.


The less I code, the better my business does.

I could spend my time building software all day long, but that's only a fraction of the story. There's a lot more to building a great product than writing code: you're going to need to talk to people, constantly, to empathize with the problems they actually have. (More on this in a second.) Most importantly, there's a lot more to building a great business than building a great product. You know how startup founders constantly, infuriatingly, talk about "hustling"? The language might be pure machismo, but the sentiment isn't bullshit.

When I'm sitting and coding, I'm not talking to people, I'm not selling, I'm not gaining insight and there's a real danger my business's wheels are spinning without gaining any traction.

The biggest mistake I made on Known was sitting down and building for the first six months of our life, as we went through the Matter program. If I could do it again, I would spend almost none of that time behind my screen.


Don't scratch your own itch.

In the open source world, there's a "scratch your own itch" mentality: build software to solve your own problems. It's true that you can gain insight into a problem that way. But you're probably not going to want to pay yourself for your own product, so you'd better be solving problems for a lot of other people, too. That means you need to learn what people's itches are, and most importantly, get over the idea that you know better than them.

Many developers, because they know computers better than their users, think they know problems better than them, too. The thing is, as a developer, your problems are very different indeed. You use computers dramatically differently to most people; you work in a different context to most people. The only way to gain insight is to talk to lots and lots of people, constantly.

If you care passionately about a problem, the challenge is then to accept it when it's not shared with enough people to be a viable business. A concrete example: we learned the hard way that people, generally, won't pay for an indie web product for individuals, and took too long to explore other business avenues. (Partially because I care dearly about that problem and solution.) A platform for lots of people to share resources in a private group, with tight integration with intranets and learning management systems? We're learning that this is more valuable, and more in need. We're investigating much more, and I'm certain we'll continue to evolve.


Pick the right market; make the right product. Make money.

Learning to ask people for money is the single hardest thing I've had to do. I'm getting better at it, in part thanks to the storytelling techniques we picked up at Matter.

Product-market fit is key. It can't be overstated how important this is.

Product-market fit means being in a good market with a product that can satisfy that market.

The problem you pick is directly related to how effectively you can sell - not just because you need to be solving real pain for people, but because different problems have different values. A "good market" is one that can support a business well, both in terms of growth and finance. Satisfy that market, and, well, you're in business.

We sell Known Pro for $10 a month: hardly a bank-breaking amount. Nonetheless, we've had plenty of feedback that it's much too expensive. That's partially because the problem we were solving wasn't painful enough, and partially because consumers are used to getting their applications for free, with ads to support them.

So part of "hustling" is about picking a really important problem for a valuable market and solving it well. Another part is making sure the people who can benefit from it know about it. The Field of Dreams fallacy - "if you build it, they will come" - takes a lot of work to avoid. I have a recurring task in Asana that tells me to reach out to new potential customers every day, multiple times a day, but sales is really about relationships, which takes time. Have conversations. Gain insight. See if you can solve their problems well. Social media is fun but virtually useless for this: you need to talk to people directly.

And here's something I've only latterly learned: point-blank ask people to pay. Be confident that what you're offering is valuable. If you've done your research, and built your product well, it is. (And if nobody says "yes", then it's time to go through that process again.)


Do things that don't scale in order to learn.

Startups need to do things that scale over time. It's better to design a refrigerator once and sell lots of them than to build bespoke refrigerators. But in the beginning, spending time solving individual problems, and holding people's hands, can give you insight that you can use to build those really scalable solutions.

Professional services like writing bespoke software are not a great way to run a startup - they're inherently unscalable - but they can be an interesting way to learn about which problems people find valuable. They're also a good way to bootstrap, in the early stages, as long as you don't become too dependent on them.


Be bloody-minded, but only about the right things.

Lots of people will tell you you're going to fail. You have to ignore those voices, while also knowing when you really are going to fail. That's why you keep talking to people, making prototypes, searching for that elusive product-market fit.

Choosing what to be bloody-minded about can be nuanced. For example:


Technology doesn't matter (except when it does).

Developers often fall down rabbit holes discussing the relative merits of operating systems and programming languages. Guess what: users don't care. Whether you use one framework or another isn't important to your bottom line - unless it will affect hiring or scalability later on. It's far better to use what you know.

But sometimes the technology you choose is integral to the problem. I care about the web, and figured that a responsive interface that works on any web browser would make us portable across platforms. This was flat-out wrong: we needed to build an app. We still need to build an app.

The entire Internet landscape has changed over the last six years, and we were building for an outdated version that doesn't really exist anymore. As technologists, we tend to fall in love with particular products or solutions. Customers don't really work that way, and we need to meet them where they're at.


Non-technical customers don't like options.

As a technical person, I like to customize my software. I want lots of options, and I always have: I remember changing my desktop fonts and colors as a teenager, or writing scripts for the chatrooms I used to join. So I wasn't prepared, when we started having more conversations with real people, for how little they want that. Apple is right: things should just work. Options are complexity; software should just do the right things.

I think that's one reason why there's a movement towards smaller apps and services that just do one thing. You can focus on solving one thing well, without making it configurable within an inch of its life. If a user wants it to work a different way, they can choose a different app. That's totally not how I wish computers worked for people, but if there's one thing I've learned, it's this: what I want is irrelevant.



Run fast. Keep adjusting your direction. But run like the wind. You're never the only person in the race.


Investment isn't just not-evil: it's often crucial.

Bootstrapping is very hard for any business, but particularly tough if you're trying to launch a consumer product, which needs very wide exposure to gain traction and win in the marketplace. Unless you're independently wealthy or have an amazing network of people who are, you will need to find support. Money aside, the right investors become members of your team, helping you find success. Their insights and contacts will be invaluable.

But that means you have to have your story straight. Sarah Milstein puts it perfectly:

Entrepreneurs understandably get upset when VCs don’t grasp your business’s potential or tell you your idea is too complex. While those things happen, and they’re shitty, it’s not just that VCs are under-informed. It’s also that their LPs won’t support investments they don’t understand. Additionally, to keep attracting LP money, VCs need to put their money in startups that other investors will like down the road. VCs thus have little incentive to try to wrap their heads around your obscure idea, even if it’s possibly ground-breaking. VCs are money managers; they do not exist to throw dollars into almost any idea.

Keep it simple, stupid. Your ultra-cunning complicated mousetrap or niche technical concept may not be investable. You know you're doing something awesome, but the perception of your team, product, market and solution has to be that it has a strong chance of success. Yes, that rules some ventures out from large-scale investment and partially explains why the current Silicon Valley landscape looks like it does. So, find another way:


Be scrappy.

Don't be afraid of hacks or doing things "the wrong way". If you follow all the rules, or you're afraid of going off-road and trying something new, you'll fail. Beware of recipes (but definitely learn from other people's experiences).


Most of all: get over yourself, and get over why you fell in love with computers.

If empathy-building conversations and user testing tell you one thing, it's this: your assumptions are almost always wrong. So don't assume you have all the answers.

You probably got into computers well before most people. Those people have never known the computing environment you loved, and it's never coming back. You're building for them, because they're the customer: in many ways the hardest thing is to let go of what you love about computers, and completely embrace what other people need. A business is about serving customers. Serve them well by respecting their opinions and their needs. You are not the customer.

It's a hard lesson to learn, but the more I embrace it, the better I do.


Need a way to privately share and discuss resources with up to 200 people? Check out Known Pro or get in touch to learn about our enterprise services.

On the new web, get used to paying for subscriptions

4 min read

The Verge reports that YouTube is trying a new business model:

According to multiple sources, the world’s largest video-sharing site is preparing to launch its two separate subscription services before the end of 2015 — Music Key, which has been in beta since last November, and another unnamed service targeting YouTube’s premium content creators, which will come with a paywall. Taken together, YouTube will be a mix of free, ad-supported content and premium videos that sit behind a paywall.

At first glance, this seems like a brave new move for YouTube, which has been ad-supported since its inception. But it turns out that ads on the platform actually haven't been doing that well - and have been pulling down Google's Cost-Per-Click ad revenues as a whole.

However, during the company's earnings call on Thursday, Google's outgoing CFO Patrick Pichette dismissed mobile as the reason for the company's cost-per-click declines. Instead it is YouTube's fault. YouTube's skippable TrueView ads "currently monetize at lower rates than ad clicks on Google.com," Mr. Pichette said. He added that excluding TrueView ads -- which Google counts as ad clicks when people don't skip them -- the number of ad clicks on Google's own sites wouldn't have grown as much in the quarter but the average cost-per-click "would be healthy and growing year-over-year."

If Google's CPC ad revenue would otherwise be growing, it makes sense to switch YouTube to a different revenue model. Subscriptions are tough, but consumers have already shown that they're willing to pay to access music and entertainment services (think Spotify and Netflix).

But what if those revenues don't continue to climb? Back in May, Google confirmed that more searches take place on mobile than on desktop. That pattern continues all over the web: smartphones are fast becoming our primary computing devices, and you can already think of laptops and desktops as the minority.

Enter Apple, which is going to include native ad blocking in the next version of iOS:

Putting such “ad blockers” within reach of hundreds of millions of iPhone and iPad users threatens to disrupt the $70 billion annual mobile-marketing business, where many publishers and tech firms hope to generate far more revenue from a growing mobile audience. If fewer users see ads, publishers—and other players such as ad networks—will reap less revenue.

This is an obvious shot across the bow to Google, but it also serves another purpose. Media companies disproportionately depend on advertising for revenue. The same goes for consumer web apps: largely thanks to Google, it's very difficult to convince consumers to pay for software. They're used to getting high-quality apps like Gmail and Google Docs for free, in exchange for some promotional messages on the side. In a universe where web browsers block ads, the only path to revenue is to build your own app.

From Apple's perspective, this makes sense: it encourages more people to build native apps on their platform. The trouble is, users spend most of their time in just five apps - and most users don't download new apps at all. The idea of a smartphone user deftly flicking between hundreds of beautiful apps on their device is a myth. Media companies who create individual apps for their publications and networks are tilting at windmills and wasting their money.

Which brings us back to subscriptions. YouTube's experiment is important, because it's the first time a mass-market, ad-supported site - one that everybody uses - has switched to a subscription model. If it works, and users accept subscription fees as a way to receive content, more and more services will follow suit. I think this is healthy: it heralds a transition from a personalized advertising model that necessitates tracking your users to one that just takes money from people who find what you do valuable. You can even imagine Google providing a subscription mechanism that would allow long-tail sites with lower traffic to also see payment. (Google Contributor is another experiment in this direction.)

If it doesn't work, we can expect to see more native content ads: ads disguised as content, written on a bespoke basis. These are impossible to block, but they're fundamentally incompatible with long-tail sites with low traffic. They also violate the line between editorial and advertising.

Media companies find themselves in a tough spot. As Bloomberg wrote earlier this year:

This is the puzzle for companies built around publishing businesses that thrived in the 20th century. Ad revenue has proved ever harder to come by as reading moves online and mobile, but charging for digital content can drive readers away.

Something's got to give.


Photo by Moyan Brenn on Flickr.

What would it take to save #EdTech?

10 min read

Education has a software problem.

98% of higher education institutions have a Learning Management System: a software platform designed to support the administration of courses. Larger institutions often spend over a million dollars a year on them, once all costs have been factored in, but the majority of people who use them - from educators through to students - hate the experience. In fact, when we did our initial user research for Known, we couldn't find a single person in either of those groups who had anything nice to say about them.

That's because the LMS has been designed to support administration, not teaching and learning. Administrators like the way they can keep track of student accounts and course activity, as well as the ability to retain certain data for years, should they need it in the event of a lawsuit. Meanwhile, we were appalled to discover that students are most often locked out of their LMS course spaces as soon as the course is over, meaning they can't refer back to their previous discussions and feedback as they continue their journey towards graduation.

The simple reason is that educators aren't the customers, whereas administrators have buying power. From a vendor's perspective, it makes sense to aim software products at the latter group. However, it's a tough market: institutions have a very long sales cycle. They might hear about a product six months before they run a pilot, and then deploy a product the next year. And they'll all do it at the same time, to fit in with the academic calendar. At the time of writing, institutions are looking at software that they might consider for a pilot in Spring 2016. Very few products will make it to campus deployment.

There are only a few kinds of software vendors that can withstand these long cycles for such a narrow market. By necessity, they must have "runway" - the length of time a company can survive without additional revenue - to last this cycle for multiple institutions. It follows that these products must have high sticker prices; once they've made a sale, vendors cling to their customers for dear life, which leads to outrageous lock-in strategies and occasionally vicious infighting among vendors.

Why can't educators buy software?

If it would lower costs and prevent lock-in, why don't institutions typically allow on-demand educator purchasing? One reason is what I call the Microsoft Access effect. Until the advent of cloud technologies, it was common for any medium to large organization to have hundreds, or even thousands, of Access databases dotted around their network, supporting various micro-activities. (I saw this first-hand early in my career, as IT staff at the University of Oxford's Saïd Business School.) While it's great that any member of staff can create a database, the IT department is then expected to maintain and repair it. The avalanche of applications can quickly become overwhelming - and sometimes they can overlap significantly, leading to inefficient overspending and further maintenance nightmares. For these and a hundred other reasons, purchasing needs to be planned.

A second reason is that, in the Internet age, applications do interesting things with user data. A professor of behavioral economics, for example, isn't necessarily also going to be an expert in privacy policies and data ownership. Institutions need to be very careful with student data, because of legislation like FERPA and other factors that could leave them exposed to being sued or prosecuted. Therefore, for very real legal reasons, software and services need to be approved.

The higher education bubble?

Some startups have decided to overcome these barriers by declaring that they will disrupt universities themselves. These companies provide Massively Open Online Courses directly, most often without accreditation or any real oversight. I don't believe they mean badly: in theory an open market for education is a great idea. However, institutions provide innumerable protections and opportunities for students that for-profit, independent MOOCs cannot provide. MOOCs definitely have a place in the educational landscape, but they cannot replace schools and universities, as much as it is financially convenient to say that they will. Similarly, some talk of a "higher education bubble" out of frustration that they can't efficiently make a profit from institutions. If it's a bubble, it's one that's been around for well over a thousand years. Universities, in general, work.

However, as much as startups aren't universities, universities are also not startups. Some institutions have decided to overcome their software problem by trying to write software themselves. Sometimes it even works. The trouble is that effective software design does not adhere to the same principles as academic discussion or planning; you can't do it by committee. Institutions will often try and create standards, forgetting that a technology is only a standard if people are using it by bottom-up convention (otherwise it's just bureaucracy). Discussions about features can sometimes take years. User experience design falls somewhere towards the bottom of the priority list. The software often emerges, but it's rarely world class.

Open source to the rescue.

Open source software like WordPress has been a godsend in this environment, not least because educators don't need to have a budget to deploy it. With a little help, they can modify it to support their teaching. The problem is that most of these platforms aren't designed for them, because there's no way for revenue to flow to the developers. (Even when educators use specialist hosting providers like Reclaim Hosting - which I am a huge fan of - no revenue makes its way to the application developers in an open source model.) Instead, they take platforms like WordPress, modify them, and are saddled with the maintenance burden for the modifications, minus the budget. While this may support teaching in the short-term, there's little room for long-term strategy. The result, once again, can be poor user experience and security risks. Most importantly, educators run the risk of fitting their teaching around available technology, rather than using technology to support their pedagogy. Teaching and learning should be paramount.

As Audrey Watters recently pointed out, education has nowhere near enough criticism about the impact of technology on teaching.

So where does this leave us?

We have a tangle of problems, including but not limited to:

  • Educators can't acquire software to support their teaching
  • Startups and developers can't make money by selling software that supports teaching
  • Institutions aren't good at making software
  • Existing educational software costs a fortune, has bad user experience and doesn't support teaching

I am the co-founder and CEO of a startup that sells its product to higher education institutions. I have skin in this game. Nonetheless, let's remove "startups" from the equation. There is no obligation for educational institutions to support new businesses (although they certainly have a role in, for example, spinning research projects into ventures). Instead, we should think about the inability of developers to make a living building software that supports teaching. Just as educators need a salary, so do the developers who make tools to help them.

When we remove startups, we also remove an interest in "disrupting" institutions, and locking institutions into particular kinds of technologies or contracts. We also remove a need to produce cookie-cutter one-size-fits-all software in order to scale revenue independently of production costs. In teaching, one size never fits all.

We also know that institutions don't have a lot of budget, and certainly can't support the kind of market-leading salaries you might expect to see at a company like Google or Facebook. The best developers, unless they're particularly mission-driven, are not likely to look at universities first when they're looking for an employer. The kinds of infrastructure that institutions use probably also don't support the continuous deployment, fail forward model of software development that has made Silicon Valley so innovative.

So here's my big "what if".

What if institutions pooled their resources into a consortium, similar to the Open Education Consortium (or, perhaps, Apereo), specifically for supporting educators with software tools?

Such an organization might have the following rules:

LMS and committee-free. The organization itself decides which software it will work on, based on the declared needs of member educators. Rather than a few large products, the organization builds lots of small, single-serving tools that do one thing well. Rather than trying to build standards ahead of time, compatibility between projects emerges over time by convention, with actual working code taking priority over bureaucracy.

Design driven. Educators are not software designers, but they need to be deeply involved in the process. Here, software is created through a design thinking process, with iterative user research and testing performed with both educators and students. The result is likely to be software that better meets their needs, released with an understanding that it is never finished, and instead will be rapidly improved during its use.

Fast. Release early, release often.

Open source. All software is maintained in a public repository and released under a very liberal license. (After all, the aim here is not to receive a return on investment in the form of revenue.) One can easily imagine students being encouraged to contribute to these projects as part of their courses.

A startup - but in the open. The organization is structured like a software company, with the same kinds of responsibilities. However, most communications take place on open channels, so that they can at least be read by students, educators and other organizations that want to learn from the model. The organization has autonomy from its member institutions, but reports to them. In some ways, these institutions are the VC investors of the organization (except there can never be a true "exit").

A mandate to experiment. The aim of the organization is not just to experiment with software, but also the models through which software can be created in an academic context. Ideally, the organization would also help institutions understand design thinking and iterative design.

There is no doubt that institutions have a lot to gain from innovative software that supports teaching on a deep level. I also think that well-made open source software that follows academic values rather than a pure profit motive could be broadly beneficial, in the same way that the Internet itself has turned out to be a pretty good thing for human civilization. As we know from public media, when products exist in the marketplace for reasons other than profit, it affects the whole market for the better. In other words, this kind of organization would be a public good as well as an academic one.

How would it be funded? Initially, through member institutions, perhaps on a sliding scale based on the institution's size and public / private status. I would hope that over time it would be considered worthy of federal government grants, or even international support. However, just as there's no point arguing about academic software standards on a mailing list for years, it's counter-productive to stall waiting for the perfect funding model. It's much more interesting to just get it moving and, finally, start building software that helps teachers and students learn.

The Internet is more alive than it's ever been. But it needs our help.

5 min read

Another day, another eulogy for the Internet:

It's an internet driven not by human beings, but by content, at all costs. And none of us — neither media professionals, nor readers — can stop it. Every single one of us is building it every single day.

Over the last decade, the Internet has been growing at a frenetic pace. Since Facebook launched, over two billion people have joined, tripling the number of people who are connected online.

When I joined the Internet for the first time, I was one of only 25 million users. Now, there are a little over 3 billion. Most of them never knew the Internet many of us remember fondly; for them, phones and Facebook are what it has always looked like. There is certainly no going back, because there isn't anything to return to. The Internet we have today is the most accessible it's ever been; more people are connected than ever before. To yearn for the old Internet is to yearn for an elitist network that only a few people could be part of.

This is also the fastest the Internet will ever grow, unless there's some unprecedented population explosion. And it's a problem for the content-driven Facebook Internet. These sites and services need to show growth, which is why Google is sending balloons into the upper atmosphere to get more people online, and why Facebook is creating specially-built planes. They need more people online and using their services; their models start to break if growth is static.

Eventually, Internet growth has to be static. We can pour more things onto the Internet - hey, let's all connect our smoke alarms and our doorknobs - but ultimately, Internet growth has to be tethered to global population.

It's impressive that Facebook and Google have both managed to reach this sort of scale. But what happens once we hit the population limit and connectivity is ubiquitous?

From Vox:

In particular, it requires the idea that making money on this new internet requires scale, and if you need to always keep scaling up, you can't alienate readers, particularly those who arrive from social channels. The Gawker of 2015 can't afford to be mean, for this reason. But the Gawker of 2005 couldn't afford not to be mean. What happens when these changes scrub away something seen as central to a site's voice?

In saying that content needs to be as broadly accessible as possible, you're saying that the potential audience for any piece must be 3.17 billion people and counting. It's also a serious problem for journalism or any kind of factual content: if you're creating something that needs to be as broadly accessible as possible, you can't be nuanced, quiet, or considered.

The central thesis that you need to have a potential audience of the entire Internet to make money on it is flat-out wrong. On a much larger Internet, it should theoretically be easier to find the 1,000 true fans you need to be profitable than ever before. And then ten thousand, and a million, and so on. There are a lot of people out there.

In a growth bubble (yes, let's call it that), everyone's out to grab turf. On an Internet where there's no-one left to join and everyone is connected, the only way you can compete is the old-fashioned way: with quality. Having necessarily jettisoned the old-media model, where content is licensed to geographic regions and monopoly broadcasters, content will have to fight on its own terms.

And here's where it gets interesting. It's absolutely true that websites as destinations are dead. You're not reading this piece because you follow my blog; you're either picking it up via social media or, if you're part of the indie web community and practically no-one else, because it's in your feed reader.

That's not a bad thing at all. It means we're no longer loyal readers: the theory is that if content is good, we'll read and share it, no matter where it's from. That's egalitarian and awesome. Anyone can create great content and have it be discovered, whether they're working for News International or an individual blogger in Iran.

The challenge is this: in practice, that's not how it works at all. The challenge on the Internet is not to give everyone a place to publish: thanks to WordPress, Known, the indie web community and hundreds of other projects, they have that. The challenge is letting people be heard.

It's not about owning content. On an Internet where everyone is connected, the prize is to own discovery. In the 21st century more than ever before, information is power. If you're the way everyone learns about the world, you hold all the cards.

Information is vital for democracy, but it's not just socially bad for one or two large players to own how we discover content on the Internet. It's also bad for business. A highly-controlled discovery layer on the Internet means that what was an open market is now effectively run by one or two companies' proprietary business rules. A more open Internet doesn't just lead to freedom: it leads to free trade. Whether you're an activist or a startup founder, a liberal or a libertarian, that should be an idea you can get behind.

The Internet is not dead: it's more alive than it's ever been. The challenge is to secure its future.