 

Untestable, unsafe and on the freeway: why cars need to be open source

Our devices are working against us. 

Recently, we learned that Volkswagen was falsifying its mandatory E.P.A. emissions tests. Because each test has a set of characteristics that don’t accurately match real-world driving conditions, the internal software running in 11 million cars could deduce that an official test was taking place. Under test conditions, the cars turned on enhanced emissions controls, allowing them to pass the tests but reducing mileage and other driver-friendly features. In the real world, the cars had better mileage and acceleration, but were spewing illegal levels of pollution.
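To make the mechanism concrete, here is a minimal, entirely hypothetical sketch of how software might infer that a test cycle is running. Every signal name and threshold below is invented for illustration; this is not Volkswagen's actual logic, only the general shape of a "defeat device": on a dynamometer the drive wheels spin while the steering wheel barely moves, and official cycles run for fixed, predictable durations.

```python
# Hypothetical sketch of a "defeat device". All signal names and
# thresholds are invented for illustration; none of this reflects any
# manufacturer's real code.

def looks_like_emissions_test(speed_kmh, steering_angle_deg, duration_s):
    """Guess whether telemetry resembles a dynamometer test cycle."""
    wheels_turning = speed_kmh > 10          # drive wheels are spinning
    steering_idle = abs(steering_angle_deg) < 1.0  # wheel barely moves on a dyno
    typical_cycle_length = 1100 <= duration_s <= 1300  # e.g. a ~20-minute cycle
    return wheels_turning and steering_idle and typical_cycle_length

def emissions_mode(speed_kmh, steering_angle_deg, duration_s):
    # Switch to strict emissions controls only when a test seems to be running.
    if looks_like_emissions_test(speed_kmh, steering_angle_deg, duration_s):
        return "strict"       # passes the test; worse mileage and performance
    return "performance"      # real-world driving; higher emissions
```

The point of the sketch is how little information is needed: a handful of sensor readings is enough to distinguish a laboratory from a road, which is exactly why testing only observable behavior, rather than the source code itself, is so easy to defeat.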

This was far from an isolated incident. Less than a week later, it emerged that some Mercedes, BMW and Peugeot vehicles were using up to 50% more fuel than laboratory testing suggested. Meanwhile, on average, the gap between tested carbon emissions and real-world performance is 40% - up from 8% in 2001. And while Volkswagen’s software broke the law, detecting test conditions to cheat on results is a widespread practice that has become an open secret in the industry.

These practices extend far beyond cars: even our televisions are faking data. Recently, some Samsung TVs in Europe were found to use less electricity in laboratory tests than under real-world viewing conditions.

In each case, the software powering the device was unavailable to be tested. In a world where cars are heavily scrutinized to ensure passenger safety, examiners had little way of determining what the software was doing at all. According to the Alliance of Automobile Manufacturers, providing access to this source code would create “serious threats to safety or security” (pdf). Even the Environmental Protection Agency agreed, arguing that opening the source code could lead to consumers hacking their cars to achieve better performance.

Anyone who’s worked in computer security will know that this is a spurious argument. Obscuring source code doesn't make software safer: on the contrary, more scrutiny allows manufacturers to find flaws more quickly.

Earlier this year, Wired demonstrated that hackers could remotely kill the brakes on a Jeep from a laptop in a house ten miles away from the vehicle. The same hackers had previously demonstrated that they could achieve complete control over a Ford Escape and a Toyota Prius: these vulnerabilities appear to be widespread and not limited to any particular manufacturer.

In the light of these exploits, it’s extremely likely that developers are already cheating tests by hacking their cars, without E.P.A. or manufacturer knowledge. Indeed, a cursory Google search reveals hackers talking about cheating on their emissions tests using Arduinos and other devices. To quote one participant in one of these forums: “I 100% believe that these tests are a complete joke/scam and therefore should be cheated with any and all available means.”

In a world where cars are increasingly driven by complex software, the only reliable way to test them is to inspect their source code. This is true following revelations that Volkswagen falsified their E.P.A. tests, and it will become increasingly crucial as autonomous vehicles become widespread.

McKinsey & Co predicts that autonomous vehicles will be widespread in around 15 years. The consequences of hacking your vehicle today are largely environmental: not something that should be discounted, but also not life-threatening except in aggregate. Once autonomous vehicles are commonplace, your car’s software algorithm will be the only thing keeping you from crashing into another family on the interstate. 

Autonomous vehicles will eliminate an enormous percentage of car accidents, and we should not fear their introduction. However, hackers will certainly attempt to alter the software running their vehicles in order to go faster, impress their friends, perform stunts, and so on. If the software infrastructure inside a vehicle is kept secret from regulators, only the manufacturer will have any way of determining if this has taken place.

More pressingly, it’s now become apparent that manufacturers can’t be trusted to protect our interests. Even if it is impossible to hack an autonomous vehicle – which would be hard to believe – we need to ensure that the algorithms and software that power these products are as rigorously testable as the steel and rubber we sit in.

Opening source code to scrutiny does not limit its owner’s intellectual property rights. It also doesn’t prevent a manufacturer from making a profit or protecting their unique inventions. It does, however, allow us to trust their products. This is important for all connected devices, but cars are uniquely life-threatening when misused. 

Legislators should act to protect our safety. In the same way that seatbelts and other safety measures were made mandatory, the source code that powers modern vehicles should be made available both to regulators and the public. Security through obscurity is no security at all.

Help is available: we’ve been doing this in the software industry for decades. The open source community should help manufacturers build more open software while retaining their intellectual property. 

Open software is in the public interest, particularly when lives are at stake.


 

I am not a developer

Since co-founding Known with Erin Richey eighteen months ago, one of my biggest professional challenges, both inwardly and outwardly, has been this:

I am not a developer.

I have development skills and was a startup CTO for a decade. Absolutely. I know how to architect a system and write code. I can smell when someone is trying to bullshit me about what their technology is and how it works. I keep on top of emerging technology and I enjoy having conversations about it.

But I am not a developer.

That's not my role. Nor should it be.

When we started Known, I became (once again) a co-founder, but also a CEO - a crucial position in any company. Among other things, the CEO is responsible for:

  • Setting strategy and vision
  • Building a nurturing company culture
  • Creating an amazing team
  • Making money

The last one probably should have come first. I think a good way of putting it is that my job is to make myself redundant - but until then, do everything that needs to be done.

If the company doesn't grow, I've failed. If the company runs out of money, I've failed. If we put out a shitty product, I've failed. If we lose momentum in the market and people stop thinking of us, I've failed.

Engineering is crucial. Design is crucial. Business development is crucial. Sales and marketing are - guess what? - crucial. You can't get by with one of those things alone.

It turns out that I still write code. Sometimes, I write a lot of code. But the more time I spend building product, the more time is taken away from doing the hard work of validating and selling it. Writing code is like spinning your wheels when you're building the wrong thing.

Validation is crucial. I'm not Steve Jobs. (For one thing, I don't have a huge team of engineers and designers whose work I can take credit for.) Figuring out product / market fit, pricing and your go-to-market strategy are not things you can hand-wave away between other things. It's a full-time profession.

If we fail, it's because of decisions I made. I've made many mistakes - as any founder does - but one of the most important was to fail to have a technical co-founder. I thought because I was the technical partner, that we didn't need one. In fact, every technology startup needs a technical co-founder, even when the CEO is technical themselves.

If we succeed, it's because we've overcome this limitation and managed to grow an awesome team at a company with a nurturing culture and a killer vision. It will be because we've made something that people want and pay for in significant numbers, and have captured value while providing even more value to the people we serve. It won't be because we have great code, although great code will be a component. It won't be because we have great design, although great design will be a component. It won't be because we have an amazing sales and marketing strategy, although we'll need it.

I'm lucky. Through Matter, we got high-end training in design thinking and access to an incredible community, as well as the ability to pitch like a pro. Through my network of peers, I'm constantly inspired by other CEOs who are building businesses they're proud of, and I learn from them as much as possible. By being in the San Francisco Bay Area, I'm a part of a community of experts. My task is to draw all of this together - as well as my startup experience, and my experience building technology. My task is to build a successful company.

I am not a developer.


 

The whole Internet: much more than the web, apps, or IoT

This morning, I woke up and checked my notifications on my phone. (I know, I know, it's a terrible habit.) I took a shower while listening to a Spotify playlist, got dressed, and put my Fitbit in my pocket. I made some breakfast and ate it in front of last night's Daily Show on my Apple TV. Then I opened my laptop, logged into Slack, launched my browser and checked my email.

I've spent a lot of time over the last decade advocating for the web as a platform. To be fair to me, as platforms go, it's a good one: an easy-to-use, interconnected mesh of friendly documents and applications that anyone can contribute to. Lately, though, I've realized that many of us have been advocating for the web to the exclusion of other platforms - and this is a huge mistake.

It's not about mobile, either. I love my iPhone 6 Plus, which in some ways is the best computing device I've ever owned (it's certainly the most accessible). Apps are fluid, beautiful, immediate and tactile. Notifications regularly remind me that I'm connected to a vast universe of information and conversations. But, no, mobile apps aren't the natural heir to the web.

Nor is it about the Internet of Things, or the dedicated devices in my home. My Apple TV is the only entertainment device I need. My Fitbit lets me know when I haven't been moving enough. I have an Air Quality Egg that attempts to tell me about air quality. My Emotiv EEG headset can tell me when I'm focused. But none of these things, either, are the future of the Internet.

I think this is obvious, but it's worth saying: no single platform is the future of the Internet. We've evolved from a world where we all sat down at desktop and laptop computers to one where the Internet is all around us. Software really has eaten the world.

What ubiquitous Internet means is that a mobile strategy, or a web strategy, aren't enough. To effectively solve a problem for people, you need to have a strategy that holistically considers the whole Internet, and the entire galaxy of devices at your disposal.

That doesn't mean you need to have a solution that works on every single device. Ubiquity doesn't have to mean saturation. Instead, the Internet has evolved to a point where you can consider the platforms that are most appropriate to the solution you're providing. In the old days, you needed to craft a solution for the web. Now, you can craft a solution for people, and choose what kinds of devices you will use to deliver it. It's even becoming feasible to create your own, completely new connected devices.

The opportunities are almost endless. Data is flowing everywhere. But as with mobile and the web in earlier eras of the Internet, there will be land grabs. When any device can talk to any device and any person, the perception will be that owning the protocols and the pipes is incredibly valuable. Of course, the real value on the Internet is that the pipes are open, and the protocols are open, and anyone can build a solution on the network.

For me, this is a huge mental shift, but one that's incredibly exciting. The web is just one part of a nutritious breakfast. We have to get used to building software that touches every part of our lives - not just the screens on our desks and in our pockets. The implications for media and art are enormous. And more than any other era of the Internet, the way we all live will be profoundly changed.


 

Why it isn't rude to talk about politics (and I think we should be doing it more)

It's often said that you shouldn't talk about politics, religion or money. I tend to think those are all part of the same thing: conversations about how the world is, and should be, organized. Anyone who's been watching the American electoral system warm up its engines will be in no doubt that your views and status in any one of those prongs affect the other two. And all are inseparable from the cognitive biases that your context in the world has given you.

So let's restate the maxim: it's rude to talk about the world.

Why?

The reason that's most often given is that people might disagree with you. It might start an argument, someone might be offended by your viewpoint, or you might be offended by a deeply-held position from someone else. As the thinking goes, we should try to avoid offending other people, and we shouldn't be starting arguments.

Living in a democracy, I take a different view. Each of us has a different context and different opinions, which we use to inform the votes we cast to elect the government we want, allowing us to help dictate how our communities should be organized. That's awesome, and a freedom we shouldn't take for granted. It's also the fundamental bedrock of being a democratic citizen.

I want to be better informed, so I can cast better votes and be a better citizen. Which means I want to hear different views, that potentially challenge my own. If you define offence as a kind of shock at someone else's disregard for your own principles, I want to be offended. I want to know other peoples' deeply-held beliefs and principles, because they allow me to calibrate mine. I don't exist in a vacuum.

I think the world would be better if we used our freedoms and were more open with our beliefs. The challenge is that it is not always safe to do so. Middle class politeness is one thing; for millions of Muslims in America, like communists and Jews before them, sharing their beliefs can be life-threatening. For a supposedly democratic nation, America is spectacularly good at stigmatizing entire groups of people.

I'd like to think that this is where the politeness principle comes from, as a kind of protection mechanism for more vulnerable members of our community. I don't think that's the case. I think it's much more to do with maintaining a cultural hegemony, and the harmful illusion that all citizens are united in their beliefs and principles.

Citizens don't all have the same beliefs and principles. This is part of the definition of democracy, and we should embrace it.

Citizens don't all have the same privileges and contexts. As a white, middle-class male, I have privileges that many people in this country are not afforded, and a very secure filter bubble to sit inside. I think it's my duty to listen and amplify beyond the walls of that bubble. Candidates for the President of the United States are, in 2015, suggesting that we have "a Muslim problem" in terms that echo the Jewish Question from before the Second World War. Even if you don't believe in advocating for people in ways that don't directly affect you, this directly affects you. It's all about what kind of country we want to be living in. It's all about how it's organized.

It's also about what kind of world we want to be living in. I think it's also my duty, as a citizen of one of the wealthiest nations on earth, to listen and amplify beyond our border walls. Citizens of countries like Iran, Yemen and Burkina Faso are people too, with their own personal politics, religions, hopes and dreams.

We've been given this incredible freedom to talk and advocate, to assemble and discuss, and we should use it.

Yes, there will be arguments. It would be nice to say, "hey, we shouldn't get angry with each other," but these are issues that cut to the core of society. Tone policing these debates is in itself oppressive and undemocratic. And while I'd like to be able to say, "we should have a free and even exchange of ideas, and I won't think less of you for what you believe," that actually isn't true. If you believe that Muslims are second class citizens, or that the Black Lives Matter movement isn't important, I do think a little worse of you, just as some of you will likely think worse of me for thinking socialism is an okay idea or for not believing in God. We can respect each other as citizens, and have respect for our right to have opinions. We should still talk. And as dearly held as my beliefs are, I want to know when you think I'm wrong: it's how I learn.

What we shouldn't do is tell people that they should just accept what they're given, and take the world as it is. That's not what being in a democracy is all about, and it's what we do when we tell people to shut up about what they believe.


 

Is crowdfunding the answer in a post-ad universe?


Albert Wenger of Union Square Ventures asks:

How then is journalism to be financed? As I wrote in 2014, I continue to believe that crowdfunding is the answer. Since then great progress has been made by Beaconreader, Kickstarter’s Journalism category, and also Patreon. Together the amounts are still small but it is early days. Apple’s decision to support these adblockers may well help accelerate the growth of crowdfunding and that would be a good thing – I don’t like slow page loads and distracting ads but I will happily support content creation directly (just highly unlikely to do so through micropayments while reading). All of this provides one more reason to support Universal Basic Income – a floor for every content creator and also more people who can participate in crowdfunding.

I've also heard Universal Basic Income argued for as a solution to funding open source projects. I'm not sure I buy it, so to speak - I think it's not fair to assume that content creators should live on a minimum safety net wage. I do strongly believe in a Universal Basic Income, but as a strong safety net that promotes economic growth rather than something designed to be relied on. For one thing, what happens if everyone falls back to a Universal Basic Income? Could the system withstand that, and would the correct incentives be in place?

I love the idea of crowdfunding content. This does seem to put incentives in the correct place. However, when systems like Patreon work well (and they often do), the line between crowdfunding and a subscription begins to blur. When you're paying me whenever I create content, with a monthly cap, and I create content on a regular basis, it starts to look a lot like it's just a monthly subscription. If you pick up enough monthly subscriptions, it starts to look like real money - a thousand people at $10 a month would lead to a very comfortable wage for a single content creator (even in San Francisco and New York).
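The back-of-the-envelope arithmetic behind that claim is easy to check (the patron count and pledge amount are the hypothetical figures from the paragraph above, ignoring platform fees):

```python
# Sanity check of the figure quoted above: 1,000 patrons at $10/month.
patrons = 1000
monthly_pledge = 10  # dollars, before platform and payment fees

annual_income = patrons * monthly_pledge * 12
print(annual_income)  # 120000 dollars per year
```

$120,000 a year from a thousand recurring supporters is indeed a comfortable individual wage, which is why the distinction between "crowdfunding" and "subscription" mostly disappears at that scale.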

So remove the complexity: recurring crowdfunding is just a subscription with a social interface. Which is fine. For recurring content like news sources and shows, I think subscriptions are the future.

For individual content like movies, albums and books, campaigns begin to make a lot of sense. But crowdfunding isn't magic: funders won't necessarily show up. I've been told that I should really have 33% of my campaign contributions pre-confirmed before the campaign begins, and I suspect that's not possible for most unknown artists.

There needs to be a positive signal of quality. In the old days, there were PR campaigns, which were paid for by record labels and publishing companies. Unless we only want rich artists and established brands to make money making content, we need a great, reliable way to discover new, high-quality independent media. And then we need to be able to make an informed decision whether we want to buy.

As great as Patreon, Kickstarter, Indiegogo and the others are, we're not there yet. Social media won't get us there alone - at least not as is. But there's money to be made, and I'm convinced that whoever unlocks discovery will unlock the future of content on the web.

 

Image: Crowdfunding by Rocío Lara


 

Meet the future of online commerce - and the future of the web.


We're all used to content unbundling: very few of us are loyal to magazines, blogs or personal websites any more. We consume content through our social feeds, clicking on articles that people we care about have recommended. Articles are atoms of content, flowing in a social stream, unbundled from their parent publications. Very few of us visit content homepages any more.

Products like Stripe Relay let vendors do the same with commerce. Suddenly you can get products in your social stream, which you can share and comment on, as well as buy right there. There's no need to visit a store homepage like Amazon.com. You can find products anywhere on the web, and click to buy wherever you encounter them.

There's no point in vendors having apps: the app experience is handled by the social stream (be it Facebook, Twitter, or something more open). The homepage also becomes significantly less crucial to purchasing, just as it's become much less crucial to serving content. In fact, there's often no need to visit a standalone web page at all, except perhaps to learn more about the product. Even then, you can imagine this extended content traveling along the social stream with the main post, in the same way that Facebook's Instant Articles become part of their app.

It's no accident that Google and Twitter are creating an open source version of instant articles. Facebook's version is a proof of concept that shows the way: websites are not destinations any longer. The social stream has become a new kind of browser, where users navigate through social activities rather than thematic links.

Social streams used to be how we discovered content on web pages. Increasingly, the content will live in the stream itself.

A battle is raging over who will own this real estate, and Facebook is winning it hands down. However, that doesn't mean they'll win the war over how we discover information online - there's plenty of precedent in computing for the more open alternative eventually winning. And that's what Google and Twitter are betting on:

Another difference between the Google/Twitter plan and other mobile publishing projects is that Google and Twitter won’t host publishers’ content. Instead, the plan is to show readers cached Web pages — a “snapshot of [a] webpage,” in Google’s words — from publishers’ sites.

The language of the web will still be a crucial part of how we communicate. What's at stake is who controls how we learn about the world, and an open plan allows us to dictate how that content is consumed.

If Facebook is the Apple Mac of social feeds, Twitter has the potential to be the IBM PC. And that may, eventually, be how they succeed.

In the meantime, the web has turned a corner into a new era of social commerce and free-flowing content. There's no turning back the clock; platform owners need to embrace these dynamics and run fast.


 

"I'd like to introduce you to Elle": four September 11ths

September 11, 2001

I was in Oxford, working for Daily Information. My dad actually came into the office to let me know that it had happened - I had been building a web app and had no idea. For the rest of the day I tried to reload news sites to learn more; the Guardian was the only one that consistently stayed up.

The terror of the event itself is obvious, but more than anything else, I remember being immediately hit by the overwhelming sadness of it. Thousands of people who had just gone to work that day, like we all had to, and were trapped in their office by something that had nothing to do with them. I remember waiting for the bus home that day, watching the faces in all the cars and buses that passed me almost in slow motion, thinking that it could have been any of us. I wondered what their lives were like; who they were going home to see. Each face was at once unknowable and subject to the same shared experiences we all have.

I was the only American among my friends, and so I was less distanced from it than them. I remember waiting to hear from my cousin who had been on the New York subway at the time. I'm kind of a stealth American (no accent), so nobody guarded what they said around me. They definitely had a different take, and among them, as well as more widely, there was a sense of "America deserved this". It's hard to accurately describe the anti-American resentment that still pervades liberal Britain, but it was very ugly that day. On Livejournal, someone I followed (and knew in real life) posted: "Burn, America, burn".

One thing I agreed with them on was that we couldn't be sure what the President would do. America had elected a wildcard, who had previously held the record for number of state executions. It seemed clear that he would declare war, and potentially use this as an excuse to erode freedoms and turn America into a different kind of country; we had enough distance to be having those discussions on day one.

There were so many questions in the days that followed. Nobody really understood what had happened, and the official Bush explanations were not considered trustworthy. People brought up the American-led Chilean coup on September 11, 1973, when Salvador Allende had been deposed and killed; had the date been symbolically related to that? Al Qaeda seemed like it had come out of nowhere.

Meanwhile, the families of thousands of people were grieving.

 

September 11, 2002

I had an aisle to myself on the flight to California. The flight had been cheap, and it was obvious that if something were to happen on that day, it wouldn't be on a plane. Airport security at all levels was incredibly high; nobody could afford for there to be another attack.

I had graduated that summer. Earlier that year, my parents had moved back to California, mostly to take care of my grandmother. They were living in a small, agricultural town in the central valley, and I had decided to join them and help for a few months. This is what families do, I thought: when someone needs support, they band together and help them. Moreover, my Oma had brought her children through a Japanese internment camp in Indonesia, finding creative ways to keep them alive in horrifying circumstances. My dad is one of the youngest survivors of these camps, because of her. In turn, taking care of her at the end of her life was the right thing to do.

In contrast to the usual stereotype of California, the central valley is largely a conservative stronghold. When I first arrived, it was the kind of place where they only played country music on the radio and there was a flag on every house. Poorer communities are the ones that disproportionately fight our wars, and there was a collage in the local supermarket of everyone in the community who had joined the army and gone to fight in Afghanistan.

The central valley also has one of the largest Assyrian populations in the US, which would lead to some interesting perspectives a few years later, when the US invaded Iraq.

Our suspicions about Bush had proven to be correct, and the PATRIOT Act was in place. The implications seemed terrible, but these perspectives seemed to be strangely absent on the news. But there was the Internet, and conversations were happening all over the social web. (MetaFilter became my go-to place for intelligent, non-histrionic discussion.) I had started a comedy site the previous year, full of sarcastic personality tests and articles that were heavily influenced by both The Onion and Ben Brown's Uber.nu. Conversations were beginning to happen on the forum there, too.

I flew back to Edinburgh after Christmas, and found a job in educational technology at the university. Dave Tosh and I shared a tiny office, and bonded over talking about politics. It wasn't long before we had laid the groundwork for Elgg.

 

September 11, 2011

I was sitting at the kitchen table I'm sitting at now. It had been my turn to move to California to support a family-member; my mother was deeply ill and I had to be closer to her. I had left Elgg when she was diagnosed: there were disagreements about direction, and I was suddenly reminded how short and fragile life was.

My girlfriend had agreed that being here was important, and had come out with me, but had needed to go home for visa reasons. Eventually, after several more trips, she would decide that she didn't feel comfortable living in the US, or with marrying me. September was the first month I was by myself in my apartment, and I found myself without any friends, working remotely for latakoo in Austin.

Rather than settle in the valley, I had decided that the Bay Area was close enough. I didn't have a car, but you could BART to Dublin/Pleasanton, and be picked up from there. The valley itself had become more moderate over time, partially (I think) because of the influence of the new UC Merced campus, and the growth of CSU Stanislaus, closer to my parents. Certainly, you could hear more than country music on the radio, and the college radio station was both interesting and occasionally edgy.

I grew up in Oxford: a leafy university town just close enough to London. Maybe because of this, I picked Berkeley, another leafy university town, which is just close enough to San Francisco. (A train from Oxford to London takes 49 minutes; getting to San Francisco from Berkeley takes around 30.) My landlady is a Puerto Rican novelist who sometimes gave drum therapy sessions downstairs. If I look out through my kitchen window, I just see trees; the garden is dominated by a redwood that is just a little too close to the house. Squirrels, overweight from the nearby restaurants, often just sit and watch me, and I wonder what they're planning.

Yet, ask anyone who's just moved here what they notice first, and they'll bring up the homeless people. Inequality and social issues here are troublingly omnipresent. The American dream tells us that anyone can be anything, which means that when someone doesn't make it, or they fall through the cracks, it must be their fault somehow. It's confronting to see people in so much pain every day, but not as confronting as the day you realize you're walking right by them without thinking about it.

Countless people told me that they wouldn't have moved to the US; not even if a parent was dying. I began to question whether I had done the right thing, but I also silently judged them. You wouldn't move to another country to support your family? I asked but didn't ask them. I'm sorry your family has so little love.

I don't know if that was fair, but it felt like an appropriate response to the lack of understanding.

 

September 11, 2014

"I'm Ben; this is my co-founder Erin; and I'd like to introduce you to Elle." Click. Cue story.

We were on stage at the Folsom Street Foundry in San Francisco, at the tail end of our journey through Matter. Over five months, we had taken a simple idea - that individuals and communities deserve to own their own spaces on the Internet - and used design thinking techniques to make it a more focused product that addressed a concrete need. Elle was a construct: a student we had invented to make our story more relatable and create a shared understanding.

After a long health journey, my mother had finally begun to feel better that spring. 2013 had been the most stressful year of my life, by a long way; mostly for her, but also for my whole family in a support role. I had also lost the relationship I had once hoped I'd have for the rest of my life, and the financial pressures of working for a startup and living in an expensive part of the world had often reared their head. Compared to that year, 2014 felt like I had found all my luck at once.

Through Matter, and before that, through the indie web community, I felt like I had communities of friends. There were people I could call on to grab a beer or some dinner, and I was grateful for that; the first year of being in the Bay Area had been lonely. The turning point had been at the first XOXO, which had been a reminder that individual creativity was not just a vital part of my life, but was something that could flourish on its own. I met lovely people there, and at the sequel the next year.

California had given me opportunities that I wouldn't have had anywhere else. It's also, by far, the most beautiful place I've ever lived. Standing on that stage, telling the world what we had built, I felt grateful. I still feel grateful now. I'm lucky as hell.

I miss everyone I left behind a great deal, but any time I want to, I can climb in a metal tube, sit for eleven hours while it shoots through the sky, and go see them. After all the health problems and startup adventures, I finally went back for three weeks last December. Air travel is odd: the reality you step out into supplants the reality you left. Suddenly, California felt like a dream, and Edinburgh and Oxford were immediate and there, like I had never left. The first thing I did was the first thing anyone would have done: I went to the pub with my friends.

But I could just as easily have walked out into Iran, or Israel, or Egypt, or Iraq, or Afghanistan. Those are all realities too, and all just a sky-ride in a metal tube away. The only difference is circumstance.

Just as so many people couldn't understand why I felt the need to move to America, we have the same cognitive distance from the people who live in those places. They're outside our immediate understanding, but they are living their own human realities - and our own reality is distant to them. The truth is, though, that we're all people, governed by the same base needs. I mean, of course we are.

My hope for the web has always been that getting on a plane wouldn't be necessary to understand each other more clearly. My hope for Known was that, in a small way, we could help bridge that distance, by giving everyone a voice that they control.

I think back to the people I watched from that bus stop often. You can zoom out from there, to think about all the people in a country, and then a region, and then the world. Each one an individual, at once unknowable and subject to the same shared experiences we all have. We are all connected, both by technology and by humanity. Understanding each other is how we will all progress together.


 

Get over yourself: notes from a developer-founder-CEO

Known, the company I founded with Erin Jo Richey, is the third startup I've been deeply involved in. The first created Elgg, the open source social networking platform; I was CTO. The second is latakoo, which helps video professionals at organizations like NBC News send video quickly and in the correct format without needing to worry about compression or codecs. Again, I was CTO. In both cases, I was heavily involved in all aspects of the business, but my primary role was tending product, infrastructure and engineering.

At Known, I still write code and tend servers, but my role is to put myself out of that job. Despite having worked closely with two CEOs over ten years, and having spent a lot of time with CEOs of other companies, I've learned a lot while I've been doing this. I've also had conversations with developers that have revealed some incorrect but commonly-held assumptions.

Here are some notes I've made. Some of these I knew before; some of these I've learned on the job. But they've all come up in conversation, so I thought I'd make a list for anyone else who arrives at being a business founder via the engineering route. We're still finding our way - Known is not, yet, a unicorn - but here's what I have so far.

 

The less I code, the better my business does.

I could spend my time building software all day long, but that's only a fraction of the story. There's a lot more to building a great product than writing code: you're going to need to talk to people, constantly, to empathize with the problems they actually have. (More on this in a second.) Most importantly, there's a lot more to building a great business than building a great product. You know how startup founders constantly, infuriatingly, talk about "hustling"? The language might be pure machismo, but the sentiment isn't bullshit.

When I'm sitting and coding, I'm not talking to people, I'm not selling, I'm not gaining insight and there's a real danger my business's wheels are spinning without gaining any traction.

The biggest mistake I made on Known was sitting down and building for the first six months of our life, as we went through the Matter program. If I could do it again, I would spend almost none of that time behind my screen.

 

Don't scratch your own itch.

In the open source world, there's a "scratch your own itch" mentality: build software to solve your own problems. It's true that you can gain insight into a problem that way. But you're probably not going to want to pay yourself for your own product, so you'd better be solving problems for a lot of other people, too. That means you need to learn what peoples' itches are, and most importantly, get over the idea that you know better than them.

Many developers, because they know computers better than their users, think they know problems better than them, too. The thing is, as a developer, your problems are very different indeed. You use computers dramatically differently to most people; you work in a different context to most people. The only way to gain insight is to talk to lots and lots of people, constantly.

If you care passionately about a problem, the challenge is then to accept it when it's not shared with enough people to be a viable business. A concrete example: we learned the hard way that people, generally, won't pay for an indie web product for individuals, and took too long to explore other business avenues. (Partially because I care dearly about that problem and solution.) A platform for lots of people to share resources in a private group, with tight integration with intranets and learning management systems? We're learning that this is more valuable, and more urgently needed. We're investigating much more, and I'm certain we'll continue to evolve.

 

Pick the right market; make the right product. Make money.

Learning to ask people for money is the single hardest thing I've had to do. I'm getting better at it, in part thanks to the storytelling techniques we picked up at Matter.

Product-market fit is key. It can't be overstated how important this is.

Product-market fit means being in a good market with a product that can satisfy that market.

The problem you pick is directly related to how effectively you can sell - not just because you need to be solving real pain for people, but because different problems have different values. A "good market" is one that can support a business well, both in terms of growth and finance. Satisfy that market, and, well, you're in business.

We sell Known Pro for $10 a month: hardly a bank-breaking amount. Nonetheless, we've had plenty of feedback that it's much too expensive. That's partially because the problem we were solving wasn't painful enough, and partially because consumers are used to getting their applications for free, with ads to support them.

So part of "hustling" is about picking a really important problem for a valuable market and solving it well. Another part is making sure the people who can benefit from it know about it. The Field of Dreams fallacy - "if you build it, they will come" - takes a lot of work to avoid. I have a recurring task in Asana that tells me to reach out to new potential customers every day, multiple times a day, but sales is really about relationships, which takes time. Have conversations. Gain insight. See if you can solve their problems well. Social media is fun but virtually useless for this: you need to talk to people directly.

And here's something I've only latterly learned: point-blank ask people to pay. Be confident that what you're offering is valuable. If you've done your research, and built your product well, it is. (And if nobody says "yes", then it's time to go through that process again.)

 

Do things that don't scale in order to learn.

Startups need to do things that scale over time. It's better to design a refrigerator once and sell lots of them than to build bespoke refrigerators. But in the beginning, spending time solving individual problems, and holding peoples' hands, can give you insight that you can use to build those really scalable solutions.

Professional services like writing bespoke software are not a great way to run a startup - they're inherently unscalable - but they can be an interesting way to learn about which problems people find valuable. They're also a good way to bootstrap, in the early stages, as long as you don't become too dependent on them.

 

Be bloody-minded, but only about the right things.

Lots of people will tell you you're going to fail. You have to ignore those voices, while also knowing when you really are going to fail. That's why you keep talking to people, making prototypes, searching for that elusive product-market fit.

Choosing what to be bloody-minded about can be nuanced. For example:

 

Technology doesn't matter (except when it does).

Developers often fall down rabbit holes discussing the relative merits of operating systems and programming languages. Guess what: users don't care. Whether you use one framework or another isn't important to your bottom line - unless it will affect hiring or scalability later on. It's far better to use what you know.

But sometimes the technology you choose is integral to the problem. I care about the web, and figured that a responsive interface that works on any web browser would make us portable across platforms. This was flat-out wrong: we needed to build an app. We still need to build an app.

The entire Internet landscape has changed over the last six years, and we were building for an outdated version that doesn't really exist anymore. As technologists, we tend to fall in love with particular products or solutions. Customers don't really work that way, and we need to meet them where they're at.

 

Non-technical customers don't like options.

As a technical person, I like to customize my software. I want lots of options, and I always have: I remember changing my desktop fonts and colors as a teenager, or writing scripts for the chatrooms I used to join. So I wasn't prepared, when we started having more conversations with real people, for how little they want that. Apple is right: things should just work. Options are complexity; software should just do the right things.

I think that's one reason why there's a movement towards smaller apps and services that just do one thing. You can focus on solving one thing well, without making it configurable within an inch of its life. If a user wants it to work a different way, they can choose a different app. That's totally not how I wish computers worked for people, but if there's one thing I've learned, it's this: what I want is irrelevant.

 

Run.

Run fast. Keep adjusting your direction. But run like the wind. You're never the only person in the race.

 

Investment isn't just not-evil: it's often crucial.

Bootstrapping is very hard for any business, but particularly tough if you're trying to launch a consumer product, which needs very wide exposure to gain traction and win in the marketplace. Unless you're independently wealthy or have an amazing network of people who are, you will need to find support. Money aside, the right investors become members of your team, helping you find success. Their insights and contacts will be invaluable.

But that means you have to have your story straight. Sarah Milstein puts it perfectly:

Entrepreneurs understandably get upset when VCs don’t grasp your business’s potential or tell you your idea is too complex. While those things happen, and they’re shitty, it’s not just that VCs are under-informed. It’s also that their LPs won’t support investments they don’t understand. Additionally, to keep attracting LP money, VCs need to put their money in startups that other investors will like down the road. VCs thus have little incentive to try to wrap their heads around your obscure idea, even if it’s possibly ground-breaking. VCs are money managers; they do not exist to throw dollars into almost any idea.

Keep it simple, stupid. Your ultra-cunning complicated mousetrap or niche technical concept may not be investable. You know you're doing something awesome, but the perception of your team, product, market and solution has to be that it has a strong chance of success. Yes, that rules some ventures out from large-scale investment and partially explains why the current Silicon Valley landscape looks like it does. So, find another way:

 

Be scrappy.

Don't be afraid of hacks or doing things "the wrong way". If you follow all the rules, or you're afraid of going off-road and trying something new, you'll fail. Beware of recipes (but definitely learn from other peoples' experiences).

 

Most of all: get over yourself, and get over why you fell in love with computers.

If empathy-building conversations and user testing tell you one thing, it's this: your assumptions are almost always wrong. So don't assume you have all the answers.

You probably got into computers well before most people. Those people have never known the computing environment you loved, and it's never coming back. You're building for them, because they're the customer: in many ways the hardest thing is to let go of what you love about computers, and completely embrace what other people need. A business is about serving customers. Serve them well by respecting their opinions and their needs. You are not the customer.

It's a hard lesson to learn, but the more I embrace it, the better I do.

 

Need a way to privately share and discuss resources with up to 200 people? Check out Known Pro or get in touch to learn about our enterprise services.


 

On the new web, get used to paying for subscriptions

The Verge reports that YouTube is trying a new business model:

According to multiple sources, the world’s largest video-sharing site is preparing to launch its two separate subscription services before the end of 2015 — Music Key, which has been in beta since last November, and another unnamed service targeting YouTube’s premium content creators, which will come with a paywall. Taken together, YouTube will be a mix of free, ad-supported content and premium videos that sit behind a paywall.

At first glance, this seems like a brave new move for YouTube, which has been ad-supported since its inception. But it turns out that ads on the platform actually haven't been doing that well - and have been pulling down Google's Cost-Per-Click ad revenues as a whole.

However, during the company's earnings call on Thursday, Google's outgoing CFO Patrick Pichette dismissed mobile as the reason for the company's cost-per-click declines. Instead it is YouTube's fault. YouTube's skippable TrueView ads "currently monetize at lower rates than ad clicks on Google.com," Mr. Pichette said. He added that excluding TrueView ads -- which Google counts as ad clicks when people don't skip them -- the number of ad clicks on Google's own sites wouldn't have grown as much in the quarter but the average cost-per-click "would be healthy and growing year-over-year."

If Google's CPC ad revenue would otherwise be growing, it makes sense to switch YouTube to a different revenue model. Subscriptions are tough, but consumers have already shown that they're willing to pay to access music and entertainment services (think Spotify and Netflix).

But what if those revenues don't continue to climb? Back in May, Google confirmed that more searches take place on mobile than on desktop. That pattern continues all over the web: smartphones are fast becoming our primary computing devices, and you can already think of laptops and desktops as the minority.

Enter Apple, which is going to include native ad blocking in the next version of iOS:

Putting such “ad blockers” within reach of hundreds of millions of iPhone and iPad users threatens to disrupt the $70 billion annual mobile-marketing business, where many publishers and tech firms hope to generate far more revenue from a growing mobile audience. If fewer users see ads, publishers—and other players such as ad networks—will reap less revenue.

This is an obvious shot across the bow to Google, but it also serves another purpose. Media companies disproportionately depend on advertising for revenue. The same goes for consumer web apps: largely thanks to Google, it's very difficult to convince consumers to pay for software. They're used to getting high-quality apps like Gmail and Google Docs for free, in exchange for some promotional messages on the side. In a universe where web browsers block ads, the only path to revenue is to build your own app.

From Apple's perspective, this makes sense: it encourages more people to build native apps on their platform. The trouble is, users spend most of their time in just five apps - and most users don't download new apps at all. The idea of a smartphone user deftly flicking between hundreds of beautiful apps on their device is a myth. Media companies who create individual apps for their publications and networks are tilting at windmills and wasting their money.

Which brings us back to subscriptions. YouTube's experiment is important, because it's the first time a mass-market, ad-supported site - one that everybody uses - has switched to a subscription model. If it works, and users accept subscription fees as a way to receive content, more and more services will follow suit. I think this is healthy: it heralds a transition from a personalized advertising model that necessitates tracking your users to one that just takes money from people who find what you do valuable. You can even imagine Google providing a subscription mechanism that would allow long-tail sites with lower traffic to also see payment. (Google Contributor is another experiment in this direction.)

If it doesn't work, we can expect to see more native content ads: ads disguised as content, written on a bespoke basis. These are impossible to block, but they're fundamentally incompatible with long-tail sites with low traffic. They also violate the line between editorial and advertising.

Media companies find themselves in a tough spot. As Bloomberg wrote earlier this year:

This is the puzzle for companies built around publishing businesses that thrived in the 20th century. Ad revenue has proved ever harder to come by as reading moves online and mobile, but charging for digital content can drive readers away.

Something's got to give.

 

Photo by Moyan Brenn on Flickr.


 

What would it take to save #EdTech?

Education has a software problem.

98% of higher educational institutions have a Learning Management System: a software platform designed to support the administration of courses. Larger institutions often spend over a million dollars a year on them, once all costs have been factored in, but the majority of people who use them - from educators through to students - hate the experience. In fact, when we did our initial user research for Known, we couldn't find a single person in either of those groups who had anything nice to say about them.

That's because the LMS has been designed to support administration, not teaching and learning. Administrators like the way they can keep track of student accounts and course activity, as well as the ability to retain certain data for years, should they need it in the event of a lawsuit. Meanwhile, we were appalled to discover that students are most often locked out of their LMS course spaces as soon as the course is over, meaning they can't refer back to their previous discussions and feedback as they continue their journey towards graduation.

The simple reason is that educators aren't the customers, whereas administrators have buying power. From a vendor's perspective, it makes sense to aim software products at the latter group. However, it's a tough market: institutions have a very long sales cycle. They might hear about a product six months before they run a pilot, and then deploy a product the next year. And they'll all do it at the same time, to fit in with the academic calendar. At the time of writing, institutions are looking at software that they might consider for a pilot in Spring 2016. Very few products will make it to campus deployment.

There are only a few kinds of software vendors that can withstand these long cycles for such a narrow market. By necessity, they must have "runway" - the length of time a company can survive without additional revenue - to last this cycle for multiple institutions. It follows that these products must have high sticker prices; once they've made a sale, vendors cling to their customers for dear life, which leads to outrageous lock-in strategies and occasionally vicious intra-vendor infighting.
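The runway arithmetic above is simple but worth making concrete. A minimal sketch, with all figures invented for illustration:

```python
def runway_months(cash_on_hand: float, monthly_burn: float) -> float:
    """Months a vendor can survive with no new revenue.

    A rough, illustrative model only: a real runway calculation
    would account for incoming revenue, hiring plans and variable costs.
    """
    if monthly_burn <= 0:
        raise ValueError("monthly burn must be positive")
    return cash_on_hand / monthly_burn

# Hypothetical vendor: $2.4M in the bank, spending $100k a month.
print(runway_months(2_400_000, 100_000))  # → 24.0
```

A vendor whose sales cycle stretches past a year, across several institutions at once, needs that number comfortably above the cycle length, which is one more reason the sticker prices run so high.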

Why can't educators buy software?

If it would lower costs and prevent lock-in, why don't institutions typically allow on-demand educator purchasing? One reason is what I call the Microsoft Access effect. Until the advent of cloud technologies, it was common for any medium to large organization to have hundreds, or even thousands, of Access databases dotted around their network, supporting various micro-activities. (I saw this first-hand early in my career, as IT staff at the University of Oxford's Saïd Business School.) While it's great that any member of staff can create a database, the IT department is then expected to maintain and repair it. The avalanche of applications can quickly become overwhelming - and sometimes they can overlap significantly, leading to inefficient overspending and further maintenance nightmares. For these and a hundred other reasons, purchasing needs to be planned.

A second reason is that, in the Internet age, applications do interesting things with user data. A professor of behavioral economics, for example, isn't necessarily also going to be an expert in privacy policies and data ownership. Institutions need to be very careful with student data, because of legislation like FERPA and other factors that could leave them exposed to being sued or prosecuted. Therefore, for very real legal reasons, software and services need to be approved.

The higher education bubble?

Some startups have decided to overcome these barriers by declaring that they will disrupt universities themselves. These companies provide Massively Open Online Courses directly, most often without accreditation or any real oversight. I don't believe they mean badly: in theory an open market for education is a great idea. However, institutions provide innumerable protections and opportunities for students that for-profit, independent MOOCs cannot provide. MOOCs definitely have a place in the educational landscape, but they cannot replace schools and universities, as much as it is financially convenient to say that they will. Similarly, some talk of a "higher education bubble" out of frustration that they can't efficiently make a profit from institutions. If it's a bubble, it's one that's been around for well over a thousand years. Universities, in general, work.

However, as much as startups aren't universities, universities are also not startups. Some institutions have decided to overcome their software problem by trying to write software themselves. Sometimes it even works. The trouble is that effective software design does not adhere to the same principles as academic discussion or planning; you can't do it by committee. Institutions will often try and create standards, forgetting that a technology is only a standard if people are using it by bottom-up convention (otherwise it's just bureaucracy). Discussions about features can sometimes take years. User experience design falls somewhere towards the bottom of the priority list. The software often emerges, but it's rarely world class.

Open source to the rescue.

Open source software like WordPress has been a godsend in this environment, not least because educators don't need to have a budget to deploy it. With a little help, they can modify it to support their teaching. The problem is that most of these platforms aren't designed for them, because there's no way for revenue to flow to the developers. (Even when educators use specialist hosting providers like Reclaim Hosting - which I am a huge fan of - no revenue makes its way to the application developers in an open source model.) Instead, they take platforms like WordPress, modify them, and are saddled with the maintenance burden for the modifications, minus the budget. While this may support teaching in the short-term, there's little room for long-term strategy. The result, once again, can be poor user experience and security risks. Most importantly, educators run the risk of fitting their teaching around available technology, rather than using technology to support their pedagogy. Teaching and learning should be paramount.

As Audrey Watters recently pointed out, education has nowhere near enough criticism about the impact of technology on teaching.

So where does this leave us?

We have a tangle of problems, including but not limited to:

  • Educators can't acquire software to support their teaching
  • Startups and developers can't make money by selling software that supports teaching
  • Institutions aren't good at making software
  • Existing educational software costs a fortune, has bad user experience and doesn't support teaching

I am the co-founder and CEO of a startup that sells its product to higher education institutions. I have skin in this game. Nonetheless, let's remove "startups" from the equation. There is no obligation for educational institutions to support new businesses (although they certainly have a role in, for example, spinning research projects into ventures). Instead, we should think about the inability of developers to make a living building software that supports teaching. Just as educators need a salary, so do the developers who make tools to help them.

When we remove startups, we also remove an interest in "disrupting" institutions, and locking institutions into particular kinds of technologies or contracts. We also remove a need to produce cookie-cutter one-size-fits-all software in order to scale revenue independently of production costs. In teaching, one size never fits all.

We also know that institutions don't have a lot of budget, and certainly can't support the kind of market-leading salaries you might expect to see at a company like Google or Facebook. The best developers, unless they're particularly mission-driven, are not likely to look at universities first when they're looking for an employer. The kinds of infrastructure that institutions use probably also don't support the continuous deployment, fail forward model of software development that has made Silicon Valley so innovative.

So here's my big "what if".

What if institutions pooled their resources into a consortium, similar to the Open Education Consortium (or, perhaps, Apereo), specifically for supporting educators with software tools?

Such an organization might have the following rules:

LMS and committee-free. The organization itself decides which software it will work on, based on the declared needs of member educators. Rather than a few large products, the organization builds lots of small, single-serving tools that do one thing well. Rather than trying to build standards ahead of time, compatibility between projects emerges over time by convention, with actual working code taking priority over bureaucracy.

Design driven. Educators are not software designers, but they need to be deeply involved in the process. Here, software is created through a design thinking process, with iterative user research and testing performed with both educators and students. The result is likely to be software that better meets their needs, released with an understanding that it is never finished, and instead will be rapidly improved during its use.

Fast. Release early, release often.

Open source. All software is maintained in a public repository and released under a very liberal license. (After all, the aim here is not to receive a return on investment in the form of revenue.) One can easily imagine students being encouraged to contribute to these projects as part of their courses.

A startup - but in the open. The organization is structured like a software company, with the same kinds of responsibilities. However, most communications take place on open channels, so that they can at least be read by students, educators and other organizations that want to learn from the model. The organization has autonomy from its member institutions, but reports to them. In some ways, these institutions are the VC investors of the organization (except there can never be a true "exit").

A mandate to experiment. The aim of the organization is not just to experiment with software, but also the models through which software can be created in an academic context. Ideally, the organization would also help institutions understand design thinking and iterative design.

There is no doubt that institutions have a lot to gain from innovative software that supports teaching on a deep level. I also think that well-made open source software that follows academic values rather than a pure profit motive could be broadly beneficial, in the same way that the Internet itself has turned out to be a pretty good thing for human civilization. As we know from public media, when products exist in the marketplace for reasons other than profit, it affects the whole market for the better. In other words, this kind of organization would be a public good as well as an academic one.

How would it be funded? Initially, through member institutions, perhaps on a sliding scale based on the institution's size and public / private status. I would hope that over time it would be considered worthy of federal government grants, or even international support. However, just as there's no point arguing about academic software standards on a mailing list for years, it's counter-productive to stall waiting for the perfect funding model. It's much more interesting to just get it moving and, finally, start building software that helps teachers and students learn.


 

The Internet is more alive than it's ever been. But it needs our help.

Another day, another eulogy for the Internet:

It's an internet driven not by human beings, but by content, at all costs. And none of us — neither media professionals, nor readers — can stop it. Every single one of us is building it every single day.

Over the last decade, the Internet has been growing at a frenetic pace. Since Facebook launched, over two billion people have joined, tripling the number of people who are connected online.

When I joined the Internet for the first time, I was one of only 25 million users. Now, there are a little over 3 billion. Most of them never knew the Internet many of us remember fondly; for them, phones and Facebook are what it has always looked like. There is certainly no going back, because there isn't anything to return to. The Internet we have today is the most accessible it's ever been; more people are connected than ever before. To yearn for the old Internet is to yearn for an elitist network that only a few people could be part of.

This is also the fastest the Internet will ever grow, unless there's some unprecedented population explosion. And it's a problem for the content-driven Facebook Internet. These sites and services need to show growth, which is why Google is sending balloons into the upper atmosphere to get more people online, and why Facebook is creating specially-built planes. They need more people online and using their services; their models start to break if growth is static.

Eventually, Internet growth has to be static. We can pour more things onto the Internet - hey, let's all connect our smoke alarms and our doorknobs - but ultimately, Internet growth has to be tethered to global population.

It's impressive that Facebook and Google have both managed to reach this sort of scale. But what happens once we hit the population limit and connectivity is ubiquitous?

From Vox:

In particular, it requires the idea that making money on this new internet requires scale, and if you need to always keep scaling up, you can't alienate readers, particularly those who arrive from social channels. The Gawker of 2015 can't afford to be mean, for this reason. But the Gawker of 2005 couldn't afford not to be mean. What happens when these changes scrub away something seen as central to a site's voice?

In saying that content needs to be as broadly accessible as possible, you're saying that the potential audience for any piece must be 3.17 billion people and counting. It's also a serious problem for journalism or any kind of factual content: if you're creating something that needs to be as broadly accessible as possible, you can't be nuanced, quiet, or considered.

The central thesis that you need to have a potential audience of the entire Internet to make money on it is flat-out wrong. On a much larger Internet, it should theoretically be easier to find the 1,000 true fans you need to be profitable than ever before. And then ten thousand, and a million, and so on. There are a lot of people out there.

In a growth bubble (yes, let's call it that), everyone's out to grab turf. On an Internet where there's no-one left to join and everyone is connected, the only way you can compete is the old-fashioned way: with quality. Having necessarily jettisoned the old-media model, where content is licensed to geographic regions and monopoly broadcasters, content will have to fight on its own terms.

And here's where it gets interesting. It's absolutely true that websites as destinations are dead. You're not reading this piece because you follow my blog; you're either picking it up via social media or, if you're part of the indie web community and practically no-one else, because it's in your feed reader.

That's not a bad thing at all. It means we're no longer loyal readers: the theory is that if content is good, we'll read and share it, no matter where it's from. That's egalitarian and awesome. Anyone can create great content and have it be discovered, whether they're working for News International or an individual blogger in Iran.

The challenge is this: in practice, that's not how it works at all. The challenge on the Internet is not to give everyone a place to publish: thanks to WordPress, Known, the indie web community and hundreds of other projects, they have that. The challenge is letting people be heard.

It's not about owning content. On an Internet where everyone is connected, the prize is to own discovery. In the 21st century more than ever before, information is power. If you're the way everyone learns about the world, you hold all the cards.

Information is vital for democracy, but it's not just socially bad for one or two large players to own how we discover content on the Internet. It's also bad for business. A highly-controlled discovery layer on the Internet means that what was an open market is now effectively run by one or two companies' proprietary business rules. A more open Internet doesn't just lead to freedom: it leads to free trade. Whether you're an activist or a startup founder, a liberal or a libertarian, that should be an idea you can get behind.

The Internet is not dead: it's more alive than it's ever been. The challenge is to secure its future.


 

Market source: an open source ecosystem that pays

Open source is a transformative model for building software. However, there are a few important problems with it, including but not limited to:

  1. "Libre" has become synonymous with "no recurring license", meaning it's hard for vendors to make money from open source software in a scalable way.
  2. As a result, "Open source businesses" are few and far between, except for development shops that provide services on top of platforms that other people have built for free, and service businesses like Red Hat. (Red Hat is the only sizeable open source business.)
  3. Even if the cost to the end user is zero, the total cost to produce and support the software does not go down.
  4. There is a diversity problem in open source, because only a few kinds of people can afford to give their time for free, meaning that open source software misses out on a lot of potential contributions from talented people.

I believe that the core product produced by a business can never be open source. In Red Hat's case, it's services. In Automattic's case, it's Akismet and the WordPress.com ecosystem (WordPress itself is run by a non-profit entity). In Mozilla's case, it's arguably advertising. Even GitHub, which has enabled so much of today's open source ecosystem, itself depends on a closed-source platform. After all, they need to make money.

Nonetheless, having an open codebase is beneficial:

  1. It gives the userbase a much greater say in the direction of the software.
  2. It allows the software to be audited for security purposes.
  3. It allows the software to be adapted for environments and contexts that the original designers and architects did not consider.

So how can we retain the benefits of being open while allowing for scalable businesses?

One option I've been thinking about combines the mechanics of crowdfunding platforms like Patreon with an open source dynamic. I call it market source:

  1. End users pay a license fee to use the software. This could be as low as $1, depending on the kind of software and the dynamics of its audience. (For example, $1 is totally fair for a mobile app; an enterprise intranet platform might be significantly higher.)
  2. In return, users receive a higher level of support than they would from a free open source project, perhaps including a well-defined SLA where appropriate.
  3. Users also get access to the source code, as with any open source codebase. Participants are encouraged to file issues and pull requests.
  4. Accepted pull requests are rewarded with a share of the pool of license money. Rather than rewarding by volume of code committed - after all, some of the best commits remove code - this is decided by the project maintainers on a simple scale. Less-vital commits are rewarded with a smaller share of the pool than more important commits.
  5. Optionally: users can additionally place bounties on individual issues, such that any user with an accepted pull request that solves the issue also receives the bounty.
  6. The pool is divided up at the end of every month and automatically credited to each contributor's account.
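The pool division in steps 4 and 6 above can be sketched in a few lines. This is a minimal illustration, not a spec: the weight names and values are hypothetical stand-ins for whatever "simple scale" a project's maintainers settle on, and money is kept in integer cents to avoid floating-point rounding.

```python
from collections import defaultdict

# Hypothetical importance weights a maintainer might assign to an
# accepted pull request (names and values are illustrative only).
WEIGHTS = {"minor": 1, "normal": 2, "vital": 5}

def divide_pool(pool_cents, accepted_prs):
    """Split one month's license-fee pool among contributors.

    `accepted_prs` is a list of (contributor, importance) pairs, one per
    accepted pull request this month. Each PR earns a share proportional
    to its maintainer-assigned weight -- not its volume of code.
    """
    total_weight = sum(WEIGHTS[importance] for _, importance in accepted_prs)
    if total_weight == 0:
        return {}
    shares = defaultdict(int)
    for contributor, importance in accepted_prs:
        # Integer division: any leftover cents stay in the pool.
        shares[contributor] += pool_cents * WEIGHTS[importance] // total_weight
    return dict(shares)

# $1,000 of license fees, three accepted PRs from two contributors.
payouts = divide_pool(
    100_000,
    [("alice", "vital"), ("bob", "normal"), ("alice", "minor")],
)
```

With these example weights, alice's vital and minor commits earn 6/8 of the pool and bob's normal commit earns 2/8, which matches the intent that importance, not volume, drives the reward.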

For the first time, committers are guaranteed to be compensated for the unsolicited work they do on an open source project. Perhaps more importantly, funding is baked into the ecosystem: it becomes much easier for a project to bootstrap based on revenue, because it is understood by all stakeholders that money is a component.

The effect is that an open source project using this mechanism is a lot like a co-operative. Anyone can contribute, as long as they adhere to certain rules, and they will also receive a share of the revenue from the work they have contributed to.

These dynamics are not appropriate for every open source project. However, they create new incentives to participate in open source projects, and - were they to be successful - would create a way for new businesses to make more secure, open software without committing to giving away the value in their core product.


 

Two years of being on the #indieweb

For the last two years, I haven't directly posted a single tweet on Twitter, a single post on Facebook or LinkedIn, or a photo on Flickr. Instead, I publish on my own site at werd.io, and syndicate to my other services.

If Flickr goes away, I keep all my photos. If Twitter pivots to another content model, I keep all my tweets. If I finally shut my Facebook profile, I get to keep everything I've posted there. And because my site is powered by Known, I can search across all of it, both by content and content type.

My site is Known site zero. It's hosted on my own server, using a MongoDB back-end. I'm also writing 750 words a day on a withknown.com site - kept away from here because this site is mostly about technology, and those pieces are closer to streams of consciousness. Very shortly, though, I'll be able to syndicate from one Known site to another.

The indie web community has created a set of fantastic protocols (like webmention) and use patterns (like POSSE). I'm personally invested in making those technologies accessible to both non-technical and impatient users - partially because I'm very impatient myself.
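To make the webmention protocol mentioned above concrete: when you link to someone's post, you fetch their page, look for an advertised webmention endpoint, and POST your post's URL to it. Here is a minimal sketch of the discovery step using only the Python standard library; the function names are my own, and a full client would also check the HTTP `Link` header as the spec describes.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class _EndpointFinder(HTMLParser):
    """Find the first <link> or <a> tag advertising rel="webmention"."""

    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        # rel can hold several space-separated values, e.g. "webmention noreferrer"
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels and "href" in attrs:
            self.endpoint = attrs["href"]

def discover_endpoint(page_html, page_url):
    """Return the page's webmention endpoint as an absolute URL, or None.

    After discovery, sending the mention is a form-encoded POST of
    source=<your post's URL> and target=<the page you linked to>.
    """
    finder = _EndpointFinder()
    finder.feed(page_html)
    if finder.endpoint is None:
        return None
    # Endpoints may be relative; resolve against the page's own URL.
    return urljoin(page_url, finder.endpoint)
```

This is the mechanism that lets one independent site notify another about a link without either side depending on a central service.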

This is a community that's been very good to me, and I find it really rewarding to participate. I'm looking forward to continuing to be a part of it as it goes from strength to strength.


 

Let's expand the Second Amendment to include encryption.

The German media is up in arms today because both German politicians and journalists were surveilled by the United States. Meanwhile, Germany is being sued by Reporters Without Borders this week for intercepting email communications. Over in the UK, Amnesty International released a statement yesterday after learning that their communications had been illegally intercepted. (Prime Minister David Cameron also declared his intention to ban strong encryption this week.) France legalized mass surveillance in June.

Everyone, in other words, is spying on everyone else. This has profound democratic implications.

From Amnesty International's statement:

Mass surveillance is invasive and a dangerous overreach of government power into our private lives and freedom of expression. In specific circumstances it can also put lives at risk, be used to discredit people or interfere with investigations into human rights violations by governments.

Furthermore:

We have good reasons to believe that the British government is interested in our work. Over the past few years we have investigated possible war crimes by UK and US forces in Iraq, Western government involvement in the CIA's torture scheme known as the extraordinary rendition programme, and the callous killing of civilians in US drone strikes in Pakistan: it was recently revealed that GCHQ may have provided assistance for US drone attacks.

It has been shown that widespread surveillance creates a chilling effect on journalism, free speech and dissent. Just the fact that you know you're being surveilled changes your behavior, and as the PEN American Center discovered, this includes journalism. Journalism, in turn, is vital for a healthy democracy. A voting population is only as effective as the information they act upon.

Today is July 3. It seems appropriate to revisit the Second Amendment to the Constitution of the United States, which was passed by Congress and ratified by the States in two forms:

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed.

The Supreme Court has confirmed [PDF] that this has a historical link to the older right to bear arms in the English Bill of Rights: "That the Subjects which are Protestants may have Arms for their Defence suitable to their Conditions and as allowed by Law." The Supreme Court has also verified multiple times that the right to bear arms is an individual right.

In 2015, guns are useless at "preserving the security of a free state", and cause inordinate societal harm. Meanwhile, encryption is one of the most important tools we have for preserving democratic freedom. We already subject encryption to export controls on the munitions list. It seems reasonable, and very relevant, to expand the definition of "arms" in the Second Amendment to include it. Let's use the effort that has been put into allowing individual citizens to own firearms, and finally direct it to preserving democracy.

While this would protect the democratic rights of US citizens, it would not impact the global surveillance arms race in itself. It would be foolish to only consider the freedom of domestic citizens: Americans are not more important than anyone else. However, considering the prevalence of American Internet services, and the global influence of American policy as a whole, it would be a very good first step.


 

Just zoom out.

Sometimes it's important to step out of your life for a while.

I spent the last week in Zürich, reconnecting with my Swiss family in the area. A long time ago, I named an open source software platform after a nearby town that my family made their home hundreds of years ago: Elgg. I hadn't been back to the town since I was a child, and visiting it stirred echoes of memories that gained new focus and perspective.

I grew up in the UK, more or less, and while I had some extended family there, mostly they were in Switzerland, the Netherlands and the United States. I've always been fairly in touch with my US family, but not nearly enough with my European cousins. Effectively meeting your family for the first time is surreal, and it happens to me in waves.

Growing up as an immigrant, and then having strong family ties to many places, means that everywhere feels like home and nothing does. I often say that family is both my nationality and my religion. Moving to the US, where I've always been a citizen, feels no more like coming home than moving to Australia, say. Similarly, walking around Zürich felt like a combination of completely alien and somewhere I'd always been - just like San Francisco, or Amsterdam. My ancestors were textile traders there, centuries ago, and had a say in the running of the city, and as a result, our name crops up here and there, in museums and on street corners.

So, Zürich is in the atoms of who I am. So is the Ukrainian town where another set of ancestors fled the Pogroms; so are the ancestors who boarded the Mayflower and settled in Plymouth; so are the thousands of people and places before and since. My dad is one of the youngest survivors of the Japanese concentration camps in Indonesia, who survived because of the intelligence and determination of my Oma. His dad, my Opa, was a prominent member of the resistance. My Grandpa translated Crime and Punishment into English and hid his Jewishness when he was captured as a prisoner of war by the Nazis. My great grandfather, after arriving in New England from Ukraine, was a union organizer, fighting for workers' rights. My Grandma was one of the kindest, calmest, wisest, most uniting forces I've ever known in my life, together with my mother, even in the face of wars, hardship, and an incurable disease.

All atoms. Entire universes in their own right, but also ingredients.

And so is Oxford, the city where I grew up. Pooh Sticks from the top of the rainbow bridge in the University Parks; the giant horse chestnut tree that in my hands became both a time machine and a spaceship; the Space Super Heroes that my friends and I became over the course of our entire childhoods. Waking up at 5am to finish drawing a comic book before I went to school. Going to an English version of the school prom with my friends, a drunken marquee on the sports field, our silk shirts subverting the expected uniform in primary colors. Tying up the phone lines to talk to people on Usenet and IRC when I got home every day, my open window air cooling my desktop chassis. My friends coming round on Friday nights to watch Friends, Red Dwarf and Father Ted; LAN parties on a Saturday.

In Edinburgh, turning my Usenet community into real-life friends. Absinthe in catacombs. Taking over my house on New Year's Eve, 1999, and having a week-long house party. Walking up Arthur's Seat at 2am in a misguided, and vodka-fuelled attempt to watch the dawn. Hugging in heaps and climbing endless stairs to house parties in high-ceilinged tenements. Being bopped on the head with John Knox's britches at graduation. The tiny, ex-closet workspace we shared when we created Elgg, where the window didn't close properly and the smell of chips wafted upwards every lunchtime. And then, falling in love, which I can't begin to express in list form. The incredible people I have been lucky enough to have in my life; the incredible people who I have also lost.

And California too. We are all tapestries, universes, and ingredients. Works in progress.

If we hold a screen to our faces for too long, the world becomes obscured. Sometimes it's important to step out of your life for a while, so you can see it in its true perspective.


 

If we want open software to win, we need to get off our armchairs and compete.

The reason Facebook dominates the Internet is that while we were busy having endless discussions about open protocols, about software licenses, about feed formats and about ownership, they were busy fucking making money.

David Weinberger writes in the Atlantic:

In the past I would have said that so long as this architecture endures, so will the transfer of values from that architecture to the systems that run on top of it. But while the Internet’s architecture is still in place, the values transfer may actually be stifled by the many layers that have been built on top of it.

In short, David worries that the Internet has been paved, going so far as to link to Joni Mitchell's Big Yellow Taxi as he does it. If the decentralized, open Internet is paradise, he implies, then Facebook is the parking lot.

While he goes on to argue, rightly, that the Internet isn't completely paved and that open culture is alive and well, the assumption that an open network will necessarily translate to open values is obviously flawed. I buy into those values completely, but technological determinism is a fallacy.

I've been using the commercial Internet (and building websites) since 1994 - which is a lifetime in comparison to some Internet users, but makes me a spring chicken in comparison to others. The danger for people like us is that we tend to think of the early web in glowing terms: every website felt like it was built by an intern and hosted in a closet somewhere (and may well have been), so the experience was gloriously egalitarian. Anyone could make a home on the web, whether you were a megacorporation or sole enthusiast, and it had an equal chance of gaining an audience. Many of us would like this web back.

Before Reddit, there were Usenet newsgroups. (I'll take a moment to let the Usenet faithful calm down again.) Every September, a new group of students would arrive at their respective universities and get online, without any understanding of the cultural mores that had come before. They would begin chatting on Usenet newsgroups, and long-standing users would groan inwardly as they quietly taught the new batch all about - remember this word? - netiquette.

In September, 1993, AOL began offering Usenet access to its commercial subscribers. This moment became known as "the eternal September", because the annual influx of new Internet users became a constant trickle, and then a deluge. There was no going back, and the Internet culture that had existed before began to give way to a new culture, defined by the commercial users who were finding their way online.

"Eternal September" is a loaded, elitist term, used by people who wanted to keep the Internet for themselves. As early web users, rich with technostalgia and a warm regard for the way things were, we run the risk of carrying the torch of that elitism.

The central deal of the technology industry is this: keep making new shit. Innovate or die. You can be incredibly successful, making a cultural impact and/or personal wealth beyond your wildest dreams, but the moment you rest on your laurels, someone is going to eat your lunch. In fact, this is liable to happen even if you don't rest on your laurels. It's a geek-eat-geek world out there.

For many people, Facebook is the Internet. The average smartphone user checks it 14 times a day, which of course means that a lot of smartphone users check it far more than that. In the first quarter of this year, Facebook had 1.44 billion monthly active users. That means that almost 20% of the people on Earth don't just have a Facebook account: they check it regularly. In comparison, WordPress, which is probably the platform most used to run an independent personal website, powers around 75 million sites in total - but Apple's App store has powered over 100 billion app downloads.

Are all those people wrong? Does the influx of people using Facebook as the center of their Internet experience represent a gargantuan eternal September? Or have apps just snuck up and eaten the web's lunch?

Back in 2011, I sat on a SXSW panel (yes, I'm that guy) about decentralized web identity with Blaine Cook and Christian Sandvig. While Blaine talked about open protocols including webfinger, and I talked about the ideas that would eventually underlie Known, Christian was noticeably contrarian. When presented with the central concepts around decentralized social networking, his stance was to ask, "why do we need this?" And: "why will this work?"

In the Atlantic, David Weinberger references Christian's paper "The Internet as the Anti-Television" (PDF), where he argues that the rise of CDNs and other technologies built to solve commercial distribution problems have meant that the egalitarian playing field that we all remember on the web is gone forever. While services like CloudFlare allow more people than ever before to make use of a CDN, it requires some investment - as do domain names, secure certificates, and even hosting itself. (The visual and topical diversity of GeoCities and even MySpace, though roundly mocked, was very healthy in my opinion, but is gone for good.)

For most people, Facebook is faster, easier to use, and, crucially, free.

Rather than solving these essential user problems, the open web community disappeared up its own activity streams. Mailing list after mailing list filled with technical arguments, without any products or actual technical innovation to back them up. Worse, in many organizations, participating in these arguments was seen as productive work, rather than meaningless circling around the void. Very little software was shipped, and as a result, very little actual innovation took place.

Organizations who encourage endless discussion about web technologies are, in a very real way, promoting the death of the open web. The same is true for organizations that choose to snark about companies like Facebook and Google rather than understanding that users are actually empowered by their products. We need to meet people where they're at - something the open web community has been failing at abysmally. We are blindsided by technostalgia and have lost sight of innovation, and in doing so, we erase the agency of our own users.

"They can't possibly want this," we say, dismissively, remembering our early web and the way things used to be. Guess what: yes they fucking do.

This stopped being a game some time ago. Ubiquitous surveillance, diversity in publishing and freedom of the press are hardly niche issues: they're vital to global democracy. A world in which most of our news is delivered to us through a single provider (like Facebook), and where our every movement and intention can be tracked by an organization (like Google) is not where any of us should want to be. That's not inherently Facebook or Google's fault: as American corporations, they will continue to follow commercial opportunities. It's not a problem we can legislate or just code away. The issue is that there isn't enough of a commercial counterbalancing force, and it really matters.

Part of the problem is that respectful software - software that protects a user's privacy and gives them full control over their data - has become political. In particular, "open source" has become synonymous with "free of charge", and even tied up with anti-capitalist causes. This is a mistake: open source and libre software were never intended to be independent from cost. The result of tying up software that respects your privacy with the idea that software should come without cost is that it's much harder to make money from it.

If it's easier to make money by violating a user's autonomy than protecting it, guess which way the market will go?

A criticism I personally receive on a regular basis is that we're trying to make money with Known (which is an open source product using the Apache license). A common question is, "shouldn't an open source project be free from profit?"

My answer is, emphatically, no. The idea behind open source and libre software is that you can audit the code, to ensure that it's not doing something untoward behind your back, and that you can modify its function. Most crucially, if we as a company go bust, your data isn't held hostage. These are important, empowering values, and the idea that you shouldn't make money from products that support them is crazy.

More importantly, by divorcing open software from commercial forces, you actually remove some of the pressure to innovate. In a commercial software environment, discussing an open standard for three years without releasing any code would not be tolerated - or if it was, it would be because that standard was not significant to the company's bottom line, or because the company was so mismanaged that it was about to disappear without trace. (Special mention goes to the indie web community here, for specifically banning mailing lists and emphasizing shipping software.)

The web is no longer a movement: it's a market. There is no vanguard of super-users who are more qualified to say which products and technologies people should use, just as there should be no vanguard of people more qualified than others to make political decisions. Consumers will speak with their wallets, just as citizens speak with their votes.

If we want products that protect people's privacy and give people control over their data and identities - and we absolutely should - then we have to make them, ship them, and do it quickly so we can iterate, refine and make something that people really love and want to pay for. This isn't politics, it's innovation. The business models that promote surveillance and take control can be subverted: if we decide to compete, we can sneak up and eat their lunch.

Let's get to work.


 

Community is the most important part of open source (and most people get it wrong)

This post by Bill Mills about power and communication in open source is great:

Being belittled and threatened and told to shut up as a matter of course when growing up is the experience of many; and it does not correlate to programming ability at all. It is not enough to simply not be overtly rude to contributors; the tone was set by someone else long before your first commit. What are we losing by hearing only the brash?

Bottom line: if you, either as a maintainer or as a community, are telling people to shut up then you're not open at all.

If you make opaque demands of people to test their legitimacy before participating then you're not open at all.

If you require that only certain kinds of people participate then you're not open at all.

The potential of open source is, much like the web, that anyone can participate. On Known, we're really keen to embrace not just developers, but designers, writers, QA testers - anyone who wants to chip in and create great software with us. That's not going to happen if we're unfriendly or project the vibe that only certain kinds of people can play. Donating time and resources to an open project is a very generous act, and one that not everyone can afford. Frankly, as a community we should be grateful that anyone wants to take part.

As a project founder, a lot of that is about leading by example. That means being talkative and open. I get a lot of direct messages and emails from people, and I try to direct people to participate in the IRC channel and the mailing list - not just because it allows our conversations to be findable if people in the future have similar questions, but because every single message adds to the positive feedback loop. If there's public conversation going on, and it's friendly, then hopefully more people will feel comfortable taking part in it.

Like any positive communication, a lot of this is related to empathy. I'm pretty shy: what would make me feel welcome to participate in a community? Probably not abrupt messages, terse technical corrections or (as we see in many communities) name-calling. Further to that, explicitly marking the community as a safe space is important. We're one of the few communities to have an anti-harassment policy; I'm pleased to say that we've never had to invoke it. More communities should do this.

Which isn't to say that there isn't more that we can do. There is: we need better documentation, better user discussion spaces, a better showcase for people to show off what they've built on top of Known. We're working on it, but let us know what you think.

And please! Whether you're a writer, designer, illustrator, eager user, or a developer, we'd love for you to get involved.


 

10 things to consider about the future of web applications

  1. Twitter - by far the social network that I use the most - is struggling to break 300 million monthly active users and is not hitting revenue targets. (Contrast with Facebook's 1.44 billion monthly actives.) Even investor Chris Sacca has warned that he's going to start making "suggestions".
  2. Instagram - still a newcomer in many people's eyes - is beginning to send re-engagement emails in response to flagging user growth.
  3. The 2016 US election is apparently going to be huge on Snapchat. Translation: Snapchat is over. The next generation of young users are already looking for something else. Snapchat was released in September 2011.
  4. The Document Object Model - core to how web pages are manipulated inside the browser - is slow, and may never catch up to native apps. We've known that responsiveness matters for engagement for over a decade.
  5. It's possible to build more responsive web apps by going around the DOM. But these JavaScript-based web apps are harder to parse and often can't be seen by search engines (unless you provide a fallback, which requires a lot of extra programming time).
  6. Push notifications - which are core to apps like Snapchat, and possibly the future of Internet applications - are not available on the open web. Browsers like Chrome are implementing them on a browser-by-browser basis.
  7. Facebook has no HTML fallbacks, renders almost entirely in JavaScript and lives off push notifications. Twitter has HTML fallbacks, is very standards-based, uses push notifications but also SMS and email, and is generally a good player (with respect to the web, at least, although it's less good at important features like abuse management). Facebook is kicking Twitter's ass.
  8. The thing that may save Twitter? Periscope, a native live video app, which is highly responsive and live-video-heavy.
  9. Users have stopped paying for apps, and instead opt for free apps that have in-app purchases, so they can try before they buy. We're a long way off having a payments standard for the web.
  10. There's no way to transcode video in a web browser, which means uploading video via the web is effectively impossible on most mobile connections. (Who wants to sit and wait for a 1GB file to upload, even on an LTE connection?) Meanwhile, the Web Audio API saves WAV files, rather than some other, more highly compressed formats you may have heard of. Similarly, resampling images is difficult. In other words, while the web has been optimized for consumption (albeit in a slower way than native apps, as we've seen), it has a long way to go when it comes to letting people produce content, particularly from mobile devices.

What does all of this mean?

I don't mean to be pessimistic, but I think it's important to understand where users are at. The people making the web aren't always the people using it, and there's a serious danger that we find ourselves trying to remake the platform we all enjoyed when we first discovered it.

Instead, we need to make something new, and understand that if we're building applications to serve people, the experience is more important to our users than our principles.

All of these things can be solved. But while we take our time solving them, native app developers are going off and building experiences that may become the future of the Internet.


 

Remembrance on Memorial Day

Memorial Day is an American holiday that commemorates people who have lost their lives in service to the United States. It's similar to Remembrance Day in the Commonwealth countries, except that it's also a long weekend that marks the start of the summer.

My dad spent the first few years of his life in a Japanese-run internment camp, while my grandfather was captured by the Nazis and had to deny his Jewishness to stay alive. My great grandfather's family fled pogroms in Ukraine. I'm a pacifist, and war is evil, but I also believe it is sometimes necessary as a last resort.

But in remembering the brave people who fought and died, it's also important to remember who sent them, and why. Many people died in Iraq for shameful political reasons. Many people died in Afghanistan. Vietnam. Patriotism can't just be about remembering their sacrifice: it has to also be about trying to make sure it never happens again, whether we consider war to be just or not. One life lost is too many. There are still war criminals involved in some of those wars who need to see justice. There are still politicians who make joining the armed services and dying for their country the only viable career choice for many people.

I also want to remember, as Marc has in his own post, the people who are fighting for freedom domestically. People have lost their lives for civil rights, for equality, for better working conditions for immigrants, for the right to form a union, for the right to vote. In contrast to resource skirmishes for political purposes, their sacrifices have changed our societies for the better in tangible ways. They should never be forgotten.


 

A walk in the park

"Daddy," Wendy said, looking up at the sky, the cloudless blue stretching opaquely in all directions. "Can people fly?"

I smiled at her. "They say that some people can," I said.

"Can I fly?"

I shook my head, smiling. "No, honey," I said, gently. "Mommy and daddy can't fly either."

"Why not?"

"Well, for one thing, being on the ground's just fine," I said, patting the grass in the park. "It's nice down here."

And for another, I thought, I can't afford that upgrade.

I can't be completely certain on which day I died, but I'm pretty sure it was the Wednesday before Thanksgiving: the first real cold day in months, when the wind picked up and remembered it was fall, and the auburn of the leaves finally overwhelmed the green. Wendy was with her grandparents, and I just thought, well, why the hell not?

I was the last on my block to die. None of us knew it yet, of course. If anything, we were excited for it and what it meant: the Mayor had been the first to go all the way, and he came back from the store telling us about all the new colors he was seeing. A more vivid spectrum, he said; he could see the life-force in everything around him, from the squirrels in the trees to the trees themselves. He said it was magical. He used that word: magical.

Pretty soon, everyone was doing it. The police all did it as a group, before the police disbanded, and then the local businesspeople in town started to die one by one, and then the rich part of town, and then our neighbors. Finally, when it seemed like everyone else had gone and done it and I was overwhelmed with fear that I was missing out on something vital, I went and did it, too.

Wendy was the last living girl I knew.

When Samantha died, she was gone, and it felt like there was a hole in my heart that was torn fresh each morning and could never heal. I clawed at the walls wishing time would peel back with the wallpaper, and found ways to smile for our daughter even as I wanted to rip myself apart. When she slept, mercifully in her own room because she was a big girl now, I retired to our bed and wept. Were it not for the life that was entrusted to me, my precious girl, I would have simply found my way to the bridge and stepped into nothingness.

Death isn't what it used to be. When I died, I saw new colors in the trees, my metabolism was automatically managed for me by an intelligent software agent, and I gained the ability to manage my emotions from an elegant control panel on my wrist. My memories were organized. For the first time in my existence, I felt fully in control. It was magical.

When I first stepped out of the store, having paid the staff to configure me, I was struck by the silence in my head. The pain of Samantha's legacy death was gone, along with the background ebb and flow of dread that had been my soundtrack. Samantha's face was not staring at me from my mind's eye, as it had done every day for the past four years. I had no desire to whisper her name and call out to her. I had no desire at all.

Instead, I saw information. My mind had become a dashboard, and everything in the world was a point of data. I felt free, and suddenly the world seemed full of possibilities. They had worked hard on designing that first emotion: it was spectacular. The app told me I was empowered.

Wendy and I held hands as we walked through the park. The glow of the energy in the trees rivaled the sunshine. She sang to herself, softly, and my controller app told me it was an old Sandie Shaw tune. It suggested I sing a few bars along with her to build empathy, and I complied.

"Wow, that's a real old song," I said. "Where did you learn that?"

"It's from a commercial," Wendy said, matter-of-factly.

The controller told me which one and played a few seconds for me, and I smiled.

"Do you want to go get a burger?" I asked, sensing that she was hungry.

"How did you know it was from that commercial?" Wendy said, smiling. She nodded.

The burger joint was on the edge of the park; we crossed a tall bridge to get there, and paused briefly to play Poohsticks. We stood at the top of the bridge and each dropped a stick into the water. The winner would be the first stick to emerge on the other side; like always, I engineered my throw so that it would be Wendy's, and like always, she was delighted. One day she would tire of this game and I would have to find new ways to entertain her. Luckily, I had access to a vast database of games.

It was a traditional sort of place, not unlike the kind my dad had taken me to when I was Wendy's age. Everything was primary-colored, and the tables were wipe-clean. Booths lined with glittered cushioning sat against the walls, while the floor was dotted with circular tables. Once, almost every table would have been occupied with families like ours. These days, almost nobody needed food, so the loop of soft, upbeat music played to an empty room. I already knew what Wendy wanted, and my app communicated this to the kitchen wordlessly; we simply took a booth, and a drone waiter flew out the food once it was ready.

"How's your hamburger?" I asked, knowing the answer was written on her smiling face. The hamburger meat was made with insect meal, but that was all that Wendy had ever known, and anyway, they found plenty of ways to make it just as juicy and delicious as the beef I had enjoyed as a child. I felt her emotions as if they were my emotions and knew that it was good.

"Don't you want to eat anything?" she asked between mouthfuls.

"I'll eat later," I said.

I sighed happily, although my oxygen is processed for me and I no longer use my lungs. "What next, pumpkin?"

"I haven't finished my fries yet," Wendy pointed out, and although I knew she didn't really need them nutritionally, I saw that they would have an emotional benefit. "Once I've finished my french fries, let's go home," she said.

"Sounds like a plan," I said, smiling. The app suggested I look out the window, and I took in the amber light of the dimming sun.

The app strongly suggested I step outside. "I'll be right back, okay, honey?"

"Okay, daddy."

I slid myself out from the booth seat and walked through the swing doors to the street outside. The controller app gave me directions and I turned to follow them exactly, suddenly aware that my energy levels were running low. My pace increasing, I walked around to the rear of the restaurant and scanned the backlot.

There was an old Ford Fusion parked against the dumpsters, and I could see a man rummaging around inside them, his torso fully submerged in trash.

"Excuse me," I said, understanding why I was here.

With the advent of the controller apps, there was no need for a police force: everyone became the eyes and arms of the law. When it was introduced halfway through a lavish keynote presentation, crowdsourced enforcement was hailed by the press as the future of policing. They took great pains to point out that it was not the same as vigilante justice: the apps were highly-regulated, and everything app users saw and heard was recorded for later algorithmic judicial compliance. Anyway, using the app to begin with enforced compliance with social norms; virtually everyone had it installed, so it was rare that policing was even required. Staying on the straight and narrow felt good. The app made sure of it.

The man kept rummaging.

"Excuse me," I said again.

The man pulled himself out of the dumpster and turned to look at me. He was wearing a flannel shirt, and had an unsanitary beard. He'd managed to pull out some discarded food: leaves of lettuce, a few packages of insect meal.

"That food is the property of this restaurant," I said. "You really can't take it."

"They're just throwing it out, dude," the man said.

"It's theft," I said. "If you're hungry, there are ways to get credit so you can buy your own food."

"It's wasteful," the man said.

"It's their property," I said.

He sighed. "Fucking users," he said, under his breath.

"Pardon me?"

"Fucking users," he said again, loudly and deliberately.

He turned to run, but naturally, my reaction time was faster. I caught up to him before he managed to get up to speed without needing to increase my heart rate, and forced his arm behind his back.

He yelled in pain. "I've got kids, man."

"You need to respect the rules of your community," I said. "Your children don't outweigh the rights of this restaurant to their property."

My body sent me a notification. I was critically low on energy; I needed to recharge quickly, or I would shut down.

"Put the property down," I said. "Now."

"Fuck you," the man said, spitting. "I need to feed my family."

Another notification. I would be down soon. "You should put down the food," I said, holding the arm lock. Behind the scenes, the app sent a request to the cloud controller.

Request approved. Unthinkingly, I let go of the man's arm, and put both hands around his head. For a moment, he screamed out in fear, but the app downloaded quickly, and he fell limp. For thirty seconds, we stood in the backlot together, me cradling his head, distant birdsong the only sound. Silently, a spot of light on my wrist pulsed to the rhythm of my long-discarded heartbeat.

Fully charged; needs met. With a single movement, I picked up the carcass and threw it in the dumpster.

Wendy had finished her burger and fries, and was patiently watching for me through the window. She waved at me through the glass as I approached, and flashed me a toothy grin as I sat back down opposite her. "That was delicious," she said, smiling. "Are we ready to go home?"

I smiled at my daughter, the only living child in town. "Absolutely."

We lived in a small, suburban house with a driveway in front and a small yard in the back, similar to around 60% of our town's population. It was a little too big for just two of us, but moving would have been one stressful experience too many for Wendy, so we stayed. The remnants of Samantha's presence no longer bothered me, and Wendy had been too young. For her, they were a curiosity.

Much of the bandwidth for the National Internet had been dedicated to controller apps; almost nobody strolled the web or talked to each other over immersive video, because they didn't need to. We were all a mesh now, united by our controllers and tethered to the cloud. But I had Wendy, so I continued to purchase Internet service so she could tap into the movies. We would often sit on the couch together until she fell asleep, a blanket covering us both, like we had always done. I could sense that it was comforting to her, and it was a pattern that I found pleasant, too. I don't know if what I felt was real closeness, or a well-designed authentic emotional experience. They are materially the same, so I don't believe it matters.

There we lie, father and daughter, as the movie runs to its credits. She dreams of adventures and new experiences, and I watch the electrons dance inside her brain. And then I carry her up to her bed, ready for another day, my hands cradling her head just a little. She is beneficial.


 

Publish on your Own Site, Reflect Inwardly

Known gives you the ability to share the content you create across social media platforms at the point of publishing, with just one click. I'm deliberately not doing that with this post.

If you're reading it, it's because you came to my site, or you picked up the content in a feed reader.

One reason to publish on the web is to make a name for yourself, and create an audience for your content or services. But that's not the only reason, or even the best one. I think structured self-reflection is more valuable - with or without feedback.

We've been trained to worry about audience and analytics for our posts. How many people read a piece about X vs a piece about Y? Is it better to post at 2pm on a Thursday or 10pm on a Sunday? Which demographic segments are most interested?

That's fine and dandy if you're a brand, but not all of us need to be brands. Not every piece of content needs to be a performance. If we unduly worry about audience, we run the risk of diluting our work in order to appeal to a perceived segment. Sometimes the audience is you, and that's enough.

The dopamine hit that comes from a retweet or a favorite creates a kind of awkward emotional dependence. A need for audience. There's a lot to be said for slow reflection for its own sake. That's what we encourage when we give blogs to students, and that's probably what we should be practicing more of ourselves.

And of course, I'm speaking for myself. I've decided I need to ease back on social media interactions, and start using my own space as just that: my own space. My own space to reflect, to think out loud, and to publish because I want to. That's how we used to do it on the web. As I've said before: I think the world would be better if we revealed more of ourselves.


 

Elgg and Known: how deep insight can help you build a better community platform

Ten years ago last November, we released the first version of Elgg. An open source social networking platform originally designed for higher education, where it was used by Harvard and Stanford, it spread to organizations like Oxfam, Orange, Hill & Knowlton and the World Bank, as well as national governments in countries like Australia, Canada and the Netherlands.

Not bad for a couple of industry outsiders based in Scotland.

Elgg is still in wide use today. I credit that to a technical emphasis on extensibility and ease of use, as well as our focus on being responsive to the needs of the community - but not too responsive. We never veered from the vision we had of an open social networking infrastructure for organizations.

The web has changed unrecognizably since 2004, and Known takes those changes into account: mobile-first, with an emphasis on streams and shorter bursts of content. You can still run it in an organization, but you can run it as a personal platform, too. Higher education institutions are using it to give self-reflective websites to all their students, and more and more private companies are using it to create social feeds internally, too. Design thinking is core to our process, which helps us stay responsive and build tools that truly solve a deep user need.

Known, Inc goes beyond Known the platform: we're exploring new applications that use design thinking, and our deep community platform knowledge, to solve problems in different verticals.

Most importantly, we offer that experience and toolset to other organizations. If you need platform strategy advice, or even to build a new social website or app for your organization, we can help. We have over a decade of experience in building organizational social platforms, and you can put it to work. To get started, get in touch via our website.


 

The full-stack employer

Just over four years since I permanently moved to California, I'm beginning to understand the differences in work style between the US and Europe. America still has a largely time-based view of productivity, even in Silicon Valley. But with tech industries in other nations catching up fast, and remote working becoming a more viable option, you need to compete for talent with companies all over the world. You have to be a full-stack employer.

What is a full-stack employer?

Over the past year in particular, there's been a lot of discussion about "full-stack employees", "full-stack developers" and "full-stack startups". The trend is that employees are expected to have a broader range of skills, and be able to switch between them seamlessly. Employees apply their skills in a more holistic manner, moving away from a dedicated position on an assembly line.

This is arguably related to startup culture, and the growing trend for even larger companies to innovate by creating much smaller, more autonomous internal product teams. It has worked for a number of corporations. But the full-stack expectation places greater demands on employees, which must be met by the employer. Simply put, to innovate, you need support.

I'm shamelessly repurposing the term "full-stack" to mean not just the technology you use to build your products and services, not just the skills you are expected to use in the course of doing business, but also the support mechanisms that humans need in the course of doing their job. A full-stack employer is one that sees their employees as a community of people, and that provides structures, support networks and services for them based on that understanding.

The good news is, treating the people who work for you as human beings has a real effect on productivity, sales and the health of a company.

What it's like to have a full-stack employer

The new reality is that you are expected to use a variety of skills in the course of doing your job. But those skills didn't just come to you, fully-formed. Nobody puts on a headset, Matrix-style, and emerges minutes later knowing kung fu: mastering a new skill requires time, effort and investment. Full-stack employers recognize that providing dedicated training and professional development for their employees creates a measurable return on investment. You're expected to join the company with intelligence, an enthusiasm for learning, and skills that relate to your core role - but the reality is that even those will develop as you continue at your company. Importantly, it's needs-led: as an employee, you can tailor your professional development based on your needs. Training isn't dropped on you from above.

You are not expected to be always-on. Phone calls at 3am, Slack messages late at night, urgent emails that need to be responded to out of hours are not on the table. It comes down to respect: the employer understands that you have your own life, and that your choice to work for a company is a relationship that goes both ways.

The ethics of this are clear, but it turns out that workers are more productive when they are rested and take more breaks. The same is true during working hours. Employees are trusted to do their work, and aren't measured in terms of the amount of time they spend at their desks. In fact, research suggests that they should build some break-time into every hour, and they may be encouraged to do this. Similarly, they may be encouraged to take a full hour for lunch, perhaps with a walk. And they're encouraged to go home after eight hours. It all results in happier, healthier employees, and more productivity.

Employees have some choice over their benefits, but they always have full medical coverage in countries where this isn't a given. Childcare is paid for (because the employer saves money by making it available).

Generous parental leave is provided, for both mothers and fathers, engineered in such a way that both parents take it more or less equally. Studies have shown that equal paternity leave helps to protect against pay discrimination, and provides happiness (and therefore productivity) boosts across both parents. The company becomes a better place for women to work, allowing it to remain competitive.

Finally, the aspect that may make American managers shake their heads with disbelief. Every employee gets a minimum of six weeks of vacation time a year, and they are strongly incentivized to use it. In order to combat a culture of working all the time, full-stack employers may create a financial bonus for employees that go on holiday, knowing that vacations dramatically increase productivity. Happier workers are more productive.

Why it's great to be a full-stack employer

While it sounds expensive, full-stack employers actually save money. Happier employees are measurably more productive, and arguably more creative, leading to more innovative solutions. These measures also reduce churn, which is important when the cost of employee turnover can be as much as twice their salary.

Providing a better place to work can be a differentiator, which can help companies compete for employees in a cost-effective way. By way of example, I was once offered a position at a very well-known Silicon Valley web company; one whose services people use every day. The salary was great, and it would have looked good on my resumé. At the end of our meeting, the interviewer remarked to me, "you do end up spending a lot of time at work, but it's okay, because your colleagues become like your family." I refused the job, and started noticing when employees posted Instagram photos of their office on Saturdays and after midnight.

Optimizing the workplace for people who have lives outside of work allows for more experienced employees (who are more likely to have families), and people who have outside hobbies and interests. All of these things provide more bang for your salary buck as an employer - as well as creating a far more enjoyable place to work.

What does this mean for employees and managers?

Although some of the benefits sound costly, most of them actually save the company money over time. The biggest adjustment it requires is an attitude shift.

I would argue that the American reliance on time-at-desk as a productivity metric is an ideological, rather than fact-based, approach. German workers, for example, are at least as productive as their US counterparts, while enjoying six-week vacations, more regular working hours, and so on.

Therefore, the biggest required change is for managers to respect the time of the people on their teams, and to create conditions where employees don't feel guilty for stepping away from their desks, putting their phones down, and spending regular, quality time away from work. In fact, they should feel empowered to do so, because they will be better workers as a result. The same goes for asking for professional development resources, and expecting fair compensation in both monetary and benefit terms.

It's a seismic shift for a country so deeply steeped in the Protestant work ethic.

What of the future of work?

Ubiquitous mobile Internet was supposed to give us more freedom and allow us to live more fulfilling lives. It has not necessarily lived up to this potential.

Just because an employer can contact an employee at midnight on a Thursday doesn't mean it should. We have gained all kinds of new, amazing tools to help us be productive and create more innovative companies. Now it's time to learn to use them responsibly.

After the industrial revolution, Henry Ford and others learned that five-day weeks and eight-hour days allow us to be more productive. After the information revolution, it seems we now have to relearn these lessons.


 

Convoy is a new kind of progressive enhancement for self-hosted web applications.

We just released Convoy, an add-on service for Known that lets you use our servers to more easily share content with social media.

Previously, self-hosted site owners needed to create individual developer accounts on Twitter, Facebook, LinkedIn, etc, and manage API integrations with each one. It was a pain, and lots of people told us that they'd prefer to avoid it.

You still can mess around with API integrations if you really want to. But Convoy gives you another option: connect your site, and then log into each service as easily as you would with any centralized cloud application. You still own all your data on your own server, but we handle the fiddly technical bits.

Service integrations aren't the only things that add value to a self-hosted site. Typically, shared hosts aren't good at search: there's no great dynamic search engine in the LAMP stack, and we get lots of complaints about MySQL's built-in full-text search. So we're looking at how we can add this to Convoy, too. The same goes for notifications and a host of other services that can help your self-hosted site use all the great new web technologies that the centralized services do.

I think of this as progressive enhancement: your site will still work without them, and you still store all your data. These services serve to add to the user experience without taking away any of your agency.

We're not removing functionality from the open source codebase: you can self-host everything if you have the technical ability. But if you'd prefer not to, Convoy is here for you.

And of course, it needn't be limited to Known at all ...

You can learn more about self-hosted Known on our open source site, or get started with our free hosted service.


 

Forget the mass market: the untribe is the new normal.

There was one conversation I used to hate more than any other. I used to brace myself for it; grit my teeth in anticipation.

Here's how it would happen.

"Where are you from?" someone would ask, detecting that my accent wasn't quite British; something else was lurking just underneath.

"That's a long story," I would start to say, hoping to leave it at that and turn to some other, more interesting topic. But once someone has the scent that you're not a part of the tribe, that perhaps you don't belong, they never let it go. So I would explain. "My mother's Ukrainian / American, my dad's Dutch / Swiss / Indonesian, I grew up here, and -"

"Oh," they would say, stopping me. "You could become a British citizen, you know."

Then I would have to nod and smile, and perhaps bumble something along the lines of, "oh, well, yes, I suppose I could, now that you mention it, that's a rather good idea ..." Hugh Grant style, a polite stream of bashfulness that stood in for the lies, "nobody has ever suggested this before" and "clearly that's exactly what I need to do".

The implication was: "you could do this one thing and then you would belong!" It was as if being a part of the tribe would elevate me, somehow; as if having the right sort of passport would make me the right sort of person. I despised it.

There was exactly one time, every five years or so, when I wished I were a British citizen. Although I could vote in local elections, I wasn't able to take part in the general elections that would help pick who would run the country. I couldn't help select a Prime Minister, but I cared deeply. I would pay close attention to the campaign pledges, flinch into my real ale when a politician mentioned immigration reform, and, once the polls were closed and the results were announced, suck it up until next time.

Perhaps I could have become British, but that would have required - literally, in this case - giving up a part of who I was.

I grew up in Oxford, a few miles east of the university's picturesque dreaming spires, in a pebbledash duplex between a pub and a gas station. My high school's catchment area covered both my side of town - arguably the bad side, with its tiny houses and tower blocks - and the fancy North Oxford Victorians where the professors raised their children. High school is notorious for nurturing cliques, and England is notorious for class awareness. One friend's professor parent was very open about looking down on me because I came from the wrong side of town, and in a perverse way I'm thankful to him for his honesty; I'm not sure how many others did so behind my back. Meanwhile, many of the kids from my end of town openly despised me for being visibly interested in learning. My parents are both highly academic, and I was overtly a computer geek who loved writing. As is the high school way, everyone picked on everyone else for the ways in which they didn't fit their ideal archetype.

By the end of high school, I was a nervous wreck, but I had also found the people I identified most strongly with: the people who were not easily described. I guess you could call us the people who weren't part of the other cliques. In some ways, that was a tribe, too, and we found strength in our friendships. These are people I love dearly and I still talk to some of them every day.

But I had also discovered the Internet, which felt like a magical, invisible layer on top of the world. Hidden in invisible corners there, where no-one else could find us, on newsgroups and IRC, not limited by geography, space or time, we reached out to each other. I believe we were the first generation of teenagers to make friends in this way. I think we were also probably the last to be allowed to travel and meet each other without supervision. We traveled across Britain to show up at "meets", where we acted like British teenagers do, loitering outside pubs, hanging out in parks, and cementing friendships that could not have been created any other way.

Our untribe grew with the Internet. It turns out that everybody is a niche. Everyone has their own complicated mix of interests, background, skills and personality, unique to each of us like a fingerprint. The fact that I have an individual background is not, in itself, unique to me: we all have one. With the accessibility of international travel, as well as the ubiquity of international communication, more and more people have backgrounds and contexts that cross traditional borders.

Chris Anderson called this endless parade of uniqueness "the long tail". I don't know if it's a tail: over time, mainstream demographics have revealed themselves to be a ruse, a kind of statistical sleight of hand that allows us to dehumanize collections of people, remove the characteristics that make them real, and average them out into buckets so they can be marketed to efficiently. Desires and demographic groups have been manufactured in order to sell; their glue is a kind of peer pressure, the desire to belong bringing with it manufactured social norms. But it's not a tail. In reality, there is no mass market. We are all part of an untribe.

As the Internet has matured, marketers are discovering new ways to sell and to target their messages. Commerce has adapted to the untribe. But a further cultural shift is still to come.

"I don't think you want to belong," someone once said to me, in reference to my outside-ness with respect to nationality. I think it was meant as a criticism, but it was completely true. I had always felt that by bending myself into the social norms of a particular group, I would lose - at least to an extent - the part of myself that didn't fit into those norms, unless it was nurtured in some other way. Most importantly, in a world where we can all make connections based on our individuality, why should an artificial group membership based on where we are born matter at all? To make the point bluntly, if we get along, why should I give a shit where you're from?

Nationality is an artificial demographic. There are more differences within nationalities than between them. Because of the Internet, the friendships we make, the business we negotiate, the media we consume and even the people we fall in love with are not bound by these borders. In this context, being proud of being from a particular place is ridiculous: one might as well be proud of rolling a double 6 in a game of backgammon. And there are uncomfortable implications. By being born in the United States, say, are you inherently a better person than someone who was born in India? Would you give preference to someone from your home country over someone who wasn't? This is no longer a hypothetical question.

Nationalities no longer make sense as a tribe. In contrast, the Internet has allowed people to define themselves by their passionate interest groups. Consider fandoms, which always existed but previously had been relegated to photocopied fanzines and minor conventions. Those conventions are booming; fanzines are now major publications. Rather than define ourselves by things entirely out of our control, we can use the things we really care about; the tribes we have chosen for ourselves. Nobody will reject you from a convention. There is no entrance requirement.

Really what we're defining ourselves by are our values, of which our interests are a subset. It could be argued that this is one reason why politics have become more divided on the web: increasingly, we are either liberal or conservative. But I think this is an artifact of our moment in time, between the Internet's demographic fracturing and a necessary split from a two-party system. Representative democracy is a good idea, but our parties no longer represent us. If we are all part of an untribe, we need a more granular way of thinking about our values. Our governments are likely to fracture, too, into coalitions of smaller representative groups, but the result will be political structures that more closely resemble who we are as populations. I have to wonder if the same could be true of religions.

In a world where I can have a conversation with someone in Iran as easily as someone in Palo Alto, clinging to traditional borders is an anachronism. Traditional flag-waving patriotism feels like a relic from the past, because it is. We don't need to let go of these traditions and backgrounds entirely - history is a crucial part of who we are, and there's nothing wrong with advocating for a place you love - but we do need to accept our new reality. I am not arguing that ethnicities or sexualities are not a part of who we are, or that they should not be acknowledged, but I do think the aspects of ourselves that we choose are more important than the ones we do not. There is no need to pledge allegiance to a demographic, or discriminate against people who are not of ours. We are all people, connected to each other by our interests, our skills and our personalities.

There is so much to be gained by embracing this, and so little value - as always - in clinging meaninglessly to the past, and robbing people of their individuality. We are all different. The world is so much richer when we think of it in those terms.
