
On the new web, get used to paying for subscriptions

4 min read

The Verge reports that YouTube is trying a new business model:

According to multiple sources, the world’s largest video-sharing site is preparing to launch its two separate subscription services before the end of 2015 — Music Key, which has been in beta since last November, and another unnamed service targeting YouTube’s premium content creators, which will come with a paywall. Taken together, YouTube will be a mix of free, ad-supported content and premium videos that sit behind a paywall.

At first glance, this seems like a brave new move for YouTube, which has been ad-supported since its inception. But it turns out that ads on the platform actually haven't been doing that well - and have been pulling down Google's cost-per-click ad revenues as a whole.

However, during the company's earnings call on Thursday, Google's outgoing CFO Patrick Pichette dismissed mobile as the reason for the company's cost-per-click declines. Instead it is YouTube's fault. YouTube's skippable TrueView ads "currently monetize at lower rates than ad clicks on Google.com," Mr. Pichette said. He added that excluding TrueView ads -- which Google counts as ad clicks when people don't skip them -- the number of ad clicks on Google's own sites wouldn't have grown as much in the quarter but the average cost-per-click "would be healthy and growing year-over-year."

If Google's CPC ad revenue would otherwise be growing, it makes sense to switch YouTube to a different revenue model. Subscriptions are tough, but consumers have already shown that they're willing to pay to access music and entertainment services (think Spotify and Netflix).

But what if those revenues don't continue to climb? Back in May, Google confirmed that more searches now take place on mobile than on desktop. The pattern holds across the web: smartphones are fast becoming our primary computing devices, and laptops and desktops are already in the minority.

Enter Apple, which is going to include native ad blocking in the next version of iOS:

Putting such “ad blockers” within reach of hundreds of millions of iPhone and iPad users threatens to disrupt the $70 billion annual mobile-marketing business, where many publishers and tech firms hope to generate far more revenue from a growing mobile audience. If fewer users see ads, publishers—and other players such as ad networks—will reap less revenue.

This is an obvious shot across the bow to Google, but it also serves another purpose. Media companies disproportionately depend on advertising for revenue. The same goes for consumer web apps: largely thanks to Google, it's very difficult to convince consumers to pay for software. They're used to getting high-quality apps like Gmail and Google Docs for free, in exchange for some promotional messages on the side. In a world where web browsers block ads, the only path to revenue is to build your own app.

From Apple's perspective, this makes sense: it encourages more people to build native apps on their platform. The trouble is, users spend most of their time in just five apps - and most users don't download new apps at all. The idea of a smartphone user deftly flicking between hundreds of beautiful apps on their device is a myth. Media companies who create individual apps for their publications and networks are tilting at windmills and wasting their money.

Which brings us back to subscriptions. YouTube's experiment is important, because it's the first time a mass-market, ad-supported site - one that everybody uses - has switched to a subscription model. If it works, and users accept subscription fees as a way to receive content, more and more services will follow suit. I think this is healthy: it heralds a transition from a personalized advertising model that necessitates tracking your users to one that just takes money from people who find what you do valuable. You can even imagine Google providing a subscription mechanism that would allow long-tail sites with lower traffic to also see payment. (Google Contributor is another experiment in this direction.)

If it doesn't work, we can expect to see more native content ads: ads disguised as content, written on a bespoke basis. These are impossible to block, but they're fundamentally incompatible with low-traffic, long-tail sites. They also blur the line between editorial and advertising.

Media companies find themselves in a tough spot. As Bloomberg wrote earlier this year:

This is the puzzle for companies built around publishing businesses that thrived in the 20th century. Ad revenue has proved ever harder to come by as reading moves online and mobile, but charging for digital content can drive readers away.

Something's got to give.

 

Photo by Moyan Brenn on Flickr.

 

What would it take to save #EdTech?

10 min read

Education has a software problem.

98% of higher-education institutions have a Learning Management System: a software platform designed to support the administration of courses. Larger institutions often spend over a million dollars a year on them, once all costs have been factored in, but the majority of people who use them - from educators through to students - hate the experience. In fact, when we did our initial user research for Known, we couldn't find a single person in either of those groups who had anything nice to say about them.

That's because the LMS has been designed to support administration, not teaching and learning. Administrators like the way they can keep track of student accounts and course activity, as well as the ability to retain certain data for years, should they need it in the event of a lawsuit. Meanwhile, we were appalled to discover that students are most often locked out of their LMS course spaces as soon as the course is over, meaning they can't refer back to their previous discussions and feedback as they continue their journey towards graduation.

The simple reason is that educators aren't the customers, whereas administrators have buying power. From a vendor's perspective, it makes sense to aim software products at the latter group. However, it's a tough market: institutions have a very long sales cycle. They might hear about a product six months before they run a pilot, and then deploy a product the next year. And they'll all do it at the same time, to fit in with the academic calendar. At the time of writing, institutions are looking at software that they might consider for a pilot in Spring 2016. Very few products will make it to campus deployment.

There are only a few kinds of software vendors that can withstand these long cycles for such a narrow market. By necessity, they must have "runway" - the length of time a company can survive without additional revenue - to last this cycle for multiple institutions. It follows that these products must have high sticker prices; once they've made a sale, vendors cling to their customers for dear life, which leads to outrageous lock-in strategies and occasionally vicious fighting between vendors.

Why can't educators buy software?

If it would lower costs and prevent lock-in, why don't institutions typically allow on-demand educator purchasing? One reason is what I call the Microsoft Access effect. Until the advent of cloud technologies, it was common for any medium to large organization to have hundreds, or even thousands, of Access databases dotted around their network, supporting various micro-activities. (I saw this first-hand early in my career, as IT staff at the University of Oxford's Saïd Business School.) While it's great that any member of staff can create a database, the IT department is then expected to maintain and repair it. The avalanche of applications can quickly become overwhelming - and sometimes they can overlap significantly, leading to inefficient overspending and further maintenance nightmares. For these and a hundred other reasons, purchasing needs to be planned.

A second reason is that, in the Internet age, applications do interesting things with user data. A professor of behavioral economics, for example, isn't necessarily also going to be an expert in privacy policies and data ownership. Institutions need to be very careful with student data, because of legislation like FERPA and other factors that could leave them exposed to being sued or prosecuted. Therefore, for very real legal reasons, software and services need to be approved.

The higher education bubble?

Some startups have decided to overcome these barriers by declaring that they will disrupt universities themselves. These companies provide Massive Open Online Courses directly, most often without accreditation or any real oversight. I don't believe they mean badly: in theory an open market for education is a great idea. However, institutions provide innumerable protections and opportunities for students that for-profit, independent MOOCs cannot provide. MOOCs definitely have a place in the educational landscape, but they cannot replace schools and universities, as much as it is financially convenient to say that they will. Similarly, some talk of a "higher education bubble" out of frustration that they can't efficiently make a profit from institutions. If it's a bubble, it's one that's been around for well over a thousand years. Universities, in general, work.

However, as much as startups aren't universities, universities are also not startups. Some institutions have decided to overcome their software problem by trying to write software themselves. Sometimes it even works. The trouble is that effective software design does not adhere to the same principles as academic discussion or planning; you can't do it by committee. Institutions will often try to create standards, forgetting that a technology is only a standard if people are using it by bottom-up convention (otherwise it's just bureaucracy). Discussions about features can sometimes take years. User experience design falls somewhere towards the bottom of the priority list. The software often emerges, but it's rarely world class.

Open source to the rescue.

Open source software like WordPress has been a godsend in this environment, not least because educators don't need to have a budget to deploy it. With a little help, they can modify it to support their teaching. The problem is that most of these platforms aren't designed for them, because there's no way for revenue to flow to the developers. (Even when educators use specialist hosting providers like Reclaim Hosting - which I am a huge fan of - no revenue makes its way to the application developers in an open source model.) Instead, they take platforms like WordPress, modify them, and are saddled with the maintenance burden for the modifications, minus the budget. While this may support teaching in the short-term, there's little room for long-term strategy. The result, once again, can be poor user experience and security risks. Most importantly, educators run the risk of fitting their teaching around available technology, rather than using technology to support their pedagogy. Teaching and learning should be paramount.

As Audrey Watters recently pointed out, education has nowhere near enough criticism about the impact of technology on teaching.

So where does this leave us?

We have a tangle of problems, including but not limited to:

  • Educators can't acquire software to support their teaching
  • Startups and developers can't make money by selling software that supports teaching
  • Institutions aren't good at making software
  • Existing educational software costs a fortune, has bad user experience and doesn't support teaching

I am the co-founder and CEO of a startup that sells its product to higher education institutions. I have skin in this game. Nonetheless, let's remove "startups" from the equation. There is no obligation for educational institutions to support new businesses (although they certainly have a role in, for example, spinning research projects into ventures). Instead, we should think about the inability of developers to make a living building software that supports teaching. Just as educators need a salary, so do the developers who make tools to help them.

When we remove startups, we also remove an interest in "disrupting" institutions, and locking institutions into particular kinds of technologies or contracts. We also remove a need to produce cookie-cutter one-size-fits-all software in order to scale revenue independently of production costs. In teaching, one size never fits all.

We also know that institutions don't have a lot of budget, and certainly can't support the kind of market-leading salaries you might expect to see at a company like Google or Facebook. The best developers, unless they're particularly mission-driven, are not likely to look at universities first when they're looking for an employer. The kinds of infrastructure that institutions use probably also don't support the continuous deployment, fail forward model of software development that has made Silicon Valley so innovative.

So here's my big "what if".

What if institutions pooled their resources into a consortium, similar to the Open Education Consortium (or, perhaps, Apereo), specifically for supporting educators with software tools?

Such an organization might have the following rules:

LMS- and committee-free. The organization itself decides which software it will work on, based on the declared needs of member educators. Rather than a few large products, the organization builds lots of small, single-serving tools that do one thing well. Rather than trying to build standards ahead of time, compatibility between projects emerges over time by convention, with actual working code taking priority over bureaucracy.

Design driven. Educators are not software designers, but they need to be deeply involved in the process. Here, software is created through a design thinking process, with iterative user research and testing performed with both educators and students. The result is likely to be software that better meets their needs, released with an understanding that it is never finished, and instead will be rapidly improved during its use.

Fast. Release early, release often.

Open source. All software is maintained in a public repository and released under a very liberal license. (After all, the aim here is not to receive a return on investment in the form of revenue.) One can easily imagine students being encouraged to contribute to these projects as part of their courses.

A startup - but in the open. The organization is structured like a software company, with the same kinds of responsibilities. However, most communications take place on open channels, so that they can at least be read by students, educators and other organizations that want to learn from the model. The organization has autonomy from its member institutions, but reports to them. In some ways, these institutions are the VC investors of the organization (except there can never be a true "exit").

A mandate to experiment. The aim of the organization is not just to experiment with software, but also the models through which software can be created in an academic context. Ideally, the organization would also help institutions understand design thinking and iterative design.

There is no doubt that institutions have a lot to gain from innovative software that supports teaching on a deep level. I also think that well-made open source software that follows academic values rather than a pure profit motive could be broadly beneficial, in the same way that the Internet itself has turned out to be a pretty good thing for human civilization. As we know from public media, when products exist in the marketplace for reasons other than profit, it affects the whole market for the better. In other words, this kind of organization would be a public good as well as an academic one.

How would it be funded? Initially, through member institutions, perhaps on a sliding scale based on the institution's size and public / private status. I would hope that over time it would be considered worthy of federal government grants, or even international support. However, just as there's no point arguing about academic software standards on a mailing list for years, it's counter-productive to stall waiting for the perfect funding model. It's much more interesting to just get it moving and, finally, start building software that helps teachers and students learn.

 

The Internet is more alive than it's ever been. But it needs our help.

5 min read

Another day, another eulogy for the Internet:

It's an internet driven not by human beings, but by content, at all costs. And none of us — neither media professionals, nor readers — can stop it. Every single one of us is building it every single day.

Over the last decade, the Internet has been growing at a frenetic pace. Since Facebook launched, over two billion people have come online, tripling the number of people connected to the Internet.

When I joined the Internet for the first time, I was one of only 25 million users. Now, there are a little over 3 billion. Most of them never knew the Internet many of us remember fondly; for them, phones and Facebook are what it has always looked like. There is certainly no going back, because there isn't anything to return to. The Internet we have today is the most accessible it's ever been; more people are connected than ever before. To yearn for the old Internet is to yearn for an elitist network that only a few people could be part of.

This is also the fastest the Internet will ever grow, unless there's some unprecedented population explosion. And it's a problem for the content-driven Facebook Internet. These sites and services need to show growth, which is why Google is sending balloons into the upper atmosphere to get more people online, and why Facebook is creating specially-built planes. They need more people online and using their services; their models start to break if growth is static.

Eventually, Internet growth has to flatten. We can pour more things onto the Internet - hey, let's all connect our smoke alarms and our doorknobs - but ultimately, Internet growth has to be tethered to global population.

It's impressive that Facebook and Google have both managed to reach this sort of scale. But what happens once we hit the population limit and connectivity is ubiquitous?

From Vox:

In particular, it requires the idea that making money on this new internet requires scale, and if you need to always keep scaling up, you can't alienate readers, particularly those who arrive from social channels. The Gawker of 2015 can't afford to be mean, for this reason. But the Gawker of 2005 couldn't afford not to be mean. What happens when these changes scrub away something seen as central to a site's voice?

In saying that content needs to be as broadly accessible as possible, you're saying that the potential audience for any piece must be 3.17 billion people and counting. It's also a serious problem for journalism or any kind of factual content: if you're creating something that needs to be as broadly accessible as possible, you can't be nuanced, quiet, or considered.

The central thesis that you need to have a potential audience of the entire Internet to make money on it is flat-out wrong. On a much larger Internet, it should theoretically be easier to find the 1,000 true fans you need to be profitable than ever before. And then ten thousand, and a million, and so on. There are a lot of people out there.

In a growth bubble (yes, let's call it that), everyone's out to grab turf. On an Internet where there's no-one left to join and everyone is connected, the only way you can compete is the old-fashioned way: with quality. Having necessarily jettisoned the old-media model, where content is licensed to geographic regions and monopoly broadcasters, content will have to fight on its own terms.

And here's where it gets interesting. It's absolutely true that websites as destinations are dead. You're not reading this piece because you follow my blog; you're either picking it up via social media or, if you're part of the indie web community and practically no-one else, because it's in your feed reader.

That's not a bad thing at all. It means we're no longer loyal readers: the theory is that if content is good, we'll read and share it, no matter where it's from. That's egalitarian and awesome. Anyone can create great content and have it be discovered, whether they're working for News International or an individual blogger in Iran.

The challenge is this: in practice, that's not how it works at all. The challenge on the Internet is not to give everyone a place to publish: thanks to WordPress, Known, the indie web community and hundreds of other projects, they have that. The challenge is letting people be heard.

It's not about owning content. On an Internet where everyone is connected, the prize is to own discovery. In the 21st century more than ever before, information is power. If you're the way everyone learns about the world, you hold all the cards.

Information is vital for democracy, but it's not just socially bad for one or two large players to own how we discover content on the Internet. It's also bad for business. A highly-controlled discovery layer on the Internet means that what was an open market is now effectively run by one or two companies' proprietary business rules. A more open Internet doesn't just lead to freedom: it leads to free trade. Whether you're an activist or a startup founder, a liberal or a libertarian, that should be an idea you can get behind.

The Internet is not dead: it's more alive than it's ever been. The challenge is to secure its future.

 

Market source: an open source ecosystem that pays

4 min read

Open source is a transformative model for building software. However, there are a few important problems with it, including but not limited to:

  1. "Libre" has become synonymous with "no recurring license fee", meaning it's hard for vendors to make money from open source software in a scalable way.
  2. As a result, "open source businesses" are few and far between - mostly development shops that provide services on top of platforms other people have built for free, and service businesses like Red Hat, which is arguably the only sizeable open source business.
  3. Even if the cost to the end user is zero, the total cost to produce and support the software does not go down.
  4. There is a diversity problem in open source, because only a few kinds of people can afford to give their time for free, meaning that open source software misses out on a lot of potential contributions from talented people.

I believe that the core product produced by a business can never be open source. In Red Hat's case, it's services. In Automattic's case, it's Akismet and the WordPress.com ecosystem (WordPress itself is run by a non-profit entity). In Mozilla's case, it's arguably advertising. Even GitHub, which has enabled so much of today's open source ecosystem, itself depends on a closed-source platform. After all, they need to make money.

Nonetheless, having an open codebase is beneficial:

  1. It gives the userbase a much greater say in the direction of the software.
  2. It allows the software to be audited for security purposes.
  3. It allows the software to be adapted for environments and contexts that the original designers and architects did not consider.

So how can we retain the benefits of being open while allowing for scalable businesses?

One option I've been thinking about combines the mechanics of crowdfunding platforms like Patreon with an open source dynamic. I call it market source:

  1. End users pay a license fee to use the software. This could be as low as $1, depending on the kind of software, and the dynamics of its audience. (For example, $1 is totally fair for a mobile app; an enterprise intranet platform might be significantly higher.)
  2. In return, users receive a higher level of support than they would from a free open source project, perhaps including a well-defined SLA where appropriate.
  3. Users also get access to the source code, as with any open source codebase. Participants are encouraged to file issues and pull requests.
  4. Accepted pull requests are rewarded with a share of the pool of license money. Rather than rewarding by volume of code committed - after all, some of the best commits remove code - this is decided by the project maintainers on a simple scale. Less-vital commits are rewarded with a smaller share of the pool than more important commits.
  5. Optionally: users can additionally place bounties on individual issues, such that any user with an accepted pull request that solves the issue also receives the bounty.
  6. The pool is divided up at the end of every month and automatically credited to each contributor's account.
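
The payout mechanics described above can be sketched in a few lines. This is a hypothetical illustration, not a specification: the function name, the weight scale, and the example figures are all my own, but it shows how maintainer-assigned weights and optional bounties would combine into a monthly split of the license-fee pool.

```python
from collections import defaultdict

# Hypothetical sketch of the monthly "market source" payout. Maintainers
# score each accepted pull request on a simple scale (e.g. 1 = minor,
# 3 = major); the license-fee pool is split in proportion to each
# contributor's total score, and any bounty placed on an issue a PR
# closed is paid on top.

def divide_pool(pool, accepted_prs, bounties=None):
    """pool: total license fees for the month (in cents).
    accepted_prs: list of (contributor, weight, issue_id) tuples.
    bounties: optional {issue_id: amount} placed by users.
    Returns {contributor: payout in cents}."""
    bounties = bounties or {}
    total_weight = sum(weight for _, weight, _ in accepted_prs)
    payouts = defaultdict(int)
    for contributor, weight, issue_id in accepted_prs:
        if total_weight:
            # Proportional share of the pool for this PR's weight.
            payouts[contributor] += pool * weight // total_weight
        # Any bounty on the issue this PR resolved goes to its author.
        payouts[contributor] += bounties.get(issue_id, 0)
    return dict(payouts)

# Example month: a $500.00 pool, three accepted PRs, one $20.00 bounty.
payouts = divide_pool(
    50_000,
    [("alice", 3, "gh-12"), ("bob", 1, "gh-7"), ("alice", 1, "gh-30")],
    bounties={"gh-7": 2_000},
)
```

In this example alice's two PRs carry four of the five weight points, so she receives four fifths of the pool, while bob receives one fifth plus the bounty on the issue he closed.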

For the first time, committers are guaranteed to be compensated for the unsolicited work they do on an open source project. Perhaps more importantly, funding is baked into the ecosystem: it becomes much easier for a project to bootstrap based on revenue, because it is understood by all stakeholders that money is a component.

The effect is that an open source project using this mechanism is a lot like a co-operative. Anyone can contribute, as long as they adhere to certain rules, and they receive a share of the revenue from the work they have contributed to.

These dynamics are not appropriate for every open source project. However, they create new incentives to participate in open source projects, and - were they to be successful - would create a way for new businesses to make more secure, open software without committing to giving away the value in their core product.

 

Two years of being on the #indieweb

2 min read

For the last two years, I haven't directly posted a single tweet on Twitter, a single post on Facebook or LinkedIn, or a photo on Flickr. Instead, I publish on my own site at werd.io, and syndicate to my other services.

If Flickr goes away, I keep all my photos. If Twitter pivots to another content model, I keep all my tweets. If I finally shut my Facebook profile, I get to keep everything I've posted there. And because my site is powered by Known, I can search across all of it, both by content and content type.

My site is Known site zero. It's hosted on my own server, using a MongoDB back-end. I'm also writing 750 words a day on a withknown.com site - kept away from here because this site is mostly about technology, and those pieces are closer to streams of consciousness. Very shortly, though, I'll be able to syndicate from one Known site to another.

The indie web community has created a set of fantastic protocols (like webmention) and use patterns (like POSSE). I'm personally invested in making those technologies accessible to both non-technical and impatient users - partially because I'm very impatient myself.
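
To give a sense of how lightweight these protocols are, here's a minimal sketch of webmention endpoint discovery - the first step a sender performs before notifying a page that it has been linked to. The class and function names are my own, and a complete sender would also check the HTTP Link header and then POST `source` and `target` form fields to the discovered endpoint; this sketch only covers finding the endpoint in a page's HTML.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# A receiving page advertises its webmention endpoint with
# <link rel="webmention" href="..."> (or an equivalent <a> tag).
# This finder records the first such endpoint in document order.

class WebmentionFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        # rel can hold multiple space-separated values.
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels and attrs.get("href"):
            self.endpoint = attrs["href"]

def discover_endpoint(html, page_url):
    """Return the page's webmention endpoint as an absolute URL, or None."""
    finder = WebmentionFinder()
    finder.feed(html)
    if finder.endpoint is None:
        return None
    # Endpoints may be relative; resolve against the page's own URL.
    return urljoin(page_url, finder.endpoint)

page = '<html><head><link rel="webmention" href="/mention"></head></html>'
endpoint = discover_endpoint(page, "https://example.com/post/1")
```

Once the endpoint is known, sending the actual webmention is a single form-encoded POST - which is a big part of why the protocol has been easy for small projects to adopt.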

This is a community that's been very good to me, and I find it really rewarding to participate. I'm looking forward to continuing to be a part of it as it goes from strength to strength.

 

Let's expand the Second Amendment to include encryption.

3 min read

The German media is up in arms today because both German politicians and journalists were surveilled by the United States. Meanwhile, Germany is being sued by Reporters Without Borders this week for intercepting email communications. Over in the UK, Amnesty International released a statement yesterday after learning that their communications had been illegally intercepted. (Prime Minister David Cameron also declared his intention to ban strong encryption this week.) France legalized mass surveillance in June.

Everyone, in other words, is spying on everyone else. This has profound democratic implications.

From Amnesty International's statement:

Mass surveillance is invasive and a dangerous overreach of government power into our private lives and freedom of expression. In specific circumstances it can also put lives at risk, be used to discredit people or interfere with investigations into human rights violations by governments.

Furthermore:

We have good reasons to believe that the British government is interested in our work. Over the past few years we have investigated possible war crimes by UK and US forces in Iraq, Western government involvement in the CIA's torture scheme known as the extraordinary rendition programme, and the callous killing of civilians in US drone strikes in Pakistan: it was recently revealed that GCHQ may have provided assistance for US drone attacks.

It has been shown that widespread surveillance creates a chilling effect on journalism, free speech and dissent. Just the fact that you know you're being surveilled changes your behavior, and as the PEN American Center discovered, this includes journalism. Journalism, in turn, is vital for a healthy democracy. A voting population is only as effective as the information it acts upon.

Today is July 3. It seems appropriate to revisit the Second Amendment to the Constitution of the United States, which was passed by Congress and ratified by the States in two forms:

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

A well regulated militia being necessary to the security of a free state, the right of the people to keep and bear arms shall not be infringed.

The Supreme Court has confirmed [PDF] that this has a historical link to the older right to bear arms in the English Bill of Rights: "That the Subjects which are Protestants may have Arms for their Defence suitable to their Conditions and as allowed by Law." The Supreme Court has also verified multiple times that the right to bear arms is an individual right.

In 2015, guns are useless at "preserving the security of a free state", and cause inordinate societal harm. Meanwhile, encryption is one of the most important tools we have for preserving democratic freedom. We already subject encryption to export controls on the munitions list. It seems reasonable, and very relevant, to expand the definition of "arms" in the Second Amendment to include it. Let's use the effort that has been put into allowing individual citizens to own firearms, and finally direct it to preserving democracy.

While this would protect the democratic rights of US citizens, it would not impact the global surveillance arms race in itself. It would be foolish to only consider the freedom of domestic citizens: Americans are not more important than anyone else. However, considering the prevalence of American Internet services, and the global influence of American policy as a whole, it would be a very good first step.

 

Just zoom out.

4 min read

Sometimes it's important to step out of your life for a while.

I spent the last week in Zürich, reconnecting with my Swiss family in the area. A long time ago, I named an open source software platform after a nearby town that my family made their home hundreds of years ago: Elgg. I hadn't been back to the town since I was a child, and visiting it stirred echoes of memories that gained new focus and perspective.

I grew up in the UK, more or less, and while I had some extended family there, mostly they were in Switzerland, the Netherlands and the United States. I've always been fairly in touch with my US family, but not nearly enough with my European cousins. Effectively meeting your family for the first time is surreal, and it happens to me in waves.

Growing up as an immigrant, and then having strong family ties to many places, means that everywhere feels like home and nothing does. I often say that family is both my nationality and my religion. Moving to the US, where I've always been a citizen, feels no more like coming home than moving to Australia, say. Similarly, walking around Zürich felt like a combination of completely alien and somewhere I'd always been - just like San Francisco, or Amsterdam. My ancestors were textile traders there, centuries ago, and had a say in the running of the city, and as a result, our name crops up here and there, in museums and on street corners.

So, Zürich is in the atoms of who I am. So is the Ukrainian town where another set of ancestors fled the Pogroms; so are the ancestors who boarded the Mayflower and settled in Plymouth; so are the thousands of people and places before and since. My dad is one of the youngest survivors of the Japanese concentration camps in Indonesia, who survived because of the intelligence and determination of my Oma. His dad, my Opa, was a prominent member of the resistance. My Grandpa translated Crime and Punishment into English and hid his Jewishness when he was captured as a prisoner of war by the Nazis. My great grandfather, after arriving in New England from Ukraine, was a union organizer, fighting for workers' rights. My Grandma was one of the kindest, calmest, wisest, most uniting forces I've ever known in my life, together with my mother, even in the face of wars, hardship, and an incurable disease.

All atoms. Entire universes in their own right, but also ingredients.

And so is Oxford, the city where I grew up. Pooh Sticks from the top of the rainbow bridge in the University Parks; the giant horse chestnut tree that in my hands became both a time machine and a spaceship; the Space Super Heroes that my friends and I became over the course of our entire childhoods. Waking up at 5am to finish drawing a comic book before I went to school. Going to an English version of the school prom with my friends, a drunken marquee on the sports field, our silk shirts subverting the expected uniform in primary colors. Tying up the phone lines to talk to people on Usenet and IRC when I got home every day, the open window air-cooling my desktop chassis. My friends coming round on Friday nights to watch Friends, Red Dwarf and Father Ted; LAN parties on a Saturday.

In Edinburgh, turning my Usenet community into real-life friends. Absinthe in catacombs. Taking over my house on New Year's Eve, 1999, and having a week-long house party. Walking up Arthur's Seat at 2am in a misguided, and vodka-fuelled attempt to watch the dawn. Hugging in heaps and climbing endless stairs to house parties in high-ceilinged tenements. Being bopped on the head with John Knox's britches at graduation. The tiny, ex-closet workspace we shared when we created Elgg, where the window didn't close properly and the smell of chips wafted upwards every lunchtime. And then, falling in love, which I can't begin to express in list form. The incredible people I have been lucky enough to have in my life; the incredible people who I have also lost.

And California too. We are all tapestries, universes, and ingredients. Works in progress.

If we hold a screen to our faces for too long, the world becomes obscured. Sometimes it's important to step out of your life for a while, so you can see it in its true perspective.

 

If we want open software to win, we need to get off our armchairs and compete.

9 min read

The reason Facebook dominates the Internet is that while we were busy having endless discussions about open protocols, about software licenses, about feed formats and about ownership, they were busy fucking making money.

David Weinberger writes in the Atlantic:

In the past I would have said that so long as this architecture endures, so will the transfer of values from that architecture to the systems that run on top of it. But while the Internet’s architecture is still in place, the values transfer may actually be stifled by the many layers that have been built on top of it.

In short, David worries that the Internet has been paved, going so far as to link to Joni Mitchell's Big Yellow Taxi as he does it. If the decentralized, open Internet is paradise, he implies, then Facebook is the parking lot.

While he goes on to argue, rightly, that the Internet isn't completely paved and that open culture is alive and well, the assumption that an open network will necessarily translate to open values is obviously flawed. I buy into those values completely, but technological determinism is a fallacy.

I've been using the commercial Internet (and building websites) since 1994 - which is a lifetime in comparison to some Internet users, but makes me a spring chicken in comparison to others. The danger for people like us is that we tend to think of the early web in glowing terms: every website felt like it was built by an intern and hosted in a closet somewhere (and may well have been), so the experience was gloriously egalitarian. Anyone could make a home on the web, whether you were a megacorporation or sole enthusiast, and it had an equal chance of gaining an audience. Many of us would like this web back.

Before Reddit, there were Usenet newsgroups. (I'll take a moment to let the Usenet faithful calm down again.) Every September, a new group of students would arrive at their respective universities and get online, without any understanding of the cultural mores that had come before. They would begin chatting on Usenet newsgroups, and long-standing users would groan inwardly as they quietly taught the new batch all about - remember this word? - netiquette.

In September, 1993, AOL began offering Usenet access to its commercial subscribers. This moment became known as "the eternal September", because the annual influx of new Internet users became a constant trickle, and then a deluge. There was no going back, and the Internet culture that had existed before began to give way to a new culture, defined by the commercial users who were finding their way online.

"Eternal September" is a loaded, elitist term, used by people who wanted to keep the Internet for themselves. As early web users, rich with technostalgia and a warm regard for the way things were, we run the risk of carrying the torch of that elitism.

The central deal of the technology industry is this: keep making new shit. Innovate or die. You can be incredibly successful, making a cultural impact and/or personal wealth beyond your wildest dreams, but the moment you rest on your laurels, someone is going to eat your lunch. In fact, this is liable to happen even if you don't rest on your laurels. It's a geek-eat-geek world out there.

For many people, Facebook is the Internet. The average smartphone user checks it 14 times a day, which of course means that a lot of smartphone users check it far more than that. In the first quarter of this year, Facebook had 1.44 billion monthly active users. That means that almost 20% of the people on Earth don't just have a Facebook account: they check it regularly. In comparison, WordPress, which is probably the platform most used to run an independent personal website, powers around 75 million sites in total - but Apple's App Store has powered over 100 billion app downloads.

Are all those people wrong? Does the influx of people using Facebook as the center of their Internet experience represent a gargantuan eternal September? Or have apps just snuck up and eaten the web's lunch?

Back in 2011, I sat on a SXSW panel (yes, I'm that guy) about decentralized web identity with Blaine Cook and Christian Sandvig. While Blaine talked about open protocols including webfinger, and I talked about the ideas that would eventually underlie Known, Christian was noticeably contrarian. When presented with the central concepts around decentralized social networking, his stance was to ask, "why do we need this?" And: "why will this work?"

In the Atlantic, David Weinberger references Christian's paper "The Internet as the Anti-Television" (PDF), where he argues that the rise of CDNs and other technologies built to solve commercial distribution problems has meant that the egalitarian playing field that we all remember on the web is gone forever. While services like CloudFlare allow more people than ever before to make use of a CDN, it requires some investment - as do domain names, secure certificates, and even hosting itself. (The visual and topical diversity of GeoCities and even MySpace, though roundly mocked, was very healthy in my opinion, but is gone for good.)

For most people, Facebook is faster, easier to use, and, crucially, free.

Rather than solving these essential user problems, the open web community disappeared up its own activity streams. Mailing list after mailing list filled with technical arguments, without any products or actual technical innovation to back them up. Worse, in many organizations, participating in these arguments was seen as productive work, rather than meaningless circling around the void. Very little software was shipped, and as a result, very little actual innovation took place.

Organizations who encourage endless discussion about web technologies are, in a very real way, promoting the death of the open web. The same is true for organizations that choose to snark about companies like Facebook and Google rather than understanding that users are actually empowered by their products. We need to meet people where they're at - something the open web community has been failing at abysmally. We are blindsided by technostalgia and have lost sight of innovation, and in doing so, we erase the agency of our own users.

"They can't possibly want this," we say, dismissively, remembering our early web and the way things used to be. Guess what: yes they fucking do.

This stopped being a game some time ago. Ubiquitous surveillance, diversity in publishing and freedom of the press are hardly niche issues: they're vital to global democracy. A world in which most of our news is delivered to us through a single provider (like Facebook), and where our every movement and intention can be tracked by an organization (like Google) is not where any of us should want to be. That's not inherently Facebook or Google's fault: as American corporations, they will continue to follow commercial opportunities. It's not a problem we can legislate or just code away. The issue is that there isn't enough of a commercial counterbalancing force, and it really matters.

Part of the problem is that respectful software - software that protects a user's privacy and gives them full control over their data - has become political. In particular, "open source" has become synonymous with "free of charge", and even tied up with anti-capitalism causes. This is a mistake: open source and libre software were never intended to be independent from cost. The result of tying up software that respects your privacy with the idea that software should come without cost is that it's much harder to make money from it.

If it's easier to make money by violating a user's autonomy than protecting it, guess which way the market will go?

A criticism I personally receive on a regular basis is that we're trying to make money with Known (which is an open source product using the Apache license). A common question is, "shouldn't an open source project be free from profit?"

My answer is, emphatically, no. The idea behind open source and libre software is that you can audit the code, to ensure that it's not doing something untoward behind your back, and that you can modify its function. Most crucially, if we as a company go bust, your data isn't held hostage. These are important, empowering values, and the idea that you shouldn't make money from products that support them is crazy.

More importantly, by divorcing open software from commercial forces, you actually remove some of the pressure to innovate. In a commercial software environment, discussing an open standard for three years without releasing any code would not be tolerated - or if it was, it would be because that standard was not significant to the company's bottom line, or because the company was so mismanaged that it was about to disappear without trace. (Special mention goes to the indie web community here, for specifically banning mailing lists and emphasizing shipping software.)

The web is no longer a movement: it's a market. There is no vanguard of super-users who are more qualified to say which products and technologies people should use, just as there should be no vanguard of people more qualified than others to make political decisions. Consumers will speak with their wallets, just as citizens speak with their votes.

If we want products that protect people's privacy and give people control over their data and identities - and we absolutely should - then we have to make them, ship them, and do it quickly so we can iterate, refine and make something that people really love and want to pay for. This isn't politics, it's innovation. The business models that promote surveillance and take control can be subverted: if we decide to compete, we can sneak up and eat their lunch.

Let's get to work.

 

Community is the most important part of open source (and most people get it wrong)

3 min read

This post by Bill Mills about power and communication in open source is great:

Being belittled and threatened and told to shut up as a matter of course when growing up is the experience of many; and it does not correlate to programming ability at all. It is not enough to simply not be overtly rude to contributors; the tone was set by someone else long before your first commit. What are we losing by hearing only the brash?

Bottom line: if you, either as a maintainer or as a community, are telling people to shut up then you're not open at all.

If you make opaque demands of people to test their legitimacy before participating then you're not open at all.

If you require that only certain kinds of people participate then you're not open at all.

The potential of open source is, much like the web, that anyone can participate. On Known, we're really keen to embrace not just developers, but designers, writers, QA testers - anyone who wants to chip in and create great software with us. That's not going to happen if we're unfriendly or project the vibe that only certain kinds of people can play. Donating time and resources to an open project is a very generous act, and not one that everyone is able to make. Frankly, as a community we should be grateful that anyone wants to take part.

As a project founder, a lot of that is about leading by example. That means being talkative and open. I get a lot of direct messages and emails from people, and I try to direct people to participate in the IRC channel and the mailing list - not just because it allows our conversations to be findable if people in the future have similar questions, but because every single message adds to the positive feedback loop. If there's public conversation going on, and it's friendly, then hopefully more people will feel comfortable taking part in it.

Like any positive communication, a lot of this is related to empathy. I'm pretty shy: what would make me feel welcome to participate in a community? Probably not abrupt messages, terse technical corrections or (as we see in many communities) name-calling. Further to that, explicitly marking the community as a safe space is important. We're one of the few communities to have an anti-harassment policy; I'm pleased to say that we've never had to invoke it. More communities should do this.

Which isn't to say that there isn't more that we can do. There is: we need better documentation, better user discussion spaces, a better showcase for people to show off what they've built on top of Known. We're working on it, but let us know what you think.

And please! Whether you're a writer, designer, illustrator, eager user, or a developer, we'd love for you to get involved.

 

10 things to consider about the future of web applications

3 min read

  1. Twitter - by far the social network that I use the most - is struggling to break 300 million monthly active users and is not hitting revenue targets. (Contrast with Facebook's 1.44 billion monthly actives.) Even investor Chris Sacca has warned that he's going to start making "suggestions".
  2. Instagram - still a newcomer in many people's eyes - is beginning to send re-engagement emails in response to flagging user growth.
  3. The 2016 US election is apparently going to be huge on Snapchat. Translation: Snapchat is over. The next generation of young users are already looking for something else. Snapchat was released in September 2011.
  4. The Document Object Model - core to how web pages are manipulated inside the browser - is slow, and may never catch up to native apps. We've known that responsiveness matters for engagement for over a decade.
  5. It's possible to build more responsive web apps by going around the DOM. But these JavaScript-based web apps are harder to parse and often can't be seen by search engines (unless you provide a fallback, which requires a lot of extra programming time).
  6. Push notifications - which are core to apps like Snapchat, and possibly the future of Internet applications - are not available on the open web. Browsers like Chrome are implementing them on a browser-by-browser basis.
  7. Facebook has no HTML fallbacks, renders almost entirely in JavaScript and lives off push notifications. Twitter has HTML fallbacks, is very standards-based, uses push notifications but also SMS and email, and is generally a good player (with respect to the web, at least, although it's less good at important features like abuse management). Facebook is kicking Twitter's ass.
  8. The thing that may save Twitter? Periscope, a native live video app, which is highly responsive and live-video-heavy.
  9. Users have stopped paying for apps, and instead opt for free apps that have in-app purchases, so they can try before they buy. We're a long way off having a payments standard for the web.
  10. There's no way to transcode video in a web browser, which means uploading video via the web is effectively impossible on most mobile connections. (Who wants to sit and wait for a 1GB file to upload, even on an LTE connection?) Meanwhile, the web audio API saves WAV files, rather than some other, more highly-compressed formats you may have heard of. Similarly, resampling images is difficult. In other words, while the web has been optimized for consumption (albeit in a slower way than native apps, as we've seen), it has a long way to go when it comes to letting people produce content, particularly from mobile devices.
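To put the upload problem in concrete terms, here's a back-of-the-envelope sketch. The uplink speeds are my own illustrative assumptions (real-world mobile throughput varies enormously), but they show why pushing a raw 1GB file from a phone is a non-starter:

```python
# Rough upload-time estimate for a large video file over mobile uplinks.
# The speeds used below are illustrative assumptions, not measurements.

def upload_minutes(file_gb: float, uplink_mbps: float) -> float:
    """Minutes to upload `file_gb` gigabytes at a sustained
    uplink of `uplink_mbps` megabits per second."""
    megabits = file_gb * 8000  # 1 GB (decimal) = 8,000 megabits
    return megabits / uplink_mbps / 60

for label, mbps in [("3G (~1 Mbps up)", 1.0), ("LTE (~10 Mbps up)", 10.0)]:
    print(f"{label}: {upload_minutes(1.0, mbps):.0f} min for a 1 GB video")
    # → "3G (~1 Mbps up): 133 min..." and "LTE (~10 Mbps up): 13 min..."
```

Even a best-case LTE uplink means sitting through a quarter of an hour of upload, which is exactly why transcoding on the device before uploading matters so much - and why its absence on the web is a real gap.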

What does all of this mean?

I don't mean to be pessimistic, but I think it's important to understand where users are at. The people making the web aren't always the people using it, and there's a serious danger that we find ourselves trying to remake the platform we all enjoyed when we first discovered it.

Instead, we need to make something new, and understand that if we're building applications to serve people, the experience is more important to our users than our principles.

All of these things can be solved. But while we're solving them at length, native app developers are going off and building experiences that may become the future of the Internet.

 

Remembrance on Memorial Day

2 min read

Memorial Day is an American holiday that commemorates people who have lost their lives in service to the United States. It's similar to Remembrance Day in the Commonwealth countries, except that it's also a long weekend that marks the start of the summer.

My dad spent the first few years of his life in a Japanese-run internment camp, while my grandfather was captured by the Nazis and had to deny his Jewishness to stay alive. My great grandfather's family fled pogroms in Ukraine. I'm a pacifist, and war is evil, but I also believe it is sometimes necessary as a last resort.

But in remembering the brave people who fought and died, it's also important to remember who sent them, and why. Many people died in Iraq for shameful political reasons. Many people died in Afghanistan. Vietnam. Patriotism can't just be about remembering their sacrifice: it has to also be about trying to make sure it never happens again, whether we consider war to be just or not. One life lost is too many. There are still war criminals involved in some of those wars who need to see justice. There are still politicians who make joining the armed services and dying for their country the only viable career choice for many people.

I also want to remember, as Marc has in his own post, the people who are fighting for freedom domestically. People have lost their lives for civil rights, for equality, for better working conditions for immigrants, for the right to form a union, for the right to vote. In contrast to resource skirmishes for political purposes, their sacrifices have changed our societies for the better in tangible ways. They should never be forgotten.

 

A walk in the park

11 min read

"Daddy," Wendy said, looking up at the sky, the cloudless blue stretching opaquely in all directions. "Can people fly?"

I smiled at her. "They say that some people can," I said.

"Can I fly?"

I shook my head, smiling. "No, honey," I said, gently. "Mommy and daddy can't fly either."

"Why not?"

"Well, for one thing, being on the ground's just fine," I said, patting the grass in the park. "It's nice down here."

And for another, I thought, I can't afford that upgrade.

I can't be completely certain on which day I died, but I'm pretty sure it was the Wednesday before Thanksgiving: the first real cold day in months, when the wind picked up and remembered it was fall, and the auburn of the leaves finally overwhelmed the green. Wendy was with her grandparents, and I just thought, well, why the hell not?

I was the last on my block to die. None of us knew it yet, of course. If anything, we were excited for it and what it meant: the Mayor had been the first to go all the way, and he came back from the store telling us about all the new colors he was seeing. A more vivid spectrum, he said; he could see the life-force in everything around him, from the squirrels in the trees to the trees themselves. He said it was magical. He used that word: magical.

Pretty soon, everyone was doing it. The police all did it as a group, before the police disbanded, and then the local businesspeople in town started to die one by one, and then the rich part of town, and then our neighbors. Finally, when it seemed like everyone else had gone and done it and I was overwhelmed with fear that I was missing out on something vital, I went and did it, too.

Wendy was the last living girl I knew.

When Samantha died, she was gone, and it felt like there was a hole in my heart that was torn fresh each morning and could never heal. I clawed at the walls wishing time would peel back with the wallpaper, and found ways to smile for our daughter even as I wanted to rip myself apart. When she slept, mercifully in her own room because she was a big girl now, I retired to our bed and wept. Were it not for the life that was entrusted to me, my precious girl, I would have simply found my way to the bridge and stepped into nothingness.

Death isn't what it used to be. When I died, I saw new colors in the trees, my metabolism was automatically managed for me by an intelligent software agent, and I gained the ability to manage my emotions from an elegant control panel on my wrist. My memories were organized. For the first time in my existence, I felt fully in control. It was magical.

When I first stepped out of the store, having paid the staff to configure me, I was struck by the silence in my head. The pain of Samantha's legacy death was gone, along with the background ebb and flow of dread that had been my soundtrack. Samantha's face was not staring at me from my mind's eye, as it had done every day for the past four years. I had no desire to whisper her name and call out to her. I had no desire at all.

Instead, I saw information. My mind had become a dashboard, and everything in the world was a point of data. I felt free, and suddenly the world seemed full of possibilities. They had worked hard on designing that first emotion: it was spectacular. The app told me I was empowered.

Wendy and I held hands as we walked through the park. The glow of the energy in the trees rivaled the sunshine. She sang to herself, softly, and my controller app told me it was an old Sandie Shaw tune. It suggested I sing a few bars along with her to build empathy, and I complied.

"Wow, that's a real old song," I said. "Where did you learn that?"

"It's from a commercial," Wendy said, matter-of-factly.

The controller told me which one and played a few seconds for me, and I smiled.

"Do you want to go get a burger?" I asked, sensing that she was hungry.

"How did you know it was from that commercial?" Wendy said, smiling. She nodded.

The burger joint was on the edge of the park; we crossed a tall bridge to get there, and paused briefly to play pooh sticks. We stood at the top of the bridge and each dropped a stick into the water. The winner would be whichever stick the current carried out from under the bridge first; like always, I engineered my throw so that Wendy's would win, and like always, she was delighted. One day she would tire of this game and I would have to find new ways to entertain her. Luckily, I had access to a vast database of games.

It was a traditional sort of place, not unlike the kind my dad had taken me to when I was Wendy's age. Everything was primary-colored, and the tables were wipe-clean. Booths lined with glittered cushioning sat against the walls, while the floor was dotted with circular tables. Once, almost every table would have been occupied with families like ours. These days, almost nobody needed food, so the loop of soft, upbeat music played to an empty room. I already knew what Wendy wanted, and my app communicated this to the kitchen wordlessly; we simply took a booth, and a drone waiter flew out the food once it was ready.

"How's your hamburger?" I asked, knowing the answer was written on her smiling face. The hamburger meat was made with insect meal, but that was all that Wendy had ever known, and anyway, they found plenty of ways to make it just as juicy and delicious as the beef I had enjoyed as a child. I felt her emotions as if they were my emotions and knew that it was good.

"Don't you want to eat anything?" she asked between mouthfuls.

"I'll eat later," I said.

I sighed happily, although my oxygen is processed for me and I no longer use my lungs. "What next, pumpkin?"

"I haven't finished my fries yet," Wendy pointed out, and although I knew she didn't really need them nutritionally, I saw that they would have an emotional benefit. "Once I've finished my french fries, let's go home," she said.

"Sounds like a plan," I said, smiling. The app suggested I look out the window, and I took in the amber light of the dimming sun.

The app strongly suggested I step outside. "I'll be right back, okay, honey?"

"Okay, daddy."

I slid myself out from the booth seat and walked through the swing doors to the street outside. The controller app gave me directions and I turned to follow them exactly, suddenly aware that my energy levels were running low. My pace increasing, I walked around to the rear of the restaurant and scanned the backlot.

There was an old Ford Fusion parked against the dumpsters, and I could see a man rummaging around inside them, his torso fully submerged in trash.

"Excuse me," I said, understanding why I was here.

With the advent of the controller apps, there was no need for a police force: everyone became the eyes and arms of the law. When it was introduced halfway through a lavish keynote presentation, crowdsourced enforcement was hailed by the press as the future of policing. They took great pains to point out that it was not the same as vigilante justice: the apps were highly-regulated, and everything app users saw and heard was recorded for later algorithmic judicial compliance. Anyway, using the app to begin with enforced compliance with social norms; virtually everyone had it installed, so it was rare that policing was even required. Staying on the straight and narrow felt good. The app made sure of it.

The man kept rummaging.

"Excuse me," I said again.

The man pulled himself out of the dumpster and turned to look at me. He was wearing a flannel shirt, and had an unsanitary beard. He'd managed to pull out some discarded food: leaves of lettuce, a few packages of insect meal.

"That food is the property of this restaurant," I said. "You really can't take it."

"They're just throwing it out, dude," the man said.

"It's theft," I said. "If you're hungry, there are ways to get credit so you can buy your own food."

"It's wasteful," the man said.

"It's their property," I said.

He sighed. "Fucking users," he said, under his breath.

"Pardon me?"

"Fucking users," he said again, loudly and deliberately.

He turned to run, but naturally, my reaction time was faster. I caught up to him before he managed to get up to speed without needing to increase my heart rate, and forced his arm behind his back.

He yelled in pain. "I've got kids, man."

"You need to respect the rules of your community," I said. "Your children don't outweigh the rights of this restaurant to their property."

My body sent me a notification. I was critically low on energy; I needed to recharge quickly, or I would shut down.

"Put the property down," I said. "Now."

"Fuck you," the man said, spitting. "I need to feed my family."

Another notification. I would be down soon. "You should put down the food," I said, holding the arm lock. Behind the scenes, the app sent a request to the cloud controller.

Request approved. Unthinkingly, I let go of the man's arm, and put both hands around his head. For a moment, he screamed out in fear, but the app downloaded quickly, and he fell limp. For thirty seconds, we stood in the backlot together, me cradling his head, distant birdsong the only sound. Silently, a spot of light on my wrist pulsed to the rhythm of my long-discarded heartbeat.

Fully charged; needs met. With a single movement, I picked up the carcass and threw it in the dumpster.

Wendy had finished her burger and fries, and was patiently watching for me through the window. She waved at me through the glass as I approached, and flashed me a toothy grin as I sat back down opposite her. "That was delicious," she said, smiling. "Are we ready to go home?"

I smiled at my daughter, the only living child in town. "Absolutely."

We lived in a small, suburban house with a driveway in front and a small yard in the back, similar to around 60% of our town's population. It was a little too big for just two of us, but moving would have been one stressful experience too many for Wendy, so we stayed. The remnants of Samantha's presence no longer bothered me, and Wendy had been too young. For her, they were a curiosity.

Much of the bandwidth for the National Internet had been dedicated to controller apps; almost nobody strolled the web or talked to each other over immersive video, because they didn't need to. We were all a mesh now, united by our controllers and tethered to the cloud. But I had Wendy, so I continued to purchase Internet service so she could tap into the movies. We would often sit on the couch together until she fell asleep, a blanket covering us both, like we had always done. I could sense that it was comforting to her, and it was a pattern that I found pleasant, too. I don't know if what I felt was real closeness, or a well-designed authentic emotional experience. They are materially the same, so I don't believe it matters.

There we lie, father and daughter, as the movie runs to its credits. She dreams of adventures and new experiences, and I watch the electrons dance inside her brain. And then I carry her up to her bed, ready for another day, my hands cradling her head just a little. She is beneficial.

 

Publish on your Own Site, Reflect Inwardly

2 min read

Known gives you the ability to share the content you create across social media platforms at the point of publishing, with just one click. I'm deliberately not doing that with this post.

If you're reading it, it's because you came to my site, or you picked up the content in a feed reader.

One reason to publish on the web is to make a name for yourself, and create an audience for your content or services. But that's not the only reason, or even the best one. I think structured self-reflection is more valuable - with or without feedback.

We've been trained to worry about audience and analytics for our posts. How many people read a piece about X vs a piece about Y? Is it better to post at 2pm on a Thursday or 10pm on a Sunday? Which demographic segments are most interested?

That's fine and dandy if you're a brand, but not all of us need to be brands. Not every piece of content needs to be a performance. If we unduly worry about audience, we run the risk of diluting our work in order to appeal to a perceived segment. Sometimes the audience is you, and that's enough.

The dopamine hit that comes from a retweet or a favorite creates a kind of awkward emotional dependence. A need for audience. There's a lot to be said for slow reflection for its own sake. That's what we encourage when we give blogs to students, and that's probably what we should be practicing more of ourselves.

And of course, I'm speaking for myself. I've decided I need to ease back on social media interactions, and start using my own space as just that: my own space. My own space to reflect, to think out loud, and to publish because I want to. That's how we used to do it on the web. As I've said before: I think the world would be better if we revealed more of ourselves.

 

Elgg and Known: how deep insight can help you build a better community platform

2 min read

Ten years ago last November, we released the first version of Elgg, an open source social networking platform originally designed for higher education, where it was used by Harvard and Stanford. It spread to organizations like Oxfam, Orange, Hill & Knowlton and the World Bank, as well as national governments in countries like Australia, Canada and the Netherlands.

Not bad for a couple of industry outsiders based in Scotland.

Elgg is still in wide use today. I credit that to a technical emphasis on extensibility and ease of use, as well as our focus on being responsive to the needs of the community - but not too responsive. We never veered from the vision we had of an open social networking infrastructure for organizations.

The web has changed unrecognizably since 2004, and Known takes those changes into account: mobile-first, with an emphasis on streams and shorter bursts of content. You can still run it in an organization, but you can run it as a personal platform, too. Higher education institutions are using it to give self-reflective websites to all their students, and more and more private companies are using it to create social feeds internally, too. Design thinking is core to our process, which helps us stay responsive and build tools that truly solve a deep user need.

Known, Inc goes beyond Known the platform: we're exploring new applications that use design thinking, and our deep community platform knowledge, to solve problems in different verticals.

Most importantly, we offer that experience and toolset to other organizations. If you need platform strategy advice, or even to build a new social website or app for your organization, we can help. We have over a decade of experience in building organizational social platforms, and you can put it to work. To get started, get in touch via our website.

 

The full-stack employer

7 min read

Just over four years since I permanently moved to California, I'm beginning to understand the differences in work style between the US and Europe. America still has a largely time-based view of productivity, even in Silicon Valley. But with tech industries in other nations catching up fast, and remote working becoming a more viable option, you need to compete for talent with companies all over the world. You have to be a full-stack employer.

What is a full-stack employer?

Over the past year in particular, there's been a lot of discussion about "full-stack employees", "full-stack developers" and "full-stack startups". The trend is that employees are expected to have a broader range of skills, and be able to switch between them seamlessly. Employees apply their skills in a more holistic manner, moving away from a dedicated position on an assembly line.

This is arguably related to startup culture, and the growing trend for even larger companies to innovate by creating much smaller, more autonomous internal product teams. It has worked for a number of corporations. But the full-stack expectation places greater demands on employees, which must be met by the employer. Simply put, to innovate, you need support.

I'm shamelessly repurposing the term "full-stack" to mean not just the technology you use to build your products and services, not just the skills you are expected to use in the course of doing business, but also the support mechanisms that humans need in the course of doing their job. A full-stack employer is one that sees their employees as a community of people, and that provides structures, support networks and services for them based on that understanding.

The good news is, treating the people who work for you as human beings has a real effect on productivity, sales and the health of a company.

What it's like to have a full-stack employer

The new reality is that you are expected to use a variety of skills in the course of doing your job. But those skills didn't just come to you, fully-formed. Nobody puts on a headset, Matrix-style, and emerges minutes later knowing kung fu: mastering a new skill requires time, effort and investment. Full-stack employers recognize that providing dedicated training and professional development for their employees creates a measurable return on investment. You're expected to join the company with intelligence, an enthusiasm for learning, and skills that relate to your core role - but the reality is that even those will develop as you continue at your company. Importantly, it's needs-led: as an employee, you can tailor your professional development based on your needs. Training isn't dropped on you from above.

You are not expected to be always-on. Phone calls at 3am, Slack messages late at night, urgent emails that need to be responded to out of hours are not on the table. It comes down to respect: the employer understands that you have your own life, and that your choice to work for a company is a relationship that goes both ways.

The ethics of this are clear, but it turns out that workers are more productive when they are rested and take more breaks. The same is true during working hours. Employees are trusted to do their work, and aren't measured in terms of the amount of time they spend at their desks. In fact, research suggests that they should build some break-time into every hour, and they may be encouraged to do this. Similarly, they may be encouraged to take a full hour for lunch, perhaps with a walk. And they're encouraged to go home after eight hours. It all results in happier, healthier employees, and more productivity.

Employees have some choice over their benefits, but they always have full medical coverage in countries where this isn't a given. Childcare is paid for (because the employer saves money by making it available).

Generous parental leave is provided, for both mothers and fathers, engineered in such a way that both parents take it more or less equally. Studies have shown that equal paternity leave helps protect against pay discrimination, and provides happiness (and therefore productivity) boosts for both parents. The company becomes a better place for women to work, allowing it to remain competitive.

Finally, the aspect that may make American managers shake their heads with disbelief. Every employee gets a minimum of six weeks of vacation time a year, and they are strongly incentivized to use it. In order to combat a culture of working all the time, full-stack employers may create a financial bonus for employees who go on holiday, knowing that vacations dramatically increase productivity. Happier workers are more productive.

Why it's great to be a full-stack employer

While it sounds expensive, full-stack employers actually save money. Happier employees are measurably more productive, and arguably more creative, leading to more innovative solutions. These measures also reduce churn, which is important when the cost of replacing an employee can be as much as twice that employee's salary.

Providing a better place to work can be a differentiator, which can help companies compete for employees in a cost-effective way. By way of example, I was once offered a position at a very well-known Silicon Valley web company; one whose services people use every day. The salary was great, and it would have looked good on my resumé. At the end of our meeting, the interviewer remarked to me, "you do end up spending a lot of time at work, but it's okay, because your colleagues become like your family." I refused the job, and started noticing when employees posted Instagram photos of their office on Saturdays and after midnight.

Optimizing the workplace for people who have lives outside of work allows for more experienced employees (who are more likely to have families), and people who have outside hobbies and interests. All of these things provide more bang for your salary buck as an employer - as well as creating a far more enjoyable place to work.

What does this mean for employees and managers?

Although some of the benefits sound costly, most of them actually save the company money over time. The biggest adjustment it requires is an attitude shift.

I would argue that the American reliance on time-at-desk as a productivity metric is an ideological, rather than fact-based, approach. German workers, for example, are at least as productive as their US counterparts, while enjoying six-week vacations, more regular working hours, and so on.

Therefore, the biggest required change is for managers to respect the time of the people on their teams, and to create conditions where employees don't feel guilty for stepping away from their desks, putting their phones down, and spending regular, quality time away from work. In fact, they should feel empowered to do so, because they will be better workers as a result. The same goes for asking for professional development resources, and expecting fair compensation in both monetary and benefit terms.

It's a seismic shift for a country so deeply steeped in the Protestant work ethic.

What of the future of work?

Ubiquitous mobile Internet was supposed to give us more freedom and allow us to live more fulfilling lives. It has not necessarily lived up to this potential.

Just because an employer can contact an employee at midnight on a Thursday does not mean that it should. We have gained all kinds of new, amazing tools to help us be productive and create more innovative companies. Now it's time to learn to use them responsibly.

After the industrial revolution, Henry Ford and others learned that five-day weeks and eight-hour days allow us to be more productive. After the information revolution, it seems we now have to relearn these lessons.