I've always had a complicated relationship with revenue. Back when we were working on the fully-managed version of Elgg in 2006 or so, competing as a bootstrapped company with Ning and its $100M in funding, we differentiated ourselves by charging for our services so we could be more sustainable. A few years later, Dave Tosh and I laughed at our naïveté: "choose us, because we have a business model!"
The point, of course, is that users didn't care if we had a business model. To them, we were a service that charged money in competition with a service that didn't. Where we did win customers, it was because we had provided something unique that users needed.
It's not that revenue isn't the right path to create a sustainable business. I strongly believe that it is. It aligns services with their users and creates incentives that don't promote surveillance, predatory business practices, or monopoly strategies. The entire web - and the world - would be better if more services were revenue-bound. It's one of the major reasons I've chosen to work at Unlock.
But we have to accept that most users don't care. If there's one thing I've learned from three open source startups, it's that you can't sell on ideology. It's not that users need education on the issues. It's that everyone has things going on in their lives, and you can't expect people to care about the same things as you. There will always be a community of early adopters and enthusiasts who are on the same page as you, but the only way to truly de-risk your venture is to build something that real people actually need.
Back in the Elgg days, we were doing a lot of work with higher education, which was just beginning to discover social media. Educators were integrating Twitter into their classes - sometimes at the grade school level - and encouraging their students to sign up for commercial services. We were appalled by this, for ideological reasons: those services were free and made their money from user data. Making them a required part of a syllabus was akin to forcing students to participate in surveillance. But our pleas, and the pleas of a small number of others, fell on deaf ears.
Over a decade later, that trade-off has become much more obvious. The New York Times reported recently that facial recognition databases have been trained on the user photos uploaded to a range of free services:
The databases are pulled together with images from social networks, photo websites, dating services like OkCupid and cameras placed in restaurants and on college quads. While there is no precise count of the data sets, privacy activists have pinpointed repositories that were built by Microsoft, Stanford University and others, with one holding over 10 million images while another had more than two million.
As reported in the story, at least one database, innocuously built from CCTV footage at a cafe in San Francisco, was later used in facial recognition technology deployed by the Chinese military to monitor Uighurs, an oppressed minority group being imprisoned in concentration camps. Of course, other facial recognition technology, notably Amazon's Rekognition service, is being used by ICE to target and deport immigrants.
Every educator who made commercial social media a part of their curriculum is complicit in adding their students to these training databases. Nobody who studies the space can plausibly claim ignorance of this potential. But the ideological imperative was outweighed by pragmatic considerations.
These kinds of decisions are made every day. Do you make sure that the chocolate you buy isn't picked by child slaves? Put as bluntly as that, it seems imperative, but I bet you don't check. It would be lovely if we could rely on people to make ethical consumer decisions, but generally they won't. So the solution has to be to build ethically and to meet a user's need in the most direct way possible. Build something that people really want, and do it ethically, without making the ethics the differentiator. You'll capture some early adopters through the ethics of your work, but you'll get the bulk of your customers by serving their self-interest.
Most crucially, if you're building something that has intrinsic value to your users, you can charge money for it and make money in a way that is in line with your values.
By now, the adage that "if you're not the customer, you're the product being sold" is pretty old hat. But it remains the case that everyone has to eat and pay for a roof over their heads, and that businesses need to make a profit. Software isn't made by magical elves who can live without being paid. Nothing is actually free. If a service isn't making enough money up-front, it has to make up the difference through other means, whether that's placing invasive advertising, selling user datasets, striking "data partnerships", or all of the above.
Arguably, revenue alone won't be enough to stop these practices: where profit can be made, it will be. We need strong legislative consumer protections to prevent this kind of user betrayal. But once the industry has cleaned up its act, sustainable revenue practices will need to be in place to support the services we use every day.