I don't want my software to kill people

Dave Winer poses a question:

If you think of yourself as an "open source developer" please ask yourself this question. Are you as committed to freedom for people who use your software as you are to freedom for developers? Not just freedom to modify the source code, but freedom to do anything they like with the stuff they create. If not, why, and where do you draw the line?

I’m not sure I consider myself an open source developer these days. I don’t have the time or bandwidth to write software for myself on a regular basis in the way that I used to. There’s the software I help with in my work (which is, these days, more about team dynamics and process than about writing code); that’s about all I have time for outside of my family. I’m having a lot of trouble making any time at all for my own projects.

But I used to write a lot of open source code (Elgg, Known, and more contributions elsewhere). And I’ve spent a lot of time thinking about this subject.

I think we have to consider that the principles of the free software movement, revolutionary though they genuinely were, were also set in the same mindset that latterly saw its founder, Richard Stallman, spectacularly fall from grace. They are principles that deal with software development and licensing in strict isolation, outside of the social context of their use. They are code-centered, not human-centered.

Dave’s question has two angles that I’d like to discuss: one briefly, and the other at more length (because it’s more controversial in open source circles).

The first is: how easy is open source software to use, anyway? Can users do anything they like with the stuff they create? Doesn’t a commitment to user freedom also necessitate a commitment to ease of use? I think it does, but open source projects rarely have capacity for design or user experience research, and even when people with those skillsets want to contribute, projects quite often don’t know what to do with them. The tools (from GitHub on down), the culture, and the mindsets are all code-first. There is no good way to open source user research or the empathy work that is a core part of software development. A code-centric approach takes the humanity out of software, and work has to be done to put people back at the center.

The second, more complicated one, is: I don’t want my software to be used to cause harm.

You could couch that in terms of liability. Many software licenses disallow use in nuclear facilities, for example. But I want to go further. I don’t want anything I built to be used to kill people; nor to discriminate against them; nor to commit hate crimes; nor to intentionally organize or facilitate any act of violence or assault.

I think many software developers would feel the same way. But any license that incorporated clauses to this effect would fail to be recognized by the Free Software Foundation or the Open Source Initiative.

My blunt take on that is that I don’t care: clearly the principle of not causing harm is more important than recognition by some foundations, particularly ones like the FSF, whose leaders have shown themselves to be so lacking in empathy. If the idea of not causing harm is outside the realm of the existing open source movement, then we need a new movement.

The word “free” in free software is famously overloaded: it’s “free as in speech, not free as in beer”. But there are many kinds of free speech, and even in America, where it is enshrined in the First Amendment to the Constitution, it has limits.

It’s worth considering whose freedom we value. Do we value the freedom of the people who use software, or do we also value the freedom of the people the software is used on? While the latter group doesn’t always exist, when it does, how we consider its members says a lot about us and our priorities.

Take a drone used in warfare that incorporates an open source library originally developed for some other purpose. The author released it under a license that dictates how it can be modified and shared. Shouldn’t they also have the right to say that it can’t be used in a bombing campaign? Open source principles say no.

Consider a police AI system used to pre-emptively target people who might commit a crime. Because of underlying biases both in the corpus of data the model was trained on and in the police force itself, and because of a fundamental disconnect between the Minority Report promise of this technology and what it can actually deliver, these systems tend to be wildly discriminatory: essentially a new cover for racial profiling. Shouldn’t a software library author be able to opt out of being part of this kind of system? Open source principles, once again, say no.

Or, closer to home for me, take an open source community platform that is used by neo-Nazis to publish propaganda about Jewish people, or to organize actions against specific people or organizations. The authors might have designed it for use by aid workers or in education, but open source licenses place no restrictions on other uses.

Code does tend to find other uses. I once co-organized a demo day when I was at Matter Ventures, and had the privilege of chatting with Chelsea Manning, who was in attendance. I asked her what she thought; she was glowing about some ventures, but then went through a point-by-point list of which platforms on show could be used for military and surveillance purposes in the hands of the wrong investors or acquirers. It was one of the most eye-opening conversations of my life.

When an author releases code to the open source commons, they invite others to enter into a relationship with them. Those third parties can incorporate the code into their own projects under some restrictions, and modify and re-share it under others; the exact terms of incorporation, modification, and re-sharing vary from license to license. But further restrictions are not a stretch. The author is giving their work away for free; this is not work for hire. They should have the right to restrict its use. They should not have to simply accept that someone could use their work to kill people, commit hate crimes, perpetuate systemic injustices, or otherwise cause harm. There is nothing good or principled about that idea.

There is also no need for the FSF or the OSI to be the sole arbiters of what counts as free or open source software. The only things that really matter are how authors want to release their work, how downstream users might incorporate it, and how the rights and well-being of the people it is used on are affected.

This isn’t just about warfare, systemic discrimination, or hate crimes (although those all should be enough). There are questions here about the rights of software authors, and the role of software in a just and equitable society. To limit our considerations to code is to say we don’t care about the people affected by our work. And to do good work, we must care.
