Doing no harm

I used to have a simple ethical stance that influenced my career choices: I wouldn't work for a defense or arms company, and I wouldn't work for a bank.

In both cases, the idea was that I didn't want my work to result in someone's death. For defense or weapons, the connection to death is obvious. For banks, my argument was that traditional banking institutions work in a predatory way that actively harms people in poverty, worsening their situation at best, and exploiting it at worst.

While I stand by both lines, this stance is a little too simple. What about challenger banks, for example? Particularly those which seek to upend the status quo and provide real help for people who need it? What about technology companies whose platforms are used to profile immigrants or activists? What about companies that further - or simply choose to ignore - gaping racial disparities?

My computer science degree taught me many things. I learned how to diagram finite state automata like nobody's business; I've got a pretty good handle on how to analyze the cost effectiveness of an algorithm; I got my head around Prolog, C, Java, and XSLT. While I believe we learned about ethical considerations around artificial intelligence - the trolley problem, for example - I don't recall being taught about the ethics of our own actions.

Over the last few years, it's become a more accepted idea that software is the reflection of the people who build it: engineering, product, and business decisions are all made by humans who use their own value systems and understanding of their ethical context. Omidyar Network's Ethical OS is one response to this challenge, allowing organizations to make better decisions as they build products. The Center for Humane Technology has, to its credit, also been refining its approach to advocating for ethics, having launched amidst some deserved criticism. These endeavors, as well as the work of inclusion-minded organizations like Code2040, Women Who Code, Techqueria, Trans*H4ck and others, represent a great deal of progress. Of course, there's significantly more progress that still needs to be made.

But there are relatively few resources centered around making ethical decisions as an individual. How might you choose your next job in an ethical way? What kind of work should an engineer - or a product manager, designer, marketer, etc. - feel comfortable doing? What are the codes of ethics we should live by and look for among our peers?

Ethics is, of course, a widespread area of study, and there are plenty of endeavors outside of tech to figure out how to apply it. Santa Clara University's Markkula Center for Applied Ethics hosts a framework for considering ethical decisions that is probably a good starting point. But I've found very little that dives specifically into the ethical challenges that building software on the internet poses for individuals. Not only do you need a framework for asking the right questions, which the Markkula Center's work helps provide, but you need the insight and knowledge to truly understand the implications of your work.

A few years ago, Chelsea Manning attended the New York demo day for Matter, the values-based accelerator where I was Director of Investments. She was on the board for Rewire, a startup that was attempting to build an easy-to-use encrypted email solution for journalists and activists. We had worked hard to select teams that we believed had the potential to make the world more informed, inclusive, and empathetic. Chelsea is very smart indeed, and doesn't hold her opinions back; I was eager to get her feedback on the teams.

It was a shock to me when she explained which technologies could be used for surveillance, which could be used for weapons, and so on. The teams were absolutely not building for those use cases, but in the hands of the wrong investors or acquirers, their technology could be used to cause harm. She was right. While we had invested in some genuinely incredible people, I realized I hadn't done enough to discuss the implications of the teams' work or to ensure that it couldn't cause harm in the wrong hands. Intention is not enough. I consider this one of the most important conversations of my life.

A similar conversation might be enlightening for engineers who build facial recognition software used by ICE to scan DMV records and build a corpus of data for tracking immigrants. Or those who build modern payday loan products that plunge low-income people into debt traps. Or those whose machine learning algorithms predict "high-risk renters", locking in historic racial disparities. Or engineers who find themselves agreeing with James Damore's outrageous Google memo. And we need to have these conversations more openly, so that we can improve understanding and share the knowledge and insights that will help everyone in the industry make better decisions.

Ethical challenges are subjective and non-deterministic, and it's difficult to build a hard-and-fast framework that encompasses them - which is a hard pill to swallow for engineers, who are used to living in a deterministic universe built out of discrete logic and testable outcomes. My "don't build weapons, don't work for a bank" rules simply don't cut it. It's tempting to reduce my stance to a pat motto like "do no harm", but like its obvious cousin "do no evil", it leaves too much wiggle room for work with less-than-positive implications.

There's no alternative to assessing each opportunity on its own terms and asking the right ethical questions, although we can help one another by making those assessments public and learning from them. We need to do that for everything we assess, from job opportunities to business models to technology architectures. Technology is not amoral; neither is business. And we all have a responsibility to do the right thing.
