Building Ethics not Bombs: The Role of Scientists and Engineers in Humanitarian Disarmament

Lan Mei, Armed Conflict and Civilian Protection Initiative

In the early 1940s, nuclear physicist Lise Meitner rejected an offer to work on the Manhattan Project, declaring in a now-famous statement, “I will have nothing to do with a bomb”. Meitner’s individual ethical stance was commendable but insufficient to prevent the development of a revolutionary new weapon. The Manhattan Project succeeded in building the atomic bomb using nuclear fission, a process Meitner helped to discover.

Lise Meitner’s story highlights the need for scientists and engineers to join together, reflect on the ethical implications of their work, and take appropriate action. According to software engineer Laura Nolan, tech workers can help advance the cause of humanitarian disarmament by collectively pushing for stronger international and national laws and holding their employers to account.

Laura Nolan is a new member of the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control. These organizations are seeking a ban on fully autonomous weapons, popularly known as “killer robots,” which would select and engage targets without meaningful human control. Laura recently obtained a grant from the Campaign to organize events in Dublin to raise awareness about the issue of fully autonomous weapons, including a seminar on the topic that she hopes will engage the anti-war community, the tech sector, and the local media. Last week, she attended a regional campaign meeting in Japan and next month will participate in a global campaigners’ meeting in Berlin.

[Photo credit: Campaign to Stop Killer Robots, 2018.]

Laura left her job at Google last June over the company’s contract with Project Maven, a US Department of Defense initiative to develop artificial intelligence (AI) that could be used to enhance targeting in drone strikes. Although Project Maven did not deal directly with fully autonomous weapons, Laura observed that it took several steps towards them, and it was through this experience that she began to think about the ethical implications of autonomous weapons.

In a recent interview, Laura described three classes of ethical challenges in the tech industry, all of which arise in the context of fully autonomous weapons. First, the technology you build might not work as it is supposed to, causing unintended harm. Second, the technology could work as you intended but have unanticipated repercussions. Third, the technology could operate as intended and expected, but someone else could repurpose it, using it in a harmful way that you did not plan for or envision.

Fully autonomous weapons pose significant concerns in all of these categories. Engineers designing fully autonomous weapons could not be certain that the systems would function as desired. Indeed, such weapons would likely be unable to comply with existing international humanitarian law, including the principles of proportionality and distinction. Laura said she worries that errors or oversights in programming could cause the weapons to target civilians. There is also a risk that, as has happened so many times before, engineers could unwittingly program underlying societal biases into the weapons. And as with all technology, the potential for the weapons to be hacked could lead to other catastrophic consequences.

Even if none of these failures occurred, Laura noted, the use of these weapons could change the cost calculus for engaging in war and taking human life in ways that could generate unending armed conflicts or spark a new arms race. The expense of developing the weapons could also mean that only wealthy nations would be able to deploy them, leading to increasingly asymmetric warfare. In addition, fully autonomous weapons designed for use in warfare could be easily repurposed for use in policing and become another tool in the increasing militarization of law enforcement around the world. Laura expressed a particular concern that engineers designing non-weaponized autonomous systems could find their work being incorporated into autonomous weapons systems without their knowledge.

The specter of these different scenarios motivated Laura to join the Campaign to Stop Killer Robots in advocating for a legally binding treaty preemptively banning fully autonomous weapons. She believes that the moral imperative not to build this new class of weapons will not, on its own, prevent their development; the weapons must also be made too costly to build. An international ban treaty and national laws and regulations that make these weapons politically and socially unacceptable form a critical component of this advocacy strategy.

In addition to campaigning for national and international measures to stop killer robots, Laura wants to encourage tech workers to take responsibility for their work. Near the end of 2018, she started organizing TechWontBuildIt meetups in Dublin as a forum for tech workers to discuss the industry’s impact on human rights and what they can and should do about it.

Laura is very passionate about the responsibilities of technologists, urging them to “think about all of the consequences of our work and try and make sure that we don’t create systems that make victims of the weak and powerless. We need to think about writing systems that empower rather than disempower people.” Laura observed that every person has a responsibility to consider where the products they build might be used. She noted that her own work at Google was not specifically for Project Maven, but she had seen how it could be incorporated into the project. She said she could not continue to be part of a company that might make unethical choices about how it uses its employees’ work.

Of course, personal involvement in campaigns such as the Campaign to Stop Killer Robots can affect a tech worker’s employability, which is one reason Laura is among the few people from industry actively involved in the Campaign. She noted, however, that tech workers need to start standing up to employers and to recognize that they have both a duty and a right: the duty to build only ethical technology and to resist applications of their technological products in unethical ways, and the right to know how their employers and others use their work.

Laura observed with regret that the Silicon Valley mindset, one that values getting products to market without taking the time to reflect on the consequences of the technology, seems to be the dominant one in the tech industry. She is hopeful, however, that 2018 marked a turning point of sorts in public discourse around the ethical responsibilities of the tech industry. She believes that the negative press around many tech companies and tools in recent years has catalyzed larger discussions within the industry about companies’ ethical responsibilities. Examples include the third-party Facebook app that harvested data on up to 87 million user profiles, later shared with Cambridge Analytica; Amazon’s AI recruitment tool that discriminated against female candidates; Microsoft’s chatbot that posted racist and sexist tweets; and Uber’s Greyball tool that allowed its cars to evade the authorities in cities and countries where the service had been banned.

Tech workers today are like the nuclear physicists of the 1930s and ‘40s—in high demand and necessary for the development of the latest military technologies. For Laura Nolan, this means that now is the best time for them to take an ethical and moral stand. In 2018, more than 3,000 scientists and engineers and almost 250 tech companies pledged not to “participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”. However, as history and Lise Meitner’s case demonstrate, individual pledges against building weapons are not enough. To prevent arms-induced human suffering, scientists and tech workers should take advantage of their unique position to garner more support for humanitarian disarmament efforts and to pressure their industry to uphold ethical standards.