The advent of fully autonomous weapons is often described as the third revolution in warfare, after gunpowder and nuclear weapons, both of which fundamentally changed how conflicts were fought and experienced by combatants and civilians alike. Fully autonomous weapons, or ‘killer robots’, are weapons systems that would select and engage targets on the basis of sensor inputs; that is, systems where the object to be attacked is determined by sensor processing, not by humans. Their capacity to change our world in equal measure should not be underestimated.
Fully autonomous weapons would lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war. History suggests that their use would not remain limited to narrowly defined circumstances. It is unclear who, if anyone, could be held responsible for unlawful acts caused by a fully autonomous weapon – the programmer, manufacturer, commander, or the machine itself – creating a dangerous accountability gap. Countries including the US, China, Israel, Russia, South Korea, and the United Kingdom are investing heavily in the development of these weapons.
Some types of fully autonomous weapons would process data and operate at tremendous speeds. Complex, unpredictable, and incredibly fast in their functioning, these systems would have the potential to make armed conflicts spiral rapidly out of control, leading to regional and global instability. Fully autonomous weapons could also be used outside of armed conflict, such as in border control and policing. They could be used to suppress protest and prop up regimes. Killer robots are systems that intrinsically lack the capacity to empathize or to understand nuance or context. That is why the Campaign to Stop Killer Robots is working with military veterans, tech experts, scientists, roboticists, and non-profit organisations around the world to ban fully autonomous weapons systems via new international law.
The Campaign to Stop Killer Robots aims to ban fully autonomous weapons and retain meaningful human control over the use of force. We view fully autonomous weapons as weapons systems that can identify and fire on targets without meaningful human control over that process. That is, machines that would decide whether or not to kill without a human actively making the decision. We are not seeking to ban weapons that operate under meaningful human control.
The Campaign is not opposed to artificial intelligence (AI) or robotics broadly, or even to the use of AI or robotics by the military. The Campaign is not proposing a ban on unarmed systems designed to save lives, such as autonomous explosive ordnance disposal systems, which may operate with or without human control. But the Campaign believes there is a line that should never be crossed: life and death decision-making should not be delegated to machines.
We believe that the development or deployment of fully autonomous weapons would lower the threshold for entering into armed conflict, and any such system that is deployed could be hacked, spoofed, or malfunction, increasing risk to friendly troops and civilians alike. It is our understanding that no military commander would want to cede control on the battlefield to a fully autonomous weapon.