Over 1,000 prominent artificial intelligence specialists (including engineers, computer scientists, futurists and academics) have signed an open letter calling for a ban on “offensive autonomous weapons beyond meaningful human control”, warning of the dangers of weapons that can “select and engage targets without human intervention”. The letter states that the deployment of such systems is feasible within years rather than decades, as previously supposed.
In fact, a 2012 report by Human Rights Watch listed an alarming number of near-autonomous lethal weapons systems already in use worldwide: from a robot that patrols the border in South Korea, using sensing equipment to detect humans up to two miles away and able to kill from a safe distance, to a German automated system that defends military bases in Afghanistan by detecting incoming gunfire and firing back. Today’s autonomous weapons still require a human to approve the computer’s actions, but they operate at speeds that leave little time for genuine deliberation. No one, however, has yet built a weapons system sophisticated enough to decide for itself when it should be deployed.
The open letter describes autonomous weapons as “the third revolution in warfare, after gunpowder and nuclear arms”. It suggests that the key global question for humanity today is “whether to start a global AI arms race or to prevent it from starting”. The letter warns that, if developed, autonomous weapons could easily fall into dangerous hands and be put to ill ends, for example by “dictators wishing to better control their population” or “warlords wishing to perpetuate ethnic cleansing”. The researchers also point out that, unlike nuclear weapons, autonomous weapons require no scarce, hard-to-obtain materials, and will therefore be difficult to monitor.
The letter was announced at the 2015 International Joint Conference on Artificial Intelligence, held in Buenos Aires, Argentina. Because the letter is open, anyone who wants to back the cause can add their signature. Endorsers include some of the tech world’s most prominent names: Apple co-founder Steve Wozniak, Tesla CEO Elon Musk and renowned scientist Stephen Hawking. Other backers include Google DeepMind founders Demis Hassabis and Mustafa Suleyman, along with 20 other DeepMind staff members.
The backing scientists are evidently concerned to protect their freedom to pursue AI research, worrying that the development of AI weapons could restrict their work. Part of the letter reads: “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons—and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits”.