Tesla CEO Elon Musk and Google AI expert Mustafa Suleyman are among 116 experts from 26 countries who have signed a letter asking the United Nations to ban autonomous weapons worldwide, warning that killer robots could usher in an unprecedented era of deadly warfare.
The group warned the UN’s Convention on Certain Conventional Weapons review conference that “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the Guardian reported.
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways,” the letter continued. “... We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” Clearpath Robotics founder Ryan Gariepy told the Guardian.
The signatories said they want lethal autonomous systems that can make their own independent decisions to kill to be added to the 1983 Convention on Certain Conventional Weapons, which banned or restricted land mines, incendiary weapons, and blinding lasers, among other weapons. Their letter will be launched at the International Joint Conference on Artificial Intelligence, which kicks off in Melbourne on August 21st.
Musk has been particularly outspoken on the topic, though his fixation has generally been the idea that artificially intelligent robots pose an existential risk to the future of the human species.
This letter is a little more down to earth, seeing as remotely controlled weapons like Predator drones are already in widespread use, and some fully autonomous weapons, like shipboard missile defense systems, have already been developed. On the South Korean side of the DMZ, fortifications include Samsung’s SGR-A1 sentry gun, which reportedly has an autonomous firing mode. Machines already handle much of the decision-making process in warfare (helping troops identify targets, for example), and are poised to take over even more of it.
In other words, fears of a potential Skynet scenario are not a prerequisite for worrying that merciless robots could make the world of the future a nastier, less stable place. Take, for example, the escalation of drone warfare under the Obama administration. While the White House could have ordered the military to kill thousands of people in the Middle East, Afghanistan, and North Africa with conventional jets, there’s little doubt that the physical and emotional distance afforded by remotely piloted Predator drones was a key factor in the program’s eventual scale.
Law enforcement use of lethal robots is potentially just as concerning; last year, Dallas police killed a barricaded mass shooter with a bomb delivered by a robot.
As the Guardian noted, numerous fully autonomous systems that could make killing even more convenient than remotely controlled vehicles are in development now, including the UK’s Taranis drone, the US’s Sea Hunter warship, and autonomous versions of Russia’s Uran-9 armored vehicle. None of them have terrifying steel skulls with glowing red eyes like the robots from Terminator, but each carries far more weaponry than one of those things ever did.