aNewDomain/RobotRepublic — The United Nations should move immediately to ban autonomous weapons and kill off any “killer robot” initiatives it knows about, an international coalition of robotics and AI business and academic leaders said today.
SpaceX founder Elon Musk, Google DeepMind cofounder Mustafa Suleyman and Element AI’s Yoshua Bengio are just three of the 116 business leaders and academics from 24 nations who signed the open letter to the United Nations in Geneva. (Scroll below the fold to read the letter in full.)
The open letter, organizers say, is in response to a recent U.N. vote to commence discussions about such smart military tech as war drones, tanks and automated machine guns and assault rifles.
If the sponsoring Campaign to Stop Killer Robots and its signatories have their way, there won’t be much to discuss. The group wants an outright ban, and it wants the UN to act now, before it’s too late. After gunpowder and nuclear bombs, the authors write, autonomous weaponry could usher in “a third revolution in warfare,” bringing “conflict that (can) be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”
That’s why the group is urging the UN to outlaw autonomous weapons, killer robots, assassin drones and like technologies. If it doesn’t, the group warns, a deadly arms race to procure killer robot and AI technology will begin, and then there will be no turning back.
“We do not have long to act,” the letter’s closing statement reads. “Once this Pandora’s box is opened, it will be hard to close.”
Killer microdrone swarms?
AI and robotics innovations in weaponry could arrive in five to 30 years, the experts said, but plenty of smart weapons already in use are alarming enough. Ever heard of the $157 million long-range anti-ship missile system? Once given a target, it will doggedly go after it and try to take it out, no further human intervention required. And maybe you’ve read about the new X-47B, considered to be the most advanced unmanned drone in the US military. It needs hardly any help from a human remote pilot to take off, fly and land on carriers. Israel’s Harpy drone will do all that, plus patrol till it detects an enemy radar signal and then fire at the source.
Defense systems like the US Phalanx, pictured in the cover image above and shown in the video below the fold, will similarly shoot down incoming missiles, with no time for humans to intervene — presuming they even wanted to.
In the open letter and an accompanying release today, Bayesian Logic founder Stuart Russell argued that an outright UN ban on autonomous weapons is essential:
“Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”
“The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action,” said Clearpath Robotics founder and CTO Ryan Gariepy, who appears to be the first person to sign the letter.
“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he wrote. “The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”
“Autonomous weapons systems are on the cusp of development right now,” said deep learning expert and Element AI cofounder Bengio. “I signed the open letter because the use of AI in autonomous weapons hurts my sense of ethics, would be likely to lead to a very dangerous escalation, because it would hurt the further development of AI’s good applications, and because it is a matter that needs to be handled by the international community,” he said, “similarly to what has been done in the past for some other morally wrong weapons like (chemical and biological) weapons.”
The open letter to the U.N. was largely organized by AI thinker Toby Walsh, who led the creation and delivery of a similar letter to the UN back in 2015. That letter was endorsed by physicist Stephen Hawking and Apple cofounder Steve Wozniak, and in the end scored thousands of signatures from AI and robotics thinkers at universities, think tanks and startups around the world.