SpaceX founder Elon Musk, Google research chief Peter Norvig, physicist Stephen Hawking, and Apple co-founder Steve Wozniak are among the prominent names in technology, robotics, and artificial intelligence warning against a “military AI arms race.”
In an open letter published online by the Future of Life Institute, a Boston-based AI research group, the notable names join a long list of experts who are preemptively warning against using AI in war. The letter states:
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”
The institute’s founder, MIT physics professor Max Tegmark, has said the broad goal of his nonprofit is to safeguard against the risks of systems that are too smart for their own good. This month, the group began spending a $10 million grant from Musk for research that would “help maximize the societal benefit of AI.”
If you’re against a military AI arms race, please sign this open letter: http://t.co/yyF9rcm9jz
— Elon Musk (@elonmusk) July 28, 2015
Computer scientists have given the institute’s push a mixed reception. Several university-affiliated computer science researchers have signed FLI’s letter, but others have expressed misgivings about such high-profile crusades against a nascent technology. Their chief complaint: many of the big names making claims about the dangers of AI do not study it themselves, and the technology is too limited for such a debate to make sense today.
The tension between future fears and the current state of AI research is well-illustrated by one of the letter’s signatories. Yoshua Bengio, a computer science professor at the University of Montreal, is listed as a supporter on the FLI letter. But in January, he told Popular Science that “we would be baffled if we could build machines that would have the intelligence of a mouse in the near future.” Though he acknowledged that very intelligent systems could one day be built, he added that “this would be very far in the future, hence the current debate is somewhat of a waste of energy.”
The FLI is not alone in warning against the use of robots in war.
Since 2013, Human Rights Watch and a nonprofit group called the Campaign to Stop Killer Robots have been rallying researchers and diplomats against the use of intelligent machines in war, saying a robot should never replace a human when it comes to making a decision to kill.
The groups have pressed their case at the United Nations, seeking a treaty banning autonomous “killer robots.” In April, a meeting of the UN’s Convention on Certain Conventional Weapons in Geneva heard arguments from experts on that possibility.
In part, the FLI letter echoes this argument, calling for “a ban on offensive autonomous weapons beyond meaningful human control.” The letter, however, stops short of describing what exactly qualifies as “meaningful human control.”
An online form allows anyone to add their name to the list of actors, writers, and researchers of all stripes who have signed the letter. The FLI confirmed, however, that it sought permission from high-profile figures including Musk, Norvig, Hawking, Wozniak, MIT professor Noam Chomsky, and physicists Lisa Randall and Frank Wilczek before adding them to the list.
Updated 5:30 p.m. with more detail on Bengio comments, spelling correction.