Boston group awards $6m from Elon Musk to jump-start artificial intelligence research

Elon Musk, CEO of Tesla Motors and SpaceX has donated $10 million to the Future of Life Institute in Boston to explore the safety of artificial intelligence.

Elon Musk hasn’t been shy about sharing his fears about artificial intelligence, or AI. Earlier this year, the Tesla and SpaceX founder described the dangers of AI run amok: “You could construct scenarios where recovery of human civilization does not occur,” he said. To address that risk, he gave $10 million to the Boston-based Future of Life Institute, or FLI, to lead a global research program into AI safety. FLI organized a grants competition to select the research it would fund, and today announced the competition’s winners.

Panels of professors and researchers selected the winners from about 300 initial applicants. The judges were looking for research that “aims to help maximize the societal benefit of AI,” FLI wrote on its website.

FLI will award about $7 million to 37 research teams. Musk had designated $6 million to be allocated this year, with an additional $4 million dedicated to follow-up work on the most promising projects. The remaining $1 million for the grantees was donated by the Open Philanthropy Project, an organization created by Facebook co-founder Dustin Moskovitz that funds projects that “help humanity thrive.”

Three of the research projects involve building AI systems that can learn what humans care about by observing us. A Carnegie Mellon University project aims to develop AI that can explain its decisions to humans. Research at Stanford University will try to find optimal economic policies in a fully automated society. The control of lethal autonomous weapons is the subject of a grant awarded at the University of Denver. Other grant awardees are at Cambridge, Oxford, and universities in Australia, Italy, and Switzerland. The full list of winners is available online.

The grants will fund these programs for up to three years. In describing the importance of this research, FLI president Max Tegmark cited this week’s premiere of the movie “Terminator Genisys.” “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI,” Tegmark said in a statement. “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”

The Future of Life Institute, a volunteer-run research and outreach organization, was founded in early 2014. It gained attention from Musk and others when Tegmark, along with FLI scientific advisers including Stephen Hawking, published a Huffington Post op-ed about the need for AI safety research. The group is interested in all kinds of existential risks to humanity, but focuses especially on artificial intelligence.

At a June 27 conference at Boston University’s College of General Studies, FLI core member Richard Mallah described the reasons for that focus. Computers don’t need bodies to take control of our society, said Mallah, who also heads research in AI and text analytics for Cambridge Semantics. An Internet connection is enough. That’s why armies of malevolent robots aren’t the real worry. Artificially intelligent systems don’t even need evil intentions to do damage; all it takes is values that don’t line up with our own.

As an example, he described a self-driving car and a human rider who asks to be taken to the airport as quickly as possible. Maybe the car drives 300 miles per hour to the airport terminal and then slams on the brakes, launching its occupant through the windshield. From the car’s perspective, it has met its goal. But in this case its values weren’t aligned with the human’s.
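To make the mismatch concrete, here is a minimal sketch of the kind of objective misspecification Mallah is describing. It is an illustration only, not code from FLI or Cambridge Semantics; the trip length, the cost terms, and the names `trip_cost` and `plan_speed` are all hypothetical.

```python
# A minimal sketch of objective misspecification, loosely modeled on
# Mallah's self-driving-car example. Everything here is hypothetical.

DISTANCE_MILES = 20  # assumed trip length to the airport

def trip_cost(speed_mph: float, penalize_discomfort: bool) -> float:
    """Cost the car tries to minimize for a given cruising speed."""
    travel_time = DISTANCE_MILES / speed_mph  # "as quickly as possible"
    if not penalize_discomfort:
        return travel_time  # literal objective: only time matters
    # Aligned objective: add a term that grows with dangerous speed.
    discomfort = max(0.0, speed_mph - 70) ** 2 / 1000
    return travel_time + discomfort

def plan_speed(penalize_discomfort: bool) -> float:
    """Pick the speed (in 5 mph steps) with the lowest cost."""
    candidates = range(5, 305, 5)
    return min(candidates, key=lambda s: trip_cost(s, penalize_discomfort))

print(plan_speed(penalize_discomfort=False))  # 300 -- meets the literal goal
print(plan_speed(penalize_discomfort=True))   # 70 -- a sane highway speed
```

With only the literal objective, the planner picks the fastest speed it can represent, precisely because the stated goal said nothing about the passenger arriving intact; a single extra term encoding what the human actually cares about changes the plan entirely.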

“This is essentially the problem of the genie that’s been told for thousands of years,” Mallah said. We may meet an entity that can grant our wishes, but if we aren’t extremely specific in how we ask, we might not get what we want.