Scientists Eric Horvitz and Lawrence Krauss gathered a group of 40 artificial intelligence experts to discuss possible doomsday scenarios arising from advances in AI.
Bloomberg reports that Horvitz and Krauss convened the group at Arizona State University, with funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn, for a workshop titled “Envisioning and Addressing Adverse AI Outcomes.” Forty AI scientists and cybersecurity experts were split into groups of attackers (the red team) and defenders (the blue team). The groups then simulated rogue AI scenarios ranging from stock market manipulation to full-blown global warfare.
Horvitz, who is managing director of Microsoft’s Research Lab in Washington, is optimistic about the future of AI. “There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology,” he said. “To maximally gain from the upside we also have to think through possible outcomes in more detail than we have before and think about how we’d deal with them.”
Others, however, are more cautious about the future of artificial intelligence and consider Horvitz's outlook overly optimistic. Participants in Horvitz and Krauss' workshop were therefore asked to submit entries describing their worst-case scenarios involving artificial intelligence. The guidelines stated that the scenarios must be realistic and based on current technologies, or on technologies that appear possible within the next five to 25 years. Four scenarios were picked from each team, and their authors were appointed to a panel to discuss the attacks and how to defend against them.
Some of the suggested scenarios included AI being used to influence or hack elections, or to mount large-scale cyberattacks. Horvitz gave the example of AI hacking self-driving cars to misread traffic signs, so that a car would register a “stop” sign as “yield,” leading to dangerous traffic situations.
John Launchbury, who directs one of the offices at the U.S. Defense Advanced Research Projects Agency, and Kathleen Fisher, chairwoman of the computer science department at Tufts University, both discussed the dangers of intelligent, automated cyberattacks. The two experts posed the scenario of a cyber weapon designed to hide itself and defeat all attempts to destroy it. If such a weapon gained direct access to the Internet, it could cause untold damage.
“We’re talking about malware on steroids that is AI-enabled,” said Fisher.
The defending blue team argued that an advanced AI would need massive amounts of computing power and communication to mount a full-scale cyberattack, making it easier to detect. The red team countered that the AI could simply hijack players' computers through something like an addictive video game, hiding the attack behind a seemingly innocuous activity.
University of Michigan computer science professor Michael Wellman posed the scenario of massive stock market manipulation by an AI. The blue team fared better on this front, suggesting that such attackers be treated like malware and recognized using a database of known hacks. Wellman, a 30-year veteran of AI research, said he believes this approach could be effective in finance.
Lawrence Krauss, chairman of the board of sponsors responsible for the Doomsday Clock (a symbolic measure of how close the world is to large-scale global catastrophe), said that some of what he saw at the gathering “informed” his thinking on whether the clock's hands should be moved one minute closer to midnight, and eventual catastrophe. “Some things we think of as cataclysmic may turn out to be just fine,” said Krauss.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan_ or email him at lnolan@breitbart.com