Researchers at Google are developing a “kill switch” to prevent artificial intelligence from becoming a threat to humans.
According to a paper released by Google’s DeepMind team, research is being conducted with the University of Oxford to prevent a situation in which artificially intelligent robots stop listening to their programmers. Both Tesla CEO Elon Musk and theoretical physicist Stephen Hawking have warned that artificial intelligence could pose an existential threat to humanity.
Researchers at the University of Oxford are looking for a way to remotely interrupt a misbehaving robot, regardless of what it is doing at the time.
As the researchers explain, safe interruptibility “can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not necessarily receive rewards for this.”
Programmers understand the threat posed by artificial intelligence and even go so far as to concede that less-than-perfect behavior is expected from robots. The researchers claim that artificially intelligent robotics are “unlikely to behave optimally all the time.”
The paper continues: “If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions (harmful either for the agent or for the environment) and lead the agent into a safer situation.”
The paper suggests that the goal of programmers should not only be to ensure that AI devices can be safely interrupted, but also to ensure that AI isn’t capable of learning how to prevent such interruptions.
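To make the idea concrete, here is a minimal sketch of how a learning agent might be kept from adapting its behavior around interruptions. The toy corridor environment, the interruption trigger, and the tactic of simply skipping learning updates on operator-forced steps are illustrative assumptions, not the paper's formal construction (which analyzes off-policy learners and modified update rules):

```python
"""
Illustrative sketch only: a toy "big red button" around a tabular Q-learning
loop. The environment, reward values, and the idea of skipping updates on
operator-forced steps are assumptions for illustration, loosely in the spirit
of safe interruptibility rather than the paper's exact method.
"""
import random

# Toy corridor: states 0..4, goal at state 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ACTIONS = (0, 1)

def step(state, action):
    """Environment dynamics: move left or right, reward 1 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def operator_interrupts(state):
    """The 'big red button': now and then the operator overrides the agent."""
    return state == 2 and random.random() < 0.3   # hypothetical trigger

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    state, done = 0, False
    while not done:
        interrupted = operator_interrupts(state)
        if interrupted:
            action = 0          # operator forces a "safe" action (move back)
        elif random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])

        nxt, reward, done = step(state, action)

        # Skip learning on operator-forced steps so the agent's value
        # estimates, and hence its policy, are not shaped by interruptions.
        if not interrupted:
            target = reward + (0.0 if done else gamma * max(Q[nxt]))
            Q[state][action] += alpha * (target - Q[state][action])

        state = nxt

print("Learned greedy actions per state:",
      [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

In this sketch the agent still gets interrupted, but because interrupted steps never enter its learning updates, it has no incentive to route around the operator; that indifference to interruption is the property the paper aims to guarantee formally.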
Tom Ciccotta writes about Free Speech and Intellectual Diversity for Breitbart. You can follow him on Twitter @tciccotta or on Facebook. You can email him at tciccotta@breitbart.com