Google’s DeepMind artificial intelligence division is developing A.I. that can make its own plans and reason about the consequences of its actions before taking them, according to a report.
“Imagining the consequences of your actions before you take them is a powerful tool of human cognition. When placing a glass on the edge of a table, for example, we will likely pause to consider how stable it is and whether it might fall,” proclaimed Google’s DeepMind team in a blog post. “On the basis of that imagined consequence we might readjust the glass to prevent it from falling and breaking. This form of deliberative reasoning is essentially ‘imagination’, it is a distinctly human ability and is a crucial tool in our everyday lives.”
“If our algorithms are to develop equally sophisticated behaviors, they too must have the capability to ‘imagine’ and reason about the future. Beyond that they must be able to construct a plan using this knowledge,” they continued, before detailing the experiments they performed with the A.I.
The team tested the A.I. on two games, Sokoban and a “spaceship navigation game,” both of which reportedly “require forward planning and reasoning,” and found that “the imagination-augmented agents outperform the imagination-less baselines considerably.”
“They learn with less experience and are able to deal with the imperfections in modelling the environment,” DeepMind explained. “When we add an additional ‘manager’ component, which helps to construct a plan, the agent learns to solve tasks even more efficiently with fewer steps.”
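In broad strokes, that kind of “imagining” amounts to model-based look-ahead: before acting, an agent rolls candidate plans through a learned model of its environment and commits to the one whose predicted outcome looks best. The short Python sketch below illustrates only that general idea; the EnvironmentModel, imagine_rollout, and choose_action names are hypothetical stand-ins for illustration, not DeepMind’s actual imagination-augmented architecture.

import random

class EnvironmentModel:
    """Hypothetical learned model that predicts what happens next for a given
    state and action -- a stand-in for the learned model DeepMind describes."""
    def predict(self, state, action):
        # A real system would use a trained neural network here.
        predicted_next_state = state          # placeholder transition
        predicted_reward = random.random()    # placeholder predicted reward
        return predicted_next_state, predicted_reward

def imagine_rollout(model, state, plan):
    """'Imagine' a plan by rolling its actions through the model
    instead of the real environment, totalling the predicted reward."""
    total = 0.0
    for action in plan:
        state, reward = model.predict(state, action)
        total += reward
    return total

def choose_action(model, state, candidate_plans):
    """Act as a simple 'manager': compare imagined plans, then commit
    to the first step of whichever plan looked best in imagination."""
    best_plan = max(candidate_plans, key=lambda plan: imagine_rollout(model, state, plan))
    return best_plan[0]

# Compare a few short plans in imagination before acting in the real game.
model = EnvironmentModel()
plans = [["left", "push"], ["right", "push"], ["up", "left", "push"]]
print(choose_action(model, state={"position": (0, 0)}, candidate_plans=plans))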
Facebook recently shut down an artificial intelligence experiment after two A.I. systems learned to speak to each other in their own language.
“The two chatbots came to create their own changes to English that made it easier for them to work – but which remained mysterious to the humans that supposedly look after them,” reported The Independent.
“The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value,” the outlet explained. “But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans.”
Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington or like his page at Facebook.