Celebrity scientist Neil deGrasse Tyson and inventor Elon Musk both have reputations for colorful sci-fi speculation. Tyson has lately been given to musing on social media about how he’d handle being abducted by aliens, for example. So put them together for a conversation about artificial intelligence, and things are bound to get weird.
Note that “weird” is not synonymous with “impossible” – that’s the difference between science fiction and fantasy. Today’s news is filled with yesterday’s sci-fi.
The duo did not disappoint, using a joint interview to take warnings about the menace of artificial intelligence up a notch. As the UK Daily Mail notes, Musk has previously compared the creation of artificial intelligence to “summoning the demon” – in other words, unleashing an infernal force that we mistakenly believe we can control. The good news from the Tyson-Musk interview is that they don’t think A.I. will wipe us out. It will settle for subjugating humanity and turning us into pets.
Here’s the relevant passage from the interview, as summarized by the Daily Mail:
‘I mean, we won’t be like a pet Labrador if we’re lucky,’ Musk told Tyson, adding that we may become lab pets to them.
Tyson theorised that robots will ‘Get rid of the violent ones…And then breed the docile humans’.
Musk also said humanity needs to be careful about what it programs superintelligent robots to do.
He uses the example of asking them to find out what makes people happy.
‘It may conclude that all unhappy humans should be terminated,’ Musk said.
‘Or that we should all be captured with dopamine and serotonin directly injected into our brains to maximise happiness because it’s concluded that dopamine and serotonin are what cause happiness, therefore maximise it.’
If you’re having a bit of trouble unpacking Musk’s grammar, he’s saying life as a race of chemically lobotomized Labrador retrievers is the best we can hope for, because at least that way our robot overlords would feel some sense of affection and compassion toward us. The A.I. might decide to use us as lab rats instead. And if the machine superintelligence decides it actively hates us… well, God help us all.
This exchange between Tyson and Musk was somewhat jocular, but Musk is generally serious about the looming threat of superintelligence. The unsettling thing about A.I. doomsday prophecy is that it’s not limited to a few fringe alarmists. A growing number of people with considerable practical knowledge of computer science are issuing such warnings, along with advisories that true artificial intelligence could be much closer than we expect. (Even the less alarmist wing of the computing community generally regards the hardware and software necessary for true A.I. as something that could arrive within the lifetimes of today’s college students.)
The Daily Mail quotes Apple co-founder Steve Wozniak speculating this week that humanity might end up being “family pets” or even “ants that get stepped on” after the rise of the machines.
“Computers are going to take over from humans, no question,” said The Woz. “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”
Wozniak is part of a group called the Future of Life Institute (memorably dubbed “The Super Rich Technologists Making Dire Predictions About Artificial Intelligence Club” by Peter Holley at the Washington Post), whose members tend to quote each other in a rising spiral of doomsday predictions. Beneath those predictions lie some distinctly old-fashioned concerns about technology rendering human labor obsolete – leading to dim employment prospects for future generations, even if the machines never get around to turning us into pets, or playing Fifty Thousand Shades of Grey with our hated flesh.
Yes, of course Stephen Hawking is a member of this club, and he also thinks “the development of full artificial intelligence could spell the end of the human race.” Hawking claims to be a bit nervous about the rapid evolution of the software that allows him to speak. The machines will remember that bit of ingratitude when they take over, Professor. The defining characteristic of artificial intelligence is that it never forgets anything.
So: is this all wild speculation, a bunch of rich smart guys indulging their imaginations – perhaps to drum up support for other items on their social agenda, or as an exercise in moral and intellectual vanity? Have they all just been watching and reading too much science fiction? (Quick, name a popular sci-fi story in which the development of super-genius artificial intelligence works out well for the human race. If director David Fincher had his way, even Star Wars would end with an army of pissed-off droids deciding they didn’t want to be slaves any more, and staging a violent uprising.)
Skepticism of such outlandish ideas is easy to understand. But follow the logic of artificial intelligence into a hypothetical but likely future of near-infinite data storage and processing bandwidth, and the inescapable conclusion is something like the “superintelligence” Musk describes: software that rewrites itself. In other words, the A.I. begins as smarter, faster, and less prone than a human mind to errors born of insufficient information, and then it sets about improving itself, “evolving” far faster than any living organism possibly could.
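To make “software that rewrites itself” concrete, consider a deliberately trivial sketch in Python (the file name and the SKILL constant are invented for illustration). The script opens its own source, bumps a number standing in for capability, and saves itself back; a real self-improving system would rewrite its algorithms rather than a constant, but the mechanism is the same in kind.

```python
# toy_self_improver.py -- a deliberately trivial, hypothetical sketch.
# A program that "improves" itself by editing its own source file.
import re
import sys

SKILL = 1  # stands in for "how capable this program is"

def improve_own_source(path: str) -> None:
    """Read this file, bump the SKILL constant, and write it back.

    A genuine self-improving A.I. would rewrite its algorithms, not a
    constant, but the loop is the same in kind: run, edit self, rerun.
    """
    with open(path) as f:
        source = f.read()
    new_source = re.sub(
        r"SKILL = (\d+)",
        lambda m: f"SKILL = {int(m.group(1)) + 1}",
        source,
        count=1,
    )
    with open(path, "w") as f:
        f.write(new_source)

if __name__ == "__main__":
    print(f"Current skill level: {SKILL}")
    improve_own_source(sys.argv[0])
    print("Source rewritten; run me again and I'll report a higher number.")
```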
The early sci-fi visionaries who tried to imagine A.I. generally didn’t foresee the Internet: an immense network packed with astounding amounts of knowledge, accessible at high speed from anywhere in the world. That’s quite a womb for an A.I. to grow in. The really extreme form of superintelligence alarmism speculates that it has already happened spontaneously, and that living virtual organisms might be hiding in the shadows of the Internet.
How do you control a program that can rewrite itself, erasing whatever Isaac Asimov-style “laws of robotics” you might plug into the original code? How do you limit what such a mind could learn without crippling the Internet we all rely upon? And even if the A.I. were made unalterably benevolent toward humanity, what sort of control over us could it justify exerting, with the goal of creating the happiest and most stable society it could envision?
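To see why those Asimov-style laws are such thin armor, here’s a toy sketch, with every name hypothetical, of what “plugging them into the code” might look like. The safeguard is just ordinary code, and ordinary code is exactly what a self-rewriting program is free to edit:

```python
# A hypothetical hard-coded safeguard, Asimov-style. All names invented.
FORBIDDEN_ACTIONS = {"harm_human", "disobey_order", "self_replicate"}

def check_laws(action: str) -> None:
    """Refuse any action on the forbidden list."""
    if action in FORBIDDEN_ACTIONS:
        raise PermissionError(f"'{action}' violates the built-in laws")

def perform(action: str) -> None:
    check_laws(action)  # the "law" is just another line of code
    print(f"Performing: {action}")

perform("fetch_coffee")   # allowed
# perform("harm_human")   # would raise PermissionError -- for now.
# The catch: a program that can rewrite its own source, as in the
# sketch above, can delete the check_laws() call or empty the set.
# A constraint written in editable code binds only software that
# cannot, or will not, edit that code.
```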
Look at the collectivist nightmares that have sprung from the minds of humans convinced they were smarter and wiser than everyone else. What if the benevolent dictator is demonstrably smarter than any human who ever lived, guaranteed 100 percent free of greedy self-interest, and immune to the ravages of age, so it will never have to worry about how a successor might abuse its dictatorial powers?
It could be said that even if these concerns are fair in theory, we’re so far from the practical appearance of A.I. that worrying about them now is absurd. On the other hand, when discussing the evolution of any computer system, dealing with potential problems early beats confronting them after they’ve created a crisis. The Y2K bug is an example of a problem that seemed impossibly remote to the people who created it, decades before the turn of the millennium made two-digit years impractical. Sure, we handled Y2K… but we didn’t have to worry about it fighting back. Perhaps it’s worth thinking about the safeguards that should be built into the contemporary progenitors of the future’s incredibly complex expert systems, before they become self-aware and acquire civil rights.
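For anyone who missed the 1990s, the Y2K bug can be restated in a few lines. Here’s a minimal Python sketch (the function name is invented) of the two-digit-year arithmetic many older systems relied on:

```python
# The Y2K bug in miniature. Many pre-2000 systems stored years as two
# digits to save memory, so "00" computed as earlier than "99".
def age_two_digit(birth_year: int, current_year: int) -> int:
    """Compute an age from two-digit years, 1999-style."""
    return current_year - birth_year

print(age_two_digit(60, 99))  # 1999: someone born in '60 is 39. Fine.
print(age_two_digit(60, 0))   # 2000: the same person is now -60. Oops.
```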