Technological control of our world – how far could it go? And who’s at the top?
A very few people at the top are setting their own agendas for manipulating the world and for what we see of the world.
Many people are well aware that tech giants such as Facebook and Google have immense power over how we receive and share information online. There are, quite rightly, great and growing concerns about how these mammoth corporations access, manipulate, and profit from our data, and about how they shape and control what information comes our way.
A recent report from the UK’s House of Lords echoed the concerns of many industry insiders and commentators about the dominance of a few tech behemoths. Many worry about free speech online, the policing of “hateful” speech, the manipulation of search results and recommendations, and more. Techniques are being developed to disrupt “extremists” online, with disquietingly loose notions of who counts as an extremist. We must remain acutely aware of all this, and connect the dots, if we are to understand the fight for the information battle-space and the kind of threat we may be facing.
These giants do not simply control our world – they control how we even see what world is there to see. They can use powerful psychological nudges to manipulate people. They can mold how we relate to each other. Technology use can even change how the human brain develops.
New technology is a great hope for humanity. With it we can reach isolated communities and individuals, we can spread knowledge and hope, we can organize. But in turn, it can organize us. As artificial intelligence (AI) and machine learning develop and become embedded in technology, their powers will expand, and so, too, will the dangers.
You could say that the human race is currently the subject of an experiment where billions of rats are sitting in little labs pressing levers on all sorts of devices, all over the world. The docile rats have even bought their own equipment.
We are the rats.
But who are the experimenters?
There is currently a great push to examine the ethical issues involved in AI. This is to be welcomed, but there is a critical question about the power that many companies hold in this area, and how far such companies can be held accountable. The concern that a few tech giants dominate is heightened by the worry, felt by some insiders, that powerful people in some of these companies are pushing particular agendas of their own. Witness the recent allegations raised by James Damore about the culture at Google. If these allegations have any foundation, there may be serious implications for how this technology is being developed.
In 2014, Google bought the London-based AI company DeepMind for a reported $400 million. At the time of the sale, it was promised that an ethics board would provide oversight of the technology. So should we stop worrying?
DeepMind’s location means it’s well placed to cream off the best tech talent from Cambridge, London, and Oxford. It has three founders. Demis Hassabis has the usual geek-entrepreneur profile of spectacular and early success in the field, as does Shane Legg. The more perplexing figure is Mustafa Suleyman, the best friend of Hassabis’ younger brother. He dropped out of a degree in philosophy and theology at Oxford in his second year to help set up the Muslim Youth Helpline. Then, at age 22, he was appointed to give policy advice on human rights to the then-Mayor of London, Ken Livingstone. Why Ken couldn’t find someone who had actually finished a degree, worked in human rights law, or had more than 22 years’ life experience to offer, one can only guess. Suleyman was involved in the UK branch of Reos Partners, a mediation organization, and then helped set up DeepMind. It is said that he was an entrepreneur at school, reselling sweets to other kids from an early age. Perhaps that explains his acumen.
“He had always been the ‘well-spoken interlocutor’ at home, helping parse his father’s broken English. As DeepMind’s 30-year-old co-founder and head of applied AI, he’s responsible for integrating the company’s technology across Google’s products — and ensuring clear communication among the top engineers,” Wired writes. Among his tasks at DeepMind, Suleyman has overseen a team working on YouTube recommendation personalization – a powerful means of manipulating people, and one used by those officially tasked with online disruption.
And he’s in charge of ethics and safety. For years there were rumblings about the invisibility of any work on ethics at DeepMind, and although such work has now started, one can still wonder about Suleyman’s approach to overseeing it.
Earlier this year, Suleyman said that “there is an emerging consensus that it is the responsibility of those developing new technologies to help address the effects of inequality, injustice and bias.” But these are very broad-brush aims, and somewhat different from one another. Laws already require that certain forms of discrimination be avoided, for instance – but “addressing inequality” is rather vague. Everything hangs on how these aims are interpreted; accepting “responsibility” can sometimes amount to seizing the reins and pushing your own strategies to the front. Suleyman does go on to add that “progress in this area also requires the creation of new mechanisms for decision-making and voicing that include the public directly.” That could be good, although Suleyman is by no means the only person to say this – indeed, he is rather late to the table in making such a comment.
So how does Suleyman see his own involvement? As overseeing ethics so that the public will be directly involved? Troublingly, he does not seem to understand the difference between ethical oversight of an area and social activism. The latter pushes particular agendas for social change, and Suleyman is quite right to see that social activism coupled with far-reaching technological change makes a uniquely powerful mix. He has said: “As someone who started out as a social activist, I can see many examples of people working in tech who are genuinely driven to improve the world.” But it all depends on what you count as an improvement. And as Suleyman and countless others have pointed out, AI can be used to combat bias – or to entrench our own biases.
Indeed, DeepMind was involved in research in conjunction with the Royal Free Hospital, which the Information Commissioner found to be in breach of the Data Protection Act over its handing of data on 1.6 million patients to DeepMind. The report did not directly criticize DeepMind, because the Royal Free was responsible for the curation of its patients’ data. Nonetheless, DeepMind said: “In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health.” An artificial intelligence company having trouble understanding complex rules? We have it from their own mouths – a rush to action before judgment. This is what happens when ethics is driven by social activism.
DeepMind, shallow heart?
An appreciation of ethics requires many things, including the ability to think clearly, consistently, and without bias. Suleyman’s involvement in setting up the Muslim Youth Helpline is presented as part of the experience that seemingly qualifies him for the job. Naturally, a helpline for troubled young people can be of great benefit, and services tailored to particular client groups can be valuable. But the Muslim Youth Helpline appears to try to do more than one thing: to offer something like counseling, and to offer culturally and religiously appropriate responses. Its website states that it is a faith- and culturally-sensitive organization, and that although it does not offer religious advice, “as a faith and culturally sensitive service our volunteers are trained to use hadith and Quranic ayah to give words of comfort where appropriate to the client.”
But it’s easy to find many hadith and passages from the Quran that would be very far from comforting for young people with troubles relating to sex, drugs, sexual identity, gender identity, worries about their beliefs, and even their identity as Muslims. How are the comforting hadith and ayah chosen? This suggests treading a fine line between drawing upon Islamic beliefs and supporting distressed young people. One must suspect a certain cognitive dissonance, and perhaps the same kind of doublethink that blurs the distinction between ethics and activism.
Put some of this together. People working in AI, and in computing technology more generally, can influence what we see online. They can show us things, and they can distract us. They can block information, and they can silence voices. They can develop algorithms that contain bias, or that eliminate it – depending, of course, on what is counted as “bias.” They can nudge us to behave in various ways. They can analyze data that reveals a staggering amount about us. They seem to be claiming the ability to decide who the good guys are, who the bad guys are, and which voices are dissent that needs to be crushed. They may be working with governments, as well as in social media companies. And much of this is being carried out within large, extremely rich, extremely powerful corporations, where a few people at the top are setting their own agendas for manipulating the world and for what we see of the world.
Pamela Geller is the President of the American Freedom Defense Initiative (AFDI), publisher of The Geller Report and author of the bestselling book, FATWA: Hunted in America, as well as The Post-American Presidency: The Obama Administration’s War on America and Stop the Islamization of America: A Practical Guide to the Resistance. Follow her on Twitter or Facebook.