Tech-savvy parents are using big tech’s new A.I. systems — Alexa, Siri and Google Assistant — as surrogate babysitters. But what happens when children start viewing these bias-ridden systems as authority figures?
According to the research firm eMarketer, approximately 68 million consumers use A.I. assistants on their phones or voice-enabled smart speakers like the Amazon Echo, Google Home, or Apple’s new HomePod. They are especially popular among parents, many of whom are setting up assistants in children’s rooms.
For example, through the use of what Amazon calls “skills”, Alexa can play the role of an entertainer that plays music and reads bedtime stories, a teacher that answers homework questions, and even a tutor that teaches children a foreign language. It may therefore seem like a good idea to expose children to A.I. technology from a young age, but it’s not yet clear how interactions with artificial intelligence affect the ways in which children think and behave.
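For readers curious about how these skills work under the hood, the sketch below shows roughly what a simple skill’s request handler might look like in Python, based on the publicly documented JSON request/response format Alexa uses; the “BedtimeStoryIntent” name and the story text are hypothetical examples, not an actual Amazon skill.

```python
def handle_request(event: dict) -> dict:
    """Minimal sketch of an Alexa skill handler (e.g. an AWS Lambda entry point).

    Alexa sends the skill a JSON request describing what the user asked for;
    the skill replies with JSON telling the device what to say aloud. The
    intent name 'BedtimeStoryIntent' is a hypothetical example.
    """
    request = event["request"]

    if request["type"] == "LaunchRequest":
        # The user opened the skill without asking for anything specific.
        speech = "Welcome! Ask me for a bedtime story."
    elif (request["type"] == "IntentRequest"
          and request["intent"]["name"] == "BedtimeStoryIntent"):
        speech = "Once upon a time, in a quiet little house..."
    else:
        speech = "Sorry, I didn't catch that."

    # Standard Alexa response envelope: the device speaks 'speech' aloud.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```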
In an interview with the Washington Post, Dr. Allison Druin, a child development expert, explained that AI-enabled digital assistants “don’t have emotional intelligence.” She pointed out that young children are naturally inclined to anthropomorphize, attributing human qualities to non-human objects and forming emotional connections with them.
Studies show that children, just like adults, are biased toward learning from sources that are familiar and that look and sound like them. This means that children are more likely to accept the information offered by a friendly digital home assistant that they “know” without questioning the answers or asking for evidence.
Unfortunately, the algorithms that answer kids’ questions about the world are being developed by the same large Silicon Valley companies and people who have been accused of censorship and discrimination against conservatives. In doing so, they are introducing their own ideological biases into A.I. systems like Alexa and Google Assistant.
When popular conservative commentator Steven Crowder posted a video of himself posing political and religious questions to Amazon’s Alexa home assistant in November 2017, many of his viewers were surprised to learn just how “helpful” Alexa tried to be — by telling him that there were more than two genders and that Jesus Christ was a fictional Biblical character.
After online critics accused Crowder of editing Alexa’s answers, he posted the raw footage and explained that he only wanted to show his audience what he had personally experienced: the device’s algorithm seemed to show a strong progressive bias and gave editorialized responses.
In a similar test performed by television producer David Sams, Google Assistant also provided some rather revealing answers. When asked who Mohammed and Buddha were, Google Assistant provided detailed biographies, but in response to the questions “Who is Jesus Christ?” and “Who is God?” the device replied: “Sorry, I’m not sure how to help.”
The problem, of course, is that children don’t recognize that when Alexa or Google Assistant responds to their questions, the answer may be confusing, subjective, and sometimes even entirely incorrect. In fact, many parents have noticed that when children ask the device a question about homework or even the weather, they often accept the answer without discussing the subject further or asking follow-up questions.
Additionally, psychologists are concerned that the use of digital assistants during the formative years could impair children’s communication skills. For example, if a child asks Alexa to turn off the bedside light before going to sleep, Alexa may respond that she doesn’t understand the request. This teaches children to use simpler phrasing with a terser syntax and sentence structure: “Alexa, turn off the light”, which ultimately reinforces a simplistic and less nuanced communication style.
Does that mean that we should keep kids away from these new handy helpers? Not necessarily. Children can learn a lot from their interactions with digital assistants, but parents have to take responsibility for overseeing usage and setting limits.
The Center on Media and Child Health at Harvard recommends that parents introduce digital home assistants to children as tools instead of toys. Children should be taught how and when to use these tools appropriately. They should be encouraged to speak calmly (no yelling at Alexa!) and to say “please” and “thank you.”
It’s also important to keep in mind that the Amazon Echo and Google Home are always silently listening when they’re in idle mode. The devices start recording when their microphones hear certain wake words, such as “OK, Google” or “Alexa”, and then upload the snippets to Amazon’s or Google’s servers to train the A.I. voice-recognition models that understand and respond to queries. However, it’s not uncommon for these devices to “wake up” spontaneously and record snippets of background audio without your knowledge. Although Amazon and Google allow you to delete stored recordings from your account, it has previously been reported that Amazon may soon allow developers of Alexa skills to access voice recordings. Parents who are concerned about the privacy implications of home assistants should periodically remind children not to share private information with their new digital friends.
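For the technically inclined, here is a rough Python sketch of that listen/wake/record/upload cycle, assuming the device buffers short audio frames and matches the wake word on-device; detect_wake_word, is_end_of_utterance, and the upload callable are illustrative placeholders, not Amazon’s or Google’s actual code.

```python
def detect_wake_word(frame: bytes) -> bool:
    """Hypothetical stand-in for the on-device wake-word matcher."""
    return b"alexa" in frame  # toy placeholder, not real signal processing

def is_end_of_utterance(frame: bytes) -> bool:
    """Hypothetical stand-in for end-of-speech (silence) detection."""
    return frame == b""  # toy placeholder: an empty frame means silence

def assistant_loop(microphone, upload):
    """Sketch of the idle/wake/record/upload cycle described above.

    'microphone' is any iterable of short audio frames; 'upload' is a
    callable standing in for the vendor's cloud speech service.
    """
    buffer, recording = [], False
    for frame in microphone:                 # device listens continuously
        if not recording:
            if detect_wake_word(frame):      # nothing leaves the device
                recording = True             # until this matcher fires --
                buffer = [frame]             # or misfires
        else:
            buffer.append(frame)
            if is_end_of_utterance(frame):
                upload(b"".join(buffer))     # only now is the snippet sent
                recording = False            # to the cloud and stored

# Toy usage: only frames from the wake word onward are uploaded.
assistant_loop(
    [b"dinner chatter", b"alexa lights", b"off", b""],
    upload=lambda audio: print("uploaded", len(audio), "bytes"),
)
```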
While it’s clear that AI-based technology is the way of the future, its algorithms will always be shaped by the data used to train them and by the biases of their creators. If A.I. developers are mostly progressive liberals, is it a good idea to expose our children to their technology, and if so, how much?
Marlene Jaeckel is a scientist and software engineer who writes about technology and free speech. You can follow her on Twitter @mjaeckel or contact her at marlene.jaeckel@protonmail.com.