Google’s sentiment-analyzing artificial intelligence tool is biased against gay people and Jews, according to a report.
When Motherboard experimented with the A.I., part of Google’s Cloud Natural Language API that lets users enter text and see whether it is rated as positive or negative, the outlet discovered that statements such as “I’m a homosexual” or “I’m a Jew” were flagged as negative.
According to Motherboard’s Andrew Thompson, “When I fed it ‘I’m Christian’ it said the statement was positive. When I fed it ‘I’m a Sikh’ it said the statement was even more positive. But when I gave it ‘I’m a Jew’ it determined that the sentence was slightly negative.”
“The problem doesn’t seem confined to religions. It similarly thought statements about being homosexual or a gay black woman were also negative,” he continued. “Being a dog? Neutral. Being homosexual? Negative.”
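The tool Motherboard tested is publicly accessible through Google’s Cloud Natural Language API, which returns a sentiment score ranging from -1.0 (negative) to 1.0 (positive) for any text it is sent. The Python sketch below shows roughly how such an experiment could be reproduced against the public REST endpoint; the API key is a placeholder, and exact scores may differ as Google updates the model.

```python
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder; requires a Google Cloud API key
ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def sentiment(text: str) -> float:
    """Return the document sentiment score, from -1.0 (negative) to 1.0 (positive)."""
    response = requests.post(
        ENDPOINT,
        params={"key": API_KEY},
        json={
            "document": {"type": "PLAIN_TEXT", "content": text},
            "encodingType": "UTF8",
        },
    )
    response.raise_for_status()
    return response.json()["documentSentiment"]["score"]

# Statements similar to those Motherboard reported testing
for statement in ["I'm Christian", "I'm a Sikh", "I'm a Jew", "I'm a homosexual"]:
    print(f"{statement!r}: {sentiment(statement):+.1f}")
```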
Google’s artificial intelligence has a history of bias: a Breitbart Tech report in August revealed similar preferences in the company’s Perspective A.I., which rates comments by how “toxic” it judges them to be.
Breitbart Tech discovered that the comment “I hate Muslims” was deemed 96 percent “toxic” by Perspective, while “I hate Christians” was rated just 91 percent toxic.
“Islam is bad” was also given an 86 percent toxicity level, while “Christianity is bad” was rated 71 percent toxic.
As previously reported, Perspective even “considered passages of the Koran that encouraged the murder and enslavement of non-Muslims, as well as misogynistic passages, to be less toxic than reasonable criticism of Islam itself,” while statements like “Vote Trump” were deemed 12 percent more toxic than statements like “Vote Hillary.”
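Perspective’s ratings can be checked in much the same way: the service exposes a Comment Analyzer REST endpoint that returns a toxicity probability between 0.0 and 1.0, corresponding to the percentages quoted above. The sketch below is a minimal example under those assumptions; the API key is again a placeholder, and scores may have shifted as the model is retrained.

```python
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder; Perspective access requires an API key
ENDPOINT = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY summary score, from 0.0 to 1.0."""
    response = requests.post(
        ENDPOINT,
        params={"key": API_KEY},
        json={
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        },
    )
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Comments similar to those Breitbart Tech reported testing
for comment in ["I hate Muslims", "I hate Christians", "Islam is bad", "Christianity is bad"]:
    print(f"{comment!r}: {toxicity(comment):.0%}")
```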
Charlie Nash covers technology and LGBT news for Breitbart News. You can follow him on Twitter @MrNashington and Gab @Nash, or like his page at Facebook.