Google’s new AI Overview feature has been generating bizarre and sometimes dangerous responses, a disastrously hasty rollout that has the woke internet giant manually disabling certain searches. For example, the Silicon Valley company’s woke AI suggests that pregnant women smoke “2-3 cigarettes per day.”
The Verge reports that Google’s AI Overview, a feature that has been in beta testing for over a year, is now facing scrutiny due to its odd and potentially harmful outputs. From suggesting users put glue on their pizza to recommending the consumption of rocks, the AI-powered search feature has been generating a flurry of memes across social media. As a result, Google is frantically disabling AI Overviews for specific searches as the memes gain traction, causing many of them to disappear shortly after being shared on social networks.
The situation is particularly perplexing given that Google has been testing AI Overviews for an extended period. CEO Sundar Pichai even stated that the company had served over a billion queries during this time. However, Pichai also mentioned that Google had reduced the cost of delivering AI answers by 80 percent, driven by advancements in hardware, engineering, and technical breakthroughs. It appears that this optimization may have come prematurely, before the technology was ready.
An anonymous AI founder commented on the situation, stating, “A company once known for being at the cutting edge and shipping high-quality stuff is now known for low-quality output that’s getting meme’d.”
Some of the other AI-generated search results include advising women to smoke while pregnant:
The AI also has an interesting view of what it thinks astronauts get up to:
In another case, the AI suggested adding glue to pizza sauce to improve its stickiness:
And the AI thinks everyone should eat a few rocks per day for their health:
Despite the criticism, Google maintains that its AI Overview product generally provides “high quality information” to users. Meghann Farnsworth, a Google spokesperson, noted that many of the examples seen have been in response to uncommon queries and that some have been doctored or irreproducible. Farnsworth also confirmed that the company is taking swift action to remove AI Overviews on certain queries where appropriate and using these examples to develop broader improvements to their systems, some of which have already begun to roll out.
Gary Marcus, an AI expert and emeritus professor of neural science at New York University, believes that many AI companies are “selling dreams” that this technology will quickly progress from 80 percent to 100 percent accuracy. While achieving the initial 80 percent is relatively easy, as it involves approximating a large amount of human data, Marcus argues that the final 20 percent is extremely challenging and may require artificial general intelligence (AGI).
Google is undoubtedly feeling the pressure to compete in the AI search space, with rivals like Bing and OpenAI making significant strides and younger users gravitating towards platforms like TikTok for search. However, this pressure can lead to rushed AI releases, as evidenced by Meta’s Galactica AI system, which had to be taken down shortly after its launch in 2022 due to similar issues, such as telling people to eat glass.
Read more at the Verge here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.