Google's AI tells users to add glue to their pizza, eat rocks and make chlorine gas
Social media has been flooded with bizarre and dangerous advice that appears to have been generated by Google's new AI Overviews feature. The company continues to defend the 'high quality' search tool.
Google has updated its search engine with an artificial intelligence (AI) tool — but the new feature has reportedly told users to eat rocks, add glue to their pizzas and clean their washing machines with chlorine gas, according to various social media and news reports.
In a particularly egregious example, the AI appeared to suggest jumping off the Golden Gate Bridge when a user searched "I'm feeling depressed."
The experimental "AI Overviews" tool scours the web to summarize search results using the Gemini AI model. The feature has been rolled out to some users in the U.S. ahead of a worldwide release planned for later this year, Google announced May 14 at its I/O developer conference.
But the tool has already caused widespread dismay across social media, with users claiming that on some occasions AI Overviews generated summaries using articles from the satirical website The Onion and comedic Reddit posts as its sources.
"You can also add about ⅛ cup of non-toxic glue to the sauce to give it more tackiness," AI Overviews said in response to one query about pizza, according to a screenshot posted on X. Tracing the answer back, it appears to be based on a decade-old joke comment made on Reddit.
Other erroneous claims include that Barack Obama is a Muslim, that Founding Father John Adams graduated from the University of Wisconsin 21 times, that a dog played in the NBA, NHL and NFL, and that users should eat a rock a day to aid their digestion.
Live Science could not independently verify the posts. In response to questions about how widespread the erroneous results were, Google representatives said in a statement that the examples seen were "generally very uncommon queries, and aren't representative of most people's experiences".
"The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web," the statement said. "We conducted extensive testing before launching this new experience to ensure AI overviews meet our high bar for quality. Where there have been violations of our policies, we've taken action — and we're also using these isolated examples as we continue to refine our systems overall."
This is far from the first time that generative AI models have been spotted making things up — a phenomenon known as "hallucinations." In one notable example, ChatGPT fabricated a sexual harassment scandal and named a real law professor as the perpetrator, citing fictitious newspaper reports as evidence.
Ben Turner is a U.K.-based staff writer at Live Science. He covers physics and astronomy, among other topics like tech and climate change. He graduated from University College London with a degree in particle physics before training as a journalist. When he's not writing, Ben enjoys reading literature, playing the guitar and embarrassing himself with chess.