Google announced on Thursday that it would refine and retool its AI-generated summaries of search results, publishing a blogpost explaining why the feature had been returning bizarre and inaccurate answers, including telling people to eat rocks or add glue to pizza sauce. The company will reduce the scope of searches that return an AI-written summary.
Google has added several restrictions on the types of searches that will generate AI Overview results and has “limited the inclusion of satire and humor content”, the company’s head of search, Liz Reid, said. The company is also taking action against what it described as a small number of AI Overviews that violate its content policies, which it said occurred in fewer than 1 in 7m unique search queries where the feature appeared.
The AI Overviews feature, which Google released in the US this month, quickly produced viral examples of the tool misinterpreting information and appearing to use satirical sources like the Onion or joke Reddit posts to generate answers. Google’s AI failures then became a meme, with fake screenshots of absurd and dark answers circulating widely on social media platforms alongside the tool’s real failures.
Google touted AI Overviews as one of the pillars of its broader push to incorporate generative artificial intelligence into its core services, but the rollout once again left the company facing public embarrassment over a newly released AI product. Google faced backlash and ridicule earlier this year after its AI image generation tool erroneously inserted people of color into ahistorical situations, including creating images of Black people as second world war German soldiers.
Google’s blogpost gave a brief recap of what had gone wrong with AI Overviews and defended the feature, with Reid claiming that many of its genuine falsehoods were the result of gaps in information caused by rare or unusual searches. Reid also claimed that there had been intentional attempts to game the feature into producing inaccurate answers.
“There’s nothing quite like having millions of people using the feature with many novel searches,” Reid wrote in the post. “We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results.”
Many of the viral posts were indeed from bizarre searches such as “how many rocks should I eat” – which returned a result based on an Onion article titled Geologists Recommend Eating at Least One Small Rock Per Day – but others appeared to be from more reasonable queries. One AI expert shared an image of an AI Overview claiming that Barack Obama had been the first Muslim US president, a common rightwing conspiracy theory.
“From looking at examples from the past couple of weeks, we were able to determine patterns where we didn’t get it right, and we made more than a dozen technical improvements to our systems,” Reid said.
Although Google’s blogpost frames the problems with AI Overviews as mostly a series of edge cases, several artificial intelligence experts have argued that they point to wider issues with AI’s ability to gauge factual accuracy and the risks of automating access to information.
Google claimed in its post that “user feedback shows” people are more satisfied with their search results due to AI Overviews, but the broader implications of its AI tools and changes to its search functions are still unclear. Website owners are concerned that AI summaries will be disastrous for online media as they sap traffic and advertising revenue away from sites, while some researchers worry about Google consolidating even more control over what the public sees on the internet.