Google AI Overview: pythons are mammals and other blunders

May 25, 2024

  • Google's AI Overview feature makes awkward blunders and factual errors
  • The errors may stem from AI hallucinations or from unreliable sources such as satire and joke posts
  • Google has acknowledged the issue and stated they're investigating it

Google’s newly launched AI Overview feature, which aims to provide users with AI-generated summaries of search results, has been criticized for delivering misleading, inaccurate, and sometimes downright bizarre answers. 

The feature, now rolling out to billions after Google doubled down on it at the recent Google I/O developer conference, has become the subject of widespread mockery and concern on social media as users exposed examples of the AI’s blunders.

It was only a matter of time. Human curiosity gets the better of AI guardrails one way or another. 

Journalists and everyday users alike have taken to X and other platforms to highlight instances where the AI Overview feature has cited dubious sources, such as satirical articles from The Onion or joke posts on Reddit, as if they were factual. 

In one of the more alarming cases, computer scientist Melanie Mitchell demonstrated an example of the feature displaying a conspiracy theory suggesting that former President Barack Obama is Muslim, apparently as a result of the AI misinterpreting information from an Oxford University Press research platform.

Other examples of the AI’s errors include plagiarizing text from blogs without removing personal references to the authors’ children, claiming that no African country’s name begins with the letter “K” (overlooking Kenya), and even suggesting that pythons are mammals. 

Some of these inaccurate results, such as the Obama conspiracy theory or the suggestion to add glue to pizza sauce to keep the cheese from sliding off, no longer display an AI summary and instead show articles referencing the AI’s factual woes.

However, people are now wondering whether AI Overview can ever serve its purpose correctly. 

Google has already acknowledged the issue, with a company spokesperson telling The Verge that the mistakes appeared on “generally very uncommon queries and aren’t representative of most people’s experiences.” 

However, the exact cause of the problem remains unclear. It could be due to the AI’s tendency to “hallucinate,” or it could stem from the sources Google uses to generate summaries, such as satirical articles or troll posts on social media.

In an interview with The Verge, Google CEO Sundar Pichai addressed the issue of AI hallucinations, acknowledging that they are an “unsolved problem” but stopping short of providing a timeline for a solution. 

This isn’t the first time Google has faced criticism over its AI products; earlier this year, the company’s Gemini AI, a competitor to OpenAI’s ChatGPT and DALL-E, came under fire for generating historically inaccurate images, including racially diverse Nazi officers, white women presidents, and a female pope. 

In response, Google later publicly apologized and temporarily suspended Gemini’s ability to generate images of people.

AI Overview has also drawn criticism from website owners and marketers, as it threatens to shift users away from clicking through to traditional search results toward simply relying on AI-generated snippets.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
