Google Implements Fixes to AI-Generated Search Overviews Following Viral Outrage
Google has made more than a dozen technical changes to its AI systems after its search engine was criticised for returning erroneous information. Social media users posted screenshots of bizarre and false answers produced by Google's AI summaries. Google defended the feature but acknowledged that some overviews were odd, inaccurate, or unhelpful.
In May, the company overhauled its search engine to include AI-generated summaries alongside traditional search results. Almost immediately, however, social media users began sharing images of strange and false responses.
While Google defended its AI overviews, saying they had been extensively tested and were generally accurate, Liz Reid, the head of Google's search division, acknowledged in a blog post that some of the AI summaries were odd, inaccurate, or unhelpful. Some of the examples circulating on social media were not merely absurd but potentially dangerous or damaging. Fake screenshots were also created and widely distributed, escalating the controversy.
One query that highlighted the problem came from The Associated Press, which asked Google which wild mushrooms were safe to eat. Google produced a lengthy AI-generated summary that, while technically correct, omitted critical information whose absence could have been harmful or even deadly. Mary Catherine Aime, a professor of mycology and botany at Purdue University, analysed Google's response and noted that while some of the information about puffball mushrooms was accurate, the summary failed to mention that potentially lethal puffball mimics also have solid white flesh.
Another widely cited example involved an AI researcher asking Google how many Muslims have served as president of the United States. Google responded confidently with a long-debunked conspiracy theory, stating that the US has had one Muslim president, Barack Hussein Obama. Google quickly corrected the error, which violated the company's content policies.
Google has rolled out a number of updates in response to these problems. It has improved the detection of nonsensical queries that should not trigger an AI summary, such as "How many rocks should I eat?". Its AI systems have also been adjusted to limit the use of user-generated content, such as social media posts, which may contain misleading information. In addition, Google has tightened the triggering restrictions to improve the quality of responses to certain queries, particularly those relating to health.
Nonetheless, concerns remain about the reliability and accuracy of AI-generated answers. Critics argue that relying on AI-generated summaries could perpetuate bias and misinformation, potentially endangering people seeking help in an emergency. The large language models utilised in these AI systems have a tendency to make things up, a phenomenon known as hallucination.
While Google says its AI overviews are closely integrated with its traditional search engine and draw on top web results, computer scientist Chirag Shah warns that even if the AI feature is not technically fabricating information, it can still surface false information, whether AI-generated or human-made, in its summaries. Shah emphasises that information retrieval is Google's core business, and that its growing reliance on AI language models is concerning.
Source: AP NEWS