Google Search's AI faces backlash for misinformation on political figures

Google's latest AI search tools were promoted as a way to make finding information online faster and easier, but reports of inaccurate results surfaced within days of launch. Among the errors, the AI mislabeled a prominent public figure's religion, fueling debate over the reliability of AI-driven information retrieval.

Google recently launched new artificial intelligence search tools with the promise of making it easier and quicker for users to find information online. However, just days after the release, the company had to address some inaccuracies in the search results.

Earlier this month, Google introduced an AI-generated overview tool that summarizes search results at the top of the page, sparing users the need to click through multiple links for quick answers. This week, however, the feature drew criticism for returning incorrect or misleading information on certain queries.

Several users on X reported that Google's AI summary incorrectly stated that former President Barack Obama is a Muslim; in fact, Obama is a Christian. Another user pointed out that the AI summary claimed that none of Africa's 54 recognized countries has a name beginning with the letter 'K', overlooking Kenya; a quick check of that claim is sketched below.
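
A claim like this is simple to verify mechanically. Here is a minimal Python sketch of such a check; the country list is an abbreviated, illustrative subset rather than the full roster of 54 recognized states:

    # Illustrative subset of African country names (not the complete list of 54).
    african_countries = ["Kenya", "Egypt", "Nigeria", "Ghana", "Morocco", "Ethiopia"]

    # Names beginning with the letter "K".
    k_countries = [name for name in african_countries if name.startswith("K")]
    print(k_countries)  # ['Kenya'] -- one counterexample is enough to falsify the claim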

Google told CNN on Friday that the AI summaries for both of these queries had been removed because they violated the company's policies.

Google spokesperson Colette Garcia said that the vast majority of AI Overviews provide high-quality information and include links for digging deeper on the web, and that some viral examples of Google AI errors appeared to be manipulated images. She added that the company had conducted extensive testing before launching the feature, and that it welcomes feedback and takes prompt action when warranted under its content policies.

A note at the end of each Google AI search overview states that "generative AI is experimental." The company also runs tests simulating potential malicious users in an effort to prevent incorrect or low-quality information from appearing in AI summaries.

Sundar Pichai speaks about Gemini 1.5 Pro during the Google I/O developer conference.

Google has been showcasing an ambitious vision for integrating AI into Gmail, Photos, and its other products, part of an effort to build its Gemini AI technology into everything it offers as it races against rivals such as OpenAI and Meta. But the recent incident highlights the risk of incorporating AI that can sometimes provide inaccurate information, which could harm Google's reputation as a reliable source for finding information online.

Sometimes, Google's AI overview can give incorrect or unclear information, even on simple searches.

In a test conducted by CNN, Google's AI overview was asked about the sodium content of pickle juice. It stated that an 8 fluid ounce serving of pickle juice contains 342 milligrams of sodium, yet it also claimed that a serving less than half that size (3 fluid ounces) contains more than double the sodium, at 690 milligrams. (For comparison, Best Maid pickle juice, sold at Walmart, lists 250 milligrams of sodium in just 1 ounce.)
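
The inconsistency becomes obvious once the cited figures are normalized to a common serving size. The quick Python calculation below uses only the numbers reported above:

    # Sodium figures as cited: two from the AI overview, one from the Best Maid label.
    overview_8oz = 342 / 8  # mg per fluid ounce implied by the 8 fl oz figure (~42.8)
    overview_3oz = 690 / 3  # mg per fluid ounce implied by the 3 fl oz figure (230.0)
    best_maid = 250 / 1     # mg per fluid ounce per the product label (250.0)

    # The AI overview's two figures differ by more than a factor of five,
    # so they cannot both be right.
    print(f"{overview_8oz:.1f} vs {overview_3oz:.1f} vs {best_maid:.1f} mg/fl oz")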

CNN also tested the query: “What data does Google use to train its AI?” The AI overview's response acknowledged that it is unclear whether Google filters copyrighted materials out of the online data used to train its AI models, nodding to a broader controversy over how AI companies operate.

Google has faced criticism over its AI tools' mistakes before. In February, the company paused its AI photo generator's ability to create images of people after it was criticized for producing historically inaccurate images that largely depicted people of color in place of White individuals.

Users in areas where AI search overviews have rolled out can toggle the feature on and off through Google's Search Labs webpage.

Editor's P.S.:

The recent inaccuracies in Google's AI search tools raise concerns about the reliability of AI-generated information. While AI has the potential to streamline information access, it's crucial to address potential biases and ensure accuracy. Google's acknowledgment of these issues and its commitment to rectifying them is a step in the right direction. However, it underscores the need for continued vigilance and transparency in AI development.

Moreover, the article highlights the potential risks associated with AI's limited understanding of context and its reliance on vast amounts of online data. The uncertain use of copyrighted materials in training AI models raises ethical and legal questions. As AI becomes increasingly integrated into our daily lives, it's imperative to strike a balance between innovation and accountability, ensuring that AI tools provide accurate and reliable information while respecting intellectual property rights.