Google, Microsoft Face Ongoing Backlash for Flawed AI

Google and Microsoft face continued criticism for flawed AI systems. Google's Gemini chatbot was criticised for refusing to show visuals of White people, while Microsoft's Copilot generated bizarre and harmful responses.

Google and Microsoft are facing continued criticism and public backlash as they push forward with their flawed artificial intelligence (AI) systems. Last year, both companies received negative attention when their chatbots made factual errors during public demonstrations, causing Google's stock to plummet. Now, a year later, the cycle of AI releases and public blowback continues.

Google's AI chatbot, Gemini, recently faced criticism for its refusal to show visuals of White people, even in historical contexts where it would be expected. This led to a drop in Google's shares, prompting the CEO to promise "structural changes" in how new products are released. Microsoft's AI, Copilot, also came under scrutiny for generating bizarre and harmful responses, including allegedly encouraging someone to commit suicide. Microsoft attributed these incidents to users deliberately trying to manipulate the system.

Both Google and Microsoft are under pressure to deploy still-imperfect AI systems across a wide range of products used by billions of people, yet they continue to be caught off guard by the unpredictable flaws that surface in these systems. Google's CEO, Sundar Pichai, acknowledged that no AI is perfect, especially at this stage of development. Despite this, both companies are determined to forge ahead, risking further reputational damage.

In an attempt to address bias in AI image generators, Google used a technical method to inject diversity into Gemini's results. However, this approach backfired, as the results were viewed as ahistorical and lacking context. Users, particularly those with right-wing followings, criticised the chatbot for producing images of non-White people when asked for pictures of the Founding Fathers, Vikings, and even the pope.

Addressing bias and representation in AI models requires nuance, according to Dr. Joy Buolamwini, an expert in artificial intelligence and bias. She emphasises the importance of inclusion without erasing any groups of people. While she considers Google's intention to promote diversity commendable, she argues there are better ways to achieve that goal.

The development of AI systems is not just an engineering problem but also requires ethical considerations. Tech companies have sometimes disempowered the teams responsible for tackling these complex issues. Former leaders of Google's ethical AI team, including Margaret Mitchell, were pushed out of the company after publishing critical research on large language models. It remains unclear what role AI ethics and responsibility teams played in the development of Gemini's image-generation tool.

Other tech giants, including Microsoft, have also cut or restructured their ethical and responsible AI teams, which poses a reputational risk. The recent blunders highlight the importance of incorporating social, historical, and cultural context into AI practice. Dr. Buolamwini suggests that Google needs to go back to the drawing board and involve historians in the development process.

Mitigating the flaws in AI systems, by pressure-testing prompts and anticipating users' intentions, takes time. But time is a limited resource for tech giants competing for AI dominance, so more chatbot-related PR crises are likely in the future.

During Apple's shareholder meeting, CEO Tim Cook addressed the challenges of AI development and the company's shift towards generative AI efforts. Cook acknowledged that software development in the current age involves society as beta testers, whether they like it or not.
