Google Apologizes for AI Image-Generator's Diversity Issues

Google has issued an apology for the flawed launch of its new artificial intelligence (AI) image-generator, acknowledging that the tool sometimes "overcompensated" in its pursuit of diversity, even when it made no sense to do so. The company admitted that the images generated by the tool were inaccurate and, in some cases, offensive. The apology came after Google temporarily stopped its Gemini chatbot from generating any images of people in response to claims of anti-white bias. The controversy arose when users noticed the tool producing racially diverse figures in historical settings where they would not typically be found.

In a blog post, Prabhakar Raghavan, a senior vice president at Google, stated, "It's clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We're grateful for users' feedback and are sorry the feature didn't work well." While Raghavan did not cite specific examples, social media users highlighted images depicting a Black woman as a U.S. founding father and Black and Asian individuals as Nazi-era German soldiers. The authenticity of the prompts used to generate these images could not be independently verified.

The new image-generating feature was added to Google's Gemini chatbot, formerly known as Bard, about three weeks ago. It was built upon an earlier Google research experiment called Imagen 2. Google had previously acknowledged the challenges associated with such tools: they can be used for harassment and to spread misinformation, and they raise concerns about social and cultural exclusion and bias.

The pressure to release generative AI products publicly has increased due to the competitive race among tech companies, sparked by the emergence of OpenAI's chatbot ChatGPT. However, the issues with Gemini are not unique, as Microsoft had to make adjustments to its Designer tool after it was used to create deepfake pornographic images of celebrities. Studies have also shown that AI image-generators can amplify racial and gender stereotypes present in their training data.

Raghavan emphasised that Google aimed to ensure the feature in Gemini avoided the pitfalls of previous image generation technologies, such as creating violent or sexually explicit images or depictions of real people. He stated, "And because our users come from all over the world, we want it to work well for everyone." However, he acknowledged that the tool sometimes overcompensated or erred on the side of caution, refusing to answer certain prompts or misinterpreting innocuous ones as sensitive.

The outrage surrounding Gemini's outputs gained traction on social media, particularly on X (formerly Twitter), and was amplified by Elon Musk, the CEO of Tesla and owner of X. Musk criticised Google for what he described as "insane racist, anti-civilizational programming." Raghavan said extensive testing would be conducted before the chatbot's ability to generate images of people is enabled again.

Sourojit Ghosh, a researcher at the University of Washington who has studied bias in AI image-generators, expressed disappointment with Raghavan's message. Ghosh stated that for a company like Google, which has perfected search algorithms and possesses vast amounts of data, generating accurate and non-offensive results should be a basic expectation.

  • Google apologizes for faulty AI image-generator and acknowledges overcompensation for diversity

  • Gemini chatbot temporarily halted due to claims of anti-white bias

  • Images generated by the tool found to be inaccurate and offensive

Source: AP NEWS
