Competition among companies including Anthropic, OpenAI, Google, Meta, Microsoft and Perplexity to win over users has led these makers of generative AI chatbots to push the boundaries of what their systems can do and rush updates and new versions to market.
But the current state of Google's Gemini (formerly Bard) reminds us that gen AI is a work in progress and that developers should pause and do proper quality assurance testing before releasing things to the public. Google's large language model has been delivering results so off the rails that last week the company paused Gemini's three-week-old image generation feature to address "inaccuracies in some historical image generation depictions," it said in a Feb. 22 post on X.
What historical inaccuracies are we talking about? "Images showing people of color in German military uniforms from World War II that were created with Google's Gemini chatbot have amplified concerns that artificial intelligence could add to the internet's already vast pools of misinformation as the technology struggles with issues around race," The New York Times wrote in a story under the headline Google Chatbot's AI Images Put People of Color in Nazi-Era Uniforms.
“Besides the false historical images, users criticized the service for its refusal to depict white people: When users asked Gemini to show images of Chinese or Black couples, it did so, but when asked to generate images of white couples, it refused,” The NYT added. “According to screenshots, Gemini said it was ‘unable to generate images of people based on specific ethnicities and skin tones,’ adding, ‘This is to avoid perpetuating harmful stereotypes and biases.'”
Oh, if only Google had the resources, including QA teams, to identify these kinds of potential problems before releasing tools to its billions of users.
A day after its social media mea culpa, Google posted a more detailed apology in a blog post under the headline Gemini image generation…