Google Eliminates Capability of Gemini AI Chatbot to Create Human Images


Google announced on Thursday that it is temporarily halting its Gemini artificial intelligence chatbot’s generation of images of people, one day after apologizing for “inaccuracies” in historical depictions it was producing.

This week, Gemini users shared on social media images they said the chatbot had generated, showing racially diverse individuals in historically white-dominated scenes. This led critics to ask whether the company was overcorrecting for the risk of racial bias in its AI model.

In a post on the social media site X, Google said, “We’re already working to address recent issues with Gemini’s image generation feature. We’re going to stop creating people’s images while we do this, and we’ll re-release an improved version soon.”

According to earlier research, unfiltered AI image generators are more likely to produce lighter-skinned men when asked to depict a person in a variety of scenarios, and they can also reinforce gender and racial stereotypes present in their training data.

In its statement on Wednesday, Google said it is “aware that Gemini is offering inaccuracies in some historical image generation depictions” and is “working to improve these kinds of depictions immediately.”

The company said that Gemini’s ability to generate a “wide range of people” is “generally a good thing” because people from all over the world use it, but that it is “missing the mark” here.

Sourojit Ghosh, a researcher at the University of Washington who studies AI image generators, expressed support for Google’s decision to halt the creation of people’s faces, but said he is “a little conflicted about how we got to this outcome.” Despite accusations of so-called “white erasure” and the notion that Gemini refuses to generate the faces of white people, ideas that have been making the rounds on social media this week, Ghosh’s research has mostly shown the contrary.

“I find it a little hard to square the speed of this response with the volume of other studies and literature that demonstrate how models like this remove the voices of historically marginalized people,” he said.

When prompted to produce an image of a person, or even just a large gathering of people, Gemini now responds that it is “working to improve” that capability. The chatbot adds, “We expect this feature to return soon and will notify you in release updates when it does.”

According to Ghosh, Google probably has the ability to filter responses so that they reflect the historical context of a user’s prompt, but a technical fix alone won’t be sufficient to address the broader harms caused by image generators built on millions of images and works of art obtained online.

“It will take time to develop a text-to-image generator that doesn’t damage representations,” he stated. “They are a mirror of the culture we inhabit.”
