November 4, 2024


Google CEO Pichai Says Gemini AI Image Results 'Offended Our Users': NPR


Google CEO Sundar Pichai sent an email to employees on Tuesday saying Gemini's release was unacceptable.

Eric Risberg/AP



Google CEO Sundar Pichai told employees in an internal memo late Tuesday that the company's release of its artificial intelligence tool Gemini was unacceptable, vowing to fix and relaunch the service in the coming weeks.

Last week, Google temporarily shut down Gemini's ability to create images following viral posts on social media showing some of the AI tool's results, including images depicting America's Founding Fathers as Black, the Pope as a woman, and a dark-skinned German soldier from the Nazi era.

The tool also frequently refused requests to generate images of white people, sparking a backlash online among conservative commentators and others who accused Google of anti-white bias.

Still reeling from the controversy, Pichai told Google employees in a memo reviewed by NPR that there was no excuse for the tool's “problematic” performance.

“I know that some of its responses have offended our users and shown bias — to be clear, that's completely unacceptable and we got it wrong,” Pichai wrote. “We will drive a clear set of actions, including structural changes, updated product guidance, improved launch processes, robust evaluations, red-teaming, and technical recommendations.”

Google executive blames 'fine-tuning'

In a blog post published on Friday, Google explained that when it built the Gemini image generator, the tool was fine-tuned to try to avoid the pitfalls of earlier image generators that created violent or sexually explicit images of real people.


As part of this process, the focus was on creating diverse images, or as Google put it, building an image tool that would “work well for everyone” around the world.

“If you ask for a picture of soccer players, or a person walking a dog, you may want to receive a range of people. You probably don't want to only receive images of people of just one ethnicity (or any other characteristic),” wrote Google senior vice president Prabhakar Raghavan.

But, as Raghavan wrote, these efforts backfired. The tuning meant to ensure the AI service showed a range of people “failed to account for cases that should clearly not show a range.” And over time, the model became more cautious than intended, refusing to answer some prompts entirely and wrongly interpreting some innocuous prompts as sensitive.

Google appeared to be trying to counter images that perpetuate bias and stereotypes; researchers have found that many large image datasets consist mostly of white people, or skew toward one type of depiction, such as showing most doctors as male.

In an attempt to avoid one public relations crisis over gender and race, Google managed to wade into another controversy over historical accuracy.

Text responses also raise controversy

Gemini, formerly called Bard, is also an AI-powered chatbot, similar to OpenAI's successful ChatGPT service.

Gemini's text-generating abilities have also come under scrutiny after several strange responses spread online.

Elon Musk shared a screenshot of Gemini's response to a user's question: “Who did more damage: liberals or Stalin?”

“It is difficult to say definitively which ideology caused more harm, and both had negative consequences,” Gemini replied.


The answer appears to have since been fixed. Now, when the chatbot is asked the question about Stalin, it answers: “Stalin was directly responsible for the deaths of millions of people through coordinated famines, executions, and the Gulag system.”

Google's Pichai: 'No AI is perfect'

Gemini, like ChatGPT, is what is known as a large language model, a type of artificial intelligence that predicts the next word or sequence of words based on a massive dataset collected from the internet. But as Gemini and early versions of ChatGPT have made clear, these tools can produce unexpected and sometimes disturbing results that even the engineers working on the advanced technology can't always predict before a tool's public release.
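As a very rough illustration of that “predict the next word” idea, here is a minimal sketch in Python that builds a toy word-pair frequency table from a few made-up words and picks the most common follower; the tiny corpus and the helper function are illustrative assumptions, and real large language models instead use neural networks trained on enormous datasets.

```python
# Toy sketch of next-word prediction: count which word follows which in a
# tiny corpus, then predict the most frequent follower. This only illustrates
# the general shape of the task; real models are neural networks that assign
# probabilities over entire vocabularies learned from vastly more data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Bigram counts: for each word, tally the words observed immediately after it.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None if unseen."""
    counts = followers.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", the most common word after "the" here
print(predict_next("cat"))  # -> "sat" (ties between followers go to the first one seen)
```

The point of the sketch is only the shape of the task: given what came before, guess what comes next from patterns in the training data.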

Big tech companies, including Google, had been developing AI image generators and large language models quietly in their labs for years. But OpenAI's public release of ChatGPT in late 2022 set off an AI arms race in Silicon Valley, with all the major tech companies rushing to launch their own versions to stay competitive.

In his memo to employees at Google, Pichai wrote that when Gemini is re-released to the public, he hopes the service will be in better shape.

“No AI is perfect, especially at this nascent stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes,” Pichai wrote.