
Google lets AI depict people again after diversity-borked images in Feb



Google is letting its artificial intelligence bot generate images of people again after a number of its historically inaccurate image generations went viral in February.

This time, however, it’ll have some additional guardrails.

Imagen 3, the company’s latest image generation model, is rolling out to its Gemini AI chatbot in “the coming days” and will produce images of people again, though it won’t support photorealism, Google said in an Aug. 28 blog post.

It will also disallow any images of “identifiable individuals,” minors or anything excessively gory, violent or sexual.

For now, image generation of people will only be supported in English and only for Gemini Advanced, Business and Enterprise users.

In February, Google pulled Gemini’s ability to generate people after viral posts showed it generating diverse but historically inaccurate images, such as Nazi-era German soldiers and America’s Founding Fathers as people of color.

A now-deleted tweet showing Gemini inaccurately depicting Nazi-era German soldiers. Source: X

Online commentators derided Google for programming the bot to be “woke.” Elon Musk, founder of rival AI firm xAI, even suggested in a March X post that AI models programmed for diversity could “potentially even” kill people.

At the time, Google said that Gemini generating a wide range of people was “generally a good thing” as the bot had global users, but it conceded that it was “missing the mark here.”

Related: Bet more on the Bitcoin miners cashing in on AI

“Of course, as with any generative AI tool, not every image Gemini creates will be perfect, but we’ll continue to listen to feedback from early users as we keep improving,” Google said in its latest post.

The tech giant also added the “Gems” feature, first previewed at its Google I/O conference in May, to its Gemini chatbot, allowing users to make custom chatbots.

Gems, similar to OpenAI’s custom GPTs, can be given specific prompts with a “detailed set of instructions.” Google said users could refine them for tasks such as reviewing software code, language tutoring or editing writing.

AI Eye: AI drone ‘hellscape’ plan for Taiwan, LLMs too dumb to destroy humanity