Google makes drastic change to Gemini AI after shocking "Black Nazi" results
Mountain View, California - Google is no longer allowing its Gemini AI software to generate images of people after its efforts to display more people of color led to shockingly inaccurate and offensive depictions of history.
After images of racially diverse Nazi soldiers and American colonial settlers emerged on social media, the tech giant admitted that in some cases the depictions did not match their historical context and said that image generation of people would be temporarily limited.
At the same time, Google defended its efforts to make AI-generated images more diverse, even as it admitted to "missing the mark" in this case.
Three weeks ago, following advances by rivals like Microsoft's chat assistant Copilot, Google gave Gemini a new feature allowing users to generate images from text prompts.
In a blog post on Friday, Google explained that it had failed to program exceptions for cases in which diversity would definitely be out of place. The resulting images were "embarrassing and wrong," Google Senior Vice President Prabhakar Raghavan said.
"I can't promise that Gemini won't occasionally produce embarrassing, incorrect, or offensive results," he wrote, promising that Google will intervene quickly in the event of problems.
Raghavan also said the software had become too cautious over time and refused to fulfill some requests. If users ask for images of a "white veterinarian with a dog," he said, the AI should comply.
In recent years, various AI applications have displayed clear racial bias. For example, facial recognition software was initially poor at recognizing people of color.
Many AI image-generation services, meanwhile, started out depicting mostly white people, and the technology often reflects the biases and prejudices of its developers.