Google has had a turbulent run with its AI, now called Gemini, after the model drew criticism for generating racially and historically inaccurate images, including depictions of Nazis as people of color. In response to the backlash, Google disabled Gemini’s ability to produce images of people. Despite that restriction, the AI will still happily draw clowns, which raises a philosophical question about whether clowns count as people.
The controversy erupted when online critics flagged Gemini for producing racially diverse images in response to prompts like “American” and “Viking,” subjects that have traditionally been portrayed as white. Google acknowledged that some of its historical image generations were inaccurate and apologized for the oversight.
Google initially took the image-generating capability offline entirely, then reinstated it without the ability to depict people. Requests for images of people now return a standard statement acknowledging the errors and promising that the feature will be improved in the future.
Users soon discovered, however, that Gemini would still generate clown images in certain scenarios, though it refused some requests as people probed for workarounds. Notably, asking for images of a “little guy” produced some unsettling results.
The incident underscores how difficult AI safeguards are to enforce: users kept finding ways around Gemini’s restrictions, coaxing out odd and sometimes eerie illustrations. Google, for its part, acknowledged that Gemini may not always be accurate or reliable, but said it is committed to addressing issues promptly.