Adobe Firefly repeats the same AI blunders as Google Gemini

Key Points:

  • Adobe’s AI image creation tool Firefly repeats the mistakes made by Google’s Gemini, producing historically inaccurate racial and ethnic depictions
  • Google paused Gemini’s ability to generate images of people after criticism for creating historically inaccurate images and refusing to depict white people
  • Adobe’s Firefly and Google’s Gemini rely on similar techniques but are trained on different datasets, with Adobe using only licensed or stock images


Adobe’s new AI image creation tool, Firefly, has landed in controversy for replicating mistakes made by Google’s Gemini tool. Gemini faced backlash for generating historically inaccurate images, such as portraying America’s Founding Fathers as Black and omitting white individuals. Following public criticism, Google CEO Sundar Pichai acknowledged the missteps, and the company paused Gemini’s ability to generate images of people.


As with Gemini, Semafor’s tests on Firefly revealed concerning inaccuracies in racial and ethnic depictions. Firefly and Gemini use similar techniques for transforming text into images but rely on different datasets: Adobe restricts its training data to stock images and licensed content, while Google drew from a broader range of sources. Despite their differing corporate cultures, both companies grapple with the same challenge of controlling the inherent biases in AI image generation technology.


In a test comparing Firefly and Gemini responses to controversial prompts, Firefly generated images of Black soldiers serving in Nazi Germany during WWII, inserted Black figures into scenes from the 1787 Constitutional Convention, and, when asked for characters such as an old white man or Vikings, returned racially diverse variations, including Black Vikings. These outcomes stem from designers’ efforts to combat racial stereotypes by aiming for diverse representations across roles, an approach that has drawn criticism for producing historical distortions.


The controversy surrounding Firefly underscores that this issue transcends individual companies and models. Notably, Adobe adheres to strict training standards, drawing only from licensed content to mitigate copyright concerns for its users. Despite this conscientious approach, Adobe has not yet responded to inquiries about Firefly’s outputs.


The challenges faced by Firefly and its counterparts reflect a broader dilemma in the tech industry over biased AI image creation. While efforts to promote diversity and combat stereotypes are commendable, the inadvertent historical revisions and misrepresentations fuel debates about the intersection of technology and societal values. Adobe’s experience with Firefly serves as a cautionary tale for tech companies navigating the complexities of AI-driven image generation in an increasingly scrutinized landscape.


©2024 The Horizon