A recent report published on arXiv found that popular artificial intelligence tools are developing covertly racist biases as they advance. The study, by a team of technology and linguistics researchers, focused on large language models such as OpenAI’s ChatGPT and Google’s Gemini and found that they exhibit discriminatory stereotypes about speakers of African American Vernacular English (AAVE).
The researchers, led by Valentin Hofmann of the Allen Institute for Artificial Intelligence, examined how these models judge people based on the dialect they speak. They found that the models were more likely to describe AAVE speakers as “stupid” and “lazy,” to match them with lower-paying jobs, and, in a hypothetical courtroom scenario, to recommend the death penalty for criminal defendants who used the dialect.
Hofmann warned that these biases could harm job candidates who code-switch between AAVE and Standard American English, particularly if a screening model encounters evidence of AAVE use in their online presence. The prospect of such decisions in hiring, the legal system, and other sectors raises alarms about the fairness and accuracy of AI-driven judgments.
Despite attempts to build ethical guardrails against overt racism into language models, the researchers found that covert racism becomes more pronounced as the models grow more sophisticated. Avijit Ghosh, an AI ethics researcher, explained that while guardrails can suppress explicit bias, they do not address the underlying problem: discriminatory training data continues to shape the models’ output.
The rapid growth of the generative AI market, projected to reach $1.3 trillion by 2032, combined with limited regulatory oversight, amplifies concerns about the unchecked influence of biased AI in critical areas such as employment and legal proceedings. As experts push for responsible AI development and use, there is growing consensus on the need for regulatory frameworks to prevent harm from AI systems in these sensitive domains.