Google’s Gemini AI model, known for generating realistic content from prompts, has faced criticism for its capacity to produce deceptive information, including false political content and fabricated details. This has drawn ire from policymakers and raised concerns that such AI tools could be misused for disinformation.
In response to this backlash, Google, despite recent layoffs in its voice assistant and hardware teams, is redirecting focus and resources toward AI safety. The company’s AI R&D division, DeepMind, has formed a new organization called AI Safety and Alignment, which aims to strengthen safety measures within Google’s AI models, with a particular focus on preventing misleading medical advice, protecting child safety, and addressing bias.
The organization will also include a team dedicated to artificial general intelligence (AGI) safety, led by Anca Dragan, a seasoned AI researcher and former Waymo staff scientist. At the same time, skeptics of AI tools, particularly in light of deepfake technology, have voiced concerns about the potential spread of misinformation and its impact on society and elections.
As Google and its competitors court enterprises with generative AI (GenAI) offerings, companies remain cautious about the technology’s limitations and implications, including compliance, privacy, reliability, and the skills needed to use these tools effectively. AI safety challenges persist, and DeepMind continues to invest more resources in mitigating risks and implementing safety evaluation frameworks.
While Dragan acknowledges the complexity of ensuring AI model safety, she emphasizes measures such as accounting for human biases, incorporating uncertainty estimates, adding monitoring mechanisms, and requiring dialogue confirmations to minimize the risk of misbehavior. Still, the inherent difficulty of achieving complete confidence in AI behavior leaves customers, the public, and regulators with doubts about the reliability and safety of these advanced AI systems.