Microsoft’s AI chatbot ‘hallucinates’ election information

Key Points:

  • Generative AI, such as Microsoft’s Bing AI chatbot, has the potential to spread misinformation, posing a threat to the democratic process during major elections.
  • A study by AlgorithmWatch and AI Forensics revealed that the chatbot provided incorrect information and fabricated false statements about elections in Germany and Switzerland.
  • Microsoft acknowledged the issue and committed to addressing the misinformation generated by its AI chatbot ahead of the 2024 elections.

Summary:

The article discusses the potential impact of generative AI, particularly Microsoft’s Bing AI chatbot powered by OpenAI’s GPT-4, on the spread of misinformation during upcoming major elections. The concern is that the chatbot may provide incorrect information and fabricate false statements about elections, posing a threat to the democratic process. A study conducted by AlgorithmWatch and AI Forensics found that the chatbot gave incorrect answers to one-third of questions about elections in Germany and Switzerland, attributing false information to reputable sources and even fabricating allegations of corruption. Microsoft acknowledged the issue and committed to addressing it ahead of the 2024 elections.


©2024 The Horizon