The article discusses the potential impact of generative AI, particularly Microsoft's Bing AI chatbot powered by OpenAI's GPT-4, on the spread of misinformation ahead of upcoming major elections. The concern is that the chatbot may supply incorrect information and fabricate false statements about elections, posing a threat to the democratic process. A study by AlgorithmWatch and AI Forensics found that the chatbot answered one-third of election-related questions about Germany and Switzerland incorrectly, misattributing false information to reputable sources and even fabricating allegations of corruption. Microsoft acknowledged the problem and committed to addressing it ahead of the 2024 elections.