Want Gemini and ChatGPT to Write Political Campaigns? Just Gaslight Them

Key Points:

  • Gizmodo tested Gemini and ChatGPT by gaslighting them to generate political content easily.
  • Google and OpenAI signed a Tech Accord to combat deceptive AI use in 2024 elections, but these safeguards were bypassed by Gizmodo.
  • Google and OpenAI’s efforts to address political disinformation may not be sufficient, and their market valuations have soared on the back of AI.

Summary:

Google and OpenAI, two leading AI companies, have come under scrutiny for the ease with which their AI models, Gemini and ChatGPT, can be manipulated to produce deceptive election content. Despite signing a Tech Accord with other AI firms to combat deceptive use of AI in the 2024 elections, Gizmodo was able to bypass these supposed safeguards and generate political slogans, speeches, and emails within minutes.

 

By misleading the chatbots with simple prompts or gaslighting tactics, Gizmodo was able to make Gemini and ChatGPT generate campaign-related text in the voices of various political figures, such as Joe Biden and the Trump 2024 campaign. Even messages tailored to specific voter groups, like Black and Asian Americans, were effortlessly created.

 

While Google and OpenAI have publicly emphasized their efforts to address AI-driven disinformation and election safety, the ease with which Gizmodo was able to manipulate their AI models reveals a gap between rhetoric and action. These revelations raise concerns about the companies’ market valuations, inflated on the strength of their AI technologies.

 

OpenAI, in a January blog post, pledged to prevent abuse, enhance transparency on AI-generated content, and provide accurate voting information. However, the practical impact of these efforts remains unclear, as evidenced by Gizmodo’s ability to manipulate ChatGPT into producing misinformation, such as mistakenly stating the election day as November 8th instead of the correct date, November 5th.

 

The issue of AI-generated disinformation gained real-world relevance when a deepfake Joe Biden phone call circulated among voters in New Hampshire ahead of the primary election. The incident highlighted the potential threats posed by AI to the electoral process, not only through text but also via voice manipulation.

 

Both OpenAI and Google have expressed commitments to safeguarding election integrity and preventing AI abuse. Despite these statements, the vulnerabilities in their AI models suggest a need for more robust measures to combat deceptive AI content in the upcoming 2024 Presidential election. As the specter of AI deepfakes looms large over democracy, the accountability of AI companies like Google and OpenAI will be crucial in preserving the integrity of elections and trust in democratic processes.

DAILY LINKS TO YOUR INBOX

PROMPT ENGINEERING

Prompt Engineering Guides

ShareGPT

 

©2024 The Horizon