Researchers at Ben-Gurion University have identified a vulnerability in cloud-based AI assistants, such as ChatGPT, that enables attackers to intercept traffic between users and these AI assistants and reconstruct the conversations. The issue lies in the way these chatbots stream their responses: to keep replies feeling fast, each token is sent in its own encrypted packet as soon as it is generated, and the encryption does not conceal packet sizes. By analyzing the length, size, and sequence of these token-sized packets, malicious actors can infer the content of the responses without breaking the encryption itself.
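To illustrate the core of the side channel, consider the following sketch (a hypothetical illustration, not the researchers' code): if each token travels in its own encrypted record and the cipher adds only a fixed per-record overhead, then an eavesdropper who captures the record sizes can recover the token lengths directly. The FIXED_OVERHEAD constant and the captured sizes below are assumed values for the example.

```python
# Illustrative sketch of the token-length side channel. Assumes each streamed
# token occupies its own encrypted record and that the cipher adds a constant
# number of bytes per record, so ciphertext length tracks plaintext length.

FIXED_OVERHEAD = 22  # hypothetical fixed bytes added per record (header + auth tag)

def token_lengths_from_records(record_sizes):
    """Recover the sequence of plaintext token lengths from the ciphertext
    record sizes an eavesdropper can observe on the network."""
    return [size - FIXED_OVERHEAD for size in record_sizes]

# The attacker never sees plaintext, only encrypted record sizes...
captured = [23, 27, 25, 24, 31]              # bytes per sniffed record
print(token_lengths_from_records(captured))  # -> [1, 5, 3, 2, 9]: token lengths leak
```

A sequence of token lengths like this is exactly the signal the researchers describe feeding into further analysis to guess the underlying text.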
Yisroel Mirsky, head of the Offensive AI Research Lab, underscored the severity of the vulnerability, noting that anyone able to observe the traffic, including malicious actors on the same Wi-Fi network or LAN as the client, can read private chats sent from chatbots like ChatGPT. The attack is passive and can occur without the knowledge of the AI assistant provider or the client: although OpenAI encrypts its traffic, the way the encryption is applied leaves the content of the messages exposed.
The researchers conducted a comprehensive evaluation of the vulnerability across various AI assistant platforms, including Microsoft Bing AI (Copilot) and OpenAI’s ChatGPT-4. They successfully deciphered responses from four different services, demonstrating the exploit’s effectiveness.
To address this vulnerability, the researchers propose two main mitigations: stop sending tokens individually (for example, by transmitting them in larger batches) or pad each token to the length of the largest possible packet. Either change removes the packet-size signal an eavesdropper relies on, thereby protecting the confidentiality of conversations with AI assistants.
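As a rough illustration of the padding countermeasure, the sketch below (hypothetical, not the researchers' implementation) pads every token to one fixed length before transmission, so all records look identical on the wire; MAX_TOKEN_LEN and the padding byte are assumed values.

```python
# Minimal sketch of the padding mitigation: every token is padded to one fixed
# length before it is encrypted and sent, so packet sizes no longer reveal
# anything about the underlying text.

MAX_TOKEN_LEN = 16  # assumed upper bound on token length, in bytes

def pad_token(token: str) -> bytes:
    """Pad a token to MAX_TOKEN_LEN bytes with a fixed filler byte."""
    data = token.encode("utf-8")
    if len(data) > MAX_TOKEN_LEN:
        raise ValueError("token exceeds padding bound")
    return data + b"\x00" * (MAX_TOKEN_LEN - len(data))

# Every padded token now has the same length on the wire.
for tok in ["The", " weather", " is", " nice"]:
    print(len(pad_token(tok)))  # always 16
```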