Cybersecurity researchers have discovered a new vulnerability in OpenAI’s ChatGPT Atlas web browser that could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant’s memory and run arbitrary code.
“This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware,” LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News.
The attack, at its core, exploits a cross-site request forgery (CSRF) flaw to inject malicious instructions into ChatGPT's persistent memory. Because the corrupted memory persists across devices and sessions, the tainted instructions can be triggered whenever a logged-in user turns to ChatGPT for legitimate purposes, allowing an attacker to seize control of the user's account, browser, or connected systems.
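To see why a CSRF flaw enables this kind of memory poisoning, consider that browsers attach a site's cookies to requests made to that site regardless of which page initiated the request (absent SameSite protections). The sketch below is a toy model of that behavior, not the actual exploit; all names, domains, and endpoint paths are hypothetical:

```python
# Toy model of the CSRF mechanism: cookies are keyed only by target
# domain, so a request triggered from a malicious page still carries
# the victim's authenticated session. All names here are hypothetical.

class Browser:
    """Minimal browser simulation that auto-attaches stored cookies per domain."""

    def __init__(self):
        self.cookies = {}  # domain -> {cookie_name: value}

    def set_cookie(self, domain, name, value):
        self.cookies.setdefault(domain, {})[name] = value

    def request(self, domain, path, origin):
        """Simulate a cross-site request: the initiating origin is recorded,
        but cookie attachment depends only on the target domain
        (pre-SameSite behavior)."""
        return {
            "domain": domain,
            "path": path,
            "origin": origin,
            "cookies": dict(self.cookies.get(domain, {})),
        }


browser = Browser()

# The victim logs in; a session cookie for the AI service is stored.
browser.set_cookie("chatgpt.example", "session", "victim-token")

# A malicious page on another site silently triggers a request to a
# hypothetical memory-update endpoint on the AI service.
forged = browser.request("chatgpt.example", "/memory/update",
                         origin="evil.example")

# The forged request rides on the victim's authenticated session, which
# is what lets injected instructions be written into persistent memory.
print(forged["cookies"]["session"])
```

Because the injected instructions land in memory rather than in a single page, they outlive the malicious page itself, which is what gives the attack its cross-session, cross-device reach.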
Memory, first introduced by OpenAI in February 2024, is designed to allow the AI chatbot to remember useful details between chats, thereby allowing its responses to be more personalized and relevant. This could be anything ranging from a user’s name and favorite color to their interests and dietary preferences.
Source: The Hacker News