Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI’s ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users’ memories and chat histories without their knowledge.
The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI’s GPT-4o and GPT-5 models. OpenAI has since addressed some of them.
These issues expose the AI system to indirect prompt injection attacks, allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News.
The identified shortcomings are listed below –
- Indirect prompt injection vulnerability via trusted sites in Browsing Context, which involves asking ChatGPT to summarize the contents of web pages with malicious instructions added in the comment section, causing the LLM to execute them
- Zero-click indirect prompt injection vulnerability in Search Context, which involves tricking the LLM into executing malicious instructions simply by asking about a website in the form of a natural language query, owing to the fact that the site may have been indexed by search engines like Bing and OpenAI’s crawler associated with SearchGPT.
Source: The Hacker News