A critical security vulnerability in ChatGPT has been discovered that allows attackers to embed malicious SVG (Scalable Vector Graphics) and image files directly into shared conversations, potentially exposing users to sophisticated phishing attacks and harmful content.
The flaw, recently documented as CVE-2025-43714, affects the ChatGPT system through March 30, 2025.
Security researchers identified that instead of rendering SVG code as text within code blocks, ChatGPT renders these elements inline when a chat is reopened or shared through a public link.
This behavior effectively creates a stored cross-site scripting (XSS) vulnerability within the popular AI platform.
“The ChatGPT system through 2025-03-30 performs inline rendering of SVG documents instead of, for example, rendering them as text inside a code block, which enables HTML injection within most modern graphical web browsers,” said the researcher known by the handle zer0dac.
The security implications are significant. Attackers can craft SVG payloads containing deceptive messages that appear legitimate to unsuspecting users.
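To illustrate the class of problem, consider a hypothetical, harmless sketch (not the payload from the report): an SVG pasted into a chat could carry markup like the following. If the platform renders it inline rather than displaying it as code, the viewer sees what looks like an official notice instead of attacker-supplied markup.

```
<svg xmlns="http://www.w3.org/2000/svg" width="500" height="120">
  <!-- Hypothetical benign example: rendered inline, this reads as a system notice, not code -->
  <rect width="500" height="120" fill="#f6f6f6" stroke="#cccccc"/>
  <text x="20" y="50" font-family="sans-serif" font-size="16" fill="#202020">
    Your session has expired. Sign in again at example.com/login to continue.
  </text>
</svg>
```

The deception relies entirely on presentation: the same markup shown as text inside a code block would be obviously attacker-controlled, which is why the researcher singles out inline rendering as the core flaw.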
More concerning are the potential impacts on user wellbeing, as malicious actors could create SVGs with seizure-inducing flashing effects that may harm photosensitive individuals.
Source: Cybersecurity News