The UK’s cybersecurity agency has warned that chatbots can be manipulated by hackers to cause scary real-world consequences.
The National Cyber Security Centre (NCSC) has said there are growing cybersecurity risks of individuals manipulating the prompts through “prompt injection” attacks.
This is where a user creates an input or a prompt that is designed to make a language model – the technology behind chatbots – behave in an unintended manner.
A chatbot runs on artificial intelligence and is able to give answers to prompted questions by users. They mimic human-like conversations, which they have been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.
Large language models (LLMs), such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts.
Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious prompt injection will grow.
For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.
Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.
This year, Microsoft released a new version of its Bing search engine and conversational bot powered by LLMs. A Stanford university student, Kevin Liu, was able to create a prompt injection to find Bing Chat’s initial prompt.
Source: The Guardian