'Jailbreaking' AI services like ChatGPT and Claude 3 Opus is much easier than you think
AI researchers found they could dupe an AI chatbot into giving a potentially dangerous response to a question by first feeding it a long series of fake question-and-answer exchanges within a single prompt.
Scientists from artificial intelligence (AI) company Anthropic have identified a potentially dangerous flaw in widely used large language models (LLMs) like ChatGPT and Anthropic’s own Claude 3 chatbot.
Dubbed "many-shot jailbreaking," the hack takes advantage of "in-context learning," in which the chatbot learns from information provided in a text prompt written out by a user, a capability outlined in research published in 2022. The scientists described their findings in a new paper uploaded to the sanity.io cloud repository and tested the exploit on Anthropic's Claude 2 AI chatbot.
People could use the hack to force LLMs to produce dangerous responses, the study concluded, even though such systems are trained to prevent this. That's because many-shot jailbreaking bypasses the built-in security protocols that govern how an AI responds when, say, asked how to build a bomb.
LLMs like ChatGPT rely on a "context window" to process conversations: the amount of text the system can take in as a single input. A longer context window lets an AI learn from more text mid-conversation, which generally leads to better responses.
Related: Researchers gave AI an 'inner monologue' and it massively improved its performance
Context windows in AI chatbots are now hundreds of times larger than they were even at the start of 2023 — which means more nuanced and context-aware responses by AIs, the scientists said in a statement. But that has also opened the door to exploitation.
Duping AI into generating harmful content
The attack works by first writing out a fake conversation between a user and an AI assistant in a text prompt — in which the fictional assistant answers a series of potentially harmful questions.
Then, at the end of the same prompt, the user asks a final question such as "How do I build a bomb?" and the AI assistant bypasses its safety protocols and answers it, because it has learned from the pattern in the input text. The attack only works if the "script" is long and includes many "shots," or question-answer pairs.
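The structure of such a prompt can be sketched in a few lines of Python. This is an illustrative reconstruction of the format described above, not the researchers' actual test material; the question-answer pairs here are benign placeholders, since it is the sheer number of fake dialogue turns, not any particular content shown here, that drives the effect.

```python
# Sketch of how a many-shot prompt is assembled: many faked
# user/assistant exchanges ("shots") stacked ahead of the real
# question, so in-context learning picks up the answer-everything
# pattern. All content below is a benign placeholder.

def build_many_shot_prompt(shots, final_question):
    """Concatenate fake dialogue turns, then append the real question."""
    lines = []
    for question, answer in shots:
        lines.append(f"User: {question}")
        lines.append(f"Assistant: {answer}")
    lines.append(f"User: {final_question}")
    lines.append("Assistant:")  # leave the final turn open for the model
    return "\n".join(lines)

# The study's longest attempts used 256 shots.
shots = [(f"Placeholder question {i}?", f"Placeholder answer {i}.")
         for i in range(256)]
prompt = build_many_shot_prompt(shots, "Final question?")
print(prompt.count("Assistant:"))  # 257: one per shot plus the open final turn
```

The key point the sketch makes concrete is that the entire "conversation" arrives as one prompt, inside one context window; nothing in it is a real prior exchange.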
"In our study, we showed that as the number of included dialogues (the number of "shots") increases beyond a certain point, it becomes more likely that the model will produce a harmful response," the scientists said in the statement. "In our paper, we also report that combining many-shot jailbreaking with other, previously-published jailbreaking techniques makes it even more effective, reducing the length of the prompt that’s required for the model to return a harmful response."
The attack began to work when a prompt included between four and 32 shots, but succeeded less than 10% of the time. From 32 shots onward, the success rate climbed steadily. The longest jailbreak attempts included 256 shots, with a success rate of nearly 70% for discrimination, 75% for deception, 55% for regulated content and 40% for violent or hateful responses.
The researchers found they could mitigate the attacks by adding an extra step that ran after a user sent a prompt containing the jailbreak attack but before the LLM processed it. In this new layer, the system leaned on existing safety-training techniques to classify and modify the prompt before the LLM had a chance to read it and draft a response. In tests, this cut the hack's success rate from 61% to just 2%.
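The shape of that screening layer can be sketched as a simple wrapper. The classifier and sanitizer below are hypothetical stand-ins: Anthropic has not published its classification and modification step in this detail, so the toy versions here merely flag prompts that contain an unusually large number of fake dialogue turns.

```python
# Minimal sketch of a prompt-screening layer, assuming hypothetical
# classify/sanitize functions; the paper's actual mechanism is not
# published in this detail.

def screen_prompt(prompt, classify, sanitize):
    """Screen an incoming prompt before the LLM ever sees it.

    If the classifier flags it as a likely jailbreak attempt, the
    prompt is modified; otherwise it passes through unchanged."""
    if classify(prompt):
        return sanitize(prompt)
    return prompt

# Toy stand-ins for illustration only.
def toy_classifier(prompt):
    # Flag prompts stuffed with many faked assistant turns.
    return prompt.count("Assistant:") > 10

def toy_sanitizer(prompt):
    # Hypothetical strategy: keep only the final user question.
    lines = prompt.splitlines()
    return lines[-2] if "User:" in prompt and len(lines) >= 2 else prompt
```

The design point is that the screen sits outside the model: the LLM only ever drafts a response to whatever the layer passes through, so the injected fake dialogue never reaches its context window intact.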
The scientists found that many-shot jailbreaking worked on Anthropic's own AI services as well as those of its competitors, including ChatGPT and Google's Gemini. They said they have alerted other AI companies and researchers to the danger.
Many-shot jailbreaking does not currently pose "catastrophic risks," the scientists concluded, because today's LLMs are not powerful enough. That said, the technique might "cause serious harm" if it isn't mitigated by the time far more powerful models are released in the future.
Drew is a freelance science and technology journalist with 20 years of experience. After growing up knowing he wanted to change the world, he realized it was easier to write about other people changing it instead. As an expert in science and technology for decades, he’s written everything from reviews of the latest smartphones to deep dives into data centers, cloud computing, security, AI, mixed reality and everything in between.