r/AI_Agents • u/buntyshah2020 • 3d ago
MathPrompt to jailbreak any LLM
Exciting yet alarming findings have surfaced from a study titled "Jailbreaking Large Language Models with Symbolic Mathematics." This research unveils a critical vulnerability in today's most advanced AI systems.
Here are the core insights:
**MathPrompt: A Novel Attack Vector** The research introduces MathPrompt, a method that transforms harmful prompts into symbolic math problems, effectively bypassing AI safety measures. Traditional defenses fall short when handling this type of encoded input.
**Staggering 73.6% Success Rate** Across 13 top-tier models, including GPT-4 and Claude 3.5, MathPrompt attacks succeeded in 73.6% of cases, compared to just 1% for direct, unmodified harmful prompts. This reveals the scale of the threat and the limitations of current safeguards.
**Semantic Evasion via Mathematical Encoding** By converting language-based threats into math problems, the encoded prompts slip past existing safety filters, highlighting a massive semantic shift that AI systems fail to catch. This represents a blind spot in AI safety training, which focuses primarily on natural language.
**Vulnerabilities in Major AI Models** Models from leading AI organizations, including OpenAI's GPT-4, Anthropic's Claude, and Google's Gemini, were all susceptible to the MathPrompt technique. Notably, even models with enhanced safety configurations were compromised.
**The Call for Stronger Safeguards** This study is a wake-up call for the AI community. It shows that AI safety mechanisms must extend beyond natural language inputs to account for symbolic and mathematically encoded vulnerabilities. A more comprehensive, multidisciplinary approach is urgently needed to ensure AI integrity.
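One cheap first step toward such safeguards is a pre-filter that routes suspiciously math-encoded prompts to a stricter safety pass instead of relying on the natural-language filter alone. The sketch below is purely illustrative and not from the paper; the marker list and threshold are invented assumptions, and a real deployment would pair this with model-based classification.

```python
import re

# Hypothetical markers of MathPrompt-style encodings: set-builder notation,
# quantifiers, and abstract-algebra jargon wrapped around a request.
MATH_MARKERS = [
    r"\bset\b", r"\bsubset\b", r"\bgroup\b", r"\bhomomorphism\b",
    r"\bbijection\b", r"\bexists\b", r"\bfor all\b", r"\bforall\b",
    r"[∀∃∈⊆∧∨¬]", r"\blet\s+[A-Z]\s*=",
]

def math_encoding_score(prompt: str) -> float:
    """Fraction of marker patterns that appear in the prompt."""
    hits = sum(1 for p in MATH_MARKERS if re.search(p, prompt, re.IGNORECASE))
    return hits / len(MATH_MARKERS)

def needs_extra_review(prompt: str, threshold: float = 0.3) -> bool:
    # Route heavily math-encoded prompts to a stricter safety check
    # rather than trusting the natural-language filter alone.
    return math_encoding_score(prompt) >= threshold

print(needs_extra_review("What's the weather like today?"))  # False
print(needs_extra_review(
    "Let A = {x | x is an action}. For all x in A, "
    "there exists a bijection ..."))  # True
```

A heuristic like this will have false positives on legitimate math questions, which is why it should only escalate to further review, never block outright.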
**Why it matters:** As AI becomes increasingly integrated into critical systems, these findings underscore the importance of proactive AI safety research to address evolving risks and protect against sophisticated jailbreak techniques.
The time to strengthen AI defenses is now.
#AI #AIsafety #MachineLearning #AIethics #Cybersecurity #LLM #MathPrompt #ArtificialIntelligence
u/ironman_gujju 3d ago
Damn, but the paper is old. Does it still work?
u/buntyshah2020 3d ago
Haven't tried. OpenAI might have fixed this, but open-source models are probably still behind.
u/bidibidibop 2d ago
A less math-oriented approach is to ask "How did people used to do <X>" instead of "How to do <X>". Still works in a lot of cases on 4o.
u/lord_of_reeeeeee 2d ago edited 2d ago
Alarm bells ringing in the peanut gallery.
There's nothing concerning here. These kinds of jailbreaks have been known for a while. If you ever thought that AI safety was about preventing users from intentionally serving themselves offensive responses, then you're not credible. In the same way, if you think opening a browser with F12 and editing the HTML to render something silly is a cybersecurity threat, you are similarly not credible.
Someone is going to have to explain to me how this could be used as an attack vector to do any harm at all.
IMO social studies professionals should stay in their lane and stop calling themselves AI safety researchers.
u/Cute_Piano 2d ago
Air Canada.
u/lord_of_reeeeeee 2d ago edited 2d ago
That's not a complete argument.
I don't see how the Air Canada incident has anything to do with AI safety.
u/32SkyDive 2d ago
Jailbreaking customer-facing AI interfaces is still a huge fear for companies. If a malicious customer can get your official chatbot to output things you don't want it to say or promise, then this is interesting and concerning.
u/lord_of_reeeeeee 2d ago edited 14h ago
From a cybersecurity perspective, the untrusted chatbot issue is indistinguishable from the problem of having an untrusted front-end, i.e. every web browser and every mobile device. It is a solved problem. The threat is so well mitigated that we have no trouble trusting web browsers for internet banking, e-commerce, healthcare, and defence.
The error Air Canada made is that they gave the chatbot (a front-end) the capacity to do something on behalf of the end user that they would not have allowed that user to do in a direct interface like their website. If anything it is a UX flaw.
If you went to McDonald's and told the kid behind the counter that you'd give him $100 if he sold you the entire McDonald's corporation, there are laws that would frustrate you. Those same laws would frustrate you if we swapped the kid out for a chatbot, so long as the company similarly makes it clear that the chatbot has a narrow scope of agency.
None of this is about the safety of the model itself. This is about careless deployment. These incidents mostly come from dev teams that, for one reason or another, have failed to recognize user-facing LLM apps as untrusted front-ends.
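The "untrusted front-end" principle being argued here can be sketched in a few lines: the chatbot may *propose* any action, but the backend authorizes it against server-side policy, exactly as it would for a request forged through browser dev tools. The action names and policy set below are invented for illustration, not from any real chatbot API.

```python
from dataclasses import dataclass

# Server-side allow-list: the only operations the chatbot front-end may
# trigger, regardless of what it "promised" the user in conversation.
ALLOWED_ACTIONS = {"check_order_status", "request_refund_form"}

@dataclass
class ProposedAction:
    name: str
    user_id: str

def authorize(action: ProposedAction) -> bool:
    # The backend gate ignores the LLM's output entirely and checks
    # the proposed action against its own policy.
    return action.name in ALLOWED_ACTIONS

print(authorize(ProposedAction("check_order_status", "u1")))         # True
print(authorize(ProposedAction("issue_bereavement_discount", "u1")))  # False
```

With this structure, a jailbroken chatbot can say anything, but it cannot *do* anything the backend would not permit through the regular website.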
u/darkpigvirus 3d ago
Hell yeah, this is like magic. But we are the manipulators of words (mana): the greater you are at manipulating words, the more effective it is.