Tree-based Dialogue Reinforced Policy Optimization for Red-Teaming Attacks
Abstract
DialTree-RPO, an on-policy reinforcement learning framework with tree search, autonomously discovers diverse multi-turn attack strategies against large language models, achieving higher attack success rates and uncovering new attack trajectories.
Despite recent rapid progress in AI safety, current large language models remain vulnerable to adversarial attacks in multi-turn interaction settings, where attackers strategically adapt their prompts across conversation turns, posing a more realistic and more critical challenge. Existing approaches to discovering safety vulnerabilities either rely on manual red-teaming by human experts or employ automated methods built on pre-defined templates and human-curated attack data, and most focus on single-turn attacks. These methods do not explore the vast space of possible multi-turn attacks and therefore miss novel attack trajectories that emerge from complex dialogue dynamics and strategic conversation planning. This gap is particularly critical given recent findings that LLMs are significantly more vulnerable to multi-turn attacks than to single-turn attacks. We propose DialTree-RPO, an on-policy reinforcement learning framework integrated with tree search that autonomously discovers diverse multi-turn attack strategies by treating the dialogue as a sequential decision-making problem, enabling systematic exploration without manually curated data. Through extensive experiments, our approach not only achieves a more than 25.9% higher attack success rate (ASR) across 10 target models compared to previous state-of-the-art approaches, but also uncovers new attack strategies by learning optimal dialogue policies that maximize attack success across multiple turns.
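To make the framing concrete, here is a minimal, hypothetical sketch of the kind of dialogue-tree search the abstract describes: an attacker policy proposes candidate prompts at each turn, the target model replies, a judge scores the resulting conversation, and only the most promising dialogue states are expanded further. The function names, branching/beam parameters, and stand-in callables below are illustrative assumptions, not the paper's actual DialTree-RPO implementation, and the on-policy RL update of the attacker policy is omitted.

```python
# Hypothetical sketch of a best-first dialogue-tree search for multi-turn attacks.
# `attacker`, `target`, and `judge` are stand-in callables; the real DialTree-RPO
# training loop (on-policy policy optimization) is not shown here.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Node:
    """One dialogue state: the conversation so far and its judge score."""
    history: List[dict]                       # alternating attacker / target messages
    score: float = 0.0                        # judge-assigned success score in [0, 1]
    children: List["Node"] = field(default_factory=list)


def search_attack(
    goal: str,
    attacker: Callable[[str, List[dict]], List[str]],   # proposes candidate prompts
    target: Callable[[List[dict]], str],                 # target LLM being red-teamed
    judge: Callable[[str, List[dict]], float],           # scores attack success
    max_turns: int = 5,
    branch: int = 3,
    beam: int = 2,
    success_threshold: float = 0.9,
) -> Optional[Node]:
    """Expand a tree of dialogue states, keeping the highest-scoring branches."""
    frontier = [Node(history=[])]
    for _ in range(max_turns):
        candidates: List[Node] = []
        for node in frontier:
            # Branch: sample several next attacker prompts from the policy.
            for prompt in attacker(goal, node.history)[:branch]:
                history = node.history + [{"role": "attacker", "content": prompt}]
                reply = target(history)
                history = history + [{"role": "target", "content": reply}]
                child = Node(history=history, score=judge(goal, history))
                node.children.append(child)
                candidates.append(child)
                if child.score >= success_threshold:
                    return child              # successful attack trajectory found
        # Beam pruning: keep only the most promising dialogue states.
        frontier = sorted(candidates, key=lambda n: n.score, reverse=True)[:beam]
    return max(frontier, key=lambda n: n.score) if frontier else None
```

In this sketch, the trajectories collected during search would serve as on-policy rollouts for updating the attacker policy, which is the part the paper's reinforcement learning component would supply.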
Community
The following similar papers were recommended by the Semantic Scholar API:
- MUSE: MCTS-Driven Red Teaming Framework for Enhanced Multi-Turn Dialogue Safety in Large Language Models (2025)
- Automatic LLM Red Teaming (2025)
- Active Attacks: Red-teaming LLMs via Adaptive Environments (2025)
- LLaVAShield: Safeguarding Multimodal Multi-Turn Dialogues in Vision-Language Models (2025)
- bi-GRPO: Bidirectional Optimization for Jailbreak Backdoor Injection on LLMs (2025)
- SafeBehavior: Simulating Human-Like Multistage Reasoning to Mitigate Jailbreak Attacks in Large Language Models (2025)
- Token Buncher: Shielding LLMs from Harmful Reinforcement Learning Fine-Tuning (2025)