Abstract
ReVPT enhances multi-modal LLMs' visual reasoning capabilities using reinforcement learning, achieving state-of-the-art performance on several perception-heavy visual benchmarks.
Visual reasoning, a cornerstone of human intelligence, encompasses complex perceptual and logical processes essential for solving diverse visual problems. While advances in computer vision have produced powerful models for various perceptual tasks, leveraging these models for general visual reasoning remains challenging. Prior work demonstrates that augmenting LLMs with vision models via supervised finetuning improves performance, but it faces key limitations: expensive data generation, reliance on careful data filtering, and poor generalization. To address these issues, we propose ReVPT, which uses reinforcement learning to enhance multi-modal LLMs' abilities to reason about and use visual tools. We introduce a novel RL algorithm based on GRPO, designed to train models to reason with a suite of four visual tools. Through extensive experiments, we show that our method achieves state-of-the-art performance on several perception-heavy benchmarks, including SAT, CV-Bench, BLINK, and MMStar, significantly outperforming supervised and text-based RL finetuning baselines. Notably, our ReVPT-3B and ReVPT-7B outperform their instruct-model counterparts by 9.03% and 9.44% on CV-Bench. Finally, we offer the community new insights into RL-based visual tool usage through extensive ablations. Our code is available at https://github.com/ls-kelvin/REVPT.
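For readers unfamiliar with GRPO, the group-relative advantage at its core can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's implementation: the function name, tensor shapes, and reward values below are hypothetical, and ReVPT's actual reward would also score the model's visual tool calls.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Group-relative advantages (hypothetical sketch, not the paper's code).

    rewards: (num_prompts, group_size) -- one scalar reward per sampled
    rollout. Each rollout is standardized against the other rollouts for
    the same prompt, so GRPO needs no learned value critic.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Illustrative rewards for 2 prompts with 4 sampled rollouts each,
# e.g. 1.0 for a correct answer reached via well-formed tool calls.
rewards = torch.tensor([[1.0, 0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(grpo_advantages(rewards))
```

These per-rollout advantages would then weight a clipped policy-gradient update over the sampled tool-use trajectories.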
Community
Technical report on training a tool-use agent for enhanced visual perception. Code and dataset are fully open-source at https://github.com/ls-kelvin/REVPT.
The following related papers were recommended by the Semantic Scholar API:
- Learning Only with Images: Visual Reinforcement Learning with Reasoning, Rendering, and Visual Feedback (2025)
- Self-Rewarding Vision-Language Model via Reasoning Decomposition (2025)
- SIFThinker: Spatially-Aware Image Focus for Visual Reasoning (2025)
- LENS: Learning to Segment Anything with Unified Reinforced Reasoning (2025)
- Datasets and Recipes for Video Temporal Grounding via Reinforcement Learning (2025)
- WebWatcher: Breaking New Frontier of Vision-Language Deep Research Agent (2025)
- Reinforcement Learning in Vision: A Survey (2025)