---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
language:
- en
license: apache-2.0
tags:
- transformers
- multimodal
pipeline_tag: visual-question-answering
---
# VL-Reasoner-7B

**VL-Reasoner-7B** achieves strong results across a range of multimodal reasoning benchmarks.

It is trained with the **GRPO-SSR** technique and serves as the foundation for [**VL-Rethinker**](https://huggingface.co/TIGER-Lab/VL-Rethinker-7B/).

For details of our approach and performance comparisons, please see our [paper](https://arxiv.org/abs/2504.08837).

For details of training and evaluation, please see our [code repo](https://github.com/TIGER-AI-Lab/VL-Rethinker/).

Explore further via the following links:

| [**🚀Project Page**](https://tiger-ai-lab.github.io/VL-Rethinker/) | [**📖Paper**](https://arxiv.org/abs/2504.08837) | [**🔗Github**](https://github.com/TIGER-AI-Lab/VL-Rethinker/) | [**🤗Data** (Coming Soon)]() |

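## Usage

A minimal inference sketch, assuming the checkpoint follows the standard Qwen2.5-VL-7B-Instruct interface in 🤗 Transformers. The repo id `TIGER-Lab/VL-Reasoner-7B` and the placeholder image path are assumptions, not confirmed by this card:

```python
# pip install transformers qwen-vl-utils accelerate
import torch
from qwen_vl_utils import process_vision_info  # vision-input helper shipped with Qwen2.5-VL
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

MODEL_ID = "TIGER-Lab/VL-Reasoner-7B"  # assumed repo id

# Load the model and its processor; bfloat16 + device_map spreads weights over available GPUs.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# One image plus one question, in the Qwen chat-message format.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/or/url/to/image.png"},  # placeholder input
            {"type": "text", "text": "How many objects are in the image? Think step by step."},
        ],
    }
]

# Render the chat template and collect the vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding the answer.
output_ids = model.generate(**inputs, max_new_tokens=1024)
answer_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(answer_ids, skip_special_tokens=True)[0])
```

A generous `max_new_tokens` budget is used here because reasoning-tuned models tend to produce long chains of thought before the final answer.
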
## Citation

If you find this model useful, please cite our work:

```bibtex
@article{vl-rethinker,
  title={VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning},
  author={Wang, Haozhe and Qu, Chao and Huang, Zuming and Chu, Wei and Lin, Fangzhen and Chen, Wenhu},
  journal={arXiv preprint arXiv:2504.08837},
  year={2025}
}
```