Improve model card: Update pipeline tag, add library name, and enrich content for Video-MTR

#1 by nielsr (HF Staff) - opened
Files changed (1)
1. README.md +14 -5
README.md CHANGED
@@ -1,11 +1,20 @@
 ---
-license: apache-2.0
-language:
-- en
 base_model:
 - Qwen/Qwen2.5-VL-7B-Instruct
-pipeline_tag: visual-question-answering
+language:
+- en
+license: apache-2.0
+pipeline_tag: video-text-to-text
+library_name: transformers
 ---
+
+# Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding
+
+This model is a checkpoint for **Video-MTR**, presented in the paper [Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding](https://arxiv.org/abs/2508.20478).
+
+## Abstract
+Long-form video understanding, characterized by long-range temporal dependencies and multiple events, remains a challenge. Existing methods often rely on static reasoning or external visual-language models (VLMs), which face issues like complexity and sub-optimal performance due to the lack of end-to-end training. In this paper, we propose Video-MTR, a reinforced multi-turn reasoning framework designed to enable iterative key video segment selection and question comprehension. Unlike traditional video reasoning pipelines, which generate predictions in a single turn, Video-MTR performs reasoning in multiple turns, selecting video segments progressively based on the evolving understanding of previously processed segments and the current question. This iterative process allows for a more refined and contextually aware analysis of the video. To ensure the quality of the intermediate reasoning process, we introduce a novel gated bi-level reward system, combining trajectory-level rewards based on answer correctness with turn-level rewards emphasizing frame-query relevance. This system optimizes both video segment selection and question comprehension, eliminating the need for external VLMs and allowing end-to-end training. Extensive experiments on benchmarks like VideoMME, MLVU, and EgoSchema demonstrate that Video-MTR outperforms existing methods in both accuracy and efficiency, advancing the state-of-the-art in long video understanding.
+
 ## References
 
-* [Model Paper](https://arxiv.org/abs/2508.20478)
+* [Model Paper](https://arxiv.org/abs/2508.20478)
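
The abstract's gated bi-level reward system lends itself to a small illustration: trajectory-level correctness gates whether the turn-level frame-query relevance rewards are credited at all. The sketch below is a hypothetical paraphrase of that idea, not the paper's implementation; the gating rule and the weighting constant are assumptions.

```python
# Hypothetical sketch of a gated bi-level reward: turn-level relevance
# rewards are credited only when the trajectory-level outcome (the final
# answer) is correct. The gate and the 0.5 weight are illustrative
# assumptions, not values from the paper.
from typing import List


def gated_bilevel_reward(
    answer_correct: bool,
    turn_relevance: List[float],  # per-turn frame-query relevance scores in [0, 1]
    turn_weight: float = 0.5,     # assumed weighting of the turn-level term
) -> float:
    trajectory_reward = 1.0 if answer_correct else 0.0
    # Gate: a wrong final answer forfeits the intermediate rewards, so the
    # policy cannot farm relevance credit while failing the question.
    if not answer_correct:
        return trajectory_reward
    mean_turn_reward = sum(turn_relevance) / max(len(turn_relevance), 1)
    return trajectory_reward + turn_weight * mean_turn_reward
```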
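
Separately, since the updated metadata declares `library_name: transformers` with a Qwen/Qwen2.5-VL-7B-Instruct base, loading the checkpoint would presumably follow the stock Qwen2.5-VL recipe. Below is a minimal sketch under that assumption; the repo id, video path, and prompt are placeholders, and nothing in the PR confirms the checkpoint keeps the stock interface.

```python
# Minimal usage sketch, assuming the checkpoint follows the stock
# Qwen2.5-VL-7B-Instruct interface (recent transformers plus qwen-vl-utils).
# "<org>/Video-MTR" is a hypothetical repo id; replace it with the real one.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "<org>/Video-MTR"  # hypothetical placeholder
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One video-text-to-text turn in the Qwen2.5-VL chat format.
messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "file:///path/to/video.mp4"},
        {"type": "text", "text": "What are the key events in this video?"},
    ],
}]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding so only the answer is printed.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```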