X-CoT: Explainable Text-to-Video Retrieval via LLM-based Chain-of-Thought Reasoning
Abstract
X-CoT is an explainable text-to-video retrieval framework that replaces embedding-based similarity ranking with LLM chain-of-thought reasoning, improving retrieval performance and producing detailed rationales.
Prevalent text-to-video retrieval systems mainly adopt embedding models for feature extraction and compute cosine similarities for ranking. However, this design has two limitations. Low-quality text-video data pairs can compromise retrieval, yet are hard to identify and examine. Cosine similarity alone provides no explanation for the ranking results, limiting interpretability. We ask: can we interpret the ranking results, so as to assess the retrieval models and examine the text-video data? This work proposes X-CoT, an explainable retrieval framework built on LLM CoT reasoning in place of embedding-based similarity ranking. We first expand existing benchmarks with additional video annotations to support semantic understanding and reduce data bias. We then devise a retrieval CoT consisting of pairwise comparison steps, yielding detailed reasoning and a complete ranking. X-CoT empirically improves retrieval performance and produces detailed rationales. It also facilitates analysis of model behavior and data quality. Code and data are available at: https://github.com/PrasannaPulakurthi/X-CoT.
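To make the pairwise-comparison CoT concrete, here is a minimal, hypothetical sketch of LLM-based pairwise re-ranking in Python. The `llm_compare` function and its word-overlap stub are placeholders so the example runs end to end; the actual X-CoT prompts and pipeline are in the repository above.

```python
# Minimal sketch of re-ranking via pairwise comparisons (illustrative only).
# `llm_compare` is a hypothetical stand-in for an LLM call that reasons
# step by step about which candidate better matches the query.
from functools import cmp_to_key


def llm_compare(query: str, caption_a: str, caption_b: str) -> int:
    """Return -1 if caption A is preferred, 1 if B is preferred, 0 if tied.

    Placeholder: prefers the caption sharing more words with the query.
    Replace the body with a real LLM call that produces a CoT rationale.
    """
    q = set(query.lower().split())
    score_a = len(q & set(caption_a.lower().split()))
    score_b = len(q & set(caption_b.lower().split()))
    return -1 if score_a > score_b else (1 if score_b > score_a else 0)


def rerank(query: str, candidates: list[str]) -> list[str]:
    """Turn pairwise LLM preferences into a complete ranking."""
    return sorted(candidates, key=cmp_to_key(lambda a, b: llm_compare(query, a, b)))


if __name__ == "__main__":
    query = "a dog catches a frisbee on the beach"
    captions = [
        "a cat sleeps on a sofa",
        "a dog leaps to catch a frisbee by the sea",
        "people play volleyball on the sand",
    ]
    print(rerank(query, captions))
```

One caveat of comparison-based sorting: it assumes the LLM's pairwise preferences are transitive, which may not hold in practice, so a real pipeline may need repeated comparisons or explicit tie-breaking.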
Community
🚀 Excited to share our EMNLP 2025 main conference paper “X-CoT: Explainable Text-to-Video Retrieval via LLM-based Chain-of-Thought Reasoning”!
We propose a novel LLM-based re-ranking framework that integrates structured chain-of-thought reasoning for explainable text-to-video retrieval. The approach achieves strong retrieval performance while providing transparent rationales behind ranking decisions.
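For a sense of what one structured comparison step might look like, below is a hypothetical prompt sketch; the exact prompts used in the paper are in the GitHub repository linked below.

```python
# Hypothetical CoT prompt template for a single pairwise comparison;
# the official X-CoT prompts live in the authors' repository.
PROMPT = """You are ranking videos for the query: "{query}"

Video A caption: {caption_a}
Video B caption: {caption_b}

Think step by step:
1. List the key entities and actions in the query.
2. Check which caption covers more of them.
3. Note any contradictions with the query.
4. Conclude with exactly "Answer: A" or "Answer: B".
"""


def build_prompt(query: str, caption_a: str, caption_b: str) -> str:
    return PROMPT.format(query=query, caption_a=caption_a, caption_b=caption_b)
```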
🔗 Paper: arXiv
💻 Code: GitHub
📄 HF Dataset: Hugging Face Dataset
🌐 Project Page: Website + Demo
We’d love feedback from the community on:
Potential applications of explainable retrieval in multimodal AI
How explainability could support user trust and evaluation in retrieval systems
Looking forward to hearing your thoughts! ✨
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Captioning for Text-Video Retrieval via Dual-Group Direct Preference Optimization (2025)
- EVENT-Retriever: Event-Aware Multimodal Image Retrieval for Realistic Captions (2025)
- CMRAG: Co-modality-based visual document retrieval and question answering (2025)
- ConViS-Bench: Estimating Video Similarity Through Semantic Concepts (2025)
- Enhancing Partially Relevant Video Retrieval with Robust Alignment Learning (2025)
- GAID: Frame-Level Gated Audio-Visual Integration with Directional Perturbation for Text-Video Retrieval (2025)
- Chat-Driven Text Generation and Interaction for Person Retrieval (2025)