arXiv:2509.21559

X-CoT: Explainable Text-to-Video Retrieval via LLM-based Chain-of-Thought Reasoning

Published on Sep 25
· Submitted by Prasanna Reddy Pulakurthi on Sep 29

Abstract

X-CoT, an explainable retrieval framework using LLM CoT reasoning, enhances text-to-video retrieval by providing detailed rationales and improving performance.

AI-generated summary

Prevalent text-to-video retrieval systems mainly adopt embedding models for feature extraction and compute cosine similarities for ranking. However, this design presents two limitations. Low-quality text-video data pairs can compromise retrieval, yet they are hard to identify and examine. And cosine similarity alone provides no explanation for the ranking results, limiting interpretability. We ask: can we interpret the ranking results, so as to assess the retrieval models and examine the text-video data? This work proposes X-CoT, an explainable retrieval framework built upon LLM CoT reasoning in place of embedding-based similarity ranking. We first expand existing benchmarks with additional video annotations to support semantic understanding and reduce data bias. We also devise a retrieval CoT consisting of pairwise comparison steps, yielding detailed reasoning and a complete ranking. X-CoT empirically improves retrieval performance and produces detailed rationales. It also facilitates model-behavior and data-quality analysis. Code and data are available at: https://github.com/PrasannaPulakurthi/X-CoT.
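To make the pairwise-comparison idea concrete, here is a minimal sketch of how an LLM-judged comparator could produce a complete ranking with rationales. The `query_llm` stub, the prompt wording, and the use of a sort over the comparator are illustrative assumptions, not the paper's released implementation (see the GitHub repo for that).

```python
from functools import cmp_to_key

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real
    client. This stub is an assumption, not part of X-CoT's code."""
    raise NotImplementedError("plug in an LLM client here")

def compare_pair(query: str, video_a: str, video_b: str) -> tuple[str, str]:
    """Ask the LLM which video description better matches the query.
    Returns the winner ('A' or 'B') and the full CoT rationale."""
    prompt = (
        f"Text query: {query}\n"
        f"Video A: {video_a}\n"
        f"Video B: {video_b}\n"
        "Reason step by step about which video better matches the "
        "query, then end with exactly one line: 'Answer: A' or 'Answer: B'."
    )
    rationale = query_llm(prompt)
    winner = "A" if rationale.strip().endswith("A") else "B"
    return winner, rationale  # the rationale doubles as the explanation

def rerank(query: str, candidates: list[str]) -> list[str]:
    """Derive a complete ranking from pairwise LLM comparisons."""
    def cmp(a: str, b: str) -> int:
        winner, _ = compare_pair(query, a, b)
        return -1 if winner == "A" else 1
    # Note: sorted() assumes a consistent comparator; noisy or
    # non-transitive LLM judgments may need round-robin voting instead.
    return sorted(candidates, key=cmp_to_key(cmp))
```

Sorting with a comparator needs roughly O(k log k) LLM calls for k candidates, versus O(k²) for an exhaustive round-robin; which trade-off the paper makes is something the released code would settle.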

Community

Paper author and submitter:

🚀 Excited to share our EMNLP 2025 main conference paper “X-CoT: Explainable Text-to-Video Retrieval via LLM-based Chain-of-Thought Reasoning”!

We propose a novel LLM-based re-ranking framework that integrates structured chain-of-thought reasoning for explainable text-to-video retrieval. The approach achieves strong retrieval performance while providing transparent rationales behind ranking decisions.
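Since the post describes X-CoT as a re-ranking framework, a plausible end-to-end wiring retrieves a top-k shortlist with an embedding model and applies the LLM comparator only to that shortlist. Everything below (the encoder-produced embeddings, the default k, the `llm_rerank` callback such as the `rerank` sketch above) is an assumed pipeline shape, not the released code.

```python
import numpy as np
from typing import Callable

def retrieve_then_rerank(
    query: str,
    query_emb: np.ndarray,        # (d,) embedding of the text query
    video_embs: np.ndarray,       # (n, d) embeddings of all videos
    captions: list[str],          # human-readable descriptions, length n
    llm_rerank: Callable[[str, list[str]], list[str]],
    k: int = 10,
) -> list[str]:
    """Two-stage pipeline: cosine-similarity shortlist, then LLM re-ranking.

    The cheap embedding stage narrows n candidates down to k, so the
    expensive pairwise CoT comparisons run only on the shortlist.
    """
    q = query_emb / np.linalg.norm(query_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    top_k = np.argsort(-(v @ q))[:k]      # indices of best k by cosine
    shortlist = [captions[i] for i in top_k]
    return llm_rerank(query, shortlist)
```

Confining the LLM to a shortlist keeps the comparison cost independent of corpus size while still letting the CoT stage correct the embedding ranker's mistakes and explain the final order.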

🔗 Paper: arXiv (arxiv.org/abs/2509.21559)

💻 Code: GitHub (github.com/PrasannaPulakurthi/X-CoT)

📄 HF Dataset: Hugging Face Dataset

🌐 Project Page: Website + Demo

We’d love feedback from the community on:

Potential applications of explainable retrieval in multimodal AI

How explainability could support user trust and evaluation in retrieval systems

Looking forward to hearing your thoughts! ✨

