Benchmarking Information Retrieval Models on Complex Retrieval Tasks
Abstract
A benchmark of complex retrieval tasks reveals that even state-of-the-art models struggle with high-quality retrieval, and LLM-based query expansion does not consistently improve performance.
Large language models (LLMs) are remarkably versatile tools for text-based tasks and have enabled countless previously unimaginable applications. Retrieval models, in contrast, have not yet seen such capable general-purpose models emerge. To reach that goal, retrieval models must be able to perform complex retrieval tasks, in which queries contain multiple parts, constraints, or requirements expressed in natural language. These tasks represent a natural progression from the simple, single-aspect queries used in the vast majority of existing, commonly used evaluation sets. Complex queries arise naturally as people expect search systems to handle more specific and often ambitious information requests, as demonstrated by how people use LLM-based information systems. Despite the growing need for retrieval models to handle complex retrieval tasks, there are few resources for assessing retrieval models on a comprehensive set of diverse complex tasks. The resources that do exist are limited in scope and often lack realistic settings, making it hard to gauge the true capabilities of retrieval models on complex real-world retrieval tasks. To address this shortcoming and spur innovation in next-generation retrieval models, we construct a diverse and realistic set of complex retrieval tasks and benchmark a representative set of state-of-the-art retrieval models. We also explore the impact of LLM-based query expansion and rewriting on retrieval quality. Our results show that even the best models struggle to produce high-quality retrieval results, with the highest average nDCG@10 of only 0.346 and R@100 of only 0.587 across all tasks. Although LLM augmentation can help weaker models, the strongest model's performance decreases across all metrics under every rewriting technique.
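Since the headline numbers are nDCG@10 and R@100, a minimal sketch of how these metrics are typically computed for a single query may help readers interpret them. The function names, document IDs, and relevance grades below are hypothetical illustrations, not taken from the paper's evaluation code.

```python
# Minimal sketch (assumed, not from the paper) of per-query nDCG@k and Recall@k
# computed from graded relevance judgments (qrels) and a model's ranked results.
import math

def dcg_at_k(gains, k):
    """Discounted cumulative gain over the top-k gain values."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(ranked_ids, qrels, k=10):
    """nDCG@k: DCG of the ranking divided by DCG of the ideal ranking."""
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_ids]
    ideal = sorted(qrels.values(), reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0

def recall_at_k(ranked_ids, qrels, k=100):
    """Recall@k: fraction of relevant documents found in the top k."""
    relevant = {doc_id for doc_id, grade in qrels.items() if grade > 0}
    if not relevant:
        return 0.0
    return len(relevant & set(ranked_ids[:k])) / len(relevant)

# Hypothetical example: one query with graded judgments and a model ranking.
qrels = {"d1": 3, "d4": 2, "d7": 1}       # relevant docs and their grades
ranking = ["d4", "d9", "d1", "d2", "d7"]  # model's top retrieved documents
print(ndcg_at_k(ranking, qrels, k=10), recall_at_k(ranking, qrels, k=100))
```

In benchmark reporting, these per-query scores are averaged over all queries in a task, and then across tasks, to produce figures like the 0.346 nDCG@10 and 0.587 R@100 cited above.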
Community
This paper proposes a new information retrieval benchmark for complex retrieval tasks, i.e., those with multiple requirements or aspects. As LLMs become more common, users expect retrieval systems to handle complex information needs, but until now it has been unclear how retrieval models perform on a diverse set of complex retrieval tasks. This paper shows that even state-of-the-art retrieval models struggle with complex retrieval tasks, suggesting that more work is needed to produce a powerful and generalizable retrieval model for complex tasks.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DIVER: A Multi-Stage Approach for Reasoning-intensive Information Retrieval (2025)
- FinAgentBench: A Benchmark Dataset for Agentic Retrieval in Financial Question Answering (2025)
- MSRS: Evaluating Multi-Source Retrieval-Augmented Generation (2025)
- A Survey of Long-Document Retrieval in the PLM and LLM Era (2025)
- Test-time Corpus Feedback: From Retrieval to RAG (2025)
- Advancing Retrieval-Augmented Generation for Structured Enterprise and Internal Data (2025)
- Improving Table Retrieval with Question Generation from Partial Tables (2025)