RefAM: Attention Magnets for Zero-Shot Referral Segmentation
Abstract
A new method leverages diffusion transformers' attention scores for referring segmentation without fine-tuning or additional training, improving performance through stop word filtering and attention redistribution.
Most existing approaches to referring segmentation achieve strong performance only through fine-tuning or by composing multiple pre-trained models, often at the cost of additional training and architectural modifications. Meanwhile, large-scale generative diffusion models encode rich semantic information, making them attractive as general-purpose feature extractors. In this work, we introduce a new method that directly exploits the attention scores of diffusion transformers as features for downstream tasks, requiring neither architectural modifications nor additional training. To systematically evaluate these features, we extend benchmarks with vision-language grounding tasks spanning both images and videos. Our key insight is that stop words act as attention magnets: they accumulate surplus attention and can be filtered out to reduce noise. Moreover, we identify global attention sinks (GAS) emerging in deeper layers and show that they can be safely suppressed or redirected onto auxiliary tokens, leading to sharper and more accurate grounding maps. We further propose an attention redistribution strategy, where appended stop words partition background activations into smaller clusters, yielding sharper and more localized heatmaps. Building on these findings, we develop RefAM, a simple training-free grounding framework that combines cross-attention maps, GAS handling, and redistribution. Across zero-shot referring image and video segmentation benchmarks, our approach consistently outperforms prior methods, establishing a new state of the art without fine-tuning or additional components.
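The abstract's core mechanism, dropping stop-word "attention magnets" from the text-to-image cross-attention and renormalizing what remains, can be illustrated with a short sketch. The sketch below uses synthetic attention tensors; the helper name `grounding_map`, the tensor shapes, and the `STOP_WORDS` set are assumptions for illustration, not the authors' implementation (which also handles global attention sinks in deeper layers, omitted here).

```python
# Minimal sketch of stop-word filtering for cross-attention grounding maps,
# assuming per-token attention maps of shape (num_tokens, H, W).
# All names and shapes are hypothetical; this is not the RefAM codebase.
import torch

STOP_WORDS = {"the", "a", "an", "of", "to", "is", "on"}  # assumed list

def grounding_map(cross_attention: torch.Tensor,
                  tokens: list[str],
                  query: str) -> torch.Tensor:
    """Return a heatmap for `query` after filtering stop-word tokens.

    cross_attention: (num_tokens, H, W) attention of each text token
                     over image patches (synthetic in this sketch).
    tokens:          text tokens aligned with the first axis.
    query:           the referring word whose map we want.
    """
    # 1) Stop words act as "attention magnets": they soak up surplus
    #    attention, so drop their maps instead of letting them add noise.
    keep = [i for i, t in enumerate(tokens) if t not in STOP_WORDS]
    attn = cross_attention[keep]
    kept_tokens = [tokens[i] for i in keep]

    # 2) Renormalize over the remaining tokens so attention at each
    #    spatial location still sums to one (a simple redistribution proxy).
    attn = attn / attn.sum(dim=0, keepdim=True).clamp_min(1e-8)

    # 3) Return the map of the query token.
    return attn[kept_tokens.index(query)]

# Toy usage: random "attention" for a 4-token caption over 16x16 patches.
tokens = ["the", "dog", "on", "sofa"]
maps = torch.rand(len(tokens), 16, 16).softmax(dim=0)
heat = grounding_map(maps, tokens, query="dog")
print(heat.shape)  # torch.Size([16, 16])
```

In the paper's full pipeline, the filtered and redistributed maps come from a diffusion transformer's cross-attention layers rather than random tensors, and appended stop words are additionally used to split background activations into smaller clusters.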
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Seg4Diff: Unveiling Open-Vocabulary Segmentation in Text-to-Image Diffusion Transformers (2025)
- SAMDWICH: Moment-aware Video-text Alignment for Referring Video Object Segmentation (2025)
- When and What: Diffusion-Grounded VideoLLM with Entity Aware Segmentation for Long Video Understanding (2025)
- SimToken: A Simple Baseline for Referring Audio-Visual Segmentation (2025)
- Re-purposing SAM into Efficient Visual Projectors for MLLM-Based Referring Image Segmentation (2025)
- Text4Seg++: Advancing Image Segmentation via Generative Language Modeling (2025)
- Latent Expression Generation for Referring Image Segmentation and Grounding (2025)