Concept-Guided Fine-Tuning: Steering ViTs away from Spurious Correlations to Improve Robustness
Abstract
A novel fine-tuning framework for Vision Transformers that improves robustness under distribution shifts by aligning model attention with concept-level semantics using automatically generated, label-free concept masks from large language and vision models.
Vision Transformers (ViTs) often degrade under distribution shifts because they rely on spurious correlations, such as background cues, rather than semantically meaningful features. Existing regularization methods typically rely on simple foreground-background masks, which fail to capture the fine-grained semantic concepts that define an object (e.g., "long beak" and "wings" for a "bird"). As a result, these methods provide limited robustness to distribution shifts. To address this limitation, we introduce a novel fine-tuning framework that steers model reasoning toward concept-level semantics. Our approach optimizes the model's internal relevance maps to align with spatially grounded concept masks. These masks are generated automatically, without manual annotation: class-relevant concepts are first proposed using an LLM-based, label-free method and then segmented with a VLM. The fine-tuning objective aligns relevance with these concept regions while simultaneously suppressing focus on spurious background areas. Notably, this process requires only a minimal set of images and uses only half of the dataset's classes. Extensive experiments on five out-of-distribution benchmarks demonstrate that our method improves robustness across multiple ViT-based models. Furthermore, we show that the resulting relevance maps align more closely with semantic object parts, offering a scalable path toward more robust and interpretable vision models. Finally, we confirm that concept-guided masks provide more effective supervision for model robustness than conventional segmentation maps, supporting our central hypothesis.
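To make the fine-tuning objective concrete, below is a minimal PyTorch sketch of a concept-alignment loss of the kind the abstract describes: relevance mass is pushed into the LLM-proposed, VLM-segmented concept regions and out of background areas. All names and shapes here (`concept_alignment_loss`, the mask tensors, the weighting term `lam`) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (an assumption, not the authors' code) of a concept-guided
# alignment loss: steer ViT relevance toward concept regions and away from
# spurious background.
import torch

def concept_alignment_loss(relevance: torch.Tensor,
                           concept_mask: torch.Tensor,
                           background_mask: torch.Tensor,
                           lam: float = 1.0) -> torch.Tensor:
    """
    relevance:       (B, H, W) non-negative relevance maps from the ViT
                     (e.g., attention rollout over patch tokens).
    concept_mask:    (B, H, W) binary union of VLM-segmented concept regions
                     (e.g., "long beak" and "wings" for class "bird").
    background_mask: (B, H, W) binary mask of spurious background areas.
    """
    eps = 1e-8
    total = relevance.flatten(1).sum(-1) + eps  # per-image relevance mass, (B,)
    # Fraction of relevance mass that falls inside concept regions.
    concept_frac = (relevance * concept_mask).flatten(1).sum(-1) / total
    # Fraction of relevance mass that leaks into the background.
    background_frac = (relevance * background_mask).flatten(1).sum(-1) / total
    # Reward concept coverage; penalize background focus.
    return (1.0 - concept_frac).mean() + lam * background_frac.mean()

# Shape-only smoke test with random tensors; a real run would plug in
# relevance maps from a ViT and masks from the LLM+VLM pipeline.
if __name__ == "__main__":
    B, H, W = 4, 14, 14
    relevance = torch.rand(B, H, W)
    concept_mask = (torch.rand(B, H, W) > 0.5).float()
    background_mask = 1.0 - concept_mask
    print(concept_alignment_loss(relevance, concept_mask, background_mask))
```

In a full training loop, a term like this would presumably be added to the standard classification loss, e.g. `F.cross_entropy(logits, labels) + beta * concept_alignment_loss(...)`, where `beta` is a hypothetical weighting hyperparameter.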
Community
Accepted to CVPR26!
Project page: https://yonisgit.github.io/concept-ft/
Github: https://github.com/yonisGit/cft
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SL-CBM: Enhancing Concept Bottleneck Models with Semantic Locality for Better Interpretability (2026)
- DiSa: Saliency-Aware Foreground-Background Disentangled Framework for Open-Vocabulary Semantic Segmentation (2026)
- Unlocking ImageNet's Multi-Object Nature: Automated Large-Scale Multilabel Annotation (2026)
- LoGoSeg: Integrating Local and Global Features for Open-Vocabulary Semantic Segmentation (2026)
- Unify the Views: View-Consistent Prototype Learning for Few-Shot Segmentation (2026)
- Fair Context Learning for Evidence-Balanced Test-Time Adaptation in Vision-Language Models (2026)
- Learning Accurate Segmentation Purely from Self-Supervision (2026)