Papers
arxiv:2510.19496

CARES: Context-Aware Resolution Selector for VLMs

Published on Oct 22, 2025
Abstract

A context-aware resolution selector module reduces computational costs in vision-language models by predicting minimal sufficient input resolution while maintaining task performance.

AI-generated summary

Large vision-language models (VLMs) commonly process images at native or high resolution to remain effective across tasks. This often inflates visual tokens to 97-99% of total tokens, resulting in high compute and latency, even when low-resolution images would suffice. We introduce CARES, a Context-Aware Resolution Selector, a lightweight preprocessing module that, given an image-query pair, predicts the minimal sufficient input resolution. CARES uses a compact VLM (350M) to extract features and predict when a target pretrained VLM's response converges to its peak ability to answer correctly. Though trained as a discrete classifier over a set of candidate resolutions, CARES interpolates continuous resolutions at inference for fine-grained control. Across five multimodal benchmarks spanning documents and natural images, as well as diverse target VLMs, CARES preserves task performance while reducing compute by up to 80%.
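The abstract describes training a discrete classifier over candidate resolutions and then interpolating a continuous resolution at inference. A minimal sketch of that inference step, with hypothetical candidate resolutions and a probability-weighted interpolation (the paper does not specify the exact resolution set or interpolation rule):

```python
import math

# Hypothetical candidate resolutions; the paper's actual set is not given here.
CANDIDATE_RES = [224, 448, 672, 896, 1120]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def select_resolution(logits, continuous=True):
    """Map classifier logits over CANDIDATE_RES to an input resolution.

    Discrete mode: pick the argmax bin, as in training.
    Continuous mode: one plausible realization of the paper's
    'continuous interpolation at inference' -- the expected
    resolution under the predicted distribution."""
    probs = softmax(logits)
    if not continuous:
        return CANDIDATE_RES[probs.index(max(probs))]
    return sum(p * r for p, r in zip(probs, CANDIDATE_RES))
```

For a confident prediction the two modes agree; for an uncertain one, continuous mode lands between bins, giving finer-grained control over the compute/accuracy trade-off.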

