MLLMs often miss small details and spatial relations in cluttered scenes, leading to errors in fine-grained perceptual grounding. We introduce AttWarp, a lightweight test-time method that allocates more resolution to query-relevant content while compressing less informative areas. The approach uses an MLLM's own cross-modal attention to perform a rectilinear warp of the input image, reallocating spatial resolution toward the regions the model deems important, without changing model weights or architecture. This attention-guided warping preserves all of the original image information but redistributes it non-uniformly: small objects and subtle relationships become easier for the same model to read, while the global layout remains intact. AttWarp consistently improves accuracy, strengthens compositional reasoning, and reduces hallucinations.
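To make the core operation concrete, below is a minimal sketch of attention-guided rectilinear warping in Python (NumPy + Pillow). It is an illustration, not the authors' released implementation: the marginal-CDF construction and the uniform mixing weight `floor` are our assumptions. Mixing in a uniform floor guarantees every row and column keeps nonzero width, which matches the property that the warp compresses, but never discards, image content.

```python
import numpy as np
from PIL import Image

def rectilinear_warp(image, attn, floor=0.3):
    """Warp `image` so rows/columns with high attention mass occupy
    more output pixels. `attn` is a (h, w) non-negative saliency map
    (e.g., cross-modal attention averaged over heads); it is upsampled
    to the image resolution here.

    Sketch only: the marginal-CDF mapping and `floor` are assumptions,
    not the paper's exact formulation.
    """
    img = np.asarray(image)
    H, W = img.shape[:2]

    # Upsample the attention map to image resolution.
    attn_img = np.asarray(
        Image.fromarray((attn / attn.max() * 255).astype(np.uint8)).resize((W, H))
    ).astype(np.float64)

    # Marginal density over rows (or columns), mixed with a uniform
    # floor so low-attention areas are compressed, never removed.
    def cdf(marginal):
        d = marginal / marginal.sum()
        d = (1 - floor) * d + floor / d.size
        F = np.cumsum(d)
        return np.concatenate([[0.0], F / F[-1]])  # monotone CDF on [0, 1]

    Fy = cdf(attn_img.sum(axis=1))  # row mass    -> vertical CDF
    Fx = cdf(attn_img.sum(axis=0))  # column mass -> horizontal CDF

    # Each output pixel maps back to a source coordinate via the inverse
    # CDF: where F is steep (high attention), a narrow source band covers
    # a wide output interval, i.e., it is magnified.
    ys = np.interp(np.linspace(0, 1, H), Fy, np.arange(H + 1))
    xs = np.interp(np.linspace(0, 1, W), Fx, np.arange(W + 1))
    yi = np.clip(ys.astype(int), 0, H - 1)
    xi = np.clip(xs.astype(int), 0, W - 1)
    return Image.fromarray(img[yi[:, None], xi[None, :]])
```

Because the warp is rectilinear, whole rows and columns stretch or shrink together, so straight lines stay straight and the global layout remains readable after warping.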
Qualitative examples demonstrating how AttWarp improves visual grounding compared to baseline MLLMs and other visual prompting methods.
AttWarp (Warping) achieves superior spatial grounding compared to FGVP (Green Masking), SoM (Visual Grounding), API (Alpha Blending), and ViCrop (Cropping). Our method preserves global context while reallocating resolution to query-relevant regions.
Examples showing how attention-guided warping helps MLLMs correctly answer visual questions. The warped grid visualization shows how AttWarp redistributes spatial resolution toward query-relevant regions, enabling accurate answers where base models fail.
Each example shows how our method adaptively warps images based on the query, highlighting relevant regions while preserving spatial relationships.
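For completeness, here is one plausible way to obtain the query-conditioned attention map that drives the warp, sketched against a Hugging Face LLaVA-style checkpoint. The checkpoint id, prompt template, last-layer head-averaged aggregation, and the assumption that the processor expands the `<image>` placeholder into one token per visual patch are all ours; the paper's exact attention-aggregation recipe may differ.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # any HF LLaVA-style checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto",
    attn_implementation="eager",  # so attention weights are returned
)

image = Image.open("example.jpg").convert("RGB")  # placeholder path
prompt = "USER: <image>\nWhat color is the mug on the left? ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# Recent `transformers` processors expand <image> into one token per
# visual patch, so image tokens can be located directly in input_ids.
is_img = inputs["input_ids"][0] == model.config.image_token_index

# Average heads in the last layer, then average the attention that all
# text-token queries pay to each image-token key (one choice of many).
attn = out.attentions[-1][0].mean(0)          # (seq, seq)
per_patch = attn[~is_img][:, is_img].mean(0)  # (num_patches,)

grid = int(per_patch.numel() ** 0.5)          # e.g. 24 for a 336px CLIP ViT
attn_map = per_patch.reshape(grid, grid).float().cpu().numpy()

warped = rectilinear_warp(image, attn_map)    # sketch from above
```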
Table 1: Main results on TextVQA, GQA, MMMU, POPE, and DocVQA (accuracy, %).
| # | Methods | Key Technique | TextVQA | GQA | MMMU | POPE | DocVQA |
|---|---|---|---|---|---|---|---|
| | **LLaVA** (MLP vision-language connector & open data) | | | | | | |
| 1 | Base MLLM | – | 49.3 | 60.5 | 36.9 | 85.3 | 18.1 |
| 2 | FGVP-mask | Green mask overlay | 39.4 | 59.2 | 36.1 | 85.3 | 19.0 |
| 3 | FGVP-blur | Blur background | 33.9 | 59.5 | 35.0 | 83.1 | 18.6 |
| 4 | SoM | Grounded segments | 18.8 | 54.5 | 35.6 | 78.5 | 15.8 |
| 5 | API | Alpha channel fade | 49.9 | 60.6 | 36.9 | 85.9 | 17.4 |
| 6 | ViCrop | Add object crop | 56.3 | 60.9 | 37.2 | 87.0 | 22.5 |
| 7 | AttWarp | Rectilinear warping | 58.1 | 63.7 | 40.4 | 87.5 | 25.5 |
| 8 | AttWarp-Distill | Efficient inference | 57.2 | 62.7 | 38.8 | 87.4 | 22.4 |
| 9 | AttWarp-Chain | Adaptive Chains | 60.3 | 64.4 | 41.6 | 88.2 | 27.6 |
| | **Qwen** (Cross-attention VL adapter & partially closed data) | | | | | | |
| 10 | Base MLLM | – | 81.0 | 62.4 | 47.3 | 86.1 | 77.3 |
| 11 | FGVP-mask | Green mask overlay | 77.3 | 55.8 | 46.0 | 84.4 | 56.6 |
| 12 | FGVP-blur | Blur background | 72.3 | 55.8 | 46.5 | 81.3 | 38.6 |
| 13 | SoM | Grounded segments | 61.5 | 47.8 | 45.1 | 75.8 | 57.4 |
| 14 | API | Alpha channel fade | 81.6 | 61.1 | 47.4 | 85.8 | 68.4 |
| 15 | ViCrop | Add object crop | 83.8 | 60.6 | 47.1 | 86.7 | 82.5 |
| 16 | AttWarp | Rectilinear warping | 84.7 | 64.0 | 50.4 | 87.4 | 84.1 |
| 17 | AttWarp-Distill | Efficient inference | 84.1 | 63.1 | 48.9 | 87.2 | 81.8 |
| 18 | AttWarp-Chain | Adaptive Chains | 85.9 | 64.8 | 51.0 | 88.0 | 85.3 |
@article{dalal2025constructive,
title={Constructive Distortion: Improving MLLMs with Attention-Guided Image Warping},
author={Dalal, Dwip and Vashishtha, Gautam and Mishra, Utkarsh and Kim, Jeonghwan and Kanda, Madhav and Ha, Hyeonjeong and Lazebnik, Svetlana and Ji, Heng and Jain, Unnat},
journal={arXiv preprint arXiv:2510.09741},
year={2025}
}