Abstract
Image Difference Captioning (IDC) methods have advanced in highlighting subtle differences between similar images, but their performance is often constrained by limited training data. Using Large Multimodal Models (LMMs) to describe changes in image pairs mitigates this data scarcity but introduces noise: the generated change descriptions are often coarse summaries that obscure fine details and make noise hard to detect. In this work, we improve IDC with a noise-robust approach at both the data and model levels. During data curation, we use LMMs with structured prompts to generate fine-grained change descriptions. We then propose a Noise-Aware Modeling and Captioning (NAMC) model with three modules: Noise Identification and Masking (NIM) to reduce noisy correspondences, Masked Image Reconstruction (MIR) to correct over-masking errors, and Fine-grained Description Generation (FDG) to produce coherent change descriptions. Experiments on four IDC benchmarks show that NAMC, pre-trained on our large-scale data, outperforms streamlined architectures and achieves performance competitive with LLM-finetuned methods while offering better inference efficiency.
| Original language | English |
|---|---|
| Title of host publication | Findings of the Association for Computational Linguistics: EMNLP 2025 |
| Editors | Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng |
| Place of Publication | Kerrville |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 10125-10145 |
| ISBN (Print) | 979-8-89176-335-7 |
| Publication status | Published - 2025 |
| MoE publication type | A4 Article in a conference publication |
| Event | 2025 Conference on Empirical Methods in Natural Language Processing, Suzhou, China, 4 Nov 2025 → 9 Nov 2025 |
Conference
| Conference | 2025 Conference on Empirical Methods in Natural Language Processing |
|---|---|
| Country/Territory | China |
| City | Suzhou |
| Period | 4/11/25 → 9/11/25 |