Abstract
Pan-sharpening has significant applications in remote sensing image processing. By fusing high-resolution panchromatic (PAN) images with low-resolution multispectral (MS) images, it generates high-spatial-resolution images with multispectral information (HRMS). Most deep learning methods, while excelling at extracting single-modality image features, face limitations in capturing the global joint distribution of cross-modality images. To address this limitation, this paper introduces a novel pan-sharpening model, the Dual-Branch Attention-Guided Diffusion Network (DADiff), which incorporates diffusion models capable of effectively reconstructing the latent distribution of images. DADiff consists of a Diffusion Branch and an Attention-Guided Branch. The diffusion branch captures the global joint features of PAN and MS images during the denoising process, constructing a cross-modal distribution for HRMS images. Meanwhile, the attention-guided branch enhances the high-frequency details and local features of PAN and MS images through a multi-scale convolutional dense connection module and an improved attention mechanism. By combining the diffusion model's global modeling capability with the multi-scale attention mechanism's strength in capturing detail, the network significantly enhances the spatial and spectral fidelity of the generated HRMS images. Extensive experimental results validate the effectiveness of the proposed modules. We conducted experiments on the WorldViewII, QuickBird, and Maryland datasets, and the results confirm the superiority of DADiff over state-of-the-art methods.
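To illustrate the diffusion-branch idea described above, the sketch below shows a DDPM-style forward noising of an HRMS target conditioned on a stacked PAN/MS input. This is a minimal toy example, not the paper's implementation: the number of steps, the linear noise schedule, the image shapes, and the concatenation-based conditioning are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 10                                  # toy number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)          # cumulative product used by DDPM

def q_sample(x0, t, noise):
    """Forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

# Toy "HRMS" target and a cross-modal condition built from PAN plus
# upsampled MS channels (hypothetical shapes: 4 bands, 8x8 pixels).
hrms = rng.random((4, 8, 8))
pan = rng.random((1, 8, 8))
ms_up = rng.random((4, 8, 8))
condition = np.concatenate([pan, ms_up], axis=0)  # joint conditioning input

eps = rng.standard_normal(hrms.shape)
x_T = q_sample(hrms, T - 1, eps)

# A real denoiser would predict eps from (x_t, t, condition); here we only
# verify that inverting the forward equation with the true eps recovers x0.
x0_rec = (x_T - np.sqrt(1.0 - alpha_bar[T - 1]) * eps) / np.sqrt(alpha_bar[T - 1])
print(np.allclose(x0_rec, hrms))  # → True
```

In DADiff's setting, the learned denoising network would take the place of the exact inversion above, predicting the noise from the noisy HRMS estimate and the PAN/MS condition at each step.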
| Field | Value |
|---|---|
| Article number | 103076 |
| Journal | Information Fusion |
| Volume | 120 |
| DOIs | |
| Publication status | Published - August 2025 |