Paper Title

PGMAN: An Unsupervised Generative Multi-adversarial Network for Pan-sharpening

Authors

Huanyu Zhou, Qingjie Liu, Yunhong Wang

Abstract

Pan-sharpening aims at fusing a low-resolution (LR) multi-spectral (MS) image and a high-resolution (HR) panchromatic (PAN) image acquired by a satellite to generate an HR MS image. Many deep learning based methods have been developed in the past few years. However, since there are no real HR MS images available as references for learning, almost all of the existing methods down-sample the MS and PAN images and regard the original MS images as targets, forming a supervised setting for training. These methods may perform well on the down-scaled images; however, they generalize poorly to full-resolution images. To overcome this problem, we design an unsupervised framework that is able to learn directly from the full-resolution images without any preprocessing. The model is built on a novel generative multi-adversarial network. We use a two-stream generator to extract modality-specific features from the PAN and MS images, respectively, and develop a dual discriminator to preserve the spectral and spatial information of the inputs when performing fusion. Furthermore, a novel loss function is introduced to facilitate training under the unsupervised setting. Experiments and comparisons with other state-of-the-art methods on GaoFen-2 and QuickBird images demonstrate that the proposed method obtains much better fusion results on full-resolution images.
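To make the supervised setup the abstract criticizes concrete, here is a minimal NumPy sketch of how such methods typically construct reduced-resolution training pairs (Wald-style protocol): the MS and PAN inputs are down-sampled by the sensor resolution ratio, and the original MS image serves as the reference target. The function name and the block-averaging down-sampler are illustrative assumptions, not the paper's implementation; real pipelines usually use a sensor MTF-matched low-pass filter.

```python
import numpy as np

def build_reduced_resolution_pair(ms, pan, ratio=4):
    """Simulate the supervised training setting described in the abstract:
    down-sample MS and PAN by `ratio`, and treat the original MS image
    as the HR reference target (hypothetical helper, for illustration).

    ms:  (H, W, C)   low-resolution multi-spectral image
    pan: (rH, rW)    high-resolution panchromatic image
    """
    def downsample(img, r):
        # Naive block-averaging down-sampler; real methods would apply
        # an MTF-matched filter before decimation.
        h = img.shape[0] // r * r
        w = img.shape[1] // r * r
        img = img[:h, :w]
        blocks = img.reshape((h // r, r, w // r, r) + img.shape[2:])
        return blocks.mean(axis=(1, 3))

    lr_ms = downsample(ms, ratio)    # network input: reduced MS
    lr_pan = downsample(pan, ratio)  # network input: reduced PAN
    target = ms                      # original MS acts as the reference
    return lr_ms, lr_pan, target
```

A model trained on such pairs only ever sees down-scaled statistics, which is exactly why the abstract argues it generalizes poorly when applied to full-resolution inputs; PGMAN instead trains directly on the full-resolution MS/PAN images without this preprocessing.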
