Paper Title


Single Pair Cross-Modality Super Resolution

Authors

Guy Shacht, Sharon Fogel, Dov Danon, Daniel Cohen-Or, Ilya Leizerson

Abstract


Non-visual imaging sensors are widely used in industry for different purposes. These sensors are more expensive than visual (RGB) sensors and usually produce images with lower resolution. To this end, Cross-Modality Super-Resolution methods were introduced, in which a high-resolution RGB image assists in increasing the resolution of the low-resolution modality. However, fusing images from different modalities is not a trivial task; the output must be artifact-free and remain faithful to the characteristics of the target modality. Moreover, the input images are never perfectly aligned, which causes further artifacts during the fusion process. We present CMSR, a deep network for Cross-Modality Super-Resolution, which, unlike previous methods, is designed to deal with weakly aligned images. The network is trained on the two input images only, learns their internal statistics and correlations, and applies them to up-sample the target modality. CMSR contains an internal transformer that is trained on-the-fly together with the up-sampling process itself, without explicit supervision. We show that CMSR succeeds in increasing the resolution of the input image, gaining valuable information from its RGB counterpart, yet in a conservative way, without introducing artifacts or irrelevant details.
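The abstract describes an internal-learning setup: the network is trained only on the single input pair, using the low-resolution modality's own statistics as supervision. As a rough illustration of how such a training example can be constructed (a minimal NumPy sketch of the general zero-shot SR paradigm, not the authors' CMSR implementation; all array sizes and the average-pool degradation model are assumptions for illustration):

```python
import numpy as np

def downsample(img, factor=2):
    """Average-pool downsampling of a 2-D image.

    A simple stand-in for the degradation model; the actual
    degradation used by CMSR may differ.
    """
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Single-pair setup (shapes are illustrative): a low-resolution target
# modality and its weakly aligned high-resolution RGB counterpart.
lr_modality = np.random.rand(32, 32)   # e.g. a thermal image
hr_rgb = np.random.rand(64, 64, 3)     # the RGB guide, 2x the resolution

# Internal-learning training example: shrink both inputs by the SR factor
# so that the *original* low-resolution image can serve as ground truth.
train_input = downsample(lr_modality)                 # 16x16 network input
train_guide = np.stack(                               # 32x32x3 RGB guide at
    [downsample(hr_rgb[..., c]) for c in range(3)],   # the output scale
    axis=-1,
)
ground_truth = lr_modality                            # 32x32 target

# At test time, the trained network would instead receive
# (lr_modality, hr_rgb) and produce a 64x64 super-resolved output.
```

The key point the sketch captures is that no external dataset is needed: every training pair is derived from the two input images themselves.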
