Paper Title
Distilling Style from Image Pairs for Global Forward and Inverse Tone Mapping
Paper Authors
Paper Abstract
Many image enhancement or editing operations, such as forward and inverse tone mapping or color grading, do not have a unique solution, but instead a range of solutions, each representing a different style. Despite this, existing learning-based methods attempt to learn a unique mapping, disregarding this style. In this work, we show that information about the style can be distilled from collections of image pairs and encoded into a 2- or 3-dimensional vector. This gives us not only an efficient representation but also an interpretable latent space for editing the image style. We represent the global color mapping between a pair of images as a custom normalizing flow, conditioned on a polynomial basis of the pixel color. We show that such a network is more effective than PCA or VAE at encoding image style in low-dimensional space and lets us obtain an accuracy close to 40 dB, which is about a 7-10 dB improvement over state-of-the-art methods.
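To make the core idea concrete, below is a minimal, hedged sketch of a conditional normalizing-flow building block of the kind the abstract describes: an invertible per-pixel color transform whose parameters depend on a polynomial basis of the pixel color and a low-dimensional style vector. This is not the authors' exact architecture; the layer structure, names such as `ConditionalCoupling` and `poly_basis`, and all hyperparameters are assumptions made for illustration.

```python
# Illustrative sketch only (assumed design, not the paper's exact model):
# a RealNVP-style affine coupling layer over per-pixel RGB colors.
# The scale and shift for two channels are predicted from a polynomial
# basis of the remaining channel plus a low-dimensional style vector z,
# so the map is invertible and can be stacked into a normalizing flow.
import torch
import torch.nn as nn

def poly_basis(x: torch.Tensor, degree: int = 3) -> torch.Tensor:
    """Polynomial basis [x, x^2, ..., x^degree] of a per-pixel channel value."""
    return torch.cat([x ** k for k in range(1, degree + 1)], dim=-1)

class ConditionalCoupling(nn.Module):
    """Affine coupling: the first channel passes through unchanged and,
    together with the style vector z, conditions an affine transform of
    the other two channels (hypothetical layer for this sketch)."""
    def __init__(self, style_dim: int = 2, degree: int = 3, hidden: int = 64):
        super().__init__()
        self.degree = degree
        self.net = nn.Sequential(
            nn.Linear(degree + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (log-scale, shift) for the two coupled channels
        )

    def forward(self, rgb: torch.Tensor, z: torch.Tensor):
        # rgb: (N, 3) pixel colors, z: (N, style_dim) style code
        x_cond, x_rest = rgb[:, :1], rgb[:, 1:]
        h = self.net(torch.cat([poly_basis(x_cond, self.degree), z], dim=-1))
        log_s, t = h[:, :2], h[:, 2:]
        y_rest = x_rest * torch.exp(log_s) + t
        log_det = log_s.sum(dim=-1)  # log |det Jacobian| of the coupling
        return torch.cat([x_cond, y_rest], dim=-1), log_det

    def inverse(self, rgb_out: torch.Tensor, z: torch.Tensor):
        y_cond, y_rest = rgb_out[:, :1], rgb_out[:, 1:]
        h = self.net(torch.cat([poly_basis(y_cond, self.degree), z], dim=-1))
        log_s, t = h[:, :2], h[:, 2:]
        x_rest = (y_rest - t) * torch.exp(-log_s)
        return torch.cat([y_cond, x_rest], dim=-1)

# Usage: map a batch of pixel colors with a 2D style code, then invert exactly.
layer = ConditionalCoupling(style_dim=2)
pixels = torch.rand(1024, 3)
z = torch.zeros(1024, 2)
mapped, _ = layer(pixels, z)
recovered = layer.inverse(mapped, z)  # recovers `pixels` up to numerical error
```

Because the transform is invertible by construction, the same fitted mapping can be read in the forward direction (e.g., tone mapping) or the inverse direction (inverse tone mapping), while the 2- or 3-dimensional style vector controls which solution in the family is produced.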