Paper Title

Toward Fine-grained Facial Expression Manipulation

Authors

Jun Ling, Han Xue, Li Song, Shuhui Yang, Rong Xie, Xiao Gu

Abstract

Facial expression manipulation aims at editing a facial expression according to a given condition. Previous methods edit an input image under the guidance of a discrete emotion label or an absolute condition (e.g., facial action units) to produce the desired expression. However, these methods either suffer from changing condition-irrelevant regions or are inefficient for fine-grained editing. In this study, we take both objectives into consideration and propose a novel method. First, we replace continuous absolute conditions with relative conditions, specifically relative action units. With relative action units, the generator learns to transform only the regions of interest specified by non-zero-valued relative AUs. Second, our generator is built on U-Net but strengthened by a Multi-Scale Feature Fusion (MSF) mechanism for high-quality expression editing. Extensive quantitative and qualitative experiments demonstrate the improvements of our proposed approach over state-of-the-art expression editing methods. Code is available at \url{https://github.com/junleen/Expression-manipulator}.
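
The relative-AU conditioning described in the abstract can be illustrated with a small PyTorch sketch: relative AUs are the difference between target and source AU intensity vectors, and the generator receives them as an extra condition, so zero entries correspond to regions that should stay unchanged. The sketch below is an assumption-based illustration, not the authors' implementation; `relative_aus`, `ConditionalGenerator`, and the choice of 17 AUs are all hypothetical.

```python
import torch

def relative_aus(au_src: torch.Tensor, au_tgt: torch.Tensor) -> torch.Tensor:
    """Relative condition: non-zero entries mark the AUs whose intensity should change."""
    return au_tgt - au_src

class ConditionalGenerator(torch.nn.Module):
    """Toy conditional generator (hypothetical design): the relative-AU vector is
    broadcast to a spatial map and concatenated with the image as extra channels."""
    def __init__(self, num_aus: int = 17):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3 + num_aus, 64, kernel_size=7, padding=3),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(64, 3, kernel_size=7, padding=3),
            torch.nn.Tanh(),
        )

    def forward(self, img: torch.Tensor, rel_au: torch.Tensor) -> torch.Tensor:
        b, _, h, w = img.shape
        au_map = rel_au.view(b, -1, 1, 1).expand(b, rel_au.size(1), h, w)
        return self.net(torch.cat([img, au_map], dim=1))

# Usage: raise the intensity of a single AU and leave the rest at zero,
# so only the corresponding facial region is asked to change.
img = torch.randn(1, 3, 128, 128)   # source face (normalized)
au_src = torch.rand(1, 17)          # AU intensities estimated on the source face
au_tgt = au_src.clone()
au_tgt[0, 4] += 0.8                 # e.g., increase one action unit
edited = ConditionalGenerator()(img, relative_aus(au_src, au_tgt))
```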

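The abstract also describes the generator as a U-Net strengthened by a Multi-Scale Feature Fusion (MSF) mechanism, without spelling out the fusion design here. One plausible reading, sketched below under that assumption, resizes encoder features from several scales to a common resolution and mixes them with a 1x1 convolution before they reach the decoder; the module name `MultiScaleFusion` and all dimensions are hypothetical, not the paper's exact block.

```python
import torch
import torch.nn.functional as F

class MultiScaleFusion(torch.nn.Module):
    """Hypothetical multi-scale feature fusion: resize encoder features from several
    scales to the decoder resolution, concatenate, and mix with a 1x1 convolution."""
    def __init__(self, enc_channels, out_channels):
        super().__init__()
        self.mix = torch.nn.Conv2d(sum(enc_channels), out_channels, kernel_size=1)

    def forward(self, enc_feats, target_hw):
        resized = [
            F.interpolate(f, size=target_hw, mode="bilinear", align_corners=False)
            for f in enc_feats
        ]
        return self.mix(torch.cat(resized, dim=1))

# Usage: fuse three encoder scales into a 64x64 decoder-resolution feature map.
enc_feats = [torch.randn(1, c, s, s) for c, s in [(64, 128), (128, 64), (256, 32)]]
msf = MultiScaleFusion(enc_channels=[64, 128, 256], out_channels=128)
fused = msf(enc_feats, target_hw=(64, 64))   # -> shape (1, 128, 64, 64)
```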