Title

Multi-feature super-resolution network for cloth wrinkle synthesis

Authors

Lan Chen, Juntao Ye, Xiaopeng Zhang

Abstract


Existing physical cloth simulators suffer from expensive computation and from the difficulty of tuning mechanical parameters to obtain desired wrinkling behaviors. Data-driven methods provide an alternative solution: they typically synthesize cloth animation at a much lower computational cost and create wrinkling effects that closely resemble those in the controllable training data. In this paper we propose a deep learning based method for synthesizing cloth animation with high resolution meshes. To do this, we first create a dataset for training: a pair of low and high resolution meshes is simulated and their motions are synchronized, so that the two meshes exhibit similar large-scale deformation but different small wrinkles. Each simulated mesh pair is then converted into a pair of low and high resolution "images" (2D arrays of samples), where each sample can be interpreted as any of three features: displacement, normal, and velocity. With these image pairs, we design a multi-feature super-resolution (MFSR) network that jointly trains an upsampling synthesizer for the three features. The MFSR architecture consists of two key components: a sharing module that takes multiple features as input and learns low-level representations for the corresponding super-resolution tasks simultaneously, and task-specific modules focusing on the various high-level semantics. Frame-to-frame consistency is well maintained thanks to the proposed kinematics-based loss function. Our method achieves realistic results at high frame rates: 12-14 times faster than traditional physical simulation. We demonstrate the performance of our method on various experimental scenes, including a dressed character with sophisticated collisions.
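
As a rough illustration of the two-component architecture the abstract describes, the PyTorch sketch below passes the three low-resolution feature images (displacement, normal, velocity) through one shared convolutional module and then through per-feature upsampling heads. The layer counts, channel widths, 4x upsampling factor, and all module and parameter names (SharedModule, TaskHead, MFSR, feat_ch, scale) are illustrative assumptions, not the authors' exact network or loss.

```python
# Minimal sketch of a multi-feature super-resolution layout, assuming a
# shared low-level trunk plus task-specific upsampling heads.
import torch
import torch.nn as nn


class SharedModule(nn.Module):
    """Learns low-level representations common to all three features."""
    def __init__(self, in_ch=9, feat_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class TaskHead(nn.Module):
    """Task-specific module: upsamples shared features for one output feature."""
    def __init__(self, feat_ch=64, out_ch=3, scale=4):
        super().__init__()
        self.up = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),                  # spatial x4, channels /16
            nn.Conv2d(feat_ch, out_ch, 3, padding=1),
        )

    def forward(self, f):
        return self.up(f)


class MFSR(nn.Module):
    """Shared trunk over concatenated features, then one head per feature."""
    def __init__(self, feat_ch=64, scale=4):
        super().__init__()
        self.shared = SharedModule(in_ch=9, feat_ch=feat_ch)  # 3 features x 3 channels
        self.heads = nn.ModuleDict({
            name: TaskHead(feat_ch, 3, scale)
            for name in ("displacement", "normal", "velocity")
        })

    def forward(self, disp, normal, vel):  # each: (B, 3, H, W)
        f = self.shared(torch.cat([disp, normal, vel], dim=1))
        return {name: head(f) for name, head in self.heads.items()}


# Example: upsample a 64x64 low-resolution sample grid to 256x256.
model = MFSR()
lr = torch.randn(1, 3, 64, 64)
hr = model(lr, lr.clone(), lr.clone())
print({k: tuple(v.shape) for k, v in hr.items()})  # each (1, 3, 256, 256)
```

Joint training of the three heads over a shared trunk is one straightforward reading of "jointly train an upsampling synthesizer for the three features"; the paper's kinematics-based loss for frame-to-frame consistency would be added on top of a setup like this.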
