Paper Title

Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction

Paper Authors

Tong He, John Collomosse, Hailin Jin, Stefano Soatto

Paper Abstract

We propose Geo-PIFu, a method to recover a 3D mesh from a monocular color image of a clothed person. Our method is based on a deep implicit function-based representation to learn latent voxel features using a structure-aware 3D U-Net, to constrain the model in two ways: first, to resolve feature ambiguities in query point encoding, second, to serve as a coarse human shape proxy to regularize the high-resolution mesh and encourage global shape regularity. We show that, by both encoding query points and constraining global shape using latent voxel features, the reconstruction we obtain for clothed human meshes exhibits less shape distortion and improved surface details compared to competing methods. We evaluate Geo-PIFu on a recent human mesh public dataset that is $10 \times$ larger than the private commercial dataset used in PIFu and previous derivative work. On average, we exceed the state of the art by $42.7\%$ reduction in Chamfer and Point-to-Surface Distances, and $19.4\%$ reduction in normal estimation errors.
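
The abstract's core idea, fusing pixel-aligned 2D image features with geometry-aligned latent voxel features at each 3D query point before classifying it as inside or outside the surface, can be sketched in a few lines. Below is a minimal PyTorch illustration of that query-point encoding. It is not the authors' implementation: the module names, feature dimensions, and the concatenation-based fusion are all assumptions made for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeoPIFuSketch(nn.Module):
    """Minimal sketch of a geometry- and pixel-aligned implicit function.

    All architecture choices here (encoders, feature dims, MLP widths)
    are illustrative assumptions, not the paper's actual configuration.
    """

    def __init__(self, feat2d_dim=64, feat3d_dim=32):
        super().__init__()
        # Stand-ins for the image encoder and the structure-aware 3D U-Net
        # that produces the latent voxel features.
        self.image_encoder = nn.Conv2d(3, feat2d_dim, 3, padding=1)
        self.mlp = nn.Sequential(
            nn.Linear(feat2d_dim + feat3d_dim + 1, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),  # inside/outside occupancy logit per point
        )

    def forward(self, image, latent_voxels, query_xyz, query_uv, query_z):
        # image:         (B, 3, H, W) input color image
        # latent_voxels: (B, C3, D, H', W') features from the 3D U-Net
        # query_xyz:     (B, N, 3) query points, normalized to [-1, 1]^3
        # query_uv:      (B, N, 2) their image-plane projections in [-1, 1]^2
        # query_z:       (B, N, 1) depth of each query point
        feat2d_map = self.image_encoder(image)

        # Pixel-aligned features: bilinear sampling at projected locations.
        uv = query_uv.unsqueeze(2)                          # (B, N, 1, 2)
        f2d = F.grid_sample(feat2d_map, uv, align_corners=True)
        f2d = f2d.squeeze(-1).transpose(1, 2)               # (B, N, C2)

        # Geometry-aligned features: trilinear sampling in the voxel grid,
        # which resolves the depth ambiguity of purely pixel-aligned
        # features (all points on one camera ray share the same uv).
        xyz = query_xyz.unsqueeze(2).unsqueeze(3)           # (B, N, 1, 1, 3)
        f3d = F.grid_sample(latent_voxels, xyz, align_corners=True)
        f3d = f3d.squeeze(-1).squeeze(-1).transpose(1, 2)   # (B, N, C3)

        # Fuse both alignments with depth and classify inside/outside.
        return self.mlp(torch.cat([f2d, f3d, query_z], dim=-1))
```

At inference, such a model is typically evaluated on a dense grid of query points and the mesh is extracted with marching cubes, as in PIFu-style pipelines.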
