Paper Title
Sinkhorn Natural Gradient for Generative Models
Paper Authors
Abstract
We consider the problem of minimizing a functional over a parametric family of probability measures, where the parameterization is characterized via a push-forward structure. An important application of this problem is in training generative adversarial networks. In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence. We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately in complexity that scales logarithmically with respect to the desired accuracy. This is in sharp contrast to existing natural gradient methods that can only be carried out approximately. Moreover, in practical applications where only Monte-Carlo-type integration is available, we design an empirical estimator for the SIM and provide a stability analysis. In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
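The abstract builds on the Sinkhorn divergence, i.e., the debiased entropic optimal-transport cost computed by Sinkhorn matrix scaling. As background, here is a minimal numpy sketch of that computation for uniform point clouds; the function names, the regularization `eps`, and the iteration count are illustrative choices, not the paper's SiNG algorithm (which additionally uses the Sinkhorn information matrix to precondition parameter updates):

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.05, n_iters=500):
    """Entropic OT cost <P, C> between discrete measures with weights a, b
    and cost matrix C, via plain Sinkhorn matrix scaling.
    Note: for very small eps a log-domain implementation is needed to
    avoid underflow; this plain version is for illustration only."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                 # enforce column marginals
        u = a / (K @ v)                   # enforce row marginals
    P = u[:, None] * K * v[None, :]       # approximate transport plan
    return float(np.sum(P * C))

def sinkhorn_divergence(x, y, eps=0.05):
    """Debiased Sinkhorn divergence between uniform 1-D point clouds:
    OT_eps(x, y) - (OT_eps(x, x) + OT_eps(y, y)) / 2."""
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    cost = lambda p, q: (p[:, None] - q[None, :]) ** 2
    return (sinkhorn_cost(a, b, cost(x, y), eps)
            - 0.5 * sinkhorn_cost(a, a, cost(x, x), eps)
            - 0.5 * sinkhorn_cost(b, b, cost(y, y), eps))

# Example: two shifted point clouds on the line
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5, 2.5])
div = sinkhorn_divergence(x, y)
```

The debiasing terms (the two self-costs) are what make the divergence vanish when the two measures coincide, which is the property that lets it play the role of a metric-like objective in the probability space mentioned in the abstract.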