Title
Gaussian Gated Linear Networks
Authors
Abstract
We propose the Gaussian Gated Linear Network (G-GLN), an extension to the recently proposed GLN family of deep neural networks. Instead of using backpropagation to learn features, GLNs have a distributed and local credit assignment mechanism based on optimizing a convex objective. This gives rise to many desirable properties including universality, data-efficient online learning, trivial interpretability and robustness to catastrophic forgetting. We extend the GLN framework from classification to multiple regression and density modelling by generalizing geometric mixing to a product of Gaussian densities. The G-GLN achieves competitive or state-of-the-art performance on several univariate and multivariate regression benchmarks, and we demonstrate its applicability to practical tasks including online contextual bandits and density estimation via denoising.
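The key generalization named in the abstract, geometric mixing extended to a weighted product of Gaussian densities, has a well-known closed form: the product of Gaussians raised to weights is itself (proportional to) a Gaussian whose precision is the weighted sum of the input precisions. A minimal NumPy sketch of that closed form follows; the function name, the equal weights, and the example inputs are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_product(mus, sigmas2, weights):
    """Weighted product of 1-D Gaussian densities.

    Each factor N(mu_i, sigma_i^2) raised to weight w_i contributes
    precision w_i / sigma_i^2; the result is again Gaussian.
    """
    precisions = weights / sigmas2            # per-factor contribution w_i / sigma_i^2
    post_precision = precisions.sum()         # precisions add under the product
    post_mu = (precisions * mus).sum() / post_precision  # precision-weighted mean
    return post_mu, 1.0 / post_precision      # mean and variance of the product

# Two unit-variance Gaussians at 0 and 2, combined with equal weights 0.5:
mu, var = gaussian_product(np.array([0.0, 2.0]),
                           np.array([1.0, 1.0]),
                           np.array([0.5, 0.5]))
# mean 1.0, variance 1.0
```

In a G-GLN-style network, each neuron would emit the mean and variance of such a product over its inputs, with the weights chosen by a gating mechanism; the convexity of the underlying objective is what permits local, backpropagation-free credit assignment.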