Paper Title
Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
Paper Authors
Paper Abstract
While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) for training SNNs, and for both we develop consistent excess risk bounds by balancing optimization and generalization via early stopping. Compared to existing analyses of GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key to the improvement is a better estimation of the smallest eigenvalues of the Hessian matrices of the empirical risks and the loss function along the trajectories of GD and SGD, obtained by providing a refined estimation of their iterates.
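To make the setting in the abstract concrete, below is a minimal, purely illustrative sketch of training a shallow (one-hidden-layer) ReLU network on synthetic data with full-batch GD and with SGD, stopping early when a held-out risk stops improving. This is not the paper's algorithm, theory, or experimental setup: the architecture choice (training only the hidden layer with fixed output weights), the data, widths, step sizes, and the stopping rule are all hypothetical choices for illustration.

```python
# Illustrative sketch only: GD vs. SGD on a shallow ReLU network with early stopping.
# All hyperparameters and design choices below are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 10, 512                      # samples, input dim, hidden width (overparameterized)

w_star = rng.normal(size=d)                 # ground-truth linear target for synthetic data
def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_star + 0.1 * rng.normal(size=n)
    return X, y

X, y = make_data(n)                         # training sample
Xv, yv = make_data(n)                       # held-out sample used for early stopping

W0 = rng.normal(size=(m, d)) / np.sqrt(d)   # hidden-layer weights (the trained parameters)
a = rng.choice([-1.0, 1.0], size=m)         # fixed output weights (a common simplification)

def predict(W, X):
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

def risk(W, X, y):
    return 0.5 * np.mean((predict(W, X) - y) ** 2)

def grad(W, X, y):
    H = X @ W.T                                              # pre-activations, shape (n, m)
    resid = np.maximum(H, 0.0) @ a / np.sqrt(m) - y          # residuals, shape (n,)
    # d(risk)/dW via the chain rule through the ReLU indicator (H > 0)
    return ((resid[:, None] * (H > 0)) * a[None, :]).T @ X / (np.sqrt(m) * X.shape[0])

def train(W0, eta, epochs, batch=None, patience=20):
    """Full-batch GD if batch is None, otherwise SGD; early-stop on held-out risk."""
    W, best_W, best_val, since_best = W0.copy(), W0.copy(), np.inf, 0
    for _ in range(epochs):
        if batch is None:                                    # one GD step on the empirical risk
            W -= eta * grad(W, X, y)
        else:                                                # one SGD pass over shuffled minibatches
            for idx in rng.permutation(n).reshape(-1, batch):
                W -= eta * grad(W, X[idx], y[idx])
        val = risk(W, Xv, yv)
        if val < best_val:
            best_val, best_W, since_best = val, W.copy(), 0
        elif (since_best := since_best + 1) >= patience:     # early stopping
            break
    return best_W, best_val

W_gd, val_gd = train(W0, eta=0.5, epochs=500)
W_sgd, val_sgd = train(W0, eta=0.1, epochs=500, batch=20)
print(f"GD  held-out risk: {val_gd:.4f}")
print(f"SGD held-out risk: {val_sgd:.4f}")
```

In this sketch the held-out risk plays the role of the generalization term that early stopping trades off against optimization; the paper's actual bounds are stated via algorithmic stability rather than a validation set.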