Paper Title
Theoretical Insight into Batch Normalization: Data Dependant Auto-Tuning of Regularization Rate
Paper Authors
Paper Abstract
Batch normalization is widely used in deep learning to normalize intermediate activations. Deep networks are notoriously difficult to train, mandating careful weight initialization, lower learning rates, etc. Batch Normalization (\textbf{BN}) addresses these issues by normalizing the inputs to the activations to zero mean and unit standard deviation. Making this normalization part of the training process dramatically accelerates the training of very deep networks. A growing line of research examines the exact theoretical explanation behind the success of \textbf{BN}. Most of these theoretical insights attempt to explain the benefits of \textbf{BN} through its influence on optimization, weight scale invariance, and regularization. Despite the undeniable success of \textbf{BN} in accelerating generalization, an analytical link between the effect of \textbf{BN} and the regularization parameter is still missing. This paper brings out the data-dependent auto-tuning of the regularization parameter by \textbf{BN}, with analytical proofs. We pose \textbf{BN} as a constrained optimization imposed on non-\textbf{BN} weights, through which we demonstrate its auto-tuning of the regularization parameter as a function of the data statistics. We also give an analytical proof of its behavior under a noisy input scenario, which reveals a signal-versus-noise tuning of the regularization parameter. We substantiate our claims with empirical results from experiments on the MNIST dataset.
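The normalization step the abstract refers to can be made concrete with a short sketch. The following NumPy snippet is a minimal illustration of batch normalization over a mini-batch, not code from the paper; the function name `batch_norm` and the parameter names `gamma` and `beta` are our own, with `gamma`/`beta` denoting the standard learnable scale and shift applied after normalization.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch per feature, then apply a learnable affine map.

    x:           array of shape (batch_size, features)
    gamma, beta: learnable scale and shift, each of shape (features,)
    eps:         small constant for numerical stability
    """
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit standard deviation
    return gamma * x_hat + beta             # learnable scale and shift

# Usage: a batch of 4 samples with 3 features, deliberately shifted and scaled
x = np.random.randn(4, 3) * 5.0 + 2.0
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0))  # approximately 0 per feature
print(y.std(axis=0))   # approximately 1 per feature
```

Because the mean and variance are computed from the current mini-batch, the transform depends on the data statistics, which is the dependence the paper's analysis ties to an auto-tuned regularization parameter.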