Paper Title

Inner Ensemble Networks: Average Ensemble as an Effective Regularizer

Paper Authors

Abduallah Mohamed, Muhammed Mohaimin Sadiq, Ehab AlBadawy, Mohamed Elhoseiny, Christian Claudel

Paper Abstract

We introduce Inner Ensemble Networks (IENs), which reduce the variance within the neural network itself without an increase in model complexity. IENs utilize ensemble parameters during the training phase to reduce the network variance. In the testing phase, these parameters are removed without a change in the enhanced performance. IENs reduce the variance of an ordinary deep model by a factor of $1/m^{L-1}$, where $m$ is the number of inner ensembles and $L$ is the depth of the model. Also, we show empirically and theoretically that IENs lead to a greater variance reduction in comparison with other similar approaches such as dropout and maxout. Our results show a decrease in error rates of between 1.7\% and 17.3\% in comparison with an ordinary deep model. We also show that IEN was preferred by Neural Architecture Search (NAS) methods over prior approaches. Code is available at https://github.com/abduallahmohamed/inner_ensemble_nets.
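The mechanism the abstract describes, $m$ weight sets per layer whose outputs are averaged during training and then removed at test time, follows from linearity: $\frac{1}{m}\sum_i (W_i x + b_i) = (\frac{1}{m}\sum_i W_i)\,x + \frac{1}{m}\sum_i b_i$, so the $m$ weight sets can be replaced by a single averaged weight set with no change in output. Below is a minimal sketch of that idea in PyTorch; the names InnerEnsembleLinear and collapse are illustrative assumptions, not the authors' API, whose actual implementation is in the linked repository.

import torch
import torch.nn as nn

class InnerEnsembleLinear(nn.Module):
    """Keeps m inner weight sets and averages their outputs during training."""

    def __init__(self, in_features: int, out_features: int, m: int = 4):
        super().__init__()
        # m independent weight sets; the extra parameters exist only in training.
        self.members = nn.ModuleList(
            [nn.Linear(in_features, out_features) for _ in range(m)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Average the m member outputs: the "inner ensemble".
        return torch.stack([f(x) for f in self.members]).mean(dim=0)

    def collapse(self) -> nn.Linear:
        # Since mean_i(W_i x + b_i) = (mean W_i) x + (mean b_i), the m weight
        # sets can be replaced by their average at test time with no change
        # in the layer's output and no extra parameters.
        merged = nn.Linear(self.members[0].in_features, self.members[0].out_features)
        with torch.no_grad():
            merged.weight.copy_(torch.stack([f.weight for f in self.members]).mean(0))
            merged.bias.copy_(torch.stack([f.bias for f in self.members]).mean(0))
        return merged

A quick sanity check of the test-time equivalence:

x = torch.randn(2, 8)
layer = InnerEnsembleLinear(8, 3, m=4)
assert torch.allclose(layer(x), layer.collapse()(x), atol=1e-6)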
