Paper Title

NeuralScale: Efficient Scaling of Neurons for Resource-Constrained Deep Neural Networks

Paper Authors

Eugene Lee, Chen-Yi Lee

Paper Abstract

Deciding the amount of neurons during the design of a deep neural network to maximize performance is not intuitive. In this work, we attempt to search for the neuron (filter) configuration of a fixed network architecture that maximizes accuracy. Using iterative pruning methods as a proxy, we parameterize the change of the neuron (filter) number of each layer with respect to the change in parameters, allowing us to efficiently scale an architecture across arbitrary sizes. We also introduce architecture descent which iteratively refines the parameterized function used for model scaling. The combination of both proposed methods is coined as NeuralScale. To prove the efficiency of NeuralScale in terms of parameters, we show empirical simulations on VGG11, MobileNetV2 and ResNet18 using CIFAR10, CIFAR100 and TinyImageNet as benchmark datasets. Our results show an increase in accuracy of 3.04%, 8.56% and 3.41% for VGG11, MobileNetV2 and ResNet18 on CIFAR10, CIFAR100 and TinyImageNet respectively under a parameter-constrained setting (output neurons (filters) of default configuration with scaling factor of 0.25).
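
To make the idea concrete, below is a minimal illustrative sketch (not the authors' released implementation) of how per-layer filter counts could be parameterized from iterative-pruning checkpoints and then scaled to a target parameter budget. It assumes a per-layer power-law form filters_l(tau) = alpha_l * tau**beta_l fitted in log space, a plain convolutional stack for parameter counting, and made-up checkpoint data; all function names and numbers are hypothetical.

# Illustrative sketch only: the power-law parameterization, helper names and
# data below are assumptions for exposition, not NeuralScale's actual code.
import numpy as np

def fit_power_law(taus, filters):
    """Fit filters ~= alpha * tau**beta per layer by least squares in log space.

    taus:    1-D array of scaling checkpoints (e.g. surviving-parameter ratios
             recorded while iteratively pruning).
    filters: 2-D array, shape (num_checkpoints, num_layers), filter counts
             observed at each checkpoint.
    Returns (alpha, beta), each of shape (num_layers,).
    """
    log_tau = np.log(taus)[:, None]                   # (C, 1)
    log_f = np.log(filters)                           # (C, L)
    # Simple linear regression per layer: log f = log alpha + beta * log tau
    beta = ((log_tau - log_tau.mean()) * (log_f - log_f.mean(axis=0))).sum(axis=0) \
           / ((log_tau - log_tau.mean()) ** 2).sum()
    alpha = np.exp(log_f.mean(axis=0) - beta * log_tau.mean())
    return alpha, beta

def scale_architecture(alpha, beta, kernel_area, in_channels, target_params):
    """Bisect the scaling variable tau so the predicted per-layer filter
    counts meet a target parameter budget (conv weights only, no bias)."""
    def filters_at(tau):
        return np.maximum(1, np.round(alpha * tau ** beta)).astype(int)

    def param_count(f):
        # Plain conv stack: k*k * c_in * c_out parameters per layer, where each
        # layer's output channel count feeds the next layer's input.
        c_in = np.concatenate(([in_channels], f[:-1]))
        return int((kernel_area * c_in * f).sum())

    lo, hi = 1e-3, 1e3
    for _ in range(60):                               # bisection on tau
        mid = (lo + hi) / 2
        if param_count(filters_at(mid)) < target_params:
            lo = mid
        else:
            hi = mid
    return filters_at((lo + hi) / 2)

# Hypothetical pruning checkpoints for a 4-layer conv net (values are made up).
taus = np.array([1.0, 0.75, 0.5, 0.25])
filters = np.array([[64, 128, 256, 256],
                    [52, 110, 200, 180],
                    [40,  90, 150, 120],
                    [28,  64,  96,  70]])
alpha, beta = fit_power_law(taus, filters)
config = scale_architecture(alpha, beta, kernel_area=9, in_channels=3,
                            target_params=200_000)
print("per-layer filter counts:", config)

In this reading, the "architecture descent" step described in the abstract would correspond to repeating the prune-then-refit loop so that the fitted (alpha, beta) parameterization itself improves across iterations.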
