Paper Title

A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods

Authors

Tianyun Zhang, Xiaolong Ma, Zheng Zhan, Shanglin Zhou, Minghai Qin, Fei Sun, Yen-Kuang Chen, Caiwen Ding, Makan Fardad, Yanzhi Wang

Abstract

To address the large model size and intensive computation requirement of deep neural networks (DNNs), weight pruning techniques have been proposed and generally fall into two categories, i.e., static regularization-based pruning and dynamic regularization-based pruning. However, the former method currently suffers from either complex workloads or accuracy degradation, while the latter requires a long time to tune the parameters in order to achieve the desired pruning rate without accuracy loss. In this paper, we propose a unified DNN weight pruning framework with dynamically updated regularization terms bounded by the designated constraint, which can generate both non-structured sparsity and different kinds of structured sparsity. We also extend our method to an integrated framework for the combination of different DNN compression tasks.
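The abstract describes the core mechanism only at a high level: a regularization term whose per-weight coefficients are updated dynamically during training, so that small-magnitude weights are pushed toward zero while large weights are penalized less. The sketch below illustrates the general reweighted-L1 flavor of this idea in PyTorch; the toy model, the 1/(|w|+eps) coefficient update, and all hyperparameters are illustrative assumptions and do not reproduce the paper's exact constrained formulation or its structured-sparsity variants.

```python
# Minimal sketch of dynamically reweighted L1 regularization for weight pruning.
# Assumptions (not from the paper): toy 2-layer model, random data, coefficient
# update alpha = 1 / (|w| + eps), penalty refreshed once per epoch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
lam, eps = 1e-3, 1e-3
alphas = {}  # per-tensor regularization coefficients, updated dynamically

def update_reweights():
    # Small-magnitude weights get larger coefficients, so the penalty keeps
    # driving them toward zero; large weights are penalized less.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() > 1:  # reweight weight matrices only, not biases
                alphas[name] = 1.0 / (p.abs() + eps)

for epoch in range(5):
    update_reweights()  # refresh the coefficients before each epoch
    for _ in range(100):
        x = torch.randn(32, 64)
        y = torch.randint(0, 10, (32,))
        task_loss = loss_fn(model(x), y)
        # Reweighted L1 penalty: sum over layers of ||alpha * W||_1
        penalty = sum((alphas[n] * p.abs()).sum()
                      for n, p in model.named_parameters() if n in alphas)
        loss = task_loss + lam * penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
```

After training with such a penalty, weights whose magnitudes have been driven close to zero can be thresholded away; for structured sparsity one would instead group the penalty over rows, columns, or filters, which this toy sketch does not show.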
