Title | Make l(1) regularization effective in training sparse CNN
Authors | He, Juncai (2); Jia, Xiaodong (3); Xu, Jinchao (2); Zhang, Lian (2); Zhao, Liang (1,4)
Journal | COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
Publication Date | 2020-09-01
Volume/Issue/Pages | 77(1): 163-182
Keywords | Sparse optimization; l(1) regularization; Dual averaging; CNN
ISSN | 0926-6003
DOI | 10.1007/s10589-020-00202-1
Abstract | Compressed sensing using l(1) regularization is among the most powerful and popular sparsification techniques in many applications, so why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)? This paper aims to answer this question and to show how to make it work. Following Xiao (J Mach Learn Res 11(Oct):2543-2596, 2010), we first demonstrate that stochastic gradient descent and its variants, the commonly used training algorithms, are not a good match for l(1) regularization, and we then replace them with a training algorithm based on a regularized dual averaging (RDA) method. The RDA method of Xiao (2010) was originally designed for convex problems, but with new theoretical insight and algorithmic modifications (proper initialization and adaptivity), we have made it an effective match for l(1) regularization, achieving state-of-the-art sparsity for highly non-convex CNNs compared with other weight-pruning methods, without compromising accuracy (for example, 95% sparsity for ResNet-18 on CIFAR-10).
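The mechanism the abstract refers to is the closed-form l(1)-RDA update of Xiao (2010): the weights are recomputed at each step from a running average of all past gradients, and any coordinate whose averaged gradient falls below the regularization threshold lambda is set to exactly zero. The following is a minimal sketch of that plain convex-case update on a toy quadratic problem; the paper's CNN-specific modifications (the proper initialization and adaptivity mentioned above) are not reproduced here, and the step-size rule beta_t = gamma * sqrt(t) is Xiao's simple choice, not necessarily the one used in the paper.

```python
import numpy as np

def l1_rda_step(gbar, t, lam, gamma):
    """Closed-form l1-RDA weight update (Xiao, 2010) with the
    quadratic auxiliary function and beta_t = gamma * sqrt(t):
    coordinates with |averaged gradient| <= lam become exactly 0."""
    w = np.zeros_like(gbar)
    mask = np.abs(gbar) > lam          # truncation: small averaged gradients -> 0
    w[mask] = -(np.sqrt(t) / gamma) * (gbar[mask] - lam * np.sign(gbar[mask]))
    return w

# Toy problem: minimize 0.5*||w - c||^2 + lam*||w||_1 from noisy gradients.
# Entries of c below lam in magnitude should be pruned to exact zeros.
rng = np.random.default_rng(0)
c = np.array([2.0, 0.05, -1.5, 0.0])   # two large targets, two negligible ones
lam, gamma = 0.1, 1.0
gbar = np.zeros_like(c)
w = np.zeros_like(c)
for t in range(1, 2001):
    g = (w - c) + 0.01 * rng.standard_normal(4)  # noisy gradient at current w
    gbar += (g - gbar) / t                        # running average of all gradients
    w = l1_rda_step(gbar, t, lam, gamma)
```

Unlike soft-thresholding of the weights themselves (as in proximal SGD), RDA thresholds the *averaged* gradient, so noise averages out and small coordinates land on exact zeros rather than hovering near them; this is the property that makes l(1) regularization produce genuinely sparse networks.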
Funding | Peking University Joint Center for Computational Mathematics and Applications; Beijing International Center for Mathematical Research from Peking University; Verne M. William Professorship Fund from Penn State University; China Scholarship Council; Hong Kong RGC Competitive Earmarked Research Grant [HKUST16301218]; Penn State University Joint Center for Computational Mathematics and Applications
WOS Research Areas | Operations Research & Management Science; Mathematics
Language | English
Publisher | SPRINGER
WOS Record | WOS:000545777500001
Content Type | Journal article
Source URL | [http://ir.amss.ac.cn/handle/2S8OKBNM/51756]
Collection | Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Corresponding Author | Xu, Jinchao
Affiliations | 1. Chinese Acad Sci, State Key Lab Sci & Engn Comp, Acad Math & Syst Sci, Beijing 100190, Peoples R China; 2. Penn State Univ, Dept Math, University Pk, PA 16802 USA; 3. Penn State Univ, Dept Comp Sci & Engn, University Pk, PA 16802 USA; 4. Univ Chinese Acad Sci, Beijing 100190, Peoples R China
Citation (GB/T 7714) | He, Juncai, Jia, Xiaodong, Xu, Jinchao, et al. Make l(1) regularization effective in training sparse CNN[J]. COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2020, 77(1): 163-182.
APA | He, Juncai, Jia, Xiaodong, Xu, Jinchao, Zhang, Lian, & Zhao, Liang. (2020). Make l(1) regularization effective in training sparse CNN. COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 77(1), 163-182.
MLA | He, Juncai, et al. "Make l(1) regularization effective in training sparse CNN". COMPUTATIONAL OPTIMIZATION AND APPLICATIONS 77.1 (2020): 163-182.