[1] LeCun Y, Boser B, Denker J S, et al. Backpropagation applied to handwritten zip code recognition[J]. Neural Computation, 1989, 1(4):541-551.
[2] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6):84-90.
[3] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[DB/OL]. 2014[2021-09-21]. https://arxiv.org/abs/1409.1556.
[4] Szegedy C, Liu W, Jia Y Q, et al. Going deeper with convolutions[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015:1-9.
[5] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016:770-778.
[6] LeCun Y, Denker J S, Solla S A. Optimal brain damage[C]//Advances in Neural Information Processing Systems, 1990:598-605.
[7] Denil M, Shakibi B, Dinh L, et al. Predicting parameters in deep learning[DB/OL]. 2014[2021-09-12]. https://arxiv.org/abs/1306.0543.
[8] Hassibi B, Stork D G. Second order derivatives for network pruning:optimal brain surgeon[C]//Advances in Neural Information Processing Systems, 1993:164-171.
[9] Thimm G, Fiesler E. Evaluating pruning methods[C]//Proceedings of the International Symposium on Artificial Neural Networks, 1995:20-25.
[10] Srinivas S, Babu R V. Data-free parameter pruning for deep neural networks[DB/OL]. 2015[2021-09-12]. https://arxiv.org/abs/1507.06149.
[11] Han S, Pool J, Tran J, et al. Learning both weights and connections for efficient neural networks[DB/OL]. 2015[2021-09-12]. https://arxiv.org/abs/1506.02626.
[12] Han S, Mao H, Dally W J. Deep compression:compressing deep neural networks with pruning, trained quantization and Huffman coding[DB/OL]. 2016[2021-09-12]. https://arxiv.org/abs/1510.00149.
[13] Han S, Liu X Y, Mao H Z, et al. EIE:efficient inference engine on compressed deep neural network[C]//2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), 2016:243-254.
[14] Guo Y W, Yao A B, Chen Y R. Dynamic network surgery for efficient DNNs[C]//Advances in Neural Information Processing Systems, 2016:1379-1387.
[15] Hu H Y, Peng R, Tai Y W, et al. Network trimming:a data-driven neuron pruning approach towards efficient deep architectures[DB/OL]. 2016[2021-09-12]. https://arxiv.org/abs/1607.03250.
[16] Louizos C, Welling M, Kingma D P. Learning sparse neural networks through L0 regularization[DB/OL]. 2017[2021-09-12]. https://arxiv.org/abs/1712.01312.
[17] Lee N, Ajanthan T, Torr P H S. SNIP:single-shot network pruning based on connection sensitivity[DB/OL]. 2018[2021-09-12]. https://arxiv.org/abs/1810.02340.
[18] Frankle J, Carbin M. The lottery ticket hypothesis:finding sparse, trainable neural networks[DB/OL]. 2018[2021-09-12]. https://arxiv.org/abs/1803.03635.
[19] Wang C, Zhang G, Grosse R. Picking winning tickets before training by preserving gradient flow[DB/OL]. 2020[2021-09-12]. https://arxiv.org/abs/2002.07376.
[20] Anwar S, Hwang K, Sung W. Structured pruning of deep convolutional neural networks[J]. ACM Journal on Emerging Technologies in Computing Systems, 2017, 13(3):1-18.
[21] Zhou A, Ma Y, Zhu J, et al. Learning N:M fine-grained structured sparse neural networks from scratch[DB/OL]. 2021[2021-09-12]. https://arxiv.org/abs/2102.04010.
[22] Wen W, Wu C, Wang Y, et al. Learning structured sparsity in deep neural networks[DB/OL]. 2016[2021-09-12]. https://arxiv.org/abs/1608.03665.
[23] Gordon A, Eban E, Nachum O, et al. MorphNet:fast & simple resource-constrained structure learning of deep networks[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018:1586-1595.
[24] Lebedev V, Lempitsky V. Fast ConvNets using group-wise brain damage[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016:2554-2564.
[25] Li H, Kadav A, Durdanovic I, et al. Pruning filters for efficient ConvNets[DB/OL]. 2017[2021-09-12]. https://arxiv.org/abs/1608.08710.
[26] Luo J H, Wu J X, Lin W Y. ThiNet:a filter level pruning method for deep neural network compression[C]//2017 IEEE International Conference on Computer Vision (ICCV), 2017:5068-5076.
[27] Molchanov P, Tyree S, Karras T, et al. Pruning convolutional neural networks for resource efficient transfer learning[DB/OL]. 2017[2021-09-12]. https://arxiv.org/abs/1611.06440.
[28] Lin S H, Ji R R, Li Y C, et al. Toward compact ConvNets via structure-sparsity regularized filter pruning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(2):574-588.
[29] Lin S H, Ji R R, Yan C Q, et al. Towards optimal structured CNN pruning via generative adversarial learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019:2785-2794.
[30] He Y, Liu P, Wang Z W, et al. Filter pruning via geometric median for deep convolutional neural networks acceleration[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019:4335-4344.
[31] He Y, Dong X Y, Kang G L, et al. Asymptotic soft filter pruning for deep convolutional neural networks[J]. IEEE Transactions on Cybernetics, 2020, 50(8):3594-3604.
[32] Lin M B, Ji R R, Wang Y, et al. HRank:filter pruning using high-rank feature map[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020:1526-1535.
[33] Zhu J H, Zhao Y, Pei J H. Progressive kernel pruning based on the information mapping sparse index for CNN compression[J]. IEEE Access, 2021, 9:10974-10987.
[34] Polyak A, Wolf L. Channel-level acceleration of deep face representations[J]. IEEE Access, 2015, 3:2163-2175.
[35] He Y H, Zhang X Y, Sun J. Channel pruning for accelerating very deep neural networks[C]//2017 IEEE International Conference on Computer Vision, 2017:1398-1406.
[36] Yu R C, Li A, Chen C F, et al. NISP:pruning networks using neuron importance score propagation[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018:9194-9203.
[37] Liu Z, Li J G, Shen Z Q, et al. Learning efficient convolutional networks through network slimming[C]//2017 IEEE International Conference on Computer Vision, 2017:2755-2763.
[38] Huang Z, Wang N. Data-driven sparse structure selection for deep neural networks[C]//European Conference on Computer Vision, 2018:317-334.
[39] Zhuang Z W, Tan M K, Zhuang B H, et al. Discrimination-aware channel pruning for deep neural networks[DB/OL]. 2018[2021-09-12]. https://arxiv.org/abs/1810.11809.
[40] Ye J B, Lu X, Lin Z, et al. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers[DB/OL]. 2018[2021-09-12]. https://arxiv.org/abs/1802.00124.
[41] Ye Y, You G, Fwu J K, et al. Channel pruning via optimal thresholding[C]//Computer Vision and Pattern Recognition, 2020:508-516.
[42] Liu Z, Sun M J, Zhou T H, et al. Rethinking the value of network pruning[DB/OL]. 2019[2021-09-12]. https://arxiv.org/abs/1810.05270.
[43] Guo S P, Wang Y J, Li Q Q, et al. DMCP:differentiable Markov channel pruning for neural networks[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020:1536-1544.
[44] Chang J F, Lu Y, Xue P, et al. Automatic channel pruning via clustering and swarm intelligence optimization for CNN[J]. Applied Intelligence, 2022:1-21.
[45] Howard A G, Zhu M, Chen B, et al. MobileNets:efficient convolutional neural networks for mobile vision applications[DB/OL]. 2017[2021-09-12]. https://arxiv.org/abs/1704.04861.
[46] Yu J H, Yang L J, Xu N, et al. Slimmable neural networks[DB/OL]. 2019[2021-09-12]. https://arxiv.org/abs/1812.08928.
[47] Yu J H, Huang T. Universally slimmable networks and improved training techniques[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019:1803-1811.
[48] He Y H, Lin J, Liu Z J, et al. AMC:AutoML for model compression and acceleration on mobile devices[C]//European Conference on Computer Vision (ECCV), 2018:784-800.
[49] Yu J, Huang T. AutoSlim:towards one-shot architecture search for channel numbers[DB/OL]. 2019[2021-09-12]. https://arxiv.org/abs/1903.11728.
[50] Cai H, Gan C, Wang T Z, et al. Once-for-all:train one network and specialize it for efficient deployment[C]//International Conference on Learning Representations (ICLR), 2020.