Signal and Information Processing


Funding

Supported by the National Natural Science Foundation of China (Nos. 61866027, 62162044), the Natural Science Foundation of Jiangxi Province (No. 20202BAB202016), the Key Research and Development Program of Jiangxi Province (No. 20212BBE53017), and the Graduate Innovation Special Fund of Nanchang Hangkong University (No. YC2020031)

Image Deblurring Model Based on Width Residual and Pixel Attention

  • Key Laboratory of Image Processing and Pattern Recognition of Jiangxi Province, Nanchang Hangkong University, Nanchang 330063, Jiangxi, China

Received date: 2021-11-22

  Online published: 2022-12-03


Cite this article

Kuang F, Xiong B S, Ou Q F, Yu L. Image deblurring model based on width residual and pixel attention[J]. Journal of Applied Sciences, 2022, 40(6): 996-1005. DOI: 10.3969/j.issn.0255-8297.2022.06.010

Abstract

To address the difficulty existing methods have in quickly recovering high-quality sharp images from blurred images, an image deblurring model based on width residual and pixel attention is proposed. Built on an encoder-decoder network, the model combines wide convolution with a multi-order residual method to construct width residual modules, improving processing speed. At the same time, local averaging and matrix cross multiplication are used to construct pixel attention modules, which enhance the model's deblurring quality. Experimental results on the GOPRO dataset show that the proposed method achieves a structural similarity of 0.9223, a peak signal-to-noise ratio of 31.74 dB, and an average running time of 0.37 s, with a model size of only 22.24 MB. Compared with the scale-recurrent network method, the proposed method improves the peak signal-to-noise ratio by 4%, and its performance surpasses other existing deblurring methods.
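The abstract only summarizes the two modules. As a rough illustration of the general ideas behind them (not the authors' exact design), the NumPy sketch below builds a wide-activation residual block (expand channels, apply the nonlinearity, shrink back, add the skip connection) and a simple pixel-wise attention map computed by comparing each pixel's feature vector with an average descriptor via a matrix product. All function names, the expansion ratio, and the use of a global rather than local average are assumptions made for illustration.

```python
import numpy as np

def conv1x1(x, w):
    # A 1x1 convolution is a channel-mixing matrix multiply:
    # x: (C_in, H, W), w: (C_out, C_in) -> result: (C_out, H, W)
    c, h, wd = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

def wide_residual_block(x, w_expand, w_shrink):
    # Wide-activation residual idea (hypothetical sketch):
    # widen the channels before the nonlinearity, then shrink back.
    y = conv1x1(x, w_expand)   # C -> r*C
    y = np.maximum(y, 0.0)     # ReLU on the widened features
    y = conv1x1(y, w_shrink)   # r*C -> C
    return x + y               # residual (skip) connection

def pixel_attention(x):
    # Hypothetical pixel attention: score each pixel's feature vector
    # against the average descriptor with a matrix product, squash the
    # scores to (0, 1), and reweight the features per pixel.
    c, h, w = x.shape
    feats = x.reshape(c, -1)                 # (C, H*W)
    avg = feats.mean(axis=1, keepdims=True)  # (C, 1) average descriptor
    score = avg.T @ feats                    # (1, H*W) score per pixel
    attn = 1.0 / (1.0 + np.exp(-score))      # sigmoid -> attention weights
    return (feats * attn).reshape(c, h, w)   # per-pixel reweighting

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))  # (channels, height, width)
r = 4                               # hypothetical expansion ratio
y = wide_residual_block(x,
                        rng.standard_normal((4 * r, 4)),
                        rng.standard_normal((4, 4 * r)))
z = pixel_attention(y)
print(y.shape, z.shape)  # both stay (4, 8, 8)
```

Note the design motivation suggested by the abstract: widening only inside the block keeps the skip path narrow (small model, fast inference), while per-pixel attention weights let the network emphasize blurred regions differently from sharp ones.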

References

[1] Chen L, Fang F, Wang T, et al. Blind image deblurring with local maximum gradient prior[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019:1742-1750.
[2] Yang L, Ji H. A variational EM framework with adaptive edge selection for blind motion deblurring[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019:10167-10176.
[3] Liu P F, Xiao L. A fast algorithm for image restoration based on Hessian nuclear norm regularization[J]. Acta Electronica Sinica, 2014, 43(10): 2001-2008. (in Chinese)
[4] Xu L, Zheng S, Jia J. Unnatural L0 sparse representation for natural image deblurring[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013: 1107-1114.
[5] Nah S, Kim T H, Lee K M. Deep multi-scale convolutional neural network for dynamic scene deblurring[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:3883-3891.
[6] Tao X, Gao H, Shen X, et al. Scale-recurrent network for deep image deblurring[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:8174-8182.
[7] Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[8] Zhang H, Dai Y, Li H, et al. Deep stacked hierarchical multi-patch network for image deblurring[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019:5978-5986.
[9] Suin M, Purohit K, Rajagopalan A N. Spatially-attentive patch-hierarchical network for adaptive motion deblurring[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020:3606-3615.
[10] Kupyn O, Budzan V, Mykhailych M, et al. DeblurGAN: blind motion deblurring using conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 8183-8192.
[11] Kupyn O, Martyniuk T, Wu J, et al. DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 8878-8887.
[12] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016:770-778.
[13] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:4700-4708.
[14] Yu J, Fan Y, Huang T. Wide activation for efficient image and video super-resolution[C]//30th British Machine Vision Conference, 2019:1-13.
[15] Gao H, Tao X, Shen X, et al. Dynamic scene deblurring with parameter selective sharing and nested skip connections[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019:3848-3856.
[16] Wang X, Girshick R, Gupta A, et al. Non-local neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:7794-7803.
[17] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:7132-7141.
[18] Roy A G, Navab N, Wachinger C. Concurrent spatial and channel squeeze & excitation in fully convolutional networks[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention, 2018:421-429.
[19] Woo S, Park J, Lee J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 3-19.
[20] Cao Y, Xu J, Lin S, et al. GCNet: non-local networks meet squeeze-excitation networks and beyond[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019: 1971-1980.
[21] Zhang J, Pan J, Ren J, et al. Dynamic scene deblurring using spatially variant recurrent neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:2521-2529.
[22] Li L, Pan J, Lai W S, et al. Dynamic scene deblurring by depth guided model[J]. IEEE Transactions on Image Processing, 2020, 29:5273-5288.
[23] Feng H, Guo J, Xu H, et al. SharpGAN: dynamic scene deblurring method for smart ship based on receptive field block and generative adversarial networks[J]. Sensors, 2021, 21(11): 3641-3659.
[24] Shen Z, Wang W, Lu X, et al. Human-aware motion deblurring[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019:5572-5581.