Signal and Information Processing

Image Deblurring Model Based on Width Residual and Pixel Attention

  • Key Laboratory of Image Processing and Pattern Recognition of Jiangxi Province, Nanchang Hangkong University, Nanchang 330063, Jiangxi, China

Received date: 2021-11-22

Published online: 2022-12-03

Abstract

To address the difficulty existing methods have in quickly recovering high-quality sharp images from blurred inputs, an image deblurring model based on width residuals and pixel attention is proposed. Built on an encoder-decoder network, the model constructs width residual modules from wide convolutions and a multi-order residual scheme, improving processing speed. In parallel, pixel attention modules are constructed from local averaging and matrix cross multiplication, improving deblurring quality. Experimental results on the GOPRO dataset show that the proposed method achieves a structural similarity of 0.9223, a peak signal-to-noise ratio of 31.74 dB, and an average running time of 0.37 s with a model size of 22.24 MB. Compared with the scale-recurrent network method, the peak signal-to-noise ratio improves by 4%, and the method outperforms other existing deblurring approaches.
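The abstract describes two building blocks: a width residual module (wide convolution plus residual connections) and a pixel attention module (local averaging plus matrix cross multiplication). The following PyTorch sketch illustrates one plausible reading of those two blocks; the module names, channel widths, expansion factor, and exact attention layout are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal sketch of a wide-activation residual block and a pixel attention block.
# All layer sizes and the attention layout are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WidthResidualBlock(nn.Module):
    """Residual block with a widened (channel-expanded) 3x3 convolution."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        wide = channels * expansion
        self.expand = nn.Conv2d(channels, wide, kernel_size=3, padding=1)  # wide convolution
        self.reduce = nn.Conv2d(wide, channels, kernel_size=3, padding=1)  # project back
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.reduce(self.act(self.expand(x)))
        return x + out  # residual connection


class PixelAttention(nn.Module):
    """Pixel-wise attention from a local average and a query-key matrix product."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)        # local average
        q = self.query(local).flatten(2)                                   # B x C' x HW
        k = self.key(x).flatten(2)                                         # B x C' x HW
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)      # HW x HW cross multiplication
        v = self.value(x).flatten(2)                                       # B x C x HW
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return x + out                                                     # attention-weighted residual


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)          # dummy encoder feature map
    feat = WidthResidualBlock(32)(feat)
    feat = PixelAttention(32)(feat)
    print(feat.shape)                          # torch.Size([1, 32, 64, 64])
```

In an encoder-decoder deblurring network, blocks like these would typically be stacked inside the encoder and decoder stages, with the attention applied to intermediate feature maps rather than to the full-resolution image.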

Cite this article

KUANG Fa, XIONG Bangshu, OU Qiaofeng, YU Lei. Image Deblurring Model Based on Width Residual and Pixel Attention[J]. Journal of Applied Sciences, 2022, 40(6): 996-1005. DOI: 10.3969/j.issn.0255-8297.2022.06.010
