Signal and Information Processing


Funding

National Natural Science Foundation of China (No. 62473105, No. 62172118); Key Project of the Guangxi Natural Science Foundation (No. 2021GXNSFDA196002); Guangxi Key Laboratory of Image and Graphic Intelligent Processing projects (No. GIIP2302, No. GIIP2303, No. GIIP2304); Graduate Innovation Fund projects (No. 2024YCXB09, No. 2024YCXS039)

Low-Light Image Detail Enhancement Method Based on Edge Feature Guidance

  • JIANG Zetao ,
  • YANG Jianchen ,
  • LI Mengtong ,
  • CHENG Liuming ,
  • ZHANG Luhao
  • Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China

Received date: 2025-01-02

  Online published: 2025-12-19


Cite this article as

JIANG Zetao, YANG Jianchen, LI Mengtong, CHENG Liuming, ZHANG Luhao. Low-light image detail enhancement method based on edge feature guidance [J]. Journal of Applied Sciences, 2025, 43(6): 948-961. DOI: 10.3969/j.issn.0255-8297.2025.06.005

Abstract

Current low-light image enhancement methods mainly rely on a single feature to reconstruct the target image. In these methods, stacked upsampling-downsampling operations inevitably cause irreversible loss of high-frequency information during feature scaling, ultimately blurring detail in the enhanced image. To address this issue, this paper proposes a low-light image detail enhancement method based on edge feature guidance. The method consists of three components: an edge feature extraction module (EFEM), an enhancement module, and an edge feature guidance module (EFGM). Leveraging Transformers and guided by edge features, it progressively generates high-quality enhanced images in a coarse-to-fine manner. First, the EFEM acquires edge features from the low-light image via a parallel window transformer block (PWTB); these features guide the enhancement process. Second, the enhancement module employs a coarse-to-fine transformer block (CFTB), comprising a channel transformer block (CTB) and a PWTB, which extract global coarse-grained features and local fine-grained features, respectively; the feed-forward network (FFN) of the Transformer is also modified. Finally, the EFGM embeds the edge features into the image feature space, mitigating the severe loss of detail in dark regions. Experimental results show that the proposed method achieves peak signal-to-noise ratios (PSNR) of 24.97 dB, 23.20 dB, and 25.92 dB, and structural similarity index measure (SSIM) scores of 0.873, 0.865, and 0.941 on the LOL-v1, LOL-v2-real, and LOL-v2-synthetic datasets, respectively, all exceeding current mainstream methods. In terms of subjective quality, the enhanced images preserve image detail well.
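The PSNR figures quoted above follow the standard definition, PSNR = 10 · log10(MAX² / MSE). As a minimal sketch of how such a score is computed (this is not the paper's evaluation code; the function name `psnr` and the toy arrays are illustrative assumptions), using NumPy:

```python
import numpy as np

def psnr(reference: np.ndarray, enhanced: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and an enhanced image."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a uniform error of 0.1 on a [0, 1]-ranged image gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.full((8, 8), 0.5)
out = ref + 0.1
print(round(psnr(ref, out), 2))  # → 20.0
```

Higher PSNR means the enhanced output is closer, pixel-wise, to the ground-truth normal-light image; SSIM complements it by comparing local luminance, contrast, and structure.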
