Signal and Information Processing

Active Defense Method Based on Recoverable Adversarial Watermarks

  • WANG Jinwei,
  • HUANG Wanyun,
  • ZHANG Jiawei,
  • LUO Xiangyang,
  • MA Bin
  • 1. School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, China;
    2. Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, China;
    3. State Key Laboratory of Mathematical Engineering and Advanced Computing, Information Engineering University, Zhengzhou 450001, Henan, China;
    4. Shandong Provincial Key Laboratory of Computer Networks, Qilu University of Technology, Jinan 250353, Shandong, China

Received date: 2024-01-08

Online published: 2025-12-19

Abstract

Visible watermarks are widely adopted as an important tool for copyright protection. However, because visible watermarks follow fixed embedding rules, they offer little resistance to removal by neural networks, which poses significant threats and challenges to existing copyright protection methods. To address this problem, this paper proposed an active defense method based on recoverable adversarial watermarks, which improves the anti-removal ability of visible watermarks by introducing adversarial noise, thereby forming a new and more effective copyright protection mechanism. In addition, to address the problem that a watermark may cover important regions of the host image after embedding, a recoverable adversarial visible watermark scheme was proposed. This scheme embeds the important regions of the host image as secret information into non-watermark regions, allowing authorized users to recover the host image and thereby improving the recoverability of adversarial visible watermarks. Experimental results demonstrate that the method effectively attacks watermark removal networks while maintaining favorable recoverability.
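The core idea of embedding adversarial noise into the watermark region can be sketched as a PGD-style attack confined by a watermark mask. The sketch below is illustrative only, not the paper's actual method: `removal_net` is a trivial stand-in for a real watermark-removal model (e.g., WDNet), and the loss, step size, and budget are assumed values. The perturbation maximizes the removal network's error against the host image, so that removal fails to recover a clean result, while leaving non-watermark pixels untouched.

```python
import torch
import torch.nn as nn

# Stand-in for a watermark-removal network; in practice this would be a
# trained image-to-image model such as WDNet. Any differentiable model fits.
removal_net = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))


def adversarial_watermark(watermarked, host, mask,
                          epsilon=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style perturbation restricted to the watermark region (mask == 1).

    Gradient *ascent* on the removal network's reconstruction error w.r.t.
    the host image, so the network cannot restore the host cleanly.
    """
    adv = watermarked.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = nn.functional.mse_loss(removal_net(adv), host)
        loss.backward()
        with torch.no_grad():
            # Ascent step, masked to the watermark region only.
            adv = adv + alpha * adv.grad.sign() * mask
            # Project back into the L-infinity ball and valid pixel range.
            adv = watermarked + (adv - watermarked).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0).detach()
    return adv
```

Because the gradient is multiplied by the binary mask before the update, pixels outside the watermark region are bit-identical to the input, which is what makes a later exact recovery of the host image possible.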

Cite this article

WANG Jinwei, HUANG Wanyun, ZHANG Jiawei, LUO Xiangyang, MA Bin. Active Defense Method Based on Recoverable Adversarial Watermarks[J]. Journal of Applied Sciences, 2025, 43(6): 935-947. DOI: 10.3969/j.issn.0255-8297.2025.06.004
