Journal of Applied Sciences ›› 2023, Vol. 41 ›› Issue (1): 107-120. doi: 10.3969/j.issn.0255-8297.2023.01.009

• Special Issue on Computer Applications •

Person Re-identification Algorithm Based on Channel Feature Aggregation

XU Zengmin1,3, LU Guangjian1, CHEN Junyan2, CHEN Jinlong2, DING Yong2

  1. School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China;
    2. School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, Guangxi, China;
    3. Anview.ai (Guilin Anview Technology Co., Ltd.), Guilin 541010, Guangxi, China
  • Received: 2022-06-23  Online: 2023-01-31  Published: 2023-02-03
  • Corresponding author: CHEN Jinlong, senior experimentalist; research interests include image processing and reinforcement learning. E-mail: 7259@guet.edu.cn
  • Supported by: the National Natural Science Foundation of China (No. 61862015), the Guangxi Science and Technology Base and Talent Special Fund (No. 2021AC06001), and the Guangxi Key Research and Development Program (No. AB17195025)

Abstract: In deep-learning-based person re-identification algorithms, channel features are easily overlooked, which weakens the representational power of the model. To address this problem, we take ResNeSt50 as the backbone network and, following the channel-attention design of SENet, attach an SE block to the end of each residual block to strengthen the network's extraction of channel features. Because the ReLU function lacks a control factor, it limits how accurately the feature maps of different channels respond to activation values; we therefore introduce a dynamic learning factor that enriches the channel weight information, forming a new weighted activation function, Weighted ReLU (WReLU). Exploiting the locality of grouped-convolution feature maps, we further design the activation function Leaky Weighted ReLU (LWReLU), which effectively improves the representation of deep features at different positions. LWReLU is applied in both the Split-Attention module and the SE block, improving Split-Attention's ability to learn weights for each group of feature maps. Finally, the loss function is improved with circle loss, which optimizes the convergence of the training objective and thus raises model accuracy. Experimental results show that, on the CUHK03-NP, Market1501, and DukeMTMC-ReID datasets, the proposed method improves Rank-1 over the original backbone by 19.08%, 0.98%, and 2.02%, and mAP by 17.13%, 2.11%, and 2.56%, respectively.

Key words: group convolution, channel attention, rectified linear unit, activation function, dynamic learning factor
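
The abstract describes WReLU, LWReLU, and the SE-block placement only at a high level; the exact formulations appear in the full paper. The sketch below is one plausible PyTorch-style reading of that description, not the authors' released code: it treats the dynamic learning factor as a learnable per-channel multiplier, gives LWReLU a small negative slope, and appends an SE block at the end of a toy residual block. All class and parameter names (WReLU, LWReLU, SEBlock, reduction) are assumptions made for illustration.

    # Hedged sketch: one possible reading of the abstract's WReLU/LWReLU and
    # "SE block appended to the end of a residual block". Not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WReLU(nn.Module):
        """Weighted ReLU: ReLU with a learnable per-channel dynamic factor (assumed form)."""
        def __init__(self, channels):
            super().__init__()
            # dynamic learning factor, one weight per channel (broadcast over H, W);
            # the paper may combine the factor with x differently (e.g., as a bias term)
            self.alpha = nn.Parameter(torch.ones(1, channels, 1, 1))
        def forward(self, x):
            return F.relu(self.alpha * x)

    class LWReLU(nn.Module):
        """Leaky Weighted ReLU: as above, but keeps a small negative slope (assumed form)."""
        def __init__(self, channels, negative_slope=0.01):
            super().__init__()
            self.alpha = nn.Parameter(torch.ones(1, channels, 1, 1))
            self.negative_slope = negative_slope
        def forward(self, x):
            return F.leaky_relu(self.alpha * x, self.negative_slope)

    class SEBlock(nn.Module):
        """Squeeze-and-Excitation channel attention (standard SENet design)."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
            self.act = LWReLU(channels // reduction)   # LWReLU applied inside the SE block
            self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)
        def forward(self, x):
            w = F.adaptive_avg_pool2d(x, 1)            # squeeze: global average pooling
            w = torch.sigmoid(self.fc2(self.act(self.fc1(w))))  # excitation weights
            return x * w                               # channel-wise re-weighting

    class ResidualBlockWithSE(nn.Module):
        """Toy residual block with an SE block attached at its end (illustrative only;
        the paper's backbone is ResNeSt50 with Split-Attention blocks)."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.act = WReLU(channels)
            self.se = SEBlock(channels)
        def forward(self, x):
            out = self.act(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            out = self.se(out)                         # SE block at the end of the residual block
            return self.act(out + x)                   # residual connection

For a quick shape check, ResidualBlockWithSE(64)(torch.randn(2, 64, 32, 32)) returns a tensor of the same shape (2, 64, 32, 32), so the block can be dropped into a backbone stage without changing its channel count.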

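The abstract also states that the loss function is improved with circle loss. The snippet below is a generic sketch of the published circle loss (Sun et al., CVPR 2020) in its pairwise-similarity form; the scale gamma and margin m are hyperparameters (values here are common defaults, not the paper's), and how this term is weighted against other losses in the paper is not specified by the abstract.

    # Hedged sketch of circle loss (Sun et al., CVPR 2020) on similarity scores;
    # how the paper combines it with other re-ID losses is not stated in the abstract.
    import torch
    import torch.nn.functional as F

    def circle_loss(sp, sn, gamma=80.0, m=0.25):
        """sp: within-class similarity scores, sn: between-class similarity scores (1-D tensors)."""
        ap = torch.clamp_min(1 + m - sp, 0.0)        # adaptive weight for positive pairs
        an = torch.clamp_min(sn + m, 0.0)            # adaptive weight for negative pairs
        delta_p, delta_n = 1 - m, m                   # relaxed margins
        logit_p = -gamma * ap * (sp - delta_p)
        logit_n = gamma * an * (sn - delta_n)
        # log(1 + sum_j exp(logit_n_j) * sum_i exp(logit_p_i))
        return F.softplus(torch.logsumexp(logit_n, dim=0) + torch.logsumexp(logit_p, dim=0))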