Journal of Applied Sciences ›› 2024, Vol. 42 ›› Issue (5): 884-892. doi: 10.3969/j.issn.0255-8297.2024.05.014

• Computer Science and Applications •

Cross-Modal Person Re-identification Driven by Cross-Channel Interactive Attention Mechanism in Dual-Stream Networks

HE Lei1, LI Fengyong1, QIN Chuan2   

  1. College of Computer Science and Technology, Shanghai University of Electric Power, Shanghai 201306, China;
    2. School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
  • Received: 2022-11-22  Published: 2024-09-29

Abstract: Existing cross-modal person re-identification methods often fail to account for both the inter-modality and intra-modality differences of the target person, which limits further improvement in retrieval accuracy. To address this problem, this paper introduces a cross-channel interaction attention mechanism into a dual-stream network to strengthen the robust extraction of person features, effectively suppressing irrelevant features and producing more discriminative feature representations. Furthermore, hetero-center triplet loss, triplet loss, and identity loss are combined for supervised learning, effectively accounting for the inter-modality and intra-class differences in person features. Experimental results demonstrate the effectiveness of the proposed method, which outperforms seven existing methods on two standard datasets, RegDB and SYSU-MM01.
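The abstract does not include implementation details, so the following is only a minimal PyTorch-style sketch of the two ideas it names: an ECA-style cross-channel interaction attention block (1-D convolution across channels, no dimensionality reduction) and a combined hetero-center triplet + triplet + identity loss. All module names, loss weights, and the batch layout (each identity sampled in both modalities) are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossChannelInteractionAttention(nn.Module):
    """Sketch of cross-channel interaction attention (ECA-style):
    global average pooling followed by a 1-D convolution over the
    channel dimension produces per-channel weights."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                     # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # local cross-channel interaction
        w = torch.sigmoid(y).unsqueeze(-1).unsqueeze(-1)
        return x * w                               # re-weight channels

def combined_loss(feats, logits, labels, modalities,
                  margin: float = 0.3, w_tri: float = 1.0, w_hc: float = 0.5):
    """Identity (cross-entropy) loss + batch-hard triplet loss +
    hetero-center triplet loss; the weights are placeholders."""
    id_loss = F.cross_entropy(logits, labels)

    # Batch-hard triplet loss over individual features.
    dist = torch.cdist(feats, feats)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values
    hardest_neg = (dist + same.float() * 1e6).min(dim=1).values
    tri_loss = F.relu(hardest_pos - hardest_neg + margin).mean()

    # Hetero-center triplet loss: one feature center per identity and
    # modality; pull the two centers of the same identity together,
    # push centers of different identities apart.
    centers, center_pids = [], []
    for pid in labels.unique():
        for m in modalities.unique():
            mask = (labels == pid) & (modalities == m)
            if mask.any():
                centers.append(feats[mask].mean(dim=0))
                center_pids.append(pid)
    centers = torch.stack(centers)
    center_pids = torch.stack(center_pids)
    cdist = torch.cdist(centers, centers)
    csame = center_pids.unsqueeze(0) == center_pids.unsqueeze(1)
    not_self = ~torch.eye(len(centers), dtype=torch.bool, device=feats.device)
    hc_pos = (cdist * (csame & not_self).float()).max(dim=1).values
    hc_neg = (cdist + csame.float() * 1e6).min(dim=1).values
    hc_loss = F.relu(hc_pos - hc_neg + margin).mean()

    return id_loss + w_tri * tri_loss + w_hc * hc_loss

In a dual-stream setup, the attention block would typically be inserted after the shared convolutional stages so that both the visible and infrared streams benefit from the re-weighted channels before the loss is computed.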

Key words: cross-modal, person re-identification, convolutional neural network, attention mechanism
