Journal of Applied Sciences ›› 2024, Vol. 42 ›› Issue (6): 1016-1026. doi: 10.3969/j.issn.0255-8297.2024.06.010

• Computer Science and Applications •

  • Corresponding author: WANG Yongxiong, professor and doctoral supervisor; research interests: image processing, human-computer interaction, and pattern recognition. E-mail: wyxiong@usst.edu.cn
  • Funding: Supported by the Natural Science Foundation of Shanghai (No. 22ZR1443700)

Emotion Recognition of EEG Using Subdomain Adaptation and Spatial-Temporal Learning

TANG Yiheng, WANG Yongxiong, WANG Zhe, ZHANG Xiaoli   

  1. School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
  • Received:2023-03-05 Online:2024-11-30 Published:2024-11-30


Abstract: In cross-subject emotion recognition, the distributions of electroencephalogram (EEG) samples differ significantly across subjects, and domain adaptation is commonly used to reduce these individual differences. However, global domain adaptation ignores the distribution differences across emotion-class subdomains, which reduces the discriminability of emotional features. Moreover, EEG signals comprise many electrode channels, and subjects reach the intended emotion during only part of each stimulus, so learning the complex spatial information among channels and emphasizing the critical EEG frames are both essential. Hence, we propose a subdomain adaptation and spatial-temporal learning network for EEG-based emotion recognition. In the subdomain adaptation module, the subdomain discrepancy loss is reduced by minimizing intra-class differences and maximizing inter-class differences. A spatial-temporal feature extractor then captures spatial correlations and temporal context, extracting discriminative emotional features. Subject-independent experiments on the public DEAP dataset demonstrate the performance of the proposed method, which achieves classification accuracies of 0.688 0 for arousal and 0.696 8 for valence.
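The abstract does not include code, and the paper's exact loss is not reproduced here; as a rough, hypothetical illustration of the class-conditional (subdomain) alignment idea it describes, the sketch below compares, per emotion class, the mean source feature (using true labels) with the mean target feature (weighted by predicted class probabilities, since target labels are unavailable in cross-subject transfer). All function and variable names are illustrative, and the linear class-mean distance is a simplified stand-in for kernel-based subdomain discrepancies.

```python
import numpy as np

def subdomain_discrepancy(src_feats, src_labels, tgt_feats, tgt_probs, n_classes):
    """Toy class-wise (subdomain) discrepancy.

    For each class c, compute the mean source feature over samples with
    true label c and the target feature mean weighted by the predicted
    probability of class c, then sum the squared Euclidean distances.
    This is a simplified, linear-kernel stand-in for subdomain-adaptation
    losses; it is NOT the paper's exact formulation.
    """
    loss = 0.0
    for c in range(n_classes):
        src_mask = src_labels == c
        if not src_mask.any():
            continue  # no source samples of this class in the batch
        mu_src = src_feats[src_mask].mean(axis=0)
        w = tgt_probs[:, c]
        if w.sum() < 1e-8:
            continue  # target batch carries no mass for this class
        mu_tgt = (w[:, None] * tgt_feats).sum(axis=0) / w.sum()
        loss += float(((mu_src - mu_tgt) ** 2).sum())
    return loss
```

Minimizing such a per-class term pulls each target emotion subdomain toward its source counterpart, rather than aligning only the global marginal distributions.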

Key words: emotion recognition, electroencephalogram (EEG), subdomain adaptation, spatial-temporal learning, deep learning

CLC number: