Journal of Applied Sciences ›› 2021, Vol. 39 ›› Issue (4): 641-649. doi: 10.3969/j.issn.0255-8297.2021.04.011

• CCF NCCA 2020 Special Issue •


Environmental Sound Recognition Based on Attention Sinusoidal Representation Network

PENG Ning1,3, CHEN Aibin1,2,3, ZHOU Guoxiong1,3, CHEN Wenjie1,3, LIU Jing1,3   

1. Institute of Artificial Intelligence Application, Central South University of Forestry and Technology, Changsha 410004, Hunan, China;
    2. Hunan Key Laboratory of Intelligent Logistics Technology, Central South University of Forestry and Technology, Changsha 410004, Hunan, China;
    3. College of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha 410004, Hunan, China
  • Received: 2020-08-23; Published: 2021-08-04
  • Corresponding author: CHEN Aibin, Professor; research interests: artificial intelligence. E-mail: hotaibin@163.com
  • Funding: Supported by the Graduate Science and Technology Innovation Fund of Central South University of Forestry and Technology (No. CX20192014)



Abstract: In this paper, we propose an attention sinusoidal representation network (A-SIREN) for environmental sound recognition. First, Mel-frequency cepstral coefficients (MFCCs) are extracted from the audio as recognition features. A gated recurrent unit (GRU) network then extracts a feature representation for each MFCC frame, a sine activation computes an attention score for each frame, and the frames are re-weighted according to these scores so that attention concentrates on the key regions of the audio. Finally, a fully connected layer combined with a Softmax classifier discriminates the environmental sound category. We validated the proposed model on the open-source Urban Sound 8K dataset and compared it with other models; experimental results show that A-SIREN performs best, achieving a recognition rate of 93.5% on this dataset.

Key words: environmental sound recognition, attention mechanism, Mel-frequency cepstral coefficient (MFCC), gated recurrent unit (GRU), attention sinusoidal representation network (A-SIREN)

CLC number: