This paper introduces an attention sinusoidal representation network (A-SIREN) for environmental sound recognition. First, Mel-frequency cepstral coefficients (MFCCs) are extracted as the audio recognition feature. A gated recurrent unit (GRU) then extracts a feature vector from each MFCC frame, a sine function activates an attention score for each frame, and the frames are re-weighted according to these scores so that attention is concentrated on the salient regions of the audio. Finally, a fully connected layer combined with a Softmax classifier discriminates among the environmental sound categories. The model is validated on the public Urban Sound 8K dataset and compared with other models; the experimental results show that the proposed A-SIREN performs best, achieving a recognition rate of 93.5%.
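To make the described pipeline concrete, the following is a minimal sketch in Python (librosa for MFCC extraction, PyTorch for the model). The layer sizes, the softmax normalization of the sine-activated frame scores, and the placeholder file path are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of the A-SIREN pipeline described in the abstract.
# Hyper-parameters (n_mfcc, hidden size, sample rate) are assumptions.
import librosa
import torch
import torch.nn as nn
import torch.nn.functional as F

def extract_mfcc(wav_path, n_mfcc=40):
    """Load an audio clip and return its MFCC frames with shape (T, n_mfcc)."""
    y, sr = librosa.load(wav_path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, T)
    return torch.tensor(mfcc.T, dtype=torch.float32)         # (T, n_mfcc)

class ASIREN(nn.Module):
    """GRU frame encoder + sine-activated frame attention + Softmax classifier."""
    def __init__(self, n_mfcc=40, hidden=128, n_classes=10):
        super().__init__()
        self.gru = nn.GRU(n_mfcc, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # per-frame attention score
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                   # x: (B, T, n_mfcc)
        h, _ = self.gru(x)                  # (B, T, hidden) per-frame features
        # Sine activation of each frame's score; normalization to weights
        # via softmax is an assumed design choice.
        alpha = torch.softmax(torch.sin(self.score(h)), dim=1)  # (B, T, 1)
        context = (alpha * h).sum(dim=1)    # attention-weighted audio summary
        return F.log_softmax(self.fc(context), dim=-1)

# Usage example (placeholder path to an Urban Sound 8K clip):
# feats = extract_mfcc("path/to/clip.wav")
# log_probs = ASIREN()(feats.unsqueeze(0))  # (1, 10) class log-probabilities
```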