CCF NCCA 2020 Special Issue

Automatic Classification of Bamboo Flute Playing Skills Based on Deep Learning

  • 1. College of Computer Science and Technology, Heilongjiang University, Harbin 150080, Heilongjiang, China;
    2. Key Laboratory of Database and Parallel Computing of Heilongjiang Province, Heilongjiang University, Harbin 150080, Heilongjiang, China

Received date: 2020-08-26

  Online published: 2021-08-04



Cite this article

Guo Y B, Lu J, Duan P Q. Automatic classification of bamboo flute playing skills based on deep learning[J]. Journal of Applied Sciences, 2021, 39(4): 685-694. DOI: 10.3969/j.issn.0255-8297.2021.04.015

Abstract

A dataset named Breath and two neural network reference models, Breath1d and Breath2d, are proposed for bamboo flute playing-skill classification, and the best-performing method is identified for each classification task on this dataset. The Breath dataset is divided into subsets, with a multi-layer perceptron serving as the baseline for performance evaluation. First, the subsets are trained and predicted with the Breath1d and Breath2d models; then a long short-term memory (LSTM) network model is used for auxiliary testing; finally, the most suitable classification reference model for each subtask is obtained. For classification over the whole dataset, the Breath2d and Breath1d models are fused and data augmentation is applied, raising the overall classification accuracy to 91.3%. Compared with traditional audio classification tasks, this work extends the scope of music classification research and helps promote the modernization of traditional Chinese music.
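The paper does not give implementation details here, but the two techniques named for the whole-dataset experiment, score-level fusion of the two models' predictions and waveform-level data augmentation, can be sketched roughly as follows. This is an illustrative NumPy sketch only, not the authors' code; all function names, shapes, and parameter values are assumptions.

```python
import numpy as np

def augment(waveform, rng, shift_max=800, noise_std=0.005):
    """Illustrative waveform-level data augmentation: a random
    circular time shift plus additive Gaussian noise."""
    shift = int(rng.integers(-shift_max, shift_max + 1))
    shifted = np.roll(waveform, shift)
    return shifted + rng.normal(0.0, noise_std, size=waveform.shape)

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(logits_1d, logits_2d, weight_1d=0.5):
    """Score-level fusion of two models: a weighted average of
    their class-probability distributions, then an argmax decision."""
    p = weight_1d * softmax(logits_1d) + (1.0 - weight_1d) * softmax(logits_2d)
    return p.argmax(axis=-1)

rng = np.random.default_rng(0)
wave = np.sin(np.linspace(0, 40 * np.pi, 8000))  # stand-in for a flute clip
wave_aug = augment(wave, rng)                     # augmented training sample

# Stand-in logits over 4 hypothetical skill classes from the two models;
# the fused decision follows the class the averaged probabilities favor.
logits_1d = np.array([[2.0, 0.1, 0.3, -1.0]])
logits_2d = np.array([[0.5, 1.8, 0.2, -0.5]])
print(fuse_predictions(logits_1d, logits_2d))  # → [0]
```

In this sketch the 1D model is slightly more confident in class 0 than the 2D model is in class 1, so the averaged distribution favors class 0; in practice the fusion weight would be tuned on a validation subset.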

References

[1] 刘子阳. 浅谈竹笛演奏技巧与乐曲情感表达之间的关系[J]. 北方音乐, 2017, 37(22):60. Liu Z Y. On the relationship between performance skills of bamboo flute and emotional expression of music[J]. Northern Music, 2017, 37(22):60. (in Chinese).
[2] Fu Z, Lu G, Ting K M. A survey of audio-based music classification and annotation[J]. IEEE Transactions on Multimedia, 2010, 13(2):303-319.
[3] Schedl M, Gómez E, Urbano J. Music information retrieval:recent developments and applications[J]. Foundations and Trends in Information Retrieval, 2014, 8(2/3):127-261.
[4] 王悦虹. 民乐美感判断与持续时长的关联分析[C]//中国声学学会2019年全国声学大会论文集. 北京:中国声学学会, 2019:543-544. Wang Y H. Correlation analysis between aesthetic judgment and duration of Chinese folk music[C]//Proceedings of the 2019 National Conference on Acoustics. Beijing: Acoustical Society of China, 2019:543-544. (in Chinese).
[5] 陈燕文. 基于人工神经网络的琵琶声学品质评价及其音符识别[D]. 太原:中北大学, 2019. Chen Y W. Acoustic quality evaluation and note recognition of Pipa based on artificial neural network[D]. Taiyuan: North University of China, 2019. (in Chinese).
[6] 王芳. 基于深度学习的音乐流派及中国传统乐器识别分类研究[D]. 南京:南京理工大学, 2017. Wang F. Research on recognition and classification of music genres and Chinese traditional musical instruments based on deep learning[D]. Nanjing: Nanjing University of Science and Technology, 2017. (in Chinese).
[7] Liu Y J, Zhang J J, Xiao Z Z. Grid diagram features for automatic Pipa fingering technique classification[C]//2019 12th International Symposium on Computational Intelligence and Design, 2019(1):24-28.
[8] Schmidhuber J. Deep learning in neural networks:an overview[J]. Neural Networks, 2015, 61:85-117.
[9] Weihs C, Ligges U, Morchen F. Classification in music research[J]. Advances in Data Analysis and Classification, 2007, 1(3):255-291.
[10] Chatterjee S. An optimized music recognition system using Mel-frequency cepstral coefficient (MFCC) and vector quantization (VQ)[C]//Research Directions:Special Issue International Business Research Conference on Transformation Opportunities and Sustainability Challenges in Technology and Management, 2019(45489):100-106.
[11] 司亚辉. 浅议竹笛演奏风格流派及演奏技法[J]. 北方音乐, 2020(3):50-53. Si Y H. On the style and technique of bamboo flute playing[J]. Northern Music, 2020(3):50-53. (in Chinese).
[12] Abdoli S, Cardinal P, Koerich A L. End-to-end environmental sound classification using a 1D convolutional neural network[J]. Expert Systems with Applications, 2019, 136:252-263.
[13] Bian W, Wang J, Zhuang B. Audio-based music classification with DenseNet and data augmentation[C]//Pacific Rim International Conference on Artificial Intelligence. Cham:Springer, 2019:56-65.
[14] Solanki A, Pandey S. Music instrument recognition using deep convolutional neural networks[J]. International Journal of Information Technology, 2019:1-10.
[15] Lee J, Park J, Kim K L. SampleCNN:end-to-end deep convolutional neural networks using very small filters for music classification[J]. Applied Sciences, 2018, 8(1):150.
[16] 何丽, 袁斌. 利用长短期记忆网络进行音乐流派的分类[J]. 计算机技术与发展, 2019, 29(11):190-194. He L, Yuan B. Classification of music genres using long short term memory network[J]. Computer Technology and Development, 2019, 29(11):190-194. (in Chinese).
[17] Uhlich S, Porcu M, Giron F, et al. Improving music source separation based on deep neural networks through data augmentation and network blending[C]//2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017:261-265.