High computational complexity makes training a recurrent neural network (RNN) language model slow, which is a major bottleneck in practical applications. To address this problem, this paper proposes a parallel mini-batch training algorithm. The algorithm exploits the GPU's computational power to accelerate the matrix and vector operations performed during network training: the optimized network processes multiple data streams in parallel, i.e., it trains several sentence samples simultaneously, which significantly speeds up the training process. Experimental results show that the algorithm accelerates RNN language model training effectively with only a negligible loss of model performance, and the approach is verified in a real Chinese speech recognition system.
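The core idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: batching B sentence streams turns the per-step matrix-vector products of a simple Elman-style RNN into matrix-matrix products, the kind of operation a GPU (or any BLAS library) executes far more efficiently. All dimensions, names, and the use of NumPy here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Element-wise logistic activation.
    return 1.0 / (1.0 + np.exp(-z))

def rnn_step_batched(x, h_prev, W_xh, W_hh):
    """One recurrent step for a whole mini-batch of parallel streams.

    x:      (B, V) current input words (one-hot or embedded), one row per stream
    h_prev: (B, H) previous hidden states of the B parallel streams
    Returns (B, H) new hidden states; the two products are matrix-matrix
    instead of B separate matrix-vector multiplications.
    """
    return sigmoid(x @ W_xh + h_prev @ W_hh)

rng = np.random.default_rng(0)
B, V, H = 8, 100, 32  # 8 sentence streams, vocabulary 100, hidden size 32

W_xh = rng.normal(0.0, 0.1, (V, H))
W_hh = rng.normal(0.0, 0.1, (H, H))

# One-hot encode one current word per stream.
x = np.zeros((B, V))
x[np.arange(B), rng.integers(0, V, size=B)] = 1.0

h = np.zeros((B, H))          # initial hidden states
h = rnn_step_batched(x, h, W_xh, W_hh)
print(h.shape)                # (8, 32): one hidden state per stream
```

On a GPU the same batching pays off even more, since a single large GEMM keeps the device's compute units busy, whereas B independent matrix-vector products leave them mostly idle.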