Communication Engineering

YANG Junan, professor and doctoral supervisor; research interests: signal processing and intelligent computing. E-mail: yangjunan@ustc.edu


Funding

Supported by the National Natural Science Foundation of China (No. 60872113) and the Natural Science Foundation of Anhui Province (No. 1208085MF94)

Parallel Optimization of Chinese Language Model Based on Recurrent Neural Network

  • 1. Electronic Engineering Institute, Hefei 230037, China
    2. Key Laboratory of Electronic Restriction, Anhui Province, Hefei 230037, China
    3. Anhui USTC iFlytek Corporation, Hefei 230027, China

Received date: 2014-12-25

  Revised date: 2015-03-04

  Online published: 2015-03-04


Cite this article

WANG Long 1,2, YANG Junan 1,2, CHEN Lei 1,2, LIN Wei 3, LIU Hui 1,2. Parallel optimization of Chinese language model based on recurrent neural network[J]. Journal of Applied Sciences, 2015, 33(3): 253-261. DOI: 10.3969/j.issn.0255-8297.2015.03.004

Abstract

High computational complexity leads to low efficiency in training a recurrent
neural network (RNN) language model and becomes a major bottleneck in practical applications.
To deal with this problem, this paper proposes a parallel optimization algorithm
that speeds up matrix and vector operations by taking advantage of the GPU's computational
capability. The optimized network can handle multiple data streams in parallel, i.e., train
several sentence samples simultaneously, so that the training process is significantly accelerated.
Experimental results show that RNN model training is sped up effectively
without noticeable sacrifice of model performance. The algorithm is verified in an actual
Chinese speech recognition system.
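The core idea of the abstract — processing several sentences at once so that each time step becomes one matrix operation rather than many per-sentence vector operations — can be illustrated with a minimal NumPy sketch. This is not the authors' GPU implementation; the class name, dimensions, and parameters are all illustrative assumptions, and only the batched forward pass is shown (no training loop).

```python
import numpy as np

def softmax(x):
    # Numerically stable row-wise softmax over the vocabulary dimension.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class BatchedRNNLM:
    """Elman-style RNN language model whose forward pass handles a
    mini-batch of sentences in parallel: at each time step the whole
    batch column is embedded and advanced with single matrix products,
    which is what makes the computation GPU-friendly."""

    def __init__(self, vocab, hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.E = rng.normal(0.0, 0.1, (vocab, hidden))   # input embeddings
        self.W = rng.normal(0.0, 0.1, (hidden, hidden))  # recurrent weights
        self.V = rng.normal(0.0, 0.1, (hidden, vocab))   # output weights

    def forward(self, batch):
        """batch: int array (B, T) of word ids, B sentences of length T.
        Returns (B, T, vocab) next-word probability distributions."""
        B, T = batch.shape
        h = np.zeros((B, self.W.shape[0]))               # one hidden state per stream
        probs = []
        for t in range(T):
            x = self.E[batch[:, t]]                      # (B, H): embed the whole column
            h = np.tanh(x + h @ self.W)                  # (B, H): one matrix op per step
            probs.append(softmax(h @ self.V))            # (B, vocab)
        return np.stack(probs, axis=1)

model = BatchedRNNLM(vocab=50, hidden=16)
batch = np.array([[3, 7, 1], [9, 2, 4]])                 # two sentences, length 3
p = model.forward(batch)
print(p.shape)                           # (2, 3, 50)
print(np.allclose(p.sum(axis=-1), 1.0))  # True: rows are valid distributions
```

On a GPU, the per-step products `h @ W` and `h @ V` become single batched GEMM calls, which is where the speedup over sentence-by-sentence training comes from.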
