Journal of Applied Sciences ›› 2015, Vol. 33 ›› Issue (3): 253-261.doi: 10.3969/j.issn.0255-8297.2015.03.004

• Communication Engineering •

Parallel Optimization of Chinese Language Model Based on Recurrent Neural Network

WANG Long1,2, YANG Jun-an1,2, CHEN Lei1,2, LIN Wei3, LIU Hui1,2   

  1. Electronic Engineering Institute, Hefei 230037, China
    2. Key Laboratory of Electronic Restriction, Anhui Province, Hefei 230037, China
    3. Anhui USTC iFlytek Corporation, Hefei 230027, China
  • Received: 2014-12-25  Revised: 2015-03-04  Online: 2015-05-30  Published: 2015-03-04

Abstract:  High computational complexity makes training a recurrent neural network
(RNN) language model inefficient, which becomes a major bottleneck in practical applications.
To address this problem, this paper proposes a parallel optimization algorithm
that speeds up matrix and vector operations by taking advantage of the GPU's computational
capability. The optimized network handles multiple data streams in parallel, training
several sentence samples simultaneously and thereby significantly accelerating the training process.
Experimental results show that RNN model training is effectively accelerated
without noticeable loss of model performance. The algorithm is verified in a practical
Chinese speech recognition system.
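The core idea in the abstract, advancing several sentence streams in lockstep so that each time step becomes one matrix-matrix product instead of many matrix-vector products, can be sketched as follows. This is a minimal illustration using NumPy in place of actual GPU kernels; the Elman-style recurrence, the shapes, and all variable names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H, B, T = 50, 16, 4, 5   # vocab size, hidden size, batch (parallel streams), time steps

W_in = rng.standard_normal((H, V)) * 0.1   # input (one-hot word) -> hidden
W_rec = rng.standard_normal((H, H)) * 0.1  # hidden -> hidden recurrence
W_out = rng.standard_normal((V, H)) * 0.1  # hidden -> output logits

words = rng.integers(0, V, size=(T, B))    # B sentence streams, processed in parallel

h = np.zeros((H, B))                       # one hidden-state column per stream
for t in range(T):
    x = np.eye(V)[words[t]].T              # (V, B) batch of one-hot word vectors
    h = np.tanh(W_in @ x + W_rec @ h)      # single fused update for all B streams
    logits = W_out @ h                     # (V, B) next-word scores
    probs = np.exp(logits - logits.max(axis=0))
    probs /= probs.sum(axis=0)             # per-stream softmax over the vocabulary

print(h.shape, probs.shape)                # hidden states and predictions for all streams
```

On a GPU the batched update amortizes kernel-launch and memory-transfer overhead across the B streams, which is where the reported speedup comes from; with B = 1 the same loop degenerates to ordinary one-sentence-at-a-time RNN training.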

Key words: speech recognition, recurrent neural network, language model, parallel optimization
