Conference paper

Recurrent LSTM Neural Networks for Language Modelling and Speech Recognition

P. Kłosowski (Silesian Univ. of Techn., Poland)

This paper examines natural language modelling tasks, such as word-based and subword-based language modelling, in which deep learning methods are making steady progress. A language model predicts the sequence of recognised words or subwords and can therefore be used to improve the speech recognition process. The field of language modelling is currently shifting from statistical methods towards recurrent neural networks and deep learning techniques. This article focuses on the use of recurrent LSTM neural networks for language modelling and speech recognition. The new results presented in this paper, building on previous work, concern how to develop word-based and subword-based LSTM language models and how to use them together. The simultaneous use of both LSTM language modelling methods allows the development of hybrid language models with even better properties, which can further improve the speech recognition process. The results presented here apply to Polish language modelling, but the conclusions formulated on their basis can also be applied to language modelling for other languages.
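To make the core mechanism concrete, the following is a minimal, untrained sketch of a word-based LSTM language model as described in the abstract: an LSTM cell is stepped over a word-history prefix and its final hidden state is projected to a probability distribution over the next word. All names (`lstm_step`, `next_word_probs`, the toy vocabulary) and the random weights are illustrative assumptions, not the paper's actual model or data; a real model would be trained on a Polish corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.

    W, U, b hold the stacked parameters for the input, forget,
    and output gates and the candidate cell update (4*H rows).
    """
    H = h.size
    z = W @ x + U @ h + b                 # stacked gate pre-activations
    i = sigmoid(z[:H])                    # input gate
    f = sigmoid(z[H:2 * H])               # forget gate
    o = sigmoid(z[2 * H:3 * H])           # output gate
    g = np.tanh(z[3 * H:])                # candidate cell state
    c_new = f * c + i * g                 # update cell memory
    h_new = o * np.tanh(c_new)            # expose gated hidden state
    return h_new, c_new

# Toy vocabulary (hypothetical Polish mini-sentence) and random weights.
vocab = ["<s>", "ala", "ma", "kota", "</s>"]
V, H = len(vocab), 8
W = rng.normal(0, 0.1, (4 * H, V))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.1, (V, H))        # hidden-to-vocabulary projection

def next_word_probs(prefix):
    """Run the LSTM over a word prefix; return P(next word | prefix)."""
    h, c = np.zeros(H), np.zeros(H)
    for w in prefix:
        x = np.zeros(V)
        x[vocab.index(w)] = 1.0           # one-hot word encoding
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

probs = next_word_probs(["<s>", "ala", "ma"])
```

A subword-based model has the same structure with a vocabulary of subword units instead of words; a hybrid model of the kind the paper discusses could then combine the two distributions (for example by interpolation) when rescoring recognition hypotheses.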

Receipt of papers: March 15th, 2025
Notification of acceptance: April 30th, 2025
Registration opening: May 2nd, 2025
Final paper versions: May 15th, 2025