Performance of Korean spontaneous speech recognizers based on an extended phone set derived from acoustic data

  • Bang, Jeong-Uk (Department of Control and Robot Engineering, Graduate School, Chungbuk National University) ;
  • Kim, Sang-Hun (Electronics and Telecommunications Research Institute) ;
  • Kwon, Oh-Wook (School of Electronics Engineering, Chungbuk National University)
  • Received : 2019.07.24
  • Accepted : 2019.09.23
  • Published : 2019.09.30

Abstract

We propose a method to improve the performance of spontaneous speech recognizers by extending their phone set using speech data. In the proposed method, we first extract variable-length phoneme-level segments from broadcast speech signals, and convert them to fixed-length latent vectors using a long short-term memory (LSTM) classifier. We then cluster acoustically similar latent vectors using the k-means algorithm and build a new phone set by choosing the number of clusters with the lowest Davies-Bouldin index. We also update the lexicon of the speech recognizer by choosing the pronunciation sequence of each word with the highest conditional probability. To analyze the acoustic characteristics of the new phone set, we visualize and compare its spectral patterns and segment durations. Through speech recognition experiments using a larger training data set than in our previous work, we confirm that the new phone set yields better performance than conventional phoneme-based and grapheme-based units in both spontaneous speech recognition and read speech recognition.
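To make the pipeline concrete, here is a minimal PyTorch sketch (not the authors' code) of the first step: an LSTM classifier whose final hidden state serves as the fixed-length latent vector for a variable-length segment. The feature dimensionality, latent size, and baseline phone inventory below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SegmentEncoder(nn.Module):
    """LSTM phoneme classifier; its final hidden state doubles as a
    fixed-length latent vector for a variable-length segment."""
    def __init__(self, n_feats=40, latent_dim=32, n_phones=40):  # assumed sizes
        super().__init__()
        self.lstm = nn.LSTM(n_feats, latent_dim, batch_first=True)
        self.classifier = nn.Linear(latent_dim, n_phones)

    def forward(self, segment):            # segment: (batch, frames, n_feats)
        _, (h, _) = self.lstm(segment)     # h: (num_layers, batch, latent_dim)
        latent = h[-1]                     # fixed-length summary of the segment
        return latent, self.classifier(latent)  # latent vector, phoneme logits

encoder = SegmentEncoder()
segment = torch.randn(1, 57, 40)           # one 57-frame segment (dummy features)
latent, logits = encoder(segment)
print(latent.shape)                         # torch.Size([1, 32])
```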

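The cluster-count selection can be sketched with scikit-learn's KMeans and davies_bouldin_score; the latent vectors and the candidate range of phone-set sizes below are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
latents = rng.standard_normal((1000, 32))     # placeholder latent vectors

best_k, best_db = None, np.inf
for k in range(40, 201, 20):                  # assumed candidate phone-set sizes
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(latents)
    db = davies_bouldin_score(latents, labels)  # lower = tighter, better-separated
    if db < best_db:
        best_k, best_db = k, db

print(f"extended phone set size: {best_k} (Davies-Bouldin index {best_db:.3f})")
```

Each of the selected clusters then becomes one unit of the extended phone set.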

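For the lexicon update, each word keeps the pronunciation sequence, written in the new units, with the highest conditional probability P(pronunciation | word). Below is a sketch with hypothetical words, unit labels, and counts, assuming pronunciations are harvested from forced alignments.

```python
from collections import Counter, defaultdict

# hypothetical (word, pronunciation-in-new-units) pairs from forced alignments
aligned = [
    ("뉴스", ("c12", "c3", "c7")),
    ("뉴스", ("c12", "c3", "c7")),
    ("뉴스", ("c12", "c44", "c7")),
]

counts = defaultdict(Counter)
for word, pron in aligned:
    counts[word][pron] += 1

lexicon = {}
for word, pron_counts in counts.items():
    pron, n = pron_counts.most_common(1)[0]    # argmax over P(pron | word)
    lexicon[word] = pron
    print(word, " ".join(pron), f"P={n / sum(pron_counts.values()):.2f}")
```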

References

  1. Bang, J. U., Choi, M. Y., Kim, S. H., & Kwon, O. W. (2017, August). Improving speech recognizers by refining broadcast data with inaccurate subtitle timestamps. Proceedings of the Interspeech 2017 (pp. 2929-2933). Stockholm, Sweden.
  2. Bang, J. U., Choi, M. Y., Kim, S. H., & Kwon, O. W. (2019, September). Extending an acoustic data-driven phone set for spontaneous speech recognition. Proceedings of the Interspeech 2019 (pp. 4405-4409). Graz, Austria.
  3. Chung, Y. A., Wu, C. C., Shen, C. H., Lee, H. Y., & Lee, L. S. (2016, September). Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder. Proceedings of the Interspeech 2016 (pp. 410-415). San Francisco, CA.
  4. Hain, T. (2005). Implicit modelling of pronunciation variation in automatic speech recognition. Speech Communication, 46(2), 171-188. https://doi.org/10.1016/j.specom.2005.03.008
  5. Killer, M., Stüker, S., & Schultz, T. (2003, September). Grapheme based speech recognition. Proceedings of the Eurospeech 2003 (pp. 3141-3144). Geneva, Switzerland.
  6. Lamel, L., Gauvain, J. L., & Adda, G. (2002). Lightly supervised and unsupervised acoustic model training. Computer Speech and Language, 16(1), 115-129. https://doi.org/10.1006/csla.2001.0186
  7. Lee, K. N., & Chung, M. (2003, September). Modeling cross-morpheme pronunciation variations for Korean large vocabulary continuous speech recognition. Proceedings of the Eurospeech 2003 (pp. 261-264). Geneva, Switzerland.
  8. MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (pp. 281-297). Berkeley, CA.
  9. Mitra, V., Vergyri, D., & Franco, H. (2016, September). Unsupervised learning of acoustic units using autoencoders and Kohonen nets. Proceedings of the Interspeech 2016 (pp. 1300-1304). San Francisco, CA.
  10. Nakamura, M., Iwano, K., & Furui, S. (2008). Differences between acoustic characteristics of spontaneous and read speech and their effects on speech recognition performance. Computer Speech and Language, 22(2), 171-184. https://doi.org/10.1016/j.csl.2007.07.003
  11. Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., Hannemann, M., ... Veselý, K. (2011). The Kaldi speech recognition toolkit. Proceedings of the IEEE 2011 Workshop on Automatic Speech Recognition and Understanding (ASRU). Waikoloa, HI.
  12. Sainath, T. N., Prabhavalkar, R., Kumar, S., Lee, S., Kannan, A., Rybach, D., Schogol, V., ... Chiu, C. C. (2018, April). No need for a lexicon? Evaluating the value of the pronunciation lexica in end-to-end models. Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5859-5863). Calgary, Canada.
  13. Sak, H., Senior, A., & Beaufays, F. (2014, September). Long short-term memory recurrent neural network architectures for large scale acoustic modeling. Proceedings of the Interspeech 2014 (pp. 338-342). Singapore.
  14. Stolcke, A. (2002, September). SRILM: An extensible language modeling toolkit. Proceedings of the Interspeech 2002 (pp. 901-904). Denver, CO.
  15. Young, S. J., Odell, J. J., & Woodland, P. C. (1994, March). Tree-based state tying for high accuracy acoustic modelling. Proceedings of the Workshop on Human Language Technology (pp. 307-312). Plainsboro, NJ.