Building robust Korean speech recognition model by fine-tuning large pretrained model

  • Changhan Oh (Integrated Intelligence Research Section, Electronics and Telecommunications Research Institute (ETRI)) ;
  • Cheongbin Kim (Integrated Intelligence Research Section, Electronics and Telecommunications Research Institute (ETRI)) ;
  • Kiyoung Park (Integrated Intelligence Research Section, Electronics and Telecommunications Research Institute (ETRI))
  • Received: 2023.08.11
  • Accepted: 2023.09.14
  • Published: 2023.09.30

Abstract

Automatic speech recognition (ASR) has been revolutionized by deep learning-based approaches, among which self-supervised learning methods have proven particularly effective. In this study, we aim to enhance the performance of OpenAI's Whisper model, a multilingual ASR system, on the Korean language. Whisper was pretrained on a large corpus (around 680,000 hours) of web speech data and has demonstrated strong recognition performance for major languages. However, it struggles to recognize languages such as Korean, which was not a major language in its training data. We address this issue by fine-tuning the Whisper model on an additional dataset comprising about 1,000 hours of Korean speech. We also compare its performance with that of a Transformer model trained from scratch on the same dataset. Our results indicate that fine-tuning the Whisper model significantly improved its Korean speech recognition capabilities in terms of character error rate (CER). Specifically, the performance improved with increasing model size. However, the Whisper model's English performance deteriorated after fine-tuning, highlighting the need for further research on robust multilingual models. Our study demonstrates the potential of a fine-tuned Whisper model for Korean ASR applications. Future work will focus on multilingual recognition and optimization for real-time inference.
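
The abstract describes two technical steps: decoding Korean speech with a pretrained multilingual Whisper checkpoint and scoring transcriptions with character error rate (CER). The paper does not state which implementation was used for fine-tuning or evaluation, so the following is only a minimal sketch assuming the Hugging Face Transformers port of Whisper; the checkpoint name and the transcribe/cer helpers are illustrative assumptions, not the authors' actual pipeline.

    # Minimal sketch (assumption: the Hugging Face Transformers port of Whisper;
    # the checkpoint and helpers below are illustrative, not the authors' code).
    import torch
    from transformers import WhisperProcessor, WhisperForConditionalGeneration

    processor = WhisperProcessor.from_pretrained("openai/whisper-small")
    model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

    # Pin the decoder to Korean transcription so the multilingual model
    # does not auto-detect the spoken language.
    model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(
        language="korean", task="transcribe"
    )

    def transcribe(waveform, sampling_rate=16000):
        """Transcribe a 1-D float waveform sampled at 16 kHz into Korean text."""
        inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
        with torch.no_grad():
            predicted_ids = model.generate(inputs.input_features)
        return processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]

    def cer(reference: str, hypothesis: str) -> float:
        """Character error rate: character-level edit distance / reference length."""
        ref, hyp = list(reference), list(hypothesis)
        dist = list(range(len(hyp) + 1))  # one DP row of the Levenshtein distance
        for i, r in enumerate(ref, start=1):
            prev, dist[0] = dist[0], i
            for j, h in enumerate(hyp, start=1):
                cur = min(dist[j] + 1,        # deletion
                          dist[j - 1] + 1,    # insertion
                          prev + (r != h))    # substitution (0 cost on a match)
                prev, dist[j] = dist[j], cur
        return dist[-1] / max(len(ref), 1)

Fine-tuning on the roughly 1,000-hour Korean corpus would then continue from the same pretrained weights with the standard sequence-to-sequence cross-entropy objective; the authors' exact training configuration and hyperparameters are not reproduced here.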

Acknowledgments

This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2022-0-00989, Development of Artificial Intelligence Technology for Multi-speaker Dialog Modeling).

References

  1. AIHub (2021). AIHub broadcast content Korean speech recognition data. Retrieved from https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=463
  2. Baevski, A., Zhou, Y., Mohamed, A., & Auli, M. (2020, December). wav2vec 2.0: A framework for self-supervised learning of speech representations. Proceedings of the Advances in Neural Information Processing Systems (pp. 12449-12460). Online Conference.
  3. Bang, J. U., Yun, S., Kim, S. H., Choi, M. Y., Lee, M. K., Kim, Y. J., Kim, D. H., ... Kim, S. H. (2020). KsponSpeech: Korean spontaneous speech corpus for automatic speech recognition. Applied Sciences, 10(19), 6936.
  4. Chan, W., Jaitly, N., Le, Q., & Vinyals, O. (2016, March). Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4960-4964). Shanghai, China.
  5. Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020, July). A simple framework for contrastive learning of visual representations. Proceedings of the 37th International Conference on Machine Learning (pp. 1597-1607). Online Conference.
  6. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. Retrieved from https://arxiv.org/abs/1810.04805
  7. Graves, A., Fernandez, S., Gomez, F., & Schmidhuber, J. (2006, June). Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. Proceedings of the 23rd International Conference on Machine Learning (pp. 369-376). Pittsburgh, PA.
  8. Graves, A., & Jaitly, N. (2014, June). Towards end-to-end speech recognition with recurrent neural networks. Proceedings of the 31st International Conference on Machine Learning (pp. 1764-1772). Beijing, China.
  9. Hadsell, R., Chopra, S., & LeCun, Y. (2006, June). Dimensionality reduction by learning an invariant mapping. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06) (pp. 1735-1742). New York, NY.
  10. Nam, K. (2019). A study on processing of speech recognition Korean words. The Journal of the Convergence on Culture Technology, 5(4), 407-412. https://doi.org/10.17703/JCCT.2019.5.4.407
  11. Oh, Y. R., Park, K., & Park, J. G. (2022). Fast offline transformer-based end-to-end automatic speech recognition for real-world applications. ETRI Journal, 44(3), 476-490. https://doi.org/10.4218/etrij.2021-0106
  12. OpenAI (2023). openai/whisper. Retrieved from https://github.com/openai/whisper
  13. Panayotov, V., Chen, G., Povey, D., & Khudanpur, S. (2015, April). Librispeech: An ASR corpus based on public domain audio books. Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5206-5210). South Brisbane, Australia.
  14. Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., & Sutskever, I. (2023, July). Robust speech recognition via large-scale weak supervision. Proceedings of the 40th International Conference on Machine Learning (pp. 28492-28518). Honolulu, HI.
  15. Schneider, S., Baevski, A., Collobert, R., & Auli, M. (2019, September). wav2vec: Unsupervised pre-training for speech recognition. Proceedings of the Interspeech 2019 (pp. 3465-3469). Graz, Austria.
  16. Tsai, Y. H. H., Bai, S., Liang, P. P., Kolter, J. Z., Morency, L. P., & Salakhutdinov, R. (2019, July). Multimodal transformer for unaligned multimodal language sequences. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. (pp. 6558-6569). Florence, Italy.
  17. van den Oord, A., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. Retrieved from https://doi.org/10.48550/arxiv.1807.03748
  18. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., ... Polosukhin, I. (2017, December). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems. Long Beach, CA.
  19. Watanabe, S., Hori, T., Karita, S., Hayashi, T., Nishitoba, J., Unno, Y., Soplin, N. E. Y., ... Ochiai, T. (2018). ESPnet: End-to-end speech processing toolkit. Retrieved from https://doi.org/10.48550/arXiv.1804.00015
  20. Yadav, H., & Sitaram, S. (2022, June). A survey of multilingual models for automatic speech recognition. Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 5071-5079). Marseille, France.