Deep learning-based speech recognition for Korean elderly speech data including dementia patients

  • Jeonghyeon Mun (Department of Applied Statistics, Chung-Ang University) ;
  • Joonseo Kang (Department of Applied Statistics, Chung-Ang University) ;
  • Kiwoong Kim (Department of Neuropsychiatry, Seoul National University Bundang Hospital) ;
  • Jongbin Bae (Department of Neuropsychiatry, Seoul National University Bundang Hospital) ;
  • Hyeonjun Lee (Sevenpointone) ;
  • Changwon Lim (Department of Applied Statistics, Chung-Ang University)
  • Received : 2022.10.10
  • Accepted : 2022.12.13
  • Published : 2023.02.28

Abstract

In this paper, we consider automatic speech recognition (ASR) for Korean speech data in which speakers randomly say a sequence of words from a category, such as animals or vegetables, for one minute. Most of the speakers are over 60 years old, and some are dementia patients. Our goal is to compare deep-learning-based ASR models on such data and to find models with good performance. ASR is a technology that enables computers to automatically recognize spoken words and convert speech into written text. Recently, many deep-learning models with good performance have been developed for ASR, but the data used to train them mostly consist of conversations or sentences, and the speakers are generally expected to pronounce words accurately. In contrast, most of the speakers in our data are over 60 and often pronounce words inaccurately, and they say a random series of words, not sentences, for one minute. Pre-trained models built on such typical training data may therefore be unsuitable for our data, so we train deep-learning-based ASR models from scratch using our own data. Because the dataset is small, we also apply several data augmentation methods.
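
The abstract notes that data augmentation is applied because the dataset is small, but gives no implementation details. As a rough illustration only, the Python sketch below applies SpecAugment-style frequency and time masking to a log-mel spectrogram, a common spectrogram-level augmentation in ASR; the function name and mask parameters are illustrative assumptions, not the authors' actual configuration.

    import numpy as np

    def spec_augment(mel_spec, num_freq_masks=2, max_freq_width=8,
                     num_time_masks=2, max_time_width=20, rng=None):
        """SpecAugment-style masking for a (mel_bins, frames) spectrogram.

        Illustrative sketch: the mask counts and widths are assumed values,
        not the configuration used in the paper.
        """
        rng = rng or np.random.default_rng()
        spec = mel_spec.copy()
        num_bins, num_frames = spec.shape

        # Frequency masking: zero out random horizontal bands of mel bins.
        for _ in range(num_freq_masks):
            width = int(rng.integers(0, max_freq_width + 1))
            start = int(rng.integers(0, max(1, num_bins - width)))
            spec[start:start + width, :] = 0.0

        # Time masking: zero out random vertical bands of frames.
        for _ in range(num_time_masks):
            width = int(rng.integers(0, max_time_width + 1))
            start = int(rng.integers(0, max(1, num_frames - width)))
            spec[:, start:start + width] = 0.0

        return spec

    # Example: augment a synthetic 80-bin, 600-frame log-mel spectrogram.
    augmented = spec_augment(np.random.randn(80, 600))

Masking random bands of frequencies and frames discourages a model from relying on any single region of the spectrogram, which is particularly helpful when, as here, the training set is small.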

Keywords

Acknowledgements

This paper was supported by the Chung-Ang University CAU GRS grant in 2021. This work was also supported by a National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (NRF-2021R1F1A1056516).
