
Development of a Sign Language Learning Assistance System Using MediaPipe for Sign Language Education of the Hearing Impaired

  • Jin-Young Kim (Science and Information Convergence, Graduate School, Sunchon National University) ;
  • Hyun Sim (Sunchon National University)
  • Received : 2021.10.20
  • Accepted : 2021.12.17
  • Published : 2021.12.31

Abstract

Recently, the number of people with hearing impairment is growing, not only from congenital causes but also from acquired factors, yet the environment for learning sign language remains poor. This study therefore presents a sign language (finger-number/fingerspelling) evaluation system as a learning aid for sign language learners. In this paper, we study a system that tracks the hands and fingers using the OpenCV library and MediaPipe to recognize sign language gestures, converts the meaning of each sign into text data using a CNN, and presents the result to the learner. By enabling self-directed learning, in which learners can judge for themselves whether their handshape is correct, we develop a sign language learning assistance system that helps them acquire sign language, and we propose this system as a way to support the learning of sign language, the primary language of communication for the hearing impaired.
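A minimal sketch of the hand-tracking stage described in the abstract is given below, assuming MediaPipe's Hands solution and a standard OpenCV webcam loop. The function name extract_landmarks, the camera index, and the confidence thresholds are illustrative assumptions, not details taken from the paper.

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    mp_draw = mp.solutions.drawing_utils

    def extract_landmarks(frame, hands):
        """Return the 21 hand landmarks as a flat [x, y, z, ...] list, or None."""
        # MediaPipe expects RGB input, while OpenCV captures BGR frames.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        result = hands.process(rgb)
        if not result.multi_hand_landmarks:
            return None
        hand = result.multi_hand_landmarks[0]
        # Overlay the detected skeleton so the learner can see the tracking.
        mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        return [c for lm in hand.landmark for c in (lm.x, lm.y, lm.z)]

    with mp_hands.Hands(max_num_hands=1,
                        min_detection_confidence=0.5,
                        min_tracking_confidence=0.5) as hands:
        cap = cv2.VideoCapture(0)            # default webcam (assumed input)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            landmarks = extract_landmarks(frame, hands)
            cv2.imshow("hand tracking", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # press ESC to quit
                break
        cap.release()
        cv2.destroyAllWindows()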
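The paper does not specify the CNN architecture, so the following Keras sketch only illustrates the classification stage: a small network that maps the 21 x 3 landmark array from the tracking step to a text label. The label set (the finger numbers 1-10), the layer sizes, and the hyperparameters are all assumptions for illustration.

    import numpy as np
    from tensorflow.keras import layers, models

    LABELS = [str(n) for n in range(1, 11)]   # assumed label set: finger numbers 1-10
    NUM_CLASSES = len(LABELS)

    def build_model():
        # Treat the 21 landmarks x 3 coordinates as a one-channel "image"
        # so a small 2D CNN can pick up spatial relations between joints.
        return models.Sequential([
            layers.Input(shape=(21, 3, 1)),
            layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
            layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Training would use pairs of landmark arrays and integer labels, e.g.:
    #   model.fit(X_train.reshape(-1, 21, 3, 1), y_train, epochs=20)

    def predict_text(landmarks):
        """Map one landmark vector to its text label (requires a trained model)."""
        x = np.asarray(landmarks, dtype=np.float32).reshape(1, 21, 3, 1)
        return LABELS[int(model.predict(x, verbose=0).argmax())]

In practice the predicted text would be compared against the target sign so the learner gets immediate feedback on whether their handshape is correct, which is the self-directed learning loop the abstract describes.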

Keywords

Acknowledgments

This paper was supported by the 2021 Research Fund of Sunchon National University.
