American Sign Language Recognition System Using Wearable Sensors with Deep Learning Approach

  • Teak-Wei Chong (Department of Electronic and Electrical Engineering, Keimyung University)
  • Beom-Joon Kim (Department of Electronic and Electrical Engineering, Keimyung University)
  • Received : 2020.02.12
  • Accepted : 2020.04.15
  • Published : 2020.04.30

Abstract

Sign language is designed to allow deaf people to communicate with others and connect to society. Because sign language is not widely known outside the deaf community, however, this unresolved communication barrier tends to isolate deaf people from the rest of society. Motivated by this problem, this study designed and implemented a wearable American Sign Language (ASL) interpreter based on deep learning. Six inertial measurement unit (IMU) sensors were placed on the back of the hand and on each fingertip to capture hand and finger movements and orientations. A vocabulary of 28 word-based ASL signs was collected in the experiments, and 156 features were extracted from the collected data for classification. Using the long short-term memory (LSTM) algorithm, the system achieved up to 99.89% recognition accuracy. This high accuracy indicates that the proposed system has great potential to serve deaf communities and narrow the communication gap.
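
To make the classification step concrete, the sketch below shows a minimal LSTM sequence classifier with the dimensions reported in the abstract (156 features per time step, 28 word classes). This is an illustrative reconstruction, not the authors' implementation: the sequence length, layer sizes, and training settings are assumptions.

```python
# Minimal sketch (not the paper's code): an LSTM classifier over IMU
# feature sequences. Only NUM_FEATURES and NUM_CLASSES come from the
# abstract; SEQ_LEN, layer sizes, and optimizer are assumptions.
import numpy as np
from tensorflow.keras import layers, models

NUM_FEATURES = 156   # features extracted from the 6 IMU sensors (per the abstract)
NUM_CLASSES = 28     # word-based ASL vocabulary size (per the abstract)
SEQ_LEN = 50         # assumed number of time steps per sign sample

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    layers.LSTM(128),            # assumed hidden size
    layers.Dropout(0.2),         # assumed regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for recorded sign samples; real training would
# use labeled IMU recordings of the 28 signs.
x = np.random.randn(32, SEQ_LEN, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(32,))
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```

Each input sample is a fixed-length window of per-timestep feature vectors, and the final LSTM state feeds a softmax over the 28 word classes; variable-length recordings would need padding or truncation to SEQ_LEN before training.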
