Research on Methods to Increase Recognition Rate of Korean Sign Language using Deep Learning

  • So-Young Kwon (Kumoh National Institute of Technology, Dept. of Electronic Engineering)
  • Yong-Hwan Lee (Kumoh National Institute of Technology, School of Electronic Engineering)
  • Received : 2024.01.23
  • Accepted : 2024.02.16
  • Published : 2024.02.28

Abstract

Deaf people who use sign language as their first language sometimes have difficulty communicating because they do not know spoken Korean. Deaf people are also members of society, so we must work toward a society in which everyone can live together. In this paper, we present a method to increase the recognition rate of Korean sign language using a CNN model. When the original image was used as input to the CNN model, the accuracy was 0.96; when only the skin region extracted in the YCbCr color space was used as input, the accuracy was 0.72. This confirms that feeding the original image itself to the model yields better results. In other studies, a combined Conv1D and LSTM model achieved an accuracy of 0.92, as did an AlexNet model. The CNN model proposed in this paper reaches 0.96 and is shown to be helpful for recognizing Korean sign language.
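To illustrate the two input pipelines compared above, the following is a minimal sketch in Python using OpenCV and NumPy; the file name and the Cb/Cr thresholds are illustrative assumptions, not the authors' exact settings.

    import cv2
    import numpy as np

    # Load a sign-language frame (hypothetical file name); OpenCV reads in BGR order.
    img = cv2.imread("sign_frame.jpg")

    # Convert to YCrCb (OpenCV's channel order is Y, Cr, Cb) and keep only pixels
    # inside commonly cited skin ranges for the Cr and Cb planes.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds (assumed)
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds (assumed)
    mask = cv2.inRange(ycrcb, lower, upper)            # binary skin mask
    skin_only = cv2.bitwise_and(img, img, mask=mask)   # skin-region input for the comparison

A plain CNN classifier of the kind described could then be trained on either the original images or the skin-only images, for example with Keras; the layer sizes, input resolution, and class count below are assumptions rather than the authors' exact architecture.

    from tensorflow import keras
    from tensorflow.keras import layers

    num_classes = 10                                   # hypothetical number of sign classes
    model = keras.Sequential([
        layers.Input(shape=(64, 64, 3)),               # assumed input resolution
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])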

Keywords

Acknowledgement

This research was supported by Kumoh National Institute of Technology (2022~2023).

References

  1. J. Z. Young, "Biological Point of View", Approaches to Human Communication, Richard W. Budd and Brent D. Ruben (eds.), New Jersey: Hayden Book Co., Inc., 1972.
  2. Y. K. An, "Communication, Reason and Artificial Intelligence," Journal of AI Humanities, vol. 3, pp. 99-120, 2019. https://doi.org/10.46397/JAIH.3.5
  3. J. H. Ahn, "Communication and Human Relations", Cogito, vol. 34, pp. 163-188.
  4. I. J. Kim, "Effective Communication Skills", Engineering Education and Technology Transfer, vol. 6, no. 3/4, pp. 55-59, 1999.
  5. K. C. Hong, H. S. Kim and Y. H. Han, "CNN-based Sign Language Translation Program for the Deaf", KicsP, vol. 22, no. 4, pp. 206-212, 2021.
  6. "Korean Sign Language Method", National Law Information Center. https://www.law.go.kr/%EB%B2%95%EB%A0%B9/%ED%95%9C%EA%B5%AD%EC%88%98%ED%99%94%EC%96%B8%EC%96%B4%EB%B2%95
  7. S. M. Youn, "Korean Sign Language(KSL) is Another Korean Language", Korean Language and Literature, vol. 78, no. 78, 2021, pp. 121-144. https://doi.org/10.23016/KLLJ.2021.78.78.121
  8. S. Glennen, D. C. Decoste, "The Handbook of Augmentative and Alternative Communication", Singular Publishing Group, INC. San Diego. London, 1997.
  9. K. P. Gwon and J. H. Yoo, "Numeric Sign Language Interpreting Algorithm Based on Hand Image Processing", IEMEK, vol. 14, no. 3, pp. 133-142, 2019. https://doi.org/10.14372/IEMEK.2019.14.3.133
  10. Joshua R. New, "A Method for Hand Gesture Recognition", Proceedings of IEEE Communication Systems and Network Technologies, pp. 919-923, 2002.
  11. C. Dong, C. Leu and Z. Yin, "American Sign Language Alphabet Recognition Using Microsoft Kinect", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 44-52, 2015.
  12. EnableTalk website. Available at: http://enabletalk.com.
  13. S. Kolkur, D. Kalbande, P. Shimpi, C. Bapat and J. Jatakia, "Human Skin Detection Using RGB, HSV, and YCbCr Color Models", Proceedings of IEEE Conference on Acoustics, Speech and Signal Processing, pp. 324-332, 2017.
  14. H. S. Park, "Vehicle Tracking System using HSV Color Space at Nighttime", JKIIECT, vol. 8, no. 4, pp. 270-274, 2015. https://doi.org/10.17661/jkiiect.2015.8.4.270
  15. Y. Xu and G. Pok, "Identification of Hand Region Based on YCgCr Color Representation", Journal of Applied Engineering Research, vol. 12, no. 6, pp. 1031-1034, 2017.
  16. Z. Zhengzhen and S. Yuexiang, "Skin Color Detecting Unite YCgCb Color Space with YCgCr Color Space", Proceedings of Conference on Image Analysis and Signal Processing, pp. 221-225, 2009.
  17. Y. W. Choi, W. M. Yook and G. S. Cho, "Development of Building 3D Spatial Information Extracting System using HSI Color Model", Journal of the Korean Society for Geospatial Information Science, vol. 21, no. 4, pp. 151-159, 2013. https://doi.org/10.7319/kogsis.2013.21.4.151
  18. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel, "Backpropagation Applied to Handwritten Zip Code Recognition", Neural Computation, vol. 1, pp. 541-551, 1989. https://doi.org/10.1162/neco.1989.1.4.541
  19. P. Molchanov, S Gupta, K. Kim and J. Kautz, "Hand Gesture Recognition with 3D Convolutional Neural Networks", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-7, 2015.
  20. S. H. Park, J. M. Goo and C. H. Jo, "Receiver Operating Characteristic (ROC) Curve: Practical Review for Radiologists", Korean Journal of Radiology, vol. 5, no. 1, pp. 11-18, 2004. https://doi.org/10.3348/kjr.2004.5.1.11
  21. G. C. Kim and R. Ha, "Real-time Hand Expression Recognition Translation using Deep Learning", Korean Institute of Information Scientists and Engineers, pp. 1774-1776, 2022.
  22. S. Gnanapriya, K. Rahimunnisa, M. Sowmiya, P. Deepika and S. Praveena Rachel Kamala, "Hand Detection and Gesture Recognition in Complex Backgrounds", ICCMC-2023, pp. 829-833, 2023.
  23. K. C. Hong, H. S. Kim and Y. H. Han, "CNN-based Sign Language Translation Program for the Deaf", JISPS, vol. 22, no. 4, pp. 206-212, 2021.