• Title/Summary/Keyword: Sign language education

Search Result: 31 entries

Research on Development of VR Realistic Sign Language Education Content Using Hand Tracking and Conversational AI (Hand Tracking과 대화형 AI를 활용한 VR 실감형 수어 교육 콘텐츠 개발 연구)

  • Jae-Sung Chun; Il-Young Moon
    • Journal of Advanced Navigation Technology, v.28 no.3, pp.369-374, 2024
  • This study aims to improve the accessibility and efficiency of sign language education for both hearing-impaired and hearing people. To this end, we developed immersive VR sign language education content that integrates hand-tracking technology and conversational AI. Through this content, users can learn sign language in real time and experience direct communication in a virtual environment. The study confirmed that this integrated approach significantly improves immersion in sign language learning and helps lower the barriers to learning sign language by giving learners a deeper understanding. This presents a new paradigm for sign language education and shows how technology can change the accessibility and effectiveness of education.

Development of Smart Phone App. Contents for 3D Sign Language Education (3D 수화교육 스마트폰 앱콘텐츠 개발)

  • Jung, Young Kee
    • Smart Media Journal, v.1 no.3, pp.8-14, 2012
  • In this paper, we develop smartphone app content for 3D sign language to widen access to Korean sign language education for both hearing-impaired and hearing people. In particular, we propose a sign language conversion algorithm that automatically transforms the structure of Korean phrases into the structure of sign language. We also implement a 3D sign language animation DB, using a motion capture system and a data glove to acquire natural motions. Finally, the Unity 3D engine is used for real-time 3D rendering of the sign language motion. We distribute the proposed app, with a 3D sign language DB of 1,300 words, through the iPhone App Store and Android app stores.

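The phrase-structure conversion described in the abstract above can be illustrated with a minimal rule-based sketch: strip common Korean postpositional particles, then map each stem to a sign gloss through a dictionary. Everything here (the particle list, the toy gloss dictionary) is a hypothetical illustration of the general approach, not the paper's actual algorithm.

```python
# Hypothetical sketch of rule-based Korean-phrase-to-sign-gloss conversion:
# strip common postpositional particles, then look each stem up in a
# (toy) gloss dictionary; unknown words fall through unchanged.
PARTICLES = ("은", "는", "이", "가", "을", "를", "에서", "에")
GLOSS_DICT = {"나": "I", "학교": "SCHOOL", "가다": "GO"}  # toy dictionary

def to_sign_glosses(words):
    glosses = []
    for w in words:
        for p in PARTICLES:
            # Remove a trailing particle, but never empty the word.
            if w.endswith(p) and len(w) > len(p):
                w = w[: -len(p)]
                break
        glosses.append(GLOSS_DICT.get(w, w))
    return glosses
```

A real system would also reorder constituents to match sign language grammar; this sketch shows only the particle-stripping and dictionary-lookup step.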

A Low-Cost Speech to Sign Language Converter

  • Le, Minh; Le, Thanh Minh; Bui, Vu Duc; Truong, Son Ngoc
    • International Journal of Computer Science & Network Security, v.21 no.3, pp.37-40, 2021
  • This paper presents the design of a speech-to-sign-language converter for deaf and hard-of-hearing people. The device is low-cost and low-power, and it can work entirely offline. Speech recognition is implemented using the open-source Pocketsphinx library. In this work, we propose a context-oriented language model that measures the similarity between the recognized speech and predefined sentences to decide the output: the output is the stored candidate sentence that best matches the recognized speech. The proposed context-oriented language model improves the speech recognition rate by 21% while working entirely offline. A decision module determines the similarity between the two texts using Levenshtein distance and selects the output sign language, which is generated as a set of sequential images corresponding to the recognized speech. The converter is deployed on a Raspberry Pi Zero board as a low-cost assistive device for deaf users.
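The decision module in the abstract above can be sketched as a plain edit-distance match against the stored sentences, returning the sign-image sequence of the best match. This is only an illustration of the technique; the candidate sentences and image file names are hypothetical, not the authors' data.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical database: predefined sentences -> sign image sequences.
SIGN_DB = {
    "how are you": ["how.png", "you.png"],
    "thank you": ["thank.png", "you.png"],
    "good morning": ["good.png", "morning.png"],
}

def decide_output(recognized: str):
    """Pick the stored sentence closest to the recognized speech and
    return it together with its sign image sequence."""
    best = min(SIGN_DB, key=lambda s: levenshtein(recognized.lower(), s))
    return best, SIGN_DB[best]
```

Restricting the output to a small predefined set is what lets a noisy offline recognizer still select the intended sign sequence.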

The Effects of Sign Language Video Location in e-Learning System for the Hearing-impaired

  • Muhn, Seung Ho; Jung, Kwang Tae
    • Journal of the Ergonomics Society of Korea, v.34 no.6, pp.597-607, 2015
  • Objective: The purpose of this study is to identify the effects of sign language video location in an e-learning system for the hearing-impaired. Background: E-learning is a good way to resolve the inequality of education for the disabled, and providing a sign language video in e-learning for the hearing-impaired is very important for their learning. Although the location of the sign language video is an important design factor, its effect on learning with an e-learning system had not been studied. Method: To identify the effect of sign language video location on learning, prototypes of the system with different video locations were developed. Eighteen people with hearing impairment participated in the experiment. Learning presence, learning immersion, and learning satisfaction were used to measure the learning effects of video location. Results: The bottom-right position was preferred in the preference evaluation of video location, but the learning effect of video location (bottom-left vs. bottom-right) was not significant; that is, the effects on learning presence, immersion, and satisfaction were not statistically significant. Conclusion: Although video location was not a significant factor in the experiment, the bottom-right position is proposed for e-learning system design, because learning presence and satisfaction are slightly higher there and the position is preferred in the subjective evaluation. From the analysis of the interview data, it is also proposed that the design of the sign language video itself be improved for the hearing-impaired. Application: The results of this study can be applied to e-learning system design for the hearing-impaired.

A Study on Semantic Logic Platform of multimedia Sign Language Content (멀티미디어 수화 콘텐츠의 Semantic Logic 플랫폼 연구)

  • Jung, Hoe-Jun; Park, Dea-Woo; Han, Kyung-Don
    • Journal of the Korea Society of Computer and Information, v.14 no.10, pp.199-206, 2009
  • With the development of broadband, multimedia sign language content is being used in deaf education. Most content used in sign language training presents sign language videos that show how Korean (Hangul) words are represented in sign. Users learning sign language for the first time are unfamiliar with its characteristics, so the displayed signs are difficult to understand and to express. In this paper, we design and study a Semantic Logic platform for video-based multimedia sign language content that lets online learners express signs with reference to their attributes.

Development of a Sign Language Learning Assistance System using Mediapipe for Sign Language Education of Deaf-Mutility (청각장애인의 수어 교육을 위한 MediaPipe 활용 수어 학습 보조 시스템 개발)

  • Kim, Jin-Young; Sim, Hyun
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.16 no.6, pp.1355-1362, 2021
  • Recently, the number of people with hearing impairment caused by acquired factors, in addition to congenital hearing impairment, has been increasing, yet the environment for learning sign language is poor. This study therefore presents a sign language (sign language numbers/letters) evaluation system as a learning aid for sign language learners. In this paper, sign language is captured as images using OpenCV and recognized with a convolutional neural network (CNN). In addition, we study a system that recognizes sign language motion using MediaPipe, converts the meaning of the sign into text data, and provides it to users. This enables self-directed learning, allowing learners to judge for themselves whether their handshape (dez) is correct. We therefore develop a sign language learning assistance system, proposing it as a way to support learning of sign language, the main language of communication for the hearing-impaired.
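A minimal sketch of the landmark-matching idea behind systems like the one above: MediaPipe reports 21 hand landmarks per hand, which can be normalized against the wrist and compared to stored templates. The normalization convention and the toy template data below are assumptions for illustration, not the authors' implementation.

```python
import math

def normalize(landmarks):
    """Translate landmarks so the wrist (index 0) is the origin and
    scale by the wrist-to-middle-MCP distance (index 9), a common
    convention with MediaPipe's 21 hand landmarks."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    scale = math.hypot(*shifted[9]) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def classify(landmarks, templates):
    """Nearest-template match by summed Euclidean landmark distance."""
    feat = normalize(landmarks)
    def dist(template):
        return sum(math.hypot(x - tx, y - ty)
                   for (x, y), (tx, ty) in zip(feat, normalize(template)))
    return min(templates, key=lambda name: dist(templates[name]))
```

Normalizing against the wrist makes the comparison invariant to where the hand appears in the frame and to its apparent size, so the templates only encode handshape.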

Types of Subjective Perception of Hearing-Impaired Persons on Sign Language Interpretation (수화언어통역서비스에 대한 청각장애인의 주관적 인식 유형)

  • Oh, Soo-Kyung; Song, Mi-Yeon
    • 재활복지 (Rehabilitation Welfare), v.21 no.4, pp.1-31, 2017
  • This study identifies types of subjective perception of hearing-impaired persons toward sign language interpretation, using Q-methodology, in an attempt to explore ways to develop more useful sign language interpretation services. For this purpose, 35 statements were extracted from the literature and in-depth interviews, and Q-classification was conducted with 20 interviewees. The results identify a common perception favoring the provision of sign language interpreters at institutions frequently used by hearing-impaired persons. They also derive three distinct types of perception: seeking specialization of sign language interpretation, seeking traditional sign language interpretation services, and seeking expansion and improvement of sign language interpretation services. Based on the results, the study suggests stationing sign language interpreters in public facilities, improving the qualification system and education and training courses, specializing by field, improving the practice of sign language interpretation services, providing customized services to the elderly, and employing sign language interpreters in ways that reflect the needs of users.

A Method for Generating Inbetween Frames in Sign Language Animation (수화 애니메이션을 위한 중간 프레임 생성 방법)

  • O, Jeong-Geun; Kim, Sang-Cheol
    • The Transactions of the Korea Information Processing Society, v.7 no.5, pp.1317-1329, 2000
  • Advances in video processing and computer graphics have enabled sign language education systems that can show the motion for an arbitrary sentence using captured video clips of sign language words. In this paper, a method is suggested that generates the frames between the last frame of one word and the first frame of the following word in order to animate the hand motion between them. In our method, we find the hand locations and angles required for in-between frame generation, then capture and store hand images at those locations and angles; generating the in-between frames is then simply a task of finding a sequence of hand angles and locations. Our method is computationally simple and requires a relatively small amount of disk space. Our experiments show that in-between frames for presentation at about 15 fps (frames per second) are achieved, so that smooth animation of the hand motion is possible. Our method improves on previous work in which the computation cost is relatively high or unnecessary images are generated.

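The in-between frame generation described above amounts to interpolating hand location and angle between the last pose of one word and the first pose of the next, then indexing pre-captured hand images by the interpolated values. A minimal linear-interpolation sketch (the pose fields are assumptions; the 15 fps figure is from the abstract):

```python
def inbetween_poses(end_pose, start_pose, n):
    """Linearly interpolate n intermediate hand poses.

    A pose is a (x, y, angle_degrees) tuple; in the paper's scheme each
    interpolated pose would index a pre-captured hand image at that
    location and angle rather than being rendered from scratch.
    """
    frames = []
    for k in range(1, n + 1):
        t = k / (n + 1)  # fraction of the way from end_pose to start_pose
        frames.append(tuple(a + (b - a) * t
                            for a, b in zip(end_pose, start_pose)))
    return frames
```

At 15 fps, a third of a second of transition needs only n = 5 such poses, which is why the method is cheap in both computation and storage.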

Efficient Sign Language Recognition and Classification Using African Buffalo Optimization Using Support Vector Machine System

  • Karthikeyan M. P.; Vu Cao Lam; Dac-Nhuong Le
    • International Journal of Computer Science & Network Security, v.24 no.6, pp.8-16, 2024
  • Communication with the deaf has always been crucial. Deaf and hard-of-hearing persons can now express their thoughts and opinions to teachers through sign language, which has become a universal language and a very effective tool; this helps improve their education and simplifies the referral procedure between them and their teachers. Sign language uses various bodily movements, including those of the arms, legs, and face. Nonverbal physical communication such as pure expressiveness, proximity, and shared interest is distinct from gestures that convey a particular message, and the meanings of gestures vary with social and cultural background. Sign language recognition is a highly popular, ongoing research area in which the SVM has shown value; its limitations in some settings have encouraged developments such as SVMs for very large data sets, multi-class SVMs, and SVMs for unbalanced data sets. Without precise identification of the signs, the right control measures cannot be applied when they are needed. Image processing is one of the methods frequently utilized for the identification and categorization of sign languages. In this work, African Buffalo Optimization with a Support Vector Machine (ABO+SVM) classification is used to help identify and categorize people's sign language. Segmentation by K-means clustering first identifies the sign region, after which color and texture features are extracted. The accuracy, sensitivity, precision, specificity, and F1-score of the proposed ABO+SVM system are validated against the existing classifiers SVM, CNN, and PSO+ANN.

The Expository Dictionary using the Sign Language about Information Communication for Deaf (청각장애인을 위한 정보통신용어 수화해설 사전)

  • Kim Ho-Yong; Seo Yeong-Geon
    • Journal of Digital Contents Society, v.6 no.4, pp.217-222, 2005
  • The purpose of this study is to design and implement a sign language dictionary that helps the deaf understand information and communication terminology. When deaf people, who have difficulties in communication, use the internet, this dictionary can help them access various types of information and express their intentions. For the deaf to utilize the internet as efficiently as hearing people, they must first understand information and communication terminology. To implement the dictionary, we defined the concept of deafness and examined the characteristics of deaf users. In addition, we established design principles for the dictionary and selected the terminology to include. When explaining the terms, we tried to use expressions familiar to the deaf, but sometimes modified them to preserve the original meanings of the terms when producing the sign language videos. This study is applied as a learning aid in information education for the deaf, and the deaf users' understanding of ICT was measured through two tests.
