• Title/Summary/Keyword: Sign Language Translation


A Study on Finger Language Translation System using Machine Learning and Leap Motion (머신러닝과 립 모션을 활용한 지화 번역 시스템 구현에 관한 연구)

  • Son, Da Eun;Go, Hyeong Min;Shin, Haeng yong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.552-554
    • /
    • 2019
  • Deaf people, as well as people with speech disorders, communicate using sign language because communicating by voice is difficult for them. Sign language, however, only allows communication with people who know it, since not everyone uses sign language. In this paper, a finger-spelling (finger language) translation system is proposed and implemented as a means for disabled and non-disabled people to communicate without difficulty. The proposed algorithm recognizes finger-spelling data captured with a Leap Motion controller and trains on the data using machine learning to increase the recognition rate. Simulation results show the performance improvement.
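The abstract does not specify the learning algorithm; as a rough illustration of the general approach — training a classifier on fixed-length hand-feature vectors such as those a Leap Motion controller provides — a sketch with synthetic data (all values invented) might look like this:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for Leap Motion features: 5 fingertip (x, y, z)
# positions flattened to a 15-dim vector per frame, one cluster per letter.
n_classes, per_class = 4, 50
centers = rng.normal(0, 5, size=(n_classes, 15))
X = np.vstack([c + rng.normal(0, 0.5, size=(per_class, 15)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

With well-separated synthetic clusters the held-out accuracy is near 1.0; real finger-spelling data would need per-user calibration and more robust features.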

Morpheme Conversion for Korean Text-to-Sign Language Translation System (한국어-수화 번역시스템을 위한 형태소 변환)

  • Park, Su-Hyun;Kang, Seok-Hoon;Kwon, Hyuk-Chul
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.3
    • /
    • pp.688-702
    • /
    • 1998
  • In this paper, we propose sign language morpheme generation rules corresponding to morpheme analysis for each part of speech. Natural Korean sign language has an extremely limited vocabulary, and the number of grammatical components currently in use is limited as well. In this paper, therefore, we define a natural sign language grammar corresponding to Korean grammar in order to translate natural Korean sentences into the corresponding sign language. Each phrase requires a sign language morpheme generation grammar that is different from the Korean analysis grammar. This grammar is then applied to the morpheme analysis/combination rules and the sentence structure analysis rules, which lets us generate the most natural sign language.
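The paper's rules are defined over Korean grammar; purely as an invented illustration of the core idea — mapping analyzed morphemes to sign glosses while dropping grammatical components that sign language does not express — consider:

```python
# Illustrative sketch of rule-based morpheme -> sign-gloss conversion.
# The lexicon, POS tags, and rules here are invented for demonstration,
# not the paper's actual grammar.
LEXICON = {"나": "ME", "학교": "SCHOOL", "가다": "GO"}
DROP_POS = {"JKS", "JKB", "EF"}  # particles/endings with no sign equivalent

def to_sign_glosses(morphemes):
    """morphemes: list of (surface, part-of-speech) pairs from analysis."""
    glosses = []
    for surface, pos in morphemes:
        if pos in DROP_POS:          # grammatical particles are dropped
            continue
        glosses.append(LEXICON.get(surface, surface.upper()))
    return glosses

# "나는 학교에 가다" analyzed into morphemes:
analysis = [("나", "NP"), ("는", "JKS"), ("학교", "NNG"), ("에", "JKB"), ("가다", "VV")]
print(to_sign_glosses(analysis))  # ['ME', 'SCHOOL', 'GO']
```

A real system would also reorder glosses per sign language syntax rather than keep Korean word order.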


Hand Language Translation Using Kinect

  • Pyo, Junghwan;Kang, Namhyuk;Bang, Jiwon;Jeong, Yongjin
    • Journal of IKEEE
    • /
    • v.18 no.2
    • /
    • pp.291-297
    • /
    • 2014
  • Since improved image processing algorithms made hand gesture recognition practical, sign language translation has been a critical issue for the hearing-impaired. In this paper, we extract human hand figures from a real-time image stream and detect gestures in order to determine which hand sign they represent. We used depth-color calibrated images from the Kinect to extract human hands and built a decision tree to recognize hand gestures. The decision tree uses information such as the number of fingers, the contours, and the hand's position inside a uniformly sized image. We succeeded in recognizing 'Hangul', the Korean alphabet, with a recognition rate of 98.16%. The system's average execution time per letter was about 76.5 ms, a reasonable speed considering that hand language translation is based on almost still images. We expect this research to help communication between the hearing-impaired and people who do not know hand language.
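As a hedged sketch of the decision-tree idea described above (the feature values and letter labels below are invented, not the paper's data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-frame features in the spirit of the paper:
# [finger_count, contour_area, palm_x, palm_y] inside a normalized image.
X = np.array([
    [1, 120.0, 40, 60], [1, 125.0, 42, 58],   # letter "A" (illustrative)
    [2, 180.0, 50, 55], [2, 176.0, 48, 57],   # letter "B"
    [5, 300.0, 60, 50], [5, 310.0, 61, 49],   # letter "C"
])
y = ["A", "A", "B", "B", "C", "C"]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
pred = tree.predict([[5, 305.0, 59, 51]])[0]   # open hand -> "C"
```

A tree over a handful of interpretable features like these is cheap to evaluate per frame, consistent with the ~76.5 ms per letter reported.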

Sign Language Generation with Animation by Adverbial Phrase Analysis (부사어를 활용한 수화 애니메이션 생성)

  • Kim, Sang-Ha;Park, Jong-C.
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.27-32
    • /
    • 2008
  • Sign languages, commonly used in deaf communities, are visual languages that express sign words with motion. The spatiality and motion of a sign language are conveyed mainly through sign words acting as predicates. A predicate is modified by an adverbial phrase, with an accompanying change in its semantics, so the adverbial phrase can also affect the overall spatiality and motion of sign language expressions. In this paper, we analyze the semantic features of adverbial phrases that may affect the motion-related semantics of a predicate when converting Korean expressions into sign language, and propose a system that generates the corresponding animation using these features.


Hybrid HMM for Transitional Gesture Classification in Thai Sign Language Translation

  • Jaruwanawat, Arunee;Chotikakamthorn, Nopporn;Werapan, Worawit
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2004.08a
    • /
    • pp.1106-1110
    • /
    • 2004
  • A human sign language generally comprises both static and dynamic gestures. Each gesture is represented by a hand shape, its position, and, for a dynamic gesture, the hand movement. One of the problems in automated sign language translation is segmenting out the hand movement that is part of a transition from one hand gesture to another. Such a transitional gesture conveys no meaning but serves as a connecting period between two consecutive gestures. Based on the observation that many dynamic gestures in the Thai sign language dictionary are of a quasi-periodic nature, a method was developed to differentiate between a (meaningful) dynamic gesture and a transitional movement. However, some meaningful dynamic gestures are non-periodic, and those gestures cannot be distinguished from a transitional movement using signal quasi-periodicity alone. This paper proposes a hybrid method combining the periodicity-based gesture segmentation method with an HMM-based gesture classifier. The HMM classifier is used to detect dynamic signs of a non-periodic nature; combined with the periodicity-based segmentation, the hybrid scheme can identify segments of a transitional movement. In addition, because the quasi-periodic nature of many dynamic sign gestures is exploited, the dimensionality of the HMM part of the proposed method is significantly reduced, yielding computational savings compared with a standard HMM-based method. The method's recognition performance is reported through experiments on real measurements.
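A minimal sketch of the periodicity test that could underlie the segmentation step, assuming a 1-D motion track per candidate segment (the signals and threshold below are illustrative, not the paper's):

```python
import numpy as np

def periodicity_score(x):
    """Peak of the unbiased autocorrelation over plausible gesture periods.
    Values near 1 suggest a quasi-periodic (meaningful dynamic) gesture;
    low values suggest an aperiodic transitional movement."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    raw = np.correlate(x, x, mode="full")[n - 1:]
    ac = raw / (n - np.arange(n))      # unbiased: divide by overlap length
    ac = ac / ac[0]
    lo, hi = n // 8, n // 2            # skip trivial small lags, cap at n/2
    return float(ac[lo:hi].max())

t = np.linspace(0, 4 * np.pi, 80)
waving = np.sin(t)                                    # repeating oscillation
transition = np.random.default_rng(1).normal(size=80)  # aperiodic movement
```

Segments scoring below a threshold would then be passed to the HMM classifier, which decides between a non-periodic meaningful sign and a transitional movement.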


Design and Implementation of Finger Language Translation System using Raspberry Pi and Leap Motion (라즈베리 파이와 립 모션을 이용한 지화 번역 시스템 설계 및 구현)

  • Jeong, Pil-Seong;Cho, Yang-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.9
    • /
    • pp.2006-2013
    • /
    • 2015
  • It is difficult for deaf people to communicate by voice, so they mostly use speech reading, sign language, writing, and so on. Sign language is the best way for deaf and hearing people to communicate with each other, but both must understand sign language to use it. In this paper, we design and implement a finger-spelling translation system to support communication between deaf and hearing people. We used a Leap Motion controller as an input device that can track finger and hand gestures, and a Raspberry Pi, a low-power single-board computer, to process the input data and translate the finger-spelling. We implemented the application using Node.js and MongoDB. The client application complies with HTML5, so it can be supported on any smart device with a web browser.

System implementation share of voice and sign language (지화인식 기반의 음성 및 SNS 공유 시스템 구현)

  • Kang, Jung-Hun;Yang, Dea-Sik;Oh, Min-Seok;Sir, Jung-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.644-646
    • /
    • 2016


A Design of Sign Language-Text Translation System Using Deep Learning Video Recognition (딥러닝 영상인식을 이용한 수화-텍스트 번역 시스템 설계)

  • Lee, JongMyeong;Kim, Kang-Gyoo;Yoo, Seoyeon;Lee, SeungGeon;Chun, Seunghyun;Beak, JeongYoon;Ha, Ok-Kyoon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.475-476
    • /
    • 2022
  • In this paper, we design a sign language-to-text translation system based on deep learning video recognition using MediaPipe, with the aim of increasing the social participation of deaf people and reducing social discrimination. The main purpose of the proposed system is to provide a smooth communication service between disabled and non-disabled people by recognizing gestures and facial expressions from video of a sign language user collected in real time and translating them into text. In the future, through improved sign recognition and sentence composition, we intend to extend the system into a service that lets deaf and hearing people communicate freely in daily life.
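The paper presents only a design, leaving the recognition step open; one simple, MediaPipe-independent way to match variable-length motion tracks against gesture templates is dynamic time warping (the templates and tracks below are invented):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy "gesture templates": a rising and a falling wrist-height track.
templates = {"HELLO": np.linspace(0, 1, 20), "BYE": np.linspace(1, 0, 20)}

def classify(track):
    """Nearest-template classification under DTW distance."""
    return min(templates, key=lambda k: dtw_distance(track, templates[k]))

observed = np.linspace(0.05, 0.95, 33)   # rising, but different length/offset
```

DTW tolerates differences in gesture speed and length, which is why it is a common baseline before moving to learned sequence models.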


Study on Korean-Korean Sign language Translation Technology for Avatar Sign language Service (아바타 수어 서비스를 위한 한국어-한국수어 변환 기술 연구)

  • Choi, Ji Hoon;Lee, Han-kyu;AHN, ChungHyun
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.459-460
    • /
    • 2020
  • Although Korean Sign Language was recognized as an official language of the Republic of Korea, equal in status to Korean, through the Korean Sign Language Act enacted in February 2016, it is not widely used because of a lack of social awareness and the cost of services. Moreover, because even much of the Korean-language information encountered in daily life is difficult for deaf people to understand, the problem of discrimination in access to information is continually raised. Avatar-based sign language services have emerged as an alternative, but owing to the limits of natural language processing technology for Korean-to-Korean Sign Language translation, they remain confined to template-based services such as weather forecasts, and a lack of technology for expressing non-manual signals has kept them from reaching commercialization. In this paper, we propose a design for a parallel corpus transcription and conversion system for deep learning-based translation from Korean to Korean Sign Language.


Artificial Neural Network for Quantitative Posture Classification in Thai Sign Language Translation System

  • Wasanapongpan, Kumphol;Chotikakamthorn, Nopporn
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2004.08a
    • /
    • pp.1319-1323
    • /
    • 2004
  • In this paper, the problem of Thai sign language recognition using a neural network is considered. The paper addresses the classification of signs that convey quantitative meaning, e.g., large or small. If the signs corresponding to different quantities are treated as belonging to different classes, the recognition error rate of a standard multi-layer perceptron increases as the precision in recognizing different quantities is increased. This is due to the fact that raising the quantitative precision requires increasing the number of (increasingly similar) classes, which leads to more false classifications caused by misinterpreting the amount of quantity a sign conveys. In this paper, instead of treating signs that convey a quantitative attribute of the same quantity type (such as 'size' or 'amount') as belonging to different classes, they are considered instances of a single class. Signs of the same quantity type are then further divided into subclasses according to the quantity level each sign is associated with. With this two-level classification, false classification among the main gesture classes becomes independent of the precision needed in recognizing the quantitative levels. Moreover, the precision of quantitative level classification can be made higher in the recognition phase than in the training phase. A standard multi-layer perceptron with a backpropagation learning algorithm was adapted in the study to implement this two-level classification of quantitative gesture signs. Experimental results obtained using electronic glove measurements of hand postures are included.
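A sketch of the two-level scheme, with a logistic-regression stand-in for the paper's multi-layer perceptron (all features, quantity types, and levels below are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Invented 4-dim glove-posture features. Top level: quantity type.
# Second level: quantity level within the type, shifted along feature 0.
def samples(type_center, level, n=40):
    c = np.array(type_center, dtype=float)
    c[0] += 1.5 * level                 # level moves the sign along feature 0
    return c + rng.normal(0, 0.25, size=(n, 4))

types = {"size": [0, 0, 0, 0], "amount": [0, 8, 8, 0]}
X, y_type, y_level = [], [], []
for name, center in types.items():
    for level in range(3):
        pts = samples(center, level)
        X.append(pts)
        y_type += [name] * len(pts)
        y_level += [level] * len(pts)
X, y_type, y_level = np.vstack(X), np.array(y_type), np.array(y_level)

# Level 1: classify the gesture type; level 2: one per-type level classifier.
top = LogisticRegression(max_iter=1000).fit(X, y_type)
sub = {name: LogisticRegression(max_iter=1000).fit(X[y_type == name],
                                                   y_level[y_type == name])
       for name in types}

def classify(x):
    t = top.predict([x])[0]
    return t, int(sub[t].predict([x])[0])
```

Because the top-level classifier never sees the fine-grained levels, adding levels refines only the per-type subclassifiers, mirroring the paper's argument that type confusion stays independent of quantitative precision.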
