• Title/Summary/Keyword: sign language translation


Sign Language Translation Using Deep Convolutional Neural Networks

  • Abiyev, Rahib H.;Arslan, Murat;Idoko, John Bush
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.631-653 / 2020
  • Sign language is a natural, visually oriented, non-verbal communication channel that conveys meaning through facial and bodily expressions, postures, and gestures. It is primarily used for communication with people who are deaf or hard of hearing. To enable such communication to be understood quickly and accurately, this paper designs a sign language translation system comprising object detection and classification stages. First, the Single Shot MultiBox Detector (SSD) architecture is used for hand detection; then a deep learning structure based on Inception v3 combined with a Support Vector Machine (SVM), integrating feature extraction and classification, translates the detected hand gestures. A sign language fingerspelling dataset is used to design the proposed model. The results and comparative analysis demonstrate the efficiency of the proposed hybrid structure for sign language translation.
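The two-stage pipeline the abstract describes (hand detection, then feature extraction plus classification) can be sketched as follows; the SSD detector, Inception-v3 features, and SVM are replaced by trivial stand-ins, and all names, shapes, and the fingerspelling alphabet are invented for illustration:

```python
import numpy as np

ALPHABET = list("abcdefghijklmnopqrstuvwxyz")
rng = np.random.default_rng(0)
PROJ = rng.standard_normal((64 * 64, 32))             # fixed random projection
CENTROIDS = rng.standard_normal((len(ALPHABET), 32))  # stand-in class centroids

def detect_hand(frame):
    """Stand-in for the SSD detector: return a fixed box (x, y, w, h)."""
    h, w = frame.shape[:2]
    return (w // 4, h // 4, 64, 64)

def extract_features(crop):
    """Stand-in for Inception-v3 features: a fixed random projection."""
    return crop.ravel() @ PROJ

def classify(feat):
    """Stand-in for the SVM: nearest class centroid."""
    dists = np.linalg.norm(CENTROIDS - feat, axis=1)
    return ALPHABET[int(np.argmin(dists))]

def translate(frame):
    """Detection stage, then feature extraction + classification stage."""
    x, y, w, h = detect_hand(frame)
    crop = frame[y:y + h, x:x + w]
    return classify(extract_features(crop))

letter = translate(np.zeros((128, 128)))
```

The point of the sketch is the staging: cropping to the detected hand before classification is what lets the classifier ignore background clutter.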

A Study on the Development Process of Sign Language Interpreting Content in the Medical Setting (의료 환경의 수어통역 콘텐츠 개발 과정에 관한 연구)

  • Lee, Jun-Woo;Oh, Byung-Mo;Cho, Jung-Hwan;Kang, Yi-Sul
    • The Journal of the Korea Contents Association / v.21 no.12 / pp.505-516 / 2021
  • The purpose of this study is to develop sign language interpreting content for the medical setting that improves Deaf people's access to medical services in situations where professional, accurate medical sign language interpreting is scarce. To this end, we conducted a literature review, individual interviews with Deaf people, on-site requirement surveys of sign language interpreters and sign language experts, and consultations with medical and sign language experts. Based on these, we developed sign language interpreting content such as scenarios for common care situations, basic medical terms, and explanations of medical terms. Through this study, we developed medical sign language content that reflects Deaf people's circumstances and the medical importance of each topic, promoting expertise in the medical sign language area, and built a responsive sign language medical dictionary website that delivers information effectively and efficiently to Deaf people and sign language interpreters. We also recognized the need for and importance of sign language translation that treats Deaf people as its primary agents.

A construction of dictionary for Korean Text to Sign Language Translation (한글문장-수화 번역기를 위한 사전구성)

  • 권경혁;민홍기
    • Proceedings of the IEEK Conference / 1998.10a / pp.841-844 / 1998
  • A Korean-text-to-sign-language translator could be used both to help deaf and hard-of-hearing people learn written letters and to converse with hearing people. This paper describes dictionaries useful for developing such a translator: a base sign language dictionary, a compound sign language dictionary, and a resemble (synonym) sign language dictionary. Since Korean Sign Language consists of only about 6,000 words, additional dictionaries are required to match them to written Korean. We design a base sign language dictionary composed of basic symbols and moving pictures of Korean Sign Language, and define a compound sign language dictionary whose entries are composed of base sign language symbols. In addition, the resemble sign language dictionary offers sign symbols for different words that are used with the same meaning in conversation. Using these dictionaries, the translator can look up signs quickly during Korean-text-to-sign-language translation and save storage space, and they also mitigate the shortage of sign language words encountered during translation.
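The three dictionaries described above might be organized as in this toy lookup; every entry below is an invented example, not the paper's data:

```python
# Base dictionary: word -> base sign symbol (stands in for a clip id).
BASE = {
    "school": "SIGN_SCHOOL",
    "go": "SIGN_GO",
    "before": "SIGN_BEFORE",
}
# Compound dictionary: compound word -> sequence of base symbols.
COMPOUND = {
    "schoolmate": ["SIGN_SCHOOL", "SIGN_FRIEND"],
}
# Resemble (synonym) dictionary: word -> headword sharing the same sign.
RESEMBLE = {
    "institute": "school",
}

def lookup(word):
    """Resolve a word to a list of sign symbols, trying each dictionary."""
    if word in BASE:
        return [BASE[word]]
    if word in COMPOUND:
        return COMPOUND[word]
    if word in RESEMBLE:
        return lookup(RESEMBLE[word])
    return []  # out of vocabulary: a real system might fall back to fingerspelling
```

Keeping compounds as sequences of base symbols, rather than separate clips, is what saves storage: only the base dictionary holds actual sign data.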


CNN-based Online Sign Language Translation Counseling System (CNN기반의 온라인 수어통역 상담 시스템에 관한 연구)

  • Park, Won-Cheol;Park, Koo-Rack
    • Journal of Convergence for Information Technology / v.11 no.5 / pp.17-22 / 2021
  • It is difficult for hearing-impaired people to use counseling services without sign language interpretation, and because sign language interpreters are in short supply, connecting to one often takes a long time or is simply unavailable. Therefore, in this paper, we propose a system that captures sign language as images using OpenCV and a CNN (Convolutional Neural Network), recognizes the sign language motion, converts its meaning into text data, and provides it to users. The counselor can conduct counseling by reading the stored sign language translation of the counseling content. Consultation is possible without a professional sign language interpreter, reducing the burden of waiting for one. If the proposed system is applied to counseling services for the hearing impaired, it is expected to improve the effectiveness of counseling and to promote academic research on counseling for the hearing impaired.
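The capture-recognize-text flow might look like the following sketch, with the OpenCV camera feed and the trained CNN replaced by synthetic frames and a stub recognizer so it runs standalone; the gloss list and the intensity-based stub are invented:

```python
import numpy as np

GLOSSES = ["HELLO", "HELP", "THANKS"]

def recognize(frame):
    """Stub for the CNN: map mean pixel intensity to a gloss index."""
    return GLOSSES[int(frame.mean()) % len(GLOSSES)]

def session_transcript(frames):
    """Collect per-frame glosses into the text shown to the counselor,
    dropping consecutive duplicates (one gloss per held sign)."""
    out = []
    for f in frames:
        g = recognize(f)
        if not out or out[-1] != g:
            out.append(g)
    return " ".join(out)

# Synthetic "camera" frames standing in for cv2.VideoCapture reads.
frames = [np.full((32, 32), v) for v in (0, 0, 1, 2)]
text = session_transcript(frames)
```

Storing the transcript rather than streaming video is what lets the counselor respond asynchronously, without an interpreter in the loop.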

Three-Dimensional Convolutional Vision Transformer for Sign Language Translation (수어 번역을 위한 3차원 컨볼루션 비전 트랜스포머)

  • Horyeor Seong;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society / v.13 no.3 / pp.140-147 / 2024
  • In the Republic of Korea, people with hearing impairments are the second-largest demographic within the registered disability community, following those with physical disabilities. Despite this demographic significance, research on sign language translation technology is limited for several reasons, including the small market size and the lack of adequately annotated datasets. Despite these difficulties, a few researchers continue to improve the performance of sign language translation technologies by employing recent advances in deep learning, for example the transformer architecture, as transformer-based models have demonstrated noteworthy performance in tasks such as action recognition and video classification. This study focuses on enhancing the recognition performance of sign language translation by combining transformers with a 3D-CNN. Experimental evaluations on the PHOENIX-Weather-2014T dataset [1] show that the proposed model achieves performance comparable to existing models in terms of floating-point operations (FLOPs).
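How a 3D convolution can turn a clip into tokens for a transformer can be illustrated minimally. This is a from-scratch toy showing the combination, not the paper's model; clip size, kernel bank, and the bare single-head attention are all invented:

```python
import numpy as np

def conv3d_tokens(video, kernels, stride=2):
    """Embed a clip into tokens: each token is a 3D spatiotemporal patch
    projected through a bank of kernels (a stand-in for the 3D-CNN stem)."""
    T, H, W = video.shape
    d, kt, kh, kw = kernels.shape
    toks = []
    for t in range(0, T - kt + 1, stride):
        for y in range(0, H - kh + 1, stride):
            for x in range(0, W - kw + 1, stride):
                patch = video[t:t + kt, y:y + kh, x:x + kw]
                toks.append([(patch * k).sum() for k in kernels])
    return np.array(toks)

def self_attention(X):
    """Single-head self-attention over the token sequence (the transformer
    part, without learned projections or positional encoding)."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ X

rng = np.random.default_rng(0)
video = rng.standard_normal((8, 8, 8))        # (frames, height, width)
kernels = rng.standard_normal((16, 2, 4, 4))  # 16-dim token embedding
tokens = conv3d_tokens(video, kernels)
out = self_attention(tokens)
```

The division of labor is the point: the 3D convolution captures short-range motion within each patch, and attention relates patches across the whole clip.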

Sign language translation using video captioning and sign language recognition using action recognition (비디오 캡셔닝을 적용한 수어 번역 및 행동 인식을 적용한 수어 인식)

  • Gi-Duk Kim;Geun-Hoo Lee
    • Proceedings of the Korean Society of Computer Information Conference / 2024.01a / pp.317-319 / 2024
  • This paper proposes sign language translation using a video captioning algorithm and sign language recognition using an action recognition algorithm. In the video captioning algorithm, 40 consecutive input frames are embedded through a CNN and fed into a transformer, which outputs a sentence. The action recognition algorithm randomly samples 40 indices per video, embeds the 40 consecutive frames at those indices through a CNN, and outputs a recognition result through an RNN model combining a GRU and a transformer. Sign language translation achieved a BLEU-4 score of 7.85 and a CIDEr score of 53.12, and sign language recognition achieved an accuracy of 96.26%.
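The two frame-sampling strategies the abstract mentions (consecutive frames for the captioning branch, randomly sampled indices for the recognition branch) can be sketched as below; the exact sampling scheme is a guess from the abstract:

```python
import random

def sample_consecutive(num_frames, length=40):
    """Captioning branch: pick `length` consecutive frame indices."""
    start = random.randrange(num_frames - length + 1)
    return list(range(start, start + length))

def sample_random(num_frames, length=40, seed=None):
    """Recognition branch: pick `length` distinct random indices, sorted
    so temporal order is preserved when frames are fed to the model."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(num_frames), length))

clip = sample_consecutive(100)
subset = sample_random(100, seed=0)
```

Random sampling spreads the 40 frames over the whole video, which suits classification; captioning keeps a contiguous window so the output sentence aligns with one signing span.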


Development of Sign Language Translation System using Motion Recognition of Kinect (키넥트의 모션 인식 기능을 이용한 수화번역 시스템 개발)

  • Lee, Hyun-Suk;Kim, Seung-Pil;Chung, Wan-Young
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.235-242 / 2013
  • In this paper, a system that translates sign language through motion recognition with the Kinect camera is developed for communication between hearing-impaired people or people with language disabilities and hearing people. The proposed translation algorithm is built on core Kinect functions, and two normalization methods, length normalization and elbow normalization, are introduced to improve translation accuracy across different sign language users. The sign language data are then compared in charts to assess how effective these normalization methods are. The accuracy of the program is demonstrated by entering 10 databases and translating sign languages ranging from simple to complex signs. In addition, the reliability of the translation is improved by applying the program to people with various body shapes and correcting measurement errors caused by body shape.
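Length normalization of the kind described, scaling skeleton coordinates by a reference bone length so signers with different body sizes map to comparable coordinates, can be sketched as follows; the joint names and the choice of upper-arm length as the reference are assumptions, not the paper's exact scheme:

```python
import numpy as np

def length_normalize(joints):
    """Translate joints so the shoulder is the origin, then divide by the
    shoulder-to-elbow (upper-arm) length. After this, the same sign made
    by a larger or smaller body yields the same coordinates."""
    origin = joints["shoulder"]
    scale = np.linalg.norm(joints["elbow"] - origin)
    return {name: (p - origin) / scale for name, p in joints.items()}

joints = {
    "shoulder": np.array([0.0, 0.0, 0.0]),
    "elbow": np.array([0.0, -0.30, 0.0]),
    "hand": np.array([0.20, -0.55, 0.10]),
}
norm = length_normalize(joints)
```

A quick check of the invariance: doubling every coordinate (a signer twice the size) leaves the normalized pose unchanged.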

Sentence Type Identification in Korean: Applications to Korean-Sign Language Translation and Korean Speech Synthesis (한국어 문장 유형의 자동 분류: 한국어-수화 변환 및 한국어 음성 합성에의 응용)

  • Chung, Jin-Woo;Lee, Ho-Joon;Park, Jong-C.
    • Journal of the HCI Society of Korea / v.5 no.1 / pp.25-35 / 2010
  • This paper proposes a method for automatically identifying sentence types in Korean and using the identified type information to improve naturalness in sign language generation and speech synthesis. Korean sentences are usually categorized into five types: declarative, imperative, propositive, interrogative, and exclamatory. However, these types are known to be quite ambiguous to identify in dialogues. In this paper, we present additional morphological and syntactic clues for the sentence type and propose a rule-based procedure for identifying it using these clues. The experimental results show that our method achieves reasonable performance. We also describe how the sentence type is used to generate non-manual signals in Korean-to-Korean Sign Language translation and appropriate intonation in Korean speech synthesis. Since using sentence type information in speech synthesis and sign language generation has not been studied much previously, we anticipate that our method will contribute to research on generating more natural speech and sign language expressions.
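A rule-based identifier keyed on sentence-final endings, in the spirit of the paper's procedure, can be sketched with a toy rule table; the endings below are a small illustrative subset, not the paper's clue set, and real dialogue needs the richer morphological and syntactic clues the paper proposes:

```python
# Toy rule table: (sentence-final ending, sentence type), checked in order.
RULES = [
    ("습니까", "interrogative"),
    ("읍시다", "propositive"),
    ("자", "propositive"),
    ("아라", "imperative"),
    ("어라", "imperative"),
    ("십시오", "imperative"),
    ("구나", "exclamatory"),
    ("군요", "exclamatory"),
]

def sentence_type(sentence):
    """Classify a Korean sentence into one of the five types."""
    if sentence.rstrip().endswith("?"):
        return "interrogative"       # punctuation clue takes priority
    s = sentence.rstrip(".!? ")
    for ending, stype in RULES:
        if s.endswith(ending):
            return stype
    return "declarative"             # default when no clue fires
```

The identified type then drives downstream generation, e.g. raised eyebrows as a non-manual signal for interrogatives, or a rising final intonation contour in synthesis.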


Design and Implementation of Data Acquisition and Storage Systems for Multi-view Points Sign Language (다시점 수어 데이터 획득 및 저장 시스템 설계 및 구현)

  • Kim, Geunmo;Kim, Bongjae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.3 / pp.63-68 / 2022
  • According to the 2021 Disability Statistics Annual Report by the Korea Institute for the Development of Disabled Persons, there are 395,789 people with hearing impairment in Korea. They experience considerable inconvenience due to hearing impairment, and many studies on recognition and translation of Korean Sign Language are being conducted to address this. In sign language recognition and translation research, collecting sign language data is difficult because few people use sign language professionally. In addition, most existing data are sign language recordings taken from the front of the signer. To solve this problem, this paper designs and implements a storage system that collects sign language data in real time from multiple viewpoints, rather than a single viewpoint, and stores and manages it with high usability.

A Study on Finger Language Translation System using Machine Learning and Leap Motion (머신러닝과 립 모션을 활용한 지화 번역 시스템 구현에 관한 연구)

  • Son, Da Eun;Go, Hyeong Min;Shin, Haeng yong
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.552-554 / 2019
  • People who are deaf or have speech disorders communicate using sign language, since communicating by voice is difficult for them. However, sign language limits communication to people who know it, because not everyone uses sign language. In this paper, a fingerspelling translation system is proposed and implemented as a means for disabled and non-disabled people to communicate without difficulty. The proposed algorithm captures fingerspelling data with a Leap Motion controller and trains on the data using machine learning to increase the recognition rate. Simulation results show the resulting performance improvement.
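The recognize-from-hand-geometry idea can be sketched as below; the frame layout, the fingertip-distance features, and the nearest-neighbor stand-in for the paper's machine-learning model are all assumptions (the real Leap Motion API exposes far richer hand data):

```python
import numpy as np

TIPS = ["thumb", "index", "middle", "ring", "pinky"]

def features(frame):
    """Feature vector: distance of each fingertip from the palm centre.
    `frame` maps joint names to 3-D positions (invented layout)."""
    palm = frame["palm"]
    return np.array([np.linalg.norm(frame[t] - palm) for t in TIPS])

class NearestNeighbor:
    """Stand-in for the paper's machine-learning classifier."""
    def fit(self, X, y):
        self.X, self.y = np.asarray(X), list(y)
        return self
    def predict(self, x):
        d = np.linalg.norm(self.X - x, axis=1)
        return self.y[int(np.argmin(d))]

def make_frame(spread):
    """Synthetic hand: fingertips `spread` units away from the palm."""
    return {"palm": np.zeros(3),
            **{t: np.array([i * spread, spread, 0.0])
               for i, t in enumerate(TIPS)}}

train = [features(make_frame(s)) for s in (1.0, 3.0)]
clf = NearestNeighbor().fit(train, ["fist", "open"])
pred = clf.predict(features(make_frame(2.8)))
```

Reducing each captured hand to a small geometric feature vector is what makes the learning problem tractable from the device's sparse joint data.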