• Title/Abstract/Keyword: gesture language


계절의 변화 원인에 대한 초등학생들의 설명에서 확인된 정신 모델과 묘사적 몸짓의 관계 분석 (The Relationship between the Mental Model and the Depictive Gestures Observed in the Explanations of Elementary School Students about the Reason Why Seasons change)

  • 김나영;양일호;고민석
    • 대한지구과학교육학회지 / Vol. 7, No. 3 / pp.358-370 / 2014
  • The purpose of this study is to analyze the relationship between the mental models and the depictive gestures observed in elementary school students' explanations of why the seasons change. Analyzing the gestures associated with each mental model, students of the CM type remembered the model as "motion" and produced more "exphoric" gestures, expressing the concept through gesture as a language. Students of the CF type remembered it through "writings or pictures" and used metaphoric gestures when explaining some alternative conceptions. Students of the CF-UM type explained verbally in detail and produced many "lexical" gestures. Analyzing the depictive gestures for sub-topics such as rotation, revolution, and meridian altitude, a wide variety of gesture types appeared: indicating with fingers, palms, arms, ball-point pens, or fists, or drawing, spinning, and pointing. Through these gestures we could examine the students' conceptual understanding. In addition, by analyzing inconsistencies among external representations (verbal language and gesture, writing and gesture, picture and gesture), we found that gestures can help reveal students' mental models; in some cases, information that could not be conveyed by verbal explanations or drawings was expressed in gesture. Finally, we examined two participants who showed conspicuous differences: one seemed to be wrong because he used his own idiosyncratic expressions but gestured precisely, while the other seemed accurate in speech but, when his gestures were analyzed, held fanciful conceptions.

Vision- Based Finger Spelling Recognition for Korean Sign Language

  • Park Jun;Lee Dae-hyun
    • 한국멀티미디어학회논문지 / Vol. 8, No. 6 / pp.768-775 / 2005
  • Because sign languages are the main means of communication among hearing-impaired people, there are communication difficulties between speech-oriented and sign-language-oriented people. Automated sign-language recognition may resolve these problems. In sign languages, finger spelling is used to spell names and words that are not listed in the dictionary. There have been research activities on gesture and posture recognition using glove-based devices; however, these devices are often expensive, cumbersome, and inadequate for recognizing elaborate finger spelling. The use of colored patches or gloves also causes discomfort. In this paper, a vision-based finger spelling recognition system is introduced. In our method, the captured hand region is separated from the background using a skin-detection algorithm, assuming that there are no skin-colored objects in the background. Hand postures are then recognized using a two-dimensional grid analysis method. Our recognition system is not sensitive to the size or rotation of the input posture images. By optimizing the weights of the posture features with a genetic algorithm, our system achieved accuracy matching that of systems using devices or colored gloves. We applied our posture recognition system to Korean Sign Language, achieving better than 93% accuracy.
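
The two-dimensional grid analysis described above can be sketched as follows, assuming an already-segmented binary hand mask (the grid size and occupancy-ratio features are illustrative choices, not the authors' exact design). Cropping to the bounding box before gridding is what makes the features insensitive to hand size:

```python
import numpy as np

def grid_features(mask, grid=(4, 4)):
    """Occupancy ratio of the hand region in each cell of a grid laid
    over the hand's bounding box.  Cropping to the bounding box first
    makes the feature vector insensitive to hand size."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    gh, gw = grid
    h, w = crop.shape
    feats = np.empty(gh * gw)
    for i in range(gh):
        for j in range(gw):
            cell = crop[i * h // gh:(i + 1) * h // gh,
                        j * w // gw:(j + 1) * w // gw]
            feats[i * gw + j] = cell.mean() if cell.size else 0.0
    return feats
```

A genetic algorithm would then weight these features before matching postures, as the abstract describes; that optimization step is omitted here.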


Korean /l/-flapping in an /i/-/i/ context

  • Son, Minjung
    • 말소리와 음성과학 / Vol. 7, No. 1 / pp.151-163 / 2015
  • In this study, we aim to describe the kinematic characteristics of Korean /l/-flapping at two speech rates (fast vs. comfortable). Production data were collected from seven native speakers of Seoul Korean (four females and three males) using electromagnetic midsagittal articulometry (EMMA), which provided two-dimensional data in the x-y plane. We examined kinematic properties of the vertical/horizontal tongue tip gesture, the vertical/horizontal (rear) tongue body gesture, and the jaw gesture in an /i/-/i/ context. Gestural landmarks of the vertical tongue tip gesture were measured directly and served as the anchoring time points to which measures of the other trajectories were referred. Velocity profiles, closing/opening spatiotemporal properties, constriction duration, and constriction minima were analyzed. The results are summarized as follows. First, the spatiotemporal values of the vertical tongue tip gesture were gradiently distributed on a continuum, showing more reduction at the fast speech rate but not a single instance of categorical reduction (deletion). Second, Korean /l/-flapping predominantly exhibited a backward-sliding tongue tip movement (83% of productions), which clearly distinguishes it from the forward-sliding movement of English. Lastly, there was an indication of vocalic reduction at the fast rate, truncating the spatial displacement of the jaw and tongue body, although we did not observe positional variation with speech rate. The present study shows that Korean /l/-flapping mixes articulatory properties of flapping sounds in other languages such as English and Xiangxiang Chinese. It demonstrates a language-universal property, the gradient nature of flapping, compatible with other languages; on the other hand, it also shows a language-particular property, distinguished from English, in that a backward gliding movement occurs during the tongue tip closing movement. Although no vocalic reduction in V2 was observed in terms of jaw and tongue body height, the spatial displacement of these articulators still suggests truncation at the fast speech rate.

How Well Did We Know About Our Communication? "Origins of Human Communication"

  • Jung-Woo Son
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / Vol. 34, No. 1 / pp.57-58 / 2023
  • Through accurate observation and the results of experimental studies of great apes, the author tells us exactly what we have not known about human communication. He persuasively conveys to the reader the grand history of development from great-ape gestures to human gestures to human speech. Given that great-ape and human gestures were the origin of human vocal language, we realize once again that our language is, after all, an "embodied language."

『제스처 라이프』에 나타난 '차별'과 '차이'의 징후적 읽기 (A Symptomatic Reading of 'Discrimination' and 'Difference' in A Gesture Life)

  • 이석구
    • 영어영문학 / Vol. 56, No. 5 / pp.907-930 / 2010
  • Most previous studies on A Gesture Life focused on illuminating the role and significance of Kkutaeh, the Korean comfort woman, whom Hata runs across at a military camp in the Burmese jungle. For instance, Carroll Hamilton argues that the return of Kkutaeh as a traumatic subject disrupts Hata's nationalist narrative, causing the protagonist's eventual failure at national enfranchisement. However, this paper focuses on Hata's relationship with Bedley Run, the sleepy suburban white town in which the protagonist settles down right after immigrating to the US. The racial/racist nature of Bedley Run has not received due critical attention, although a few studies on the novel saw Hata's gestures as a survival tactic deployed against the hostile environment of his new host society. This paper, resorting to Pierre Macherey's thesis on symptomatic reading, exposes what Hata, the narrator/protagonist, hides from his readers concerning his status in his much-beloved town; and it also explores the subversive significance of Hata's ethnic memories. The aim of this study is, after all, to map both the subversive possibilities and the limitations of Hata's immigrant narrative as a bildungsroman.

Artificial Neural Network for Quantitative Posture Classification in Thai Sign Language Translation System

  • Wasanapongpan, Kumphol;Chotikakamthorn, Nopporn
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2004 / pp.1319-1323 / 2004
  • In this paper, the problem of Thai sign language recognition using a neural network is considered. The paper addresses the classification of signs conveying quantitative meaning, e.g., large or small. If signs corresponding to different quantities are treated as belonging to different classes, the recognition error rate of a standard multi-layer Perceptron increases as the precision in distinguishing quantities increases: higher quantitative precision requires more (increasingly similar) classes, which leads to more false classifications through misinterpreting the amount of quantity a sign conveys. In this paper, instead of treating signs conveying quantitative attributes of the same quantity type (such as 'size' or 'amount') as belonging to different classes, they are considered instances of the same class. Signs of the same quantity type are then further divided into subclasses according to the level of quantity each sign is associated with. With this two-level classification, false classification among the main gesture classes is made independent of the precision needed in recognizing quantitative levels. Moreover, the precision of quantitative-level classification can be made higher in the recognition phase than in the training phase. A standard multi-layer Perceptron with a back-propagation learning algorithm was adapted to implement this two-level classification of quantitative gesture signs. Experimental results obtained using electronic-glove measurements of hand postures are included.
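
The two-level scheme above can be illustrated with a small stand-in classifier: a first level picks the quantity *type*, and a second level picks the quantity *level* within that type. This sketch uses nearest-centroid decisions instead of the paper's multi-layer Perceptron; the class structure, not the classifier, is the point:

```python
import numpy as np

class TwoLevelClassifier:
    """Level 1 picks the quantity type (e.g. 'size', 'amount');
    level 2 picks the quantity level within the chosen type.
    Nearest-centroid decisions stand in for the paper's Perceptron."""

    def fit(self, X, types, levels):
        self.type_centroids = {t: X[types == t].mean(axis=0)
                               for t in set(types)}
        self.level_centroids = {(t, l): X[(types == t) & (levels == l)].mean(axis=0)
                                for t in set(types)
                                for l in set(levels[types == t])}
        return self

    def predict(self, x):
        # level 1: main gesture class (quantity type)
        t = min(self.type_centroids,
                key=lambda c: np.linalg.norm(x - self.type_centroids[c]))
        # level 2: quantity level, searched only within the chosen type
        l = min((lv for ty, lv in self.level_centroids if ty == t),
                key=lambda lv: np.linalg.norm(x - self.level_centroids[(t, lv)]))
        return t, l
```

Because level-2 decisions never compete across types, adding finer quantity levels cannot increase confusion among the main gesture classes, which is the property the abstract emphasizes.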


제스처 형태의 한글입력을 위한 오토마타에 관한 연구 (A Study on the Automata for Hangul Input of Gesture Type)

  • 임양원;임한규
    • 한국산업정보학회논문지 / Vol. 16, No. 2 / pp.49-58 / 2011
  • With the spread of smart devices using touch screens, Hangul input methods have also diversified. In this paper, we survey and analyze Hangul input methods suitable for smart devices and, using automata theory, present a simple and efficient automaton that can be used in a gesture-type Hangul input method suited to touch UIs.
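
For context, the composition step such an automaton must perform can be sketched with the standard Unicode Hangul formula. The jamo tables below are a subset, and a real gesture-input automaton also needs a backtracking transition when a batchim is followed by a vowel; both are simplifications, not the paper's design:

```python
# Choseong/jungseong/jongseong indices in the Unicode composition tables
# (a subset, enough for the demo below)
CHO = {'ㄱ': 0, 'ㄴ': 2, 'ㅁ': 6, 'ㅇ': 11, 'ㅎ': 18}
JUNG = {'ㅏ': 0, 'ㅗ': 8, 'ㅜ': 13, 'ㅡ': 18, 'ㅣ': 20}
JONG = {'': 0, 'ㄱ': 1, 'ㄴ': 4, 'ㄹ': 8, 'ㅁ': 16, 'ㅇ': 21}

def compose(cho, jung, jong=''):
    """Emit one precomposed syllable via the standard Unicode formula."""
    return chr(0xAC00 + (CHO[cho] * 21 + JUNG[jung]) * 28 + JONG[jong])

def hangul_automaton(jamo_seq):
    """Three-state walk (choseong -> jungseong -> jongseong) over a jamo
    stream, flushing a syllable whenever the next jamo cannot extend it.
    Real input automata also split a batchim when a vowel follows
    (e.g. ㄴ + ㅏ); that backtracking transition is omitted here."""
    out, cho, jung, jong = [], None, None, ''
    for j in jamo_seq:
        if cho is None:
            cho = j
        elif jung is None:
            jung = j
        elif jong == '' and j in JONG:
            jong = j
        else:
            out.append(compose(cho, jung, jong))
            cho, jung, jong = j, None, ''
    out.append(compose(cho, jung, jong))
    return ''.join(out)
```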

립모션 센서 기반 증강현실 인지재활 훈련시스템을 위한 합성곱신경망 손동작 인식 (Hand Gesture Recognition with Convolution Neural Networks for Augmented Reality Cognitive Rehabilitation System Based on Leap Motion Controller)

  • 송근산;이현주;태기식
    • 대한의용생체공학회:의공학회지 / Vol. 42, No. 4 / pp.186-192 / 2021
  • In this paper, we evaluated the prediction accuracy of an Euler-angle spectrogram classification method using a convolutional neural network (CNN) for hand gesture recognition in an augmented reality (AR) cognitive rehabilitation system based on the Leap Motion Controller (LMC). A conventional support vector machine (SVM) approach to hand gesture recognition shows 91.3% accuracy over multiple motions. In this paper, five hand gestures ("Promise", "Bunny", "Close", "Victory", and "Thumb") were selected and each measured 100 times to test the utility of the spectral classification technique. In validation, all five hand gestures were correctly predicted 100% of the time, indicating recognition accuracy superior to the conventional SVM method. This suggests that CNN-based hand gesture recognition is more useful for LMC-based AR cognitive rehabilitation training systems than SVM-based sign language recognition.
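
The spectrogram front end implied above can be sketched as follows; the frame length, hop size, and Hann window are illustrative choices, not parameters reported by the paper:

```python
import numpy as np

def euler_spectrogram(angles, frame=32, hop=16):
    """Magnitude spectrogram of one Euler-angle channel sampled over
    time (the image that would be fed to the CNN)."""
    win = np.hanning(frame)
    frames = [angles[i:i + frame] * win
              for i in range(0, len(angles) - frame + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq_bins, time_frames)
```

Stacking one such spectrogram per joint angle reported by the LMC would yield a multi-channel image for the CNN; the network itself is omitted here.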

증강현실의 3D 객체 조작을 위한 핸드-제스쳐 인터페이스 구현 (Implementation of Hand-Gesture Interface to manipulate a 3D Object of Augmented Reality)

  • 장명수;이우범
    • 한국인터넷방송통신학회논문지 / Vol. 16, No. 4 / pp.117-123 / 2016
  • In this paper, we implement a hand-gesture interface that recognizes the user's finger gestures to manipulate 3D objects in an augmented reality (AR) environment. The implemented interface extracts the hand region from the input camera image and creates an augmented object from a hand marker formed by the user's hand gesture. 3D-object manipulation corresponding to a user gesture is performed by analyzing correlations among the area ratio of the hand region, the number of fingers, and the change in the center point of the hand region. To evaluate the performance of the implemented AR 3D-object manipulation interface, 3D objects were built with OpenGL, and the entire hand-marker and gesture-recognition pipeline was implemented in C++ using the OpenCV library. As a result, the average recognition rate for each user hand-gesture command mode exceeded 90%, demonstrating a successful interface.
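
The correlation analysis described above can be sketched as a frame-to-frame rule that maps changes in hand-region area and centroid position to object-manipulation commands. The thresholds and command names below are hypothetical, and the finger-count cue is omitted:

```python
import numpy as np

def gesture_command(prev_mask, cur_mask, scale_ratio=1.2, move_dist=5.0):
    """Map frame-to-frame changes in hand-region area and centroid to a
    3D-object command.  Thresholds and command names are hypothetical."""
    a0, a1 = prev_mask.sum(), cur_mask.sum()
    c0 = np.array(np.nonzero(prev_mask)).mean(axis=1)   # centroid (row, col)
    c1 = np.array(np.nonzero(cur_mask)).mean(axis=1)
    if a1 / a0 > scale_ratio:
        return 'scale_up'      # hand region grew: enlarge the object
    if a0 / a1 > scale_ratio:
        return 'scale_down'
    if np.linalg.norm(c1 - c0) > move_dist:
        return 'translate'     # hand moved: drag the object
    return 'hold'
```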

사전 자세에 따른 근전도 기반 손 제스처 인식 (Recognition of hand gestures with different prior postures using EMG signals)

  • 최현태;김덕화;장원두
    • 사물인터넷융복합논문지 / Vol. 9, No. 6 / pp.51-56 / 2023
  • Hand gesture recognition is an important technology for communication by people who have difficulty using spoken language. EMG signals, widely used for gesture recognition, vary with the prior posture of the hand, so prior posture is expected to make recognition difficult; however, studies on this issue are hard to find. In this study, we analyzed how gesture recognition performance changes with the prior posture. To this end, EMG signals for motions performed from different prior postures were measured from a total of 20 subjects, and gesture recognition was tested. When the prior posture was the same across training and test data, the average accuracy was 89.6%; when it differed, the average accuracy fell to 52.65%. In contrast, when all prior postures were included in training, the accuracy recovered. We thus confirmed experimentally that diverse prior postures must be taken into account in EMG-based hand gesture recognition.
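
A typical time-domain front end for EMG analysis windows like those used above can be sketched as follows; the paper does not specify its feature set, and RMS, MAV, zero crossings, and waveform length are common choices, not the authors':

```python
import numpy as np

def emg_features(window):
    """Common time-domain EMG features for one analysis window."""
    rms = np.sqrt(np.mean(window ** 2))                 # root mean square
    mav = np.mean(np.abs(window))                       # mean absolute value
    zc = np.count_nonzero(np.diff(np.signbit(window).astype(int)))  # zero crossings
    wl = np.sum(np.abs(np.diff(window)))                # waveform length
    return np.array([rms, mav, zc, wl])
```

Feature vectors from windows recorded under every prior posture would then be pooled into one training set, mirroring the paper's finding that accuracy recovers when all prior postures are represented in training.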