• Title/Summary/Keyword: facial gestures

Search Results: 47

Recognizing Human Facial Expressions and Gesture from Image Sequence (연속 영상에서의 얼굴표정 및 제스처 인식)

  • 한영환;홍승홍
    • Journal of Biomedical Engineering Research / v.20 no.4 / pp.419-425 / 1999
  • In this paper, we present a real-time algorithm for recognizing facial expressions and gestures in gray-level image sequences. A combination of template matching and knowledge-based geometric analysis of the face is used to locate the face region in the input image, and an optical flow method is then applied to that region to recognize facial expressions. We also propose a hand-region detection algorithm that separates the hand from the background by analyzing image entropy; with a modified version of this algorithm, hand gestures can be recognized as well. The experiments showed that the proposed algorithm recognizes facial expressions and hand gestures well by detecting the dominant motion region in the images, without being constrained by the background. (A simplified sketch of the optical-flow and entropy steps follows this entry.)

  • PDF
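
The abstract names template matching, optical flow, and entropy analysis but gives no implementation details; the following is a minimal sketch, assuming OpenCV and NumPy, of how the optical-flow and entropy steps might look. The function names, block size, and threshold are illustrative assumptions, not the authors' code.

```python
import cv2
import numpy as np

def motion_in_face_region(prev_gray, curr_gray, face_rect):
    """Dense (Farneback) optical flow inside a located face region; the mean
    flow vector serves as a crude motion cue for expression recognition."""
    x, y, w, h = face_rect
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y:y+h, x:x+w], curr_gray[y:y+h, x:x+w],
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.mean(axis=(0, 1))            # average (dx, dy) over the region

def entropy_map(gray, block=16):
    """Local gray-level entropy per block; high-entropy blocks are candidate
    hand/foreground regions against a smoother background."""
    h, w = gray.shape
    ent = np.zeros((h // block, w // block))
    for i in range(ent.shape[0]):
        for j in range(ent.shape[1]):
            patch = gray[i*block:(i+1)*block, j*block:(j+1)*block]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

# Illustrative use: treat blocks above an assumed threshold as the hand area.
# hand_blocks = entropy_map(curr_gray) > 4.0
```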

Virtual Human Authoring ToolKit for a Senior Citizen Living Alone (독거노인용 가상 휴먼 제작 툴킷)

  • Shin, Eunji;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.9 / pp.1245-1248 / 2020
  • Elderly people living alone need smart care to support independent living. Recent advances in artificial intelligence allow easier interaction with a computer-controlled virtual human, which can realize services such as medication-intake guidance for the elderly living alone. In this paper, we propose an intelligent virtual human and present our toolkit for authoring and controlling virtual humans for a senior citizen living alone. To create the virtual human's behavior, the authoring toolkit maps gestures, emotions, and voices onto the virtual human. Configured to author virtual human interactions, the toolkit lets the virtual human respond appropriately with facial expressions, gestures, and voice. (A hypothetical sketch of such an emotion-to-behavior mapping follows this entry.)
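
The abstract does not describe the toolkit's internal data format; below is a hypothetical sketch of the kind of emotion/gesture/voice mapping such an authoring tool might produce. All event names, clip names, and field names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    facial_expression: str   # e.g. a blendshape preset or animation clip name
    gesture: str             # body animation clip
    voice_line: str          # text to be spoken by TTS

# Hypothetical authored mapping from an interaction event to a response.
RESPONSES = {
    "medication_time": Behavior("smile", "point_to_pillbox",
                                "It is time to take your medicine."),
    "no_answer":       Behavior("concerned", "lean_forward",
                                "Are you feeling all right?"),
}

def respond(event: str) -> Behavior:
    """Look up the authored response for an event, with a neutral fallback."""
    return RESPONSES.get(event, Behavior("neutral", "idle", ""))
```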

Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob;Shin, Dongkyoo;Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.5 / pp.2363-2374 / 2016
  • Communication occurs through verbal elements, which usually involve language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. Among these non-verbal elements, gestures in particular are symbolic representations of physical, vocal, and emotional behaviors: they can be signals toward a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with such properties have been the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of raised fingers, based on depth information and the geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information from the Kinect sensor, and the number of fingers is identified by comparing distances between the contour and the center of the hand region. The contour is extracted with the Suzuki85 algorithm, and fingertips are detected as points of locally maximal distance to the hand center, found by comparing the distances of three consecutive contour points; the fingertip count gives the number of fingers (a simplified sketch of this step follows this entry). The average recognition rate for the number of fingers is 98.6%, and the execution time of the algorithm is 0.065 ms. The method is fast and of low complexity, yet shows a higher recognition rate and faster recognition speed than other methods. As an application example, the paper describes a Secret Door that recognizes a password from the number of fingers held up by the user.
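
The exact parameters of the method are not given in the abstract; the sketch below, assuming OpenCV and a Kinect-style depth frame in millimetres, follows the described steps: segment the hand by depth, extract its contour with cv2.findContours (which implements the Suzuki85 border-following algorithm), locate the hand center from image moments, and count fingertips as contour points whose distance to the center is a local maximum over three consecutive points. The depth band and distance ratio are assumptions.

```python
import cv2
import numpy as np

def count_fingers(depth_mm, near=400, far=700, min_ratio=1.5):
    """Count raised fingers in a depth frame (values in mm)."""
    # 1. Hand segmentation: keep pixels within an assumed depth band.
    mask = cv2.inRange(depth_mm, near, far)

    # 2. Contour extraction (Suzuki85 border following).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)

    # 3. Hand center from image moments.
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return 0
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # 4. Distance from every contour point to the center.
    pts = hand.reshape(-1, 2).astype(np.float32)
    dist = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)

    # 5. Fingertips: local maxima over three consecutive contour points,
    #    far enough from the center to exclude the palm and wrist.
    fingers, n, mean_d = 0, len(dist), dist.mean()
    for i in range(n):
        prev_d, next_d = dist[(i - 1) % n], dist[(i + 1) % n]
        if dist[i] > prev_d and dist[i] > next_d and dist[i] > min_ratio * mean_d:
            fingers += 1
    return fingers
```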

The Congruent Effects of Gesture and Facial Expression of Virtual Character on Emotional Perception: What Facial Expression is Significant? (가상 캐릭터의 몸짓과 얼굴표정의 일치가 감성지각에 미치는 영향: 어떤 얼굴표정이 중요한가?)

  • Ryu, Jeeheon;Yu, Seungbeom
    • The Journal of the Korea Contents Association / v.16 no.5 / pp.21-34 / 2016
  • In designing and developing a virtual character, it is important to correctly deliver the target emotion generated by the combination of facial expression and gesture. The purpose of this study is to examine the effect of congruence/incongruence between gesture and facial expression on the perceived target emotion. Four emotions were used: joy, sadness, fear, and anger. The results showed that the sadness emotion was incorrectly perceived; it was perceived as anger instead of sadness, and sadness was easily confused when facial expression and gesture were presented simultaneously. For the other emotions, the intended emotional expressions were correctly perceived. The overall evaluation of the virtual character's emotional expression was significantly low when a joy gesture was combined with a sad facial expression. The results suggest that emotional gestures are more influential in correctly delivering target emotions to users, and that social cues such as the gender or age of the virtual character should be studied further.

A Gesture-Emotion Keyframe Editor for Sign-Language Communication between Avatars of Korean and Japanese on the Internet

  • Kim, Sang-Woon;Lee, Yung-Who;Lee, Jong-Woo;Aoki, Yoshinao
    • Proceedings of the IEEK Conference / 2000.07b / pp.831-834 / 2000
  • Sign language can be used as an auxiliary means of communication between avatars of different languages. An intelligent communication method can also be utilized to achieve real-time communication, in which intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures. In this paper we design a gesture-emotion keyframe editor that provides the means to obtain these parameter values easily. To calculate the joint angles of the arms and hands and to generate the in-between keyframes realistically, an inverse-kinematics transformation matrix and several kinds of constraints are applied. Also, to edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, nose, and mouth is employed. Experimental results show that the editor could be used for intelligent sign-language image communication between different languages. (A minimal keyframe-interpolation sketch follows this entry.)

  • PDF
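
The abstract mentions keyframes of joint angles and facial action units; the editor's actual data format and interpolation are not described. The following is a minimal sketch, under the assumption of simple linear in-betweening of a single keyframed channel (a joint angle or an action-unit intensity).

```python
import numpy as np

def interpolate_keyframes(times, values, t):
    """Linearly interpolate a keyframed channel (e.g. a joint angle in degrees
    or a facial action-unit intensity) at time t; `times` must be sorted."""
    values = np.asarray(values, dtype=float)
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    i = np.searchsorted(times, t) - 1
    alpha = (t - times[i]) / (times[i + 1] - times[i])
    return (1 - alpha) * values[i] + alpha * values[i + 1]

# Illustrative use: an elbow angle keyed at 0 s, 0.5 s and 1.0 s.
elbow_at_075 = interpolate_keyframes([0.0, 0.5, 1.0], [10.0, 90.0, 45.0], 0.75)
```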

Sign Language Translation Using Deep Convolutional Neural Networks

  • Abiyev, Rahib H.;Arslan, Murat;Idoko, John Bush
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.631-653 / 2020
  • Sign language is a natural, visually oriented, non-verbal communication channel between people that facilitates communication through facial and bodily expressions, postures, and a set of gestures. It is primarily used for communication with people who are deaf or hard of hearing. To understand such communication quickly and accurately, this paper considers the design of a sign language translation system. The proposed system includes object detection and classification stages: first, the Single Shot MultiBox Detector (SSD) architecture is utilized for hand detection, and then a deep learning structure based on Inception v3 plus a Support Vector Machine (SVM), combining feature extraction and classification, is proposed to translate the detected hand gestures. A sign language fingerspelling dataset is used for the design of the proposed model. The obtained results and comparative analysis demonstrate the efficiency of the proposed hybrid structure for sign language translation. (A minimal sketch of the feature-extraction and classification stage follows this entry.)
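
The abstract outlines a two-stage pipeline (SSD hand detection, then Inception v3 feature extraction classified by an SVM) without implementation details. Below is a minimal sketch of the second stage, assuming Keras and scikit-learn; the SSD stage is represented by an assumed detect_hand function, and the 299x299 RGB crop size and RBF kernel are illustrative choices.

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from sklearn.svm import SVC

# Inception v3 as a fixed feature extractor (global-average-pooled, 2048-D).
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def features(hand_crop):
    """Feature vector for a detected hand crop (RGB array resized to 299x299)."""
    x = preprocess_input(hand_crop.astype(np.float32)[np.newaxis])
    return backbone.predict(x, verbose=0)[0]

# Illustrative training/inference on labeled fingerspelling crops:
# svm = SVC(kernel="rbf").fit([features(c) for c in train_crops], train_letters)
# letter = svm.predict([features(detect_hand(frame))])[0]  # detect_hand: assumed SSD stage
```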

Design and implement of the Educational Humanoid Robot D2 for Emotional Interaction System (감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발)

  • Kim, Do-Woo;Chung, Ki-Chull;Park, Won-Sung
    • Proceedings of the KIEE Conference / 2007.07a / pp.1777-1778 / 2007
  • In this paper, we design and implement a humanoid robot, for educational purposes, that can collaborate and communicate with humans. We present an affective human-robot communication system for the humanoid robot D2, which is designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to assess the emotional state of the user and respond appropriately, so that it can engage in a natural dialogue with a human. To support interaction with humans through voice, gestures, and posture, the educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware including vision and speech capabilities and various control boards, such as motion control boards and a signal processing board handling several types of sensors. Using the educational humanoid robot D2, we present successful demonstrations that include manipulation tasks with two arms, object tracking with the vision system, and communication with humans through the emotional interface, synthesized speech, and recognition of speech commands.

  • PDF

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae;Kim, Jungeun;Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • Sign language can have a completely different meaning depending on the direction of the hand or a change in facial expression, even for the same gesture. It is therefore crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement, so the detailed information (facial expressions, gestures, etc.) of each movement that is important for translation is not emphasized. Accordingly, in this paper we propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information using a Bi-GRU over the extracted sign language keypoints (a minimal sketch of this structure follows this entry). The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the development (DEV) and test (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed method.
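
The abstract describes the two embedding steps (121 keypoints per frame, then a Bi-GRU over the frame sequence) but not the exact architecture; the following is a minimal PyTorch sketch of that structure, with layer sizes chosen as illustrative assumptions.

```python
import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    """Spatial embedding of 121 (x, y) keypoints per frame, followed by a
    bidirectional GRU that emphasizes the sequential information."""
    def __init__(self, n_keypoints=121, d_model=512, hidden=256):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Linear(n_keypoints * 2, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model))
        self.temporal = nn.GRU(d_model, hidden, batch_first=True, bidirectional=True)

    def forward(self, keypoints):                 # (batch, frames, 121, 2)
        b, t, k, c = keypoints.shape
        x = self.spatial(keypoints.reshape(b, t, k * c))  # per-frame spatial embedding
        out, _ = self.temporal(x)                         # (batch, frames, 2 * hidden)
        return out        # sequence features passed on to the translation model

# Illustrative use: a batch of 8 clips, 64 frames each.
emb = KeypointEncoder()(torch.randn(8, 64, 121, 2))
```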

Human Robot Interaction Using Face Direction Gestures

  • Kwon, Dong-Soo;Bang, Hyo-Choong
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.171.4-171 / 2001
  • This paper proposes a method of human-robot interaction (HRI) using face-direction gestures. A single CCD color camera is used to capture the face region, and the robot recognizes the face-direction gesture based on the positions of the facial features. A user can give commands such as stop, go, left turn, and right turn to the robot using face-direction gestures. Since the robot also has ultrasonic sensors, it can detect obstacles and determine a safe direction at its current position. By combining the user's command with the sensed obstacle configuration, the robot selects a safe and efficient motion direction (a hypothetical sketch of such a combination rule follows this entry). Simulation results show that the robot with HRI is more reliable in navigation.

  • PDF
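
The abstract states that the face-direction command is combined with the ultrasonic obstacle readings to select a safe motion direction, but the combination rule is not given; the following is a hypothetical sketch of one such rule (follow the command unless its sensor reports an obstacle closer than a safety threshold, otherwise fall back to the clearest safe direction).

```python
def select_direction(command, sonar_cm, safe_cm=50):
    """Choose a motion direction from a face-direction command
    ('go', 'left', 'right', 'stop') and per-direction ultrasonic ranges in cm."""
    if command == "stop":
        return "stop"
    if sonar_cm.get(command, 0) >= safe_cm:
        return command                           # commanded direction is clear
    # Otherwise fall back to the clearest remaining direction, if any is safe.
    others = [d for d in ("go", "left", "right") if d != command]
    best = max(others, key=lambda d: sonar_cm.get(d, 0))
    return best if sonar_cm.get(best, 0) >= safe_cm else "stop"

# Illustrative use:
# select_direction("go", {"go": 30, "left": 120, "right": 80})  ->  "left"
```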

Method for Inference of Operators' Thoughts from Eye Movement Data in Nuclear Power Plants

  • Ha, Jun Su;Byon, Young-Ji;Baek, Joonsang;Seong, Poong Hyun
    • Nuclear Engineering and Technology / v.48 no.1 / pp.129-143 / 2016
  • Sometimes we need to infer somebody's thoughts from his or her behaviors, such as eye movements, facial expressions, gestures, and motions. In safety-critical and complex systems such as nuclear power plants, inferring operators' thoughts (their understanding or diagnosis of the current situation) could enable many useful applications, such as improved operator training programs, new types of operator support systems, and human performance measures for human factors validation. In this experimental study, a novel method for inferring an operator's thoughts from his or her eye movement data is proposed and evaluated with a nuclear power plant simulator. In the experiments, about 80% of operators' thoughts were inferred correctly using the proposed method.