• Title/Summary/Keyword: facial expression language

Search results: 30

A Case Study on Childcare Teachers' Facial Expression Language: Focused on the Opinions of Teachers, Directors, and Parents (보육교사의 표정언어에 관한 사례연구: 교사, 원장, 학부모의 견해를 중심으로)

  • Kim, Seon-Ju;Ju, Young-Ae
    • Journal of Families and Better Life / v.32 no.5 / pp.107-123 / 2014
  • The purpose of this study is to investigate the opinions of teachers, directors, and parents about childcare teachers' facial expression language. Based on the literature and previous studies, we performed in-depth interviews with ten childcare teachers, ten directors, and ten parents. From these interviews, we conclude that all groups consider a childcare teacher's facial expression language very important and believe it strongly influences interpersonal problems and work performance. Childcare teachers mostly aim to maintain pleasant facial expressions; however, they complain that in some situations it is difficult to do so, which causes severe stress. They argued that the facial expressions of both the director of the childcare center and the parents are also very important for children. The directors, for their part, thought that teachers' facial expressions affect the children, the parents' impressions of the teachers, and the quality of childcare. The parents are usually highly satisfied with the childcare when the teacher has a pleasant facial expression, which leads them to form a positive impression of the teacher. Taken together, childcare teachers' facial expression language is critical for children and the childcare environment. Thus, developing an education program for facial expression language would help improve the quality of childcare. In addition, childcare environments should be improved so that teachers do not find it difficult to maintain pleasant facial expressions. A limitation of this study is that the data were collected only from female childcare teachers, directors, and parents.

A Design of Stress Measurement System using Facial and Verbal Sentiment Analysis (표정과 언어 감성 분석을 통한 스트레스 측정시스템 설계)

  • Yuw, Suhwa;Chun, Jiwon;Lee, Aejin;Kim, Yoonhee
    • KNOM Review / v.24 no.2 / pp.35-47 / 2021
  • Modern society demands constant competition and self-improvement, exposing people to various kinds of stress. A person under stress often shows it in facial expression and language, so it is possible to measure stress through facial expression and language analysis. This paper proposes a stress measurement system using facial expression and verbal sentiment analysis. The method analyzes a person's facial expression and verbal sentiment to derive a stress index based on the main emotional value, and derives an integrated stress index based on the consistency between facial expression and language. Quantifying and generalizing stress measurement in this way enables researchers to evaluate the stress index objectively.
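The abstract above describes an integrated stress index weighted by how consistent the two modalities are. A toy sketch of one way such a combination could work — the weighting scheme below is my own assumption for illustration, not the paper's actual formula:

```python
def integrated_stress(face_score, verbal_score):
    """Combine per-modality stress scores (each in [0, 1]).

    When facial and verbal scores agree, their average is trusted;
    as they diverge, a consistency weight discounts the combined index.
    (Illustrative scheme only -- the paper's actual formula may differ.)
    """
    consistency = 1.0 - abs(face_score - verbal_score)   # 1 = full agreement
    average = (face_score + verbal_score) / 2.0
    return consistency * average

print(round(integrated_stress(0.8, 0.7), 3))  # high, consistent stress
print(round(integrated_stress(0.9, 0.1), 3))  # conflicting signals discounted
```

The design choice here is that disagreement between modalities lowers confidence in the combined index rather than averaging blindly, which matches the abstract's emphasis on consistency.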

A Study on Pattern of Facial Expression Presentation in Character Animation (애니메이션 캐릭터의 표정연출 유형 연구)

  • Hong Soon-Koo
    • The Journal of the Korea Contents Association / v.6 no.8 / pp.165-174 / 2006
  • Birdwhistell explains that in overall communication, language conveys only 35% of the meaning while the remaining 65% is conveyed by non-linguistic media. Humans do not depend entirely on linguistic communication; they are sensitive beings who use every sense they have. Human communication conveys more concrete meaning by using facial expression and gesture as well as language. Facial expression in particular is a many-sided message system that delivers individual personality, interest, information about responses, and emotional status, and can be called a powerful communication tool. Though it can change according to various expressive techniques and the degree and quality of expression, the symbolic sign of facial expression is characterized by a generalized quality. Animation characters, as roles in a story, gain vitality through emotional expression, as their mental world and psychological status can be revealed and read naturally in their actions and facial expressions.

A Comic Emotional Expression for 3D Facial Model (3D 얼굴 모델의 코믹한 감정 표현)

  • ;;Shin Tanahashi;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 1999.11a / pp.536-539 / 1999
  • In this paper we propose a 3D emotional expression method using a comic model for effective sign-language communication. Until now, we have investigated producing more realistic facial and emotional expressions. When representing only emotional expression, however, a comic expression can be better than a realistic picture of a face. The comic face is a comic-style expression model in which almost all components are discarded except necessary parts such as the eyebrows, eyes, nose, and mouth. We represent emotional expression using the Action Units (AU) of the Facial Action Coding System (FACS). Experimental results show that the proposed method could be used efficiently for sign-language image communication.

Comic Emotional Expression for Effective Sign-Language Communications (효율적인 수화 통신을 위한 코믹한 감정 표현)

  • ;;Shin Tanahashi;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 1999.06a / pp.651-654 / 1999
  • In this paper we propose an emotional expression method using a comic model and special marks for effective sign-language communication. Until now, we have investigated producing more realistic facial and emotional expressions. When representing only emotional expression, however, a comic expression can be better than a realistic picture of a face. The comic face is a comic-style expression model in which almost all components are discarded except necessary parts such as the eyebrows, eyes, nose, and mouth. In the comic model, we can use special marks to emphasize various emotions. We represent emotional expression using the Action Units (AU) of the Facial Action Coding System (FACS) and define Special Units (SU) for emphasizing the emotions. Experimental results show that the proposed method could be used efficiently for sign-language image communication.
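For context, FACS describes an expression as a combination of numbered Action Units. A minimal sketch of how an emotion label might map to AU sets plus an emphasis mark in the spirit of the Special Units above — the AU combinations follow commonly cited EMFACS-style prototypes, not necessarily the authors' exact choices, and `SPECIAL_MARKS` is purely illustrative:

```python
# Commonly cited FACS Action Unit prototypes for basic emotions (EMFACS-style).
ACTION_UNITS = {
    "happiness": [6, 12],        # cheek raiser + lip corner puller
    "sadness":   [1, 4, 15],     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  [1, 2, 5, 26],  # brow raisers + upper lid raiser + jaw drop
    "anger":     [4, 5, 7, 23],  # brow lowerer + lid tighteners + lip tightener
}

# Hypothetical comic-style emphasis marks, analogous to the Special Units (SU)
# described in the abstract; names invented for illustration.
SPECIAL_MARKS = {
    "anger":    "cross-popping veins",
    "sadness":  "tear drop",
    "surprise": "radiating lines",
}

def comic_expression(emotion):
    """Return the AU set plus any emphasis mark for a comic-face rendering."""
    return {
        "action_units": ACTION_UNITS.get(emotion, []),
        "special_mark": SPECIAL_MARKS.get(emotion),
    }

print(comic_expression("surprise"))
# {'action_units': [1, 2, 5, 26], 'special_mark': 'radiating lines'}
```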

Dynamic Emotion Classification through Facial Recognition (얼굴 인식을 통한 동적 감정 분류)

  • Han, Wuri;Lee, Yong-Hwan;Park, Jeho;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.12 no.3 / pp.53-57 / 2013
  • Human emotions are expressed in various ways: through language, facial expression, and gestures. The facial expression in particular carries much information about human emotion. These vague human emotions appear not as a single emotion but as combinations of various emotions. This paper proposes an emotion recognition algorithm using an Active Appearance Model (AAM) and a Fuzzy k-Nearest Neighbor classifier, which interprets facial expressions in a way that matches such vague human emotion. Applying the Mahalanobis distance to each class center, the method determines the degree of membership between the input and each class; this membership level then expresses the intensity of each emotion. Our emotion recognition system can therefore recognize complex emotions using the Fuzzy k-NN classifier.
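The membership idea above — Mahalanobis distance to each class center turned into graded class memberships — can be sketched as follows. This is a toy illustration under my own assumptions (fuzzy c-means-style inverse-distance weighting, invented class centers), not the authors' implementation:

```python
import numpy as np

def mahalanobis_membership(x, centers, covs, m=2.0):
    """Fuzzy class memberships from Mahalanobis distances to class centers.

    Memberships are inversely proportional to distance^(2/(m-1)), the usual
    fuzzy weighting; they sum to 1, so each value can be read as the
    intensity of the corresponding emotion.
    """
    d2 = np.array([
        (x - c) @ np.linalg.inv(S) @ (x - c)   # squared Mahalanobis distance
        for c, S in zip(centers, covs)
    ])
    w = d2 ** (-1.0 / (m - 1.0))
    return w / w.sum()

# Toy 2-D feature space with three hypothetical emotion classes.
centers = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 3.0])]
covs = [np.eye(2)] * 3
x = np.array([0.5, 0.2])
memberships = mahalanobis_membership(x, centers, covs)
print(memberships.round(3))  # highest membership in the nearest class
```

Because memberships are graded rather than winner-take-all, a single face can score, say, 0.6 "happy" and 0.3 "surprised" at once, which is how the classifier represents the blended emotions the abstract describes.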

Soft Sign Language Expression Method of 3D Avatar (3D 아바타의 자연스러운 수화 동작 표현 방법)

  • Oh, Young-Joon;Jang, Hyo-Young;Jung, Jin-Woo;Park, Kwang-Hyun;Kim, Dae-Jin;Bien, Zeung-Nam
    • The KIPS Transactions:PartB / v.14B no.2 / pp.107-118 / 2007
  • This paper proposes a 3D avatar that expresses sign language naturally, using the lips, facial expression, complexion, pupil motion, and body motion as well as hand shape, hand posture, and hand motion, to overcome the limitations of conventional sign language avatars from a deaf user's viewpoint. To describe the motion data of the hands and other body components structurally and to enhance database performance, we introduce the concept of a hyper sign sentence. We show the superiority of the developed system by a usability test through a questionnaire survey.

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae;Kim, Jungeun;Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • Sign language can have a completely different meaning depending on the direction of the hand or a change of facial expression, even for the same gesture. In this respect, it is crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement; detailed information (facial expression, gestures, etc.) about each movement that is important for translation is not emphasized. Accordingly, this paper proposes Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information using a Bi-GRU over the extracted keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the development (DEV) and test (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
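The Temporal Keypoints Embedding step above summarizes a per-frame keypoint sequence with a Bi-GRU. As a rough illustration of how a bidirectional GRU condenses such a sequence — not the authors' actual model; all dimensions, weights, and the plain-NumPy formulation here are made up for the sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step. W: (3, hidden, input), U: (3, hidden, hidden), b: (3, hidden)."""
    z = sigmoid(W[0] @ x + U[0] @ h + b[0])              # update gate
    r = sigmoid(W[1] @ x + U[1] @ h + b[1])              # reset gate
    h_tilde = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])  # candidate state
    return (1 - z) * h + z * h_tilde

def bi_gru(seq, params_f, params_b, hidden):
    """Run a forward and a backward GRU over `seq`; concatenate final states."""
    h_f = np.zeros(hidden)
    for x in seq:                      # forward temporal pass
        h_f = gru_step(x, h_f, *params_f)
    h_b = np.zeros(hidden)
    for x in seq[::-1]:                # backward temporal pass
        h_b = gru_step(x, h_b, *params_b)
    return np.concatenate([h_f, h_b])  # sequence-level embedding

rng = np.random.default_rng(0)
D, H, T = 121 * 2, 4, 5               # e.g. 121 keypoints as flattened (x, y) coords
make = lambda: (rng.normal(size=(3, H, D)) * 0.1,
                rng.normal(size=(3, H, H)) * 0.1,
                np.zeros((3, H)))
seq = rng.normal(size=(T, D))         # T frames of keypoint features
emb = bi_gru(seq, make(), make(), hidden=H)
print(emb.shape)                      # forward + backward hidden states
```

Reading the frames in both directions lets the embedding reflect context before and after each movement, which is the sequential information the paper emphasizes.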

Gesture Communications Between Different Avatar Models Using A FBML (FBML을 이용한 서로 다른 아바타 모델간의 제스처 통신)

  • 이용후;김상운;아오끼요시나오
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.5 / pp.41-49 / 2004
  • As a means of overcoming the linguistic barrier between different languages in Internet cyberspace, a sign-language communication system has been proposed. However, that system supports only avatars with the same model structure, so it is difficult to communicate between different avatar models. In this paper, we therefore propose a new gesture communication system in which different avatar models can communicate with each other by using FBML (Facial Body Markup Language). Using FBML, we define a standard document format for the messages transferred between models, where the document includes the action units of facial expression and the joint angles of gesture animation. The proposed system is implemented with Visual C++ and Open Inventor on Windows platforms. The experimental results demonstrate that the method could be used as an efficient means of overcoming the linguistic problem.
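FBML is described as an XML-style document carrying facial action units and gesture joint angles between avatar models. A sketch of what such a message might look like — the element and attribute names below are guesses for illustration, since the actual FBML schema is not given in the abstract:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical FBML message: facial AUs plus gesture joint angles.
# (Element/attribute names are illustrative; the real FBML schema may differ.)
fbml = ET.Element("fbml")
face = ET.SubElement(fbml, "face")
for au, intensity in [(6, 0.8), (12, 1.0)]:           # e.g. a smile
    ET.SubElement(face, "au", id=str(au), intensity=str(intensity))
body = ET.SubElement(fbml, "body")
for joint, angle in [("r_shoulder", 45.0), ("r_elbow", 90.0)]:
    ET.SubElement(body, "joint", name=joint, angle=str(angle))

message = ET.tostring(fbml, encoding="unicode")
print(message)
```

Because the document carries only model-independent quantities (AU numbers and joint angles) rather than model-specific geometry, any receiving avatar can map them onto its own structure — which is the interoperability point the paper makes.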

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a Bi-Modal Sensor Fusion Algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprised) by using facial images and speech signals together. We extract feature vectors from the speech signal using acoustic features, without language features, and classify the emotional pattern using a neural network. We also select features of the mouth, eyes, and eyebrows from the facial image, and apply Principal Component Analysis (PCA) to the extracted feature vectors to obtain low-dimensional feature vectors. Finally, we propose a method to fuse the facial-image and speech results into a single emotion recognition value.
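Two pieces of the pipeline above are standard enough to sketch: PCA reduction of the facial features and late fusion of the two modalities' outputs. This is a generic illustration (SVD-based PCA, simple weighted averaging, invented dimensions), not the paper's exact algorithm:

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                    # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # low-dimensional features

def fuse(p_face, p_speech, w_face=0.5):
    """Late fusion: weighted average of per-modality emotion probabilities."""
    p = w_face * p_face + (1 - w_face) * p_speech
    return p / p.sum()

EMOTIONS = ["happy", "sad", "angry", "surprised"]
rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 30))          # 20 samples, 30 raw facial features
low_dim = pca_reduce(faces, k=5)           # reduced to 5 dimensions
p_face = np.array([0.7, 0.1, 0.1, 0.1])    # hypothetical classifier outputs
p_speech = np.array([0.4, 0.3, 0.2, 0.1])
fused = fuse(p_face, p_speech)
print(EMOTIONS[int(np.argmax(fused))])     # modality-fused decision
```

Fusing at the decision level, as here, keeps each modality's classifier independent; the paper's fusion by "result value" suggests a similar late-fusion design.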