• Title/Summary/Keyword: Facial Emotion Expression


A Study on Efficient Facial Expression Recognition System for Customer Satisfaction Feedback (고객만족도 피드백을 위한 효율적인 얼굴감정 인식시스템에 대한 연구)

  • Kang, Min-Sik
    • Convergence Security Journal / v.12 no.4 / pp.41-47 / 2012
  • To keep the national B2C (Business to Customer) service industry competitive, customer-focused process improvement and analysis, together with changes to the service system, are needed. In other words, a business or organization should identify and provide the services customers want, then evaluate customer satisfaction and improve service quality. Accurate customer feedback plays an important role in achieving this goal; however, few quantitative, standardized feedback systems exist domestically. Recently, research on ICT (Information and Communication Technology) that can recognize human emotion has been increasing, and among these technologies facial expression recognition is known as the most efficient and natural human interface. This study analyzes approaches to more efficient facial expression recognition and proposes a customer satisfaction feedback system that uses it.

A Case Study of Emotion Expression Technologies for Emotional Characters (감성캐릭터의 감정표현 기술의 사례분석)

  • Ahn, Seong-Hye;Paek, Seon-Uck;Sung, Min-Young;Lee, Jun-Ha
    • The Journal of the Korea Contents Association / v.9 no.9 / pp.125-133 / 2009
  • As interactivity becomes one of the key success factors in today's digital communication environment, increasing emphasis is being placed on technologies for user-oriented emotion expression. We aim to develop enabling technologies for creating emotional characters that can express personalized emotions in real time. In this paper, we survey domestic and international research and case studies on emotional characters, with a focus on facial expression. The survey results are intended to serve as a guideline for future research directions.

Design and Implementation of a Real-Time Emotional Avatar (실시간 감정 표현 아바타의 설계 및 구현)

  • Jung, Il-Hong;Cho, Sae-Hong
    • Journal of Digital Contents Society / v.7 no.4 / pp.235-243 / 2006
  • This paper presents an efficient method for expressing an avatar's emotion based on facial expression recognition. Rather than requiring the avatar's facial expression to be changed manually, the method changes it in real time based on recognition of facial patterns captured by a webcam. It provides a tool for recognizing regions of the captured images; because the tool uses a model-based approach, it recognizes images faster than template-based or network-based approaches. After detecting the eyes with the model-based approach, it extracts the shape of the user's lips. From changes in lip patterns, we define six avatar facial expressions based on 13 standard lip patterns, and the avatar switches expressions quickly by displaying a predefined avatar with the corresponding expression, as sketched below.
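
The lip-pattern lookup described above can be sketched as a simple mapping. The grouping below is hypothetical (the paper does not publish which of its 13 standard lip patterns map to which of the 6 avatar expressions), and `avatar.show_expression` stands in for whatever rendering call the avatar framework actually provides:

```python
# Hypothetical grouping of the 13 standard lip patterns (indices 0-12)
# into 6 avatar expressions; the paper's actual assignment is not given.
LIP_PATTERN_TO_EXPRESSION = {
    0: "neutral", 1: "neutral",
    2: "happy", 3: "happy", 4: "happy",
    5: "sad", 6: "sad",
    7: "surprised", 8: "surprised",
    9: "angry", 10: "angry", 11: "angry",
    12: "disgusted",
}

def update_avatar(lip_pattern: int, avatar) -> None:
    """Swap in the predefined avatar mesh for the recognized lip pattern."""
    avatar.show_expression(LIP_PATTERN_TO_EXPRESSION[lip_pattern])  # assumed API
```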


Study for Classification of Facial Expression using Distance Features of Facial Landmarks (얼굴 랜드마크 거리 특징을 이용한 표정 분류에 대한 연구)

  • Bae, Jin Hee;Wang, Bo Hyeon;Lim, Joon S.
    • Journal of IKEEE / v.25 no.4 / pp.613-618 / 2021
  • Facial expression recognition has long been a subject of continuous research in various fields. In this paper, the relationships between facial landmarks are analyzed using features obtained by computing the distances between landmarks in the image, and five facial expressions are classified. We increased data and label reliability through labeling work with multiple observers. In addition, faces were detected in the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features that are relatively more helpful for classification. We performed facial expression classification and analysis with the proposed method, and the results show its validity and effectiveness.
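
As a rough illustration of the distance features, the sketch below computes all pairwise landmark distances from a set of (x, y) coordinates. The landmark detector, the normalization by the largest distance, and the array shapes are assumptions, since the abstract does not specify them:

```python
import numpy as np
from itertools import combinations

def landmark_distance_features(landmarks: np.ndarray) -> np.ndarray:
    """Flatten all pairwise Euclidean landmark distances into one vector.

    landmarks: (N, 2) array of (x, y) coordinates from any facial landmark
    detector. Dividing by the largest distance (an assumed normalization)
    removes scale differences between faces.
    """
    pairs = combinations(range(len(landmarks)), 2)
    dists = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                      for i, j in pairs])
    return dists / dists.max()
```

A genetic algorithm, as in the paper, would then search over binary masks on this vector to keep only the distances that help classification.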

The Relationship between Physically Disability Persons Participation in Exercise, Heart Rate Variance, and Facial Expression Recognition (지체장애인의 운동참여와 심박변이도(HRV), 표정정서인식력과의 관계)

  • Kim, Dong hwan;Baek, Jae keun
    • 재활복지 / v.20 no.3 / pp.105-124 / 2016
  • This study aims to verify the causal relationships among exercise participation, heart rate variability (HRV), and facial expression recognition in persons with physical disabilities. To achieve this goal, the study targeted 139 persons with physical disabilities, selected by purposive sampling. After visiting sports stadiums and club facilities where sporting events were held and explaining the purpose of the research in detail, heart rate variability and facial emotion recognition were measured only for those who agreed to participate. Means, standard deviations, correlation analysis, and a structural equation model were computed from the measurements, with the following results. The quantity of exercise positively affected both sympathetic and parasympathetic activity of the autonomic nervous system. Exercise history had a positive influence on LF/HF and a negative influence on parasympathetic activity. Sympathetic activity had a positive effect on recognition of the emotion happiness, while the quantity of exercise had a negative influence on recognition of the emotion sadness. These findings are discussed in terms of the mechanisms linking the autonomic nervous system to facial expression recognition in persons with physical disabilities.
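
For context, the LF/HF index mentioned above is a standard frequency-domain HRV measure. The sketch below computes it from RR intervals using the conventional bands (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz); the 4 Hz resampling rate and Welch settings are common defaults, not taken from the study:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_ms: np.ndarray, fs: float = 4.0) -> float:
    """LF/HF ratio from a series of RR intervals in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)      # uniform 4 Hz time grid
    rr = np.interp(grid, t, rr_ms)               # evenly resampled RR series
    f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    return np.trapz(pxx[lf_band], f[lf_band]) / np.trapz(pxx[hf_band], f[hf_band])
```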

An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.924-942 / 2020
  • Facial expressions are diverse and vary among persons due to psychological factors, whereas facial actions are comparatively stable because of the fixed anatomical structure of the face. Therefore, improving the performance of action unit recognition will facilitate facial expression recognition and provide a solid basis for mental state analysis. However, it remains a challenging task with limited recognition accuracy, because the muscle movements around the face are tiny and the corresponding facial actions are not obvious. Taking into account that muscle movements influence each other when a person expresses emotion, we propose to make full use of the co-occurrence relationships among action units (AUs). Considering the dynamic characteristics of AUs as well, we adopt a 3D Convolutional Neural Network (3DCNN) as the base framework and recognize multiple action units around the brows, nose, and mouth, which contribute most to emotion expression, using their co-occurrence relationships as a constraint. Experiments were conducted on the typical public CASME dataset and its variant, the CASME2 dataset. The results show that the proposed AU co-occurrence constraint 3DCNN based AU recognition approach outperforms current approaches, demonstrating the effectiveness of using AU relationships in AU recognition.
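
The abstract does not give the network layout or the exact form of the constraint, so the PyTorch sketch below is only one plausible reading: a small 3D CNN producing multi-label AU logits, trained with binary cross-entropy plus a penalty that pulls the batch's predicted AU co-activations toward a prior co-occurrence matrix `C`:

```python
import torch
import torch.nn as nn

class AU3DCNN(nn.Module):
    """Minimal 3D CNN for multi-label AU detection (hypothetical layout)."""
    def __init__(self, n_aus: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, n_aus)

    def forward(self, clips):                  # clips: (B, 1, T, H, W)
        return self.head(self.features(clips).flatten(1))  # AU logits

def au_cooccurrence_loss(logits, labels, C, weight=0.1):
    """BCE plus a squared-error pull of the batch's predicted AU
    co-activations toward a prior co-occurrence matrix C
    (one plausible form of the paper's constraint)."""
    bce = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    p = torch.sigmoid(logits)                  # (B, n_aus) AU probabilities
    pred_cooc = (p.T @ p) / p.shape[0]         # empirical co-activation
    return bce + weight * ((pred_cooc - C) ** 2).mean()
```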

Affective interaction to emotion expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society / v.22 no.5 / pp.37-47 / 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, while subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest money or credit in one of several projects, and their partners (virtual agents) also invest in one of the projects. Subjects interact with different kinds of virtual agents that behave reciprocally or unreciprocally while displaying socially affective facial expressions, and the total money or credit a subject earns is contingent on the partner's choice. I observed that subjects' appraisals of interactions with cooperative/uncooperative (or friendly/unfriendly) virtual agents in the investment game resulted in increased autonomic and somatic responses, and that these responses could be observed in real time through physiological signals and facial expressions. To assess user feedback, a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor were used while the subject's frontal facial image was captured by a web camera. After all trials, subjects were asked to answer questions evaluating how much the interactions with the virtual agents affected their appraisals.
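
As a rough sketch of how such physiological feedback might be summarized per trial, the function below extracts two common GSR features, mean skin conductance level and phasic response count; the feature choice and the 0.05 microsiemens prominence threshold are assumptions, not the study's published pipeline:

```python
import numpy as np
from scipy.signal import find_peaks

def gsr_trial_features(gsr_uS: np.ndarray, fs: float) -> dict:
    """Per-trial GSR summary: tonic level plus phasic response count."""
    peaks, _ = find_peaks(gsr_uS, prominence=0.05)   # assumed threshold
    minutes = len(gsr_uS) / fs / 60.0
    return {"scl_mean": float(gsr_uS.mean()),
            "scr_count": int(len(peaks)),
            "scr_rate_per_min": len(peaks) / minutes}
```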

Video-based Facial Emotion Recognition using Active Shape Models and Statistical Pattern Recognizers (Active Shape Model과 통계적 패턴인식기를 이용한 얼굴 영상 기반 감정인식)

  • Jang, Gil-Jin;Jo, Ahra;Park, Jeong-Sik;Seo, Yong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.3 / pp.139-146 / 2014
  • This paper proposes an efficient method for automatically distinguishing various facial expressions. To recognize emotions from facial expressions, facial images are obtained with digital cameras and a number of feature points are extracted. The extracted feature points are then transformed into 49-dimensional feature vectors that are robust to scale and translational variations, and the facial emotions are recognized by statistical pattern classifiers such as Naive Bayes, MLP (multi-layer perceptron), and SVM (support vector machine). Based on experimental results with 5-fold cross validation, SVM performed best among the classifiers, achieving 50.8% accuracy for 6-emotion classification and 78.0% for 3 emotions.
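
The classifier comparison is easy to reproduce in outline with scikit-learn; the sketch below runs the same 5-fold cross validation over the three classifier families on placeholder 49-dimensional features (random stand-ins for the paper's shape vectors):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: substitute real 49-dim feature vectors and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 49))          # (n_samples, 49) features
y = rng.integers(0, 6, size=300)        # 6 emotion labels

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("MLP", MLPClassifier(max_iter=1000)),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```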

Comparison Between Core Affect Dimensional Structures of Different Ages using Representational Similarity Analysis (표상 유사성 분석을 이용한 연령별 얼굴 정서 차원 비교)

  • Jongwan Kim
    • Science of Emotion and Sensibility / v.26 no.1 / pp.33-42 / 2023
  • Previous emotion studies employing facial expressions have focused on differences between age groups for each emotion category. Instead, Kim (2021) compared representations of facial expressions in a lower-dimensional emotion space, but reported only descriptive comparisons without statistical significance testing. This research used representational similarity analysis (Kriegeskorte et al., 2008) to directly compare empirical datasets from young, middle-aged, and old groups with conceptual models. In addition, individual differences multidimensional scaling (Carroll & Chang, 1970) was conducted to explore individual weights on the emotional dimensions for each age group. The results revealed that the old group was the least similar to the other age groups in the empirical datasets and the valence model, and that the arousal dimension was weighted least for the old group compared to the other groups. This study directly tested the differences between the three age groups in terms of empirical datasets, conceptual models, and weights on the emotion dimensions.
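
The core RSA step is compact enough to sketch: build a representational dissimilarity matrix (RDM) per age group and correlate the RDMs' condensed upper triangles, as in Kriegeskorte et al. (2008). The rating matrices below are random placeholders for the study's empirical data:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Condensed RDM: pairwise correlation distance between the rows
    (conditions, e.g. facial expressions) of a condition x feature matrix."""
    return pdist(patterns, metric="correlation")

# Placeholder (n_expressions x n_features) matrices, one per age group.
rng = np.random.default_rng(1)
young, middle, old = (rng.normal(size=(10, 20)) for _ in range(3))

rho, p = spearmanr(rdm(young), rdm(old))
print(f"young vs. old RDM similarity: rho={rho:.2f}, p={p:.3f}")
```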

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Gang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tae;Park, Min-Yong
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.401-404 / 2007
  • This paper demonstrates that a 2D emotion recognition database can be applied to 3D by using mirror projection. In addition, facial expressions are generated based on fuzzy modeling using emotion probabilities, and a facial expression function is proposed by applying fuzzy theory to the three basic movements that drive expressions. The proposed method uses multiple images obtained through mirror projection to apply the feature vectors used for 2D emotion recognition to 3D. The nonlinear facial expressions of the real model's basic emotions are thereby modeled on a fuzzy basis. Facial expressions are represented with six basic emotions (happiness, sadness, disgust, anger, surprise, and fear); the mean value of each emotion is used for the basic emotion probabilities, and dynamic facial expressions are generated from the six emotion probabilities. The proposed method is applied to a 3D humanoid avatar and the results are compared against the expression vectors of the real model.
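
One plausible reading of the expression-generation step is a probability-weighted blend of prototype expression vectors, sketched below; the paper's actual fuzzy membership functions are not given in the abstract, so this collapses them to a simple convex combination:

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "disgust", "anger", "surprise", "fear"]

def blend_expression(probs: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Blend the six basic-emotion prototypes by their probabilities.

    probs: (6,) emotion probabilities; prototypes: (6, D) mean expression
    vectors, one per basic emotion, matching the abstract's use of
    per-emotion mean values.
    """
    w = probs / probs.sum()        # normalize to a convex combination
    return w @ prototypes          # (D,) dynamic expression vector
```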
