• Title/Summary/Keyword: Emotional expressions


A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Soek-Pil
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.10 / pp.1364-1369 / 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Voice expressions are one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper provides a survey of the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for the emotion recognition task. Its main aim is to discuss and compare approaches to feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
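As an illustration of the kind of low-level acoustic features such surveys compare, the sketch below computes two classic frame-level features, short-time energy and zero-crossing rate, on a synthetic signal. The feature choices, frame sizes, and test tone are illustrative assumptions, not the paper's own setup:

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Compute two classic frame-level SER features:
    short-time energy and zero-crossing rate."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        # Fraction of adjacent sample pairs whose sign changes.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)

# A synthetic 440 Hz tone at 16 kHz stands in for real speech.
t = np.arange(16000) / 16000.0
tone = np.sin(2 * np.pi * 440 * t)
features = frame_features(tone)
```

In a real SER pipeline these per-frame vectors would be pooled over an utterance and fed to a classifier.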

The Effects of Emotional Contexts on Infant Smiling (정서 유발 맥락이 영아의 미소 얼굴 표정에 미치는 영향)

  • Hong, Hee Young;Lee, Young
    • Korean Journal of Child Studies / v.24 no.6 / pp.15-31 / 2003
  • This study examined the effects of emotion-inducing contexts on types of infant smiling. Facial expressions of forty-five 11- to 15-month-old infants were videotaped in an experimental lab under positive and negative emotional contexts. Infants' smiling was identified as Duchenne or non-Duchenne smiling based on FACS (Facial Action Coding System; Ekman & Friesen, 1978), and the duration of each smiling type was analyzed. Overall, infants smiled more in the positive than in the negative emotional context. Duchenne smiling was more likely in the positive than in the negative context, and more likely in the peek-a-boo than in the melody-toy condition within the same positive context. Non-Duchenne smiling did not differ by context.

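The FACS-based distinction used in the study can be sketched programmatically: in FACS terms, a Duchenne smile combines AU12 (lip corner puller) with AU6 (cheek raiser), while AU12 alone is a non-Duchenne smile. The function below is an illustrative sketch, not the authors' coding tool:

```python
def classify_smile(action_units):
    """Classify a smile from a set of FACS action-unit numbers.
    Duchenne = AU6 (cheek raiser) + AU12 (lip corner puller);
    AU12 without AU6 = non-Duchenne; no AU12 = no smile."""
    if 12 not in action_units:
        return "no smile"
    return "Duchenne" if 6 in action_units else "non-Duchenne"
```

A coder would apply this frame by frame to videotaped expressions and sum durations per smile type, as the study does.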

Classification and Intensity Assessment of Korean Emotion Expressing Idioms for Human Emotion Recognition

  • Park, Ji-Eun;Sohn, Sun-Ju;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.5 / pp.617-627 / 2012
  • Objective: The aim of the study was to develop a widely usable Korean dictionary of emotion-expressing idioms. This is anticipated to assist the development of software technology that recognizes and responds to verbally expressed human emotions. Method: Through rigorous and strategic classification processes, the idiomatic expressions included in this dictionary were rated in terms of nine different emotions (i.e., happiness, sadness, fear, anger, surprise, disgust, interest, boredom, and pain) for the meaning and intensity associated with each expression. Result: The dictionary included 427 expressions, with approximately two thirds classified under the 'happiness' (n=96), 'sadness' (n=96), and 'anger' (n=90) emotions. Conclusion: The significance of this study rests primarily in the development of a practical language tool that contains Korean idiomatic expressions of emotions, provides information on meaning and intensity, and identifies idioms connoting two or more emotions. Application: The findings can be utilized in emotion recognition research, particularly in identifying primary and secondary emotions and in understanding the intensity associated with various idioms used to express emotion. In clinical settings, this information may also enhance helping professionals' competence in verbally communicating patients' emotional needs.
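A dictionary of this kind can be represented as a mapping from idiom to per-emotion intensity ratings, from which primary and secondary emotions fall out by sorting. The idioms and rating values below are hypothetical stand-ins, not entries from the actual dictionary:

```python
# Hypothetical entries; the real dictionary has 427 idioms
# rated on nine emotions for meaning and intensity.
IDIOM_RATINGS = {
    "가슴이 뛰다": {"happiness": 4.2, "surprise": 2.1, "interest": 3.0},
    "억장이 무너지다": {"sadness": 4.8, "anger": 2.5},
}

def primary_secondary(idiom):
    """Return the strongest and second-strongest emotions for an idiom,
    supporting the multi-emotion lookup the study describes."""
    ranked = sorted(IDIOM_RATINGS[idiom].items(), key=lambda kv: -kv[1])
    primary = ranked[0][0]
    secondary = ranked[1][0] if len(ranked) > 1 else None
    return primary, secondary
```

An emotion recognizer could consult such a table whenever a known idiom appears in transcribed speech.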

An Exploratory Investigation on Visual Cues for Emotional Indexing of Image (이미지 감정색인을 위한 시각적 요인 분석에 관한 탐색적 연구)

  • Chung, SunYoung;Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science / v.48 no.1 / pp.53-73 / 2014
  • Given that the emotion-based computing environment has grown recently, it is necessary to focus on emotional access to and use of multimedia resources, including images. This study aims to identify the visual cues for emotion in images. To achieve this, five basic emotions (love, happiness, sadness, fear, and anger) were selected and twenty participants were interviewed to elicit the visual cues for each emotion. A total of 620 visual cues mentioned by participants were collected from the interviews and coded into five categories and 18 sub-categories. The findings show that facial expressions, actions/behaviors, and syntactic features were significant in perceiving a specific emotion in an image, and that each emotion showed distinctive cue characteristics. The emotion of love was more strongly related to visual cues such as actions and behaviors, while happiness was substantially related to facial expressions. Sadness was perceived primarily through actions and behaviors, fear largely through facial expressions, and anger was highly related to syntactic features such as lines, shapes, and sizes. These findings imply that emotional indexing could be effective when content-based features are considered in combination with concept-based features.
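The tallying step in such a coding procedure, counting which visual-cue category dominates for each emotion, can be sketched as follows. The coded pairs and category names are hypothetical stand-ins for the study's 620 cues:

```python
from collections import Counter

# Hypothetical coded interview mentions: (emotion, cue category).
coded_cues = [
    ("love", "actions/behaviors"), ("love", "actions/behaviors"),
    ("happiness", "facial expressions"), ("happiness", "facial expressions"),
    ("anger", "syntactic features"), ("anger", "syntactic features"),
    ("anger", "facial expressions"),
]

def dominant_cue(emotion):
    """Return the most frequently mentioned cue category for an emotion."""
    counts = Counter(cat for emo, cat in coded_cues if emo == emotion)
    return counts.most_common(1)[0][0]
```

The same counts, normalized per emotion, would give the relative cue profiles the study reports.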

Making an Emotional Design Book with 5 Senses and Inspiration -Focused on the Art Book 5+1(Five Plus One)- (오감과 영감을 활용한 감성북 편집디자인 연구)

  • Hong, Dong-Sik
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.144-151 / 2010
  • 'Five Plus One' is the result of visual expressions of the five senses: sight, hearing, smell, taste, and touch. Although I searched for data on the five senses, the available information consisted mainly of quotations in various languages and scientific material, and most of the books written on the subject were far from artistic expression, lacking visual pleasure and emotional expression. In addition, most materials were textbooks for children's intellectual development, academic papers, or medical publications. Therefore, I made this book so that we can communicate about the five senses and inspiration even when reading the Korean and English text is difficult. I arranged the visual elements in various ways, using illustrations and typography, based on the scientific evidence.

Expression Characteristics of Korean Buffet applied Space Branding - Focusing on the Korean Buffet in Seoul City - (스페이스 브랜딩을 적용한 한식뷔페 표현특성 - 서울시에 위치한 한식뷔페를 중심으로 -)

  • Jeung, Yeoung-Hyun
    • Korean Institute of Interior Design Journal / v.26 no.3 / pp.91-100 / 2017
  • The rapid growth of Korean-style buffets in recent years has increased the scale of corporate investment. Under these circumstances, businesses make various marketing efforts while highlighting the features and advantages of their brands. Against this backdrop, this study aims to understand how space branding has been applied to the Korean-style buffet through case studies and to propose a method of applying space branding to increase sales effectively in the future. The research consists of a theoretical examination and case studies focusing on the expressive characteristics of Korean-style buffet space branding. After examining the concept and expressive characteristics of the Korean-style buffet and analyzing the concept of space branding, the components of space branding were reconstructed based on preceding studies and applied to each brand space. The expressive characteristics of Korean-style buffets incorporating space branding were then examined first-hand, a checklist was prepared through visual inspections, field surveys were conducted on this basis, and conclusions were drawn from results analyzed with the SPSS statistical program. Through preceding studies, the three components of space branding (sensory, emotional, and cognitive elements) were reconstructed before proceeding with the study, which yielded five major findings. First, the sensory element should be given features differentiated enough to attract consumers' attention, along with a sustained effort to imprint the brand image in their minds. Second, in terms of the emotional element, the study found that brand experiences oriented toward interest and participation result in higher usage frequency. Third, the cognitive element should seek consistency in communicating with consumers, with a focus on face-to-face contact at the display in the space. Fourth, arranging independent spaces is necessary to attract consumers' participation. Finally, the study identified in which areas of the buffet the sensory, emotional, and cognitive elements carried significant weight.

The Effect of Interjection in Conversational Interaction with the AI Agent: In the Context of Self-Driving Car (인공지능 에이전트 대화형 인터랙션에서의 감탄사 효과: 자율주행 맥락에서)

  • Lee, Sooji;Seo, Jeeyoon;Choi, Junho
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.551-563 / 2022
  • This study aims to identify the effect on user experience when the embodied agent in a self-driving car interacts with emotional expressions by using interjections. An experimental study was designed with two conditions: the inclusion of interjections in the agent's conversational feedback (with interjections vs. without interjections) and the type of conversation (task-oriented vs. social-oriented). The online experiment was conducted with four video clips of conversation-scenario treatments and measured intimacy, likability, trust, social presence, perceived anthropomorphism, and future intention to use. The results showed that when the agent used interjections, a main effect on social presence was found in both conversation types. When the agent did not use interjections in the task-oriented conversation, trust and future intention to use were higher than when the agent talked with emotional expressions. In the context of conversation with the AI agent in a self-driving car, we found an effect of adding emotional expression through interjections only on enhancing social presence, and no effect on the other user experience factors.

Functions and Driving Mechanisms for Face Robot Buddy (얼굴로봇 Buddy의 기능 및 구동 메커니즘)

  • Oh, Kyung-Geune;Jang, Myong-Soo;Kim, Seung-Jong;Park, Shin-Suk
    • The Journal of Korea Robotics Society / v.3 no.4 / pp.270-277 / 2008
  • The development of a face robot basically targets very natural human-robot interaction (HRI), especially emotional interaction. So does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a living-being-like face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. Moreover, its structure and mechanism should be simple and its production cost low enough. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can take on natural and precise facial expressions and make dynamic gestures, all driven by one laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy can create its own personality, emotion, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. Buddy's interaction performance is successfully demonstrated by experiments and simulations.
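Buddy's emotional reaction decision model is not specified in detail here; a much-simplified, hypothetical rule-based sketch of the general idea, weighting sensor readings by personality factors to pick a facial expression, might look like this (all names, factors, and thresholds are assumptions for illustration):

```python
# Hypothetical personality factors; in Buddy these would shape
# how strongly each stimulus drives an emotional reaction.
PERSONALITY = {"sociability": 0.8, "timidity": 0.3}

def decide_expression(sensors):
    """Pick a facial expression from sensor input.
    sensors: dict like {'face_detected': bool, 'noise_level': 0..1}."""
    scores = {"smile": 0.0, "surprise": 0.0, "neutral": 0.1}
    if sensors.get("face_detected"):
        # A sociable robot reacts warmly to seeing a person.
        scores["smile"] += PERSONALITY["sociability"]
    if sensors.get("noise_level", 0.0) > 0.7:
        # A timid robot startles at loud noise.
        scores["surprise"] += PERSONALITY["timidity"]
    return max(scores, key=scores.get)
```

Real-time learning, as the abstract describes, would then adjust the personality factors from interaction outcomes.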


Mother-Child Interactions in a Stressful Situation by Mother's Emotional Regulation Level (스트레스 상황에서 어머니의 정서조절 수준에 따른 어머니-자녀 간 상호작용 분석)

  • Nahm, Eun Young;Park, So Eun
    • Korean Journal of Child Studies / v.38 no.1 / pp.251-262 / 2017
  • Objective: This study analyzed mother-child interactions in a stressful situation second by second, according to the mother's emotional regulation level. Methods: The study was conducted with 16 mothers and their 5-year-old children performing a teaching task for 15 min. The interactions were videotaped and examined, and qualitative analysis was used to analyze them in detail by creating a situation that maximized the stress and frustration of mother and child. Results: Maternal humor and affection were significantly related to child positive emotion, and maternal coaching closely correlated with child pride, pleasure, and whining. Additionally, maternal intrusive behavior showed a positive correlation with child anger. Lastly, mothers with higher levels of emotional regulation expressed affection to their children more often; they were more actively involved in the tasks and used fewer positive or negative directive expressions, and their children expressed more positive emotions. Conclusion: These findings suggest that programs improving parental emotional reaction and emotion regulation should be developed.

Acquisition of natural Emotional Voice Through Autobiographical Recall Method (자전적 회상을 통한 자연스런 정서음성정보 수집방법에 관한 연구)

  • Jo, Eun-Kyung;Jo, Cheol-Woo;Min, Kyung-Hwan
    • The Journal of the Acoustical Society of Korea / v.16 no.2 / pp.66-70 / 1997
  • In order to obtain natural emotional voice in the laboratory, an autobiographical recall method was used, and happy, angry, sad, and afraid feelings were induced in 16 college students. Three independent judges rated the subjects' facial expressions and vocal characteristics. The mood-induction results were compared with those from an actor-initiated method. Data analysis showed that recall-induced voices successfully conveyed subtle emotional cues, while actor-produced voices signaled more extreme emotions. Implications of the autobiographical recall method for emotional voice research and potential problems are discussed.
