• Title/Summary/Keyword: emotional generation

Search Results: 214

A structural equation model of organizational commitment by hospital nurses: The moderating effect of each generation through multi-group analysis (병원간호사의 조직몰입 구조모형: 다중집단분석을 통한 세대별 조절 효과)

  • Chae, Jeong Hye;Kim, Young Suk
    • The Journal of Korean Academic Society of Nursing Education / v.28 no.3 / pp.305-316 / 2022
  • Purpose: The purpose of this study was to construct a structural equation model of organizational commitment in hospital nurses based on the job demands-resources model and to confirm moderating effects according to the nurses' generation. Methods: The model was constructed with the exogenous variables of social support, emotional intelligence, emotional labor, and job conflict and the endogenous variables of burnout, job engagement, and organizational commitment. The participants were 560 hospital nurses working in three general hospitals. Data were collected from August 1 to September 30, 2021, and analyzed using SPSS for Windows 23.0 and IBM AMOS 23.0. Results: The strongest factor directly influencing hospital nurses' organizational commitment was social support. In a multiple-group analysis, the nurses' generation had a partial moderating effect. In the generation-specific analysis, the Generation Z group scored higher than the Generation X and Y groups on emotional labor and burnout in relation to organizational commitment. Conclusion: Based on these findings, improving hospital nurses' organizational commitment requires social support as a key management strategy. At the organizational level, ways to improve organizational commitment should be developed by reducing the emotional labor and burnout of Generation Z.

Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

  • Xing, Baixi;Dou, Jian;Huang, Qing;Si, Huahao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1464-1485 / 2021
  • The emotional style of multimedia artworks is abstract content information. This study explores an emotional style transfer method and possible ways of matching music with appropriate images with respect to emotional style. Deep convolutional neural networks (DCNNs) can capture style and provide an iterative emotional style transfer solution for affective image generation. Here, we learn image emotion features via DCNNs and map the affective style onto other images. We set the image emotion feature as the style target in this style transfer problem and conducted experiments on affective image generation for eight emotion categories: dignified, dreaming, sad, vigorous, soothing, exciting, joyous, and graceful. A user study was conducted to test the synesthetic emotional image style transfer results against ground-truth user perception triggered by the music-image pairs' stimuli. According to the user study results, the transferred affective images were effective for music-image emotional synesthesia perception.
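The abstract does not give the transfer objective; as a hedged illustration, the sketch below shows the Gram-matrix style loss standard in CNN style transfer, with small random NumPy arrays standing in for real DCNN activations (the feature shapes and random features are assumptions, not the paper's setup):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) feature map,
    the usual style representation in CNN style transfer."""
    c, n = features.shape
    return features @ features.T / n

def style_loss(gen_features, style_features):
    """Squared distance between Gram matrices. In the paper's setting,
    style_features would come from an image whose *emotion* is the target."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return float(np.mean((g_gen - g_style) ** 2))

rng = np.random.default_rng(0)
f_style = rng.normal(size=(8, 64))   # stand-in for DCNN activations
f_gen = rng.normal(size=(8, 64))

print(style_loss(f_gen, f_style) > 0.0)     # distinct images give a positive loss
print(style_loss(f_style, f_style) == 0.0)  # identical features give zero loss
```

Minimizing such a loss iteratively over the generated image is what "emotional style transfer iterative solution" refers to in the standard formulation.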

Multi Emotional Agent based Story Generation (다중 감정 에이전트를 이용한 자동 이야기 생성 시스템의 설계)

  • Kim, Won-Il;Kim, Dong-Hyun;Hong, You-Sik;Kim, Sung-Sik;Lee, Chang-Min
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.134-139 / 2008
  • In this paper, we propose a story generation system using multiple emotional agents. The proposed agents are each equipped with their own emotional model, so they can serve as individually personalized agents that generate unique storylines. Such multi-emotion agents can readily be employed as avatars or NPCs in computer games. In the proposed system, the emotional agents are used as actors whose characters and preferences differ from one another. The storylines generated by the system are realistic, since the characters are as emotional as humans.
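The paper's agent design is not detailed in the abstract; the sketch below is a minimal, hypothetical illustration of the idea that each character's action choice is biased by its own emotion profile (the names, actions, and weights are invented for illustration):

```python
import random

class EmotionalAgent:
    """Hypothetical sketch of a story character whose action choice is
    biased by its own emotion profile, as in the multi-agent design."""
    ACTIONS = {"joy": "celebrates", "anger": "argues", "fear": "hides"}

    def __init__(self, name, profile):
        self.name = name
        self.profile = profile        # emotion -> weight

    def act(self, rng):
        emotions = list(self.profile)
        weights = [self.profile[e] for e in emotions]
        mood = rng.choices(emotions, weights=weights)[0]
        return f"{self.name} {self.ACTIONS[mood]}."

rng = random.Random(0)
cast = [EmotionalAgent("Mina", {"joy": 0.8, "anger": 0.1, "fear": 0.1}),
        EmotionalAgent("Jun", {"joy": 0.1, "anger": 0.7, "fear": 0.2})]
story = " ".join(agent.act(rng) for agent in cast)
print(story)
```

Because each agent carries a distinct profile, repeated runs yield different storylines, which is the personalization the abstract describes.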

Emotional-Controllable Talking Face Generation on Real-Time System

  • Van-Thien Phan;Hyung-Jeong Yang;Seung-Won Kim;Ji-Eun Shin;Soo-Hyung Kim
    • Annual Conference of KIPS / 2024.10a / pp.523-526 / 2024
  • Recent progress in audio-driven talking face generation has focused on achieving more realistic and emotionally expressive lip movements, enhancing the quality of virtual avatars and animated characters for applications in entertainment, education, healthcare, and more. Despite these advances, challenges remain in creating natural and emotionally nuanced lip synchronization efficiently and accurately. To address these issues, we introduce a novel method for audio-driven lip-sync that offers precise control over emotional expressions, outperforming current techniques. Our method utilizes a conditional deep variational autoencoder to produce lifelike lip movements that align seamlessly with audio inputs while dynamically adjusting for various emotional states. Experimental results highlight the advantages of our approach, showing significant improvements in emotional accuracy and in the overall quality of the generated facial animations and video sequences on the CREMA-D dataset [1].
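The authors' architecture is not specified beyond "conditional deep variational autoencoder"; as a hedged sketch of that general idea, the NumPy toy below conditions both the encoder and decoder on an audio feature and a one-hot emotion label, and samples the prior at inference time (all dimensions and the randomly initialized weights are assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

AUDIO_DIM, EMO_DIM, LIP_DIM, LATENT_DIM = 16, 8, 20, 4

# Randomly initialized linear maps stand in for trained networks.
W_enc = rng.normal(scale=0.1, size=(LIP_DIM + AUDIO_DIM + EMO_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM + AUDIO_DIM + EMO_DIM, LIP_DIM))

def encode(lip, audio, emotion):
    """Map a lip-motion frame plus its conditions to latent mean/log-variance."""
    h = np.concatenate([lip, audio, emotion]) @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def decode(z, audio, emotion):
    """Reconstruct a lip-motion frame from the latent code and conditions."""
    return np.concatenate([z, audio, emotion]) @ W_dec

def generate(audio, emotion):
    """At inference time, sample the prior and decode under the conditions."""
    z = rng.normal(size=LATENT_DIM)
    return decode(z, audio, emotion)

audio = rng.normal(size=AUDIO_DIM)
happy = np.eye(EMO_DIM)[0]          # hypothetical one-hot emotion label
frame = generate(audio, happy)
print(frame.shape)                   # (20,)
```

Swapping the emotion vector while keeping the audio fixed is what gives the "emotional-controllable" generation described in the title.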

Story Generation System using Emotional Agent (감정 에이전트를 이용한 자동 이야기 생성 시스템의 설계)

  • Kim, Won-Il;Kim, Dong-Hyun;Hong, You-Sik;Lee, Chang-Min
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.140-147 / 2008
  • This paper proposes a story generation system based on an emotional agent. In the proposed system, the emotional agent is used as an actor, while the story generator produces a goal and detailed plans to achieve it. The storyline is constructed as the goal-oriented plan is processed. The proposed system is effective and realistic, since it employs a human-like emotional agent as the main character in generating the story.

Robot's Emotion Generation Model based on Generalized Context Input Variables with Personality and Familiarity (성격과 친밀도를 지닌 로봇의 일반화된 상황 입력에 기반한 감정 생성)

  • Kwon, Dong-Soo;Park, Jong-Chan;Kim, Young-Min;Kim, Hyoung-Rock;Song, Hyunsoo
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.2 / pp.91-101 / 2008
  • For friendly human-robot interaction, emotional interchange has recently become more important. Many researchers investigating emotion generation models have therefore tried to make a robot's emotional state more natural and to improve the models' usability for robot designers. Varied emotion generation is also needed to increase a robot's believability. In this paper, we use a hybrid emotion generation architecture and define a generalized context input for the emotion generation model so that designers can easily implement it on a robot. For varied emotion generation, we developed personality and loyalty models grounded in psychology. The robot's personality is implemented with emotional stability from the Big Five, and loyalty is composed of familiarity generation, expression, and a learning procedure, based on human social relationships as described by balance theory and social exchange theory. We verify this emotion generation model by implementing it in a 'user calling and scheduling' scenario.
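The abstract names the ingredients (context input, emotional stability, familiarity) without giving the update equations; the sketch below is a hypothetical minimal reading of that combination, in which stability damps the reaction to context and familiarity biases valence upward (the state space, gain formula, and coefficients are all assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float = 0.0   # pleasant (+) vs. unpleasant (-)
    arousal: float = 0.0   # excited (+) vs. calm (-)

def update_emotion(state, context_valence, context_arousal,
                   stability=0.5, familiarity=0.0):
    """One update step. `stability` in [0, 1] damps reactions (Big-Five
    emotional stability); `familiarity` in [0, 1] shifts valence upward
    for well-known users, loosely echoing the paper's loyalty model."""
    gain = 1.0 - stability
    state.valence += gain * (context_valence + 0.2 * familiarity)
    state.arousal += gain * context_arousal
    return state

calm_robot = update_emotion(EmotionState(), -1.0, 1.0, stability=0.9)
jumpy_robot = update_emotion(EmotionState(), -1.0, 1.0, stability=0.1)
print(abs(jumpy_robot.arousal) > abs(calm_robot.arousal))  # True
```

The point of the comparison is that the same context event produces different emotions for differently configured robots, which is the varied generation the paper argues for.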


Korean Prosody Generation Based on Stem-ML (Stem-ML에 기반한 한국어 억양 생성)

  • Han, Young-Ho;Kim, Hyung-Soon
    • MALSORI / no.54 / pp.45-61 / 2005
  • In this paper, we present a method of generating intonation contours for a Korean text-to-speech (TTS) system and a method of synthesizing emotional speech, both based on Soft template mark-up language (Stem-ML), a novel prosody generation model combining mark-up tags and pitch generation in one. The evaluation shows that the intonation contours generated by Stem-ML are better than those of our previous work. Stem-ML also proves to be a useful tool for generating emotional speech by controlling a limited number of tags. A large emotional speech database is crucial for more extensive evaluation.
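Stem-ML's core idea is that tags are soft targets: the pitch contour trades off hitting each tag against articulation effort (smoothness). The NumPy sketch below illustrates that trade-off as a least-squares problem; the tag format, strength values, and solver are illustrative assumptions, not the Stem-ML specification:

```python
import numpy as np

def soft_contour(n, targets, smooth=1.0):
    """Minimal Stem-ML-flavoured sketch: each tag is (index, pitch, strength).
    The contour minimizes strength-weighted distance to the tag targets plus
    a curvature (effort) penalty, solved as one linear system."""
    # Second-difference operator penalizes curvature.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = smooth * (D.T @ D)
    b = np.zeros(n)
    for idx, pitch, strength in targets:
        A[idx, idx] += strength
        b[idx] += strength * pitch
    # Tiny ridge keeps the system well conditioned.
    return np.linalg.solve(A + 1e-6 * np.eye(n), b)

# Hypothetical accent at frame 10 (200 Hz) and a boundary fall at frame 29.
contour = soft_contour(30, [(10, 200.0, 5.0), (29, 120.0, 5.0)])
```

Weakening a tag's strength lets the smoothness term pull the contour away from that target, which is how a small number of tags can reshape prosody for emotional speech.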


A Study on the Difference between Young and Old Generation of SNS Behavior (SNS(social network service)활용에 대한 세대별 차이 연구)

  • Hwang, Yoon Yong;Lee, Ki Sang;Choi, Soow-A
    • Journal of Korea Society of Industrial Information Systems / v.20 no.1 / pp.63-77 / 2015
  • As social network service (SNS) environments have changed and grown, people have come to perceive SNSs as part of their daily lives, and mutual communication activities based on the Internet, together with their influence, continue to expand. This paper explores differences in consumers' emotional well-being and in the social capital formed through SNSs. Given that the reasons for using an SNS and the ways it is used can differ across consumers, this paper also examines generational differences, i.e., how the forms of emotional well-being and social capital in SNSs differ by generation. We conducted a survey targeting consumers with experience using online SNSs and examined generational effects on emotional well-being and social capital using eighty-three valid samples. We find generational differences in the size and types of social capital formed through SNSs. In particular, the younger generation had larger social capital than the older generation, and bridging social capital, one of the social capital types, was also greater in the younger generation. Although overall emotional well-being did not differ across generations, the older generation's negative well-being, one type of emotional well-being, was more sensitive than the younger generation's. Based on these results, this paper proposes generation-specific SNS utilization plans and suggests a management direction for online social networks.

How to Express Emotion: Role of Prosody and Voice Quality Parameters (감정 표현 방법: 운율과 음질의 역할)

  • Lee, Sang-Min;Lee, Ho-Joon
    • Journal of the Korea Society of Computer and Information / v.19 no.11 / pp.159-166 / 2014
  • In this paper, we examine the role of emotional acoustic cues, including both prosody and voice quality parameters, in modifying a word's sense. To extract the prosody and voice quality parameters, we used 60 pieces of speech data spoken by six speakers in five different emotional states. We analyzed eight emotional acoustic cues and used a discriminant analysis technique to find the dominant sequence of acoustic cues. As a result, we found that anger is closely related to intensity level and the 2nd formant bandwidth range; joy is related to the positions of the 2nd and 3rd formant values and to intensity level; sadness is strongly related only to prosody cues such as intensity level and pitch level; and fear is related to pitch level and to the 2nd formant value with its bandwidth range. These findings can serve as a guideline for fine-tuning an emotional spoken language generation system, because these distinct sequences of acoustic cues reveal the subtle characteristics of each emotional state.
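As a hedged stand-in for the discriminant analysis step, the sketch below classifies a cue vector by its nearest class centroid in an acoustic-cue space; the cue dimensions, cluster centers, and synthetic data are invented for illustration and are not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cue vectors [intensity, pitch, F2, F2 bandwidth],
# one synthetic cluster per emotion.
centers = {"anger": [3.0, 1.0, 0.0, 2.0],
           "sadness": [-2.0, -2.0, 0.0, 0.0],
           "joy": [2.0, 1.0, 2.0, 0.0]}
X, y = [], []
for emo, c in centers.items():
    X.append(rng.normal(loc=c, scale=0.3, size=(20, 4)))
    y += [emo] * 20
X = np.vstack(X)

def nearest_centroid(train_X, train_y, x):
    """Minimal stand-in for discriminant analysis: classify a cue vector
    by its closest class centroid in the acoustic-cue space."""
    labels = sorted(set(train_y))
    cents = {l: train_X[[i for i, t in enumerate(train_y) if t == l]].mean(axis=0)
             for l in labels}
    return min(cents, key=lambda l: np.linalg.norm(cents[l] - x))

print(nearest_centroid(X, y, np.array([2.9, 1.1, 0.1, 1.9])))  # anger
```

A proper linear discriminant analysis additionally whitens by the pooled within-class covariance; the centroid version above keeps only the geometric intuition.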

The Effects of Components of Social Information Processing and Emotional Factors on Preschoolers' Overt and Relational Aggression (사회정보처리 구성요소와 정서요인이 유아의 외현적 공격성과 관계적 공격성에 미치는 영향)

  • Choi, In-Suk;Lee, Kang-Yi
    • Korean Journal of Child Studies / v.31 no.6 / pp.15-34 / 2010
  • The present study examines sex differences in 5-year-old preschoolers' aggression by type of aggression (overt, relational) and the effects of components of social information processing (SIP: interpretation, goal clarification, response generation, response evaluation) and emotional factors (emotionality, emotional knowledge, emotion regulation) on their aggression. The subjects were 112 five-year-olds (56 boys, 56 girls) and their 11 teachers, recruited from 9 day-care centers in Seoul and Gyeonggi Province. Each child's SIP and emotional knowledge were individually assessed with pictorial tasks, and teachers reported on the children's aggression, emotionality, and emotion regulation by questionnaire. The results indicated a significant sex difference only in the preschoolers' overt aggression. Overtly aggressive response generation in SIP was the strongest predictor of preschoolers' overt aggression, while anger, a facet of negative emotionality among the emotional factors, was the strongest predictor of preschoolers' relational aggression.