• Title/Summary/Keyword: emotion technology

Search Results: 802

The effect of Service climate on Customer emotion and Customer satisfaction (기업의 서비스 풍토가 고객감정과 고객만족도에 미치는 영향)

  • Kang, Kun-Myong;Hong, Jung-Wan
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.9
    • /
    • pp.65-74
    • /
    • 2021
  • In this study, we examine how a company's inherent service climate affects the emotions and satisfaction of the customers who receive its service, with a focus on positive customer emotions, and use the findings to suggest a direction for creating a service climate. As the research method, structural equation modeling, including measurement model analysis and structural model analysis, was performed with SmartPLS (v.3.2) on survey data. The results are as follows. First, a company's service climate has a positive (+) effect on positive customer emotions, including pleasure and happiness. This can be interpreted as an indication that creating a service climate is an important factor in eliciting positive emotions from customers. Second, a company's service climate and positive customer emotions also have a positive effect on customer satisfaction. Finally, when a company's service climate affects customer satisfaction, happiness has the greatest mediating effect among the emotion parameters, which demonstrates empirically that satisfying customers' feeling of happiness is the most important aspect of a company's service climate. Since this study targeted a small number of restaurant companies, there is a limit to generalizing the findings to all restaurant companies. Nevertheless, it is meaningful to study positive customer emotions as a path through which service climate affects customer satisfaction, and we hope that analyses of service climate will continue to improve customer satisfaction through the analysis of various emotions beyond the positive factors examined here.
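
The measurement-model/structural-model analysis described above was done in SmartPLS (PLS-SEM). As a rough illustration of the same idea in code, the sketch below specifies a measurement model and structural paths with the Python package semopy, which fits covariance-based SEM rather than PLS; the latent and indicator names (ServiceClimate, sc1, ...) and the data file are hypothetical.

```python
# Minimal SEM sketch with semopy (not SmartPLS); all variable names are hypothetical.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# measurement model: latent =~ observed indicators
ServiceClimate =~ sc1 + sc2 + sc3
Happiness      =~ hp1 + hp2 + hp3
Satisfaction   =~ st1 + st2 + st3
# structural model: regressions among latent variables
Happiness    ~ ServiceClimate
Satisfaction ~ ServiceClimate + Happiness
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey item scores
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # factor loadings, path coefficients, p-values
```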

A study on the parenting stress factors and the coping strategies of marriage immigrant women raising middle and high school students (중·고등학생 자녀를 양육하는 결혼이주여성의 양육스트레스와 대처방안에 대한 연구)

  • Huang, Haiying;Lee, Mijung
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.5 no.4
    • /
    • pp.415-426
    • /
    • 2015
  • This study aims to identify the factors of parenting stress and the coping strategies of marriage immigrant women who are raising middle and high school students. To this end, in-depth interviews were conducted with seven marriage immigrant women living in the Seoul and Gyeonggi area. First, the results showed that personal factors, family factors, social factors, and acculturative factors were the sources of their parenting stress. Second, problem-focused and emotion-focused coping strategies were the main ways of dealing with these stress factors. Specifically, among the personal factors, self-efficacy was handled with problem-focused strategies, whereas the parenting role was handled with emotion-focused strategies. Among the family factors, a child's activity and sociality affect both the child's school adjustment and the mother's parenting stress, and various coping strategies were used depending on the situation. For the social factors, seeking family support was used as an active problem-focused strategy toward the husband's family, while seeking emotional comfort was used as an emotion-focused strategy toward the women's own parents. In the case of acculturative factors, emotion-focused coping strategies were used for the public gaze and the surrounding prejudice that caused an overwhelming sense of helplessness.

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.2E
    • /
    • pp.98-104
    • /
    • 2002
  • The present study describes a combination method for recognizing human affective states such as anger, happiness, sadness, and surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were modeled by HMMs for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained with an NN for recognition. The recognition rates for the combined voice and facial-expression parameters showed better performance than either of the two isolated parameter sets. The simulation results were also compared with the results of a human questionnaire.
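
As a hedged sketch of the kind of pipeline described here (per-emotion HMMs over frame-level prosodic features, an NN over facial feature vectors, and a simple score fusion), the following Python outline uses hmmlearn and scikit-learn. The feature extraction, HMM topology, and fusion rule are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch only: per-emotion HMMs for prosody + an MLP for facial features,
# fused by a naive score combination. Details are assumptions.
import numpy as np
from hmmlearn import hmm
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def train_voice_hmms(train_seqs):
    """train_seqs: dict emotion -> list of (T_i, D) prosodic feature sequences
    (e.g., pitch, energy, and their frame-wise derivatives)."""
    models = {}
    for emo, seqs in train_seqs.items():
        X, lengths = np.vstack(seqs), [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[emo] = m
    return models

def combined_decision(voice_hmms, face_clf, voice_seq, face_feat):
    # log-likelihood of the utterance under each emotion HMM, rescaled to [0, 1]
    v = np.array([voice_hmms[e].score(voice_seq) for e in EMOTIONS])
    v = (v - v.min()) / (np.ptp(v) + 1e-9)
    # facial class probabilities from the neural network, reordered to match EMOTIONS
    p = face_clf.predict_proba(face_feat.reshape(1, -1))[0]
    f = np.array([p[list(face_clf.classes_).index(e)] for e in EMOTIONS])
    return EMOTIONS[int(np.argmax(v + f))]  # simple additive score fusion

# face_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# face_clf.fit(face_features_train, face_labels_train)
```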

Trend Analysis on Treatment of Psychological Disorders Using Virtual Reality (가상현실을 이용한 심리치료 기술 동향과 전망)

  • Yoon, Hyun Joong;Chung, Seong Youb
    • Journal of Institute of Convergence Technology
    • /
    • v.2 no.2
    • /
    • pp.5-12
    • /
    • 2012
  • Recently, many people have been suffering from psychological disorders such as addiction, phobia, depression, and bipolar disorder, and the number of children with ADD/ADHD and autism is increasing. Koreans tend to regard psychological disorders as taboo, so it is still unusual for patients to receive psychological therapy. Virtual reality has come into the spotlight as a useful tool for therapy because of its anonymity and easy accessibility; therapy conducted in virtual reality is called cyber-therapy. The patient's emotion is important to the treatment process. The objective of this paper is to review research on the treatment of psychological disorders using virtual reality and to discuss the prospects of affective interaction technology for cyber-therapy.

Emotional Model Focused on Robot's Familiarity to Human

  • Choi, Tae-Yong;Kim, Chang-Hyun;Lee, Ju-Jang
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1025-1030
    • /
    • 2005
  • This paper deals with an emotional model for a software robot. A software robot requires several capabilities, such as sensing, perceiving, acting, communicating, and surviving. There are already many studies on emotional models, such as those of KISMET and AIBO. In this paper, a new emotional model using a modified friendship scheme is proposed. Quite often, existing emotional models have time-invariant human-response architectures: conventional emotional models make a sociable robot stay around humans and obey human commands throughout operation, which makes the robot very different from real pets. Similar to real pets, the proposed emotional model with the modified friendship capability has a time-varying property that depends on the interaction between human and robot.
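
The abstract does not give the friendship equations, so the snippet below is only a toy illustration of what a time-varying familiarity state could look like: it decays when the robot is ignored and grows with positive interaction, so the robot's willingness to obey changes over time rather than being fixed.

```python
# Toy illustration only; not the paper's actual friendship model.
import math

class FamiliarityModel:
    def __init__(self, decay_rate=0.01, gain=0.1):
        self.familiarity = 0.0      # 0 = stranger, 1 = close companion
        self.decay_rate = decay_rate
        self.gain = gain

    def step(self, dt, interaction=0.0):
        """dt: elapsed time; interaction in [0, 1] (0 = ignored, 1 = petted/praised)."""
        self.familiarity *= math.exp(-self.decay_rate * dt)            # forget over time
        self.familiarity += self.gain * interaction * (1.0 - self.familiarity)
        return self.familiarity

    def obeys_command(self):
        """A pet-like robot obeys more readily as familiarity grows."""
        return self.familiarity > 0.5
```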

Automatic Textile-Image Classification System using Human Emotion (감성 기반의 자동 텍스타일 영상 분류 시스템)

  • Kim, Young-Rae;Shin, Yun-Hee;Kim, Eun-Yi
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06c
    • /
    • pp.561-564
    • /
    • 2008
  • In this paper, we propose a system that automatically classifies textile images based on emotion. The emotion groups used are Kobayashi's ten emotion keywords: {romantic, clear, natural, casual, elegant, chic, dynamic, classic, dandy, modern}. The proposed system consists of feature extraction and classification. In the feature extraction stage, a quantization technique is used to extract the representative colors of the textile, and statistical information from a wavelet transform is used to represent pattern information. A neural-network-based classifier takes the extracted features as input and classifies the input textile image. To demonstrate the effectiveness of the proposed emotion recognition method, experiments were performed on 220 textile images, and the proposed method achieved an accuracy of 99%. These experimental results show that the proposed method can be generalized to a wide variety of textile images.
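
A rough sketch of the described feature pipeline (representative colors via quantization plus wavelet statistics for pattern, fed to a neural-network classifier) is shown below using OpenCV, PyWavelets, and scikit-learn; the number of representative colors, the wavelet choice, and the network size are assumptions.

```python
# Feature-extraction sketch; parameters are illustrative assumptions.
import cv2
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def textile_features(img_bgr, n_colors=5):
    # representative colors via k-means color quantization
    pixels = img_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(pixels, n_colors, None, criteria, 3,
                               cv2.KMEANS_RANDOM_CENTERS)
    color_feat = centers.flatten() / 255.0

    # pattern statistics from a 2-D wavelet transform of the grayscale image
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    _, (cH, cV, cD) = pywt.dwt2(gray, "haar")
    pattern_feat = np.array([f(c) for c in (cH, cV, cD) for f in (np.mean, np.std)])

    return np.concatenate([color_feat, pattern_feat])

# neural-network classifier over Kobayashi's 10 emotion keywords
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
# clf.fit(np.stack([textile_features(im) for im in train_images]), train_labels)
```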

Toward More Reliable Emotion Recognition of Vocal Sentences by Emphasizing Information of Korean Ending Boundary Tones (한국어 문미억양 강조를 통한 향상된 음성문장 감정인식)

  • Lee Tae-Seung;Park Mikyong;Kim Tae-Soo
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.514-516
    • /
    • 2005
  • Autonomous devices that interact with humans need to recognize the emotions and attitudes contained in implicit signals in order to obtain customers' voluntary cooperation. For humans, speech is the easiest and most natural means of exchanging information. Until now, automatic systems for understanding emotions and attitudes have used features based on the pitch and energy of spoken sentences. The performance of such existing emotion recognition systems can be improved further by exploiting the linguistic knowledge that specific intonational segments of a sentence are related to emotion and attitude. In this paper, we improve the emotion recognition rate by applying linguistic knowledge about Korean ending boundary tones to an automatic system built on pitch-based features and a multilayer neural network. Experiments on a Korean emotional speech database confirmed a 4% improvement in the recognition rate.
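
To make the idea concrete, the sketch below weights the sentence-final region of a pitch contour more heavily before building features for a multilayer perceptron; the tail length and weight are arbitrary illustrative values, not the paper's settings.

```python
# Illustrative sketch: emphasize the ending-boundary-tone region of the pitch contour.
import numpy as np
from sklearn.neural_network import MLPClassifier

def pitch_features(f0, tail_ratio=0.2, tail_weight=3.0):
    """f0: 1-D array of per-frame pitch values (0 for unvoiced frames)."""
    f0 = f0[f0 > 0]                              # keep voiced frames only
    n_tail = max(2, int(len(f0) * tail_ratio))   # sentence-final region
    weights = np.ones(len(f0))
    weights[-n_tail:] = tail_weight              # emphasize the ending tone
    mean = np.average(f0, weights=weights)
    std = np.sqrt(np.average((f0 - mean) ** 2, weights=weights))
    slope_tail = np.polyfit(np.arange(n_tail), f0[-n_tail:], 1)[0]  # rising vs. falling
    return np.array([mean, std, slope_tail])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000)
# clf.fit(np.stack([pitch_features(f0) for f0 in train_contours]), train_labels)
```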

Kinect Sensor-based LMA Motion Recognition Model Development

  • Hong, Sung Hee
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.367-372
    • /
    • 2021
  • The purpose of this study is to show that movement expression activities for intellectually disabled people are effective within a learning process built on Kinect-sensor-based LMA motion recognition. We implemented ICT motion recognition games for intellectually disabled learners based on LMA movement learning. The characteristics of movement in Laban's LMA include the change of time in which movement occurs through a body that perceives space, and the tension or relaxation of emotional expression. The design and implementation of the motion recognition model are described, and the feasibility of the proposed model is verified through a simple experiment. In the experiment, 24 movement expression activities conducted over 10 learning sessions with 5 participants showed an overall average concordance rate of 53.4% or higher. Learning motion games that respond to changes in motion had a positive effect on the participants' learning emotions.
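
The abstract does not detail the recognition model, so the following is only a simplified illustration of LMA-style descriptors that can be computed from a Kinect joint trajectory, mapping average speed to the Time factor and path directness to the Space factor; the mapping is a common simplification, not the paper's actual model.

```python
# Rough LMA-style descriptors from one Kinect joint trajectory (illustrative only).
import numpy as np

def lma_descriptors(joint_xyz, dt=1.0 / 30.0):
    """joint_xyz: (T, 3) positions of one joint (e.g., the right hand) at 30 fps."""
    vel = np.diff(joint_xyz, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    acc = np.diff(speed) / dt

    time_factor = speed.mean()                        # sudden vs. sustained
    weight_factor = np.abs(acc).mean()                # strong vs. light (proxy)
    path_len = speed.sum() * dt
    direct_dist = np.linalg.norm(joint_xyz[-1] - joint_xyz[0])
    space_factor = direct_dist / (path_len + 1e-9)    # direct vs. indirect

    return {"time": time_factor, "weight": weight_factor, "space": space_factor}
```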

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice
    • /
    • v.7 no.2
    • /
    • pp.32-39
    • /
    • 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of its discriminating capability, this paper suggests a simple but effective method for CNN-based FER. Specifically, instead of an original expression image that contains facial appearance only, the expression image with facial geometry visualization is used as input to the CNN. In this way, geometric and appearance features can be learned simultaneously, making the CNN more discriminative for FER. A simple CNN extension is also presented, aiming to utilize geometric expression changes derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that the CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
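
A minimal sketch of the input-preparation step described here: overlay detected facial-landmark geometry on the expression image before it is passed to the CNN. The landmark detector is assumed to exist elsewhere, and the drawing style (green dots) is an illustrative choice; the paper's exact visualization may differ.

```python
# Sketch: draw landmark geometry onto the face crop before CNN input.
import cv2
import numpy as np

def visualize_geometry(face_img, landmarks):
    """face_img: HxWx3 uint8 face crop; landmarks: (N, 2) pixel coordinates
    from any facial-landmark detector (assumed to be computed elsewhere)."""
    out = face_img.copy()
    for (x, y) in landmarks.astype(int):
        cv2.circle(out, (int(x), int(y)), 1, (0, 255, 0), -1)  # overlay geometry
    return out

# The augmented image is then used like any other image as CNN input, e.g.
# model.predict(preprocess(visualize_geometry(img, pts))[None, ...]).
```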

Implementation of Real Time Facial Expression and Speech Emotion Analyzer based on Haar Cascade and DNN (Haar Cascade와 DNN 기반의 실시간 얼굴 표정 및 음성 감정 분석기 구현)

  • Yu, Chan-Young;Seo, Duck-Kyu;Jung, Yuchul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.01a
    • /
    • pp.33-36
    • /
    • 2021
  • In this paper, we propose an emotion analyzer based on human facial expressions and voice. The proposed analyzers organize seven facial expressions with distinct characteristics into separate classes and use a modified DNN model. For the speech data, data augmentation was applied to enlarge the training set, and a callback function was used to stop training early once the best performance was reached, preventing overfitting. The facial expression emotion analysis model reached a validation loss of 0.94 and a validation accuracy of 0.66, and the speech emotion analysis model reached a validation loss of 0.89 and a validation accuracy of 0.65; model tests using the OpenCV library produced stable results.
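
A minimal sketch of the facial-expression half of such an analyzer: OpenCV's bundled Haar cascade detects faces in webcam frames, and a separately trained DNN (model file, 48x48 input size, and label set are assumptions) would classify each face crop into one of seven expressions. The early stopping mentioned above would typically be a Keras EarlyStopping-style callback applied during training, which is not shown here.

```python
# Sketch: Haar-cascade face detection + (hypothetical) DNN expression classifier.
import cv2
import numpy as np
# from tensorflow.keras.models import load_model
# expr_model = load_model("expression_dnn.h5")      # hypothetical trained model

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        # probs = expr_model.predict(face[None, ..., None], verbose=0)[0]
        # label = LABELS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("expression", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```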
