• Title/Summary/Keyword: emotion technology


A study on behavior response of child by emotion coaching of teacher based on emotional recognition technology (감성인식기술 기반 교사의 감정코칭이 유아에게 미치는 반응 연구)

  • Choi, Moon Jung;Whang, Min-Cheol
    • Journal of the Korea Convergence Society / v.8 no.7 / pp.323-330 / 2017
  • Emotion in early childhood has been observed to have an important effect on behavioral development. Teachers coach children toward good behavior by attending to emotional rather than rational responses. This study examined the significance of emotion coaching for behavioral development, with emotion recognized by a non-verbal measurement system developed specifically for this study. There were 44 participants, each observed in four experimental situations: a class without coaching, behavioral coaching, emotion coaching, and emotion coaching based on the emotion recognition system. The dependent variables were subjective evaluation, behavioral amplitude, and the HRC (Heart Rhythm Coherence) of the heart response. The results showed the highest positive evaluation, behavioral amplitude, and HRC for emotion coaching based on the emotion recognition system. In post-hoc analysis, subjective evaluation showed no difference between emotion coaching and system-based emotion coaching, but behavioral amplitude and HRC differed significantly between the two coaching situations. In conclusion, quantitative measures such as behavioral amplitude and HRC are expected to resolve the ambiguity of subjective evaluation. Teachers' emotion coaching using an emotion recognition system can improve positive emotion and psychological stability in children.

The Effect of Empathy induced by Positive Events on Subjective Value of Reward: Preliminary Study

  • Kim, Jong-Wan;Jung, Dae-Hyun;Eom, Ki-Min;Han, Kwang-Hee
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.11a / pp.228-231 / 2009
  • Recent studies have examined human empathic behavior in its physical, cognitive, and emotional aspects, and empathy in particular is treated as a multidisciplinary topic because of its wide range of applications. However, most studies have focused on empathy induced by negative emotion and physical pain. The purpose of this study, based on Loggia et al. (2008), was therefore to investigate whether empathy can be induced by positive events and, consequently, whether such positive empathy increases the subjective value of a reward. In an experiment with eight participants, we confirmed that positive events can induce empathy significantly, although the effect was weak. There was no interaction between empathy type (positive vs. no empathy) and whether the target received the reward. Nevertheless, if more participants were recruited and correlations were additionally analyzed among trait/state empathy questionnaires, subjective reward ratings, and the target's emotion, we suggest this study would be valuable in extending empathy research.


2D Emotion Classification using Short-Time Fourier Transform of Pupil Size Variation Signals and Convolutional Neural Network (동공크기 변화신호의 STFT와 CNN을 이용한 2차원 감성분류)

  • Lee, Hee-Jae;Lee, David;Lee, Sang-Goog
    • Journal of Korea Multimedia Society / v.20 no.10 / pp.1646-1654 / 2017
  • Pupil size variation cannot be controlled intentionally by the user and carries various features, such as blinking frequency and blink duration, so it is well suited to understanding the user's emotional state. Moreover, ocular-feature-based emotion classification should be studied for virtual and augmented reality, where it is expected to be applied in various fields. In this paper, we propose a novel emotion classification method based on a CNN fed with pupil size variation signals, which contain not only diverse ocular feature information but also temporal information. Compared to previous studies using the same database, the proposed method improved arousal and valence classification by 5.99% and 12.98%, respectively.
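The time-frequency input this abstract describes can be sketched in miniature: a short-time Fourier transform slides a window along a 1-D signal and takes DFT magnitudes per window, yielding the 2-D map a CNN would consume. The pure-Python windowed DFT below is an illustrative sketch under assumed toy values (signal, window length, hop size), not the authors' pipeline.

```python
import cmath
import math

def stft(signal, win_len=8, hop=4):
    """Naive short-time Fourier transform: windowed DFT magnitudes.

    Returns a list of frames, each a list of |DFT| values over the
    non-negative frequency bins -- a small time-frequency map."""
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        win = signal[start:start + win_len]
        mags = []
        for k in range(win_len // 2 + 1):  # keep non-negative frequencies
            acc = sum(win[n] * cmath.exp(-2j * cmath.pi * k * n / win_len)
                      for n in range(win_len))
            mags.append(abs(acc))
        frames.append(mags)
    return frames

# Synthetic "pupil size" trace: a constant baseline plus a slow oscillation.
sig = [3.0 + 0.5 * math.sin(2 * math.pi * 2 * n / 8) for n in range(32)]
spec = stft(sig)
print(len(spec), len(spec[0]))  # → 7 time frames x 5 frequency bins
```

Each row of `spec` is one time step and each column one frequency band; stacking the rows gives the 2-D array a small CNN classifier could take as input.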

Development of Deep Learning Models for Multi-class Sentiment Analysis (딥러닝 기반의 다범주 감성분석 모델 개발)

  • Syaekhoni, M. Alex;Seo, Sang Hyun;Kwon, Young S.
    • Journal of Information Technology Services / v.16 no.4 / pp.149-160 / 2017
  • Sentiment analysis is the process of determining whether a document, text, or conversation expresses a positive, negative, neutral, or other emotion. It has been applied in several real-world applications, such as chatbots, whose practical use has spread across many industries over the last five years. In chatbot applications, sentiment analysis must be performed in advance to recognize the user's emotion and understand the speaker's intent, and specific emotions go beyond merely labeling sentences positive or negative. In this context, we propose deep learning models for multi-class sentiment analysis that identify a speaker's emotion as one of joy, fear, guilt, sadness, shame, disgust, or anger. We develop convolutional neural network (CNN), long short-term memory (LSTM), and multi-layer neural network models for detecting emotion in a sentence, and word embedding is also applied. In our experiments, the LSTM model performs best compared to the convolutional and multi-layer neural networks. We also show the practical applicability of the deep learning models to sentiment analysis for chatbots.
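As a rough illustration of the recurrence that lets an LSTM handle sentences better than a plain multi-layer network, the snippet below runs a toy single-unit LSTM step in pure Python. The scalar weights and the three-step "embedded sentence" are arbitrary assumptions for the sketch, not the authors' trained model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM time step with scalar input and state (toy sizes).

    W maps each gate -- input (i), forget (f), output (o) -- and the
    candidate cell (g) to an (input weight, recurrent weight, bias) triple."""
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])
    f = sigmoid(W['f'][0] * x + W['f'][1] * h + W['f'][2])
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])
    c_new = f * c + i * g          # cell state carries long-range memory
    h_new = o * math.tanh(c_new)   # hidden state is what a classifier reads
    return h_new, c_new

W = {k: (0.5, 0.5, 0.0) for k in 'ifog'}  # arbitrary demo weights
h = c = 0.0
for x in [1.0, -1.0, 1.0]:                # a tiny "embedded sentence"
    h, c = lstm_step(x, h, c, W)
print(round(h, 4))
```

In a real classifier the final hidden state (a vector, not a scalar) would feed a softmax layer over the seven emotion classes.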

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.810-831 / 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved decreases the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, incomplete Cholesky decomposition based kernel cross factor analysis (ICDKCFA), is presented and applied to continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features. Finally, extensive experiments evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The results show that ICDKCFA is faster than the original kernel cross factor analysis with comparable performance, and that it outperforms other common information fusion methods, such as canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis based fusion.
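The decomposition in the title can be sketched independently of the paper: pivoted incomplete Cholesky approximates a symmetric positive semidefinite kernel matrix K with a low-rank factor G (so that G Gᵀ ≈ K) while forming only a few columns, which is what makes kernel methods tractable on large data. The pure-Python version below, with a made-up rank-2 kernel, is a generic sketch of that factorization, not the authors' ICDKCFA implementation.

```python
def incomplete_cholesky(K, rank, tol=1e-8):
    """Pivoted incomplete Cholesky: low-rank factor G with G G^T ~= K.

    K is a symmetric PSD matrix as a list of lists; only `rank` columns
    are computed, each time pivoting on the largest diagonal residual."""
    n = len(K)
    d = [K[i][i] for i in range(n)]          # diagonal of the residual
    G = [[0.0] * rank for _ in range(n)]
    for j in range(rank):
        p = max(range(n), key=lambda i: d[i])  # pivot: largest residual
        if d[p] < tol:
            return [row[:j] for row in G]      # residual exhausted early
        G[p][j] = d[p] ** 0.5
        for i in range(n):
            if i != p:
                s = sum(G[i][t] * G[p][t] for t in range(j))
                G[i][j] = (K[i][p] - s) / G[p][j]
        for i in range(n):                     # update diagonal residual
            d[i] = K[i][i] - sum(G[i][t] ** 2 for t in range(j + 1))
    return G

# A rank-2 PSD "kernel": sum of two outer products.
u, v = [1.0, 2.0, 0.0], [0.0, 1.0, 1.0]
K = [[u[i] * u[j] + v[i] * v[j] for j in range(3)] for i in range(3)]
G = incomplete_cholesky(K, rank=2)
approx = [[sum(G[i][t] * G[j][t] for t in range(len(G[0])))
           for j in range(3)] for i in range(3)]
```

Because the toy K has exact rank 2, two columns already reproduce it; on a real kernel matrix one stops far below full rank and accepts a small residual.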

Deep Learning-Based Speech Emotion Recognition Technology Using Voice Feature Filters (음성 특징 필터를 이용한 딥러닝 기반 음성 감정 인식 기술)

  • Shin Hyun Sam;Jun-Ki Hong
    • The Journal of Bigdata / v.8 no.2 / pp.223-231 / 2023
  • In this study, we propose a model that extracts and analyzes features from speech signals using deep learning, generates filters, and uses those filters to recognize emotions in speech. We evaluate the emotion recognition accuracy of the proposed model. In our simulations, the average emotion recognition accuracies of the DNN and RNN were very similar, at 84.59% and 84.52%, respectively. However, the simulation time of the DNN was approximately 44.5% shorter than that of the RNN, enabling quicker emotion prediction.

Robot's Motivational Emotion Model with Value Effectiveness for Social Human and Robot Interaction (사람과 로봇의 사회적 상호작용을 위한 로봇의 가치효용성 기반 동기-감정 생성 모델)

  • Lee, Won Hyong;Park, Jeong Woo;Kim, Woo Hyun;Lee, Hui Sung;Chung, Myung Jin
    • Journal of Institute of Control, Robotics and Systems / v.20 no.5 / pp.503-512 / 2014
  • People want to be socially engaged not only with humans but also with robots. One of the most common ways in robotics to enhance human-robot interaction is to integrate emotion and emotional concepts into robots. Many researchers have focused on developing a robot's emotional expressions, but it is first necessary to establish the psychological background of a robot's emotion generation model in order to implement the whole process of its emotional behavior. This article therefore proposes a robot's motivational emotion model with value effectiveness, drawing on Higgins' definition of motivation, regulatory focus theory, and the circumplex model. For the test, a game with a best-two-out-of-three rule was introduced, and each step of the game was evaluated by the proposed model. The results imply that the model generates psychologically appropriate emotions for a robot in the given situation. An empirical survey remains as future work to show that this research improves social human-robot interaction.

A Study on the Development of Emotional Content through Natural Language Processing Deep Learning Model Emotion Analysis (자연어 처리 딥러닝 모델 감정분석을 통한 감성 콘텐츠 개발 연구)

  • Hyun-Soo Lee;Min-Ha Kim;Ji-won Seo;Jung-Yi Kim
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.687-692 / 2023
  • We analyze the emotion classification accuracy of a natural language processing deep learning model and propose using it for emotional content development. After outlining the GPT-3 model, we fed about 6,000 dialogue samples provided by AIHub into nine emotion categories: 'joy', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', and 'pain'. Performance was evaluated with accuracy, precision, recall, and F1-score, the standard evaluation metrics for natural language processing models. The resulting accuracy exceeded 91%, while precision was low for 'fear' and 'pain'. Recall was low for negative emotions, and 'disgust' in particular produced errors due to a lack of data. Previous studies mainly used emotion analysis only for polarity analysis (positive, negative, or neutral), which by its nature limited it to the feedback stage. We expand emotion analysis to nine categories and propose using it in emotional content development from the planning stage onward. More accurate results are expected if emotion analysis is performed after collecting a wider variety of everyday conversations in follow-up research.
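The per-class metrics named in the abstract (precision, recall, F1 per emotion) are standard and easy to reproduce from label lists, as sketched below. The labels and predictions are invented toy data, not the paper's AIHub results; they just show how a sparse class ends up with unstable scores.

```python
def per_class_prf(y_true, y_pred, labels):
    """Per-class precision, recall, and F1 from parallel label lists.

    For each class c: precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of the two (0.0 when a denominator is empty)."""
    out = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[c] = (prec, rec, f1)
    return out

# Toy gold labels and predictions over three of the nine categories.
y_true = ['joy', 'joy', 'fear', 'anger', 'fear', 'joy']
y_pred = ['joy', 'fear', 'fear', 'anger', 'joy', 'joy']
scores = per_class_prf(y_true, y_pred, ['joy', 'fear', 'anger'])
```

With only two 'fear' examples, one misclassification already halves its recall, mirroring the data-scarcity effect the abstract reports for 'disgust'.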

The Effects of Priming Emotion among College Students at the Processes of Words Negativity Information (유발된 정서가 대학생의 부정적 어휘정보 처리에 미치는 효과)

  • Kim, Choong-Myung
    • Journal of Convergence for Information Technology / v.10 no.10 / pp.318-324 / 2020
  • The present study investigated the influence of primed emotion and the number of negation words on a sentential predicate reasoning task in groups with or without anxiety symptoms. Three types of primed emotion, two types of stimulus, and three negation-word conditions were used as within-subject variables. Subjects were instructed to make facial expressions matching the directions and to choose the correct answer from the given examples. Mixed repeated-measures ANOVA on reaction time showed main effects for emotion, stimulus, number of negation words, and anxiety level, as well as an interaction effect for negation words x anxiety. These results suggest that externally induced emotion affects language comprehension: anxiety can delay task processing regardless of emotion and stimulus type, while the number of negation words slows language processing only in the anxiety group. Implications and limitations are discussed for future work.

The Design of Knowledge-Emotional Reaction Model considering Personality (개인성을 고려한 지식-감정 반응 모델의 설계)

  • Shim, Jeong-Yon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.116-122 / 2010
  • As the importance of HCI (Human-Computer Interface), driven by rapidly developing computer technology, grows, so does the demand for human-friendly system design. Above all, personality and emotional factors should be considered when implementing more human-friendly systems. Many studies have addressed knowledge, emotion, and personality separately, but combined methods connecting these three factors have received little investigation. It is known that memorization involves not only knowledge but also emotion, and that emotional state strongly affects reasoning and decision making. Accordingly, a system considering all three factors should be modeled and designed in order to implement a more human-friendly, efficient, and sophisticated intelligent system. In this paper, a knowledge-emotion reaction model is designed. Five types are defined to represent personality, and an emotion reaction mechanism is proposed that calculates an emotion vector from the thought threads extracted by type-matching selection. The system is applied to a virtual memory, and its emotional reactions are simulated.