• Title/Summary/Keyword: emotion engineering

Emotion Recognition of Korean and Japanese using Facial Images (얼굴영상을 이용한 한국인과 일본인의 감정 인식 비교)

  • Lee, Dae-Jong;Ahn, Ui-Sook;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.197-203
    • /
    • 2005
  • In this paper, we propose an emotion recognition method using facial images to support the effective design of human interfaces. The facial database consists of six basic human emotions, happiness, sadness, anger, surprise, fear, and dislike, which are known to be common across nations and cultures. Emotion recognition is performed on the facial images after applying the discrete wavelet transform, and feature vectors are then extracted with PCA and LDA. Experimental results show that emotions such as happiness, sadness, and anger are recognized better than surprise, fear, and dislike. In particular, Japanese subjects show lower recognition performance for the dislike emotion, and recognition rates for Korean subjects are generally higher than those for Japanese subjects.
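The entry above combines a discrete wavelet transform with PCA and LDA feature extraction before classifying facial emotions. Below is a minimal sketch of such a pipeline using PyWavelets and scikit-learn; the input format, component counts, and the k-NN classifier are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def wavelet_features(img):
    # Single-level 2D discrete wavelet transform; keep only the approximation band
    approx, _ = pywt.dwt2(img, 'haar')
    return approx.ravel()

def fit_pipeline(face_images, labels, n_pca=50):
    # face_images: (n_samples, H, W) grayscale faces; labels: emotion classes (assumed inputs)
    X = np.stack([wavelet_features(img) for img in face_images])
    pca = PCA(n_components=n_pca).fit(X)                    # unsupervised dimensionality reduction
    X_pca = pca.transform(X)
    lda = LinearDiscriminantAnalysis().fit(X_pca, labels)   # class-discriminative projection
    X_lda = lda.transform(X_pca)
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_lda, labels)
    return pca, lda, clf
```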

Alterations in Functions of Cognitive Emotion Regulation and Related Brain Regions in Maltreatment Victims (아동기 학대 경험이 인지적 정서조절 능력 및 관련 뇌영역 기능에 미치는 영향)

  • Kim, Seungho;Lee, Sang Won;Chang, Yongmin;Lee, Seung Jae
    • Korean Journal of Biological Psychiatry
    • /
    • v.29 no.1
    • /
    • pp.15-21
    • /
    • 2022
  • Objectives Maltreatment experiences can alter brain function related to emotion regulation, such as cognitive reappraisal. Although emotion dysregulation is an important risk factor for mental health problems in maltreated people, studies reporting alterations in the brain networks underlying cognitive reappraisal are still lacking. Methods Twenty-seven healthy subjects were recruited for this study. Maltreatment experiences and positive reappraisal abilities were measured using the Childhood Trauma Questionnaire-Short Form and the Cognitive Emotion Regulation Questionnaire, respectively. Twelve subjects reported one or more moderate maltreatment experiences. Subjects were re-exposed to pictures from the International Affective Picture System after a cognitive reappraisal task during fMRI scanning. Results The maltreatment group reported more negative feelings than the no-maltreatment group for negative pictures to which cognitive reappraisal had been applied (p < 0.05). Activity in the right superior marginal gyrus and right middle temporal gyrus was higher in the maltreatment group (uncorrected p < 0.001, cluster size > 20). Conclusions We found paradoxical activity in semantic networks in victims of maltreatment. Further study may be needed to clarify these aberrant semantic-network functions related to maltreatment experiences.

A Study on Emotion Recognition of Chunk-Based Time Series Speech (청크 기반 시계열 음성의 감정 인식 연구)

  • Hyun-Sam Shin;Jun-Ki Hong;Sung-Chan Hong
    • Journal of Internet Computing and Services
    • /
    • v.24 no.2
    • /
    • pp.11-18
    • /
    • 2023
  • Recently, in the field of speech emotion recognition (SER), many studies have been conducted to improve accuracy using voice features and modeling. In addition to modeling studies aimed at improving the accuracy of existing voice emotion recognition, various studies using voice features are being conducted. In this paper, voice files are separated into chunks at fixed time intervals in a time-series manner, focusing on the fact that vocal emotion is related to the flow of time. After separating the voice files, we propose a model that classifies the emotions of speech data by extracting the speech features Mel spectrogram, chroma, zero-crossing rate (ZCR), root mean square (RMS), and mel-frequency cepstral coefficients (MFCC) and applying them to recurrent neural network models used for sequential data processing. In the proposed method, voice features were extracted from all files using the 'librosa' library and applied to the neural network models. The experiments compared and analyzed the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models using the English Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
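The abstract above names the librosa features (Mel spectrogram, chroma, ZCR, RMS, MFCC) fed to RNN/LSTM/GRU models. The sketch below illustrates that feature extraction and a small GRU classifier in Keras; the sampling rate, feature stacking, and layer sizes are assumptions rather than the paper's settings.

```python
import numpy as np
import librosa
import tensorflow as tf

def extract_features(path, sr=16000):
    # All librosa features below share the default hop length, so frame counts align
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)       # (n_mels, T)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)       # (12, T)
    zcr = librosa.feature.zero_crossing_rate(y)            # (1, T)
    rms = librosa.feature.rms(y=y)                         # (1, T)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # (13, T)
    # Stack per-frame features into a (T, n_features) sequence for the recurrent model
    return np.concatenate([mel, chroma, zcr, rms, mfcc], axis=0).T

def build_gru(n_features, n_classes):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, n_features)),   # variable-length chunk sequences
        tf.keras.layers.Masking(),
        tf.keras.layers.GRU(128),
        tf.keras.layers.Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```

Swapping the GRU layer for tf.keras.layers.SimpleRNN or LSTM reproduces the three-model comparison described in the abstract.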

The Study of the Mechanism for Brain Function Improvement with Intentional Hand Movement (의식적인 손 운동을 통한 뇌기능 증진의 메커니즘에 관한 연구)

  • Kim, K.;Lee, S.J.;Park, Y.G.;Kim, S.H.;Lee, J.O.;Yu, M.;Hong, C.U.;Kim, N.G.
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2003.11a
    • /
    • pp.161-164
    • /
    • 2003
  • This study concerns intentional hand movement for improving brain functions such as concentration, memory, and learning ability. First, finger movement patterns for effective rehabilitation were studied, comparing a Simple Finger Movement (SFM) pattern with an Intentional Finger Movement (IFM) pattern. Subjects then performed each of the two movement patterns to verify improvements in concentration and learning ability. EEG (mid $\alpha$ waves) was used both to compare the SFM and IFM patterns and to verify the improvements in concentration and learning ability. In the experiment, subjects first performed the SFM pattern and then the IFM pattern. Mid $\alpha$ wave power increased during the IFM pattern, indicating that the IFM pattern improves concentration and learning ability.

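The entry above compares mid-alpha EEG power between the SFM and IFM finger-movement patterns. A minimal sketch of that kind of band-power comparison is shown below, using SciPy's Welch estimator; the 9-12 Hz band, the 256 Hz sampling rate, and the paired t-test are illustrative assumptions, not the paper's processing pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

def band_power(eeg, fs, lo=9.0, hi=12.0):
    # Welch PSD, then integrate power over the assumed mid-alpha band
    f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(psd[mask], f[mask])

def compare_patterns(sfm_segments, ifm_segments, fs=256):
    # sfm_segments, ifm_segments: per-subject 1-D EEG arrays for each condition (assumed inputs)
    p_sfm = np.array([band_power(x, fs) for x in sfm_segments])
    p_ifm = np.array([band_power(x, fs) for x in ifm_segments])
    t, p = ttest_rel(p_ifm, p_sfm)   # paired comparison across subjects
    return p_ifm.mean() - p_sfm.mean(), t, p
```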

Cue exposure system using Virtual Reality for nicotine craving (니코틴 중독의 단서노출치료를 위한 가상환경의 제작 및 욕구 유발 실험)

  • Kim, Kwang-Uk;Cho, Won-Geun;Ku, Jeong-Hun;Kim, Hun;Kim, Byoung-Nyun;Lee, Jang-Han;Kim, In-Y.;Lee, Jong-Min;Kim, Sun-I.
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2002.11a
    • /
    • pp.187-192
    • /
    • 2002
  • Research has shown that many smokers experience an increase in the desire to smoke when exposed to smoking-related cues. Cue exposure treatment (CET) refers to manualized, repeated exposure to smoking-related cues, aimed at reducing cue reactivity through extinction. In this study, we constructed a virtual reality system for evoking nicotine craving, based on the results of a nicotine-craving questionnaire, and we investigated the effectiveness of the virtual reality system compared to a classical device (pictures). As a result, we concluded that virtual reality elicits more craving symptoms than the classical device.


A research on EEG coherence variation by relaxation (이완에 따른 EEG 코히런스 변화에 대한 연구)

  • Kim, Jong-Hwa;Whang, Min-Cheol;Woo, Jin-Cheol;Kim, Chi-Joong;Kim, Young-Woo;Kim, Ji-Hye;Kim, Dong-Keun
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.121-128
    • /
    • 2010
  • This study analyzes changes in connectivity between brain regions caused by relaxation, using EEG coherence. EEG spectrum analysis has been used to analyze brain activity during relaxation, but spectrum analysis cannot capture interactions between functionally related brain regions. Therefore, in this study coherence was analyzed to observe connectivity between the measurement positions, and through this method the central nervous system response accompanying the emotional change was observed. Twenty-four undergraduates of both genders (12 males and 12 females) were asked to close their eyes and listen to sounds. During the experiment, EEG was measured at eight positions: F3, F4, T3, T4, P3, P4, O1, and O2, according to the International 10-20 system. Sounds with and without white noise were used to induce relaxation. Subjective emotion was measured to verify whether participants felt relaxed, and these ratings were analyzed with ANOVA (analysis of variance). The results showed that relaxation was subjectively evoked when participants heard the sounds, and that relaxation could be enhanced by the added white noise. EEG coherence between the measurement positions was then analyzed, and a t-test was performed to find significant differences between the relaxed and non-relaxed states. In the coherence results, connectivity with the occipital lobes increased during relaxation, whereas connectivity with the parietal lobes increased in the non-relaxed state. Therefore, brain connectivity showed different patterns between relaxed and non-relaxed emotional states.

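The study above measures connectivity as EEG coherence between electrode pairs (F3, F4, T3, T4, P3, P4, O1, O2) rather than single-channel spectra. The sketch below shows one way to compute pairwise coherence averaged over a band with SciPy; the sampling rate and the 8-13 Hz band are illustrative assumptions.

```python
from itertools import combinations
from scipy.signal import coherence

CHANNELS = ['F3', 'F4', 'T3', 'T4', 'P3', 'P4', 'O1', 'O2']

def pairwise_coherence(eeg, fs=256, lo=8.0, hi=13.0):
    # eeg: dict mapping channel name -> 1-D signal (assumed input layout)
    result = {}
    for a, b in combinations(CHANNELS, 2):
        f, cxy = coherence(eeg[a], eeg[b], fs=fs, nperseg=2 * fs)
        band = (f >= lo) & (f <= hi)
        result[(a, b)] = cxy[band].mean()   # mean magnitude-squared coherence in the band
    return result
```

Comparing these pairwise values between the relaxed and non-relaxed conditions (for example, with a t-test per pair) mirrors the analysis described in the abstract.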

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.89-106
    • /
    • 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly. As video data grows, requirements for its analysis and utilization are also increasing. Because many industries lack the skilled manpower to analyze videos, machine learning and artificial intelligence are actively used to assist human analysts. In this situation, demand for various computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also increased rapidly. However, object detection and tracking suffer from many difficulties that degrade performance, such as re-appearance after an object leaves the recording location, and occlusion. Accordingly, action and emotion detection models built on top of object detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer from performance degradation due to bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition as the emotion recognition service. The proposed model uses single-linkage hierarchical clustering for Re-ID and several processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, offers near real-time processing performance, and prevents tracking failures due to object departure and re-appearance, occlusion, and similar cases. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object detected by the tracking model in every frame and applies single-linkage hierarchical clustering over the feature vectors from past frames to identify objects whose tracks were lost. Through this process, an object that failed to be tracked because of re-appearance or occlusion after leaving the scene can be re-tracked, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object bounding-box queue and a feature queue that reduce RAM requirements while maximizing GPU throughput, and we introduce the IoF (Intersection over Face) algorithm, which allows facial emotions recognized through AWS Rekognition to be linked with object tracking information. The academic significance of this study is that, through these processing techniques, the two-stage re-identification model can reach real-time performance even in a high-cost environment that also performs action and facial emotion detection, without the accuracy loss that would come from relying on simple metrics alone. The practical implication is that industrial fields which require action and facial emotion detection but struggle with tracking failures can analyze videos effectively with the proposed model. The proposed model, with its high re-tracking accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments should be conducted with the MOT Challenge dataset, which is used by many international conferences. We will also investigate the cases the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan to conduct additional research applying this model to datasets from various fields related to intelligent video analysis.
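The abstract above recovers lost tracks by single-linkage hierarchical clustering of Re-ID feature vectors and links face-level emotion results to person tracks with an IoF (Intersection over Face) measure. The sketch below is a hedged interpretation of both ideas using SciPy; the cosine distance, the clustering threshold, and the reading of IoF as overlap area divided by face-box area are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def group_identities(features, threshold=0.4):
    # features: (n_boxes, d) Re-ID embeddings from past and current frames (assumed input).
    # Boxes that end up with the same cluster label are treated as one identity.
    dists = pdist(features, metric='cosine')
    tree = linkage(dists, method='single')              # single-linkage hierarchy
    return fcluster(tree, t=threshold, criterion='distance')

def iof(face_box, person_box):
    # Intersection over Face: overlap area divided by the face-box area (assumed definition).
    # Boxes are (x1, y1, x2, y2).
    x1, y1 = max(face_box[0], person_box[0]), max(face_box[1], person_box[1])
    x2, y2 = min(face_box[2], person_box[2]), min(face_box[3], person_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    face_area = (face_box[2] - face_box[0]) * (face_box[3] - face_box[1])
    return inter / face_area if face_area > 0 else 0.0
```

A face detection would then be assigned to the person box with the highest IoF, so emotions recognized on faces can be attached to the corresponding track.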

Autonomic Nervous System response affected by 3D visual fatigue evoked during watching 3D TV (3D TV 시청으로 유발된 시각피로가 자율신경계 기능에 미치는 영향)

  • Park, Sang-In;Whang, Min-Cheol;Kim, Jong-Wha;Mun, Sung-Chul;Ahn, Sang-Min
    • Science of Emotion and Sensibility
    • /
    • v.14 no.4
    • /
    • pp.653-662
    • /
    • 2011
  • As 3D technology has advanced rapidly, many studies focusing primarily on visual function and cognition have become active. However, studies on the effect of 3D visual fatigue on the autonomic nervous system have rarely been conducted. Thus, this study aimed to identify effects that might negatively influence the sympathetic nervous system, the parasympathetic nervous system, and the cardiovascular system. Fifteen undergraduates (9 female, mean age $22.53{\pm}2.55$) participated; they were seated on a comfortable chair and viewed 3D content for about one hour. Cardiac responses such as SDNN (standard deviation of RR intervals), RMS-SD (root mean square of successive differences), and HF/LF ratios, extracted from PPG (photoplethysmogram) measured before viewing the 3D content, were compared with those measured after viewing. The results showed that, after subjects watched the 3D content, the sympathetic nervous system was activated and the parasympathetic nervous system was deactivated relative to before viewing: the HF/LF ratio, Ln(LF), and Ln(HF) after viewing were significantly reduced relative to those before viewing, while no significant effects were observed in SDNN and RMS-SD. These results show that visual fatigue induced by 3D viewing adversely influences the autonomic nervous system, reducing heart rate variability and accelerating sympathetic activity.

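The abstract above compares SDNN, RMS-SD, and HF/LF ratios computed from PPG-derived RR intervals before and after 3D viewing. Below is a minimal sketch of how such heart-rate-variability indices can be obtained from an RR interval series; the 4 Hz resampling rate and the standard LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) band edges are conventional choices used here as assumptions, and either HF/LF or LF/HF can be formed from the two band powers.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def hrv_indices(rr_ms):
    # rr_ms: successive RR (peak-to-peak) intervals in milliseconds (assumed input)
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # short-term (beat-to-beat) variability

    # Resample the irregular RR series at 4 Hz to estimate spectral band powers
    t = np.cumsum(rr) / 1000.0
    fs = 4.0
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr)(grid)
    f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return {'SDNN': sdnn, 'RMSSD': rmssd, 'LF': lf, 'HF': hf, 'HF/LF': hf / lf}
```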

Variation of facial temperature to 3D visual fatigue evoked (3D 시각피로 유발에 따른 안면 온도 변화)

  • Hwang, Sung Teac;Park, SangIn;Won, Myoung Ju;Whang, MinCheol
    • Science of Emotion and Sensibility
    • /
    • v.16 no.4
    • /
    • pp.509-516
    • /
    • 2013
  • As the visual fatigue induced by 3D visual stimulation has raised safety concerns in the industry, this study aims to quantify visual fatigue by measuring changes in facial temperature. Facial temperature was measured for one minute before and after watching a visual stimulus, and whether visual fatigue had occurred was assessed through subjective evaluations and high-level cognitive tasks. The difference between the changes that occurred after watching a 2D stimulus and a 3D stimulus was computed in order to associate facial temperature changes with the visual fatigue induced by watching 3D content. The results showed significant differences in the subjective evaluations and in the high-level cognitive tasks, and ERP latency increased after watching 3D stimuli. There were also significant differences in the maximum temperature at the forehead and at the tip of the nose. A previous study showed that 3D visual fatigue activates the sympathetic nervous system, and sympathetic activation is known to increase heart rate as well as blood flow into the face through the carotid arteries. When watching 2D or 3D stimuli, sympathetic activation thus modulates blood flow, which in turn influences facial temperature. This study is meaningful as one of the first investigations into the possibility of measuring 3D visual fatigue with thermal images.
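The abstract above compares the change in maximum facial temperature at the forehead and nose tip after 2D versus 3D viewing. A minimal sketch of that comparison is shown below; the rectangular ROI handling and the paired t-test across subjects are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ttest_rel

def roi_max(thermal_frame, roi):
    # thermal_frame: 2-D temperature map; roi: (row0, row1, col0, col1)
    r0, r1, c0, c1 = roi
    return thermal_frame[r0:r1, c0:c1].max()

def temperature_change(before, after, roi):
    # Change in the maximum ROI temperature from before to after viewing
    return roi_max(after, roi) - roi_max(before, roi)

def compare_conditions(deltas_2d, deltas_3d):
    # deltas_2d, deltas_3d: per-subject temperature changes for each condition (assumed inputs)
    return ttest_rel(deltas_3d, deltas_2d)
```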

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Kim, Dong-Kyu;Lee, So Hwa;Bong, Jae Hwan
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1137-1144
    • /
    • 2022
  • In this study, an artificial intelligence (AI) system was developed to assist facial expression practice for expressing emotions. The developed AI uses multimodal inputs consisting of sentences and facial images fed to deep neural networks (DNNs), which compute the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices facial expressions based on the situation given by a sentence, and the AI provides numerical feedback based on the similarity between the two predicted emotions. A ResNet34 network was trained on the public FER2013 data to predict emotions from facial images, and a KoBERT model was fine-tuned, in a transfer learning manner, on the conversational speech dataset for emotion classification released publicly by AIHub to predict emotions from sentences. The DNN predicting emotions from facial images achieved 65% accuracy, comparable to human emotion classification ability, and the DNN predicting emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which an ordinary person participated by changing facial expressions.
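The entry above scores the match between the emotion predicted from a sentence (KoBERT) and the emotion predicted from a facial image (ResNet34 on FER2013). The sketch below illustrates one plausible way to turn two softmax outputs over a shared label set into a numerical feedback score; cosine similarity is an assumed metric, not necessarily the authors' choice.

```python
import numpy as np

# FER2013 label set, assumed to be shared by both models for this illustration
EMOTIONS = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']

def emotion_feedback(p_text, p_face):
    # p_text, p_face: softmax probability vectors over the same emotion labels
    p_text = np.asarray(p_text, dtype=float)
    p_face = np.asarray(p_face, dtype=float)
    cos = p_text @ p_face / (np.linalg.norm(p_text) * np.linalg.norm(p_face))
    return 100.0 * cos   # 0-100 score shown to the user as practice feedback
```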