• Title/Abstract/Keyword: information of emotion

Emotion Analysis System for Social Media using Sentiment Dictionary including newly created word (신조어 감성사전 기반의 소셜미디어 감성분석 시스템)

  • Shin, Panseop; Oh, Hanmin
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.225-226 / 2019
  • Opinion mining is a technique that extracts and analyzes the sentiment of online documents. Because it can analyze sentiment without a separate opinion poll, it has recently become an active research area. However, social media contains many newly coined words, so existing sentiment analysis systems not only struggle to analyze it accurately but are also ill-suited to judging complex, mixed emotions. This study therefore proposes an intuitive emotion model, builds a sentiment word dictionary that accommodates the various neologisms prominent on social media, and applies it to design a sentiment analysis system that analyzes the complex emotions appearing in social media.

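A minimal sketch of the kind of dictionary-based scoring the abstract describes, assuming a hypothetical lexicon in which neologisms carry their own emotion labels and weights; the entries and names below are illustrative, not the authors' actual dictionary:

```python
# Illustrative sketch only: a toy lexicon mapping both standard words and
# neologisms to (emotion, weight) pairs. Not the authors' actual dictionary.
from collections import Counter

SENTIMENT_DICT = {
    "좋다": ("joy", 1.0),
    "슬프다": ("sadness", 1.0),
    "꿀잼": ("joy", 1.5),      # neologism: "really fun"
    "노잼": ("boredom", 1.5),  # neologism: "not fun"
    "헐": ("surprise", 0.8),   # neologism: exclamation of surprise
}

def score_post(tokens):
    """Aggregate per-emotion scores for a tokenized social media post."""
    scores = Counter()
    for token in tokens:
        if token in SENTIMENT_DICT:
            emotion, weight = SENTIMENT_DICT[token]
            scores[emotion] += weight
    return scores  # full distribution, not a single argmax label

print(score_post(["오늘", "영화", "꿀잼", "근데", "결말", "슬프다"]))
# Counter({'joy': 1.5, 'sadness': 1.0}) -> a mixed joy/sadness post
```

Returning the full per-emotion distribution rather than a single top label is what would let a post register the mixed emotions the abstract targets.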

A Study on the Emotion State Classification using Multi-channel EEG (다중채널 뇌파를 이용한 감정상태 분류에 관한 연구)

  • Kang, Dong-Kee; Kim, Heung-Hwan; Kim, Dong-Jun; Lee, Byung-Chae; Ko, Han-Woo
    • Proceedings of the KIEE Conference / 2001.07d / pp.2815-2817 / 2001
  • This study describes emotion classification using two different feature extraction methods for four-channel EEG signals. The first method is linear prediction analysis based on an AR model; the second uses cross-correlation coefficients over the θ, α, and β frequency bands. Using the linear predictor coefficients and the cross-correlation coefficients, a classification test for four emotions (anger, sadness, joy, and relaxation) is performed with a neural network. Comparing the results of the two methods, the linear predictor coefficients appear to produce better results than the cross-correlation coefficients for emotion classification.

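As a rough illustration of the first feature type, the sketch below estimates AR (linear prediction) coefficients per channel via the Yule-Walker equations and stacks them into one feature vector; the model order and the toy data are assumptions, not the paper's settings:

```python
# Hedged sketch: per-channel AR coefficients as EEG features for a
# downstream neural-network classifier. Order 8 is an assumption.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_coefficients(signal, order=8):
    """Estimate AR model coefficients of one EEG channel (Yule-Walker)."""
    x = signal - signal.mean()
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    r /= len(x)
    # Solve the symmetric Toeplitz system for the predictor coefficients
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])

rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 1024))         # 4 channels, toy data
features = np.concatenate([ar_coefficients(ch) for ch in eeg])
print(features.shape)                         # (32,) -> classifier input
```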

Emotion Image Retrieval through Query Emotion Descriptor and Relevance Feedback (질의 감성 표시자와 유사도 피드백을 이용한 감성 영상 검색)

  • Yoo, Hun-Woo
    • Journal of KIISE: Software and Applications / v.32 no.3 / pp.141-152 / 2005
  • A new emotion-based image retrieval method is proposed in this paper. Query emotion descriptors called the query color code and query gray code are designed based on human evaluations of 13 emotions ('like', 'beautiful', 'natural', 'dynamic', 'warm', 'gay', 'cheerful', 'unstable', 'light', 'strong', 'gaudy', 'hard', 'heavy') collected while 30 random patterns with different colors, intensities, and dot sizes were presented. For emotion image retrieval, once a query emotion is selected, the associated query color code and query gray code are selected. Next, a DB color code and DB gray code that capture color, intensity, and dot size are extracted from each database image, and matching between the two color codes and between the two gray codes is performed to retrieve relevant emotion images. A new relevance feedback method is also proposed, which incorporates human intention into the retrieval process by dynamically updating the weights of the query and DB color codes and the intra-query color code weights. In experiments over 450 images, the number of positive images was higher than that of negative images at the initial query and increased with relevance feedback.
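The following is a loose sketch, not the authors' exact formulation, of weighted code matching plus a relevance feedback update that nudges per-bin weights toward user-marked relevant images; the code length, similarity measure, and update rate are all assumptions:

```python
# Illustrative sketch of weighted binary-code matching with relevance
# feedback. 32-bin codes and the 0.2 learning rate are assumptions.
import numpy as np

def similarity(query_code, db_code, weights):
    """Weighted overlap between two binary emotion codes."""
    return np.sum(weights * (query_code == db_code)) / np.sum(weights)

def feedback_update(weights, query_code, relevant_codes, lr=0.2):
    """Increase the weight of bins that agree across relevant images."""
    for code in relevant_codes:
        weights += lr * (query_code == code)
    return weights / weights.max()            # keep weights normalized

rng = np.random.default_rng(1)
query = rng.integers(0, 2, 32)                # hypothetical query color code
db = rng.integers(0, 2, (450, 32))            # 450 DB images, as in the paper
w = np.ones(32)
ranked = np.argsort([-similarity(query, c, w) for c in db])
w = feedback_update(w, query, db[ranked[:5]]) # user marks top 5 as relevant
```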

Collecting the Information Needs of Skilled and Beginner Drivers Based on a User Mental Model for a Customized AR-HUD Interface

  • Zhang, Han; Lee, Seung Hee
    • Science of Emotion and Sensibility / v.24 no.4 / pp.53-68 / 2021
  • The continuous development of in-vehicle information systems in recent years has dramatically enriched the driving experience while occupying drivers' cognitive resources to varying degrees, causing driving distraction. Given this complexity, managing the priority and volume of information while further improving driving safety has become a key issue that in-vehicle information systems urgently need to solve. New interaction methods that incorporate augmented reality (AR) and head-up display (HUD) technologies into in-vehicle information systems are currently receiving widespread attention: they superimpose onboard information onto the actual driving scene, thereby meeting the needs of complex tasks and improving driving safety. Based on the qualitative research methods of surveys and telephone interviews, this study collects the information needs of the target user groups (beginner and skilled drivers) and constructs a three-mode information database to provide a basis for a customized AR-HUD interface design.

Neuroaesthetics: A Concise Review of the Evidence Aimed at Aesthetically Sensible Design

  • Choi, Yun Jung; Yoon, So-Yeon
    • Science of Emotion and Sensibility / v.17 no.2 / pp.45-54 / 2014
  • In recent years, advancing technology and growing interest in neuromarketing and neurobranding have led to foundational research that facilitates a better understanding of consumers' affective responses and unconscious information processing. However, the areas of aesthetics and design have remained largely unaffected by such advances and their implications. The purpose of this study is to present a systematic review of the neuroscientific evidence aimed at sensible design for design and marketing researchers interested in exploring neuroaesthetics, an interdisciplinary area by nature. ScienceDirect, EBSCO, and the Google Scholar database were searched in February 2014 to select and review previous studies of aesthetics involving neuroscience. Twenty-eight studies were reviewed and divided into two categories: reward system and emotion. In addition to discussing previous approaches, future research directions focusing on the process of aesthetic judgment (e.g., design elements, marketing stimuli) are proposed.

Korean Facial Expression Emotion Recognition based on Image Meta Information (이미지 메타 정보 기반 한국인 표정 감정 인식)

  • Hyeong Ju Moon; Myung Jin Lim; Eun Hee Kim; Ju Hyun Shin
    • Smart Media Journal / v.13 no.3 / pp.9-17 / 2024
  • Due to the recent pandemic and the development of ICT technology, the use of non-face-to-face and unmanned systems is expanding, and understanding emotions in non-face-to-face communication has become very important. Since emotion recognition across diverse facial expressions is required to understand emotions, artificial-intelligence-based research is being conducted to improve facial expression emotion recognition in image data. However, existing research on facial expression emotion recognition requires high computing power and long training times because it relies on large amounts of data to improve accuracy. To address these limitations, this paper proposes a method that uses age and gender, available as image meta information, to recognize facial expressions even with a small amount of data. For facial expression emotion recognition, faces were detected from the original image data using the YOLO Face model, age and gender were classified with a VGG model based on the image meta information, and seven emotions were then recognized using an EfficientNet model. The proposed meta-information-based data classification model achieved higher accuracy than a model trained on all the data.
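A skeleton of the three-stage pipeline the abstract describes (face detection, then age/gender from meta information, then emotion recognition); the function and class names below are hypothetical stand-ins, not the authors' models:

```python
# Staged pipeline sketch. detect/age-gender/emotion models are passed in as
# callables; the toy stubs at the bottom stand in for real networks.
from dataclasses import dataclass

@dataclass
class FacePrediction:
    box: tuple          # (x1, y1, x2, y2) from the face detector
    age_group: str      # meta information used to route the emotion model
    gender: str
    emotion: str        # one of the seven target emotions

def analyze(image, face_detector, age_gender_model, emotion_models):
    """Run the staged pipeline on one image."""
    results = []
    for box in face_detector(image):               # stage 1: face detection
        crop = image.crop(box)
        age_group, gender = age_gender_model(crop) # stage 2: age/gender
        # Stage 3: pick the emotion model trained on this meta-information
        # slice, which is how a small per-slice dataset can still do well.
        emotion = emotion_models[(age_group, gender)](crop)
        results.append(FacePrediction(box, age_group, gender, emotion))
    return results

# Toy usage with stub models (replace with real detectors/classifiers):
class ToyImage:
    def crop(self, box): return self

img = ToyImage()
detector = lambda im: [(0, 0, 64, 64)]
age_gender = lambda face: ("20s", "female")
emotions = {("20s", "female"): (lambda face: "happy")}
print(analyze(img, detector, age_gender, emotions))
```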

Developing and Adopting an Artificial Emotion by Technological Approaching Based on Psychological Emotion Model (심리학 기반 감정 모델의 공학적 접근에 의한 인공감정의 제안과 적용)

  • Ham, Jun-Seok; Ryeo, Ji-Hye; Ko, Il-Ju
    • Proceedings of the Korean Society of Computer Information Conference / 2008.06a / pp.331-336 / 2008
  • Even in the same situation, the emotions people feel differ from person to person, so there are limits to generalizing emotion and expressing the current emotional state quantitatively. To represent the current emotional state, this paper takes an engineering approach to a psychological model of human emotion and proposes a psychology-based engineered artificial emotion. The proposed artificial emotion is constructed to reflect, on a psychological basis, the causal relationships of emotion generation, differences in emotion according to personality, differences over time, differences under successive emotional stimuli, and differences arising from the interrelationships among emotions. An "emotion field" is proposed to represent the current emotional state as a position, and the current state is expressed by a position on the emotion field and a color corresponding to that position. To visualize changes in emotional state through the proposed artificial emotion, the emotional changes of Hamlet, a character in Shakespeare's 'Hamlet', were visualized using the proposed model.

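A loose sketch of the idea above: an emotion state driven by stimuli, scaled by a personality parameter, decaying over time, and accumulating under successive stimuli; the decay curve and the reduction of the 2-D emotion field to a valence/arousal pair are illustrative assumptions, not the paper's formulation:

```python
# Illustrative artificial-emotion state; all parameters are assumptions.
class ArtificialEmotion:
    def __init__(self, sensitivity=1.0, half_life=5.0):
        self.sensitivity = sensitivity  # personality: stimulus responsiveness
        self.half_life = half_life      # seconds until an emotion halves
        self.valence = 0.0              # x-position on the emotion field
        self.arousal = 0.0              # y-position on the emotion field

    def stimulate(self, d_valence, d_arousal):
        """Apply an emotional stimulus; repeated stimuli accumulate."""
        self.valence += self.sensitivity * d_valence
        self.arousal += self.sensitivity * d_arousal

    def tick(self, dt):
        """Decay the current emotion toward neutral over time."""
        decay = 0.5 ** (dt / self.half_life)
        self.valence *= decay
        self.arousal *= decay

hamlet = ArtificialEmotion(sensitivity=1.4)   # a volatile personality
hamlet.stimulate(-0.8, 0.9)                   # grief plus agitation
hamlet.tick(5.0)                              # five seconds pass
print(round(hamlet.valence, 2), round(hamlet.arousal, 2))  # -0.56 0.63
```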

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.1076-1094 / 2022
  • Technology for emotion recognition is an essential part of human personality analysis. Existing methods for defining human personality characteristics have relied on surveys, yet in many cases communication cannot succeed without considering emotions, so emotion recognition technology is an essential element of communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically including facial, speech, and biometric responses, and can therefore be recognized from images, voice signals, and physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. First, the existing binary arousal-valence scheme was subdivided into four levels per dimension to classify emotions in more detail; that is, the current High/Low classification was further subdivided into multiple levels. Then, signal characteristics were extracted using a 1-D convolutional neural network (CNN) and sixteen emotions were classified. Although CNNs are typically used to learn 2-D images, 1-D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated using measurements from actual sensors.
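A hedged sketch of a 1-D CNN over multi-channel physiological windows with 16 output classes (4 arousal levels × 4 valence levels); the channel count, window length, and layer sizes are assumptions, not the paper's architecture:

```python
# Minimal 1-D CNN sketch for short-term physiological windows (PyTorch).
import torch
import torch.nn as nn

class Physio1DCNN(nn.Module):
    def __init__(self, in_channels=2, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> fixed-size vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = Physio1DCNN()
window = torch.randn(8, 2, 512)        # 8 short-term windows, 2 sensors
print(model(window).shape)             # torch.Size([8, 16])
```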

Effects of Emotional Information on Visual Perception and Working Memory in Biological Motion (정서 정보가 생물형운동자극의 시지각 및 작업기억에 미치는 영향)

  • Lee, Hannah; Kim, Jejoong
    • Science of Emotion and Sensibility / v.21 no.3 / pp.151-164 / 2018
  • The appropriate interpretation of social cues is a crucial ability in everyday life. When socially relevant information is processed, emotional information, beyond the low-level physical features of the stimuli, is known to influence human cognition at various stages, from early perception to later high-level cognition such as working memory (WM). However, it remains unclear how the influence of emotional information on cognitive processes changes across processing stages. Past studies have largely adopted face stimuli to address this question, but we used a unique class of socially relevant motion stimuli called biological motion (BM), which depicts human actions and emotions with moving dots, to examine the effects of anger, happiness, and neutral emotion on perceptual and working memory task performance. In this study, participants determined whether two BM stimuli, presented sequentially either with a delay between them (WM task) or one immediately after the other (perceptual task), were identical. The perceptual task showed that discrimination accuracy for emotional stimuli (angry and happy) was lower than for neutral stimuli, implying that emotional information has a negative impact on early perceptual processes. In contrast, in the WM task the drop in accuracy as the interstimulus interval increased was smaller in the emotional BM conditions than in the neutral condition, suggesting that emotional information benefits maintenance. Moreover, anger and happiness had distinct impacts on perception and WM performance. Our findings are significant in providing evidence for an interaction between the type of emotion and the information-processing stage.

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun; Yang, Seong-Hun; Oh, Seung-Jin; Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Due to the lack of skilled manpower to analyze videos in many industries, machine learning and artificial intelligence are actively used to assist human analysts, and demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also grown rapidly. However, object detection and tracking suffers from difficulties that degrade performance, such as an object re-appearing after leaving the recorded scene, and occlusion. Accordingly, action and emotion detection models built on object detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures consisting of various models suffer from performance degradation due to bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed model uses single-linkage hierarchical clustering for Re-ID together with processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, delivers near-real-time processing, and prevents tracking failures caused by object departure and re-emergence, occlusion, and similar events. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object image detected by the tracking model in every frame, and applies single-linkage hierarchical clustering over the feature vectors from past frames to identify objects whose tracks were lost. Through this process, an object that failed tracking due to re-appearance or occlusion can be re-tracked, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object bounding box queue and a feature queue that reduce RAM requirements while maximizing GPU memory throughput. We also introduce the IoF (Intersection over Face) algorithm, which links facial emotions recognized through AWS Rekognition with object tracking information. The academic significance of this study is that, through these processing techniques, the two-stage re-identification model achieves real-time performance even in the high-cost setting of simultaneous action and facial emotion detection, without the accuracy loss incurred by simple metrics. The practical implication is that industrial fields that require action and facial emotion detection but struggle with object tracking failures can analyze videos effectively with the proposed model.
The proposed model, with its high re-identification accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments should be conducted using the MOT Challenge dataset, which is used by many international conferences. We will investigate the cases the IoF algorithm cannot solve in order to develop a complementary algorithm, and we plan to conduct additional research applying this model to datasets from various fields related to intelligent video analysis.
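Two hedged sketches of ideas named in the abstract: single-linkage hierarchical clustering of per-frame appearance features into identities, and one plausible reading of the IoF (Intersection over Face) measure. The threshold, distance metric, and exact IoF definition are assumptions, not the authors' specification:

```python
# Illustrative sketches of clustering-based Re-ID and an IoF measure.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def reid_clusters(feature_vectors, threshold=0.4):
    """Group per-frame appearance features into identities with
    single-linkage hierarchical clustering (cosine distance assumed)."""
    Z = linkage(feature_vectors, method="single", metric="cosine")
    return fcluster(Z, t=threshold, criterion="distance")

def iof(face_box, object_box):
    """Intersection over Face: overlap area divided by the face box area,
    one plausible reading of the abstract's IoF; used to assign a detected
    face (and its emotion) to a tracked object's bounding box."""
    fx1, fy1, fx2, fy2 = face_box
    ox1, oy1, ox2, oy2 = object_box
    ix = max(0, min(fx2, ox2) - max(fx1, ox1))
    iy = max(0, min(fy2, oy2) - max(fy1, oy1))
    face_area = (fx2 - fx1) * (fy2 - fy1)
    return (ix * iy) / face_area if face_area > 0 else 0.0

rng = np.random.default_rng(2)
feats = rng.standard_normal((10, 128))        # 10 detections, 128-d features
print(reid_clusters(feats))                   # identity label per detection
print(iof((10, 10, 30, 30), (0, 0, 40, 60)))  # 1.0 -> face inside object box
```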