• Title/Summary/Keyword: information of emotion

Maximum Entropy-based Emotion Recognition Model using Individual Average Difference (개인별 평균차를 이용한 최대 엔트로피 기반 감성 인식 모델)

  • Park, So-Young;Kim, Dong-Keun;Whang, Min-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.7 / pp.1557-1564 / 2010
  • In this paper, we propose a maximum entropy-based emotion recognition model that uses the individual average difference of emotional signals, because emotional signal patterns differ from person to person. To recognize a user's emotion accurately, the proposed model uses the difference between the average of the input emotional signals and the average of each emotional state's signals (such as positive and negative emotional signals), rather than only the given input signal. So that the emotion recognition model can be built without specialist knowledge of emotion recognition, it employs a maximum entropy model, one of the best-performing and most widely used machine learning techniques. Because it is difficult to obtain enough numerical training data on emotional signals for machine learning, the proposed model replaces every average-difference value with one of two simple symbols, + (positive) or - (negative), and averages the emotional signals per second rather than over the total emotion response time (10 seconds).
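
The feature encoding described above maps directly onto a maximum entropy classifier. The sketch below illustrates it with scikit-learn's LogisticRegression, the standard realization of a binary maximum entropy model; the signal shapes, sampling rate, and labels are hypothetical stand-ins, not the paper's data.

```python
# Each per-second average of the emotional signal is reduced to the sign (+/-)
# of its difference from the per-state reference averages, and a maximum
# entropy classifier is trained on those symbols.
import numpy as np
from sklearn.linear_model import LogisticRegression

def sign_difference_features(signal, pos_mean, neg_mean, samples_per_second=10):
    per_second = signal.reshape(-1, samples_per_second).mean(axis=1)
    return np.concatenate([np.sign(per_second - pos_mean),
                           np.sign(per_second - neg_mean)])

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(40, 100))            # 40 trials, 10 s sampled at 10 Hz (hypothetical)
y = np.arange(40) % 2                         # 0 = negative, 1 = positive emotion (toy labels)
pos_mean = X_raw[y == 1].mean()               # per-state reference averages
neg_mean = X_raw[y == 0].mean()

X = np.array([sign_difference_features(s, pos_mean, neg_mean) for s in X_raw])
clf = LogisticRegression(max_iter=1000).fit(X, y)   # the maximum entropy model
print(clf.predict(X[:5]))
```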

Development of facial recognition application for automation logging of emotion log (감정로그 자동화 기록을 위한 표정인식 어플리케이션 개발)

  • Shin, Seong-Yoon;Kang, Sun-Kyoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.4 / pp.737-743 / 2017
  • The intelligent life-log system proposed in this paper is intended to identify and record a myriad of everyday information about various events, that is, when, where, with whom, what, and how: a wide variety of contextual information covering person, scene, age, emotion, relation, state, location, moving route, and so on, with a unique tag on each piece of information, and to give users quick and easy access to it. Context awareness generates and classifies information on a per-tag basis using auto-tagging and biometric recognition technology and builds a situation-information database. In this paper, we developed an active modeling method and an application that recognizes neutral and smiling expressions from lip lines in order to record emotion information automatically.
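
The lip-line rule below is a minimal sketch of how a neutral-versus-smile decision can be made from mouth landmarks; it is not the paper's active modeling method, and the landmark layout and threshold are assumptions for illustration only.

```python
# Decide neutral vs. smiling from points along the outer lip line, assuming any
# face-landmark detector has already produced (x, y) mouth coordinates.
import numpy as np

def is_smiling(mouth_landmarks, curve_threshold=0.05):
    """mouth_landmarks: (N, 2) points ordered left corner -> lip center -> right corner."""
    pts = np.asarray(mouth_landmarks, dtype=float)
    left, right = pts[0], pts[-1]
    center_y = pts[1:-1, 1].mean()
    width = np.linalg.norm(right - left) + 1e-9
    # In image coordinates y grows downward, so a smile lifts the corners above
    # the lip-line center, giving positive normalized curvature.
    curvature = (center_y - (left[1] + right[1]) / 2.0) / width
    return curvature > curve_threshold

neutral = [(0, 10), (5, 10), (10, 10), (15, 10), (20, 10)]
smile   = [(0, 8), (5, 12), (10, 13), (15, 12), (20, 8)]
print(is_smiling(neutral), is_smiling(smile))   # False True
```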

Personalized Recommendation System using Level of Cosine Similarity of Emotion Word from Social Network (소셜 네트워크에서 감정단어의 단계별 코사인 유사도 기법을 이용한 추천시스템)

  • Kwon, Eungju;Kim, Jongwoo;Heo, Nojeong;Kang, Sanggil
    • Journal of Information Technology and Architecture / v.9 no.3 / pp.333-344 / 2012
  • This paper proposes a system that recommends movies using information from social network services, which reflects personal interests and tastes. The data are built as follows. The system gathers movie information from web sites and user information from social network services such as Facebook and Twitter. The data from social network services are categorized into six emotion levels for more accurate processing of users' emotional states. The gathered data are represented in a vector space model, which is well suited to the analysis and inference performed by the proposed system. The existing similarity measure for movie recommendation represents emotion levels as vectors and measures similarity on the coordinates using the cosine measure. The inference method proposed in this paper is a two-phase operation. First, using the general cosine measure, the system builds a candidate movie list. Second, using a further similarity measurement, the system selects the recommendable movies by a vector operation on the coordinates. A comparative experimental study of previous recommendation systems and the proposed one showed that the new system is more helpful than the existing ones.
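
A minimal sketch of the two-phase cosine-similarity recommendation described above; the six emotion levels, the user profile, and the movie vectors are hypothetical, and the phase-2 weighting is only a stand-in for the paper's vector operation.

```python
# Phase 1 shortlists movies by plain cosine similarity against the user's
# emotion-level vector; phase 2 re-ranks the shortlist.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Vectors over six emotion levels (hypothetical profiles).
user_profile = [0.1, 0.2, 0.6, 0.8, 0.3, 0.0]
movies = {
    "movie_a": [0.0, 0.1, 0.7, 0.9, 0.2, 0.0],
    "movie_b": [0.9, 0.6, 0.1, 0.0, 0.0, 0.1],
    "movie_c": [0.2, 0.3, 0.5, 0.6, 0.4, 0.1],
}

# Phase 1: shortlist by cosine similarity against the full profile.
shortlist = sorted(movies, key=lambda m: cosine(user_profile, movies[m]), reverse=True)[:2]

# Phase 2: re-rank the shortlist, weighting the levels that dominate the user's
# current emotional state (a stand-in for the paper's vector operation).
weights = np.asarray(user_profile) / (np.sum(user_profile) + 1e-9)
reranked = sorted(shortlist,
                  key=lambda m: cosine(weights * np.asarray(user_profile),
                                       weights * np.asarray(movies[m])),
                  reverse=True)
print(reranked)
```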

A Study on Image Recommendation System based on Speech Emotion Information

  • Kim, Tae Yeun;Bae, Sang Hyun
    • Journal of Integrative Natural Science / v.11 no.3 / pp.131-138 / 2018
  • In this paper, we implemented an image matching and recommendation system that utilizes the emotional information in the user's speech. To classify the emotional information of speech, emotional features are extracted from the user's utterances and classified using the PLP algorithm, and a speech emotion DB is then constructed. In addition, emotional colors and emotional vocabulary obtained through factor analysis are matched into a single space in order to classify the emotional information of images. A standardized image recommendation system then matches each keyword with the BM-GA algorithm over the speech-emotion and image-emotion data so that the recommended images fit the user's speech emotion. In the performance evaluation, the recognition rate of the standardized vocabulary over the four speech stages was 80.48% on average, and user satisfaction with the system was 82.4%. Therefore, classifying images according to the user's speech information is expected to be helpful for studying emotional exchange between users and computers.
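
The sketch below covers only the speech-classification half of the pipeline, with MFCCs standing in for the paper's PLP features and a generic SVM in place of its classifier; the audio clips and labels are synthetic.

```python
# Extract spectral features from each clip and train a classifier over four
# emotion classes, whose predictions would then be used to query the image DB.
import numpy as np
import librosa
from sklearn.svm import SVC

def emotion_features(y, sr=16000):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # 26-dim summary

rng = np.random.default_rng(1)
clips = [rng.normal(size=16000).astype(np.float32) for _ in range(20)]  # 1-s toy clips
labels = np.arange(len(clips)) % 4            # toy emotion IDs (e.g. joy/sad/anger/neutral)

X = np.array([emotion_features(c) for c in clips])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))                     # emotion IDs used to look up matching images
```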

A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Soek-Pil
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.10 / pp.1364-1369 / 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare approaches to feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
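
One way to run the kind of feature comparison the survey calls for is to score several candidate feature sets with the same classifier under cross-validation, as in the sketch below; the feature choices, clips, and labels are hypothetical placeholders for a real emotional-speech corpus.

```python
# One classifier, several candidate feature sets, scored under the same
# cross-validation protocol.  The clips are synthetic noise and the labels are
# toy emotion classes, so only the comparison mechanics are meaningful here.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def feature_sets(y, sr=16000):
    return {
        "mfcc":   librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1),
        "chroma": librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
        "zcr":    librosa.feature.zero_crossing_rate(y).mean(axis=1),
    }

rng = np.random.default_rng(2)
clips = [rng.normal(size=16000).astype(np.float32) for _ in range(30)]
labels = np.arange(len(clips)) % 4            # four balanced toy emotion classes
all_feats = [feature_sets(c) for c in clips]

for name in ("mfcc", "chroma", "zcr"):
    X = np.array([f[name] for f in all_feats])
    score = cross_val_score(SVC(), X, labels, cv=3).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")
```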

A study on the arrangement of emotional words for understanding the human's emotion

  • 권규식;이순요;우석찬
    • Proceedings of the ESK Conference / 1993.04a / pp.64-68 / 1993
  • The idea of modern product design has shifted from the concept of functional importance, the basic function, to that of emotional importance, the supplementary function. In other words, interest in the emotional side of human performance, based on psychological factors, is increasing alongside the functional side of technical performance, based on the physical factors of a product. In this paper, standard emotional words for understanding human emotion are arranged. The standard emotional words are composed of words expressing human emotion. Adjectives applicable to human emotion were collected from Korean dictionaries and arranged on semantic differential (SD) scales. Next, the words with high scores evaluated by the SD method were analyzed by the factor analysis (FA) method and characterized as emotional words for understanding human emotion. The standard emotional words arranged in this paper are important because they are basic information for the development of products and technologies as well as for the development of emotion measurement techniques.
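
A hedged sketch of the screening step described above: adjectives rated on semantic differential scales are factor-analyzed and words with strong loadings are kept; the adjective list, ratings, and cut-off are hypothetical.

```python
# Adjectives rated on 7-point SD scales are factor-analyzed; words with a strong
# loading on some factor are kept as candidate standard emotional words.
import numpy as np
from sklearn.decomposition import FactorAnalysis

adjectives = ["pleasant", "calm", "exciting", "gloomy", "tense", "warm"]
rng = np.random.default_rng(3)
ratings = rng.integers(1, 8, size=(50, len(adjectives))).astype(float)  # 50 respondents

fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
loadings = fa.components_.T                    # rows: adjectives, cols: factors

for word, load in zip(adjectives, loadings):
    keep = bool(np.abs(load).max() > 0.4)      # arbitrary cut-off for "high scores"
    print(f"{word:10s} loadings={np.round(load, 2)} keep={keep}")
```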

Flagship Store Experience of Luxury and SPA Brands -Effect on Store Emotion and Loyalty- (럭셔리와 SPA 플래그십 스토어 체험 -점포 감정 및 충성도에 미치는 효과-)

  • Park, Kyungae;Kim, Eun Young
    • Journal of the Korean Society of Clothing and Textiles / v.40 no.2 / pp.258-272 / 2016
  • This study estimated a structural model examining the causal relationships among flagship store experience, store emotion, and loyalty, and compared the structural models between luxury and fast-fashion SPA brands. A total of 416 responses were collected from consumers who had experience with a variety of luxury and SPA flagship stores. The findings confirmed that flagship store experience consists of four factors: sensory, affective, intellectual, and behavioral experience. Sensory and behavioral experiences had positive effects on store emotion, which in turn influenced loyalty. Affective and cognitive experiences influenced loyalty directly. All aspects of experience explained store emotion or loyalty for luxury flagship stores; however, only two types of experience (sensory and affective) determined store emotion or loyalty for SPA flagship stores. Managerial implications for fashion brands developing and managing flagship stores are discussed.
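
A simplified two-equation path sketch of the kind of model estimated above, using ordinary least squares in place of full structural equation modeling; the variable names and synthetic data exist only to make the code run and do not reproduce the study's estimates.

```python
# Two-equation path sketch (experiences -> store emotion -> loyalty) estimated
# with ordinary least squares instead of full SEM.  All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 416                                        # sample size reported in the abstract
sensory, affective, intellectual, behavioral = rng.normal(size=(4, n))
store_emotion = 0.5 * sensory + 0.4 * behavioral + rng.normal(scale=0.5, size=n)
loyalty = 0.6 * store_emotion + 0.3 * affective + rng.normal(scale=0.5, size=n)

X_exp = np.column_stack([sensory, affective, intellectual, behavioral])
path1 = LinearRegression().fit(X_exp, store_emotion)                   # experiences -> emotion
path2 = LinearRegression().fit(store_emotion.reshape(-1, 1), loyalty)  # emotion -> loyalty
print(np.round(path1.coef_, 2), np.round(path2.coef_, 2))
```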

How facial emotion affects switching cost: Eastern and Western cultural differences (얼굴 표정 정서가 전환 과제 수행에 미치는 영향: 동서양 문화차)

  • Jini Tae;Yeeun Nam;Yoonhyoung Lee;Myeong-ho Sohn;Tae-hoon Kim
    • Korean Journal of Cognitive Science / v.34 no.3 / pp.227-241 / 2023
  • This study examined the influence of emotional information on task-switching performance from a cross-cultural perspective. Specifically, we investigated whether the impact of affective information differs between Koreans and Caucasians when they perform a switching task using pictures expressing positive and negative emotions. Korean and Caucasian college students were presented with either positive or negative faces and asked to perform either an emotion or a gender judgment task depending on the color of the picture frame. The results showed that, for both Koreans and Caucasians, the switching cost from the gender judgment task to the emotion task was significantly larger than the switching cost from the emotion task to the gender task. This asymmetric switching cost was maintained when the previous and current pictures showed the same emotion but disappeared when the two images presented different emotions. Regardless of the participants' cultural background, switching costs were greater for the emotion task, in which the emotion was directly relevant to the task, than for the gender task. However, the effect of an emotion switch on switching costs varied with the individual's background: Koreans were less sensitive than Caucasians to whether the poser's emotion had changed. These results demonstrate that emotional information affects cognitive task performance and suggest that the effects of emotion may differ depending on the individual's cultural background.
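
The asymmetric switching cost reported above is computed from trial-level data as the mean reaction time on switch trials minus the mean on repeat trials, per task; the toy trial table below is hypothetical.

```python
# Per task: mean RT on switch trials minus mean RT on repeat trials.
import pandas as pd

trials = pd.DataFrame({
    "task":      ["emotion", "emotion", "gender", "gender", "emotion", "gender"],
    "prev_task": ["gender",  "emotion", "emotion", "gender", "gender",  "gender"],
    "rt_ms":     [820,        640,       700,       610,      860,       600],
})
trials["switch"] = trials["task"] != trials["prev_task"]

cost = (trials.groupby(["task", "switch"])["rt_ms"].mean()
              .unstack("switch"))
cost["switch_cost_ms"] = cost[True] - cost[False]
print(cost)
```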

Multiclass Music Classification Approach Based on Genre and Emotion

  • Jonghwa Kim
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.27-32 / 2024
  • Reliable and fine-grained musical metadata are required for efficient search of the rapidly increasing number of music files. In particular, since the primary motives for listening to music are its emotional effect, diversion, and the memories it awakens, emotion classification along with genre classification of music is crucial. In this paper, as an initial approach towards a ground-truth dataset for music emotion and genre classification, we carefully generated a music corpus through labeling by a large number of ordinary people. To verify the suitability of the dataset through classification results, we extracted features according to the MPEG-7 audio standard and applied machine learning models, both statistical and deep neural network based, to classify the dataset automatically. Using standard hyperparameter settings, we reached an accuracy of 93% for genre classification and 80% for emotion classification, and we believe that our dataset can serve as a meaningful comparative dataset in this research field.
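
The verification step described above can be sketched as follows, with librosa spectral features standing in for the MPEG-7 descriptor set and toy labels in place of the labeled corpus; the accuracies printed for random audio are meaningless and serve only to show the protocol.

```python
# Low-level audio descriptors are fed to a statistical model and a small neural
# network, and both are scored with cross-validation.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def descriptors(y, sr=22050):
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    flatness = librosa.feature.spectral_flatness(y=y).mean()
    rolloff  = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
    mfcc     = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([[centroid, flatness, rolloff], mfcc])

rng = np.random.default_rng(4)
clips = [rng.normal(size=22050).astype(np.float32) for _ in range(24)]  # 1-s toy clips
genre_labels = np.arange(len(clips)) % 4      # four balanced toy genre classes

X = np.array([descriptors(c) for c in clips])
for model in (RandomForestClassifier(random_state=0),
              MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)):
    acc = cross_val_score(model, X, genre_labels, cv=3).mean()
    print(type(model).__name__, round(acc, 2))
```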

Effect of Color and Emotional Context on Processing Emotional Information of Biological Motion (색과 정서적 맥락이 생물형운동의 정서정보처리에 미치는 영향)

  • Kim, Jejoong;Kim, Yuri;Jo, Eunui
    • Science of Emotion and Sensibility / v.23 no.3 / pp.63-78 / 2020
  • Processing not only social-cognitive information but also various kinds of emotional information is crucial for appropriate social interaction in everyday life. The processing of emotions embedded in social stimuli is affected by various contextual and external factors as well as by features of the stimuli themselves. In this study, emotion discrimination tasks using point-light biological motion were conducted to understand the factors influencing emotion processing and their effects. In the first task, a target biological motion expressing anger or happiness was presented in red, green, white, or yellow, and a white angry, happy, or neutral cue biological motion was displayed simultaneously. Participants judged the emotion of the target by comparing it with the cue stimulus. The second task used only emotionally neutral stimuli to examine the effect of the color itself. The results indicated an association between specific target colors and emotions: red facilitated the processing of anger, whereas green appeared to facilitate the processing of happiness. Discrimination accuracy was generally high when the emotions of the cue and target were identical, but the combination of red and anger yielded different results from the remaining conditions. In the second task, some illusory emotional responses associated with the target colors were observed. This study shows the effects of external factors on emotional processing using biological motion rather than conventional face stimuli. Possible follow-up studies and clinical applications are discussed.
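
The cue-target accuracy pattern reported above is the kind of summary the sketch below computes: discrimination accuracy grouped by target color and cue-target emotion congruence; the trial table is hypothetical.

```python
# Hypothetical trial-level data for the first task: accuracy by target color
# and by whether the cue and target emotions were identical.
import pandas as pd

trials = pd.DataFrame({
    "target_color":   ["red", "red", "green", "green", "white", "yellow"],
    "target_emotion": ["anger", "anger", "happy", "anger", "happy", "happy"],
    "cue_emotion":    ["anger", "happy", "happy", "happy", "neutral", "happy"],
    "correct":        [1, 0, 1, 0, 1, 1],
})
trials["congruent"] = trials["target_emotion"] == trials["cue_emotion"]

accuracy = trials.groupby(["target_color", "congruent"])["correct"].mean()
print(accuracy)
```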