• Title/Abstract/Keyword: Dimensional emotion

Search results: 122 (processing time 0.031 sec)

음악 감성의 사용자 조절에 따른 음악의 특성 변형에 관한 연구 (A Study on the Variation of Music Characteristics based on User Controlled Music Emotion)

  • 응웬반로이;허빈;김동림;임영환
    • 한국콘텐츠학회논문지
    • /
    • Vol. 17, No. 3
    • /
    • pp.421-430
    • /
    • 2017
  • This paper uses a previously studied music-emotion model to estimate the emotion of a piece of music and, when the user shifts that emotion to a desired intensity, transforms the music's characteristics so that the corresponding emotion is expressed. Using an existing one-dimensional music-emotion model that only classifies emotion types, weights for five factors (tempo, dynamics, amplitude variation, brightness, and noise) were computed to extract the actual emotion data (X, Y) of the music; by transforming the predicted emotion into another emotion value (X', Y'), the data for the five corresponding transformation factors can be computed. Since a bright piece of music can be turned into a gloomy one through data transformation alone, this work provides a basic model for future research on music emotion variation.
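The transformation described above, from a predicted emotion point (X, Y) to a user-chosen target (X', Y') with corresponding changes in five musical features, can be sketched under an assumed linear model; the weight matrix below is illustrative, not the paper's actual weights:

```python
import numpy as np

# Hypothetical linear model: emotion point (X, Y) = W @ features, where
# features = [tempo, dynamics, amplitude variation, brightness, noise],
# each normalized to [0, 1]. W is an assumed illustrative weight matrix.
W = np.array([
    [0.6, 0.3, 0.1, 0.7, -0.2],   # valence (X) weights
    [0.8, 0.5, 0.4, 0.2,  0.3],   # arousal (Y) weights
])

def emotion_point(features):
    """Predict the (X, Y) emotion coordinates of a track."""
    return W @ features

def adjust_features(features, target_xy):
    """Minimal-norm feature change moving the track to target (X', Y')."""
    delta_xy = np.asarray(target_xy) - emotion_point(features)
    # The pseudoinverse gives the smallest feature shift achieving delta_xy.
    return features + np.linalg.pinv(W) @ delta_xy

f = np.array([0.7, 0.6, 0.5, 0.8, 0.1])             # a "bright" track
f_new = adjust_features(f, target_xy=(-0.4, -0.3))  # push toward "gloomy"
print(np.round(emotion_point(f_new), 6))
```

Because W has full row rank, the pseudoinverse solution lands exactly on the requested emotion point.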

SNS대상의 지능형 자연어 수집, 처리 시스템 구현을 통한 한국형 감성사전 구축에 관한 연구 (Research on Designing Korean Emotional Dictionary using Intelligent Natural Language Crawling System in SNS)

  • 이종화
    • 한국정보시스템학회지:정보시스템연구
    • /
    • Vol. 29, No. 3
    • /
    • pp.237-251
    • /
    • 2020
  • Purpose This research studied a hierarchical Hangul emotion index by organizing the emotions that SNS users express. In the researcher's preliminary study, the English-based emotional standard of Plutchik (1980) was reinterpreted in Korean, and hashtags with implicit meaning on SNS were studied. To build a multidimensional emotion dictionary and classify three-dimensional emotions, an emotion seed was selected for each of seven emotion sets, and an emotion word dictionary was constructed by collecting SNS hashtags derived from each seed. The priority of each Hangul emotion index was also explored. Design/methodology/approach In transforming sentences into a word-vector matrix, weights were extracted using TF-IDF (Term Frequency-Inverse Document Frequency), and the Nonnegative Matrix Factorization (NMF) algorithm was used to reduce the dimensionality of the matrix within each emotion set. The emotional dimension was resolved using the characteristic values of the emotion words, and the cosine distance algorithm was used to measure the similarity of emotion words within each emotion set. Findings Customer needs analysis is the ability to read changes in emotion, and Korean emotion-word research addresses that need. The ranking of emotion words within an emotion set provides a criterion for reading the depth of an emotion. By providing companies with effective information for emotional marketing, this sentiment index study can expand and add value to new business opportunities. In addition, if the emotion dictionary is eventually connected to the emotional DNA of a product, it becomes possible to define an "emotional DNA": the set of emotions that the product should have.
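The TF-IDF weighting and cosine similarity steps described in the approach can be sketched as follows; the toy hashtag documents and vocabulary are illustrative stand-ins for the paper's SNS data (the NMF dimension-reduction step is omitted for brevity):

```python
import numpy as np

# Toy corpus standing in for hashtag documents grouped around emotion seeds.
docs = [
    "joy happy smile joy",
    "happy smile laugh",
    "sad tears grief sad",
    "grief tears cry",
]
vocab = sorted({w for d in docs for w in d.split()})

def tf_idf(docs, vocab):
    """Term frequency-inverse document frequency matrix (docs x terms)."""
    tf = np.array([[d.split().count(w) for w in vocab] for d in docs], float)
    df = (tf > 0).sum(axis=0)                   # document frequency per term
    idf = np.log(len(docs) / df) + 1.0          # smoothed IDF, always > 0
    return tf * idf

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

X = tf_idf(docs, vocab)
# Documents sharing an emotion seed should be closer than cross-emotion pairs.
print(cosine_sim(X[0], X[1]), cosine_sim(X[0], X[2]))
```

In the paper's pipeline, NMF would further factor X within each emotion set before similarities are measured.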

인간심리를 이용한 감성 모델과 영상검색에의 적용 (Emotional Model via Human Psychological Test and Its Application to Image Retrieval)

  • 유헌우;장동식
    • 대한산업공학회지
    • /
    • Vol. 31, No. 1
    • /
    • pp.68-78
    • /
    • 2005
  • A new emotion-based image retrieval method is proposed in this paper. The research was motivated by Soen's evaluation of human emotion on color patterns. Thirteen pairs of adjectives expressing emotion pairs, such as like-dislike, beautiful-ugly, natural-unnatural, dynamic-static, warm-cold, gay-sober, cheerful-dismal, unstable-stable, light-dark, strong-weak, gaudy-plain, hard-soft, and heavy-light, are modeled offline by a 19-dimensional color array and a 4×3 gray matrix. Once a query is presented in text format, emotion-model-based query formulation produces the associated color array and gray matrix. Images related to the query are then retrieved from the database based on the multiplication of the color array and gray matrix extracted from the query and from each database image. Experiments over 450 images showed an average retrieval rate of 0.61 using the color array alone and 0.47 using the gray matrix alone.
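The retrieval step, ranking database images by multiplying query and image feature arrays, can be sketched as follows; the 19-dimensional color arrays here are random illustrative values, not Soen's measured emotion models:

```python
import numpy as np

# Hypothetical 19-dimensional color arrays: one per emotion adjective
# (query side) and one per database image.
rng = np.random.default_rng(0)
emotion_color = {"warm": rng.random(19), "cold": rng.random(19)}
database = rng.random((5, 19))       # 5 images, one color array each

def retrieve(query_word, images, top_k=3):
    """Rank images by the inner product of query and image color arrays."""
    scores = images @ emotion_color[query_word]
    order = np.argsort(scores)[::-1]            # highest score first
    return order[:top_k], scores[order[:top_k]]

idx, scores = retrieve("warm", database)
print(idx, np.round(scores, 3))
```

The gray-matrix score would be computed the same way and combined with the color-array score for the final ranking.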

로봇의 인간과 유사한 행동을 위한 2차원 무드 모델 제안 (Proposal of 2D Mood Model for Human-like Behaviors of Robot)

  • 김원화;박정우;김우현;이원형;정명진
    • 로봇학회논문지
    • /
    • Vol. 5, No. 3
    • /
    • pp.224-230
    • /
    • 2010
  • As robots are no longer just laborers on industrial sites but are stepping into humans' daily lives, interaction and communication between humans and robots are becoming essential. For this social interaction, a robot needs to generate emotion, which is the result of a very complicated process. The concept of mood has been considered in the psychology community as a factor that affects emotion generation; it is similar to emotion but not the same. In this paper, mood factors for a robot, considering not only the condition of the robot itself but also its circumstances, are listed, chosen, and finally used as elements defining a two-dimensional mood space. Moreover, an architecture that combines the proposed mood model with an emotion generation module is given at the end.

Grooming 사용자의 2차원 감성 모델링에 의한 터치폰의 GUI 요소에 대한 연구 (Research on GUI(Graphic User Interaction) factors of touch phone by two dimensional emotion model for Grooming users)

  • 김지혜;황민철;김종화;우진철;김치중;김용우;박영충;정광모
    • 한국감성과학회:학술대회논문집
    • /
    • 한국감성과학회 2009 Spring Conference
    • /
    • pp.55-58
    • /
    • 2009
  • This study objectively defines users' subjective emotions and presents design guidelines for the GUI design elements of a touch phone based on a two-dimensional emotion model. The research proceeded in the following steps. First, the lifestyles of Grooming users were surveyed, and emotional factors were extracted at the three levels based on Norman (2002): sensory, behavioral, and symbolic. Second, an emotion model was built by surveying the relationship between Russell's (1980) 28 emotion words and the three-level emotions. Finally, representative emotion words were derived through factor analysis, and GUI (Graphic User Interaction) design elements for an emotional touch phone were presented, proposing guidelines for human-centered product design that reflects users' emotions.

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering
    • /
    • Vol. 19, No. 3
    • /
    • pp.148-154
    • /
    • 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information; its success depends on selecting distinctive features and classifying them appropriately. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, the distribution of recognition accuracies shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
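Since an MFCC matrix is a two-dimensional array of coefficients over time frames, it can be fed to a 2D convolution like a grayscale image. A minimal sketch of one such convolution layer, with illustrative shapes and a random kernel rather than the paper's tuned architecture:

```python
import numpy as np

def conv2d(x, k):
    """Naive valid-mode 2D cross-correlation, as used in CNN layers,
    followed by a ReLU activation."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return np.maximum(out, 0.0)                 # ReLU

rng = np.random.default_rng(1)
mfcc = rng.standard_normal((13, 100))           # 13 MFCCs over 100 frames
kernel = rng.standard_normal((3, 3))            # one learnable filter
feature_map = conv2d(mfcc, kernel)
print(feature_map.shape)                        # valid mode shrinks each axis
```

A real SER model would stack several such layers (with pooling) and end in a softmax over the emotion classes.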

물리적 인지적 상황을 고려한 감성 인식 모델링 (Emotion recognition modeling in considering physical and cognitive factors)

  • 송성호;박희환;지용관;박지형;박장현
    • 한국정밀공학회:학술대회논문집
    • /
    • 한국정밀공학회 2005 Spring Conference Proceedings
    • /
    • pp.1937-1943
    • /
    • 2005
  • The technology of emotion recognition is a crucial factor in the ubiquitous era, as it enables various intelligent services for humans. This paper builds a system that recognizes human emotions based on a two-dimensional model using two bio-signals, GSR and HRV. Since it is too difficult to model the human biological system analytically, a statistical method, the Hidden Markov Model (HMM), is used, which relies on transition probabilities among states and measurable observation variance. In experiments for each emotion, average recognition rates of 64% for the first HMM and 55% for the second HMM were obtained.
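The core HMM computation, scoring an observation sequence against an emotion model via the forward algorithm, can be sketched as follows; all probabilities are illustrative, not parameters trained on GSR/HRV data:

```python
import numpy as np

# Tiny discrete HMM: hidden states stand in for emotion states,
# observations for quantized GSR/HRV readings.
A = np.array([[0.7, 0.3],        # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],        # P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])        # initial state distribution

def forward_likelihood(obs):
    """P(observation sequence | model) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward_likelihood([0, 0, 1]))
```

An emotion classifier would score the sequence against one trained HMM per emotion and pick the model with the highest likelihood.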

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia pacific journal of information systems
    • /
    • Vol. 27, No. 1
    • /
    • pp.38-53
    • /
    • 2017
  • As sensor and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. In the multimodal case using facial and body expressions, many studies have relied on normal cameras and thus used limited information, because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model using a high-definition webcam and Kinect to recognize users' emotions from facial and bodily expressions while watching a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.

국제정서사진체계(IAPS)를 사용하여 유발된 정서의 뇌파 연구 (An EEG Study of Emotion Using the International Affective Picture System)

  • 이임갑;김지은;이경화;손진훈
    • 한국감성과학회:학술대회논문집
    • /
    • 한국감성과학회 1997 Annual Conference Proceedings
    • /
    • pp.224-227
    • /
    • 1997
  • The International Affective Picture System (IAPS) developed by Lang and colleagues [1] is a widely adopted tool in studies relating a variety of physiological indices to subjective emotions induced by the presentation of standardized pictures whose subjective ratings are well established in the three dimensions of pleasure, arousal, and dominance. In the present study, we investigated whether distinctive EEG characteristics for six discrete emotions can be discerned using 12 IAPS pictures that scored the highest subjective ratings for one of the six categorical emotions, i.e., happiness, sadness, fear, anger, disgust, and surprise (two slides per emotion). These pictures were presented as visual stimuli in random order to 38 right-handed college students (20-26 years old), with 30 sec of exposure time and 30 sec of inter-stimulus interval per picture, while EEG signals were recorded from F3, F4, O1, and O2 referenced to linked ears. The FFT technique was used to analyze the acquired EEG data. There were significant differences in relative power (RP) changes of EEG bands, most prominent in theta, between positive and negative emotions, and partially also among negative emotions. This result agrees with previous studies [2, 3]. However, further study is required to decide whether the IAPS can be a useful tool for categorical approaches to emotion in addition to its traditional use, namely dimensional approaches to emotion.
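The relative power (RP) computation for an EEG band via FFT can be sketched as follows; the synthetic 6 Hz signal and band edges are illustrative, not recorded EEG:

```python
import numpy as np

# Relative power of EEG bands from an FFT power spectrum. The synthetic
# signal (a dominant 6 Hz theta component plus noise) is illustrative.
fs = 128                                        # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                     # one 4-second epoch
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2

def relative_power(lo, hi):
    """Band power as a fraction of total power between 1 and 40 Hz."""
    total = power[(freqs >= 1) & (freqs <= 40)].sum()
    band = power[(freqs >= lo) & (freqs < hi)].sum()
    return band / total

print(round(relative_power(4, 8), 3))           # theta dominates this signal
```

RP values computed per band and channel are what get compared across emotion conditions.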

퍼지 유사관계를 이용한 다차원 특징들의 가중치 결정과 감성기반 음악검색 (The Weight Decision of Multi-dimensional Features using Fuzzy Similarity Relations and Emotion-Based Music Retrieval)

  • 임지혜;이준환
    • 한국지능시스템학회논문지
    • /
    • Vol. 21, No. 5
    • /
    • pp.637-644
    • /
    • 2011
  • As music has been digitized, it has become easy to purchase and listen to music. However, among the vast number of tracks, users still have difficulty finding music that suits their taste using traditional music information such as artist, genre, title, or album. To relieve this difficulty, content-based and emotion-based music retrieval methods have been proposed and developed. This paper proposes a new method for determining the importance, in emotion-based retrieval, of multi-dimensional vector-type MPEG-7 low-level audio descriptors. In the proposed method, the similarity of music representing mutually opposing emotions is measured from the viewpoint of the multi-dimensional descriptors, and the importance of each descriptor is determined from this similarity relation using rough approximation and the ratio of within-cluster to between-cluster similarity. The weights determined from this importance are used to aggregate the similarities of the multiple audio descriptors, and emotion-based music retrieval is performed with the aggregated similarity. In experiments on an emotion-based music retrieval framework built on content-based retrieval, the proposed method yielded better results than an existing heuristic method in terms of the average number of retrieved items.
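The idea of weighting each descriptor by its within-cluster versus between-cluster similarity can be sketched as follows; the tiny two-dimensional clusters and the ratio-based weighting rule are illustrative simplifications of the paper's rough-approximation formulation:

```python
import numpy as np

# A descriptor whose within-cluster similarity exceeds its between-cluster
# similarity separates opposing emotions well and earns a higher weight.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def descriptor_weight(cluster_a, cluster_b):
    """Ratio of mean within-cluster to mean between-cluster similarity."""
    within = np.mean([cosine(x, y) for x in cluster_a for y in cluster_a])
    between = np.mean([cosine(x, y) for x in cluster_a for y in cluster_b])
    return within / between

# Descriptor 1: opposing-emotion clusters well separated.
calm_d1 = np.array([[1.0, 0.1], [0.9, 0.2]])
tense_d1 = np.array([[0.1, 1.0], [0.2, 0.9]])
# Descriptor 2: clusters overlap, so it carries little emotion information.
calm_d2 = np.array([[1.0, 1.0], [0.9, 1.1]])
tense_d2 = np.array([[1.1, 0.9], [1.0, 1.0]])

w1 = descriptor_weight(calm_d1, tense_d1)
w2 = descriptor_weight(calm_d2, tense_d2)
print(w1 > w2)   # the discriminative descriptor earns the larger weight
```

The resulting weights would then scale each descriptor's similarity score before the per-descriptor similarities are aggregated for retrieval.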