• Title/Summary/Keyword: emotion engineering

A Music Recommendation Method Using Emotional States by Contextual Information

  • Kim, Dong-Joo; Lim, Kwon-Mook
    • Journal of the Korea Society of Computer and Information, v.20 no.10, pp.69-76, 2015
  • A user's choice of music is influenced not only by personal taste but also by emotional state; it is an unconscious projection of the user's emotion. We therefore treat the user's emotional state as inherent in the music itself. In this paper, we infer users' emotional states from the music they select in a specific context and analyze the correlation between that context and the emotional state. To obtain emotional states from music, the proposed method extracts emotional words representative of the music from the lyrics of user-selected songs through morphological analysis, and learns the weights of a linear classifier for each emotional feature of the extracted words. The regularities learned by the classifier are used to calculate predictive weights for virtual music from the weights of music chosen by other users in contexts similar to the active user's. Finally, we propose a method to recommend music matching the user's context and emotional state. Experimental results show that the proposed method is more accurate than traditional collaborative filtering.
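As a rough illustration of the pipeline this abstract describes, emotion-word features extracted from lyrics could feed a linear scorer whose weights were learned from other users' selections, with recommendations ranked by similarity to the context's emotion profile. All vocabulary, weights, and song data below are invented for the sketch; this is not the authors' implementation.

```python
import numpy as np

# Hypothetical emotion vocabulary extracted from lyrics via morphological analysis.
vocab = ["love", "tears", "dance", "alone", "sunshine"]

# Illustrative linear-classifier weights: rows = emotion features (joy, sadness),
# columns = emotion words. Real weights would be learned from user selections.
W = np.array([
    [0.9, -0.6, 0.8, -0.7, 0.7],   # joy
    [-0.4, 0.9, -0.5, 0.8, -0.3],  # sadness
])

def emotion_scores(lyric_words):
    """Bag-of-emotion-words -> per-emotion score via the linear model."""
    x = np.array([lyric_words.count(w) for w in vocab], dtype=float)
    return W @ x

def recommend(candidates, context_profile):
    """Rank candidate songs by cosine similarity between their predicted
    emotion scores and the emotion profile of the active user's context."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sorted(candidates,
                  key=lambda s: -cos(emotion_scores(s["lyrics"]), context_profile))

songs = [
    {"title": "A", "lyrics": ["love", "dance", "sunshine"]},
    {"title": "B", "lyrics": ["tears", "alone", "alone"]},
]
sad_context = np.array([-0.5, 1.0])   # a context associated with sadness
print([s["title"] for s in recommend(songs, sad_context)])   # -> ['B', 'A']
```

For the sad context, the song whose lyrics score high on sadness ranks first, mirroring the abstract's idea of matching music to the emotion implied by the context.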

A Research of Optimized Metadata Extraction and Classification of Audio in Media (미디어에서의 오디오 메타데이터 최적화 추출 및 분류 방안에 대한 연구)

  • Yoon, Min-hee; Park, Hyo-gyeong; Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.05a, pp.147-149, 2021
  • Recently, the media market has grown rapidly and user expectations have risen accordingly. In this research, tags are extracted from media-derived audio and classified into specific categories using artificial intelligence. The categories are emotion types, including joy, anger, sadness, love, hatred, and desire. We conduct the study in Jupyter Notebook, analyze the voice data with the LiBROSA library, and build a Neural Network using Keras layer models.
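Since librosa and Keras may not be available everywhere, the shape of this pipeline can be sketched with NumPy alone: frame-level audio features (crude stand-ins for librosa's spectral features) feed a small softmax classifier over the emotion categories the abstract lists. The weights are untrained and illustrative only.

```python
import numpy as np

EMOTIONS = ["joy", "anger", "sadness", "love", "hatred", "desire"]

def features(signal, frame=512):
    """Crude stand-ins for librosa features: per-frame RMS energy and
    zero-crossing rate, averaged over the whole clip."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1)).mean()
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean()
    return np.array([rms, zcr])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(len(EMOTIONS), 2))   # untrained, illustrative weights
b = np.zeros(len(EMOTIONS))

t = np.linspace(0, 1, 8192)
clip = np.sin(2 * np.pi * 220 * t)        # synthetic 220 Hz tone as "audio"
probs = softmax(W @ features(clip) + b)
print(len(probs), float(probs.sum()))     # six class probabilities summing to 1
```

In the actual study, `features` would be replaced by librosa feature extraction and the single linear layer by a trained Keras network.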

Face Emotion Recognition using ResNet with Identity-CBAM (Identity-CBAM ResNet 기반 얼굴 감정 식별 모듈)

  • Oh, Gyutea; Kim, Inki; Kim, Beomjun; Gwak, Jeonghwan
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.559-561, 2022
  • With the arrival of the artificial intelligence era, technologies that recognize and respond to human emotion have advanced considerably in order to provide personalized environments. Human emotion can be recognized from the face, voice, body motion, and bio-signals; among these, facial expression is the most intuitive and the most readily available. This paper therefore proposes an Identity-CBAM module that combines the gates of the Convolutional Block Attention Module (CBAM) with residual blocks and skip connections for highly accurate facial emotion recognition. The CBAM gates and residual blocks emphasize the key feature information of each expression, yielding a more context-aware model, while the skip connections make the module robust against vanishing and exploding gradients. Using AI-HUB's compound image dataset for Korean emotion recognition, six classes were defined; applying the Identity-CBAM module improved F1-score by 0.4-2.7% and accuracy by 0.18-2.03% over vanilla ResNet50 and ResNet101. In addition, visualization with Guided Backpropagation and Guided Grad-CAM confirmed that important feature points are represented in finer detail. The results demonstrate that, for in-image expression classification, using the Identity-CBAM module together with ResNet50 or ResNet101 is more suitable than using the vanilla networks alone.
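The CBAM-style gating with a residual identity connection can be rendered minimally in NumPy. This is a simplified sketch of CBAM's channel and spatial attention (the paper's 7×7 spatial convolution is replaced by a mean for brevity), not the authors' Identity-CBAM code; all shapes and weights are invented.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_gate(x, W1, W2):
    """CBAM channel attention: a shared two-layer MLP applied to avg- and
    max-pooled channel descriptors, combined through a sigmoid.
    x has shape (C, H, W)."""
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    att = sigmoid(W2 @ np.maximum(W1 @ avg, 0) + W2 @ np.maximum(W1 @ mx, 0))
    return x * att[:, None, None]

def spatial_gate(x):
    """CBAM spatial attention from per-pixel avg/max channel statistics;
    the paper's 7x7 conv is simplified to a mean over the two maps."""
    desc = np.stack([x.mean(axis=0), x.max(axis=0)])
    att = sigmoid(desc.mean(axis=0))
    return x * att[None, :, :]

def identity_cbam_block(x, W1, W2):
    """Attention-refined features added back through a skip connection,
    mirroring the residual structure the abstract describes."""
    return x + spatial_gate(channel_gate(x, W1, W2))

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
x = rng.normal(size=(C, H, W))
W1 = rng.normal(size=(C // 2, C))   # reduction ratio 2, illustrative
W2 = rng.normal(size=(C, C // 2))
y = identity_cbam_block(x, W1, W2)
print(y.shape)                       # shape is preserved: (8, 4, 4)
```

Because the block preserves the input shape, it can be dropped between residual stages of a ResNet, which is the integration the abstract evaluates.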

A Study on Infra-Technology of RCP Mobility System

  • Kim, Seung-Woo; Choe, Jae-Il; Im, Chan-Young
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2004.08a, pp.1435-1439, 2004
  • Recently, the CP (Cellular Phone) has become one of the most important technologies in the IT (Information Technology) field, occupying a position of great industrial and economic importance. Producing the best CP in the world requires a new technological concept and advanced implementation techniques, given the extreme competition in the world market. RT (Robot Technology) has been developed as a next-generation future technology; unlike the industrial robots of the past, current robots require advanced techniques such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition. This paper therefore presents conceptual research on the development of the RCP (Robotic Cellular Phone), a new technological concept in which a synergy effect is generated by merging IT and RT. The RCP infrastructure consists of RCP-Mobility, RCP-Interaction, and RCP-Integration technologies. For RCP-Mobility, human-friendly motion automation and personal services with walking and arming abilities are developed. RCP-Interaction is achieved by modeling an emotion-generating engine, and RCP-Integration, which recognizes environmental and self conditions, is developed. By joining intelligent algorithms and the CP communication network with these three base modules, an RCP system is constructed. This paper focuses in particular on the RCP mobility system. RCP-Mobility applies mobility, a popular robot technology, to the CP and adds human-friendly motion and navigation functions, enabling new applications such as auto-charging and real-world entertainment; this technology can turn a CP into a companion pet robot. It automates human-friendly motions such as opening and closing the CP, rotating the antenna, manipulation, and wheel-walking. Its target is the implementation of wheel and manipulator functions that can serve humans with human-friendly motion. This paper presents the definition, basic theory, and experimental results of the RCP mobility system, and the experiments confirm its good performance.

Developing Applications Based on Emotion Extraction from Paintings (회화에서 감성 추출에 기반한 어플리케이션 개발 연구)

  • Lee, Taemin; Kang, Dongwann; Cho, Kyung-Ja; Park, SooJin; Yoon, Kyunghyun
    • Journal of Digital Contents Society, v.18 no.6, pp.1033-1040, 2017
  • Artists use the artistic features of paintings to convey various emotions. These features may be as simple as color and texture, or may extend to composition and symmetry; through them, people feel various emotions when appreciating paintings. However, some paintings remain inaccessible to non-experts because the analysis of these features is not intuitive. In this paper, we produce content that matches paintings with music, which helps users understand a painting more easily through the matched music.
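A toy illustration of mapping a painting's low-level color features to an emotion label follows. The brightness/saturation thresholds and the label mapping are invented for the example and are not the authors' model; they merely show the kind of feature-to-emotion rule the abstract alludes to.

```python
import numpy as np

def emotion_from_colors(rgb):
    """rgb: (N, 3) array of pixel colors in [0, 1]. Mean brightness is used
    as a crude valence proxy and mean saturation as an arousal proxy - an
    invented mapping, purely for illustration."""
    brightness = rgb.mean()
    saturation = (rgb.max(axis=1) - rgb.min(axis=1)).mean()
    if brightness > 0.5:
        return "exuberant" if saturation > 0.3 else "serene"
    return "tense" if saturation > 0.3 else "melancholic"

dark_muted = np.full((100, 3), 0.2)    # dark, unsaturated "painting"
print(emotion_from_colors(dark_muted))  # -> melancholic
```

A matched music track could then be chosen by looking up the resulting label in a music database tagged with the same emotion categories.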

An Ontological and Rule-based Reasoning for Music Recommendation using Musical Moods (음악 무드를 이용한 온톨로지 기반 음악 추천)

  • Song, Se-Heon; Rho, Seung-Min; Hwang, Een-Jun; Kim, Min-Koo
    • Journal of Advanced Navigation Technology, v.14 no.1, pp.108-118, 2010
  • In this paper, we propose the Context-based Music Recommendation (COMUS) ontology for modeling a user's musical preferences and context, and for supporting reasoning about the user's desired emotion and preferences. COMUS provides an upper Music Ontology that captures general properties of music such as title, artist, and genre, and offers extensibility for adding domain-specific ontologies, such as Mood and Situation, in a hierarchical manner. COMUS is a music-dedicated ontology in OWL, constructed by incorporating domain-specific classes for music recommendation into the Music Ontology. Using this context ontology, logical reasoning can check the consistency of context information and infer high-level, implicit context from low-level, explicit information. As a novelty, our ontology can express detailed and complicated relations among music, moods, and situations, enabling users to find music appropriate to their situation. We present some of the experiments we performed as a case study for music recommendation.
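The flavor of rule-based reasoning over such an ontology can be sketched as naive forward chaining over facts: rules fire until no new fact is derivable, letting a low-level situation entail a mood and finally a recommendation. The facts and rules below are invented stand-ins, not actual COMUS individuals or OWL axioms.

```python
# Invented facts about the active user (stand-ins for ontology individuals).
facts = {("situation", "rainy_evening"), ("preference", "jazz")}

rules = [
    # (premises, conclusion): if all premises hold, the conclusion is added.
    ({("situation", "rainy_evening")}, ("mood", "calm")),
    ({("mood", "calm"), ("preference", "jazz")}, ("recommend", "smooth_jazz")),
]

# Naive forward chaining: apply rules until a fixed point is reached.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(("recommend", "smooth_jazz") in facts)   # -> True
```

In the actual system, an OWL reasoner would perform this inference (plus consistency checking) over the COMUS class hierarchy rather than over flat tuples.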

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun; Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.22 no.3, pp.241-246, 2016
  • Nowadays many people are interested in facial expressions and human behavior, and human-robot interaction (HRI) researchers apply digital image processing, pattern recognition, and machine learning to these studies. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting the feature points from some images, because images differ in conditions such as size, color, and brightness. Therefore, we propose an algorithm that augments the cascade facial feature point detector with a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. As input data for the network, color and gray-scale outputs from the cascade facial feature point detector were used, resized to 32×32; in addition, the gray images were converted to the YUV format. The gray and color images form the basis of the convolutional neural network. We then classified about 1,200 test images of subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm refines the detector's results.
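The preprocessing mentioned above (resizing patches to 32×32 and converting to YUV) can be sketched with NumPy alone. The YUV conversion uses the standard BT.601 coefficients; the nearest-neighbour resize is a simplified stand-in for a proper interpolating resize.

```python
import numpy as np

def resize_nearest(img, size=32):
    """Nearest-neighbour resize of an (H, W, 3) image to size x size -
    a simplified stand-in for an interpolating resize."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def rgb_to_yuv(img):
    """BT.601 RGB -> YUV conversion; img is (H, W, 3) with values in [0, 1]."""
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.14713, -0.28886, 0.436],
                  [0.615, -0.51499, -0.10001]])
    return img @ m.T

patch = np.random.default_rng(0).random((64, 48, 3))   # fake detector output
net_input = rgb_to_yuv(resize_nearest(patch))
print(net_input.shape)   # -> (32, 32, 3), ready for a LeNet-5-style CNN
```

Each patch produced by the cascade detector would pass through this kind of normalization before entering the convolutional network.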

The Research on Prediction of Attentive Hand Movement using EEG Coherence (EEG 코히런스에 의한 집중한 손 동작 예측에 관한 연구)

  • Woo, Jin-Cheol; Whang, Min-Cheol; Kim, Jong-Wha; Kim, Chi-Joong; Kim, Yong-Woo; Kim, Ji-Hye; Kim, Dong-Keun
    • Journal of the Ergonomics Society of Korea, v.29 no.2, pp.189-196, 2010
  • This study identifies the relative EEG power spectra and coherence patterns that discriminate attentive from inattentive hand movements. Eight undergraduate students aged 20 to 27 with no hand disability participated, performing a visuo-motor task while EEG was measured at C3 in the international 10-20 system and at four sites located orthogonally 2.5 cm away from C3. Coherence analysis between movement areas, or between a movement area and a non-movement area, discriminated movement from rest significantly, but the effect differed across individuals. Because these individual differences were expected to stem from the subjects' attention, relative power of the alpha and beta bands was examined: significant relative alpha and beta power was found in the group with high coherence levels but not in the group with low levels. Participants were then divided into two groups according to relative alpha and beta power and compared. Alpha-band coherence was greater in the attentive group than in the inattentive group, while beta-band coherence appeared in the inattentive group. Thus, individual differences in coherence were influenced by attention, and significant coherence patterns that discriminate attentive from inattentive movement were found.
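The coherence measure underlying this analysis can be computed from two channels with NumPy alone: Welch-style segment averaging of the cross- and auto-spectra gives the magnitude-squared coherence |Pxy|²/(Pxx·Pyy). The synthetic "EEG" below, with a shared 10 Hz alpha rhythm plus independent noise, is invented for the demonstration.

```python
import numpy as np

def msc(x, y, nper=256):
    """Magnitude-squared coherence via Welch-style averaging over
    non-overlapping segments: |Pxy|^2 / (Pxx * Pyy) per frequency bin."""
    n = len(x) // nper
    Xs = np.fft.rfft(x[: n * nper].reshape(n, nper), axis=1)
    Ys = np.fft.rfft(y[: n * nper].reshape(n, nper), axis=1)
    Pxx = (np.abs(Xs) ** 2).mean(axis=0)
    Pyy = (np.abs(Ys) ** 2).mean(axis=0)
    Pxy = (Xs * np.conj(Ys)).mean(axis=0)
    return np.abs(Pxy) ** 2 / (Pxx * Pyy + 1e-12)

rng = np.random.default_rng(0)
t = np.arange(4096) / 256.0                  # 256 Hz "EEG" sampling rate
common = np.sin(2 * np.pi * 10 * t)          # shared 10 Hz alpha rhythm
c3 = common + 0.5 * rng.normal(size=t.size)      # channel at C3
nearby = common + 0.5 * rng.normal(size=t.size)  # channel 2.5 cm away
coh = msc(c3, nearby)
alpha_bin = 10                               # 1 Hz per bin at fs=256, nper=256
print(coh[alpha_bin] > coh.mean())           # alpha coherence stands out
```

High coherence at the shared alpha frequency, against a low-coherence noise floor, is exactly the kind of pattern the study uses to separate attentive from inattentive movement.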

Measuring Similarity Between Movies Based on Sentiment of Tweets (트위터를 활용한 감성 기반의 영화 유사도 측정)

  • Kim, Kyoungmin; Kim, Dong-Yun; Lee, Jee-Hyong
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.3, pp.292-297, 2014
  • As Social Network Services (SNS) have become an integral part of everyday life, millions of users can express opinions and share information regardless of time and place. Hence, sentiment analysis using micro-blogs has been studied in various fields to learn people's opinions on particular topics. Most previous research on movie reviews considers only positive and negative sentiment and uses it to predict movie ratings. Since people feel not only positive and negative but many different emotions, the sentiment felt while watching a movie needs to be classified in more detail to extract information beyond personal preference. We measure each movie's sentiment distribution from tweets according to Thayer's model, and then find similar movies by calculating the similarity between sentiment distributions. Through experiments, we verify that our micro-blog-based method performs better than using only the movies' genre information.
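The core computation here reduces to building a per-movie distribution over Thayer-style emotion quadrants and comparing distributions. The quadrant names, tweet labels, and the choice of cosine similarity below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Thayer's model divides emotion by valence x arousal into four quadrants;
# these quadrant names are illustrative.
QUADRANTS = ["exuberance", "anxious", "contentment", "depression"]

def sentiment_distribution(tweet_labels):
    """Normalised histogram of per-tweet quadrant labels for one movie."""
    counts = np.array([tweet_labels.count(q) for q in QUADRANTS], float)
    return counts / counts.sum()

def similarity(p, q):
    """Cosine similarity between two movies' sentiment distributions."""
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

movie_a = sentiment_distribution(["exuberance"] * 7 + ["anxious"] * 3)
movie_b = sentiment_distribution(["exuberance"] * 6 + ["contentment"] * 4)
movie_c = sentiment_distribution(["depression"] * 9 + ["anxious"] * 1)

# Movies sharing a dominant quadrant come out more similar.
print(similarity(movie_a, movie_b) > similarity(movie_a, movie_c))   # -> True
```

Two predominantly exuberant movies score as more similar than an exuberant and a depressive one, which is the behavior the paper exploits instead of genre matching.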

A Machine Learning Approach for Stress Status Identification of Early Childhood by Using Bio-Signals (생체신호를 활용한 학습기반 영유아 스트레스 상태 식별 모델 연구)

  • Jeon, Yu-Mi; Han, Tae Seong; Kim, Kwanho
    • The Journal of Society for e-Business Studies, v.22 no.2, pp.1-18, 2017
  • Recently, identifying an extremely stressed condition in children has become essential for real-time recognition of dangerous situations, as incidents involving children have increased dramatically. In this paper, we therefore present a machine-learning-based model for identifying a child's stress status using bio-signals such as voice and heart rate, which are major indicators of a child's emotion. A smart band for collecting these bio-signals and a mobile application for monitoring the child's stress status are also proposed. Specifically, the proposed method uses children's stress patterns, collected in advance, to train the stress status identification model; the model, built on conventional machine learning algorithms, then predicts the child's current stress status. Experiments on a real-world dataset showed that automated detection of a child's stress status is possible with a satisfactory level of accuracy. The results are expected to be useful for preventing dangerous situations for children.
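The train-then-predict loop the abstract describes can be sketched with a nearest-centroid classifier over two bio-signal features. The feature choice, the training values, and nearest-centroid itself are stand-ins for whichever conventional algorithm the authors actually used; everything below is invented for illustration.

```python
import numpy as np

# Invented training patterns: [mean heart rate (bpm), voice pitch variance],
# collected in advance for known calm vs. stressed episodes.
calm = np.array([[92.0, 10.0], [95.0, 12.0], [90.0, 9.0]])
stressed = np.array([[128.0, 40.0], [135.0, 45.0], [130.0, 38.0]])

# "Training" a nearest-centroid model: one centroid per status.
centroids = {"calm": calm.mean(axis=0), "stressed": stressed.mean(axis=0)}

def stress_status(sample):
    """Predict the status whose centroid is closest to the new bio-signal
    sample - a stand-in for the learned identification model."""
    return min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))

# A new reading from the smart band: elevated heart rate, agitated voice.
print(stress_status(np.array([131.0, 42.0])))   # -> stressed
```

In deployment, the smart band would stream such samples continuously and the mobile application would alert a caregiver when the predicted status turns to stressed.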