Title/Summary/Keyword: information of emotion

Emotion Recognition Based on Frequency Analysis of Speech Signal

  • Sim, Kwee-Bo;Park, Chang-Hyun;Lee, Dong-Wook;Joo, Young-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.2 no.2
    • /
    • pp.122-126
    • /
    • 2002
  • In this study, we identify features of three emotions (happiness, anger, surprise) as foundational research for emotion recognition. Emotional speech carries several elements: voice quality, pitch, formants, speech rate, and so on. Until now, most researchers have used changes in pitch, the short-time average power envelope, or Mel-based speech power coefficients. Pitch is indeed an efficient and informative feature, so we use it in this study. Because pitch is highly sensitive to subtle emotion, it changes readily whenever a speaker is in a different emotional state; we can therefore observe whether the pitch changes steeply, changes with a gentle slope, or does not change. This paper also extracts formant features from emotional speech. For each vowel, the formants stay in similar positions without large differences. Based on this fact, in the case of pleasure we extract features of laughter and use them to separate laughing segments, which simplifies the task; we find corresponding features for anger and surprise as well.
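
As an illustration of the kind of pitch and formant analysis this abstract describes, the sketch below estimates a pitch contour, its slope, and rough formant positions with librosa. This is a minimal sketch, not the authors' implementation; the file name and analysis parameters are assumptions.

```python
import numpy as np
import librosa

# Hypothetical input; any mono emotional-speech recording works.
y, sr = librosa.load("emotional_speech.wav", sr=16000)

# Pitch contour via the pYIN tracker; whether the pitch changes steeply,
# gently, or not at all is the cue the paper describes.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
pitch = f0[~np.isnan(f0)]                                  # keep voiced frames
slope = np.polyfit(np.arange(len(pitch)), pitch, 1)[0]     # Hz per frame

# Rough formant estimates from LPC roots on one 30 ms frame.
n = int(0.03 * sr)
a = librosa.lpc(y[:n] * np.hamming(n), order=12)
roots = [r for r in np.roots(a) if np.imag(r) > 0]
formants = sorted(np.angle(roots) * sr / (2 * np.pi))[:3]  # ~F1..F3 in Hz

print(f"mean pitch {pitch.mean():.1f} Hz, slope {slope:.3f}, formants {formants}")
```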

Smart Affect Jewelry based on Multi-modal (멀티 모달 기반의 스마트 감성 주얼리)

  • Kang, Yun-Jeong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.7
    • /
    • pp.1317-1324
    • /
    • 2016
  • This work uses the Arduino platform to express emotions as colors on a jewelry piece. The color mapping applies Plutchik's Wheel of Emotions model, which relates similar emotions to similar colors. The system receives readings from the temperature, light, sound, pulse, and gyro sensors of a smart jewelry that can be easily accessed from a smartphone, and processes them into a recognized emotion using ontology-based inference rules. The color associated with the emotion recognized in context is then applied to the smart LED jewelry: the emotion and color combination extracted from the sensors' contextual information is reflected on the built-in LEDs according to the wearer's emotional state. By adding light to emotion, the smart jewelry can represent the emotion of the situation and serve as a means of expressing the wearer's intent.
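
A minimal sketch of the emotion-to-color mapping described above, assuming the commonly rendered Plutchik wheel palette; the sensor thresholds and rules below are toy stand-ins for the paper's ontology-based inference.

```python
# Plutchik wheel colors as commonly rendered; the paper's exact palette
# is an assumption here.
PLUTCHIK_RGB = {
    "joy": (255, 215, 0), "trust": (144, 238, 144),
    "fear": (0, 100, 0), "surprise": (64, 224, 208),
    "sadness": (0, 0, 255), "disgust": (128, 0, 128),
    "anger": (255, 0, 0), "anticipation": (255, 165, 0),
}

def infer_emotion(pulse_bpm, sound_db, lux):
    """Toy stand-in for the paper's ontology-based inference rules."""
    if pulse_bpm > 100 and sound_db > 70:
        return "anger"
    if pulse_bpm > 100:
        return "fear"
    if lux > 500 and sound_db > 60:
        return "joy"
    return "trust"

emotion = infer_emotion(pulse_bpm=112, sound_db=75, lux=300)
r, g, b = PLUTCHIK_RGB[emotion]
print(f"LED color for {emotion}: R={r} G={g} B={b}")  # value sent to the LEDs
```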

A Design and Implementation Digital Vessel Bio Emotion Recognition LED Control System (디지털 선박 생체 감성 인식 LED 조명 제어 시스템 설계 및 구현)

  • Song, Byoung-Ho;Oh, Il-Whan;Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.2
    • /
    • pp.102-108
    • /
    • 2011
  • Existing vessel lighting control systems have several problems: complex construction and high costs of installation and maintenance. In this paper, we design a low-cost, high-performance lighting control system for a digital vessel environment. We propose a system that recognizes the user's emotions from biological information (pulse, blood pressure, blood sugar, etc.) obtained through wireless sensors and controls the LED lights accordingly. The system classifies emotions using the backpropagation algorithm; we trained it on 3,000 data sets and obtained about 88.7% accuracy. For each classified emotion, the system finds the most appropriate setting by adjusting the waves or frequencies of the red, green, and blue LED lamps against the 20 color-emotion models in HP's 'The Meaning of Color,' and controls the brightness and contrast of the LED lamps. With this method, the system saves about 20% of the electricity consumed.
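
A minimal sketch of the classification step, using scikit-learn's backpropagation-trained MLP on synthetic stand-ins for the pulse/blood-pressure/blood-sugar readings; the network topology, labels, and data are assumptions, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the paper's 3,000 training samples of
# (pulse, systolic blood pressure, blood sugar); labels are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(loc=[75, 120, 95], scale=[12, 15, 10], size=(3000, 3))
y = rng.integers(0, 4, size=3000)  # e.g. calm / joy / stress / anger

# Feed-forward network trained by backpropagation, as in the paper.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)

sample = [[88, 135, 110]]  # one incoming wireless sensor reading
print("predicted emotion class:", clf.predict(sample)[0])
# The predicted class would then select the RGB wave/frequency setting.
```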

A Study on Behavioral Factors for the Safety of Ambulance Driving by Covariance Structure Analysis (구급차 안전사고에 대한 공분산 구조분석)

  • Jo, Jeanman;Lee, Tae-Yong
    • The Korean Journal of Emergency Medical Services
    • /
    • v.4 no.1
    • /
    • pp.95-100
    • /
    • 2000
  • This study evaluates the factors affecting the safety of ambulance driving and traffic accidents, and provides statistical information on the factors that can reduce ambulance traffic accidents. The major instrument of this study was the Korean Self-Analysis Driver Opinionnaire, a questionnaire containing eight items that measure drivers' opinions and attitudes: driving courtesy, emotion, traffic law, speed, vehicle condition, the use of drugs, high-risk behavior, and human factors. A total of 145 ambulance drivers in Taejon City and six other cities were surveyed from May to November 2000. The data were analyzed by path analysis with the SPSS and AMOS packages. The results are as follows: 1. Path analysis suggested that the risk of ambulance traffic accidents was most affected by emotion control and speed in safe ambulance driving: $Y(\text{Accident}) = 0.88\,X_1(\text{Emotion Control}) + 0.92\,X_2(\text{Speed}) - 0.46\,X_3(\text{Traffic Law}) + E$. 2. The covariance structure analysis likewise suggested that accident risk was most affected by emotion control and speed: $Y(\text{Accident}) = 0.398\,X_1(\text{Emotion Control}) + 0.500\,X_2(\text{Speed}) - 0.263\,X_3(\text{Traffic Law}) + E$.
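
As a worked illustration of the second (standardized) path equation above, the sketch below evaluates predicted accident risk for hypothetical standardized scores; the input values are illustrative, not data from the study.

```python
# Y(Accident) = 0.398*X1(emotion control) + 0.500*X2(speed)
#               - 0.263*X3(traffic law) + E, per the abstract.
def accident_risk(emotion_control, speed, traffic_law, residual=0.0):
    return 0.398 * emotion_control + 0.500 * speed - 0.263 * traffic_law + residual

# Hypothetical standardized (z) scores for one driver.
print(accident_risk(emotion_control=1.2, speed=0.8, traffic_law=-0.5))  # ~1.009
```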


Gesture-Based Emotion Recognition by 3D-CNN and LSTM with Keyframes Selection

  • Ly, Son Thai;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • International Journal of Contents
    • /
    • v.15 no.4
    • /
    • pp.59-64
    • /
    • 2019
  • In recent years, emotion recognition has been an interesting and challenging topic. Compared to facial expressions and speech, gesture-based emotion recognition has received little attention, with only a few efforts using traditional hand-crafted methods. These approaches incur major computational costs and offer few opportunities for improvement, as most of the research community now builds on deep learning techniques. In this paper, we propose an end-to-end deep learning approach for classifying emotions from bodily gestures. In particular, informative keyframes are first extracted from raw videos as input to a 3D-CNN. The 3D-CNN exploits the short-term spatiotemporal information of gesture features from the selected keyframes, and a convolutional LSTM network learns long-term features from the 3D-CNN's outputs. The experimental results on the FABO dataset exceed those of most traditional methods and achieve state-of-the-art results among deep learning techniques for gesture-based emotion recognition.
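
A minimal PyTorch sketch of the pipeline shape described above: a 3D-CNN over a stack of selected keyframes feeding a recurrent layer. A plain LSTM stands in for the paper's convolutional LSTM, and all layer sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GestureEmotionNet(nn.Module):
    """3D-CNN over keyframes + LSTM over the resulting frame features."""
    def __init__(self, num_emotions=10):
        super().__init__()
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool space, keep time axis
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_emotions)

    def forward(self, clips):               # clips: (batch, 3, T, H, W)
        feats = self.cnn3d(clips)           # (batch, 32, T, 1, 1)
        feats = feats.flatten(3).squeeze(-1).permute(0, 2, 1)  # (batch, T, 32)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])          # classify from the last time step

logits = GestureEmotionNet()(torch.randn(2, 3, 16, 64, 64))  # 16 keyframes
print(logits.shape)  # torch.Size([2, 10])
```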

Development of Deep Learning Models for Multi-class Sentiment Analysis (딥러닝 기반의 다범주 감성분석 모델 개발)

  • Syaekhoni, M. Alex;Seo, Sang Hyun;Kwon, Young S.
    • Journal of Information Technology Services
    • /
    • v.16 no.4
    • /
    • pp.149-160
    • /
    • 2017
  • Sentiment analysis is the process of determining whether a piece of a document, text, or conversation is positive, negative, neutral, or expresses another emotion. Sentiment analysis has been applied in several real-world applications, such as chatbots, whose practical use has been spreading across many industries over the last five years. In chatbot applications, sentiment analysis must be performed in advance to recognize the user's emotion and understand the speaker's intent, and the specific emotion involved goes beyond labeling sentences as positive or negative. In this context, we propose deep learning models for multi-class sentiment analysis that identify a speaker's emotion as one of joy, fear, guilt, sadness, shame, disgust, or anger. We develop convolutional neural network (CNN), long short-term memory (LSTM), and multi-layer neural network models as deep neural network models for detecting emotion in a sentence, and we also apply a word embedding process. In our experiments, the LSTM model performs best, compared to the convolutional and multi-layer neural networks. We also show the practical applicability of the deep learning models to sentiment analysis for chatbots.
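
A minimal sketch of the best-performing model described above: word embeddings feeding an LSTM with a seven-way emotion head. The vocabulary size, dimensions, and tokenization are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

EMOTIONS = ["joy", "fear", "guilt", "sadness", "shame", "disgust", "anger"]

class SentimentLSTM(nn.Module):
    """Embedding + LSTM + linear head for 7-class emotion classification."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, len(EMOTIONS))

    def forward(self, token_ids):           # (batch, seq_len) of word indices
        x = self.embed(token_ids)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])               # logits over the 7 emotions

model = SentimentLSTM()
logits = model(torch.randint(0, 20000, (4, 30)))  # 4 sentences, 30 tokens each
print(EMOTIONS[logits[0].argmax().item()])
```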

Speaker-Dependent Emotion Recognition For Audio Document Indexing

  • Hung LE Xuan;QUENOT Georges;CASTELLI Eric
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.92-96
    • /
    • 2004
  • Emotion research is currently of great interest in speech processing as well as in the human-machine interaction domain. In recent years, more and more research on emotion synthesis or emotion recognition has been carried out for different purposes, each approach using its own methods and parameters measured on the speech signal. In this paper, we propose using a short-time parameter, MFCC coefficients (Mel-Frequency Cepstrum Coefficients), and a simple but efficient classification method, Vector Quantization (VQ), for speaker-dependent emotion recognition. Many other features (energy, pitch, zero crossing rate, phonetic rate, LPC, etc.) and their derivatives are also tested and combined with the MFCC coefficients in order to find the best combination. GMM and HMM models (discrete and continuous Hidden Markov Models) are studied as well, in the hope that continuous distributions and the temporal behavior of this feature set will improve the quality of emotion recognition. The maximum accuracy in recognizing five different emotions exceeds 88% using only MFCC coefficients with the VQ model. This simple but efficient approach even outperforms human evaluation on the same database, where listeners judged sentences without being allowed to replay or compare them [8], and the result compares favorably with other approaches.
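
A minimal sketch of the MFCC + VQ approach: train one codebook per emotion from a speaker's recordings, then classify a test utterance by the codebook with the lowest mean quantization distortion. File names and codebook size are placeholders; K-means here stands in for whatever codebook training the authors used.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(path, sr=16000, n_mfcc=13):
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, 13)

# One VQ codebook per emotion, trained on that speaker's recordings.
codebooks = {
    emotion: KMeans(n_clusters=32, n_init=10).fit(mfcc_frames(f"{emotion}.wav"))
    for emotion in ["joy", "anger", "sadness", "fear", "neutral"]
}

def classify(path):
    """Pick the codebook that quantizes the utterance with least distortion."""
    frames = mfcc_frames(path)
    def distortion(km):
        d = km.transform(frames).min(axis=1)  # distance to nearest codeword
        return d.mean()
    return min(codebooks, key=lambda e: distortion(codebooks[e]))

print(classify("test_utterance.wav"))
```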


Emotion Prediction System using Movie Script and Cinematography (영화 시나리오와 영화촬영기법을 이용한 감정 예측 시스템)

  • Kim, Jinsu
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.12
    • /
    • pp.33-38
    • /
    • 2018
  • Recently, there have been attempts to predict emotion from various kinds of information and to convey the emotional information that the director wants to communicate to the audience. Audiences also come to understand the flow of emotions through various non-dialogue information, such as cinematography, scene background, and background sound. In this paper, we propose extracting emotions by mixing not only the context of scripts but also cinematography information such as color, background sound, composition, and arrangement. In other words, we propose an emotion prediction system that learns and distinguishes the various emotional expression techniques of the dialogue and non-dialogue regions, contributes to the completeness of the movie, and adapts quickly to new changes. Compared with a modified n-gram approach and with morphological analysis, the precision of the proposed system improves by about 5.1% and 0.4%, and the recall by about 4.3% and 1.6%, respectively.
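
A minimal sketch of the modality mixing described above: text features from the dialogue region concatenated with simple cinematography features (here, mean RGB and a loudness value) for a per-scene emotion classifier. The feature choices, data, and classifier are assumptions, not the paper's system.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy scene data: dialogue plus non-dialogue cues; values are illustrative.
dialogue = ["I never want to see you again", "what a beautiful morning"]
cinema = np.array([[0.2, 0.1, 0.3, 0.9],   # dark palette, loud sound
                   [0.8, 0.8, 0.5, 0.3]])  # warm palette, quiet sound
labels = ["anger", "joy"]

# Concatenate the dialogue-region and non-dialogue-region features.
vec = TfidfVectorizer()
X = hstack([vec.fit_transform(dialogue), csr_matrix(cinema)])
clf = LogisticRegression().fit(X, labels)

test = hstack([vec.transform(["get out of my house"]),
               csr_matrix(np.array([[0.3, 0.2, 0.2, 0.8]]))])
print(clf.predict(test))  # expected to lean toward 'anger'
```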

Developing a User Property Metadata to Support Cognitive and Emotional Product Design (인지·감성적 제품설계 지원을 위한 사용자 특성정보 메타데이터 구축)

  • Oh, Kyuhyup;Park, Kwang Il;Kim, Hee-Chan;Kim, Woo Ju;Lee, Soo-Hong;Ji, Young Gu;Jung, Jae-Yoon
    • The Journal of Society for e-Business Studies
    • /
    • v.21 no.4
    • /
    • pp.69-80
    • /
    • 2016
  • Cognitive and emotional product design is becoming crucial as the technology gap between products keeps narrowing. Product design guidelines and a corresponding database are therefore needed to support sensing (e.g., sight, hearing, touch), cognition (e.g., attention, memory), and emotion (e.g., aesthetics, functionality), which users experience differently according to their gender and age. The user property information extracted from various experiments can serve as critical criteria in product design and evaluation, and an integrated database of cognition and emotion is needed in which to store this information. In this research, we design user property metadata for supporting cognitive and emotional product design and then develop a prototype system. The metadata is designed to reflect a classification of cognition and emotion obtained by surveying and classifying previous studies on sensing, cognition, and emotion. The user property information is represented in RDF (Resource Description Framework), and a prototype system is developed to store cognitive and emotional user property information based on the designed metadata.
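
A minimal rdflib sketch of what one RDF-encoded user property record might look like; the namespace, property names, and values are hypothetical, since the paper's actual schema URIs are not given here.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace standing in for the paper's metadata schema.
UP = Namespace("http://example.org/user-property#")

g = Graph()
g.bind("up", UP)

# One experimental observation: a cohort's score on an emotion dimension.
obs = UP["obs001"]
g.add((obs, RDF.type, UP.UserProperty))
g.add((obs, UP.category, Literal("emotion")))    # sensing / cognition / emotion
g.add((obs, UP.dimension, Literal("aesthetics")))
g.add((obs, UP.gender, Literal("female")))
g.add((obs, UP.ageGroup, Literal("20s")))
g.add((obs, UP.score, Literal(4.2)))

print(g.serialize(format="turtle"))
```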