• Title/Abstract/Keywords: Emotional States

Search results: 229 (processing time 0.015 s)

A Music Recommendation Method Using Emotional States by Contextual Information

  • Kim, Dong-Joo;Lim, Kwon-Mook
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 20, No. 10
    • /
    • pp.69-76
    • /
    • 2015
  • A user's selection of music is largely influenced by private taste as well as emotional state; it is an unconscious projection of the user's emotion. We therefore regard selected music as a reflection of the user's emotional state. In this paper, we try to grasp users' emotional states from the music they select in a specific context, and we analyze the correlation between that context and the emotional state. To obtain emotional states from music, the proposed method extracts emotional words, as representatives of the music, from the lyrics of user-selected songs through morphological analysis, and learns the weights of a linear classifier over the emotional features of the extracted words. The regularities learned by the classifier are used to calculate predictive weights for virtual music from the weights of music chosen by other users in contexts similar to the active user's. Finally, we propose a method to recommend pieces of music matched to the user's context and emotional state. Experimental results show that the proposed method is more accurate than the traditional collaborative filtering method.
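
The prediction step described in the abstract, weighting other users' learned emotion weights by context similarity and then ranking candidate music, can be sketched roughly as follows. This is a minimal illustration under invented data structures (plain vectors for contexts and emotion features, cosine similarity), not the paper's implementation.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def predict_weights(active_ctx, others):
    """Predict emotion-feature weights for the active user's context.

    others: list of (context_vector, emotion_weight_vector) from other users.
    The prediction is a context-similarity-weighted average of their weights.
    """
    sims = [max(cosine(active_ctx, c), 0.0) for c, _ in others]
    total = sum(sims) or 1.0
    dim = len(others[0][1])
    return [sum(s * w[i] for s, (_, w) in zip(sims, others)) / total
            for i in range(dim)]

def recommend(pred, songs, k=1):
    """Rank candidate songs (name -> emotion feature vector) against pred."""
    ranked = sorted(songs, key=lambda s: cosine(pred, songs[s]), reverse=True)
    return ranked[:k]
```

With two reference users whose contexts are orthogonal, the prediction collapses onto the matching user's weights, and the song whose emotional features best match those weights is recommended first.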

A Study on the Effects of Cyber Bullying on Cognitive Processing Ability and the Emotional States: Moderating Effect of Social Support of Friends and Parents

  • Yituo Feng;Sundong Kwon
    • Asia pacific journal of information systems
    • /
    • Vol. 30, No. 1
    • /
    • pp.167-187
    • /
    • 2020
  • College students experience more cyber bullying than younger youth, and cyber bullying may be more harmful for them; yet most studies of cyber bullying have focused on youth, and little research has addressed college students. This study therefore investigated the negative effects of college students' experience of cyber bullying on cognitive processing ability and emotional state. The social support of friends has a buffering effect that prevents stress and reduces the impact of external harm in stressful situations, but the impact of parental social support is controversial. Traditionally, parental social support has been claimed to mitigate the negative effects of external harm; recently, however, it has been argued that parental support that disregards college students' needs for independence and autonomy does not alleviate these effects. This study therefore examined how the social support of friends and parents moderates the negative impact of cyber bullying. The results show that the more college students experience cyber bullying, the lower their cognitive processing ability and the worse their emotional state. The higher the social support of friends, the weaker the harmful impact of cyber bullying on cognitive processing ability and emotional state; but the higher the social support of parents, the stronger that harmful impact.
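
The moderation (buffering) hypothesis tested in the study is conventionally modeled with an interaction term. The sketch below uses entirely synthetic data, not the study's sample or exact model: it fits emotional state on bullying, support, and their product, so the sign of the interaction coefficient indicates buffering (positive) or amplification (negative).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
bullying = rng.uniform(0, 5, n)
support = rng.uniform(0, 5, n)
# Synthetic outcome: bullying harms emotional state, support buffers it
# (a positive interaction weakens bullying's negative slope).
emotion = (10 - 1.2 * bullying + 0.3 * support
           + 0.15 * bullying * support + rng.normal(0, 0.5, n))

# Ordinary least squares with an intercept and the interaction term.
X = np.column_stack([np.ones(n), bullying, support, bullying * support])
beta, *_ = np.linalg.lstsq(X, emotion, rcond=None)
# beta[1]: main effect of bullying; beta[3]: moderation (interaction) effect.
```

Here beta[3] recovering a positive value would be read as a buffering effect, analogous to the friends' support result; a negative beta[3] would correspond to the amplifying pattern reported for parental support.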

Design of Model to Recognize Emotional States in a Speech

  • Kim Yi-Gon;Bae Young-Chul
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 6, No. 1
    • /
    • pp.27-32
    • /
    • 2006
  • Verbal communication is the most commonly used means of communication. A spoken word carries a great deal of information about speakers and their emotional states. In this paper we designed a model to recognize emotional states in speech, the first of two phases in developing a toy machine that recognizes emotional states in speech. We conducted an experiment to extract and analyze the emotional state of a speaker in relation to speech. To analyze the signal output we used three characteristics of sound as vector inputs: frequency, intensity, and period of tones. We also used eight basic emotional parameters: surprise, anger, sadness, expectancy, acceptance, joy, hate, and fear, portrayed by five selected students. To facilitate differentiation of the spectral features, we used wavelet transform analysis. We applied ANFIS (Adaptive Neuro-Fuzzy Inference System) in designing the emotion recognition model. In our findings, the inference error was about 10%; the experimental results indicate that the applied model is about 85% effective and reliable.

Classification of Emotional States of Interest and Neutral Using Features from Pulse Wave Signal

  • Phongsuphap, Sukanya;Sopharak, Akara
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • ICCAS 2004 (ICROS)
    • /
    • pp.682-685
    • /
    • 2004
  • This paper investigated a method for classifying emotional states using the pulse wave signal, focusing on finding effective features for emotional state classification. The emotional states considered were interest and neutral. Classification experiments used 65 samples of the interest state and 60 of the neutral state. We investigated 19 features derived from pulse wave signals, using both time-domain and frequency-domain analysis methods, with two classifiers: minimum distance (normalized Euclidean distance) and k-Nearest Neighbour. Leave-one-out cross validation was used for evaluation. Based on the experimental results, the most efficient feature set was a combination of four features: (i) the mean of the first differences of the smoothed pulse rate time series, (ii) the mean of the absolute values of the second differences of the normalized interbeat intervals, (iii) the root mean square of successive differences, and (iv) the power in the high-frequency range in normalized units, which provided 80.8% average accuracy with the k-Nearest Neighbour classifier.
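
The evaluation setup, k-Nearest-Neighbour over z-scored feature vectors (a "normalized Euclidean" distance) with leave-one-out cross validation, can be sketched as below. The feature vectors are assumed to be synthetic stand-ins for the paper's 4-dimensional pulse-wave features, not real data.

```python
import math

def zscore(data):
    # Normalize each dimension to zero mean and unit variance.
    dims = len(data[0])
    means = [sum(v[d] for v in data) / len(data) for d in range(dims)]
    stds = [math.sqrt(sum((v[d] - means[d]) ** 2 for v in data) / len(data)) or 1.0
            for d in range(dims)]
    return [[(v[d] - means[d]) / stds[d] for d in range(dims)] for v in data]

def knn_predict(train, labels, x, k=3):
    # Majority vote among the k nearest training vectors (squared Euclidean).
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

def loocv_accuracy(data, labels, k=3):
    # Leave-one-out: each sample is classified by all the others.
    data = zscore(data)
    hits = 0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        lab = labels[:i] + labels[i + 1:]
        hits += knn_predict(train, lab, data[i], k) == labels[i]
    return hits / len(data)
```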

Inferring Pedestrians' Emotional States through Physiological Responses to Measure Subjective Walkability Indices

  • Kim, Taeeun;Lee, Meesung;Hwang, Sungjoo
    • International Conference Proceedings
    • /
    • The 9th International Conference on Construction Engineering and Project Management
    • /
    • pp.1245-1246
    • /
    • 2022
  • Walkability indicates how willing pedestrians are to walk and how well a walking environment is created. Because walking can promote pedestrians' mental and physical health, there has been increasing focus on improving walkability in different ways, and plenty of research has been undertaken to measure it. Walkability involves many objective and subjective variables. Subjective variables include feelings of safety, pleasure, or comfort, which can significantly affect perceived walkability; however, these subjective factors are difficult to measure, which makes walkability indices more reliant on objective, physical factors. Because many subjective variables are associated with human emotional states, understanding pedestrians' emotional states provides an opportunity to measure the subjective walkability variables more quantitatively. Pedestrians' emotions can be examined through surveys, but conducting surveys involves social and economic difficulties. Recently, an increasing number of studies have employed physiological data to measure pedestrians' stress responses when navigating unpleasant environmental barriers on their walking paths, but studies investigating pedestrians' emotional states in the walking environment, including positive emotions such as pleasure, have rarely been conducted. Using wearable devices, this study examined the various emotional states of pedestrians affected by the walking environment. Specifically, it aimed to demonstrate the feasibility of monitoring biometric data, such as electrodermal activity (EDA) and heart rate variability (HRV), with wearable devices as indicators of pedestrians' emotional states, on both the pleasant-unpleasant and the aroused-relaxed dimensions. To this end, walking environments with different characteristics were set up to collect and analyze the pedestrians' biometric data. Subjects wearing the devices then walked the experimental paths as usual. After the experiment, the pedestrians' valence (pleasant or unpleasant) and arousal (activated or relaxed) were identified through a bipolar-dimension survey, and the survey results were compared with many potentially relevant EDA and HRV signal features. The results revealed the potential for physiological responses to indicate pedestrians' emotional states, but further investigation is warranted. They are expected to provide a way to measure the subjective factors of walkability by monitoring pedestrians' positive or negative feelings while walking, in order to improve the walking environment. However, owing to the small sample and to other internal and external factors influencing emotions (which need further study), it cannot be comprehensively concluded that the pedestrians' emotional states were affected by the walking environment.
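
Two HRV quantities of the kind compared against the valence/arousal ratings, the root mean square of successive differences (RMSSD) and mean heart rate, can be computed directly from interbeat (RR) intervals. The sketch below is generic, with RR intervals in milliseconds; it is not tied to the study's wearable data.

```python
import math

def rmssd(rr):
    """Root mean square of successive differences of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def mean_hr(rr):
    """Mean heart rate in beats per minute from RR intervals in ms."""
    return 60000.0 / (sum(rr) / len(rr))
```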

Recognition of Emotion and Emotional Speech Based on Prosodic Processing

  • Kim, Sung-Ill
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 23, No. 3E
    • /
    • pp.85-90
    • /
    • 2004
  • This paper presents two new approaches: one concerns recognition of emotional speech expressing anger, happiness, normal, sadness, or surprise; the other concerns recognition of emotion in speech. For the proposed speech recognition system handling human speech with emotional states, nine kinds of prosodic features were first extracted and then given to a prosodic identifier. In evaluation, recognition rates on emotional speech increased markedly over the existing speech recognizer. For emotion recognition, on the other hand, four kinds of prosodic parameters, including pitch, energy, and their derivatives, were proposed and then trained by discrete-duration continuous hidden Markov models (DDCHMM) for recognition. In this approach, the emotional models were adapted to a specific speaker's speech using maximum a posteriori (MAP) estimation. In evaluation, recognition rates on vocal emotions gradually increased with the number of adaptation samples.

감정에 따른 음성의 기본주파수 실현 연구 (A Study of F0 Realization in Emotional Speech)

  • 박미영;박미경
    • Korean Society of Phonetic Sciences and Speech Technology: Conference Proceedings
    • /
    • Proceedings of the 2005 Fall Conference of the Korean Society of Phonetic Sciences and Speech Technology
    • /
    • pp.79-85
    • /
    • 2005
  • In this paper, we compare normal speech with emotional speech in happy, sad, and angry states through changes in the fundamental frequency. The distribution charts of normal and emotional speech show distinctive cues such as the range of the distribution, average, maximum, and minimum. On the whole, the range of the fundamental frequency is extended in the happy and angry states, whereas the sad state makes the range relatively narrower. Nevertheless, the F0 ranges in the sad state are wider than in normal speech. In addition, we verify that ending boundary tones reflect the information of the whole utterance.
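
The distribution cues compared in the paper (range, average, maximum, and minimum of F0) reduce to simple per-emotion statistics. The F0 values (Hz) below are invented for illustration, not measured data.

```python
def f0_stats(f0):
    """Summary statistics of an F0 contour (Hz) for one emotional state."""
    return {"mean": sum(f0) / len(f0), "max": max(f0),
            "min": min(f0), "range": max(f0) - min(f0)}

# Synthetic contours: a happy utterance is expected to show a higher
# mean and a wider F0 range than a neutral one.
neutral = [110, 120, 115, 125, 118]
happy = [140, 190, 160, 210, 175]
stats_n, stats_h = f0_stats(neutral), f0_stats(happy)
```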

영상에 의해 유발된 부정적 감정 상태에 따른 전두엽 감마대역 신경동기화 (Frontal Gamma-band Hypersynchronization in Response to Negative Emotion Elicited by Films)

  • 김현;최종두;최정우;여동훈;서부경;허성진;김경환
    • Korean Society for Biomedical Engineering: Journal of Biomedical Engineering Research
    • /
    • Vol. 39, No. 3
    • /
    • pp.124-133
    • /
    • 2018
  • We investigated changes in cortical activity according to emotional valence while participants watched video clips. We examined the neural basis of two emotional states (positive and negative) using spectral power analysis and brain functional connectivity analysis of cortical current-density time series reconstructed from high-density electroencephalograms (EEGs). Fifteen healthy participants viewed a series of thirty-two 2-min emotional video clips while sixty-four-channel EEGs were recorded. Distributed cortical sources were reconstructed using weighted minimum-norm estimation. We examined the temporal and spatial characteristics of spectral source powers showing significant differences between positive and negative emotion, and determined correlations between gamma-band activities and affective valence ratings. We observed changes in the cortical current-density time series according to the emotional states modulated by the video clips. Gamma-band activity differed significantly between emotional states for thirty seconds in the middle and the latter half of the video clips, mainly in the prefrontal area, and was significantly anti-correlated with self-ratings of emotional valence. In addition, gamma-band activities in frontal and temporal areas were strongly phase-synchronized, more strongly so for negative emotional states. Cortical activities in frontal and temporal areas showed high spectral power and inter-regional phase synchronization in the gamma band during negative emotional states. We infer that the higher amygdala activation induced by negative stimuli resulted in strong emotional effects and caused strong local and global gamma-band synchronization of neural activity in frontal and temporal areas.
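
Gamma-band power of the sort compared between emotional states can be estimated from a single channel with a plain FFT periodogram. The sketch below uses a synthetic signal (a 40 Hz component in noise) rather than EEG, and a simple rectangular-window periodogram rather than the source-space analysis used in the paper.

```python
import numpy as np

fs = 256  # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic "EEG": a 40 Hz (gamma-band) oscillation embedded in noise.
eeg = 2.0 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 0.5, t.size)

def band_power(x, fs, lo, hi):
    """Sum of periodogram power in the [lo, hi) frequency band."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

gamma = band_power(eeg, fs, 30, 80)  # gamma band dominates here
alpha = band_power(eeg, fs, 8, 13)   # alpha band contains only noise
```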

감정 상태가 작은 디스플레이의 정보 탐색에 미치는 효과 (Effects of emotional states on information search pattern on small display)

  • 김혁;한광희
    • Science of Emotion and Sensibility
    • /
    • Vol. 9, No. 4
    • /
    • pp.321-329
    • /
    • 2006
  • Recently, interest in the role of emotion, beyond the traditional cognitive aspects, has been growing in the field of human-computer interaction. This study examined whether users' information search patterns change with emotional state when they search for information on a small screen in order to make a decision. To induce positive or negative emotional states, participants listened to music that elicits positive or negative emotion while performing an autobiographical recall task, retrieving positive or negative memories from their own past. After the emotion induction, each participant searched travel information about three countries on a small display and chose the most preferred destination; the paths of the links they explored and the time spent on each link were recorded in real time. The results showed that information search was faster in the positive emotional state than in the neutral and negative states, and that in the neutral and negative states participants allocated relatively more cognitive resources to detailed items than in the positive state.

Discrimination of Three Emotions using Parameters of Autonomic Nervous System Response

  • Jang, Eun-Hye;Park, Byoung-Jun;Eum, Yeong-Ji;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea
    • /
    • Vol. 30, No. 6
    • /
    • pp.705-713
    • /
    • 2011
  • Objective: The aim of this study is to compare the results of emotion recognition by several algorithms that classify three different emotional states (happiness, neutral, and surprise) using physiological features. Background: Recent emotion recognition studies have tried to detect human emotion from physiological signals; such recognition is important for applying emotion detection in human-computer interaction systems. Method: 217 students participated in this experiment. While three kinds of emotional stimuli were presented, ANS responses (EDA, SKT, ECG, RESP, and PPG) were measured twice: first for 60 seconds as the baseline, then for 60 to 90 seconds during the emotional state. The signals obtained from the baseline and emotional-state sessions were each analyzed over 30 seconds. Participants rated their own feelings toward the emotional stimuli on an emotional assessment scale after stimulus presentation. Emotion classification was analyzed by Linear Discriminant Analysis (LDA, SPSS 15.0), Support Vector Machine (SVM), and Multilayer Perceptron (MLP) using difference values, i.e., the emotional-state measurement minus the baseline. Results: The emotional stimuli had 96% validity and 5.8-point efficiency on average. There were statistically significant differences in ANS responses among the three emotions. LDA classified the three emotions with an accuracy of 83.4%, versus 75.5% for SVM and 55.6% for MLP. Conclusion: This study confirmed that the three emotions can be classified better by LDA using various physiological features than by SVM or MLP. Further study is needed to establish the stability and reliability of this result by comparison with the accuracy of emotion classification using other algorithms. Application: This could help improve recognition of various human emotions from physiological signals and could be applied in human-computer interaction systems for recognizing human emotions.
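
A minimal version of the LDA classifier that performed best in the study scores each class with a linear discriminant built from class means and a pooled within-class covariance (equal priors assumed). The sketch below is generic and would take synthetic "difference value" features as input; it is not the study's SPSS analysis or physiological data.

```python
import numpy as np

def lda_fit(X, y):
    """Fit class means and pooled within-class covariance (equal priors)."""
    classes = sorted(set(y))
    means = {c: X[y == c].mean(axis=0) for c in classes}
    d = X.shape[1]
    cov = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c] - means[c]
        cov += Xc.T @ Xc
    # Pooled covariance, lightly regularized for numerical stability.
    cov = cov / (len(X) - len(classes)) + 1e-6 * np.eye(d)
    return classes, means, np.linalg.inv(cov)

def lda_predict(model, x):
    """Assign x to the class with the largest linear discriminant score."""
    classes, means, icov = model

    def score(c):
        m = means[c]
        return x @ icov @ m - 0.5 * (m @ icov @ m)

    return max(classes, key=score)
```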