• Title/Summary/Keyword: Happiness Emotion

Search results: 170

Effects of Mood on the Food Preference of Female University Students (지각된 감정이 여대생들의 음식 선호에 미치는 영향)

  • Lee, Eun-Young;Cho, Mi-Sook
    • Journal of the Korean Society of Food Culture / v.23 no.6 / pp.713-719 / 2008
  • The purpose of this study was to investigate food preferences and attitudes associated with six emotions in female university students, and whether the desire to consume food changed with each mood. A self-reported questionnaire was administered to 285 female university students. Food preference differed significantly by emotion. Pizza and pasta, ice cream, and cake were preferred during happiness and amusement. In sadness and anger, alcohol was the most preferred item. Beverages, jjigae and baikban, ice cream, and snacks were preferred during relaxation, while chocolate showed the highest preference during depression. Taste and flavor were the main preference attributes across all emotions. Self-assessed food intake increased during happiness, amusement, anger, and relaxation but decreased during sadness and depression (p<0.001).

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri;Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.9-16 / 2023
  • Speech can actively elicit feelings and attitudes through words, so it is important for researchers to identify both the emotional content of speech signals and the type of emotion the speech produces. In this study, we examined an emotion recognition system using an Arabic database, specifically in the Saudi dialect, drawn from the YouTube channel Telfaz11. The four emotions examined were anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as the Mel Frequency Cepstral Coefficient (MFCC) and Zero-Crossing Rate (ZCR), and then classified emotions using machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) as well as deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the MFCC feature extraction method combined with the CNN model obtained the best accuracy at 95%, demonstrating the effectiveness of this classification system in recognizing spoken Arabic emotions.
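Of the features named in this abstract, the Zero-Crossing Rate is simple enough to sketch directly. The pure-Python version below is an illustration only; the frame length and the convention of treating zero as non-negative are assumptions, not details from the paper:

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ.

    `frame` is a list of audio samples; treating 0 as non-negative
    is an arbitrary convention assumed here.
    """
    crossings = sum(
        1 for a, b in zip(frame, frame[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)
```

An alternating signal gives the maximum rate of 1.0, while a monotone positive signal gives 0.0. In practice a signal-processing library routine would be used for the full MFCC/ZCR feature set.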

A Study on the Performance of Music Retrieval Based on the Emotion Recognition (감정 인식을 통한 음악 검색 성능 분석)

  • Seo, Jin Soo
    • The Journal of the Acoustical Society of Korea / v.34 no.3 / pp.247-255 / 2015
  • This paper presents a study on the performance of music search based on automatically recognized music-emotion labels. As with other media such as speech, images, and video, a song can evoke certain emotions in listeners, and these evoked emotions can be an important consideration when people look for songs to listen to. However, little work has been done on how music-emotion labels affect music search performance. In this paper, we utilize the three axes of human music perception (valence, activity, tension) and five basic emotion labels (happiness, sadness, tenderness, anger, fear) in measuring music similarity for search. Experiments were conducted on both genre and singer datasets. The search accuracy of the proposed emotion-based music search reached up to 75% of that of conventional feature-based search. By combining the emotion-based method with the feature-based method, we achieved up to a 14% improvement in search accuracy.
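The similarity measure described above can be pictured as a nearest-neighbor search in the (valence, activity, tension) space. The coordinates and song ids below are hypothetical, and plain Euclidean distance is an assumption; the abstract does not specify the paper's actual distance measure:

```python
import math

def emotion_distance(song_a, song_b):
    """Euclidean distance between two (valence, activity, tension) vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(song_a, song_b)))

def rank_by_emotion(query, catalog):
    """Return song ids sorted by emotional similarity to the query song."""
    return sorted(catalog, key=lambda sid: emotion_distance(query, catalog[sid]))

# Hypothetical catalog: song id -> (valence, activity, tension)
catalog = {"calm_ballad": (0.6, 0.2, 0.1), "metal_track": (0.3, 0.9, 0.8)}
```

For a query at (0.5, 0.3, 0.2), this ranks "calm_ballad" ahead of "metal_track". A combined search, as in the paper, would mix this distance with a conventional acoustic-feature distance.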

Discrimination of Three Emotions using Parameters of Autonomic Nervous System Response

  • Jang, Eun-Hye;Park, Byoung-Jun;Eum, Yeong-Ji;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.30 no.6 / pp.705-713 / 2011
  • Objective: The aim of this study was to compare the results of emotion recognition by several algorithms that classify three emotional states (happiness, neutral, and surprise) using physiological features. Background: Recent emotion recognition studies have tried to detect human emotion from physiological signals; such recognition is important for applying emotion detection in human-computer interaction systems. Method: 217 students participated in the experiment. While three kinds of emotional stimuli were presented, ANS responses (EDA, SKT, ECG, RESP, and PPG) were measured twice: for 60 seconds as the baseline and for 60 to 90 seconds during the emotional states. The signals from the baseline and emotional-state sessions were each analyzed over 30 seconds. Participants rated their own feelings on an emotional assessment scale after the presentation of each stimulus. Emotion classification was performed by Linear Discriminant Analysis (LDA, SPSS 15.0), Support Vector Machine (SVM), and Multilayer Perceptron (MLP) using difference values obtained by subtracting the baseline from the emotional state. Results: The emotional stimuli had 96% validity and 5.8-point effectiveness on average. Statistical analysis showed significant differences in ANS responses among the three emotions. Classification accuracy for the three emotions was 83.4% with LDA, 75.5% with SVM, and 55.6% with MLP. Conclusion: This study confirmed that the three emotions are better classified by LDA using various physiological features than by SVM or MLP. Further study is needed to establish the stability and reliability of this result by comparing classification accuracy against other algorithms.
    Application: This could improve the recognition of various human emotions from physiological signals and be applied in human-computer interaction systems for recognizing human emotions.
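The difference-value features used above (emotional-state response minus baseline) reduce to a simple elementwise subtraction. The sketch below takes its signal names from the abstract, but the numeric values are invented for illustration:

```python
def difference_features(baseline, emotional):
    """Subtract each baseline measurement from its emotional-state counterpart.

    Both arguments map signal names (e.g. "EDA", "SKT") to scalar values
    summarizing the 30-second analysis window.
    """
    return {name: emotional[name] - baseline[name] for name in baseline}

# Invented example values: skin temperature and electrodermal activity
baseline = {"SKT": 33.0, "EDA": 2.0}
emotional = {"SKT": 33.5, "EDA": 2.6}
```

The resulting difference vector, not the raw measurements, is what would be fed to the LDA, SVM, or MLP classifier.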

Effects of Emotional Information on Visual Perception and Working Memory in Biological Motion (정서 정보가 생물형운동자극의 시지각 및 작업기억에 미치는 영향)

  • Lee, Hannah;Kim, Jejoong
    • Science of Emotion and Sensibility / v.21 no.3 / pp.151-164 / 2018
  • The appropriate interpretation of social cues is a crucial ability in everyday life. When processing socially relevant information, emotional information, beyond the low-level physical features of the stimuli, is known to influence human cognition at various stages, from early perception to later high-level cognition such as working memory (WM). However, it remains unclear how the influence of emotional information on cognitive processing changes depending on the processing stage. Past studies have largely adopted face stimuli to address this question, but we used a unique class of socially relevant motion stimuli called biological motion (BM), which depicts human actions and emotions with moving dots, to examine the effects of anger, happiness, and neutral emotion on perceptual and working-memory task performance. In this study, participants judged whether two BM stimuli, presented sequentially with a delay between them (WM task) or one immediately after the other (perceptual task), were identical. In the perceptual task, discrimination accuracy for emotional stimuli (angry and happy) was lower than for neutral stimuli, implying that emotional information negatively affects early perceptual processes. In contrast, in the WM task, the drop in accuracy as the interstimulus interval increased was smaller in the emotional BM conditions than in the neutral condition, suggesting that emotional information benefits maintenance. Moreover, anger and happiness had distinct impacts on perceptual and WM performance. Our findings provide evidence for an interaction between emotion type and information-processing stage.

Doing More by Seeing Less: Gritty Applicants are Less Sensitive to Facial Threat Cues

  • Shin, Ji-eun;Lee, Hyeonju
    • Science of Emotion and Sensibility / v.25 no.1 / pp.21-28 / 2022
  • People differ greatly in their capacity to persist in the face of challenges. Despite significant research, relatively little is known about the cognitive factors that might be involved in perseverance. Building on the human threat-management mechanism, we predicted that perseverant people would be characterized by reduced sensitivity (i.e., longer detection latency) to threat cues. Our data from 5,898 job applicants showed that highly perseverant individuals required more time to correctly identify anger in faces, regardless of stimulus type (dynamic or static computer-morphed faces). No such individual differences were observed for other facial expressions (happiness, sadness), and the effect was independent of gender, dispositional anxiety, and conscientiousness. Discussion centers on the potential role of threat sensitivity in the effortful pursuit of goals.

Alexithymia and the Recognition of Facial Emotion in Schizophrenic Patients (정신분열병 환자에서의 감정표현불능증과 얼굴정서인식결핍)

  • Noh, Jin-Chan;Park, Sung-Hyouk;Kim, Kyung-Hee;Kim, So-Yul;Shin, Sung-Woong;Lee, Koun-Seok
    • Korean Journal of Biological Psychiatry / v.18 no.4 / pp.239-244 / 2011
  • Objectives: Schizophrenic patients have been shown to be impaired in both emotional self-awareness and the recognition of others' facial emotions. Alexithymia refers to deficits in emotional self-awareness. The relationship between alexithymia and the recognition of others' facial emotions needs to be explored to better understand the characteristics of emotional deficits in schizophrenic patients. Methods: Thirty control subjects and 31 schizophrenic patients completed the Toronto Alexithymia Scale-20-Korean version (TAS-20K) and a facial emotion recognition task. The stimuli in the task comprised six emotions (happiness, sadness, anger, fear, disgust, and neutral), and recognition accuracy was calculated within each emotion category. Correlations between TAS-20K scores and recognition accuracy were analyzed. Results: The schizophrenic patients showed higher TAS-20K scores and lower recognition accuracy than the control subjects. Unlike the controls, the patients showed no significant correlations between TAS-20K scores and recognition accuracy. Conclusions: The data suggest that, although schizophrenia may impair both emotional self-awareness and the recognition of others' facial emotions, the degree of deficit can differ between the two. This indicates that the emotional deficits in schizophrenia may assume more complex features.

The Relationships between Meaning in Life and Happiness among University Students. (대학생의 생의 의미와 행복의 관계)

  • Lee, Ok-Sook;Jang, Sun-Hee
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.9 / pp.113-122 / 2017
  • This study was conducted to identify factors influencing the happiness of college students, focusing on meaning in life, and to suggest interventions for promoting their happiness. The subjects were 205 students in C city, and the data were collected from 20-30 October 2016. Data were analyzed using independent t-tests, ANOVA, Scheffé's method, Pearson's correlation coefficients, and stepwise multiple regression. Significant differences in happiness were found by sex (p=0.045), age (p=0.019), degree (p=0.038), satisfaction with major (p<0.001), and perceived health condition (p<0.001). Happiness was positively related to self-confidence (p<0.001), self-control (p<0.001), positive emotion (p<0.001), meaning in life (p<0.001), finding meaning in life (p<0.001), and search for meaning in life (p<0.001). In the multiple regression analysis, finding meaning in life, perceived health condition, and satisfaction with major accounted for 41.0% of the variance in happiness, with finding meaning in life being the most influential factor.

Emotion recognition in speech using hidden Markov model (은닉 마르코프 모델을 이용한 음성에서의 감정인식)

  • 김성일;정현열
    • Journal of the Institute of Convergence Signal Processing / v.3 no.3 / pp.21-26 / 2002
  • This paper presents a new approach to identifying human emotional states such as anger, happiness, normal, sadness, and surprise using discrete-duration continuous hidden Markov models (DDCHMM). Emotional feature parameters are first defined from the input speech signals; in this study, we used prosodic parameters such as pitch, energy, and their derivatives, which were then trained by HMM for recognition. Speaker-adapted emotional models based on maximum a posteriori (MAP) estimation were also considered for speaker adaptation. Simulation results showed that the vocal emotion recognition rate gradually increased with the number of adaptation samples.
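The prosodic features mentioned above (energy and its derivative) reduce to short framewise computations. A pure-Python sketch, with the frame length as an assumed parameter:

```python
def frame_energies(samples, frame_len):
    """Short-time energy of non-overlapping frames of `frame_len` samples."""
    return [
        sum(s * s for s in samples[i:i + frame_len])
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def deltas(values):
    """First-order derivative: difference between consecutive frame values."""
    return [b - a for a, b in zip(values, values[1:])]
```

Pitch extraction and the DDCHMM training itself are substantially more involved and are not reproduced here.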


Fuzzy Model-Based Emotion Recognition Using Color Image (퍼지 모델을 기반으로 한 컬러 영상에서의 감성 인식)

  • Joo, Young-Hoon;Jeong, Keun-Ho
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.3 / pp.330-335 / 2004
  • In this paper, we propose a technique for recognizing human emotion from color images. First, we extract the skin color region from the color image using the HSI model. Second, we extract the face region using the Eigenface technique. Third, we locate the facial feature points (eyebrows, eyes, nose, mouth) in the face image and build a fuzzy model for recognizing human emotions (surprise, anger, happiness, sadness) from the structural correlation of the feature points. We then infer the emotion from the fuzzy model. Finally, we demonstrate the effectiveness of the proposed method through experiments.
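The first step, skin-region extraction with the HSI model, can be sketched per pixel as below. The conversion follows the standard RGB-to-HSI formulas; the skin-hue and saturation thresholds are illustrative assumptions, since the abstract does not give the paper's actual values:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (each in 0..1) to (hue in degrees, saturation, intensity)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    denom = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if denom == 0:
        h = 0.0  # achromatic pixel: hue is undefined, use 0 by convention
    else:
        h = math.degrees(math.acos(0.5 * ((r - g) + (r - b)) / denom))
        if b > g:
            h = 360.0 - h
    return h, s, i

def is_skin(r, g, b, h_max=50.0, s_min=0.1, s_max=0.9):
    """Hypothetical skin test: skin tones cluster at low hues with moderate saturation."""
    h, s, _ = rgb_to_hsi(r, g, b)
    return h <= h_max and s_min <= s <= s_max
```

Applying is_skin to every pixel yields a binary skin mask, within which the face region would then be located by the Eigenface step.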