• Title/Summary/Keyword: Sadness Emotion


Emotion Recognition in Children With Autism Spectrum Disorder: A Comparison of Musical and Visual Cues (음악 단서와 시각 단서 조건에 따른 학령기 자폐스펙트럼장애 아동과 일반아동의 정서 인식 비교)

  • Yoon, Yea-Un
    • Journal of Music and Human Behavior / v.19 no.1 / pp.1-20 / 2022
  • The purpose of this study was to evaluate how accurately children with autism spectrum disorder (ASD; n = 9) recognized four basic emotions (i.e., happiness, sadness, anger, and fear) following musical or visual cues. Their performance was compared to that of typically developing children (TD; n = 14). All of the participants were between the ages of 7 and 13 years. Four musical cues and four visual cues for each emotion were presented to evaluate the participants' ability to recognize the four basic emotions. The results indicated significant differences between the two groups in both the musical and visual cue conditions. In particular, the ASD group recognized the four emotions significantly less accurately than the TD group. However, both groups recognized emotions more accurately following the musical cues than the visual cues. Finally, for both groups, recognition accuracy was greatest for happiness following the musical cues; with the visual cues, the ASD group exhibited the greatest recognition accuracy for anger. This initial study supports the idea that musical cues can facilitate emotion recognition in children with ASD. Further research is needed to improve our understanding of the mechanisms involved in emotion recognition and the role sensory cues play in emotion recognition for children with ASD.

A Study on the Development of Emotional Content through Natural Language Processing Deep Learning Model Emotion Analysis (자연어 처리 딥러닝 모델 감정분석을 통한 감성 콘텐츠 개발 연구)

  • Hyun-Soo Lee;Min-Ha Kim;Ji-won Seo;Jung-Yi Kim
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.687-692 / 2023
  • We analyze the accuracy of emotion analysis performed by a natural language processing deep learning model and propose its use in the development of emotional content. After outlining the GPT-3 model, we input about 6,000 pieces of dialogue data provided by Aihub, labeled with 9 emotion categories: 'joy', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', and 'pain'. Performance was evaluated using accuracy, precision, recall, and F1-score, the standard evaluation indices for natural language processing models. The emotion analysis achieved an accuracy of over 91%; precision was low for 'fear' and 'pain'. Recall was low for the negative emotions, and in the case of 'disgust' in particular, errors arose from a lack of data. Previous studies mainly used emotion analysis only for polarity analysis (positive, negative, and neutral), which by its nature limited its use to the feedback stage. We expand emotion analysis into 9 categories and suggest considering it in the development of emotional content from the planning stage. More accurate results are expected if emotion analysis is performed on a larger and more diverse collection of daily conversations in follow-up research.
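The evaluation indices named in the abstract above (accuracy, precision, recall, F1-score) can be computed per emotion category in a one-vs-rest fashion. A minimal sketch in plain Python, using toy labels rather than the Aihub data:

```python
def per_class_metrics(y_true, y_pred, label):
    """Precision, recall, and F1 for one emotion label (one-vs-rest)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy predictions over three of the nine categories.
y_true = ["joy", "sadness", "fear", "fear", "joy", "sadness"]
y_pred = ["joy", "sadness", "fear", "joy", "joy", "fear"]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

A category with few examples (like 'disgust' in the study) makes these per-class counts small and the scores unstable, which is consistent with the data-scarcity error the authors report.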

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze the speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. To date, researchers have attained impressive results by utilizing acted-out speech from skilled actors in controlled environments for various scenarios. However, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions, i.e., joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% for adults and 73.0% for young people using the time-frequency 2-dimensional spectrogram. In conclusion, our findings demonstrate that the suggested framework outperformed current state-of-the-art techniques for spontaneous speech and showed promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
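The 1-D-to-2-D conversion step described in the abstract above can be sketched with a short-time Fourier transform in NumPy. The frame length, hop size, and synthetic 440 Hz tone below are illustrative assumptions; the paper's actual preprocessing and the VGG classifier itself are not reproduced here:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Convert a 1-D audio signal into a 2-D time-frequency magnitude spectrogram."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Real FFT of each windowed frame; log-compress the magnitudes
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag).T  # shape: (freq_bins, time_frames)

# 1 s of a 440 Hz tone sampled at 8 kHz as a stand-in for speech
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

The resulting 2-D array can then be treated as an image and fed to a convolutional network such as VGG.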

The Relationship of Developmental Change of Temperament and Problem Behaviors During Infancy: Early Characteristic of Temperament and Developmental Patterns (영아기 기질의 발달적 변화와 영아기 사회적 부적응 행동 간의 관계 : 초기 기질 특성과 기질의 변화 패턴을 중심으로)

  • Kim, Su-chung;Kwak, Keumjoo
    • Korean Journal of Child Studies / v.28 no.6 / pp.183-199 / 2007
  • This longitudinal study investigated developmental changes in temperament and examined social adjustment problems by early temperamental characteristics and developmental patterns of temperamental change during infancy. Subjects were 153 six-month-old infants and their mothers. Infant temperament and toddlers' problem behaviors were measured by the Infant Behavior Questionnaire-Revised (Gartstein & Rothbart, 2003) and the Toddler Behavior Checklist (Larzelere et al., 1989), respectively. Results showed that distress to limitations, high pleasure, perceptual sensitivity, and approach increased with age, while activity level, cuddliness, and vocal reactivity decreased. Infants with high scores in activity level, fear, sadness, and approach at 6 months showed more problem behaviors at 18 months. Infants showing abrupt developmental change in high pleasure and perceptual sensitivity developed more negative behavior.


Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.21 no.2E / pp.98-104 / 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained them to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voices, we used prosodic parameters such as pitch signals, energy, and their derivatives, which were then trained by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, and these feature parameters were then trained by the NN for recognition. The recognition rates for the combined parameters obtained from voice and facial expressions showed better performance than either of the two isolated sets of parameters. The simulation results were also compared with human questionnaire results.
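The abstract above combines voice (HMM) and face (NN) recognizers but does not spell out the combination rule, so the weighted late fusion below is a hypothetical sketch: per-emotion scores from each modality are averaged and the top-scoring emotion is chosen. The scores and the 0.5 weight are invented for illustration:

```python
def fuse(voice_probs, face_probs, w=0.5):
    """Late fusion: weighted average of per-emotion scores from two modalities."""
    return {e: w * voice_probs[e] + (1 - w) * face_probs[e] for e in voice_probs}

# Hypothetical per-emotion scores from the two recognizers
voice = {"anger": 0.6, "happiness": 0.2, "sadness": 0.1, "surprise": 0.1}
face = {"anger": 0.3, "happiness": 0.5, "sadness": 0.1, "surprise": 0.1}
combined = fuse(voice, face)
best = max(combined, key=combined.get)
```

Fusing at the score level lets each modality compensate for the other's weak cases, which is one plausible reason the combined parameters outperform either set alone.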

Young Children's Perceptions and Responses to Negative Emotions (유아가 인식하는 부정적 정서와 반응)

  • Jeong, Youn Hee;Kim, Heejin
    • Korean Journal of Child Studies / v.23 no.2 / pp.31-47 / 2002
  • In this study, the perceptions and responses of 136 kindergarten children from middle-SES families were recorded in one-to-one interviews about the causes of, reasons for expressing, and responses to negative emotions. Results showed that children perceived the causes of anger and sadness as 'interpersonal events' and perceived the cause of fear to be 'fantasy/scary events'. The children tended not to express their negative emotions because they expected negative responses from their peers and mothers, but when they did, they expressed their negative emotions to their mothers rather than to peers. Children responded to the negative emotions of their peers with 'problem-solving focused strategies', but they responded to their mothers' negative emotions with passive strategies, such as 'emotion focused response' and 'avoidance'.


The Development of Moral Emotional Understanding in Preschool Children : The Influence of Offenders' Intentions and Victims' Reactions (유아의 도덕적 정서 이해의 발달 : 가해자 의도와 피해자 반응의 영향)

  • Song, Ha-Na
    • Korean Journal of Child Studies / v.33 no.2 / pp.1-12 / 2012
  • This study examined the influences of age, offenders' intentions, and victims' emotional reactions on the moral emotional understanding of preschool children. Eighty-eight children aged 4, 5, and 6 participated in this study and were interviewed using four moral transgression stories. The responses of the children were then analyzed in terms of the levels of moral emotional understanding, from error through to the understanding of secondary emotions. The results indicated that older children showed higher levels of moral emotional understanding than younger children. Additionally, children's moral emotional understanding was higher in situations in which offenders' behaviors were intentional and in which the victims expressed sadness. The attribution of moral emotions was influenced by victims' emotional reactions only in 6-year-old children. The discussion of these results also covered the development of intervention programs for children with aggressive behaviors, as well as a number of suggestions for future study.

The Expression of Negative Emotions During Children's Pretend Play (유아의 상상놀이에서 부정적 정서 표현에 대한 연구)

  • Shin, Yoolim
    • Korean Journal of Child Studies / v.21 no.3 / pp.133-142 / 2000
  • This study investigated the extent to which negative emotions were portrayed, the ways in which children communicated about negative emotions, and to whom negative emotions were attributed during pretend play. The themes in which negative emotions were embedded were examined. Thirty 4- and 5-year-olds, each paired with a self-chosen peer, were observed and videotaped during a 20-minute play session. Observations presented the following conclusions: Anger and fear were the most frequently occurring negative emotions. Children communicated about negative feelings through emotion action labels and gesture. Children attributed a large proportion of their emotional portrayals to themselves and to play objects. Expression of affective themes embedded in pretend play included anger, fear, sadness, and pain.


Speech emotion recognition based on CNN - LSTM Model (CNN - LSTM 모델 기반 음성 감정인식)

  • Yoon, SangHyeuk;Jeon, Dayun;Park, Neungsoo
    • Annual Conference of KIPS / 2021.11a / pp.939-941 / 2021
  • Humans express emotions through facial expressions, voice, and speech. In this paper, we propose a method for classifying emotions using only the speaker's voice data. The voice data are transformed into the time-frequency domain using a Mel-spectrogram. The Mel-spectrogram data are converted into feature vectors by a CNN, and a Bi-Directional LSTM then analyzes how the emotion changes over the course of the speaker's utterance. Finally, a fully connected network classifies the overall emotion. The emotions were classified into six categories: Anger, Excitement, Fear, Happiness, Sadness, and Neutral, and the Korean speech emotion database built by a research team at Sangmyung University was used. Experimental results show that the proposed CNN-LSTM model achieved an accuracy of 88.89%.

Recognition of the emotional state through the EEG (뇌파를 통한 감정 상태 인식에 관한 연구)

  • Ji, Hoon;Lee, Chung-heon;Park, Mun-Kyu;An, Young-jun;Lee, Dong-hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.958-961 / 2015
  • Emotional expression is universal, and emotional state affects important areas of our lives. Until now, efforts to define emotional states by analyzing EEG signals acquired under emotion-evoking circumstances have been made mainly by psychologists, based on the results of such experiments. Recently, however, research results have shown that it is possible to identify mental activity by measuring and analyzing brain EEG signals. This study therefore compared and analyzed human emotional expressions using brain waves. To obtain EEG differences for particular emotions, we showed participants images of specific subjects chosen to evoke emotions such as peace, joy, sadness, and stress. After the measured EEG signals were converted into the frequency domain by FFT signal processing, we analyzed the respective power spectra of the delta, theta, alpha, beta, and gamma waves and observed the EEG changes corresponding to each emotion.
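The FFT-and-band-power analysis described in the abstract above can be sketched with NumPy. The band edges below follow common convention but vary across studies, and the synthetic alpha-dominated signal stands in for a real EEG recording:

```python
import numpy as np

# Conventional EEG band edges in Hz (an assumption; exact ranges vary by study)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, sr):
    """Relative power of each EEG band, from the FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(eeg), d=1 / sr)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    total = power[1:].sum()  # skip the DC component
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Synthetic 4 s recording dominated by a 10 Hz alpha rhythm plus a weak 20 Hz beta component
sr = 256
t = np.arange(4 * sr) / sr
powers = band_powers(np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 20 * t), sr)
```

Comparing such relative band powers across emotion-evoking conditions is the kind of spectrum analysis the study performs.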
