• Title/Summary/Keyword: emotional speech (감정 음성)

Search results: 235

Quantifying and Analyzing Vocal Emotion of COVID-19 News Speech Across Broadcasters in South Korea and the United States Based on CNN (한국과 미국 방송사의 코로나19 뉴스에 대해 CNN 기반 정량적 음성 감정 양상 비교 분석)

  • Nam, Youngja;Chae, SunGeu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.2
    • /
    • pp.306-312
    • /
    • 2022
  • During the unprecedented COVID-19 outbreak, the public's information needs created an environment in which people overwhelmingly consumed information about the disease. Given that news media affect the public's emotional well-being, the pandemic highlights the importance of paying particular attention to how news stories frame their coverage. In this study, the vocal emotion of COVID-19 news speech from mainstream broadcasters in South Korea and the United States (US) was analyzed using convolutional neural networks. Results showed that neutrality was detected across broadcasters; however, emotions such as sadness and anger were also detected. These emotions were evident in the Korean broadcasters, whereas they were not detected in the US broadcasters. This is the first quantitative vocal emotion analysis of COVID-19 news speech. Overall, our findings provide new insight into news emotion analysis and have broad implications for better understanding the COVID-19 pandemic.
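
The abstract does not describe the network itself; as a rough illustration of the core operation behind a CNN-based vocal emotion classifier, the sketch below runs one 1-D convolution layer over an MFCC-like feature matrix and pools it into emotion-class probabilities. All shapes, filter counts, and the four-class output are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution: x is (channels_in, T), w is (channels_out, channels_in, k)."""
    c_out, c_in, k = w.shape
    T = x.shape[1] - k + 1
    out = np.zeros((c_out, T))
    for t in range(T):
        # contract over input channels and kernel width for every filter at once
        out[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
mfcc = rng.standard_normal((13, 100))       # 13 MFCCs x 100 frames (stand-in for real features)
w = rng.standard_normal((8, 13, 5)) * 0.1   # 8 filters, kernel width 5 (arbitrary choices)
b = np.zeros(8)

h = np.maximum(conv1d(mfcc, w, b), 0.0)     # ReLU activation
pooled = h.mean(axis=1)                     # global average pooling over time
w_fc = rng.standard_normal((4, 8)) * 0.1    # 4 emotion classes, e.g. neutral/happy/sad/angry
probs = softmax(w_fc @ pooled)
print(probs.shape, round(float(probs.sum()), 6))  # (4,) 1.0
```

A trained model would learn `w`, `b`, and `w_fc` from labeled news speech; here they are random, so only the shapes and data flow are meaningful.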

Discriminative Feature Vector Selection for Emotion Classification Based on Speech (음성신호기반의 감정분석을 위한 특징벡터 선택)

  • Choi, Ha-Na;Byun, Sung-Woo;Lee, Seok-Pil
    • Proceedings of the KIEE Conference
    • /
    • 2015.07a
    • /
    • pp.1391-1392
    • /
    • 2015
  • As computer technology has advanced and computers have taken on diverse forms, many wearable devices have appeared. Human emotion information has accordingly become important in human-interface technology, and a great deal of research on emotion recognition has been conducted. This paper proposes feature vectors suitable for emotion analysis. To this end, human emotion was classified into four categories (neutral, happiness, sadness, anger), and speech was recorded without noise from broadcast media. Three kinds of feature vectors, MFCC, LPC, and LPCC, were extracted, and their separability was compared using the Bhattacharyya distance.
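
The abstract compares feature sets by Bhattacharyya distance. Assuming each emotion class is modeled as a multivariate Gaussian over its feature vectors (a common formulation, not necessarily the paper's exact one), the distance between two classes can be computed as below; the sample data are synthetic stand-ins for MFCC-style features.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)                 # average covariance
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

rng = np.random.default_rng(1)
# synthetic feature vectors for two emotion classes (3-D for brevity)
happy = rng.standard_normal((200, 3)) + np.array([2.0, 0.0, 0.0])
sad   = rng.standard_normal((200, 3))

d = bhattacharyya_gaussian(happy.mean(0), np.cov(happy.T),
                           sad.mean(0),   np.cov(sad.T))
print(round(float(d), 3))  # larger distance => better separability of the feature set
```

Ranking feature sets (MFCC vs. LPC vs. LPCC) then amounts to computing this distance per emotion pair for each feature set and preferring the set with the larger separation.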

Speech Synthesis and Voice Conversion Technology (음성 합성 및 발성 변환 기술)

  • 김종국;이기영;배명진
    • The Magazine of the IEIE
    • /
    • v.31 no.6
    • /
    • pp.52-62
    • /
    • 2004
  • Speech is the most convenient medium for human-to-human communication. Speech carries many kinds of information, but the most basic and important is semantic, that is, linguistic, information. Speech also carries speaker-identity information indicating who is talking and affective information conveying the speaker's emotion. (Excerpt truncated)

Implementation of the Timbre-based Emotion Recognition Algorithm for a Healthcare Robot Application (헬스케어 로봇으로의 응용을 위한 음색기반의 감정인식 알고리즘 구현)

  • Kong, Jung-Shik;Kwon, Oh-Sang;Lee, Eung-Hyuk
    • Journal of IKEEE
    • /
    • v.13 no.4
    • /
    • pp.43-46
    • /
    • 2009
  • This paper deals with recognizing emotion from the human voice in order to find feature vectors. Voice signals carry not only speaker-specific information but also the speaker's emotions and fatigue, so much research is in progress on detecting emotion from the voice. In this paper, we analyze the Selectable Mode Vocoder (SMV), one of the standard 3GPP2 codecs, and from the results of that analysis we propose voice features for recognizing emotion. An emotion recognition algorithm based on a Gaussian mixture model (GMM) using the proposed feature vectors is then presented, and its performance is verified by varying the number of mixture components.
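
The abstract proposes GMM-based recognition. As a minimal sketch, the snippet below fits a one-component, diagonal-covariance Gaussian per emotion (the degenerate GMM case) and classifies a frame by maximum log-likelihood; the feature values are synthetic stand-ins, not SMV-derived features.

```python
import numpy as np

def fit_diag_gaussian(X):
    """Fit a single diagonal-covariance Gaussian (a 1-component GMM) to class data."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6   # small floor avoids zero variance

def log_likelihood(x, mean, var):
    """Log density of x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

rng = np.random.default_rng(2)
# toy 4-D feature vectors for two emotion classes
train = {"neutral": rng.standard_normal((100, 4)),
         "angry":   rng.standard_normal((100, 4)) + 1.5}
models = {emo: fit_diag_gaussian(X) for emo, X in train.items()}

x = np.full(4, 1.5)   # unseen frame resembling the "angry" class
pred = max(models, key=lambda emo: log_likelihood(x, *models[emo]))
print(pred)  # angry
```

A real system would use multiple mixture components per class (trained with EM) and average log-likelihoods over many frames of an utterance, which is what varying the mixture-component count in the paper's evaluation refers to.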

A Study of the Singer's Formant and Vocal-Fold Vibration Patterns of Traditional Korean and Western Classical Singers (전통음악 및 서양음악 가수에 대한 음악 음형대 및 성대진동양상에 대한 연구)

  • 홍기환;박병암;양윤수;김현기
    • Proceedings of the KSLP Conference
    • /
    • 1997.11a
    • /
    • pp.254-254
    • /
    • 1997
  • Trained singers of Western classical music produce an effective vocal resonance during performance so that the singing voice carries well, which is attributed to the formation of the singer's formant. The typical phonation of pansori, by contrast, blends an appropriately rough timbre rather than a purely clean tone, together with suitable emotional expression, to convey emotion fitting Korean sensibility. This phonation style, unique to pansori singers, involves excessive vocal-fold contact and vibration that can cause lesions; in severe cases, pathological lesions of the phonatory organs develop and require treatment. (Excerpt truncated)

Study of Emotion in Speech (감정변화에 따른 음성정보 분석에 관한 연구)

  • 장인창;박미경;김태수;박면웅
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.10a
    • /
    • pp.1123-1126
    • /
    • 2004
  • Recognizing emotion in speech requires large spoken-language corpora, covering not only different emotional states but also individual languages. In this paper, we focus on how speech signals change with emotion. We compared speech features such as formants and pitch across four emotions (neutral, happiness, sadness, anger). In Korean, pitch measured on monophthongs changed with each emotion. We therefore suggest analysis techniques that use these features to recognize emotion in Korean.
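
Pitch, one of the features compared across emotions above, can be estimated from a voiced frame by autocorrelation. The sketch below uses a synthetic 200 Hz tone in place of a real monophthong recording; the 50-400 Hz search range is an assumed speaking-pitch band, not a value from the paper.

```python
import numpy as np

def estimate_pitch(signal, fs, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency by locating the autocorrelation peak."""
    sig = signal - signal.mean()
    # one-sided autocorrelation: r[lag] for lag = 0 .. len(sig)-1
    r = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lag range for the pitch band
    lag = lo + int(np.argmax(r[lo:hi]))       # strongest periodicity in range
    return fs / lag

fs = 16000
t = np.arange(int(0.05 * fs)) / fs           # one 50 ms analysis frame
frame = np.sin(2 * np.pi * 200.0 * t)        # synthetic 200 Hz "vowel"
print(round(estimate_pitch(frame, fs), 1))   # ~200.0
```

Running this per frame over a monophthong and comparing the resulting pitch contours across the four emotions is the kind of analysis the abstract describes; formants would need a separate estimator (e.g. LPC root-finding).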

Design of Emotion Recognition Using Speech Signals (음성신호를 이용한 감정인식 모델설계)

  • 김이곤;김서영;하종필
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2001.10a
    • /
    • pp.265-270
    • /
    • 2001
  • Voice is one of the most efficient communication media, and it carries several kinds of information about the speaker, the context, emotion, and so on. Human emotion is expressed in speech, gesture, and physiological phenomena (breathing, pulse, etc.). In this paper, a method for recognizing emotion from a speaker's voice signal is presented and simulated using a neuro-fuzzy model.

An Analysis of Formants Extracted from Emotional Speech and Acoustical Implications for the Emotion Recognition System and Speech Recognition System (독일어 감정음성에서 추출한 포먼트의 분석 및 감정인식 시스템과 음성인식 시스템에 대한 음향적 의미)

  • Yi, So-Pae
    • Phonetics and Speech Sciences
    • /
    • v.3 no.1
    • /
    • pp.45-50
    • /
    • 2011
  • Formant structure of speech associated with five different emotions (anger, fear, happiness, neutral, sadness) was analysed. Acoustic separability of vowels (or emotions) associated with a specific emotion (or vowel) was estimated using F-ratio. According to the results, neutral showed the highest separability of vowels followed by anger, happiness, fear, and sadness in descending order. Vowel /A/ showed the highest separability of emotions followed by /U/, /O/, /I/ and /E/ in descending order. The acoustic results were interpreted and explained in the context of previous articulatory and perceptual studies. Suggestions for the performance improvement of an automatic emotion recognition system and automatic speech recognition system were made.
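
The F-ratio used above as a separability measure can be illustrated with a simplified between-/within-group variance ratio over hypothetical first-formant values; the classical ANOVA F statistic additionally divides each sum of squares by its degrees of freedom, and the formant means below are invented for the example.

```python
import numpy as np

def f_ratio(groups):
    """Between-group variance over mean within-group variance (separability index)."""
    means = np.array([g.mean() for g in groups])
    grand = np.concatenate(groups).mean()
    between = np.mean((means - grand) ** 2)   # spread of the group means
    within = np.mean([g.var() for g in groups])  # average spread inside groups
    return between / within

rng = np.random.default_rng(3)
# hypothetical F1 values (Hz) of one vowel produced under three emotions
f1 = {"neutral": 700 + 15 * rng.standard_normal(50),
      "anger":   780 + 15 * rng.standard_normal(50),
      "sadness": 660 + 15 * rng.standard_normal(50)}
print(round(f_ratio(list(f1.values())), 2))  # well-separated emotions -> high F-ratio
```

Computing this ratio with vowels as groups (per emotion) or emotions as groups (per vowel) gives the two separability rankings the abstract reports.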

Acquisition of natural Emotional Voice Through Autobiographical Recall Method (자전적 회상을 통한 자연스런 정서음성정보 수집방법에 관한 연구)

  • Jo, Eun-Kyung;Jo, Cheol-Woo;Min, Kyung-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.2
    • /
    • pp.66-70
    • /
    • 1997
  • In order to obtain natural emotional voice in the laboratory, an autobiographical recall method was used, and happy, angry, sad, and afraid feelings were induced in 16 college students. Three independent judges rated the subjects' facial expressions and vocal characteristics. The mood-induction results were compared with those from an actor-initiated method. Data analysis showed that recall-induced voices successfully conveyed subtle emotional cues, while actor-induced voices signaled more extreme emotions. Implications of the autobiographical recall method for emotional voice research and potential problems are discussed.
