• Title/Summary/Keyword: Emotion Recognition in Conversations (대화 내 감정 인식)


Physiological Signal-Based Emotion Recognition in Conversations Using T-SNE (생체신호 기반의 T-SNE를 활용한 대화 내 감정 인식)

  • Subeen Leem;Byeongcheon Lee;Jihoon Moon
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.703-705 / 2023
  • This study proposes a more accurate and broadly applicable recognition technique for emotion recognition based on physiological signal data collected during conversations. To this end, the number of measurements is first equalized across dialogues of different lengths. The dimensionality reduction technique T-SNE (T-distributed Stochastic Neighbor Embedding) is then used to examine the distribution of emotion labels, so that effective combinations of physiological signals can be compared and analyzed. In addition, AutoML (Automated Machine Learning) is applied to the reduced data to classify emotions and to predict arousal and valence, identifying the combination of physiological signals that recognizes emotions best.
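
The abstract outlines a two-step procedure: embed the physiological-signal features with t-SNE, then score the reduced data with an AutoML-style model. The following is a minimal sketch of that idea, assuming already length-normalized feature rows; the array shapes, the RandomForestClassifier stand-in for AutoML, and the variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))          # placeholder: one row per dialogue segment, columns = bio-signal features
labels = rng.integers(0, 4, size=300)   # placeholder emotion labels

# 1) Embed a candidate combination of physiological signals in 2-D with t-SNE
#    and inspect how well the emotion labels separate in the embedded space.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# 2) Stand-in for the AutoML step: cross-validate a baseline classifier on the
#    reduced data; the signal combination with the best score would be kept.
score = cross_val_score(RandomForestClassifier(random_state=0), embedding, labels, cv=5).mean()
print(f"mean CV accuracy on the t-SNE embedding: {score:.3f}")
```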

Real-time Background Music System for Immersive Dialogue in Metaverse based on Dialogue Emotion (메타버스 대화의 몰입감 증진을 위한 대화 감정 기반 실시간 배경음악 시스템 구현)

  • Kirak Kim;Sangah Lee;Nahyeon Kim;Moonryul Jung
    • Journal of the Korea Computer Graphics Society / v.29 no.4 / pp.1-6 / 2023
  • Background music is often used to enhance immersion in metaverse environments. However, this music is usually pre-matched and repeated, which can distract users because it does not align well with rapidly changing, user-interactive content. We therefore implemented a system that provides a more immersive metaverse conversation experience by 1) developing a regression neural network that extracts emotions from an utterance using KEMDy20, a Korean multimodal emotion dataset, 2) selecting music corresponding to the extracted emotions from the DEAM dataset, in which music is tagged with arousal-valence levels, and 3) combining these components with a virtual space where users can hold real-time conversations with avatars.
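
The music-selection step (2) above amounts to a nearest-neighbour lookup in arousal-valence space. Below is a minimal sketch of that step, assuming the emotion model has already produced an (arousal, valence) pair and that the music library is a table of tracks tagged with values on the same scale, as in DEAM; the track names and numbers are placeholders, not data from the paper.

```python
import numpy as np

# Placeholder library: (track_id, arousal, valence), values assumed normalized to [0, 1].
music_library = [
    ("track_001.mp3", 0.8, 0.7),
    ("track_002.mp3", 0.2, 0.9),
    ("track_003.mp3", 0.3, 0.2),
]

def select_track(arousal: float, valence: float) -> str:
    """Return the track whose arousal-valence tag is closest to the predicted emotion."""
    tags = np.array([[a, v] for _, a, v in music_library])
    distances = np.linalg.norm(tags - np.array([arousal, valence]), axis=1)
    return music_library[int(distances.argmin())][0]

print(select_track(0.75, 0.65))  # -> "track_001.mp3"
```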

Emotion Recognition of Speech Using the Wavelet Transform (웨이블렛 변환을 이용한 음성에서의 감정인식)

  • Go, Hyoun-Joo;Lee, Dae-Jong;Chun, Myung-Geun
    • Proceedings of the Korea Information Processing Society Conference / 2002.04b / pp.817-820 / 2002
  • The ultimate goal of human-machine interfaces is to make interaction between humans and machines as natural as conversation between people. This paper proposes an algorithm that recognizes six basic emotions embedded in human speech. Using a wavelet filter bank with excellent frequency resolution, the speech signal is divided into several subbands, features are extracted from each band to recognize the emotion, and these per-band results are finally fused in a multiple-decision structure that produces a single recognition value. Applied to real speech data, the method achieved a recognition rate above 90%, higher than existing methods.
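
As a rough illustration of the subband idea, the sketch below decomposes a speech signal with a discrete wavelet transform (PyWavelets standing in for the paper's wavelet filter bank) and computes simple per-band statistics; the wavelet, decomposition level, and features are assumptions, and the final multi-classifier fusion is only indicated in a comment.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = rng.normal(size=16000)  # placeholder for one second of 16 kHz speech

# Split the signal into subbands (one approximation band plus detail bands per level).
subbands = pywt.wavedec(signal, wavelet="db4", level=5)

# Per-subband features: log energy and standard deviation.
features = np.array([[np.log(np.sum(band ** 2) + 1e-12), band.std()] for band in subbands])
print(features.shape)  # (number of subbands, 2)

# In the paper, a classifier per subband produces a decision and the decisions are
# fused into a single emotion label; averaging per-band class scores is one option.
```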


Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has grown, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results mainly by using acted speech recorded by skilled actors in controlled environments for various scenarios. There is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech; for this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to perform emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using VGG (Visual Geometry Group) networks after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions, i.e., joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% and 73.0% for adults and young people, respectively, using time-frequency 2-dimensional spectrograms. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
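
A minimal sketch of the 1-D-signal-to-spectrogram-to-VGG pipeline described above, assuming librosa for the log-mel spectrogram and a torchvision VGG-16 with its last layer replaced for 7 emotion classes; this is not the authors' exact network, preprocessing, or training setup.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16

waveform = np.random.randn(16000 * 3).astype(np.float32)  # placeholder: 3 s clip at 16 kHz

# 1-D audio signal -> 2-D time-frequency image (log-mel spectrogram).
mel = librosa.feature.melspectrogram(y=waveform, sr=16000, n_mels=128)
log_mel = librosa.power_to_db(mel)

# Replicate to 3 channels and resize to the VGG input resolution.
image = torch.tensor(log_mel).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)
image = nn.functional.interpolate(image, size=(224, 224), mode="bilinear", align_corners=False)

model = vgg16(weights=None)
model.classifier[-1] = nn.Linear(4096, 7)  # joy, love, anger, fear, sadness, surprise, neutral
logits = model(image)
print(logits.shape)  # torch.Size([1, 7])
```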

A Study on Social Capital Types of Juvenile Delinquents (비행청소년의 사회적 자본 인식 유형에 관한 연구)

  • Shin, Geun Hwa
    • 한국사회정책 / v.25 no.2 / pp.333-366 / 2018
  • The purpose of this study is to identify the types of social capital perceived by delinquent adolescents using the Q methodology and to find ways to build social capital. As a result, 33 statements about social capital were extracted from 16 juvenile delinquents and five types were derived: Type I was named 'Friend Supportive Type', Type II 'Family Friendly Type', Type III 'Ability Type', Type IV 'Social Justice Type', and Type V 'School Trust Type'. First, programs that use good friendships are needed to improve social capital in peer relations. Second, intervention is needed not only in the direct relationship between parents and children but also in the indirect ties parents form through their children's friends and those friends' parents, together with ongoing communication and attention. Third, intervention is needed to support adolescents' continuous emotional self-regulation in daily life. Fourth, the irrationality of the social system needs to be addressed and supervision over harmful environments strengthened. Finally, the environment should be improved to raise the level of school norms and confidence in school.

Generating Extreme Close-up Shot Dataset Based On ROI Detection For Classifying Shots Using Artificial Neural Network (인공신경망을 이용한 샷 사이즈 분류를 위한 ROI 탐지 기반의 익스트림 클로즈업 샷 데이터 셋 생성)

  • Kang, Dongwann;Lim, Yang-mi
    • Journal of Broadcast Engineering / v.24 no.6 / pp.983-991 / 2019
  • This study aims to analyze movies, which contain various stories, according to the size of their shots. To achieve this, a dataset needs to be classified by shot size: extreme close-up shots, close-up shots, medium shots, full shots, and long shots. However, because typical video storytelling consists mainly of close-up, medium, full, and long shots, it is not easy to construct an adequate dataset of extreme close-up shots. To solve this, we propose an image-cropping method based on region-of-interest (ROI) detection. In this paper, we use face detection and saliency detection to estimate the ROI. By cropping the ROI of close-up images, we generate extreme close-up images. The dataset enriched by the proposed method is then used to build a model for classifying shots by size. The study can help analyze the emotional changes of characters in video stories and predict how the composition of a story changes over time. If AI is used more actively in entertainment fields in the future, it is expected to influence the automatic adjustment and creation of characters, dialogue, and image editing.
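
A minimal sketch of the ROI-based cropping idea, assuming OpenCV (with opencv-contrib for the saliency module); the detectors, fallback logic, and crop margin are illustrative assumptions rather than the parameters used in the paper.

```python
import cv2
import numpy as np

def extreme_closeup_crop(image: np.ndarray) -> np.ndarray:
    """Crop a close-up frame down to an ROI so it reads as an extreme close-up."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    else:
        # Fall back to the most salient region when no face is detected.
        saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
        _, saliency_map = saliency.computeSaliency(image)
        ys, xs = np.where(saliency_map > saliency_map.mean())
        x, y, w, h = xs.min(), ys.min(), xs.max() - xs.min(), ys.max() - ys.min()

    # Tighten the box (keep roughly the central 60%) to push the framing toward an extreme close-up.
    cx, cy = x + w // 2, y + h // 2
    half_w, half_h = max(int(w * 0.3), 1), max(int(h * 0.3), 1)
    return image[max(cy - half_h, 0):cy + half_h, max(cx - half_w, 0):cx + half_w]
```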