• Title/Summary/Keyword: 감정인


A study on non-face-to-face art appreciation system using emotion key (감정 키를 활용한 비대면 미술감상 시스템 연구)

  • Kim, Hyeong-Gyun
    • Journal of Digital Convergence, v.20 no.2, pp.57-62, 2022
  • This study was conducted to let learners listen to explanations of artworks in a non-face-to-face class and to confirm the learners' feelings as an outcome of the class. In the proposed system, the learner listens to the explanation of an artwork, inputs his or her emotions with a dedicated key, and the result is expressed in music. To this end, the direction of a non-face-to-face art appreciation class model using the emotion key was set, and a system for non-face-to-face art appreciation was built on that basis. The learner uses the 'smart device with emotion keys' proposed in this study to listen to the explanation of the artwork and to input an emotion for each question presented. Through the proposed system, learners can express their emotional state in online art classes, and instructors receive the class participation results and can use them in various ways for educational analysis.
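As a rough illustration of the entry above, the following minimal sketch shows how dedicated emotion-key presses might be logged per question and mapped to a musical note. The key-to-emotion mapping, note names, and function are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: map emotion-key presses to notes and log them
# for later class-participation analysis. The key-to-emotion mapping
# and note choices are illustrative assumptions, not from the paper.
from datetime import datetime

EMOTION_KEYS = {"1": "joy", "2": "sadness", "3": "anger", "4": "calm"}
EMOTION_NOTES = {"joy": "C5", "sadness": "A3", "anger": "E4", "calm": "G4"}

log = []

def press_emotion_key(key: str, question_id: int) -> str:
    """Record the learner's emotion for a question and return the note to play."""
    emotion = EMOTION_KEYS.get(key)
    if emotion is None:
        raise ValueError(f"unknown emotion key: {key}")
    log.append({"time": datetime.now(), "question": question_id, "emotion": emotion})
    return EMOTION_NOTES[emotion]

print(press_emotion_key("1", question_id=3))  # -> "C5"
```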

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services, v.23 no.2, pp.71-77, 2022
  • In this paper, a database is collected for extending a speech synthesis model into one that synthesizes speech according to emotion and generates facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak sentences in Korean. The sentences are divided into four emotions: happiness, sadness, anger, and neutrality, and each actor records about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap, and each recording contains an expression matching the corresponding emotion. Since building a high-quality database is important for the performance of future research, the database is assessed for emotional category, intensity, and genuineness. To measure accuracy according to data modality, the database is divided into audio-video data, audio data, and video data.
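A minimal sketch of how such a corpus might be indexed by modality, emotion, and speaker for later experiments. The directory layout, file-naming scheme, and metadata fields are assumptions, not the paper's actual structure.

```python
# Hypothetical layout for an emotional audio-visual corpus like the one
# described above; directory names and metadata fields are assumptions.
from pathlib import Path
import csv

EMOTIONS = ["happiness", "sadness", "anger", "neutrality"]
MODALITIES = ["audio_video", "audio", "video"]

def index_corpus(root: str, out_csv: str) -> None:
    """Walk root/<modality>/<emotion>/ and write one metadata row per clip."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "modality", "emotion", "speaker"])
        for modality in MODALITIES:
            for emotion in EMOTIONS:
                for clip in sorted(Path(root, modality, emotion).glob("*.*")):
                    speaker = clip.stem.split("_")[0]  # assumes "F01_0001.mp4"
                    writer.writerow([clip, modality, emotion, speaker])

index_corpus("corpus", "corpus_index.csv")
```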

Development of Emotion Recognition Model Using Audio-video Feature Extraction Multimodal Model (음성-영상 특징 추출 멀티모달 모델을 이용한 감정 인식 모델 개발)

  • Jong-Gu Kim;Jang-Woo Kwon
    • Journal of the Institute of Convergence Signal Processing, v.24 no.4, pp.221-228, 2023
  • Physical and mental changes caused by emotions can affect various behaviors, such as driving or learning. Recognizing these emotions is therefore an important task with uses in many industries, such as detecting and controlling dangerous emotional states while driving. In this paper, we address the emotion recognition task by implementing a multimodal model that recognizes emotions using both audio and video data from different domains. After extracting the audio track from the RAVDESS video data, audio features are extracted with a 2D-CNN model, and video features are extracted with a SlowFast feature extractor. The audio and video features, which come from different domains, are then combined into a single feature vector containing all the information, and emotion recognition is performed on the combined features. Finally, we compare conventional late-fusion approaches, which combine or vote on the outputs of separate models, against the proposed method of unifying the domains through feature extraction, combining the features, and classifying them with a single classifier.
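A minimal PyTorch sketch of the feature-level fusion the abstract describes: precomputed audio features (e.g., from a 2D-CNN) and video features (e.g., from SlowFast) are concatenated and classified jointly. All dimensions, layer sizes, and the class count are assumptions.

```python
# Feature-fusion sketch, assuming precomputed audio and video features.
# Dimensions and the 8 emotion classes (as in RAVDESS) are assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, audio_dim=512, video_dim=2304, num_classes=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(audio_dim + video_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, audio_feat, video_feat):
        # Concatenate the two modalities into one joint feature vector.
        fused = torch.cat([audio_feat, video_feat], dim=1)
        return self.head(fused)

model = FusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 2304))
print(logits.shape)  # torch.Size([4, 8])
```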

Film Editing as Emotion Communication (영화편집론, 감정 커뮤니케이션)

  • Kim, Jong-Guk
    • The Journal of the Korea Contents Association, v.14 no.11, pp.584-591, 2014
  • This article discusses emotion and communication in film editing theory. Through editing as a form of communication, audiences respond to new story facts or to a new shot and its details, and editing draws on their ability to make inferences. With the famous Mozhukhin experiment, Lev Kuleshov intended to show that montage draws the spectator's inferences about emotion and association beyond the content of the individual shots. Continuity editing techniques such as the 180-degree rule, eyeline and action matching, the 30-degree rule, and continuity of sound, light, and color enhance emotion. Point-of-view editing is the main device for maximizing a film's emotion; serving the purpose of film narration, it is a powerful means of practicing emotion communication and persuading the audience.

A Study on the Database Improvement for the Real Estate Appraisal Information System (부동산 감정평가정보체계 DB구축 개선방안에 관한 연구)

  • Lee Jae-Woo
    • The Journal of the Korea Contents Association, v.6 no.8, pp.94-104, 2006
  • Building the Public Appraisal Information System (PAIS) has been a critical issue because the real estate valuation process depends on diverse information and analysis. However, there has been little research in this area. As a result, there is no comprehensive plan for the PAIS constructed by the MOCT (Ministry of Construction and Transportation), and its database quality suffers from various problems. This paper therefore investigates the database of the public appraisal information system and suggests the following development strategies: preparing a comprehensive plan for data collection, establishing data management standards, collecting additional attribute and spatial data, and linking the PAIS with other information systems.


A Case Study on Emotional Expression Technology of Interactive Character (인터랙티브 캐릭터의 감정표현 기술 사례분석)

  • Ahn, Seong-Hye;Song, Su-Mi;Sung, Min-Young;Paek, Sun-Wook
    • Proceedings of the Korea Contents Association Conference, 2009.05a, pp.197-203, 2009
  • In the digital communication environment, users need communication in which interaction is possible. A user-centered emotional display tool is needed, and with it an interactive character capable of varied emotional expression. This is fundamental research toward developing an interactive character capable of individualized emotional expression; in other words, its purpose is to show the direction of technology development for expressing emotions. To that end, we analyze through case studies how far the technology for expressing emotions, mainly through facial expressions, has been realized, and from this we suggest the development direction of the interactive character as an emotional display tool.


Automated Emotional Tagging of Lifelog Data with Wearable Sensors (웨어러블 센서를 이용한 라이프로그 데이터 자동 감정 태깅)

  • Park, Kyung-Wha;Kim, Byoung-Hee;Kim, Eun-Sol;Jo, Hwi-Yeol;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices, v.23 no.6, pp.386-391, 2017
  • In this paper, we propose a system that automatically assigns experience-based emotion tags to wearable sensor data collected in everyday life. Four types of emotion tags are defined, considering both the user's own emotions and the information the user sees and hears. Based on data collected from multiple wearable sensors, we train a machine-learning-based tagging system that combines known auxiliary tools from existing affective computing research and assigns emotion tags. To show the usefulness of this multimodality-based emotion tagging system, quantitative and qualitative comparisons with an existing single-modality emotion recognition approach are performed.
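A minimal sketch of multimodal tagging in the spirit of the abstract: per-window features from several wearable sensor streams are concatenated and a classifier assigns one of four emotion tags. The tag names, feature dimensions, and classifier choice are assumptions, not the paper's setup.

```python
# Hypothetical multimodal tagging sketch: concatenate per-window features
# from several wearable sensor streams and train a classifier to assign
# one of four emotion tags. Shapes, tag names, and labels are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

TAGS = ["positive", "negative", "aroused", "neutral"]  # assumed 4 tags

rng = np.random.default_rng(0)
n_windows = 200
hr_feats = rng.normal(size=(n_windows, 4))    # e.g. heart-rate statistics
eda_feats = rng.normal(size=(n_windows, 6))   # e.g. skin-conductance stats
acc_feats = rng.normal(size=(n_windows, 12))  # e.g. accelerometer stats

X = np.hstack([hr_feats, eda_feats, acc_feats])  # multimodal feature vector
y = rng.integers(0, len(TAGS), size=n_windows)   # placeholder labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(TAGS[clf.predict(X[:1])[0]])
```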

GMM-based Emotion Recognition Using Speech Signal (음성 신호를 사용한 GMM기반의 감정 인식)

  • 서정태;김원구;강면구
    • The Journal of the Acoustical Society of Korea, v.23 no.3, pp.235-241, 2004
  • This paper studies pattern recognition algorithms and feature parameters for speaker- and context-independent emotion recognition. The KNN algorithm was used as the pattern matching technique for comparison, and VQ and GMM were used for speaker- and context-independent recognition. The speech features used are pitch, energy, MFCCs, and their first and second derivatives. Experimental results showed that the emotion recognizer using MFCCs and their derivatives performed better than the one using the pitch and energy parameters. Among the pattern recognition algorithms, the GMM-based emotion recognizer was superior to the KNN- and VQ-based recognizers.
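A minimal sketch of GMM-based emotion classification in this style: one Gaussian mixture is fit per emotion on MFCC-style frame features, and a test utterance is assigned to the emotion whose model yields the highest total log-likelihood. The feature dimensionality, mixture size, and emotion set are assumptions.

```python
# GMM-based emotion classification sketch: fit one Gaussian mixture per
# emotion and pick the emotion with the highest summed log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

EMOTIONS = ["neutral", "happy", "sad", "angry"]
rng = np.random.default_rng(0)

# Placeholder training data: per-emotion matrices of 39-dim frame features
# (13 MFCCs plus first and second derivatives, as in the paper).
train = {e: rng.normal(loc=i, size=(500, 39)) for i, e in enumerate(EMOTIONS)}

models = {e: GaussianMixture(n_components=8, random_state=0).fit(X)
          for e, X in train.items()}

def classify(frames: np.ndarray) -> str:
    """Sum frame log-likelihoods under each emotion's GMM; return the argmax."""
    scores = {e: m.score_samples(frames).sum() for e, m in models.items()}
    return max(scores, key=scores.get)

print(classify(rng.normal(loc=2, size=(100, 39))))  # likely "sad"
```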

Emotion Prediction of Document using Paragraph Analysis (문단 분석을 통한 문서 내의 감정 예측)

  • Kim, Jinsu
    • Journal of Digital Convergence, v.12 no.12, pp.249-255, 2014
  • Recently, information has been actively created and shared through SNS (Social Network Services) such as Twitter and Facebook. It is necessary to extract knowledge from this aggregated information, and data mining is one knowledge-based approach. In particular, emotion analysis is a recent subdiscipline of text classification concerned with large-scale collective intelligence about opinions, policies, propensities, and sentiment. In this paper, we propose an emotion prediction method that extracts significant keywords and related keywords from SNS paragraphs and then predicts the emotion using these extracted emotion features.
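A minimal sketch of keyword-based emotion prediction in the spirit of the abstract: emotion-bearing keywords are extracted from a paragraph and each emotion is scored by lexicon hits. The lexicon and scoring rule are illustrative assumptions, not the paper's method.

```python
# Keyword-lexicon emotion prediction sketch; the lexicon and the simple
# hit-count scoring are illustrative assumptions, not the paper's method.
from collections import Counter
import re

LEXICON = {
    "joy": {"happy", "great", "love", "wonderful"},
    "anger": {"angry", "hate", "terrible", "furious"},
    "sadness": {"sad", "lonely", "cry", "miss"},
}

def predict_emotion(paragraph: str) -> str:
    tokens = re.findall(r"[a-z']+", paragraph.lower())
    scores = Counter()
    for emotion, words in LEXICON.items():
        scores[emotion] = sum(tok in words for tok in tokens)
    return scores.most_common(1)[0][0]

print(predict_emotion("I was so happy to see them, what a wonderful day"))
# -> "joy"
```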

Emotional Expression System Based on Dynamic Emotion Space (동적 감성 공간에 기반한 감성 표현 시스템)

  • Sim Kwee-Bo;Byun Kwang-Sub;Park Chang-Hyun
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.1, pp.18-23, 2005
  • Human emotion is difficult to define and classify. These vague emotions appear not as a single emotion but as a combination of various emotions, among which one prominent emotion is expressed. This paper proposes an emotional expression algorithm using a dynamic emotion space, which produces facial expressions resembling these vague human emotions. While existing avatars express several predefined emotions drawn from a database, our system can produce an unlimited variety of facial expressions by expressing emotion based on a dynamically changing emotion space. To see whether the system can practically produce complex and varied human expressions, we implement it, run experiments, and verify the efficacy of the emotional expression system based on the dynamic emotion space.
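A minimal sketch of expression blending in a continuous emotion space, loosely following the idea above: the current state is a point whose distances to basis emotions weight a blend of facial-expression parameters, so expressions vary continuously instead of being picked from a fixed database. The 2-D valence-arousal coordinates and parameter values are assumptions.

```python
# Continuous emotion-space blending sketch: weight each basis emotion by
# inverse distance to the current state and blend facial parameters.
# Coordinates and parameter values are illustrative assumptions.
import numpy as np

# Basis emotions placed in an assumed 2-D (valence, arousal) space.
BASIS = {"joy": np.array([0.8, 0.6]), "anger": np.array([-0.7, 0.7]),
         "sadness": np.array([-0.6, -0.5]), "calm": np.array([0.5, -0.6])}

# Per-emotion facial parameters, e.g. (mouth_curve, brow_raise, eye_open).
FACE = {"joy": np.array([0.9, 0.3, 0.8]), "anger": np.array([-0.5, -0.8, 0.9]),
        "sadness": np.array([-0.7, 0.6, 0.4]), "calm": np.array([0.1, 0.0, 0.6])}

def blend_expression(state: np.ndarray) -> np.ndarray:
    """Blend facial parameters, weighting basis emotions by inverse distance."""
    w = {e: 1.0 / (np.linalg.norm(state - p) + 1e-6) for e, p in BASIS.items()}
    total = sum(w.values())
    return sum((w[e] / total) * FACE[e] for e in BASIS)

print(blend_expression(np.array([0.6, 0.1])))  # mostly joy, a little calm
```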