• Title/Summary/Keyword: 감정 기록 (emotion recording)

Emotion Recognition Method using Physiological Signals and Gesture (생체 신호와 몸짓을 이용한 감성인식 방법)

  • Kim, Ho-Deok;Yang, Hyeon-Chang;Park, Chang-Hyeon;Sim, Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.25-28 / 2007
  • Electroencephalography (EEG) has long been used in psychology to measure and record human brain activity. As science has advanced, the basic regions of the human brain that regulate emotion have gradually been identified, so we use EEG to measure the brain areas that control human emotion. In this paper, emotions are recognized using EEG signals and gestures. In particular, whereas previous studies recognized emotion from either physiological signals or gestures alone, this paper reports experiments that recognize a subject's emotion using EEG signals and gestures together. The results show a higher recognition rate than experiments using only physiological signals or only gestures. Features of the physiological signals and gestures were selected with Interactive Feature Selection (IFS), which applies the concept of reinforcement learning.

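The abstract describes IFS only at a high level. The sketch below shows one way a reinforcement-learning-flavored feature selection loop could look; the epsilon-greedy policy, the cross-validated SVM accuracy used as the reward, and the incremental value update are all illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of a reinforcement-learning-flavored feature selection
# loop in the spirit of IFS; policy, reward, and update rule are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ifs_sketch(X, y, n_select=5, epsilon=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    value = np.zeros(n_features)   # running value estimate per feature
    counts = np.zeros(n_features)
    selected = []
    for _ in range(n_select):
        remaining = [f for f in range(n_features) if f not in selected]
        if rng.random() < epsilon:                 # explore a random feature
            f = int(rng.choice(remaining))
        else:                                      # exploit the best estimate
            f = max(remaining, key=lambda i: value[i])
        # Reward: cross-validated accuracy of the candidate feature subset.
        reward = cross_val_score(SVC(), X[:, selected + [f]], y, cv=3).mean()
        counts[f] += 1
        value[f] += (reward - value[f]) / counts[f]  # incremental mean update
        selected.append(f)
    return selected
```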

BERT and LLM-based emotion analysis-empathy diary service for care leavers (자립준비청년을 위한 BERT 및 LLM 기반 감정 분석-공감 일기 서비스)

  • Hyun-soo Kim;Gyeong-min Lee;Hwan-Il Jung;Seon-Hye Jang;Tae-soo Jun
    • Annual Conference of KIPS / 2024.10a / pp.1098-1099 / 2024
  • This paper proposes an emotion-analysis diary service for care leavers (youths aging out of protective care) based on BERT and a large language model (LLM). Users record and analyze their emotions so they can manage their psychological state over time, and the service provides emotional support through empathy messages and a sharing function.
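
A rough sketch of the flow this service describes: a BERT-family classifier labels the diary entry's emotion, and the label seeds a prompt for an LLM that writes the empathy message. The checkpoint and prompt wording below are placeholder assumptions; the paper's actual models are not specified in the abstract.

```python
# Hypothetical diary-entry flow: classify emotion, then build an LLM prompt.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

def analyze_entry(text: str) -> dict:
    emotion = classifier(text)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    prompt = (f"A diary entry reads as {emotion['label'].lower()}. "
              f"Write one short, supportive empathy message.\n\nEntry: {text}")
    return {"emotion": emotion, "prompt": prompt}  # prompt goes to an LLM API
```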

Research on Classification of Human Emotions Using EEG Signal (뇌파신호를 이용한 감정분류 연구)

  • Zubair, Muhammad;Kim, Jinsul;Yoon, Changwoo
    • Journal of Digital Contents Society / v.19 no.4 / pp.821-827 / 2018
  • Affective computing has gained increasing interest in recent years with the development of potential applications in human-computer interaction (HCI) and healthcare. Although substantial research has been done on human emotion recognition, physiological signals have received less attention than speech and facial expressions. In this paper, electroencephalogram (EEG) signals from different brain regions were investigated using modified wavelet energy features. The mRMR algorithm was applied to minimize redundancy and maximize relevancy among features. EEG recordings from the publicly available DEAP database were used to classify four classes of emotions with a multi-class Support Vector Machine. The proposed approach performs significantly better than existing algorithms.
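
A minimal sketch of the mRMR-plus-SVM pipeline summarized above, using mutual information for both relevance and redundancy. The wavelet-energy extraction and DEAP loading are omitted, and the difference-form mRMR criterion is an assumption about which variant was used.

```python
# mRMR-style greedy selection (relevance minus redundancy), then an SVM.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.svm import SVC

def mrmr(X, y, k):
    relevance = mutual_info_classif(X, y)        # I(feature; emotion label)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for f in range(X.shape[1]):
            if f in selected:
                continue
            # Redundancy: mean MI between the candidate and chosen features.
            red = np.mean([mutual_info_regression(X[:, [s]], X[:, f])[0]
                           for s in selected])
            if relevance[f] - red > best_score:  # maximize relevance - redundancy
                best, best_score = f, relevance[f] - red
        selected.append(best)
    return selected

# clf = SVC(decision_function_shape="ovr").fit(X[:, mrmr(X, y, 10)], y)
```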

The History of Writing (문자의 역사)

  • Ra, Gyeong-Jun
    • 프린팅코리아 / s.20 / pp.155-159 / 2004
  • Communication is one of the basic human needs. Our ancestors turned the various materials available around them into tools for expressing and conveying meaning precisely to satisfy this fundamental desire. As humanity developed, a deeper need arose to convey emotions and thoughts more easily and accurately, and diverse means of expression were explored accordingly. Beyond that, the desire emerged to record what had been communicated and preserve it for a long time; the history of the invention of Korean printing, which we will trace together, ultimately follows this same thread.

Implementation of a KoBERT-based Emotion Analysis Communication Platform for Parents of Children with Disabilities (장애아 부모를 위한 KoBERT 기반 감정분석 소통 플랫폼 구현)

  • Jae-Hyung Ha;Ji-Hye Huh;Won-Jib Kim;Jung-Hun Lee;Woo-Jung Park
    • Annual Conference of KIPS / 2023.11a / pp.1014-1015 / 2023
  • Many parents of children with disabilities feel considerable psychological pressure from the stress of child-rearing and worry about the future. Yet relative to the number of people with disabilities, which grows every year, there is a shortage of programs addressing the psychological and mental health of these parents and their families [1]. To address this, this paper proposes an emotion-analysis communication platform. The proposed platform fine-tunes a KoBERT model to analyze the emotions in users' diary entries, helping parents and families of children with disabilities communicate. For performance evaluation, the KoBERT-based emotion analysis, the platform's core function, is compared against LSTM, Bi-LSTM, and GRU models, which are widely used for text classification. In the evaluation, KoBERT's accuracy was on average 31.4% higher than that of the other classifiers, and it also recorded comparatively high scores on the other metrics.
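
A hedged sketch of the fine-tuning step described above. Public KoBERT checkpoints typically require a dedicated tokenizer package, so the runnable stand-in below uses the klue/bert-base Korean encoder; the six-way label set and all hyperparameters are assumptions, not the paper's setup.

```python
# Fine-tuning a Korean BERT encoder for diary emotion classification.
# "klue/bert-base" stands in for KoBERT; labels and settings are assumed.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "klue/bert-base"  # stand-in Korean encoder, not actual KoBERT
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=6)

def encode(batch):
    # Tokenize diary sentences to fixed-length input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True, max_length=128,
                     padding="max_length")

# train_ds / eval_ds: Hugging Face datasets with "text" and "label" columns.
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments("kobert-emotion", num_train_epochs=3,
#                            per_device_train_batch_size=16),
#     train_dataset=train_ds.map(encode, batched=True),
#     eval_dataset=eval_ds.map(encode, batched=True),
# )
# trainer.train()
```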

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique for analyzing a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has been increasing, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing studies have attained impressive results by using acted speech from skilled actors recorded in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using VGG (Visual Geometry Group) networks after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, which covers 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% and 73.0% for adults and young people, respectively, using time-frequency 2-dimensional spectrograms. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying spontaneous emotional expression.
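
The 1-D-signal-to-2-D-spectrogram conversion can be sketched as below. The mel parameters, [0, 1] normalization, 224x224 resizing, and 7-way head are assumptions; the abstract does not give the paper's exact preprocessing.

```python
# Audio -> log-mel spectrogram image -> VGG classifier (illustrative only).
import librosa
import numpy as np
import torch
from torchvision.models import vgg16

def audio_to_image(path, sr=16000, n_mels=128):
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    db = librosa.power_to_db(mel, ref=np.max)            # log-scaled energy
    db = (db - db.min()) / (db.max() - db.min() + 1e-8)  # normalize to [0, 1]
    img = torch.tensor(db, dtype=torch.float32)[None, None]
    img = torch.nn.functional.interpolate(img, size=(224, 224),
                                          mode="bilinear")[0]
    return img.repeat(3, 1, 1)  # tile to 3 channels for the VGG input stem

model = vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 7)  # 7 emotions: joy .. neutral
# logits = model(audio_to_image("clip.wav")[None])  # -> shape (1, 7)
```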

Emotion Recognition Method using Physiological Signals and Gestures (생체 신호와 몸짓을 이용한 감정인식 방법)

  • Kim, Ho-Duck;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.3 / pp.322-327 / 2007
  • Researchers in psychology have long used electroencephalography (EEG) to record the activity of the human brain. As technology has developed, the neural basis of the functional areas involved in emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in human communication, and recognizing them matters because they are a useful communication medium between humans and computers; gesture recognition is typically studied with computer vision methods. Most existing studies recognize emotion using either physiological signals or gestures alone. In this paper, we use physiological signals and gestures together for human emotion recognition, and we select driver emotion as the specific target. The experimental results show that using both physiological signals and gestures yields higher recognition rates than using either alone. Features of both the physiological signals and the gestures are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.

Implementation of Pet Management System including Deep Learning-based Breed and Emotion Recognition SNS (딥러닝 기반 품종 및 감정인식 SNS를 포함하는 애완동물 관리 시스템 구현)

  • Inhwan Jung;Kitae Hwang;Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.45-50 / 2023
  • As pet ownership has steadily increased in recent years, the need for an effective pet management system has grown. In this study, we propose a pet management system with a deep learning-based emotion recognition SNS. The system detects emotions from pet facial expressions using a convolutional neural network (CNN) and shares them with a user community through the SNS. Through the SNS, pet owners can connect with other users, share their experiences, and receive support and advice on pet management. The system also provides comprehensive pet management, including tracking pet health and sending vaccination and appointment reminders. Furthermore, we added a function to manage and share pet walking records so that owners can share their walking experiences with other users. This study demonstrates the potential of AI technology to improve pet management systems and enhance the well-being of pets and their owners.
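
As an illustration of the CNN component, a minimal classifier over pet face crops might look as follows; the architecture and the four-class label set are assumptions, since the abstract does not describe the paper's actual network.

```python
# Illustrative CNN for classifying emotion from a pet face crop.
import torch
import torch.nn as nn

class PetEmotionCNN(nn.Module):
    def __init__(self, n_emotions=4):  # e.g. happy / relaxed / anxious / angry
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))       # -> (batch, 128, 1, 1)
        self.head = nn.Linear(128, n_emotions)

    def forward(self, x):                  # x: (batch, 3, H, W) face crop
        return self.head(self.features(x).flatten(1))

# logits = PetEmotionCNN()(torch.randn(1, 3, 128, 128))  # -> shape (1, 4)
```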

Planning and Design of an Augmented Reality-based Daily Diary Application Using a Tree Metaphor (나무를 메타포로 하는 증강현실 기반 일상다이어리 어플리케이션 기획 및 설계)

  • Kim, Yoo-bin;Roh, Jong-hee;Lee, Ye-Won;Lee, Hyo-Jeong;Park, Jung Kyu;Park, Su e
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.201-204 / 2017
  • People lead busy everyday lives filled with studies, part-time jobs, and job hunting. Within this busy routine, they try to find time for themselves and express their emotions through diverse social network services (SNS). We built a service that lets users plant a virtual tree in the places they visit or pass by every day, keep notes on the virtual tree, and look back through past records. It is an augmented reality-based mobile application service that can be used like a diary. In this project we chose the tree as a metaphor and tried to express the passage of time in a specific place. Since our memories are part of our daily lives, we emphasized the importance of place.

Analyzing the Characteristics of Evidence Use and Decision-making Difficulties of Gifted Elementary Science Students in SSI Discussions (SSI 수업에서 초등 과학 영재의 추론 유형별 근거 활용의 특징과 의사결정의 어려움 분석)

  • Jang, Hyoungwoon;Jang, Shinho
    • Journal of Korean Elementary Science Education / v.42 no.3 / pp.421-433 / 2023
  • This study examined the reasoning of gifted elementary science students in a socioscientific issues (SSI) classroom discussion on COVID-19-related trash disposal challenges. It aimed to understand the characteristics of evidence use and the decision-making difficulties in each type of SSI-related reasoning. To this end, the transcripts of 17 gifted elementary science students discussing SSIs in a classroom were analyzed within the framework of informal reasoning. The analysis framework was categorized into three types according to the primary influence involved in reasoning: rational, emotional, and intuitive. The analysis showed that students exhibited four categories of evidence use in SSI reasoning. First, in the rational reasoning category, students treated and recorded scientific knowledge, numbers, and statistics as objective evidence. However, students who had difficulty investigating such scientific data were less likely to factor it into subsequent decisions. Second, in the emotional reasoning category, students' solutions varied considerably depending on the perspective they empathized with and reasoned from; differences in their views led to conflicting perspectives on SSIs and consequent disagreement. Third, in the intuitive reasoning category, students disagreed with the opinions of their peers but did not explain their positions precisely. Intuitive reasoning also created challenges as students avoided problem-solving in the discussion and did not critically examine their own opinions. Fourth, a mixed category of reasoning emerged: intuition combined with rationality or emotion. When combined with emotion, intuitive reasoning was characterized by deep empathy arising from personal experience; when combined with rationality, the result was only an impulsive reaction. These findings indicate that research on student understanding and teacher knowledge of SSIs discussed in classrooms should consider these difficulties in informal reasoning and decision-making.