• Title/Summary/Keyword: Artificial Emotion Model

Development of Elementary School AI Education Contents using Entry Text Model Learning (엔트리 텍스트 모델 학습을 활용한 초등 인공지능 교육 내용 개발)

  • Kim, Byungjo;Kim, Hyenbae
    • Journal of The Korean Association of Information Education
    • /
    • v.26 no.1
    • /
    • pp.65-73
    • /
    • 2022
  • In this study, educational content for elementary school artificial intelligence education was developed using Entry text model learning and applied to actual classes. Based on the elementary and secondary artificial intelligence content framework, the achievement standards of practical software education and artificial intelligence education were reconstructed. Among the data types available for machine learning (text, images, and sounds), "production of emotion recognition programs using text model learning" was selected as the educational content, since it is easy for elementary school students to understand and keeps data preparation time short. Entry artificial intelligence was selected as the education platform, and the developed contents were applied to actual elementary school classes. In the applied classes, students showed positive responses and interest in the Entry AI lessons. Quantitative research on the effectiveness of such classes for elementary school students is suggested as a follow-up study.
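
As an editorial illustration of what such an emotion recognition program amounts to outside the block-based Entry environment, the sketch below trains a tiny supervised text classifier; the sentences, labels, and scikit-learn pipeline are invented for the example and are not from the paper.

```python
# A minimal sketch of the kind of text-emotion classifier Entry's
# text model learning trains; data and labels are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training sentences labeled by emotion.
texts = ["I got a present today", "My friend moved away",
         "The thunder was so loud", "We won the soccer game"]
labels = ["joy", "sadness", "fear", "joy"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["I can't wait for the school trip"]))  # e.g. ['joy']
```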

Convolutional Neural Network Model Using Data Augmentation for Emotion AI-based Recommendation Systems

  • Ho-yeon Park;Kyoung-jae Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.57-66
    • /
    • 2023
  • In this study, we propose a novel research framework for recommendation systems that can estimate the user's emotional state and reflect it in the recommendation process by applying deep learning techniques and emotion AI (artificial intelligence). To this end, we build an emotion classification model that classifies each of seven emotions (anger, disgust, fear, happiness, sadness, surprise, and neutral) and propose a model that can reflect the classification result in the recommendation process. However, in typical emotion classification data the label distribution is highly imbalanced, so generalized classification results can be hard to obtain. Because the number of samples for emotions such as disgust is often insufficient in emotion image data, we correct for this through augmentation. Finally, we propose a method to incorporate the augmentation-based emotion prediction model into recommendation systems.
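
The abstract above does not spell out the architecture; as a rough PyTorch sketch of the described pattern (augmenting face images, then training a CNN over the seven labels), the following is one plausible shape, with all transforms and layer sizes assumed.

```python
# Sketch: augmentation pipeline + small CNN for 7-way emotion
# classification; all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import transforms

# Applied per sample in the training Dataset to expand rare classes.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),          # mild rotation for face images
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

class EmotionCNN(nn.Module):
    def __init__(self, num_classes=7):  # angry, disgust, fear, happy, sad, surprise, neutral
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)  # for 48x48 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = EmotionCNN()(torch.randn(8, 1, 48, 48))  # batch of 48x48 grayscale faces
```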

Emotion Engine for Digital Character (디지털 캐릭터를 위한 감성엔진)

  • Kim Ji-Hwan;Cho Sung-Hyun;Choi Jong-Hak;Yang Jung-Jin
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.06b
    • /
    • pp.208-210
    • /
    • 2006
  • Recently, characters have come to play a central role in virtual-reality media such as online games, films, and animation, creating demand for more active, human-like characters. Among these demands, this paper focuses on emotion-based characters and presents an Emotion Engine architecture that reflects the traits of each character and derives emotions from the interactions between characters, based on the Artificial Emotion Engine Model of Emotion AI and the OCC Model.
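
The engine's internals are not given in this short abstract; the sketch below shows one conventional reading of an OCC-style appraisal step, in which an event's desirability relative to a character's goals produces an emotion whose intensity is scaled by personality. All names and weights are invented.

```python
# Sketch of an OCC-style appraisal step: an event is appraised for its
# desirability with respect to a character's goals, and a personality
# trait scales the resulting emotion intensity. Names/weights invented.
from dataclasses import dataclass, field

@dataclass
class Character:
    goals: dict              # goal name -> importance in [0, 1]
    reactivity: float = 1.0  # personality trait scaling emotion intensity
    emotions: dict = field(default_factory=dict)

    def appraise(self, event_effects: dict):
        """event_effects: goal name -> impact in [-1, 1]."""
        desirability = sum(self.goals.get(g, 0) * impact
                           for g, impact in event_effects.items())
        emotion = "joy" if desirability >= 0 else "distress"
        self.emotions[emotion] = self.emotions.get(emotion, 0) + \
            abs(desirability) * self.reactivity
        return emotion, self.emotions[emotion]

knight = Character(goals={"win_battle": 0.9, "protect_ally": 0.7}, reactivity=1.2)
print(knight.appraise({"win_battle": 1.0}))  # ('joy', 1.08)
```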

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.1
    • /
    • pp.142-151
    • /
    • 2022
  • In this paper, a model combined with explainable artificial intelligence (XAI) techniques is presented to secure the reliability of machine learning-based sentiment analysis and prediction. The applicability of the proposed model was tested and described using the IMDB dataset. This approach has the advantage that it can explain, from various perspectives, how the data affect the model's prediction results. In the many applications of sentiment analysis, such as recommendation systems, emotion analysis through facial expression recognition, and opinion analysis, presenting more specific and evidence-based analysis results makes it possible to gain the trust of the system's users.
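
The abstract does not name a specific XAI technique; LIME is one option commonly paired with text sentiment models, and the sketch below shows how a per-review explanation of that kind could be produced for an IMDB-style classifier. The pipeline, training data, and label names are stand-ins.

```python
# Sketch: explaining a sentiment prediction with LIME; the classifier,
# training data, and label names are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

reviews = ["a moving, beautifully acted film", "dull plot and wooden dialogue"]
labels = [1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("surprisingly moving film",
                                 clf.predict_proba, num_features=3)
print(exp.as_list())  # tokens with their weight toward the prediction
```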

Engine of computational Emotion model for emotional interaction with human (인간과 감정적 상호작용을 위한 '감정 엔진')

  • Lee, Yeon Gon
    • Science of Emotion and Sensibility
    • /
    • v.15 no.4
    • /
    • pp.503-516
    • /
    • 2012
  • In robot and software agent research to date, computational emotion models have been system-dependent, making it difficult to separate an emotion model from an existing system and reuse it in a new one. Therefore, this paper introduces the Engine of computational Emotion model (hereafter, EE), which can be integrated with any robot or agent. The EE is a software engine that is independent of particular inputs and outputs: it handles only the generation and processing of emotions, leaving both the input (perception) and output (expression) phases to the host system. The EE can be interfaced with arbitrary inputs and outputs, and it produces emotions not only from emotional stimuli themselves but also from the personality and current emotions of the person. In addition, the EE can be embedded in any robot or agent as a software library, or run as a separate communicating system. The EE models the primary emotions: joy, surprise, disgust, fear, sadness, and anger. Each emotion is represented as a vector consisting of a label string and a coefficient; the EE receives these vectors from the input interface and sends them to the output interface. Each emotion is connected to a list of emotional experiences, likewise consisting of label strings and coefficients, which are used to generate and process emotional states. The emotional experiences are built from an emotion vocabulary covering the various emotional experiences of humans. The EE can be used to build interactive products that respond appropriately to human emotions. The significance of this study lies in developing a system that induces people to feel that a product sympathizes with them; the EE can thus support efficient, emotionally sympathetic services in HRI and HCI products.
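
The abstract's contract, (label, coefficient) emotion vectors flowing from an input interface through the engine to an output interface, can be made concrete with a short sketch; the update and decay rules below are assumptions, since only the vector format and the six primary emotions are specified.

```python
# Sketch of the EE's input/output contract: (label, coefficient) emotion
# vectors in, updated emotion state out. The update/decay rule is assumed.
PRIMARY = ("joy", "surprise", "disgust", "fear", "sadness", "anger")

class EmotionEngine:
    def __init__(self, decay=0.9):
        self.state = {e: 0.0 for e in PRIMARY}
        self.decay = decay

    def step(self, inputs):
        """inputs: iterable of (label, coefficient) vectors from perception."""
        for label, coeff in inputs:
            if label in self.state:
                self.state[label] += coeff
        # emotions fade over time so the engine stays responsive
        self.state = {e: v * self.decay for e, v in self.state.items()}
        return [(e, v) for e, v in self.state.items() if v > 0.05]

ee = EmotionEngine()
print(ee.step([("joy", 0.8), ("fear", 0.1)]))  # vectors for the expression layer
```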

Optimized patch feature extraction using CNN for emotion recognition (감정 인식을 위해 CNN을 사용한 최적화된 패치 특징 추출)

  • Irfan Haider;Aera Kim;Guee-Sang Lee;Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.510-512
    • /
    • 2023
  • To enhance a model's capability for detecting facial expressions, this research suggests a pipeline that makes use of a Grad-CAM component. The pipeline consists of a patching module and a pseudo-labeling module. The patching module takes the original face image and divides it into four equal parts, each of which is fed into a 2D convolutional layer to produce a feature vector. In the pseudo-labeling module, each image segment is assigned a weight token using Grad-CAM, and this token is merged with the feature vector using principal component analysis. A convolutional neural network based on transfer learning is then utilized to extract the deep features. Applied to the public MMI dataset, this technique achieved a validation accuracy of 96.06%, showing the effectiveness of the method.
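
As a structural sketch of the described pipeline (quarter the face image, embed each patch with a convolutional layer, weight patches by a Grad-CAM-derived score), the following is one possible outline; the shapes are assumed, and the paper's PCA-based merge of token and feature vector is simplified here to a scalar weighting.

```python
# Structural sketch of the patch pipeline: split a face image into four
# quadrants, embed each with a conv layer, and weight patches by an
# (assumed) saliency score standing in for the Grad-CAM token.
import torch
import torch.nn as nn

conv = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1))  # per-patch feature vector

def quadrants(img):                      # img: (3, H, W), H and W even
    h, w = img.shape[1] // 2, img.shape[2] // 2
    return [img[:, :h, :w], img[:, :h, w:], img[:, h:, :w], img[:, h:, w:]]

def patch_features(img, saliency):
    """saliency: per-quadrant weights; in the paper these come from Grad-CAM."""
    feats = [conv(p.unsqueeze(0)).flatten(1) for p in quadrants(img)]
    weighted = [w * f for w, f in zip(saliency, feats)]
    return torch.cat(weighted, dim=1)    # merged descriptor for the deep CNN

face = torch.randn(3, 96, 96)
desc = patch_features(face, saliency=[0.4, 0.1, 0.3, 0.2])  # assumed weights
print(desc.shape)  # torch.Size([1, 64])
```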

Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data (적은 양의 음성 및 텍스트 데이터를 활용한 멀티 모달 기반의 효율적인 감정 분류 기법)

  • Mirr Shin;Youhyun Shin
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.174-180
    • /
    • 2024
  • In this paper, we explore an emotion classification method based on multimodal learning that utilizes the wav2vec 2.0 and KcELECTRA models. It is known that multimodal learning, which leverages both speech and text data, can significantly enhance emotion classification performance compared to methods that rely on speech data alone. To select the optimal text-processing model for feature extraction, our study conducts a comparative analysis of BERT and its derivative models, known for their strong performance in natural language processing. The results confirm that the KcELECTRA model exhibits outstanding performance in emotion classification tasks. Furthermore, experiments using datasets made available by AI-Hub demonstrate that including text data achieves superior performance with less data than using speech data alone. The experiments show that using the KcELECTRA model achieved the highest accuracy, 96.57%. This indicates that multimodal learning can offer meaningful performance improvements in complex natural language processing tasks such as emotion classification.
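
A minimal late-fusion version of the two encoders named in the abstract might look like the sketch below; the Hugging Face checkpoint identifiers ("facebook/wav2vec2-base" and "beomi/KcELECTRA-base"), the pooling choices, and the seven-class head are assumptions, not the authors' configuration.

```python
# Sketch: late fusion of speech (wav2vec 2.0) and text (KcELECTRA)
# features for emotion classification; checkpoint names are assumptions.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, AutoModel, AutoTokenizer

speech_enc = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
text_enc = AutoModel.from_pretrained("beomi/KcELECTRA-base")
tok = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")

classifier = nn.Linear(speech_enc.config.hidden_size +
                       text_enc.config.hidden_size, 7)  # 7 classes assumed

def classify(waveform, sentence):
    s = speech_enc(waveform).last_hidden_state.mean(dim=1)  # pool over time
    t = text_enc(**tok(sentence, return_tensors="pt")).last_hidden_state[:, 0]
    return classifier(torch.cat([s, t], dim=-1))

logits = classify(torch.randn(1, 16000), "오늘 정말 행복해요")  # 1 s of 16 kHz audio
```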

Deep Reinforcement Learning-Based Cooperative Robot Using Facial Feedback (표정 피드백을 이용한 딥강화학습 기반 협력로봇 개발)

  • Jeon, Haein;Kang, Jeonghun;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.264-272
    • /
    • 2022
  • Human-robot cooperative tasks are increasingly required in daily life with the development of robotics and artificial intelligence technology. Interactive reinforcement learning strategies suggest that a robot learn a task by receiving feedback from an experienced human trainer during the training process. However, most previous studies on interactive reinforcement learning have required an extra feedback input device, such as a mouse or keyboard, in addition to the robot itself, and the scenarios in which a robot can interactively learn a task with a human have been limited to virtual environments. To address these limitations, this paper studies training strategies for a robot that interactively learns table-balancing tasks using deep reinforcement learning with a human's facial expression feedback. In the proposed system, the robot learns a cooperative table-balancing task using Deep Q-Network (DQN), a deep reinforcement learning technique, with human facial emotion expression feedback. In experiments, the proposed system achieved an optimal-policy convergence rate of up to 83.3% in training and a success rate of up to 91.6% in testing, showing improved performance compared to the model without human facial expression feedback.
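
The core idea, shaping the DQN reward with a signal decoded from the trainer's facial expression, can be sketched as follows; the shaping coefficient, network sizes, and the facial-feedback decoder are placeholders, since the abstract does not give those details.

```python
# Sketch: a DQN temporal-difference update whose reward is shaped by
# human facial-expression feedback; the decoder and weight are assumed.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, beta = 0.99, 0.5   # beta scales the facial feedback term (assumed)

def facial_feedback(frame):
    """Placeholder: a real system maps a face image to e.g. +1 (smile) / -1 (frown)."""
    return 1.0

def dqn_step(s, a, env_reward, s_next, face_frame, done):
    shaped = env_reward + beta * facial_feedback(face_frame)
    with torch.no_grad():
        target = shaped + gamma * q_net(s_next).max() * (1 - done)
    loss = (q_net(s)[a] - target) ** 2
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()

print(dqn_step(torch.randn(4), 0, 0.1, torch.randn(4), None, 0.0))
```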

An Artificial Emotion Model for Expression of Game Character (감정요소가 적용된 게임 캐릭터의 표현을 위한 인공감정 모델)

  • Kim, Ki-Il;Yoon, Jin-Hong;Park, Pyoung-Sun;Kim, Mi-Jin
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02b
    • /
    • pp.411-416
    • /
    • 2008
  • The development of games has brought about the birth of game characters that are visually very realistic. At present, there is much enthusiasm for giving characters emotions through such devices as avatars and emoticons. However, in the freely changing environment of a game, these devices merely express a value derived from an initial input rather than creating expressions of emotion that actively respond to the surroundings; as yet, game characters display no deep emotions. In light of this, the present article proposes the 'CROSS (Character Reaction on Specific Situation) Model AE Engine' for game characters, in order to develop characters that actively express action and emotion within the changing environment of a game. This is accomplished by classifying the emotional components applicable to game characters based on the OCC model, one of the best-known cognitive-psychological models, and then systematizing game-play situations, analyzed from a commercial RPG, with an ontology.
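
The CROSS idea of mapping classified game situations to OCC-derived emotional reactions can be illustrated with a small lookup-and-update sketch; the situations, emotions, and intensities below are invented for the example.

```python
# Sketch of a CROSS-style reaction table: game situations (classified via
# the ontology) map to OCC-derived emotion updates; entries are invented.
REACTION_RULES = {
    # situation          (emotion,    intensity delta)
    "ally_defeated":     ("distress", +0.6),
    "quest_completed":   ("joy",      +0.8),
    "ambushed":          ("fear",     +0.7),
}

def react(character_state, situation):
    """Update the character's emotion state and pick a visible expression."""
    emotion, delta = REACTION_RULES.get(situation, ("neutral", 0.0))
    character_state[emotion] = character_state.get(emotion, 0.0) + delta
    strongest = max(character_state, key=character_state.get)
    return strongest   # drive the character's animation from this

state = {}
print(react(state, "ambushed"))  # 'fear'
```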

Development of a driver's emotion detection model using auto-encoder on driving behavior and psychological data

  • Jung, Eun-Seo;Kim, Seo-Hee;Hong, Yun-Jung;Yang, In-Beom;Woo, Jiyoung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.3
    • /
    • pp.35-43
    • /
    • 2023
  • Emotion recognition while driving is an essential task for preventing accidents. Furthermore, in the era of autonomous driving, the automobile becomes the subject of mobility and requires more emotional communication with drivers, and the emotion recognition market is gradually growing. Accordingly, in this study, the driver's emotions are classified into seven categories using psychological and behavioral data, which are relatively easy to collect. Latent vectors extracted by an auto-encoder model were also used as features in this classification model, and this was confirmed to improve performance. It was also confirmed that the framework presented in this paper outperformed a model that additionally included conventional EEG data. Finally, a driver emotion classification accuracy of 81% and an F1-score of 80% were achieved using only psychological, personal-information, and behavioral data.
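
The feature-construction step described above, training an auto-encoder on the collected features and appending its latent vector to the classifier's inputs, can be sketched as follows; all dimensions and the downstream classifier are illustrative assumptions.

```python
# Sketch: auto-encoder latent vectors appended as extra features for a
# 7-way driver-emotion classifier; all dimensions are illustrative.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features=20, n_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(),
                                     nn.Linear(8, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 8), nn.ReLU(),
                                     nn.Linear(8, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = AutoEncoder()
x = torch.randn(32, 20)                  # behavioral + psychological features
recon, z = ae(x)
recon_loss = nn.functional.mse_loss(recon, x)  # train the AE on this

augmented = torch.cat([x, z], dim=1)     # original features + latent vector
clf = nn.Linear(24, 7)                   # 7 emotion categories
logits = clf(augmented.detach())
```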