• Title/Summary/Keyword: information of emotion

Search results: 1,326 (processing time: 0.033 seconds)

Towards Next Generation Multimedia Information Retrieval by Analyzing User-centered Image Access and Use (이용자 중심의 이미지 접근과 이용 분석을 통한 차세대 멀티미디어 검색 패러다임 요소에 관한 연구)

  • Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.51 no.4
    • /
    • pp.121-138
    • /
    • 2017
  • As information users seek multimedia to satisfy a wide variety of information needs, information environments for multimedia have developed rapidly. In particular, as seeking multimedia through emotional access points has become popular, the need for indexing abstract concepts, including emotions, has grown. This study analyzes index terms extracted from the Getty Image Bank. Five basic emotion terms (sadness, love, horror, happiness, and anger) were used when collecting the index terms, and a total of 22,675 index terms were used in this study. The data comprise three sets: entire emotion, positive emotion, and negative emotion. For each set, a co-word occurrence matrix was created and visualized as a weighted network with PNNC clusters. The entire-emotion network shows three clusters and 20 sub-clusters, whereas the positive-emotion and negative-emotion networks each show 10 clusters. The results point to three elements for the next generation of multimedia retrieval: (1) analysis of index terms for the emotions shown by people in images, (2) the relationship between connotative and denotative terms, and the possibility of inferring connotative terms from denotative terms using that relationship, and (3) the importance of a thesaurus of connotative terms for expanding related terms and synonyms into better access points.
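The co-word occurrence matrix described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's method: the image/term data are invented, and the PNNC clustering and network visualization steps are not reproduced here.

```python
from itertools import combinations
from collections import Counter

# Hypothetical index-term sets for a few images (illustrative only).
images = [
    {"happiness", "smile", "child"},
    {"sadness", "rain", "window"},
    {"happiness", "smile", "beach"},
]

# Co-word occurrence: count how often two index terms are assigned
# to the same image; the resulting counts form a co-word matrix
# that can be fed to a network-clustering step.
cooccur = Counter()
for terms in images:
    for a, b in combinations(sorted(terms), 2):
        cooccur[(a, b)] += 1

print(cooccur[("happiness", "smile")])  # -> 2
```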

Emotion-Based Control Model (제어 기반 감성 모델)

  • Ko, Sung-Bum;Lim, Gi-Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.05a
    • /
    • pp.199-202
    • /
    • 2001
  • We human beings use the powers of reason and emotion simultaneously, which gives us flexible adaptability in a dynamic environment. We assert that this principle can be applied to systems in general: it should be possible to improve adaptability by covering a digitally oriented information-processing system with an analog-oriented emotion layer. In this paper, we propose a vertical slicing model containing an emotion layer, and we show that emotion-based control improves the adaptability of a system, at least under some conditions.

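The idea of layering an analog emotion component over a digital controller can be illustrated with a toy sketch. All names and values below are invented; the paper's actual vertical slicing model is not specified here.

```python
# Toy sketch: a crisp (digital) controller wrapped by a continuous
# (analog) "emotion layer" that modulates its output.
def digital_controller(error):
    # Crisp bang-bang control: full positive or full negative command.
    return 1.0 if error > 0 else -1.0

def emotion_layer(error, arousal):
    # An arousal level in [0, 1] smoothly scales the crisp command,
    # giving a graded response instead of hard switching.
    return digital_controller(error) * arousal

print(emotion_layer(0.3, 0.5))  # -> 0.5
```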
COMPUTATIONAL MODELING OF KANSEI PROCESSES FOR HUMAN-CENTERED INFORMATION TECHNOLOGY

  • Kato, Toshikazu
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2003.05a
    • /
    • pp.101-106
    • /
    • 2003
  • This paper introduces the basic concept of computational modeling of perception processes for multimedia data. Such processes are modeled as hierarchical inter- and intra-relationships among information in the physical, physiological, psychological, and cognitive layers of perception. Based on this framework, the paper gives algorithms for content-based retrieval in multimedia database systems.

Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06b
    • /
    • pp.231-234
    • /
    • 2011
  • Audiovisual human emotion recognition can be considered a good approach to multimodal human-computer interaction. However, optimal multimodal information fusion remains a challenge. To overcome these limitations and bring robustness to the interface, we propose a framework for automatic human emotion recognition from speech and face display. In this paper, we develop a new approach to model-level information fusion, based on the relationship between speech and facial expression, to detect temporal segments automatically and perform multimodal information fusion.
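One common way to fuse two unimodal emotion classifiers is the product rule; the sketch below shows that generic technique, not the model-level fusion method this paper proposes. The classifier outputs are invented for illustration.

```python
# Fuse face- and voice-based emotion distributions over the same
# label set via the product rule (assumes conditional independence),
# then renormalize so the result is a probability distribution.
def fuse(face_probs, voice_probs):
    fused = {e: face_probs[e] * voice_probs[e] for e in face_probs}
    z = sum(fused.values())
    return {e: p / z for e, p in fused.items()}

face = {"happy": 0.7, "sad": 0.2, "angry": 0.1}
voice = {"happy": 0.5, "sad": 0.4, "angry": 0.1}
result = fuse(face, voice)
print(max(result, key=result.get))  # -> happy
```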

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.34-39
    • /
    • 2009
  • As a subjective phenomenon, human emotion has an impulsive character and unconsciously expresses intentions and needs. It therefore carries contextual information about users of ubiquitous computing environments and intelligent robot systems. Indicators from which a user's emotion can be recognized include facial images, voice signals, and biological signal spectra. In this paper, we generate separate facial and voice emotion recognition results from facial images and voice, to increase the convenience and efficiency of emotion recognition. We also extract the best-fitting features from image and sound to raise the emotion recognition rate, and implement a multimodal emotion recognition system based on feature fusion. Finally, using the recognition results, we propose a ubiquitous computing service reasoning method based on a Bayesian network and a ubiquitous context scenario.
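The last step, reasoning from a recognized emotion to a service, can be sketched with a single conditional-probability table; a real Bayesian network as used in the paper would combine several such tables. All labels and probabilities here are invented.

```python
# Toy conditional-probability table linking a recognized emotion to
# candidate ubiquitous services (illustrative values only).
P_SERVICE_GIVEN_EMOTION = {
    "sad":  {"play_music": 0.7, "call_helper": 0.3},
    "fear": {"play_music": 0.1, "call_helper": 0.9},
}

def recommend(emotion):
    # Pick the service with the highest conditional probability.
    table = P_SERVICE_GIVEN_EMOTION[emotion]
    return max(table, key=table.get)

print(recommend("fear"))  # -> call_helper
```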

Visualization using Emotion Information in Movie Script (영화 스크립트 내 감정 정보를 이용한 시각화)

  • Kim, Jinsu
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.11
    • /
    • pp.69-74
    • /
    • 2018
  • Through the convergence of Internet technology and various information technologies, it is possible to collect and process vast amounts of information and to exchange knowledge according to users' personal preferences. In particular, users tend to prefer content connected to their preferences through the flow of emotional changes contained in a film. Based on the information presented in the script, a user may want to visualize the flow of emotion across the whole film, the flow of emotion in a specific scene, or a specific scene itself, in order to understand it more quickly. In this paper, we obtain raw data from a movie web page and, after a refinement process, transform it into a standardized scenario format. The refined data are converted into an XML document so that information can be obtained easily, and each paragraph is fed into an emotion prediction system. We propose a system that lets users easily follow the changes in emotional state between characters, in the whole film or in a specific part, by combining the predicted emotion flow with the amount of information contained in the script.
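The pipeline described above (scenario XML in, per-paragraph emotion predictions out) can be sketched as follows. The XML layout, the keyword lexicon standing in for the paper's emotion prediction system, and all text are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical scenario XML: <scene> elements containing <para> text.
xml = """<script>
  <scene id="1"><para>It was a bright morning.</para></scene>
  <scene id="2"><para>She wept alone in the dark.</para></scene>
</script>"""

# Stand-in emotion predictor: a keyword lookup (illustrative only).
LEXICON = {"bright": "joy", "wept": "sadness", "dark": "fear"}

def predict(text):
    hits = [e for w, e in LEXICON.items() if w in text]
    return hits[0] if hits else "neutral"

# Emotion flow across scenes, ready to be visualized.
root = ET.fromstring(xml)
flow = [(s.get("id"), predict(s.findtext("para"))) for s in root.findall("scene")]
print(flow)  # -> [('1', 'joy'), ('2', 'sadness')]
```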

Ranking Tag Pairs for Music Recommendation Using Acoustic Similarity

  • Lee, Jaesung;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.3
    • /
    • pp.159-165
    • /
    • 2015
  • The need for the recognition of music emotion has become apparent in many music information retrieval applications. In addition to the large pool of techniques that have already been developed in machine learning and data mining, various emerging applications have led to a wealth of newly proposed techniques. In the music information retrieval community, many studies and applications have concentrated on tag-based music recommendation. The limitation of music emotion tags is the ambiguity caused by a single music tag covering too many subcategories. To overcome this, multiple tags can be used simultaneously to specify music clips more precisely. In this paper, we propose a novel technique to rank the proper tag combinations based on the acoustic similarity of music clips.
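One plausible way to score a tag pair by acoustic similarity is to measure how alike the clips carrying both tags sound; the sketch below uses cosine similarity over invented feature vectors and is not the ranking technique the paper proposes.

```python
import math

# Illustrative clips: (tag set, acoustic feature vector). All values
# are hypothetical.
clips = {
    "c1": ({"calm", "sad"},   [0.1, 0.9]),
    "c2": ({"calm", "sad"},   [0.2, 0.8]),
    "c3": ({"calm", "happy"}, [0.9, 0.1]),
}

def cos(a, b):
    # Cosine similarity between two 2-D feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def pair_score(t1, t2):
    # Mean pairwise similarity of the clips tagged with BOTH terms.
    members = [f for tags, f in clips.values() if {t1, t2} <= tags]
    if len(members) < 2:
        return 0.0
    sims = [cos(a, b) for i, a in enumerate(members) for b in members[i + 1:]]
    return sum(sims) / len(sims)

# A pair whose member clips sound alike gets a higher score.
print(pair_score("calm", "sad") > pair_score("calm", "happy"))  # -> True
```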

Emotion-Based Dynamic Crowd Simulation (인간의 감정에 기반한 동적 군중 시뮬레이션)

  • Moon Chan-Il;Han Sang-Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.3
    • /
    • pp.87-93
    • /
    • 2004
  • In this paper we present a hybrid model that enables dynamic regrouping based on emotion when determining the behavioral patterns of crowds, in order to enhance the realism of crowd simulation in virtual environments such as games. Emotion determination rules are defined and used to regroup agents dynamically, so that characters move through crowds realistically. Our experiments show that this approach produces more natural crowd behavior.

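Rule-based emotion determination followed by regrouping, as this entry describes, can be sketched as below. The agents, emotion values, and threshold are invented; the paper's actual rules are not reproduced.

```python
# Hypothetical crowd agents with per-emotion intensities.
agents = [
    {"id": 1, "fear": 0.8, "anger": 0.1},
    {"id": 2, "fear": 0.2, "anger": 0.7},
    {"id": 3, "fear": 0.9, "anger": 0.0},
]

def dominant_emotion(agent):
    # Rule: the strongest emotion drives behavior, but only if it
    # exceeds a threshold; otherwise the agent stays calm.
    emo = max(("fear", "anger"), key=lambda e: agent[e])
    return emo if agent[emo] >= 0.5 else "calm"

# Dynamic regrouping: agents sharing a dominant emotion form a group
# that then shares a behavioral pattern (e.g., fleeing vs. aggression).
groups = {}
for a in agents:
    groups.setdefault(dominant_emotion(a), []).append(a["id"])
print(groups)  # -> {'fear': [1, 3], 'anger': [2]}
```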
Adjusting Personality Types to Character with Affective Features in Artificial Emotion (캐릭터의 성격 유형별로 정서적 특징을 반영한 인공감정)

  • Yeo, Ji-Hye;Ham, Jun-Seok;Jo, Yu-Young;Lee, Kyoung-Mi;Ko, Il-Ju
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2011.01a
    • /
    • pp.133-136
    • /
    • 2011
  • This paper aims to design and apply an artificial emotion model for characters that reflects the affective characteristics of each personality type, building on previously studied artificial emotion models. A character's artificial emotion can be expressed, for each personality type, as the duration and magnitude of an affective response from onset to decay. Accordingly, the duration of the emotion, the maximum intensity felt from a single emotional experience, and the timing of emotional expression, divided into two phases, are applied according to the MBTI personality types. The designed artificial emotion is then applied to an actual character, and the resulting emotional expression and its changes are analyzed by personality type.

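A personality-dependent emotion envelope of the kind this entry describes (per-type peak intensity and duration) can be sketched as follows. The MBTI parameter table and the decay constant are invented for illustration and are not taken from the paper.

```python
import math

# Hypothetical per-personality emotion parameters: peak intensity
# and how long the emotion persists (illustrative values only).
PROFILES = {
    "INTJ": {"peak": 0.6, "duration": 5.0},
    "ESFP": {"peak": 0.9, "duration": 2.0},
}

def intensity(t, mbti):
    # Emotion jumps to its personality-specific peak at onset, then
    # decays exponentially so it has largely faded after `duration`.
    p = PROFILES[mbti]
    return p["peak"] * math.exp(-3.0 * t / p["duration"])

print(round(intensity(0.0, "ESFP"), 2))  # -> 0.9
```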