• Title/Summary/Keyword: crossmodal perception

Development of Multiple-modality Psychophysical Scaling System for Evaluating Subjective User Perception of the Participatory Multimedia System (참여형 멀티미디어 시스템 사용자 감성평가를 위한 다차원 심물리학적 척도 체계)

  • Na, Jong-Gwan; Park, Min-Yong
    • Journal of the Ergonomics Society of Korea / v.23 no.3 / pp.89-99 / 2004
  • A comprehensive psychophysical scaling system, the multiple-modality magnitude estimation system (MMES), has been designed to measure subjective multidimensional human perception. Unlike paper-based magnitude estimation systems, the MMES adds an auditory peripheral cue that varies with the corresponding visual magnitude. As the simplest, purely psychological case, bimodal divided-attention conditions were simulated to establish the superiority of the MMES. Subjects were given brief presentations of pairs of simultaneous stimuli consisting of visual line-lengths and auditory white-noise levels. In the visual or auditory focused-attention conditions, subjects reported only the perceived line-lengths or noise levels, respectively; in the divided-attention conditions, they reported both. There were no significant differences among the attention conditions. Performance was better when the two stimuli in a pair were presented at identical magnitude proportions. The additional auditory cues in the MMES improved the correlations between stimulus magnitudes and MMES values in the divided-attention conditions.
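
The correlation analysis the abstract refers to follows the standard magnitude-estimation workflow: subjects' estimates are related to stimulus magnitudes through Stevens' power law and assessed by a log-log correlation. Below is a minimal sketch of that analysis; the stimulus and estimate values are made-up examples, not the paper's data.

```python
# Illustrative sketch (not the paper's code): fitting Stevens' power law
# to magnitude-estimation data, the standard analysis behind scaling
# systems like the MMES.
import numpy as np

# Hypothetical visual line-lengths (cm) and one subject's magnitude estimates
stimulus = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
estimate = np.array([3.1, 5.8, 11.2, 20.5, 41.0])

# Stevens' power law: estimate = k * stimulus**n, which is linear in
# log-log space: log(estimate) = n * log(stimulus) + log(k)
log_s, log_e = np.log(stimulus), np.log(estimate)
n, log_k = np.polyfit(log_s, log_e, 1)

# Pearson correlation in log-log space: the kind of fit statistic the
# abstract reports improving under divided attention with auditory cues
r = np.corrcoef(log_s, log_e)[0, 1]
print(f"exponent n = {n:.2f}, scale k = {np.exp(log_k):.2f}, r = {r:.3f}")
```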

Crossmodal Perception of Mismatched Emotional Expressions by Embodied Agents (에이전트의 표정과 목소리 정서의 교차양상지각)

  • Cho, Yu-Suk; Suk, Ji-He; Han, Kwang-Hee
    • Science of Emotion and Sensibility / v.12 no.3 / pp.267-278 / 2009
  • Today embodied agents generate a great deal of interest because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that we can recognize and distinguish various emotions expressed by an embodied agent, and many studies have found that we respond to simulated emotions in a way similar to human emotion. This study investigates the interpretation of mismatched emotions expressed by an embodied agent (e.g., a happy face with a sad voice): whether audio-visual channel integration occurs or one channel dominates when participants judge the emotion. The study employed a 4 (visual: happy, sad, warm, cold) × 4 (audio: happy, sad, warm, cold) within-subjects repeated-measures design. The results suggest that people perceive emotions depending on both channels rather than on just one. Additionally, facial expression (happy vs. sad) changes the relative influence of the two channels: the audio channel has more influence on the interpretation of emotions when the facial expression is happy. People were also able to perceive, from mismatched expressions, emotions that neither the face nor the voice expressed, so it may be possible to convey various delicate emotions with an embodied agent using only a few kinds of emotions.
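
The 4 × 4 within-subjects design crosses every facial emotion with every vocal emotion, yielding 16 conditions, of which 12 are mismatched. A minimal sketch of how such a trial list might be generated; the condition labels come from the abstract, and everything else is illustrative, not the study's actual procedure.

```python
# Illustrative sketch (not the study's code): enumerating the
# 4 (visual) x 4 (audio) within-subjects conditions, so every
# participant sees all 16 matched and mismatched pairings.
import itertools
import random

EMOTIONS = ["happy", "sad", "warm", "cold"]

# Full factorial crossing of facial and vocal emotion: 16 conditions,
# 4 matched (face == voice) and 12 mismatched
conditions = list(itertools.product(EMOTIONS, EMOTIONS))

def trial_order(seed: int) -> list[tuple[str, str]]:
    """Randomize condition order per participant (within-subjects design)."""
    rng = random.Random(seed)
    trials = conditions.copy()
    rng.shuffle(trials)
    return trials

for face, voice in trial_order(seed=1)[:4]:
    tag = "matched" if face == voice else "mismatched"
    print(f"face={face:5s} voice={voice:5s} ({tag})")
```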
