• Title/Summary/Keyword: audio visual interaction

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don; Choi, Jong-Suk; Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / v.5 no.1 / pp.61-69 / 2007
  • In this paper, we developed a reliable sound localization system, including a VAD (Voice Activity Detection) component, using three microphones, as well as a face tracking system using a vision camera. Moreover, we propose a way to integrate the three systems for human-robot interaction, compensating for errors in localizing a speaker and effectively rejecting unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system on a prototype robot called IROBAA (Intelligent ROBot for Active Audition) and demonstrated how the audio-visual system is integrated.
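Sound localization with a small microphone array, as described in this abstract, typically rests on time-difference-of-arrival (TDOA) estimation between microphone pairs. The paper's three-microphone design and VAD component are not reproduced here; the following is a minimal single-pair sketch under a far-field model, with all parameter values being illustrative assumptions:

```python
import numpy as np

def estimate_direction(sig_a, sig_b, mic_distance, fs, c=343.0):
    """Estimate a source's direction of arrival (degrees) from two
    microphone signals via the cross-correlation peak (TDOA)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # >0 means sig_a lags sig_b
    tdoa = lag / fs                           # time difference in seconds
    # Far-field model: tdoa = (mic_distance / c) * sin(theta)
    sin_theta = np.clip(tdoa * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Synthetic check: make channel b a 5-sample-delayed copy of channel a.
fs, d = 16000, 0.2
src = np.random.default_rng(0).standard_normal(1024)
a = src
b = np.concatenate([np.zeros(5), src[:-5]])  # b lags a by 5 samples
angle = estimate_direction(a, b, d, fs)      # negative angle: source on mic a's side
```

With three microphones, the same pairwise estimate can be repeated over multiple pairs and intersected, which resolves the front-back ambiguity a single pair leaves open.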

Audio-visual Interaction and Design-education in the Age of Multimedia (시청각 상호작용과 멀티미디어 시대의 디자인교육)

  • 서계숙
    • Archives of design research / v.14 no.3 / pp.49-58 / 2001
  • The communication designer in the multimedia age should recognize that sound, as well as visual elements such as color, shape, and motion, can be an important expressive element for conveying one's thoughts. A thought is better understood when transmitted through the visual and auditory senses together than through either sense alone. The meeting of sight and hearing is based on synesthesia: for example, a low sound reminds a person of dark colors and a high sound of light colors, while a percussion instrument suggests a circle and a melody suggests a line. In multimedia communication, the visual and auditory senses should act as independent expressive elements, going beyond simple synchronization that merely pairs an image with a related sound. Interaction between sight and hearing arouses emotions that neither element can evoke alone. Communication design education in the multimedia age should therefore develop students' expressive ability through an understanding of the interaction between sight and hearing. In this study, we propose education programs classified into the following categories: audio-visual Gestaltung, audio-visual moving graphics, and audio-visual design.

A Novel Integration Scheme for Audio Visual Speech Recognition

  • Pham, Than Trung; Kim, Jin-Young; Na, Seung-You
    • The Journal of the Acoustical Society of Korea / v.28 no.8 / pp.832-842 / 2009
  • Automatic speech recognition (ASR) has been successfully applied to many real human-computer interaction (HCI) applications; however, its performance degrades significantly in noisy environments. Audio-visual speech recognition (AVSR), which uses the acoustic signal together with lip motion, has recently attracted attention for its robustness to noise. In this paper, we describe a novel integration scheme for AVSR based on a late-integration approach. First, we introduce robust reliability measurements for the audio and visual modalities using model-based and signal-based information: the model-based information measures the confusability of the vocabulary, while the signal-based information is used to estimate the noise level. Second, the output probabilities of the audio and visual speech recognizers are normalized before the final integration step, which uses the normalized output space and estimated weights. We evaluate the proposed method on a Korean isolated-word recognition system. The experimental results demonstrate the effectiveness and feasibility of the proposed system compared to conventional systems.
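The late-integration idea summarized above can be sketched minimally: each recognizer's per-word scores are normalized to a common range and combined with a reliability weight. The paper's specific reliability measures and normalization are not reproduced; the min-max normalization and fixed weight below are generic stand-ins:

```python
import numpy as np

def late_fusion(audio_loglik, visual_loglik, audio_weight):
    """Decision-level (late) fusion for audio-visual speech recognition:
    normalize each modality's per-word scores to [0, 1], then combine
    them with a reliability weight. Returns the winning word index."""
    def normalize(scores):
        scores = np.asarray(scores, dtype=float)
        span = scores.max() - scores.min()
        return (scores - scores.min()) / span if span > 0 else np.zeros_like(scores)

    a, v = normalize(audio_loglik), normalize(visual_loglik)
    fused = audio_weight * a + (1.0 - audio_weight) * v
    return int(np.argmax(fused))

# Toy example with a 3-word vocabulary: the audio scores are nearly
# flat (noisy), so a low audio weight lets the clearer visual scores decide.
audio = [-10.1, -10.0, -10.2]   # ambiguous acoustic log-likelihoods
visual = [-4.0, -9.0, -2.0]     # lip reading clearly favours word 2
best = late_fusion(audio, visual, audio_weight=0.2)
```

In a real system the weight would be set per utterance from the estimated noise level and vocabulary confusability, which is what the paper's reliability measurement provides.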

Temporal-perceptual Judgement of Visuo-Auditory Stimulation (시청각 자극의 시간적 인지 판단)

  • Yu, Mi; Lee, Sang-Min; Piao, Yong-Jun; Kwon, Tae-Kyu; Kim, Nam-Gyun
    • Journal of the Korean Society for Precision Engineering / v.24 no.1 s.190 / pp.101-109 / 2007
  • For spatio-temporal perception of visuo-auditory stimuli, research has proposed the optimal-integration hypothesis: the perceptual process is optimized through the interaction of the senses to improve perceptual precision. When visual information, which is generally dominant over the other senses, is ambiguous, information from another sense such as an auditory stimulus influences the perceptual process through interaction with the visual information. We therefore performed two experiments to determine the conditions under which the senses interact and the influence of those conditions, considering the interaction of visuo-auditory stimulation in free space, the color of the visual stimulus, and sex differences among normal participants. In the first experiment, 12 participants were asked to judge the change in frequency of audio-visual stimulation using a visual flicker and an auditory flutter stimulus presented in free space. When auditory temporal cues were presented, the perceived change in frequency of the visual stimulation followed the change in frequency of the auditory stimulation, consistent with previous studies that used headphones. In the second experiment, 30 male and 30 female participants judged the change in frequency of audio-visual stimulation using a colored visual flicker (red or green) and an auditory flutter stimulus. Both male and female participants showed the same perceptual tendency; however, the standard deviation for female participants was larger than that for male participants. These results imply that audio-visual asymmetry effects are influenced by the cues of the visual and auditory information, such as the orientation between the auditory and visual stimuli and the color of the visual stimulus.

Robust Person Identification Using Optimal Reliability in Audio-Visual Information Fusion

  • Tariquzzaman, Md.; Kim, Jin-Young; Na, Seung-You; Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea / v.28 no.3E / pp.109-117 / 2009
  • Reliable identity recognition in real environments is a key issue in human-computer interaction (HCI). In this paper, we present a robust person identification system based on a score-based optimal reliability measure for audio-visual modalities. We propose an extension of the modified reliability function that introduces optimizing parameters for both the audio and visual modalities. To degrade the visual signals, we applied JPEG compression to the test images; to create a mismatch between the enrollment and test sessions, acoustic Babble noise and artificial illumination were added to the test audio and visual signals, respectively. Local PCA was used on both modalities to reduce the dimension of the feature vectors. A swarm intelligence algorithm, particle swarm optimization, was applied to tune the modified reliability function's optimizing parameters. The person identification experiments were performed on the VidTimit DB. Experimental results show that the proposed optimal reliability measures improved identification accuracy by 7.73% and 8.18% under varied illumination of the visual signal and Babble noise in the audio signal, respectively, compared with the best single classifier in the fusion system, while maintaining the modality reliability statistics, thus verifying the consistency of the proposed extension.
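The score-level fusion with an optimized reliability weight can be sketched as follows. A coarse grid search on development data stands in for the paper's particle swarm optimization, and all data, names, and score values are illustrative:

```python
import numpy as np

def fuse(audio_scores, visual_scores, w):
    """Weighted-sum fusion of per-identity match score matrices
    (rows are trials, columns are enrolled identities)."""
    return w * audio_scores + (1.0 - w) * visual_scores

def tune_weight(audio_scores, visual_scores, true_ids, grid=101):
    """Pick the fusion weight that maximizes identification accuracy
    on development data. The paper tunes reliability parameters with
    particle swarm optimization; a grid search stands in here to keep
    the sketch self-contained."""
    best_w, best_acc = 0.0, -1.0
    for w in np.linspace(0.0, 1.0, grid):
        preds = fuse(audio_scores, visual_scores, w).argmax(axis=1)
        acc = float((preds == true_ids).mean())
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy data: 4 trials, 3 enrolled identities. The audio scores are
# degraded (barely separated, one trial wrong); visual stays informative.
audio = np.array([[0.34, 0.33, 0.33],
                  [0.35, 0.32, 0.33],   # noisy: wrongly favours identity 0
                  [0.33, 0.33, 0.34],
                  [0.35, 0.33, 0.32]])
visual = np.array([[0.9, 0.05, 0.05],
                   [0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8],
                   [0.7, 0.2, 0.1]])
truth = np.array([0, 1, 2, 0])
w, acc = tune_weight(audio, visual, truth)  # weight shifts toward visual
```

The optimizer naturally pushes weight toward whichever modality is less degraded in the current conditions, which is the behaviour the reliability measure is designed to capture.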

Constructing a Noise-Robust Speech Recognition System using Acoustic and Visual Information (청각 및 시각 정보를 이용한 강인한 음성 인식 시스템의 구현)

  • Lee, Jong-Seok; Park, Cheol-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.719-725 / 2007
  • In this paper, we present an audio-visual speech recognition system for noise-robust human-computer interaction. Unlike conventional speech recognition systems, our system utilizes the visual signal containing the speaker's lip movements along with the acoustic signal to obtain recognition performance that is robust against environmental noise. The procedures of acoustic speech processing, visual speech processing, and audio-visual integration are described in detail. Experimental results demonstrate that, by exploiting the complementary nature of the two signals, the constructed system significantly enhances recognition performance in noisy circumstances compared to acoustic-only recognition.

A Study on the Elements of Interface Design of Audio-based Social Networking Service (오디오 기반 SNS의 인터페이스 디자인 요소 연구)

  • Kim, Yeon-Soo; Choe, Jong-Hoon
    • Journal of the Korea Convergence Society / v.13 no.2 / pp.143-150 / 2022
  • An audio-based SNS also needs a visual guide that leads users to the content they want. This study therefore investigates the visual interface design elements that influence the experience of using audio content in audio-based SNS. Prior research has identified that the generally acknowledged interface design elements are important for the usability of audio content. Through an analysis of currently launched audio-based SNS, the influence of the general interface elements was confirmed again, and through an analysis of other audio content services, a new interface evaluation element was explored. Accordingly, alongside the five general interface evaluation elements (layout, color, icon, typography, and graphic image), multimedia elements are newly defined and proposed as crucial factors in evaluating the UI of audio-based SNS.

Development of a Real-Time Driving Simulator for Vehicle System Development and Human Factor Study (차량 시스템 개발 및 운전자 인자 연구를 위한 실시간 차량 시뮬레이터의 개발)

  • 이승준
    • Transactions of the Korean Society of Automotive Engineers / v.7 no.7 / pp.250-257 / 1999
  • Driving simulators are used effectively for human factor studies, vehicle system development, and other purposes, because they reproduce actual driving conditions in a safe and tightly controlled environment. Interactive simulation requires appropriate sensory and stimulus cuing for the driver; sensory and stimulus feedback can include visual, auditory, motion, and proprioceptive cues. In this study, a fixed-base driving simulator has been developed for vehicle system development and human factor studies. The simulator consists of improved and synergistic subsystems (a real-time vehicle simulation system, a visual/audio system, and a control force loading system) based on the motion-base simulator KMU DS-Ⅰ, which was developed for the design and evaluation of a full-scale driving simulator and for driver-vehicle interaction studies.

A study on the Development of a Driving Simulator for Reappearance of Vehicle Motion (I) (차량 주행 감각 재현을 위한 운전 시뮬레이터 개발에 관한 연구 (I))

  • Park, Min-Kyu; Lee, Min-Cheol; Son, Kwon; Yoo, Wan-Suk; Han, Myung-Chul; Lee, Jang-Myung
    • Journal of the Korean Society for Precision Engineering / v.16 no.6 / pp.90-99 / 1999
  • A vehicle driving simulator is a virtual reality device that makes a person feel as if he or she were actually driving a vehicle. A driving simulator is used effectively for studying driver-vehicle interaction and for developing new vehicle system concepts. It consists of a motion bed system, a motion controller, visual and audio systems, a vehicle dynamics analysis system, a cockpit system, etc. In this paper, the main procedures for developing the driving simulator are classified into five parts. First, a motion bed system and a motion controller that can track a reference trajectory are developed. Second, the performance of the motion bed system is evaluated using LVDTs and accelerometers. Third, a washout algorithm that reproduces the motion of an actual vehicle within the simulator is developed; the algorithm maps the motion space of the vehicle into the workspace of the driving simulator. Fourth, visual and audio systems are developed for greater realism. Finally, an integration system for communication and monitoring between the subsystems is developed.
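The washout algorithm in the third step is, in its classical form, a high-pass filter: onset acceleration cues are preserved while sustained accelerations are washed out so the motion base drifts back toward the centre of its limited workspace. A minimal first-order sketch, with the cutoff frequency and sampling rate as illustrative assumptions:

```python
import numpy as np

def washout(accel, fs, cutoff_hz=0.5):
    """Classical washout: first-order high-pass filter on the vehicle's
    translational acceleration. Low-frequency (sustained) components are
    attenuated so the platform returns to centre; transient onset cues
    pass through largely unchanged."""
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    alpha = rc / (rc + dt)           # discrete high-pass coefficient
    out = np.zeros_like(accel)
    for n in range(1, len(accel)):
        out[n] = alpha * (out[n - 1] + accel[n] - accel[n - 1])
    return out

# A step in acceleration (e.g. sustained braking): the washed-out
# platform command reproduces the onset, then decays toward zero.
fs = 100
accel = np.concatenate([np.zeros(50), np.ones(300)])
cmd = washout(accel, fs)
```

Production washout filters add tilt coordination (using gravity to mimic sustained acceleration) and separate rotational channels, but the high-pass stage above is the core of the workspace-mapping idea.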
