• Title/Abstract/Keyword: Audio-visual integration

A Novel Integration Scheme for Audio Visual Speech Recognition

  • Pham, Than Trung;Kim, Jin-Young;Na, Seung-You
    • 한국음향학회지, Vol. 28, No. 8, pp. 832-842, 2009
  • Automatic speech recognition (ASR) has been successfully applied to many real human-computer interaction (HCI) applications; however, its performance tends to degrade significantly in noisy environments. Audio-visual speech recognition (AVSR), which uses the acoustic signal together with lip motion, has recently attracted more attention because of its robustness to noise. In this paper, we describe a novel integration scheme for AVSR based on a late-integration approach. First, we introduce a robust reliability measurement for the audio and visual modalities using model-based and signal-based information: the model-based source measures the confusability of the vocabulary, while the signal-based source estimates the noise level. Second, the output probabilities of the audio and visual speech recognizers are each normalized before the final integration step, which combines the normalized output spaces with estimated weights. We evaluate the proposed method on a Korean isolated-word recognition system. The experimental results demonstrate the effectiveness and feasibility of the proposed system compared with conventional systems.
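A minimal sketch of the kind of late-integration step this abstract describes is shown below: each recognizer's per-word scores are normalized to a comparable output space and then combined with a stream weight. This is an illustration, not the authors' implementation; `lambda_audio` stands in for the reliability weight that, in the paper, would be derived from vocabulary confusability and the estimated noise level.

```python
import numpy as np

def late_fusion(audio_log_probs, visual_log_probs, lambda_audio):
    """Combine per-word scores from the audio and visual recognizers.

    audio_log_probs, visual_log_probs: 1-D arrays of log-likelihoods,
    one entry per vocabulary word.  lambda_audio in [0, 1] is the
    (assumed) audio reliability weight; the visual stream gets
    1 - lambda_audio.
    """
    # Normalize each stream to a posterior-like distribution (softmax),
    # so the two output spaces are comparable before weighting.
    a = np.exp(audio_log_probs - np.max(audio_log_probs))
    v = np.exp(visual_log_probs - np.max(visual_log_probs))
    a /= a.sum()
    v /= v.sum()

    # Log-linear combination with stream weights, then pick the best word.
    combined = lambda_audio * np.log(a) + (1.0 - lambda_audio) * np.log(v)
    return int(np.argmax(combined))
```

In such a scheme, the audio weight would typically be lowered as the estimated noise level rises, which is what gives the combined system its robustness over audio-only recognition.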

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems, Vol. 5, No. 1, pp. 61-69, 2007
  • In this paper, we developed not only a reliable sound localization system using three microphones, including a VAD (Voice Activity Detection) component, but also a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems for human-robot interaction, both to compensate for errors in localizing the speaker and to effectively reject unnecessary speech or noise signals entering from undesired directions. To verify the system's performance, we installed the proposed audio-visual system on a prototype robot called IROBAA (Intelligent ROBot for Active Audition) and demonstrated how the audio-visual system is integrated.
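As a rough illustration of the kind of processing such a system involves, the sketch below estimates a speaker's bearing from the time difference of arrival (TDOA) between one microphone pair, with a simple energy-based voice-activity check. The geometry, sampling rate, and thresholds are assumptions for illustration only, not the configuration used on IROBAA.

```python
import numpy as np

SOUND_SPEED = 343.0   # m/s (assumed)
MIC_DISTANCE = 0.2    # m between the two microphones (assumed)
SAMPLE_RATE = 16000   # Hz (assumed)

def is_voice_active(frame, energy_threshold=1e-4):
    """Very simple energy-based VAD on one frame of samples."""
    return np.mean(frame ** 2) > energy_threshold

def estimate_azimuth(mic_a, mic_b):
    """Estimate source direction (degrees) from one microphone pair
    using the lag of the cross-correlation peak (TDOA)."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = np.argmax(corr) - (len(mic_b) - 1)           # lag in samples
    tdoa = lag / SAMPLE_RATE                           # lag in seconds
    # Far-field assumption: sin(theta) = tdoa * c / d, clipped to [-1, 1].
    sin_theta = np.clip(tdoa * SOUND_SPEED / MIC_DISTANCE, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

A face-tracking estimate from the camera can then be cross-checked against this audio bearing, e.g. by discarding sound events whose direction disagrees with the tracked face, which is the kind of integration the paper uses to reject noise from undesired directions.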

Constructing a Noise-Robust Speech Recognition System Using Acoustic and Visual Information

  • 이종석;박철훈
    • 제어로봇시스템학회논문지, Vol. 13, No. 8, pp. 719-725, 2007
  • In this paper, we present an audio-visual speech recognition system for noise-robust human-computer interaction. Unlike usual speech recognition systems, our system utilizes the visual signal containing the speaker's lip movements along with the acoustic signal to obtain recognition performance that is robust against environmental noise. The procedures of acoustic speech processing, visual speech processing, and audio-visual integration are described in detail. Experimental results demonstrate that, by exploiting the complementary nature of the two signals, the constructed system significantly enhances recognition performance in noisy circumstances compared to acoustic-only recognition.

Audio-Visual Content Analysis Based Clustering for Unsupervised Debate Indexing

  • 금지수;이현수
    • 한국음향학회지, Vol. 27, No. 5, pp. 244-251, 2008
  • In this study, we propose an unsupervised debate indexing method using audio-visual information. The proposed method combines the results of audio clustering based on the BIC (Bayesian Information Criterion) with the results of video clustering based on a distance function. Combining the audio and visual information reduces the problems that arise when clustering with audio or video information alone and enables effective content-based analysis of debate data. To evaluate the proposed method, we measured performance on five different kinds of debate data using audio information alone, video information alone, and both kinds of information together. The experimental results confirm that the method combining audio and visual information is more effective for debate indexing than using either audio or video information alone.
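The BIC criterion used for the audio clustering can be illustrated with the standard single-Gaussian ΔBIC test for deciding whether two segments should be merged. This is a generic sketch of the criterion, not the authors' exact implementation, and the penalty weight `lam` is a tunable assumption.

```python
import numpy as np

def delta_bic(seg1, seg2, lam=1.0):
    """Standard single-Gaussian Delta-BIC between two feature segments.

    seg1, seg2: arrays of shape (n_frames, n_dims) of audio features
    (e.g. MFCCs).  A negative value suggests the two segments are better
    modeled jointly, i.e. they can be merged into one cluster.
    """
    both = np.vstack([seg1, seg2])
    n1, n2, n = len(seg1), len(seg2), len(both)
    d = both.shape[1]

    def logdet(x):
        # log-determinant of the sample covariance of a segment
        _, val = np.linalg.slogdet(np.cov(x, rowvar=False))
        return val

    # Gain in log-likelihood from keeping the segments split,
    # minus a model-complexity penalty.
    gain = 0.5 * (n * logdet(both) - n1 * logdet(seg1) - n2 * logdet(seg2))
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return gain - penalty
```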

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
    • Proceedings of the 한국정보컨버전스학회 2008 International Conference on Information Convergence, pp. 53-56, 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: 1) perceptual experiments (e.g., perception of expressivity and 3D movements in both the audio and visual channels); 2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head in virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question-and-answer system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components of the system are: RITEL, a question-and-answer system searching raw text, which produces a text (the answer) and attitudinal information; this attitudinal information is then processed to deliver expressive tags; the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, the visual and audio speech is played in a 3D audio-visual scene. The project also puts considerable effort into realistic visual and audio 3D rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move in the virtual scene with realistic 3D visual and audio rendering.
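As a rough illustration of the distributed, message-passing architecture described above, the sketch below shows one module forwarding a synthesis request (answer text, expressive tags, and viseme timings) to another module over TCP as JSON. The message fields and port are entirely hypothetical; the actual RITEL/VirChor protocol is not specified in the abstract.

```python
import json
import socket

def send_animation_request(answer_text, expressive_tags, visemes,
                           host="localhost", port=5005):
    """Forward one utterance to a (hypothetical) rendering module.

    The field names and the port number are assumptions made for this
    sketch, not the real ACQA message format.
    """
    message = {
        "text": answer_text,       # answer produced by the QA module
        "tags": expressive_tags,   # e.g. ["surprise", "emphasis"]
        "visemes": visemes,        # [(viseme_id, start_ms, end_ms), ...]
    }
    payload = json.dumps(message).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload)
```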

Comparison of Integration Methods of Speech and Lip Information in Bi-modal Speech Recognition

  • 박병구;김진영;최승호
    • 한국음향학회지, Vol. 18, No. 4, pp. 31-37, 1999
  • To improve the performance of speech recognition systems in noisy environments, bimodal speech recognition using visual and acoustic information has been proposed. Integration methods for visual and acoustic information can be broadly divided into pre-recognition integration and post-recognition integration. For pre-recognition integration, we compared a method using a fixed lip-parameter weight with a method using a variable lip-parameter weight that depends on the signal-to-noise ratio (SNR) of the speech. For post-recognition integration, we compared four methods: combining the visual and acoustic information independently, using the minimum-distance path information of the speech for visual recognition, using the minimum-distance path information of the video for speech recognition, and combining using the SNR information of the speech. Among the six integration methods, the pre-recognition integration method using parameter weights gave the best recognition results.
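The best-performing approach here, pre-recognition (feature-level) integration with a lip-parameter weight, can be sketched as below. The SNR-to-weight mapping is a hypothetical choice shown only to illustrate how a variable weight would be applied to the lip parameters before they are combined with the acoustic features; it is not the weighting used in the paper.

```python
import numpy as np

def lip_weight_from_snr(snr_db, min_w=0.2, max_w=1.0):
    """Hypothetical mapping: the noisier the audio (low SNR), the more
    weight the lip parameters receive in the combined feature vector."""
    # Linear ramp between 0 dB (very noisy) and 20 dB (clean); assumed values.
    return float(np.interp(snr_db, [0.0, 20.0], [max_w, min_w]))

def combine_features(acoustic_feats, lip_feats, snr_db):
    """Pre-recognition integration: scale the lip parameters by their
    weight and concatenate them with the acoustic features frame by frame.

    acoustic_feats: (n_frames, n_audio_dims); lip_feats: (n_frames, n_lip_dims).
    """
    w = lip_weight_from_snr(snr_db)
    return np.hstack([acoustic_feats, w * lip_feats])
```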

The Influence of SOA between Visual and Auditory Stimuli with Semantic Properties on the Integration of Audio-Visual Senses: Focusing on the Redundant Target Effect and the Visual Dominance Effect

  • 김보성;이영창;임동훈;김현우;민윤기
    • 감성과학, Vol. 13, No. 3, pp. 475-484, 2010
  • This study examined how the SOA (stimulus onset asynchrony) between visual and auditory stimuli with semantic properties affects audio-visual integration. We focused on two integration phenomena: the redundant target effect, in which responses are faster and more accurate when the target is presented in two or more modalities, and the visual dominance effect, in which responses to visual stimuli are faster and more accurate than responses to auditory stimuli. Visual-only, auditory-only, and multimodal target conditions were constructed, and reaction times and accuracy rates in each condition were measured. The results showed that the redundant target effect was unaffected by changes in the audio-visual SOA, whereas an auditory dominance effect appeared when the SOA between the two stimuli was 100 ms or more. These results suggest that the redundant target effect remains stable as the SOA between the two modalities changes, and that auditory stimuli produce dominant processing in behavior only when given a temporal advantage of about 100 ms or more over visual stimuli.

Comparison of McGurk Effect across Three Consonant-Vowel Combinations in Kannada

  • Devaraju, Dhatri S;U, Ajith Kumar;Maruthy, Santosh
    • Journal of Audiology & Otology, Vol. 23, No. 1, pp. 39-48, 2019
  • Background and Objectives: The influence of the visual stimulus on the auditory component in the perception of auditory-visual (AV) consonant-vowel syllables has been demonstrated in different languages. Inherent properties of the unimodal stimuli are known to modulate AV integration. The present study investigated how the amount of McGurk effect (an outcome of AV integration) varies across three different consonant combinations in the Kannada language, and also examined the role of unimodal syllable identification in the amount of McGurk effect. Subjects and Methods: Twenty-eight individuals performed an AV identification task with ba/ga, pa/ka, and ma/ṇa consonant combinations in AV congruent, AV incongruent (McGurk combination), audio-alone, and visual-alone conditions. Cluster analysis was performed on the identification scores for the incongruent stimuli to classify the individuals into two groups, one with high and the other with low McGurk scores. The differences in the audio-alone and visual-alone scores between these groups were compared. Results: The results showed significantly higher McGurk scores for ma/ṇa than for the ba/ga and pa/ka combinations in both the high and low McGurk score groups. No significant difference was noted between the ba/ga and pa/ka combinations in either group. Identification of /ṇa/ presented in the visual-alone condition correlated negatively with higher McGurk scores. Conclusions: The results suggest that the final percept following AV integration is not exclusively explained by unimodal identification of the syllables; other factors may also contribute to the final percept.
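The grouping step described in the Methods, clustering listeners into high- and low-McGurk-score groups from their incongruent-stimulus identification scores, could be reproduced with a simple two-cluster k-means as sketched below. The abstract does not name the clustering algorithm, so k-means is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_by_mcgurk_score(mcgurk_scores):
    """Cluster participants into two groups (high vs. low McGurk scores).

    mcgurk_scores: 1-D sequence with one identification score per
    participant for the incongruent (McGurk) stimuli.  Returns a boolean
    mask that is True for members of the high-score group.
    """
    scores = np.asarray(mcgurk_scores, dtype=float).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
    # Identify which cluster label corresponds to the higher mean score.
    high_label = int(np.argmax([scores[labels == k].mean() for k in (0, 1)]))
    return labels == high_label
```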
