• Title/Summary/Keyword: speaker tracking


Smart Virtual Sound Rendering System for Digital TV (지능형 입체음향 TV)

  • Kim, Sun-Min;Kong, Dong-Geon
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2008.04a / pp.939-946 / 2008
  • This paper concerns the development of a TV that delivers 3D sound optimized for the viewer's position, providing the surround effect of a 5.1-channel speaker setup with only the TV's two speakers. With conventional Speaker Virtualizer technology, 3D sound performance degrades sharply once the viewer leaves a specific position (the sweet spot). In contrast, the Adaptive Virtualizer proposed in this paper locates the viewer with an ultrasound-equipped remote control, uses the recognized position to update filters designed from the HRTF for that listening position, and corrects the output levels and time delays of the two speakers, thereby reproducing optimal 3D sound. For real-time implementation, we propose a technique that minimizes the computational load of the Speaker Virtualizer, design filters for various listening positions, and propose an Adaptive Virtualizer that updates the designed filters efficiently. We also present the ultrasound-based viewer localization technique and the integration of the overall system.

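The level- and delay-correction step described in the abstract can be sketched with simple geometry. The function below is an illustrative assumption (the names and the 1/r level-matching model are ours, not the paper's; the actual system uses HRTF-derived filters):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def level_delay_correction(listener, left_spk, right_spk):
    """Gain and delay corrections so both speaker signals arrive at
    the listener with matched level and at the same time.
    Positions are (x, y) coordinates in metres."""
    d_l = math.dist(listener, left_spk)
    d_r = math.dist(listener, right_spk)
    d_ref = max(d_l, d_r)                      # align to the farther speaker
    gain_l, gain_r = d_l / d_ref, d_r / d_ref  # attenuate the nearer one (1/r)
    delay_l = (d_ref - d_l) / SPEED_OF_SOUND   # delay the nearer one
    delay_r = (d_ref - d_r) / SPEED_OF_SOUND
    return (gain_l, delay_l), (gain_r, delay_r)
```

For a listener off-center toward the right speaker, the right channel is attenuated and delayed so the stereo image stays centered on the listener.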

Active Audition System based on 2-Dimensional Microphone Array (2차원 마이크로폰 배열에 의한 능동 청각 시스템)

  • Lee, Chang-Hun;Kim, Yong-Ho
    • Proceedings of the KIEE Conference / 2003.11b / pp.175-178 / 2003
  • This paper describes an active audition system for a robot-human interface in real environments. We propose a strategy for robust sound localization and for distant-talking speech recognition (60-300 cm) based on a 2-dimensional microphone array. We consider spatial features, namely the relation between position and interaural time differences, and realize a speaker tracking system using a fuzzy inference process based on inference rules generated from these spatial features.

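The mapping from an interaural time difference to a direction that such arrays rely on follows, under a far-field assumption, sin θ = c·τ/d. A minimal sketch (the function name and the clipping for robustness are our choices):

```python
import math

def azimuth_from_itd(itd_s, mic_spacing_m, c=343.0):
    """Far-field direction of arrival (degrees) from an interaural
    time difference: sin(theta) = c * itd / d, clipped to [-1, 1]."""
    s = max(-1.0, min(1.0, c * itd_s / mic_spacing_m))
    return math.degrees(math.asin(s))
```

A source 30° off axis for a 20 cm pair produces an ITD of d·sin(30°)/c ≈ 0.29 ms; a zero ITD means the source is broadside.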

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / v.5 no.1 / pp.61-69 / 2007
  • In this paper, we developed not only a reliable sound localization system, including a VAD (Voice Activity Detection) component, using three microphones, but also a face tracking system using a vision camera. Moreover, we propose a way to integrate the three systems for human-robot interaction, compensating for errors in the localization of a speaker and effectively rejecting unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system in a prototype robot, called IROBAA (Intelligent ROBot for Active Audition), and demonstrated how to integrate the audio-visual system.

Widerange Microphone System for Lecture using FMCW Radar Sensor (FMCW 레이더 센서 기반의 강의용 광역 마이크 시스템)

  • Oh, Woojin
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.4 / pp.611-614 / 2021
  • In this paper, we propose a wide-range array microphone for lectures in which the lecturer is tracked with a Frequency Modulated Continuous Wave (FMCW) radar sensor. Time Difference of Arrival (TDoA) is often used for audio tracking, but its accuracy is poor because the frequency of the voice is low and the relative frequency change is large. FMCW radar has a simple structure, is widely used to detect obstacles for vehicles, and can achieve a resolution of several centimeters. We show that the sensor is useful for detecting a speaker in an open area such as a lecture hall, and we propose a wide-range 4-element array microphone beamforming system. In experiments, the proposed system adequately tracks the lecturer's location and shows an 8.6 dB improvement over simply selecting the best microphone.
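The centimeter-scale resolution claim follows from the standard FMCW relations R = c·f_b·T/(2B) and ΔR = c/(2B). A small sketch with illustrative parameter values (the paper's actual sensor parameters are not given here):

```python
def fmcw_range(beat_hz, bandwidth_hz, sweep_time_s, c=3.0e8):
    """Target range from the beat frequency of a linear FMCW up-chirp:
    R = c * f_b * T / (2 * B)."""
    return c * beat_hz * sweep_time_s / (2.0 * bandwidth_hz)

def fmcw_range_resolution(bandwidth_hz, c=3.0e8):
    """Range resolution dR = c / (2 * B): centimetre-scale resolution
    requires a GHz-scale sweep bandwidth."""
    return c / (2.0 * bandwidth_hz)
```

For example, a 2.5 GHz sweep gives a 6 cm resolution, consistent with the "several centimeters" figure in the abstract.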

Voice Command-based Prediction and Follow of Human Path of Mobile Robots in AI Space

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence / v.26 no.2_1 / pp.225-230 / 2023
  • This research addresses sound-command-based human tracking for an autonomous cleaning mobile robot in a networked AI space. The differences among the traveling times of the sound command to each of three microphones are used to calculate the distance and orientation of the sound source from the cleaning mobile robot, which carries the microphone array. The cross-correlation between two signals is applied to detect their time difference, providing a more reliable and precise value than conventional methods. To generate the tracking direction toward the sound command, fuzzy rules are applied, and the results are used to control the cleaning mobile robot in real time. Experimental results show that the proposed algorithm works well even though the mobile robot knows little about the environment.
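The cross-correlation step for estimating the inter-microphone time difference can be sketched as follows (a generic implementation, not the paper's code):

```python
import numpy as np

def tdoa_crosscorr(ref, delayed, fs):
    """Estimate, in seconds, how much `delayed` lags `ref` from the
    peak of their full cross-correlation (positive = delayed lags)."""
    corr = np.correlate(delayed, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag / fs
```

With three microphones, the pairwise delays from this function feed the distance/orientation calculation described in the abstract.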

A study imitating human auditory system for tracking the position of sound source (인간의 청각 시스템을 응용한 음원위치 추정에 관한 연구)

  • Bae, Jeen-Man;Cho, Sun-Ho;Park, Chong-Kuk
    • Proceedings of the KIEE Conference / 2003.11c / pp.878-881 / 2003
  • To acquire a clear voice signal from a designated speaker in surveillance-camera, video-conference, or hands-free microphone applications, the speaker's position must first be estimated automatically so that interfering noise can be suppressed. The basic algorithm for sound-source localization measures the TDOA (Time Difference Of Arrival) of the same signal at two microphones. This work uses ADF (Adaptive Delay Filter) [4] and CPS (Cross Power Spectrum) [5], two of the principal TDOA analysis methods, and on that basis proposes real-time sound-source localization together with an improved model, NI-ADF, which can estimate the sound-source position in both directions. NI-ADF draws on the observation that human hearing responds through activated nerves once sound in a specific frequency band exceeds a certain level, and it exploits the inter-microphone level differences that arise from diffraction when the microphones are mounted on a system. Whereas the existing bidirectional adaptive-filter algorithm more than doubles the computation of the one-directional case, the proposed algorithm removes this weakness and estimates the source position in both directions in real time.

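The CPS approach estimates the delay from the phase of the cross-power spectrum. The sketch below adds the common PHAT weighting, which is our choice for illustration and not necessarily the variant used in the paper:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Delay (seconds) of `sig` relative to `ref`, from the phase of
    the cross-power spectrum with PHAT (phase transform) weighting."""
    n = len(sig) + len(ref)                 # zero-pad against wraparound
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cps = SIG * np.conj(REF)                # cross-power spectrum
    cps /= np.abs(cps) + 1e-12              # PHAT: keep only the phase
    cc = np.fft.irfft(cps, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs
```

The PHAT weighting whitens the spectrum so the correlation peak stays sharp in reverberant conditions, at the cost of amplifying noise-only bands.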

A Novel Two-Level Pitch Detection Approach for Speaker Tracking in Robot Control

  • Hejazi, Mahmoud R.;Oh, Han;Kim, Hong-Kook;Ho, Yo-Sung
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.89-92 / 2005
  • Using natural speech commands to control a robot is an interesting topic in the field of robotics. In this paper, our main focus is on verifying whether the speaker who gives a command is authorized to do so. Among the dynamic features of natural speech, the pitch period is one of the most important for characterizing speech signals, and it usually differs from person to person. However, current pitch detection techniques are still not at the desired level of accuracy and robustness: when the signal is noisy or contains multiple pitch streams, the performance of most techniques degrades. In this paper, we propose a two-level approach to pitch detection which, compared with standard pitch detection algorithms, not only increases accuracy but also makes the performance more robust to noise. In the first level, we discriminate voiced from unvoiced signals with a neural classifier that uses cepstrum sequences of speech as its input feature set. Voiced signals are then processed further in the second level by a modified AMDF-based pitch detection algorithm that determines their pitch periods precisely. Experimental results show that the accuracy of the proposed system is better than that of conventional pitch detection algorithms for speech signals in both clean and noisy environments.

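The second-level AMDF stage relies on the fact that the Average Magnitude Difference Function dips sharply at lags that are multiples of the pitch period. A minimal generic sketch (the paper's modifications to the standard AMDF are not reproduced here):

```python
import numpy as np

def amdf_pitch(frame, fs, f_lo=60.0, f_hi=400.0):
    """Pitch estimate (Hz) from the Average Magnitude Difference
    Function, searched over lags covering f_lo..f_hi."""
    lag_min = int(fs / f_hi)
    lag_max = min(int(fs / f_lo), len(frame) - 1)
    amdf = [np.mean(np.abs(frame[lag:] - frame[:-lag]))
            for lag in range(lag_min, lag_max + 1)]
    best_lag = lag_min + int(np.argmin(amdf))  # deepest dip = pitch period
    return fs / best_lag
```

In practice this runs only on frames the first-level classifier marks as voiced, since the AMDF has no meaningful dip for unvoiced speech.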

Real Time Speaker Close-Up and Tracking System Using the Lip Varying Informations (입술 움직임 변화량을 이용한 실시간 화자의 클로즈업 및 트레킹 시스템 구현)

  • 양운모;장언동;윤태승;곽내정;안재형
    • Proceedings of the Korea Multimedia Society Conference / 2002.05d / pp.547-552 / 2002
  • This paper implements a real-time speaker close-up system that uses lip-movement information in input video containing multiple people. A speaker is detected in video captured by a color CCD camera, and a second camera then closes up on the speaker using the lip-movement information. The implemented system detects each person's face and lip regions using skin-color and shape information, then identifies the speaker from the variation of the lip region. A PTZ (Pan/Tilt/Zoom) camera, controlled over an RS-232C serial port, is used to close up on the detected speaker. Experiments show that the system correctly detects the speaker in video with three or more people and can track a moving speaker's face.

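The speaker-selection idea above, picking the person whose lip region changes most between frames, can be sketched as below; the ROI format and function name are hypothetical:

```python
import numpy as np

def pick_speaker(prev_frame, cur_frame, lip_rois):
    """Index of the candidate whose lip region changes most between
    two consecutive grayscale frames.
    lip_rois: list of (row, col, height, width) boxes, one per person."""
    scores = []
    for r, c, h, w in lip_rois:
        prev = prev_frame[r:r + h, c:c + w].astype(float)
        cur = cur_frame[r:r + h, c:c + w].astype(float)
        scores.append(np.abs(cur - prev).mean())  # mean absolute change
    return int(np.argmax(scores))
```

The winning index would then drive the PTZ camera commands over the serial port.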

Sensibility Classification Algorithm of EEGs using Multi-template Method (다중 템플릿 방법을 이용한 뇌파의 감성 분류 알고리즘)

  • Kim Dong-Jun
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.12 / pp.834-838 / 2004
  • This paper proposes an algorithm for EEG pattern classification using the multi-template method, a kind of speaker adaptation method from speech signal processing. 10-channel EEG signals are collected in various environments. The linear prediction coefficients of the EEGs are extracted as the feature parameters of human sensibility, and the classification algorithm is built on neural networks. Using EEGs recorded in comfortable or uncomfortable seats, the proposed algorithm showed about 75% classification performance in subject-independent tests. In tests using EEG signals under varying room temperature and humidity, the algorithm tracked changes in pleasantness well, and the subject-independent tests produced performance similar to the subject-dependent ones.
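The linear-prediction feature extraction can be sketched with the autocorrelation method and a Levinson-Durbin recursion; this is a generic implementation for illustration, not the paper's code:

```python
import numpy as np

def lpc(x, order):
    """Linear prediction coefficients [1, a1, ..., ap] via the
    autocorrelation method and Levinson-Durbin recursion."""
    n = len(x)
    r = np.array([float(np.dot(x[:n - k], x[k:])) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], e = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / e                     # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        e *= 1.0 - k * k                 # prediction error update
    return a
```

For each EEG channel, a short vector of such coefficients would serve as the sensibility feature fed to the neural network.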

Design of Smart Device Assistive Emergency WayFinder Using Vision Based Emergency Exit Sign Detection

  • Lee, Minwoo;Mariappan, Vinayagam;Mfitumukiza, Joseph;Lee, Junghoon;Cho, Juphil;Cha, Jaesang
    • Journal of Satellite, Information and Communications / v.12 no.1 / pp.101-106 / 2017
  • Emergency exit signs are installed in buildings such as shopping malls, hospitals, industrial sites, and government complexes to mark escape routes and help people evacuate easily during emergencies. Under conditions such as smoke, fire, poor lighting, or a crowded stampede, however, it is difficult for people to recognize the exit signs and emergency doors. This paper proposes automatic emergency exit sign recognition on a smart device to find the exit direction. The proposed approach is a computer-vision-based smartphone application that detects exit signs with the device camera and presents the escape direction in visible and audible form. A CAMShift object tracking approach is used to detect the exit sign, and the direction information is extracted with a template matching method. The direction is stored as text, synthesized to an audible acoustic signal via text-to-speech, and rendered on the device speaker as escape guidance. The results are analyzed with respect to visual element selection, exit sign appearance design, and sign placement in buildings, and can serve as a common reference for wayfinder systems.
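The direction-extraction step uses template matching. In practice a library routine such as OpenCV's matchTemplate would be used; the naive normalized cross-correlation below is an illustrative from-scratch sketch:

```python
import numpy as np

def match_template(image, template):
    """Top-left (row, col) of the best normalized cross-correlation
    match of `template` inside grayscale `image`, plus its score."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

Matching left-arrow and right-arrow templates against the tracked sign region and comparing their scores would yield the escape direction to announce.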