• Title/Summary/Keyword: speaker detection

Search Results: 108

Implementation of a Robust Speech Recognizer in Noisy Car Environment Using a DSP (DSP를 이용한 자동차 소음에 강인한 음성인식기 구현)

  • Chung, Ik-Joo
    • Speech Sciences / v.15 no.2 / pp.67-77 / 2008
  • In this paper, we implemented a robust speech recognizer on the TMS320VC33 DSP. For this implementation, we built a speech and noise database suited to a recognizer that uses spectral subtraction for noise removal. The recognizer has an explicit structure in that the speech signal is enhanced through spectral subtraction before endpoint detection and feature extraction. This keeps the operation of the recognizer clear and yields HMM models with minimal model mismatch. Since the recognizer was developed for controlling car facilities and for voice dialing, it has two recognition engines: a speaker-independent one for controlling car facilities and a speaker-dependent one for voice dialing. We adopted a conventional DTW algorithm for the latter and a continuous HMM for the former. Through various off-line recognition tests, we selected optimal settings of several recognition parameters for a resource-limited embedded recognizer, which led to HMM models with three mixtures per state. The car-noise-added speech database is also enhanced using spectral subtraction before HMM parameter estimation, to reduce the model mismatch caused by the nonlinear distortion that spectral subtraction introduces. The hardware module developed includes a microcontroller for the host interface, which handles the protocol between the DSP and the host.

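A minimal Python sketch of the spectral-subtraction front end this abstract relies on (not the authors' TMS320VC33 code): the function name, frame length, over-subtraction factor alpha, and spectral floor below are illustrative assumptions.

```python
import numpy as np

def spectral_subtraction(noisy, noise_only, frame_len=256, hop=128,
                         alpha=2.0, floor=0.02):
    """Generic magnitude spectral subtraction (illustrative parameters).

    noisy      : 1-D array holding the noisy speech signal
    noise_only : 1-D array of noise-only samples (e.g. leading silence)
    alpha      : over-subtraction factor
    floor      : spectral floor that limits musical noise
    """
    window = np.hanning(frame_len)

    # Average noise magnitude spectrum estimated from the noise-only segment.
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noise_only[i:i + frame_len] * window))
         for i in range(0, len(noise_only) - frame_len + 1, hop)], axis=0)

    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame_len + 1, hop):
        frame = noisy[i:i + frame_len] * window
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)

        # Subtract the scaled noise estimate and apply a spectral floor.
        clean_mag = np.maximum(mag - alpha * noise_mag, floor * noise_mag)

        # Reconstruct with the noisy phase and overlap-add.
        out[i:i + frame_len] += np.fft.irfft(clean_mag * np.exp(1j * phase))
    return out
```

In the pipeline described above, enhancement of this kind would run before endpoint detection and feature extraction, and the same processing would be applied to the noise-added training speech before HMM parameter estimation.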

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / v.5 no.1 / pp.61-69 / 2007
  • In this paper, we developed not only a reliable sound localization system, including a VAD (Voice Activity Detection) component using three microphones, but also a face tracking system using a vision camera. Moreover, we propose a way to integrate the three systems for human-robot interaction, in order to compensate for errors in localizing a speaker and to effectively reject unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system on a prototype robot called IROBAA (Intelligent ROBot for Active Audition) and demonstrated how to integrate the audio-visual system.
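
A hypothetical Python sketch of the audio-visual gating idea in this abstract: accept a segment only when voice activity is detected and the acoustic direction agrees with a tracked face. The function names, energy threshold, and mismatch tolerance are assumptions, not the paper's actual localization or VAD algorithms.

```python
import numpy as np

def energy_vad(frame, threshold=0.01):
    """Very simple energy-based voice activity decision (illustrative)."""
    return float(np.mean(np.square(frame))) > threshold

def accept_speech(sound_azimuth_deg, face_azimuth_deg, vad_active,
                  max_mismatch_deg=15.0):
    """Accept speech only when VAD fires and the sound direction matches a
    tracked face; otherwise reject it as noise from an undesired direction."""
    if not vad_active or face_azimuth_deg is None:
        return False
    return abs(sound_azimuth_deg - face_azimuth_deg) <= max_mismatch_deg
```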

Application of Block On-Line Blind Source Separation to Acoustic Echo Cancellation

  • Ngoc, Duong Q.K.;Park, Chul;Nam, Seung-Hyon
    • The Journal of the Acoustical Society of Korea / v.27 no.1E / pp.17-24 / 2008
  • Blind speech separation (BSS) is well known as a powerful technique for speech enhancement in many real-world environments. In this paper, we propose a new application of BSS: acoustic echo cancellation (AEC) in a car environment. For this purpose, we develop a block-online BSS algorithm that provides more robust separation than a batch version in changing environments with moving speakers. Simulation results using real-world recordings show that the block-online BSS algorithm is very robust to speaker movement. When combined with AEC, simulation results using real audio recordings in a car confirm the expectation that BSS improves double-talk detection and echo suppression.
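
The block-online adaptation idea can be illustrated with a simplified natural-gradient BSS rule for an instantaneous mixing model; the paper addresses the harder convolutive (real acoustic) case, so the function below, its block handling, and the step size mu are only assumptions sketching block-wise updates.

```python
import numpy as np

def block_online_bss(x_blocks, n_src, mu=0.1):
    """Simplified block-online BSS: natural-gradient ICA updated per block.

    x_blocks : iterable of arrays, each of shape (n_src, block_len)
    Yields the separated signals for each block as the unmixing matrix adapts.
    """
    W = np.eye(n_src)                      # unmixing matrix, refined block by block
    for x in x_blocks:
        y = W @ x                          # separate the current block
        phi = np.tanh(y)                   # score nonlinearity for speech-like sources
        block_len = x.shape[1]
        # Natural-gradient update averaged over the block:
        # dW = (I - E[phi(y) y^T]) W
        dW = (np.eye(n_src) - (phi @ y.T) / block_len) @ W
        W += mu * dW
        yield y
```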

Vocal Effort Detection Based on Spectral Information Entropy Feature and Model Fusion

  • Chao, Hao;Lu, Bao-Yun;Liu, Yong-Li;Zhi, Hui-Lai
    • Journal of Information Processing Systems / v.14 no.1 / pp.218-227 / 2018
  • Vocal effort detection is important for both robust speech recognition and speaker recognition. In this paper, a spectral information entropy feature, which carries more salient information about the vocal effort level, is first proposed. Then, a model fusion method based on complementary models is presented to recognize the vocal effort level. Experiments are conducted on an isolated-word test set, and the results show that spectral information entropy gives the best performance among the three kinds of features considered. Meanwhile, the recognition accuracy across all vocal effort levels reaches 81.6%, demonstrating the potential of the proposed method.
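
A common way to compute a frame-level spectral information entropy feature is sketched below; the paper's exact feature extraction and its model fusion step are not reproduced, and the FFT size and window are illustrative choices.

```python
import numpy as np

def spectral_entropy(frame, n_fft=512, eps=1e-12):
    """Shannon entropy of the normalized power spectrum of one speech frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n=n_fft)) ** 2
    p = spec / (np.sum(spec) + eps)               # treat the spectrum as a distribution
    return float(-np.sum(p * np.log2(p + eps)))   # entropy in bits
```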

Lie Detection Technique using Video from the Ratio of Change in the Appearance

  • Hossain, S.M. Emdad;Fageeri, Sallam Osman;Soosaimanickam, Arockiasamy;Kausar, Mohammad Abu;Said, Aiman Moyaid
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.165-170 / 2022
  • Lying is a nuisance to all, and all liars know it is a nuisance but still keep on lying. People are often unsure how to escape from a lie or how to detect a liar. In this research we aim to establish a dynamic platform that identifies liars through video analysis, specifically by calculating the ratio of change in their appearance when they lie. The platform will be developed using a machine learning algorithm along with a dynamic classifier to classify the liar. For the experimental analysis, the dataset is processed in two classes (people lying and people telling the truth), and both sets of facial-appearance parameters will be stored for future identification. Similarly, standard parameters will be built for truthful speakers and for liars. We hope these standard parameters will make it possible to diagnose a liar without pre-captured data.
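
The abstract does not specify how the "ratio of change in the appearance" is computed; the sketch below is only one hypothetical interpretation, a frame-to-frame change ratio over cropped face images, with an arbitrary intensity threshold.

```python
import numpy as np

def appearance_change_ratio(face_crops, intensity_delta=10.0):
    """Fraction of pixels that change noticeably between consecutive face crops.

    face_crops : list of equally sized grayscale face images (2-D arrays)
    """
    ratios = []
    for prev, cur in zip(face_crops[:-1], face_crops[1:]):
        diff = np.abs(cur.astype(float) - prev.astype(float))
        ratios.append(float(np.mean(diff > intensity_delta)))
    return np.array(ratios)
```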

Pitch Period Detection Algorithm Using Modified AMDF (변형된 AMDF를 이용한 피치 주기 검출 알고리즘)

  • Seo Hyun-Soo;Bae Sang-Bum;Kim Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.1 / pp.23-28 / 2006
  • The pitch period, an important factor in speech signal processing, is used in various applications such as speech recognition, speaker identification, and speech analysis and synthesis, so many pitch detection algorithms have been studied. The AMDF, one of the pitch period detection algorithms, chooses the time interval between valley points as the pitch period, but selecting the valley point used to detect the pitch period increases the complexity of the algorithm. In this paper we therefore propose a simple algorithm using a rotation transform of the AMDF that detects the global minimum valley point as the pitch period of the speech signal, and we compare it with existing methods through simulation.
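
A minimal Python sketch of plain AMDF pitch-period detection using the global minimum valley, the baseline this paper (and the related AMDF entries below) modifies; the rotation-transform modification itself is not reproduced, and the 60-400 Hz lag range is an assumption.

```python
import numpy as np

def amdf_pitch_period(frame, fs, f_min=60.0, f_max=400.0):
    """Return the lag (in samples) of the global AMDF minimum in the pitch range.

    The frame should be longer than fs / f_min samples.
    """
    n = len(frame)
    lag_min, lag_max = int(fs / f_max), int(fs / f_min)
    amdf = np.array([
        np.mean(np.abs(frame[:n - k] - frame[k:]))   # average magnitude difference at lag k
        for k in range(lag_min, lag_max + 1)
    ])
    return lag_min + int(np.argmin(amdf))             # global minimum valley as the pitch period
```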

A Study on Pitch Period Detection of Speech Signal Using Modified AMDF (변형된 AMDF를 이용한 음성 신호의 피치 주기 검출에 관한 연구)

  • Seo, Hyun-Soo;Bae, Sang-Bum;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.515-519 / 2005
  • The pitch period, an important factor in speech signal processing, is used in various applications such as speech recognition, speaker identification, and speech analysis and synthesis, so many pitch detection algorithms have been studied. The AMDF, one of the pitch period detection algorithms, chooses the time interval between valley points as the pitch period, but selecting the valley point used to detect the pitch period increases the complexity of the algorithm. In this paper we therefore propose a simple algorithm using a modified AMDF that detects the global minimum valley point as the pitch period of the speech signal, and we compare it with existing methods through simulation.


Spatio-Temporal Residual Networks for Slide Transition Detection in Lecture Videos

  • Liu, Zhijin;Li, Kai;Shen, Liquan;Ma, Ran;An, Ping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.8 / pp.4026-4040 / 2019
  • In this paper, we present an approach for detecting slide transitions in lecture videos by introducing spatio-temporal residual networks. Given a lecture video that records the digital slides, the speaker, and the audience with multiple cameras, our goal is to find keyframes where the slide content changes. Since temporal dependency among video frames is important for detecting slide changes, 3D convolutional networks are regarded as an efficient approach for learning spatio-temporal features in videos. However, a 3D ConvNet requires long training time and a large amount of memory. Hence, we utilize ResNet, which is easy to optimize, to ease the training of the network. Consequently, we present a novel ConvNet architecture based on 3D ConvNets and ResNet for slide transition detection in lecture videos. Experimental results show that the proposed architecture achieves better accuracy than other slide progression detection approaches.
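
A minimal PyTorch sketch of the kind of spatio-temporal residual building block the abstract describes (3-D convolutions with a skip connection); the layer sizes and block layout are illustrative, not the paper's exact architecture.

```python
import torch.nn as nn

class Residual3DBlock(nn.Module):
    """3-D convolutional residual block for spatio-temporal features."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # the skip connection is what eases optimization
```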

A Study on the Robust Pitch Period Detection Algorithm in Noisy Environments (소음환경에 강인한 피치주기 검출 알고리즘에 관한 연구)

  • Seo Hyun-Soo;Bae Sang-Bum;Kim Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2006.05a / pp.481-484 / 2006
  • Pitch period detection algorithms are applied in various speech signal processing fields such as speech recognition, speaker identification, and speech analysis and synthesis, and many pitch detection algorithms in the time and frequency domains have been studied. The AMDF (average magnitude difference function), one of these pitch period detection algorithms, chooses the time interval between valley points as the pitch period. The AMDF is fast to compute, but selecting the valley point used to detect the pitch period increases the complexity of the algorithm. To be applicable in the real world, pitch period detection algorithms must also be robust against noise such as that generated in subway environments. In this paper we propose a modified AMDF algorithm that detects the global minimum valley point as the pitch period of speech signals, and we use speech signals recorded in noisy environments as test signals.


Speech Activity Decision with Lip Movement Image Signals (입술움직임 영상신호를 고려한 음성존재 검출)

  • Park, Jun;Lee, Young-Jik;Kim, Eung-Kyeu;Lee, Soo-Jong
    • The Journal of the Acoustical Society of Korea / v.26 no.1 / pp.25-31 / 2007
  • This paper describes an attempt to prevent external acoustic noise from being misrecognized as a speech recognition target. To this end, the speech activity detection stage of the recognizer checks the lip movement image signal of the speaker in addition to the acoustic energy. First, successive images are obtained through a PC camera and it is determined whether the lips are moving; the lip movement image data are stored in shared memory and shared with the recognition process. Then, in the speech activity detection stage, which is the preprocessing phase of speech recognition, the data stored in shared memory are consulted to verify whether the acoustic energy comes from the speech of the speaker. The speech recognition processor and the image processor were connected and tested successfully: the speech recognition result was output normally when the user faced the camera and spoke, and was not output when the user spoke without facing the camera. That is, even when acoustic energy is present, the input is regarded as acoustic noise if no lip movement image is identified.
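
The decision rule in this abstract reduces to requiring both acoustic energy and lip movement; the sketch below states that rule in Python with an illustrative energy threshold (the paper's actual thresholds and image processing are not given).

```python
def speech_activity_decision(acoustic_energy, lip_moving, energy_threshold=0.01):
    """Treat the input as speech only when energy is high AND lips are moving."""
    if acoustic_energy <= energy_threshold:
        return False          # too quiet to be speech
    if not lip_moving:
        return False          # energy without lip movement -> external acoustic noise
    return True               # energy plus lip movement -> speech present
```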