• Title/Summary/Keyword: speech signal processing


Speech Feature Extraction Using Auditory Model (청각모델을 이용한 음성신호의 특징 추출 방법에 관한 연구)

  • Park, Kyu-Hong;Kim, Young-Ho;Jung, Sang-Kuk;Rho, Seung-Yong
    • Proceedings of the KIEE Conference / 1998.07g / pp.2259-2261 / 1998
  • Auditory models capable of achieving human performance would provide a basis for realizing effective speech processing systems. Perceptual invariance to adverse signal conditions (noise, microphone and channel distortions, room reverberation) may provide a basis for robust speech recognition and high-efficiency speech coding. An auditory model that simulates the auditory periphery up through the auditory-nerve level, and a new distance measure defined as the angle between vectors, are described.

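
The distance measure described above, the angle between feature vectors, can be sketched as follows (a generic angular-distance implementation, not necessarily the paper's exact formulation):

```python
import math

def angle_distance(u, v):
    """Distance between two feature vectors, defined as the angle
    (in radians) between them: d = arccos(<u,v> / (||u|| ||v||))."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    cos = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.acos(cos)
```

Unlike Euclidean distance, this measure ignores overall vector magnitude, so it is insensitive to signal level: parallel vectors give 0 and orthogonal vectors give pi/2.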

Voice Activity Detection Algorithm Using Speech Periodicity and QSNR in Noisy Environment (음성의 주기성과 QSNR을 이용한 잡음환경에서의 음성검출 알고리즘)

  • Jeong, Ju-Hyun;Song, Hwa-Jeon;Kim, Hyung-Soon
    • Proceedings of the KSPS conference / 2005.11a / pp.59-62 / 2005
  • Voice activity detection (VAD) is important in many areas of speech processing technology. Speech/nonspeech discrimination in noisy environments is a difficult task because the feature parameters used for VAD are sensitive to the surrounding environment, so VAD performance is severely degraded at low signal-to-noise ratios (SNRs). In this paper, a new VAD algorithm is proposed based on the degree of voicing and the Quantile SNR (QSNR). These two feature parameters are more robust in noisy environments than features such as energy and spectral entropy. The effectiveness of the proposed algorithm is evaluated under the diverse noisy environments of the Aurora2 DB. According to our experiments, the proposed VAD outperforms the ETSI Advanced Front-end VAD.

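
The quantile-SNR idea can be sketched as follows: the noise floor is estimated as a low quantile of the frame-energy distribution, so no explicit speech/nonspeech decision is needed to track the noise. The quantile value and the dB form here are assumptions, not the paper's exact QSNR definition:

```python
import math

def quantile_snr(frame_energies, q=0.25):
    """Quantile-based SNR sketch: take a low quantile of the frame
    energies as the noise floor (low-energy frames are mostly
    noise-only), then express each frame's energy relative to that
    floor in dB."""
    ordered = sorted(frame_energies)
    idx = min(len(ordered) - 1, int(q * len(ordered)))
    noise_floor = max(ordered[idx], 1e-12)   # guard against log(0)
    return [10.0 * math.log10(max(e, 1e-12) / noise_floor)
            for e in frame_energies]

# Frames 3 and 4 carry speech energy; their QSNR stands well above
# that of the noise-only frames, giving a simple VAD decision variable.
energies = [1.0, 1.2, 0.9, 50.0, 60.0, 1.1]
snrs = quantile_snr(energies)
```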

A Study on Multi-Pulse Speech Coding Method by using Selected Information in a Frequency Domain (주파수 영역의 선택정보를 이용한 멀티펄스 음성부호화 방식에 관한 연구)

  • Lee See-Woo
    • Journal of Internet Computing and Services / v.7 no.4 / pp.57-66 / 2006
  • In this paper, I propose a new multi-pulse speech coding method (FBD-MPC: Frequency Band Division MPC) that uses TSIUVC (Transition Segment Including UnVoiced Consonant) searching, extraction, and approximation-synthesis in the frequency domain. As a result, the extraction rates of TSIUVC are 84.8% (plosive), 94.9% (fricative), and 92.3% (affricate) for female voices, and 88% (plosive), 94.9% (fricative), and 92.3% (affricate) for male voices. High-quality approximation-synthesis waveforms within a TSIUVC are also obtained using frequency information below 0.547 kHz and above 2.813 kHz. MPC using voiced/unvoiced switching information is compared with FBD-MPC using voiced/silence/TSIUVC switching information; the synthesized speech of FBD-MPC shows better speech quality than that of MPC.

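
The multi-pulse excitation search that MPC-type coders build on can be sketched generically. This is the classic greedy analysis-by-synthesis pulse search, not the paper's FBD-MPC itself:

```python
def multipulse_search(target, h, n_pulses):
    """Greedy multi-pulse excitation search: at each step, place one
    pulse at the position where the synthesis-filter impulse response h
    best matches the remaining residual, with the least-squares optimal
    amplitude. (Edge truncation of h is approximated for simplicity.)"""
    n = len(target)
    residual = list(target)
    hh = sum(x * x for x in h)          # energy of impulse response
    pulses = []
    for _ in range(n_pulses):
        best_pos, best_amp, best_gain = 0, 0.0, -1.0
        for pos in range(n):
            # correlation of the residual with h placed at pos
            c = sum(residual[pos + k] * h[k]
                    for k in range(min(len(h), n - pos)))
            gain = c * c / hh           # error reduction for this pulse
            if gain > best_gain:
                best_pos, best_amp, best_gain = pos, c / hh, gain
        pulses.append((best_pos, best_amp))
        # subtract this pulse's synthesized contribution
        for k in range(min(len(h), n - best_pos)):
            residual[best_pos + k] -= best_amp * h[k]
    return pulses
```

If the target is exactly one scaled, shifted copy of the impulse response, a single search step recovers that position and amplitude.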

Signal Processing of Disordered Speech (장애음성 신호처리)

  • 조철우
    • Proceedings of the IEEK Conference / 1999.11a / pp.647-650 / 1999
  • This paper covers the various signal processing methods needed to diagnose and improve disordered speech using speech signal processing techniques. Focusing on vocal-fold disorders among voice disorders, it introduces the phenomena that appear in the signal and signal processing methods that exploit them, and presents the diagnosis of vocal-fold diseases from speech as an application case.

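
Among the signal measures commonly used for vocal-fold disorder assessment is period perturbation (jitter); a minimal sketch follows (illustrative only — the abstract does not name the paper's exact measures):

```python
def jitter_percent(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, relative to the mean period. Healthy sustained
    phonation yields low jitter; vocal-fold pathology raises it."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    mean_period = sum(periods) / len(periods)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_period

# Pitch periods in ms from a sustained vowel; mild cycle-to-cycle
# perturbation around a 10 ms mean gives jitter of about 1.4%.
print(jitter_percent([10.0, 10.1, 9.9, 10.05, 9.95]))
```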

Double Talk Processing using Blind Signal Separation in Acoustic Echo Canceller (음향반향제거기에서 암묵신호분리를 이용한 동시통화처리)

  • Lee, Haengwoo
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.1 / pp.43-50 / 2016
  • This paper presents an acoustic echo canceller that solves the double-talk problem by using blind signal separation. An acoustic echo canceller may deteriorate or diverge during double-talk periods, so blind signal separation is used to detect double talk by separating the near-end speech signal from the mixed microphone signal. The blind signal separation extracts the near-end signal from dual microphones by iterative computation using second-order statistics in a closed reverberant environment. With this method, the acoustic echo canceller operates regardless of double talk. We verified the performance of the proposed acoustic echo canceller in computer simulations. The results show that the canceller with this algorithm detects double-talk periods well and then operates stably, without divergence of its coefficients, after the double talk ends. Its merits are simplicity and stability.
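
A second-order-statistics separation of the kind the paper relies on can be sketched with an AMUSE-style algorithm: whiten the two microphone signals with the zero-lag covariance, then rotate so that a lagged covariance becomes diagonal. This is a simplified instantaneous-mixture sketch; the paper's iterative, reverberant-environment method is more involved:

```python
import math

def cov(x, y, lag=0):
    """Sample covariance E[x(t+lag) * y(t)] of zero-mean sequences."""
    n = len(x) - lag
    return sum(x[i + lag] * y[i] for i in range(n)) / n

def eig2(a, b, c):
    """Eigen-rotation of the symmetric 2x2 matrix [[a, b], [b, c]]:
    returns (l1, l2, theta) with eigenvectors (cos t, sin t)
    and (-sin t, cos t)."""
    t = 0.5 * math.atan2(2.0 * b, a - c)
    ct, st = math.cos(t), math.sin(t)
    l1 = a * ct * ct + 2 * b * ct * st + c * st * st
    l2 = a * st * st - 2 * b * ct * st + c * ct * ct
    return l1, l2, t

def amuse_2x2(x1, x2, lag=1):
    """Second-order blind source separation for two mixed signals.
    Recovers the sources up to permutation, sign, and scale when they
    have different temporal (autocorrelation) structure -- e.g.
    near-end speech vs. far-end echo."""
    m1, m2 = sum(x1) / len(x1), sum(x2) / len(x2)
    x1 = [v - m1 for v in x1]
    x2 = [v - m2 for v in x2]
    # 1) whitening: z = D^{-1/2} E^T x using the zero-lag covariance
    l1, l2, t = eig2(cov(x1, x1), cov(x1, x2), cov(x2, x2))
    ct, st = math.cos(t), math.sin(t)
    z1 = [(ct * a + st * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
    z2 = [(-st * a + ct * b) / math.sqrt(l2) for a, b in zip(x1, x2)]
    # 2) rotate to diagonalize the symmetrized lag-tau covariance of z
    c12 = 0.5 * (cov(z1, z2, lag) + cov(z2, z1, lag))
    _, _, t2 = eig2(cov(z1, z1, lag), c12, cov(z2, z2, lag))
    c2, s2 = math.cos(t2), math.sin(t2)
    y1 = [c2 * a + s2 * b for a, b in zip(z1, z2)]
    y2 = [-s2 * a + c2 * b for a, b in zip(z1, z2)]
    return y1, y2
```

Given two microphone captures of near-end speech mixed with far-end echo, the two outputs estimate the underlying signals up to permutation and scale; the double-talk decision can then be made on the recovered near-end channel.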

Fundamental Signal Processing in NonUniformly Sampled Speech Signal (비균일 표본화된 음성 신호에서의 기본적인 신호처리)

  • 임재열
    • Proceedings of the Acoustical Society of Korea Conference / 1995.06a / pp.235-238 / 1995
  • A speech signal nonuniformly sampled at its extrema is represented by the dual structure of an amplitude sequence and an interval sequence, so existing signal processing methods based on uniformly sampled signals cannot be applied directly. In this paper, relations for energy, magnitude, and zero-crossing rate are derived directly from the nonuniformly sampled speech signal, and by examining their characteristics we confirm that preprocessing parameters such as energy, magnitude, and zero-crossing rate, corresponding to those of uniformly sampled signals, can also be estimated from nonuniformly sampled speech.

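
The idea of computing preprocessing parameters directly from the amplitude/interval representation can be sketched as follows. The interval-weighted estimates here are an illustrative assumption; the paper derives the exact relations:

```python
def nonuniform_params(amps, intervals):
    """Preprocessing parameters estimated directly from an
    extremum-sampled speech signal, given its amplitude sequence and
    interval sequence: each extremum amplitude is weighted by the
    interval it spans, and zero crossings are sign changes between
    consecutive extrema."""
    total = sum(intervals)
    energy = sum(a * a * d for a, d in zip(amps, intervals)) / total
    magnitude = sum(abs(a) * d for a, d in zip(amps, intervals)) / total
    zcr = sum(1 for a, b in zip(amps, amps[1:]) if a * b < 0) / total
    return energy, magnitude, zcr
```

No reconstruction to a uniform grid is needed; the dual sequences alone yield the usual frame-level parameters.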

A Study on the Automatic Howling Signal Detection Algorithm for Speech Sound Reinforcement (음성 확성을 위한 하울링 신호 자동 검출기법 연구)

  • Kim, Kyung-Taek;Kim, Dong-Gyu;Roh, Yong-Wan;Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.246-249 / 2005
  • In a sound reinforcement system, howling is the main factor that degrades speech intelligibility by limiting the speech level. Because the usual remedy is to minimize acoustic feedback by lowering the gain of the howling frequency band, finding the howling frequency is the most essential element of howling control. This paper therefore presents a technique for automatically detecting the howling frequency. The degree to which an incoming audio signal satisfies the characteristics of a howling signal is defined as a "howling index" parameter; based on this index, the occurrence of howling is judged, and the frequency with the maximum amplitude in a signal judged to be howling is output as the howling frequency. To verify the technique, an automatic howling detection program was implemented, and in experiments it detected more than 95% of all howling signals.


A Novel Computer Human Interface to Remotely Pick up a Moving Human's Voice Clearly by Integrating Real-time Face Tracking and a Microphone Array

  • Hiroshi Mizoguchi;Takaomi Shigehara;Yoshiyasu Goto;Hidai, Ken-ichi;Taketoshi Mishima
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1998.10a / pp.75-80 / 1998
  • This paper proposes a novel computer-human interface, named Virtual Wireless Microphone (VWM), which utilizes computer vision and signal processing. It integrates real-time face tracking and sound signal processing. VWM is intended as a speech input method for human-computer interaction, especially for an autonomous intelligent agent that interacts with humans, such as a digital secretary. Utilizing VWM, the agent can clearly hear its human master's voice remotely, as if a wireless microphone were placed just in front of the master.

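
Steering a microphone array toward a tracked face is typically done with delay-and-sum beamforming; a minimal sketch follows (integer-sample delays assumed for simplicity; the paper's exact array processing is not specified in the abstract):

```python
def delay_and_sum(mic_signals, delays):
    """Delay-and-sum beamforming sketch: align each microphone signal
    by the per-channel delay implied by the tracked speaker position,
    then average. Sound from that position adds coherently, while
    sound from other directions is attenuated. (Real systems
    interpolate fractional delays.)"""
    n = min(len(s) - d for s, d in zip(mic_signals, delays))
    return [sum(s[d + t] for s, d in zip(mic_signals, delays))
            / len(mic_signals)
            for t in range(n)]
```

The face tracker's job is to supply the `delays`: from the tracked 3-D head position, the per-microphone propagation delays are computed and updated as the speaker moves.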

Speech Feature based Double-talk Detector for Acoustic Echo Cancellation (반향제거를 위한 음성특징 기반의 동시통화 검출 기법)

  • Park, Jun-Eun;Lee, Yoon-Jae;Kim, Ki-Hyeon;Ko, Han-Seok
    • Journal of IKEEE / v.13 no.2 / pp.132-139 / 2009
  • In this paper, a speech-feature-based double-talk detection method is proposed for acoustic echo cancellation in hands-free communication systems. The double-talk detector is an important element, since it controls the update of the adaptive filter for acoustic echo cancellation. In previous research, double-talk detection was handled at the signal processing stage without taking speech characteristics into account. In the proposed method, however, speech features of the kind used for speech recognition serve as discriminative features between the far-end and near-end speech. We obtained a substantial improvement over previous double-talk detection methods that use only the time-domain signal.

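
The feature-based detection idea can be sketched as follows: per-frame speech feature vectors (e.g. MFCCs) of the microphone and far-end signals are compared, and a large distance signals near-end activity. The feature choice and threshold are illustrative assumptions, not the paper's exact configuration:

```python
import math

def double_talk_flags(mic_feats, far_feats, threshold):
    """Double-talk detection sketch: when the microphone signal
    contains only echo, its speech features track those of the far-end
    reference; near-end speech pushes the per-frame feature distance
    above the threshold, and adaptation of the echo-cancelling filter
    should then be frozen."""
    flags = []
    for m, f in zip(mic_feats, far_feats):
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(m, f)))
        flags.append(dist > threshold)
    return flags
```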

Emotion recognition in speech using hidden Markov model (은닉 마르코프 모델을 이용한 음성에서의 감정인식)

  • 김성일;정현열
    • Journal of the Institute of Convergence Signal Processing / v.3 no.3 / pp.21-26 / 2002
  • This paper presents a new approach to identifying human emotional states such as anger, happiness, normal, sadness, and surprise, using discrete-duration continuous hidden Markov models (DDCHMM). The emotional feature parameters are first defined from input speech signals; in this study we used prosodic parameters such as pitch, energy, and their derivatives, which were then trained by HMM for recognition. Speaker-adapted emotional models based on maximum a posteriori (MAP) estimation were also considered for speaker adaptation. Simulation results showed that the vocal-emotion recognition rate gradually increased with the number of adaptation samples.

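
The prosodic features described (pitch, energy, and their deltas) can be sketched as follows; autocorrelation pitch tracking is an assumed, standard method, and the HMM training itself is omitted:

```python
import math

def frame_energy(frame):
    """Mean-square energy of one analysis frame."""
    return sum(x * x for x in frame) / len(frame)

def pitch_period(frame, min_lag=20, max_lag=200):
    """Pitch period estimate (in samples): the lag in the plausible
    range where the frame's autocorrelation peaks."""
    best_lag, best_r = min_lag, float("-inf")
    for lag in range(min_lag, min(max_lag, len(frame) - 1)):
        r = sum(frame[i] * frame[i + lag]
                for i in range(len(frame) - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

def deltas(values):
    """First-order delta (frame-to-frame derivative) of a feature
    track, used alongside the static pitch and energy features."""
    return [b - a for a, b in zip(values, values[1:])]
```

Per-frame pitch and energy tracks plus their deltas form the observation vectors that an HMM-based emotion recognizer would be trained on.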