• Title/Summary/Keyword: Speech Separation

Search results: 89

Application of Shape Analysis Techniques for Improved CASA-Based Speech Separation (CASA 기반 음성분리 성능 향상을 위한 형태 분석 기술의 응용)

  • Lee, Yun-Kyung; Kwon, Oh-Wook
    • MALSORI / no.65 / pp.153-168 / 2008
  • We propose a new method that applies shape analysis techniques to a computational auditory scene analysis (CASA)-based speech separation system. The conventional CASA-based system extracts speech signals from a mixture of speech and noise signals. In the proposed method, we recover the missing speech components by applying shape analysis techniques such as labelling and the distance function. In speech separation experiments, the proposed method improves the signal-to-noise ratio by 6.6 dB. When used as a front-end for speech recognizers, it improves recognition accuracy by 22% in the speech-shaped stationary noise condition and by 7.2% in the two-talker noise condition at target-to-masker ratios greater than or equal to -3 dB.

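
The labelling step mentioned in this abstract can be illustrated with connected-component labelling of a binary time-frequency mask. A minimal sketch in Python (the function name, 4-connectivity, and list-based mask representation are our assumptions, not details from the paper):

```python
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a binary time-frequency mask.

    mask: list of lists of 0/1 values.  Returns a grid of integer
    labels (0 = background) and the number of regions found.
    """
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1                     # start a new region
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                     # flood-fill the region
                    i, j = queue.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and mask[ni][nj] and not labels[ni][nj]):
                            labels[ni][nj] = current
                            queue.append((ni, nj))
    return labels, current
```

Each labelled region groups contiguous time-frequency cells, which can then be scored (e.g. via a distance function) to decide which missing regions belong to the target speech.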

Single-Channel Speech Separation Using the Time-Frequency Smoothed Soft Mask Filter (시간-주파수 스무딩이 적용된 소프트 마스크 필터를 이용한 단일 채널 음성 분리)

  • Lee, Yun-Kyung; Kwon, Oh-Wook
    • MALSORI / no.67 / pp.195-216 / 2008
  • This paper addresses the problem of single-channel speech separation: extracting the speech signal uttered by the speaker of interest from a mixture of speech signals. We propose applying time-frequency smoothing to two existing statistical single-channel speech separation algorithms, the soft mask algorithm and the minimum-mean-square-error (MMSE) algorithm. The proposed method uses two smoothing filters: the uniform mask filter, whose filter length is uniform in the time-frequency domain, and the mel-scale filter, whose filter length is mel-scaled in the time domain. In our speech separation experiments, the uniform mask filter improves the speaker-to-interference ratio (SIR) by 2.1 dB and 1 dB for the soft mask algorithm and the MMSE algorithm, respectively, whereas the mel-scale filter achieves improvements of 1.1 dB and 0.8 dB for the same algorithms.

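
The uniform mask filter described above amounts to a moving average over the time-frequency plane. A minimal sketch, assuming a frequency-by-time gain matrix and example filter lengths (all names and default values are illustrative, not taken from the paper):

```python
import numpy as np

def smooth_mask_uniform(mask, t_len=3, f_len=3):
    """Apply a uniform moving-average filter to a soft mask.

    mask: 2-D array (frequency x time) of gains in [0, 1].
    t_len, f_len: filter lengths along time and frequency.
    Edge bins are averaged over the part of the window that lies
    inside the mask.
    """
    F, T = mask.shape
    out = np.empty_like(mask, dtype=float)
    for f in range(F):
        for t in range(T):
            f0, f1 = max(0, f - f_len // 2), min(F, f + f_len // 2 + 1)
            t0, t1 = max(0, t - t_len // 2), min(T, t + t_len // 2 + 1)
            out[f, t] = mask[f0:f1, t0:t1].mean()
    return out
```

Smoothing the gains this way suppresses isolated time-frequency cells that would otherwise produce musical-noise artifacts in the masked output.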

An Introduction to Energy-Based Blind Separating Algorithm for Speech Signals

  • Mahdikhani, Mahdi; Kahaei, Mohammad Hossein
    • ETRI Journal / v.36 no.1 / pp.175-178 / 2014
  • We introduce the Energy-Based Blind Separating (EBS) algorithm for extremely fast separation of mixed speech signals without loss of quality. It operates in two stages: iterative-form separation and closed-form separation. The algorithm significantly improves separation speed because only a subset of frequency bins is incorporated into the computations. Simulation results show that, on average, the proposed algorithm is 43 times faster than independent component analysis (ICA) for speech signals while preserving separation quality. It also outperforms fast independent component analysis (FastICA), joint approximate diagonalization of eigenmatrices (JADE), and second-order blind identification (SOBI) in terms of separation quality.
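
The core speed-up idea, restricting computation to a few high-energy frequency bins, can be sketched as follows (the function name and `keep_fraction` parameter are hypothetical; the paper's actual bin-selection rule is not reproduced here):

```python
import numpy as np

def select_energy_bins(spectra, keep_fraction=0.25):
    """Pick the highest-energy frequency bins of a (bins x frames)
    spectrogram, illustrating the idea of restricting separation
    computations to a subset of frequency bins.
    """
    energy = np.sum(np.abs(spectra) ** 2, axis=1)   # energy per bin
    k = max(1, int(len(energy) * keep_fraction))    # how many to keep
    return np.sort(np.argsort(energy)[::-1][:k])    # indices, ascending
```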

Single-Channel Speech Separation Using Phase Model-Based Soft Mask (위상 모델 기반의 소프트 마스크를 이용한 단일 채널 음성분리)

  • Lee, Yun-Kyung; Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea / v.29 no.2 / pp.141-147 / 2010
  • In this paper, we propose a new speech separation algorithm that extracts and enhances the target speech signal from mixed speech signals by utilizing both magnitude and phase information. Since previous statistical modeling algorithms assume that the log power spectrum values of the mixed speech signals are independent across time and frequency, discontinuities occur in the separated speech signals. To reduce these discontinuities, we apply a smoothing filter in the time-frequency domain. To further improve separation performance, we propose a statistical model based on both the magnitude and phase information of the speech signals. Experimental results show that the proposed algorithm improves the signal-to-interference ratio (SIR) by 1.5 dB compared with previous magnitude-only algorithms.
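
For reference, the magnitude-only soft mask that this paper extends with phase information is typically a Wiener-style gain; a minimal sketch under that assumption (function names are ours, not the paper's):

```python
import numpy as np

def soft_mask(target_power, interference_power):
    """Wiener-style soft mask computed from power (magnitude-squared)
    estimates only; each time-frequency gain lies in [0, 1]."""
    return target_power / (target_power + interference_power + 1e-12)

def separate(mixture_spec, target_power, interference_power):
    """Apply the mask to the mixture spectrogram to estimate the target."""
    return soft_mask(target_power, interference_power) * mixture_spec
```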

Speech Recognition in Noise Environment by Independent Component Analysis and Spectral Enhancement (독립 성분 분석과 스펙트럼 향상에 의한 잡음 환경에서의 음성인식)

  • Choi, Seung-Ho
    • MALSORI / no.48 / pp.81-91 / 2003
  • In this paper, we propose a speech recognition method based on independent component analysis (ICA) and spectral enhancement techniques. While ICA tries to separate the speech signal from noisy speech using multiple channels, some noise remains because of the algorithm's limitations. Spectral enhancement techniques can compensate for ICA's limited signal separation ability. Speech recognition experiments in instantaneous and convolutive mixing environments show that the proposed approach yields much higher recognition accuracy than conventional methods.

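
The ICA stage described above can be illustrated with a bare-bones FastICA for a two-channel instantaneous mixture (cubic nonlinearity, symmetric whitening, deflation). This is a sketch of generic ICA under those assumptions, not the paper's exact algorithm or its spectral enhancement stage:

```python
import numpy as np

def fastica_2x2(X, iters=200, seed=0):
    """Minimal FastICA for a 2 x N instantaneous mixture.

    Returns estimated sources, up to permutation, sign, and scale.
    """
    X = X - X.mean(axis=1, keepdims=True)
    # Symmetric whitening via the covariance eigendecomposition.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    rng = np.random.default_rng(seed)
    W = np.zeros((2, 2))
    for i in range(2):
        w = rng.standard_normal(2)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            # Fixed-point update for the kurtosis contrast.
            w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w
            if i == 1:  # deflation: stay orthogonal to the first row
                w_new -= (w_new @ W[0]) * W[0]
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-10
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z
```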

Independent Component Analysis Based on Frequency Domain Approach Model for Speech Source Signal Extraction (음원신호 추출을 위한 주파수영역 응용모델에 기초한 독립성분분석)

  • Choi, Jae-Seung
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.5 / pp.807-812 / 2020
  • This paper proposes a blind source separation algorithm that uses microphone inputs to extract only the target speech signal in an environment where multiple speech signals are mixed. The proposed algorithm is a frequency-domain model based on the independent component analysis method. To verify the validity of frequency-domain independent component analysis for two speech sources, the algorithm is run with varying types of source signals and its separation performance is evaluated. The experimental waveforms show that the two-channel source signals are clearly separated when compared with the original waveforms. In addition, results measured with the target-signal-to-interference energy ratio show that the proposed algorithm improves separation performance over existing algorithms.

Speech Enhancement Using Blind Signal Separation Combined With Null Beamforming

  • Nam, Seung-Hyon; Munoz, Rodrigo C. Jr.
    • The Journal of the Acoustical Society of Korea / v.25 no.4E / pp.142-147 / 2006
  • Blind signal separation is known to be a powerful tool for enhancing noisy speech in many real-world environments. In this paper, we demonstrate that the performance of blind signal separation can be further improved by combining it with a null beamformer (NBF). Cascading blind source separation with null beamforming is equivalent to decomposing the received signals into direct parts and reverberant parts. Examination of the beam patterns of the null beamformer and of blind signal separation reveals that the directional null of the NBF mainly reduces the direct parts of the unwanted signals, whereas blind signal separation reduces the reverberant parts. Furthermore, this decomposition of the received signals can be exploited to solve the local stability problem. Faster and better separation can therefore be obtained by first removing the direct parts with null beamforming. Simulation results using real office recordings confirm these expectations.
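
The way a directional null removes the direct path of an unwanted source can be sketched with the simplest two-microphone delay-and-subtract beamformer (integer delay and free-field propagation are simplifying assumptions; real arrays need fractional delays and gain calibration):

```python
import numpy as np

def null_beamformer(x1, x2, delay):
    """Two-microphone delay-and-subtract beamformer with a spatial null.

    Assumes the interferer reaches microphone 2 `delay` samples after
    microphone 1.  The interferer's direct path cancels exactly, while
    a broadside target passes through as t[n] - t[n - delay].
    """
    if delay:
        x1 = np.concatenate([np.zeros(delay), x1[:len(x1) - delay]])
    return x2 - x1
```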

Robust Non-negative Matrix Factorization with β-Divergence for Speech Separation

  • Li, Yinan; Zhang, Xiongwei; Sun, Meng
    • ETRI Journal / v.39 no.1 / pp.21-29 / 2017
  • This paper addresses the problem of unsupervised speech separation based on robust non-negative matrix factorization (RNMF) with β-divergence, when neither speech nor noise training data is available beforehand. We propose a robust version of non-negative matrix factorization, inspired by the recently developed sparse and low-rank decomposition, in which the data matrix is decomposed into the sum of a low-rank matrix and a sparse matrix. Efficient multiplicative update rules that minimize the β-divergence-based cost function are derived. A convolutional extension, which accounts for the time dependency of the non-negative noise bases, is also presented. Experimental speech separation results show that the proposed convolutional RNMF successfully separates the repeating time-varying spectral structures from the magnitude spectrum of the mixture, and does so without any prior training.
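
The multiplicative updates that the paper builds on can be sketched for plain NMF with the β-divergence (this shows only the standard update rules, not the robust low-rank-plus-sparse model or its convolutional extension):

```python
import numpy as np

def nmf_beta(V, rank, beta=2.0, iters=300, seed=0):
    """Plain NMF with beta-divergence multiplicative updates.

    V: non-negative (F x T) matrix, factored as V ~ W @ H.
    beta = 2 is Euclidean distance, 1 is KL, 0 is Itakura-Saito.
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + 0.1
    H = rng.random((rank, T)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```

The multiplicative form keeps W and H non-negative by construction, which is why it is the standard workhorse for spectrogram factorization.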

A Frequency-Domain Normalized MBD Algorithm with Unidirectional Filters for Blind Speech Separation

  • Kim, Hye-Jin; Nam, Seung-Hyon
    • The Journal of the Acoustical Society of Korea / v.24 no.2E / pp.54-60 / 2005
  • A new multichannel blind deconvolution algorithm is proposed for speech mixtures. It employs unidirectional filters and normalization of the gradient terms in the frequency domain. The proposed algorithm is shown to be approximately nonholonomic, so it provides improved convergence and separation performance without a whitening effect for nonstationary sources such as speech and audio signals. Simulations using real-world recordings confirm its superior performance over existing algorithms and its usefulness in real applications.

Post-Processing of IVA-Based 2-Channel Blind Source Separation for Solving the Frequency Bin Permutation Problem (IVA 기반의 2채널 암묵적신호분리에서 주파수빈 뒤섞임 문제 해결을 위한 후처리 과정)

  • Chu, Zhihao; Bae, Keunsung
    • Phonetics and Speech Sciences / v.5 no.4 / pp.211-216 / 2013
  • Independent vector analysis (IVA) is a well-known frequency-domain ICA method designed to avoid the frequency permutation problem. It generally works well for blind source separation, but residual frequency bin permutations can remain. This paper proposes a post-processing method that improves IVA-based source separation performance by fixing these remaining permutations. The proposed method uses the correlation coefficient of the power ratio between frequency bins of the signals separated by IVA-based two-channel source separation. Experimental results verify that the proposed method fixes the remaining frequency permutations in IVA and improves the speech quality of the separated signals.
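
The correlation-of-power-ratio idea can be sketched as a greedy pass over frequency bins, keeping or swapping each bin's pair of separated outputs to best match the bin below it (an illustrative simplification, not the paper's exact procedure):

```python
import numpy as np

def fix_bin_permutations(P1, P2):
    """Greedy post-pass over two separated power spectrograms.

    P1, P2: (bins x frames) power envelopes of the two separated
    outputs.  For each frequency bin, the pair is kept or swapped so
    that its power ratio correlates best with the previous (already
    fixed) bin's power ratio.
    """
    P1, P2 = P1.copy(), P2.copy()
    eps = 1e-12
    for f in range(1, P1.shape[0]):
        ref = P1[f - 1] / (P1[f - 1] + P2[f - 1] + eps)
        cur = P1[f] / (P1[f] + P2[f] + eps)
        keep = np.corrcoef(ref, cur)[0, 1]
        swap = np.corrcoef(ref, 1.0 - cur)[0, 1]
        if swap > keep:                      # swapped order matches better
            P1[f], P2[f] = P2[f].copy(), P1[f].copy()
    return P1, P2
```

Because the power envelopes of a given speaker are correlated across frequency, a bin whose outputs are permuted shows an anti-correlated power ratio and gets swapped back.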