• Title/Abstract/Keywords: Speech Separation

89 search results

CASA 기반 음성분리 성능 향상을 위한 형태 분석 기술의 응용 (Application of Shape Analysis Techniques for Improved CASA-Based Speech Separation)

  • 이윤경;권오욱
    • 대한음성학회지:말소리 / No. 65 / pp. 153-168 / 2008
  • We propose a new method to apply shape analysis techniques to a computational auditory scene analysis (CASA)-based speech separation system. The conventional CASA-based speech separation system extracts speech signals from a mixture of speech and noise signals. In the proposed method, we complement the missing speech signals by applying shape analysis techniques such as labelling and a distance function. In the speech separation experiment, the proposed method improves the signal-to-noise ratio by 6.6 dB. When the proposed method is used as a front-end of speech recognizers, it improves recognition accuracy by 22% for the speech-shaped stationary noise condition and 7.2% for the two-talker noise condition at target-to-masker ratios greater than or equal to -3 dB.

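A minimal sketch of the shape-analysis idea in this entry, assuming that `scipy.ndimage` connected-component labelling and a distance transform stand in for the paper's labelling and distance function; the toy mask and the gap threshold are illustrative only, not the authors' implementation.

```python
# Hedged sketch: fill small gaps in a binary time-frequency mask using
# connected-component labelling and a distance transform, as a rough
# illustration of the shape-analysis idea -- not the paper's code.
import numpy as np
from scipy import ndimage

def complement_mask(binary_mask, max_gap=2):
    """Label speech-dominant regions and re-include masked-out cells
    that lie within `max_gap` cells of a labelled region."""
    labels, n_regions = ndimage.label(binary_mask)       # shape analysis: labelling
    dist = ndimage.distance_transform_edt(labels == 0)   # distance to nearest region
    filled = binary_mask | (dist <= max_gap)             # complement missing cells
    return filled, n_regions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask = rng.random((64, 100)) > 0.7                   # toy speech-dominant mask
    filled, n = complement_mask(mask)
    print(n, mask.sum(), filled.sum())
```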

시간-주파수 스무딩이 적용된 소프트 마스크 필터를 이용한 단일 채널 음성 분리 (Single-Channel Speech Separation Using the Time-Frequency Smoothed Soft Mask Filter)

  • 이윤경;권오욱
    • 대한음성학회지:말소리 / No. 67 / pp. 195-216 / 2008
  • This paper addresses the problem of single-channel speech separation to extract the speech signal uttered by the speaker of interest from a mixture of speech signals. We propose to apply time-frequency smoothing to the existing statistical single-channel speech separation algorithms: the soft mask and the minimum-mean-square-error (MMSE) algorithms. In the proposed method, we use two smoothing filters. One is the uniform mask filter, whose filter length is uniform in the time-frequency domain, and the other is the mel-scale filter, whose filter length is mel-scaled in the time domain. In our speech separation experiments, the uniform mask filter improves the speaker-to-interference ratio (SIR) by 2.1 dB and 1.0 dB for the soft mask algorithm and the MMSE algorithm, respectively, whereas the mel-scale filter achieves 1.1 dB and 0.8 dB for the same algorithms.

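Below is a hedged sketch of the uniform time-frequency smoothing idea: a fixed-size moving-average filter applied to a soft (0..1) mask before the mask is applied to the mixture STFT. The filter sizes, the `scipy` call, and the toy data are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: smooth a soft mask with a uniform moving-average filter
# over (frequency, time), then apply it to the complex mixture STFT.
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_soft_mask(soft_mask, freq_len=3, time_len=5):
    """Uniform moving-average smoothing over the time-frequency plane."""
    return uniform_filter(soft_mask, size=(freq_len, time_len), mode="nearest")

def separate(mixture_stft, soft_mask):
    """Apply the smoothed soft mask to the complex mixture STFT."""
    return smooth_soft_mask(soft_mask) * mixture_stft

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mask = rng.random((257, 200))                                    # toy soft mask
    mix = rng.standard_normal((257, 200)) + 1j * rng.standard_normal((257, 200))
    est = separate(mix, mask)
    print(est.shape, est.dtype)
```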

An Introduction to Energy-Based Blind Separating Algorithm for Speech Signals

  • Mahdikhani, Mahdi;Kahaei, Mohammad Hossein
    • ETRI Journal / Vol. 36, No. 1 / pp. 175-178 / 2014
  • We introduce the Energy-Based Blind Separating (EBS) algorithm for extremely fast separation of mixed speech signals without loss of quality, which is performed in two stages: iterative-form separation and closed-form separation. This algorithm significantly improves the separation speed simply due to incorporating only some specific frequency bins into computations. Simulation results show that, on average, the proposed algorithm is 43 times faster than the independent component analysis (ICA) for speech signals, while preserving the separation quality. Also, it outperforms the fast independent component analysis (FastICA), the joint approximate diagonalization of eigenmatrices (JADE), and the second-order blind identification (SOBI) algorithm in terms of separation quality.
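The speed-up in this entry is attributed to using only specific frequency bins; the sketch below illustrates one plausible way to rank and select bins by energy before per-bin separation. The STFT settings, the `keep_ratio` threshold, and the selection rule are assumptions, not the published EBS algorithm.

```python
# Hedged sketch: rank STFT frequency bins of a stereo mixture by energy and
# keep only the most energetic ones for further processing.
import numpy as np
from scipy.signal import stft

def select_energy_bins(x_2ch, fs=16000, keep_ratio=0.25, nperseg=512):
    """Return indices of the highest-energy frequency bins of a 2-channel mix."""
    _, _, X = stft(x_2ch, fs=fs, nperseg=nperseg, axis=-1)   # shape (2, bins, frames)
    bin_energy = np.sum(np.abs(X) ** 2, axis=(0, 2))         # energy per frequency bin
    n_keep = max(1, int(keep_ratio * bin_energy.size))
    return np.argsort(bin_energy)[::-1][:n_keep], X

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mix = rng.standard_normal((2, 16000))                    # 1 s toy stereo mixture
    kept, X = select_energy_bins(mix)
    print(len(kept), X.shape)
```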

위상 모델 기반의 소프트 마스크를 이용한 단일 채널 음성분리 (Single-Channel Speech Separation Using Phase Model-Based Soft Mask)

  • 이윤경;권오욱
    • 한국음향학회지 / Vol. 29, No. 2 / pp. 141-147 / 2010
  • This paper proposes a speech separation algorithm that extracts and enhances the target speech signal from a mixed speech signal by considering both magnitude and phase information. Previous work applies a statistical model that assumes the log-power spectrum values of the mixed speech signal are mutually independent in the time-frequency domain, which introduces discontinuities into the waveform of the separated speech. To reduce these discontinuities, we apply a smoothing filter in the time-frequency domain. To further improve the separation performance, we propose a statistical model that considers phase information together with the magnitude of the speech signal. Experimental results show that the proposed algorithm improves the speaker-to-interference ratio (SIR) by 1.5 dB over the conventional algorithm that uses magnitude information only.
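As a rough illustration of how phase can enter a soft mask, the sketch below computes an oracle phase-sensitive mask, a common textbook construct; it is not the statistical phase model proposed in the paper, and the clipping range and toy data are assumptions.

```python
# Hedged sketch: an oracle "phase-sensitive" soft mask that uses both the
# magnitude ratio and the phase difference between target and mixture STFTs.
import numpy as np

def phase_sensitive_mask(target_stft, mixture_stft, eps=1e-8):
    """M = |S|/|Y| * cos(angle(S) - angle(Y)), clipped to [0, 1]."""
    ratio = np.abs(target_stft) / (np.abs(mixture_stft) + eps)
    phase_term = np.cos(np.angle(target_stft) - np.angle(mixture_stft))
    return np.clip(ratio * phase_term, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    S = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
    N = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
    Y = S + N
    M = phase_sensitive_mask(S, Y)
    estimate = M * Y                          # masked mixture keeps the mixture phase
    print(M.min(), M.max(), estimate.shape)
```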

독립 성분 분석과 스펙트럼 향상에 의한 잡음 환경에서의 음성인식 (Speech Recognition in Noise Environment by Independent Component Analysis and Spectral Enhancement)

  • 최승호
    • 대한음성학회지:말소리 / No. 48 / pp. 81-91 / 2003
  • In this paper, we propose a speech recognition method based on independent component analysis (ICA) and spectral enhancement techniques. While ICA tries to separate the speech signal from noisy speech using multiple channels, some noise remains due to its algorithmic limitations. Spectral enhancement techniques can compensate for this limitation of ICA's signal separation ability. From speech recognition experiments with instantaneous and convolutive mixing environments, we show that the proposed approach gives much higher recognition accuracy than conventional methods.

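A minimal sketch of the two-stage idea above, ICA separation followed by spectral enhancement, assuming scikit-learn's FastICA and a simple spectral-subtraction post-filter as stand-ins; the noise-estimation heuristic, mixing matrix, and output selection are illustrative only, not the paper's components.

```python
# Hedged sketch: separate a 2-channel instantaneous mixture with FastICA,
# then apply spectral subtraction to one of the separated outputs.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.signal import stft, istft

def ica_then_enhance(mix_2ch, fs=16000, noise_frames=10, floor=0.05):
    # Stage 1: instantaneous ICA on the stacked channels (samples x channels).
    sources = FastICA(n_components=2, random_state=0).fit_transform(mix_2ch.T).T
    s = sources[0]                                   # pick one output (illustrative)
    # Stage 2: spectral subtraction using the first frames as a noise estimate.
    f, t, S = stft(s, fs=fs, nperseg=512)
    noise_mag = np.abs(S[:, :noise_frames]).mean(axis=1, keepdims=True)
    mag = np.maximum(np.abs(S) - noise_mag, floor * np.abs(S))
    _, enhanced = istft(mag * np.exp(1j * np.angle(S)), fs=fs, nperseg=512)
    return enhanced

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    clean = np.sin(2 * np.pi * 200 * np.arange(16000) / 16000)
    noise = rng.standard_normal(16000)
    A = np.array([[1.0, 0.6], [0.5, 1.0]])           # toy instantaneous mixing matrix
    mix = A @ np.vstack([clean, noise])
    print(ica_then_enhance(mix).shape)
```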

음원신호 추출을 위한 주파수영역 응용모델에 기초한 독립성분분석 (Independent Component Analysis Based on Frequency Domain Approach Model for Speech Source Signal Extraction)

  • 최재승
    • 한국전자통신학회논문지 / Vol. 15, No. 5 / pp. 807-812 / 2020
  • This paper proposes a blind source separation algorithm that uses microphones to separate only the target source signal in an environment where several source signals are mixed. The proposed algorithm is a frequency-domain representation model based on independent component analysis. To verify the effectiveness of frequency-domain independent component analysis for two sources in a real environment, source separation is performed with different types of sources and the resulting improvement is evaluated. Waveform comparisons with the original signals confirm that the two-channel source signals are cleanly separated. In addition, comparison experiments using the target signal-to-interference energy ratio show that the source separation performance of the proposed algorithm is improved over that of the conventional algorithm.
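For context on the evaluation mentioned above, the sketch below computes a simple target signal-to-interference energy ratio by projecting the separated output onto the reference signals; this projection-based definition is a common convention and only an assumption about the metric used in the paper.

```python
# Hedged sketch: a simple signal-to-interference ratio (SIR) in dB computed
# from least-squares projections of the estimate onto the references.
import numpy as np

def sir_db(estimate, target_ref, interference_ref):
    """Project the estimate onto each reference and compare the energies."""
    s_t = (estimate @ target_ref) / (target_ref @ target_ref) * target_ref
    s_i = (estimate @ interference_ref) / (interference_ref @ interference_ref) * interference_ref
    return 10.0 * np.log10(np.sum(s_t ** 2) / (np.sum(s_i ** 2) + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    target = rng.standard_normal(16000)
    interf = rng.standard_normal(16000)
    est = target + 0.1 * interf                       # toy separated output
    print(round(sir_db(est, target, interf), 1), "dB")
```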

Speech Enhancement Using Blind Signal Separation Combined With Null Beamforming

  • Nam Seung-Hyon;Rodrigo C. Munoz Jr.
    • The Journal of the Acoustical Society of Korea / Vol. 25, No. 4E / pp. 142-147 / 2006
  • Blind signal separation is known as a powerful tool for enhancing noisy speech in many real-world environments. In this paper, it is demonstrated that the performance of blind signal separation can be further improved by combining it with a null beamformer (NBF). Cascading blind source separation with null beamforming is equivalent to decomposing the received signals into direct parts and reverberant parts. Investigation of the beam patterns of the null beamformer and blind signal separation reveals that the directional null of the NBF mainly reduces the direct parts of the unwanted signals, whereas blind signal separation reduces the reverberant parts. Further, it is shown that this decomposition of the received signals can be exploited to solve the local stability problem. Therefore, faster and improved separation can be obtained by removing the direct parts first by null beamforming. Simulation results using real office recordings confirm this expectation.
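The sketch below shows a basic two-microphone delay-and-subtract null beamformer of the kind that could precede such a cascade; the geometry, sample rate, and STFT settings are assumptions, and the combination with blind signal separation is not reproduced here.

```python
# Hedged sketch: place a spatial null toward a chosen direction by
# delay-and-subtract between two microphones in the frequency domain.
import numpy as np
from scipy.signal import stft, istft

def null_beamform(x1, x2, null_angle_deg, mic_spacing=0.05, fs=16000, c=343.0):
    """Subtract a delayed copy of mic 2 from mic 1 to cancel the direct
    path arriving from `null_angle_deg` (measured from broadside)."""
    tau = mic_spacing * np.sin(np.deg2rad(null_angle_deg)) / c
    f, t, X1 = stft(x1, fs=fs, nperseg=512)
    _, _, X2 = stft(x2, fs=fs, nperseg=512)
    Y = X1 - np.exp(-1j * 2 * np.pi * f * tau)[:, None] * X2   # steer the null
    _, y = istft(Y, fs=fs, nperseg=512)
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    x1 = rng.standard_normal(16000)
    x2 = rng.standard_normal(16000)
    print(null_beamform(x1, x2, null_angle_deg=30).shape)
```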

Robust Non-negative Matrix Factorization with β-Divergence for Speech Separation

  • Li, Yinan;Zhang, Xiongwei;Sun, Meng
    • ETRI Journal / Vol. 39, No. 1 / pp. 21-29 / 2017
  • This paper addresses the problem of unsupervised speech separation based on robust non-negative matrix factorization (RNMF) with β-divergence, when neither speech nor noise training data is available beforehand. We propose a robust version of non-negative matrix factorization, inspired by the recently developed sparse and low-rank decomposition, in which the data matrix is decomposed into the sum of a low-rank matrix and a sparse matrix. Efficient multiplicative update rules to minimize the β-divergence-based cost function are derived. A convolutional extension of the algorithm is also proposed, which considers the time dependency of the non-negative noise bases. Experimental speech separation results show that the proposed convolutional RNMF successfully separates the repeating time-varying spectral structures from the magnitude spectrum of the mixture, and does so without any prior training.
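To make the β-divergence cost concrete, the sketch below shows the standard multiplicative updates for plain NMF under the β-divergence; the robust low-rank-plus-sparse decomposition and the convolutional extension described in the paper are not included, and the rank, β value, and toy spectrogram are illustrative.

```python
# Hedged sketch: plain multiplicative-update NMF under the beta-divergence.
# beta = 2: Euclidean, beta = 1: KL divergence, beta = 0: Itakura-Saito.
import numpy as np

def beta_nmf(V, rank=8, beta=1.0, n_iter=100, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    V = np.abs(rng.standard_normal((257, 120)))      # toy magnitude spectrogram
    W, H = beta_nmf(V, rank=8, beta=1.0)
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```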

A Frequency-Domain Normalized MBD Algorithm with Unidirectional Filters for Blind Speech Separation

  • Kim Hye-Jin;Nam Seung-Hyon
    • The Journal of the Acoustical Society of Korea / Vol. 24, No. 2E / pp. 54-60 / 2005
  • A new multichannel blind deconvolution algorithm is proposed for speech mixtures. It employs unidirectional filters and normalization of the gradient terms in the frequency domain. The proposed algorithm is shown to be approximately nonholonomic. Thus it provides improved convergence and separation performance without a whitening effect for nonstationary sources such as speech and audio signals. Simulations using real-world recordings confirm its superior performance over existing algorithms and its usefulness for real applications.
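A minimal sketch of per-bin gradient normalization in the frequency domain, the kind of step the abstract refers to; the exact normalization used in the proposed MBD algorithm and its unidirectional filters are not reproduced, and the bin-power scaling shown here is an assumption.

```python
# Hedged sketch: scale the frequency-domain gradient of each bin by the
# inverse of that bin's average signal power before the filter update.
import numpy as np

def normalized_gradient(grad_f, X_f, eps=1e-8):
    """grad_f, X_f: (bins, frames) complex arrays; returns the scaled gradient."""
    power = np.mean(np.abs(X_f) ** 2, axis=-1, keepdims=True)   # power per bin
    return grad_f / (power + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    X = rng.standard_normal((257, 50)) + 1j * rng.standard_normal((257, 50))
    grad = rng.standard_normal((257, 50)) + 1j * rng.standard_normal((257, 50))
    print(normalized_gradient(grad, X).shape)
```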

IVA 기반의 2채널 암묵적신호분리에서 주파수빈 뒤섞임 문제 해결을 위한 후처리 과정 (Post-Processing of IVA-Based 2-Channel Blind Source Separation for Solving the Frequency Bin Permutation Problem)

  • 추쯔하오;배건성
    • 말소리와 음성과학 / Vol. 5, No. 4 / pp. 211-216 / 2013
  • IVA (Independent Vector Analysis) is a well-known FD-ICA (frequency-domain ICA) method used to solve the frequency permutation problem. It generally works quite well for blind source separation, but some frequency-bin permutations still remain. This paper proposes a post-processing method that can improve the source separation performance of IVA by fixing the remaining frequency permutation problem. The proposed method makes use of the correlation coefficient of the power ratio between frequency bins of the signals separated by IVA-based 2-channel source separation. Experimental results verified that the proposed method could fix the remaining frequency permutation problem of IVA and improve the speech quality of the separated signals.
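A hedged sketch of the post-processing idea: compare each bin's power-ratio sequence against reference profiles via correlation coefficients and swap the two outputs where a permutation is indicated. The global reference profile and the swap rule are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: fix residual frequency-bin permutations in 2-channel
# separation by correlating per-bin power ratios with reference profiles.
import numpy as np

def fix_permutation(Y1, Y2):
    """Y1, Y2: separated STFTs with shape (bins, frames)."""
    P1, P2 = np.abs(Y1) ** 2, np.abs(Y2) ** 2
    ratio = P1 / (P1 + P2 + 1e-12)                   # power ratio per bin and frame
    ref1 = ratio.mean(axis=0)                        # global reference profile
    ref2 = 1.0 - ref1
    for k in range(Y1.shape[0]):
        keep = np.corrcoef(ratio[k], ref1)[0, 1]
        swap = np.corrcoef(ratio[k], ref2)[0, 1]
        if swap > keep:                              # this bin appears permuted
            Y1[k], Y2[k] = Y2[k].copy(), Y1[k].copy()
    return Y1, Y2

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    Y1 = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
    Y2 = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
    A, B = fix_permutation(Y1, Y2)
    print(A.shape, B.shape)
```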