• Title/Summary/Keyword: 음성 향상 (speech enhancement)

Search Results: 1,461

A Study on Speech Enhancement Method Based on the New Spectral Subtraction with Subband Estimation (새로운 서브밴드 추정-스펙트럼 차감법에 기반한 음성향상방법에 관한 연구)

  • 주상현;김수남;김기두
    • The Journal of Korean Institute of Communications and Information Sciences / v.26 no.10B / pp.1360-1366 / 2001
  • In this paper, we propose a new method based on conventional spectral subtraction for speech enhancement in noisy environments. Whereas existing methods estimate the noise and speech spectra for each individual frequency component, the proposed method divides the frequency range into several subbands and estimates the noise and speech spectra for each subband. Minima tracking is adopted to estimate the noise spectrum, and a new filtering method based on spectral subtraction with a Mel-scaled filterbank is proposed. Simulation results show that the proposed method improves the SNR in speech intervals over existing methods for input SNRs in the range of -10 dB to 4 dB, and it also outperforms the other algorithms over the entire signal.

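The entry above combines subband-wise noise estimation, minima tracking, and a Mel-scaled grouping of frequency bins. The following is a minimal sketch of that general idea, not the paper's implementation: the subband grouping, over-subtraction factor, spectral floor, and the simplified minima-tracking step are all illustrative assumptions.

```python
import numpy as np

def mel_subband_edges(n_fft_bins, n_subbands, sample_rate=16000):
    """Split the positive-frequency bins into mel-spaced subbands (illustrative grouping)."""
    mel_max = 2595.0 * np.log10(1.0 + (sample_rate / 2) / 700.0)
    mel_points = np.linspace(0.0, mel_max, n_subbands + 1)
    hz_points = 700.0 * (10.0 ** (mel_points / 2595.0) - 1.0)
    return np.round(hz_points / (sample_rate / 2) * (n_fft_bins - 1)).astype(int)

def subband_spectral_subtraction(noisy_mag, n_subbands=16, over_sub=2.0, floor=0.02):
    """noisy_mag: (frames, bins) magnitude spectrogram of the noisy speech."""
    frames, n_bins = noisy_mag.shape
    edges = mel_subband_edges(n_bins, n_subbands)
    enhanced = np.copy(noisy_mag)
    for b in range(n_subbands):
        lo, hi = edges[b], max(edges[b + 1], edges[b] + 1)
        band = noisy_mag[:, lo:hi]
        # Simplified minima tracking: the smallest band energy over time
        # is taken as the noise level for that subband.
        noise_level = band.mean(axis=1).min()
        # Subtract the subband noise estimate, keeping a spectral floor
        # to limit musical noise.
        cleaned = band - over_sub * noise_level
        enhanced[:, lo:hi] = np.maximum(cleaned, floor * band)
    return enhanced
```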

Vector Quantization based Speech Recognition Performance Improvement using Maximum Log Likelihood in Gaussian Distribution (가우시안 분포에서 Maximum Log Likelihood를 이용한 벡터 양자화 기반 음성 인식 성능 향상)

  • Chung, Kyungyong;Oh, SangYeob
    • Journal of Digital Convergence / v.16 no.11 / pp.335-340 / 2018
  • Commercial speech recognition systems with high recognition accuracy use learning models trained on speaker-dependent isolated-word data. However, their recognition performance in noisy environments degrades depending on the amount of available data. In this paper, we propose a vector quantization based method for improving speech recognition performance using the maximum log-likelihood under a Gaussian distribution. The proposed method configures the learning model to increase the recognition accuracy for similar speech by combining vector quantization and maximum log-likelihood scoring with a speech feature extraction method based on the hidden Markov model. By refining the inaccurate speech models produced by the existing system, the proposed approach constitutes a more robust model for speech recognition and shows improved recognition accuracy.
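
As a rough sketch of the scoring idea described above, the code below classifies an utterance by picking the label whose vector-quantized Gaussian codebook gives the highest total log-likelihood. Diagonal covariances, the codebook layout, and all names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def gaussian_log_likelihood(x, mean, var):
    """Log-likelihood of vector x under a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def classify_by_max_log_likelihood(features, codebooks):
    """
    features:  (T, D) feature vectors (e.g., MFCCs) for one utterance.
    codebooks: dict mapping label -> list of (mean, var) codewords obtained
               by vector quantization of that label's training data.
    Returns the label whose best-matching codewords give the highest total log-likelihood.
    """
    scores = {}
    for label, codewords in codebooks.items():
        total = 0.0
        for x in features:
            # VQ step: each frame is scored by its best codeword, but with
            # Gaussian log-likelihood instead of plain Euclidean distortion.
            total += max(gaussian_log_likelihood(x, m, v) for m, v in codewords)
        scores[label] = total
    return max(scores, key=scores.get)
```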

A study on combination of loss functions for effective mask-based speech enhancement in noisy environments (잡음 환경에 효과적인 마스크 기반 음성 향상을 위한 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.40 no.3 / pp.234-240 / 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by an estimated mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. We propose a combined loss to further improve speech enhancement: to effectively remove residual noise in the speech, the positive part of the triplet loss is used together with the component loss. For the experiments, the TIMIT database is reconstructed using NOISEX92 noise and background music samples under various Signal-to-Noise Ratio (SNR) conditions. Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) are used as performance metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to the system trained only with the mean squared error.
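
The abstract names a combined loss built from a component loss and the positive part of a triplet loss but does not give the exact formulation; the sketch below is one plausible reading under stated assumptions (a region-based component loss and the anchor-positive distance only), with illustrative weights.

```python
import torch

def combined_loss(est_mag, clean_mag, w_trip=0.2, eps=1e-3):
    """
    est_mag, clean_mag: (batch, freq, time) magnitude spectrograms.
    Sketch of a combined loss: a component loss that weights errors in
    speech-dominant and noise-dominant time-frequency regions separately,
    plus the positive (anchor-positive) distance of a triplet-style loss.
    The paper's exact terms and weights may differ.
    """
    err = (est_mag - clean_mag) ** 2

    # Component loss: split the error by whether the clean target carries energy there.
    speech_region = (clean_mag > eps).float()
    noise_region = 1.0 - speech_region
    speech_term = (err * speech_region).sum() / speech_region.sum().clamp(min=1.0)
    noise_term = (err * noise_region).sum() / noise_region.sum().clamp(min=1.0)
    comp_loss = speech_term + noise_term

    # Positive part of a triplet loss: distance between the estimate (anchor)
    # and the clean spectrum (positive); the negative term is dropped.
    triplet_pos = err.mean(dim=(1, 2)).mean()

    return comp_loss + w_trip * triplet_pos
```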

On a Study of Relation Between Glottal Spectrum and Speaker Identification Parameter (Glottal Spectrum 과 화자식별 Parameter와의 상관 관계에 관한 연구)

  • 이윤주;신동성;배명진
    • Proceedings of the IEEK Conference / 2001.09a / pp.793-796 / 2001
  • A speech recognition system enables a machine to perceive speech, the means of human communication. Research on speech recognition algorithms is currently very active. Implementing a proper speech recognition system requires a high recognition rate and a short processing time, yet improving the recognition rate makes the algorithm more complex and therefore increases the processing time. In this paper, we improve the recognition rate by applying filter coefficients adapted to the glottal spectrum, which reflects the characteristics of the glottis. Simulation of the proposed algorithm showed a 2 % improvement in the overall recognition rate.

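The abstract above does not detail how the glottal spectrum is obtained, so the sketch below uses a common approximation (LPC inverse filtering of a speech frame) purely for illustration; the helper names, LPC order, and windowing are assumptions, not the paper's method.

```python
import numpy as np

def lpc_coefficients(frame, order=12):
    """Autocorrelation-method LPC via the normal equations (illustrative helper)."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R + 1e-6 * np.eye(order), r[1:order + 1])
    return np.concatenate(([1.0], -a))  # prediction-error filter A(z)

def glottal_spectrum(frame, order=12, n_fft=512):
    """
    Approximate the glottal-source spectrum by inverse filtering the frame
    with its LPC vocal-tract model, a common approximation assumed here
    because the abstract does not specify the glottal analysis used.
    """
    a = lpc_coefficients(frame, order)
    residual = np.convolve(frame, a, mode="same")  # remove the vocal-tract envelope
    return np.abs(np.fft.rfft(residual, n_fft))
```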

Speech Enhancement Using the Neural Network Filter (신경망 필터를 이용한 음질 향상)

  • 김종우;공성근
    • Journal of the Korean Institute of Intelligent Systems / v.10 no.4 / pp.324-329 / 2000
  • This paper aims to implement a speech enhancement system for noisy environments. An FIR filter with the LMS (Least Mean Square) algorithm is applied as the adaptive filter, and a multi-layer perceptron (MLP) neural network filter is applied as a fine filter. A speech restoration and enhancement system for noisy environments restores the speech signal by removing only the noise components from the speech signal corrupted by noise. The neural network filter adjusts its parameters in the direction that minimizes the error using the error back-propagation learning algorithm. A speech restoration system for noisy environments is constructed with the proposed filters, and the performance of the filters is verified through experiments.

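The entry above pairs an LMS-adapted FIR filter with an MLP post-filter. Below is a minimal sketch of the LMS stage only (the MLP refinement is omitted); it assumes a separate noise reference signal is available, and the tap count and step size are illustrative.

```python
import numpy as np

def lms_noise_canceller(noisy, noise_ref, num_taps=32, mu=0.01):
    """
    Minimal LMS adaptive filter: predicts the noise in `noisy` from a noise
    reference and subtracts it, leaving an estimate of the clean speech.
    """
    w = np.zeros(num_taps)
    out = np.zeros(len(noisy))
    for n in range(num_taps, len(noisy)):
        x = noise_ref[n - num_taps:n][::-1]  # most recent reference samples
        y = w @ x                            # estimated noise component
        e = noisy[n] - y                     # error = enhanced speech sample
        w += 2.0 * mu * e * x                # LMS coefficient update
        out[n] = e
    return out
```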

The research on the MEMS device improvement which is necessary for the noise environment in the speech recognition rate improvement (잡음 환경에서 음성 인식률 향상에 필요한 MEMS 장치 개발에 관한 연구)

  • Yang, Ki-Woong;Lee, Hyung-keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.12 / pp.1659-1666 / 2018
  • When the input to the microphone is a mixture of voice and other sound, the speech recognition rate drops because of the noise. To overcome the limits of software-only processing, the recognition rate is improved by improving the MEMS device, i.e., the hardware. A MEMS microphone is a device for capturing voice input and is implemented and used in various forms. Conventional MEMS microphones generally show excellent performance, but in special environments such as noise their processing performance deteriorates because voice and other sound are mixed. To overcome this problem, we developed a newly designed MEMS device that can detect voice characteristics at the initial input stage.

A study on skip-connection with time-frequency self-attention for improving speech enhancement based on complex-valued spectrum (복소 스펙트럼 기반 음성 향상의 성능 향상을 위한 time-frequency self-attention 기반 skip-connection 기법 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.2 / pp.94-101 / 2023
  • A deep neural network composed of encoders and decoders, such as U-Net, used for speech enhancement connects the encoder to the decoder through skip-connections. The skip-connection helps reconstruct the enhanced spectrum and complements lost information, but the encoder and decoder features joined by the skip-connection are not necessarily compatible with each other. In this paper, for complex-valued spectrum based speech enhancement, a Self-Attention (SA) method is applied to the skip-connection to transform the encoder features so that they become compatible with the decoder features. SA is a technique that, when generating an output sequence in sequence-to-sequence tasks, uses a weighted average of the input to attend to subsets of the input, and it has been shown to remove noise effectively when applied to speech enhancement. Three models that apply SA to the skip-connection using the encoder and decoder features are studied. In experiments on the TIMIT database, the proposed methods show improvements in all evaluation metrics compared to the Deep Complex U-Net (DCUNET) with only the skip-connection.
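
As a minimal sketch of the idea described above, the module below applies single-head scaled dot-product attention over time-frequency positions before the skip concatenation. The query/key/value pairing, projections, and shapes are assumptions for illustration and are not the paper's exact architecture (which studies several configurations).

```python
import torch
import torch.nn as nn

class SkipSelfAttention(nn.Module):
    """Attend over encoder features before concatenating them to decoder features."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)
        self.k = nn.Conv2d(channels, channels, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, enc_feat, dec_feat):
        b, c, f, t = enc_feat.shape
        # One possible pairing: queries from the decoder, keys/values from the encoder.
        q = self.q(dec_feat).reshape(b, c, f * t).transpose(1, 2)  # (b, f*t, c)
        k = self.k(enc_feat).reshape(b, c, f * t)                  # (b, c, f*t)
        v = self.v(enc_feat).reshape(b, c, f * t).transpose(1, 2)  # (b, f*t, c)
        attn = torch.softmax(q @ k / (c ** 0.5), dim=-1)           # (b, f*t, f*t)
        out = (attn @ v).transpose(1, 2).reshape(b, c, f, t)
        return torch.cat([out, dec_feat], dim=1)                   # skip-connection
```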

Robust Speech Reinforcement Based on Gain-Modification incorporating Speech Absence Probability (음성 부재 확률을 이용한 음성 강화 이득 수정 기법)

  • Choi, Jae-Hun;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.175-182 / 2010
  • In this paper, we propose a robust speech reinforcement technique that enhances the intelligibility of degraded speech under ambient noise, based on a soft-decision scheme that incorporates the speech absence probability (SAP) into the speech reinforcement gains. Since ambient noise significantly decreases the intelligibility of the speech signal, speech reinforcement amplifies the estimated clean speech signal against the background noise to improve the intelligibility and clarity of the corrupted speech. Instead of the conventional approach, which switches the reinforcement gain between speech-active periods and non-speech or transient intervals, the proposed algorithm estimates a robust reinforcement gain by applying the SAP to the gain estimation in a soft-decision manner. The performance of the proposed algorithm is evaluated with the Comparison Category Rating (CCR) test for subjective determination of transmission quality in ITU-T P.800 under various ambient noise environments, and it shows better performance than the conventional method.
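
A minimal sketch of soft-decision gain modification, assuming a simple linear blend between the reinforcement gain and unity controlled by the SAP; the paper's actual gain rule may differ, and all names here are illustrative.

```python
import numpy as np

def soft_decision_reinforcement_gain(base_gain, speech_absence_prob):
    """
    base_gain:           (frames, bins) reinforcement gain for speech-active regions.
    speech_absence_prob: (frames, bins) speech absence probability (SAP) in [0, 1].
    Where speech is probably absent, the gain falls back toward 1 (no amplification),
    avoiding the boosting of noise-only or transient frames.
    """
    sap = np.clip(speech_absence_prob, 0.0, 1.0)
    return (1.0 - sap) * base_gain + sap * 1.0

def reinforce(spec, base_gain, speech_absence_prob):
    """Apply the SAP-modified gain to the spectrum of the signal to be reinforced."""
    return soft_decision_reinforcement_gain(base_gain, speech_absence_prob) * spec
```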

An Improved Speech Absence Probability Estimation based on Environmental Noise Classification (환경잡음분류 기반의 향상된 음성부재확률 추정)

  • Son, Young-Ho;Park, Yun-Sik;An, Hong-Sub;Lee, Sang-Min
    • The Journal of the Acoustical Society of Korea / v.30 no.7 / pp.383-389 / 2011
  • In this paper, we propose an improved speech absence probability estimation algorithm for speech enhancement that applies environmental noise classification. The previous speech absence probability, required to obtain the a priori probability of speech absence, was derived from the microphone input signal and the noise signal using a fixed threshold on the estimated a posteriori SNR. Unlike the conventional fixed threshold and smoothing parameter, the proposed algorithm estimates the speech absence probability using a noise classification algorithm based on a Gaussian mixture model in order to apply the optimal parameters for each noise type. The performance of the proposed enhancement algorithm is evaluated with ITU-T P.862 PESQ (Perceptual Evaluation of Speech Quality) and the composite measure under various noise environments, and the results verify that the proposed algorithm yields better results than the conventional speech absence probability estimation algorithm.
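
The sketch below illustrates the noise-classification step only: one GMM per noise type scores the observed features, and the winning type selects its own SAP parameters. It uses scikit-learn's GaussianMixture as a stand-in for the paper's own GMM training, and the noise types and parameter values are placeholders, not the paper's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # assumed stand-in for the paper's GMMs

# Per-noise-type SAP parameters; the keys must match the trained noise types,
# and these values are placeholders for illustration.
NOISE_PARAMS = {
    "babble": {"snr_threshold": 4.0, "smoothing": 0.90},
    "car":    {"snr_threshold": 2.0, "smoothing": 0.95},
    "white":  {"snr_threshold": 3.0, "smoothing": 0.92},
}

def train_noise_models(features_by_type, n_components=8):
    """Fit one GMM per noise type on its training feature vectors (e.g., MFCCs)."""
    return {name: GaussianMixture(n_components=n_components).fit(feats)
            for name, feats in features_by_type.items()}

def select_sap_parameters(noise_models, frame_features):
    """Classify the observed noise by maximum GMM log-likelihood and return its parameters."""
    scores = {name: gmm.score(frame_features) for name, gmm in noise_models.items()}
    noise_type = max(scores, key=scores.get)
    return noise_type, NOISE_PARAMS[noise_type]
```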

Speech enhancement method based on feature compensation gain for effective speech recognition in noisy environments (잡음 환경에 효과적인 음성인식을 위한 특징 보상 이득 기반의 음성 향상 기법)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.38 no.1 / pp.51-55 / 2019
  • This paper proposes a speech enhancement method that utilizes a feature compensation gain for robust speech recognition in noisy environments. The feature compensation gain is obtained from a PCGMM (Parallel Combined Gaussian Mixture Model) based feature compensation method employing variational model composition. The experimental results show that the proposed method significantly outperforms conventional front-end algorithms and our previous work over various background noise types and SNR (Signal-to-Noise Ratio) conditions in a mismatched ASR (Automatic Speech Recognition) setting. The computational complexity is significantly reduced by employing a noise-model selection technique while maintaining the speech recognition performance at a similar level.
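
The sketch below shows only how a feature compensation gain might be formed and applied; it assumes the gain is the ratio of the compensated (clean-estimate) mel features to the noisy mel features and that the mel-domain gain is mapped back to linear frequency with a pseudo-inverse filterbank. The PCGMM-based compensation itself is not shown, and all names and the gain definition are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def feature_compensation_gain(noisy_mel, compensated_mel, eps=1e-8):
    """
    noisy_mel:       (frames, mel_bins) mel-spectral features of the noisy speech.
    compensated_mel: (frames, mel_bins) clean-feature estimate from the
                     feature compensation front-end.
    The gain is taken as the ratio of compensated to noisy features.
    """
    return np.clip(compensated_mel / (noisy_mel + eps), 0.0, 1.0)

def enhance_with_gain(noisy_spec, gain, mel_filterbank):
    """
    Map the mel-domain gain back to linear frequency and apply it to the
    noisy magnitude spectrum.
    noisy_spec: (frames, fft_bins); mel_filterbank: (mel_bins, fft_bins).
    """
    lin_gain = gain @ np.linalg.pinv(mel_filterbank).T  # (frames, fft_bins)
    return noisy_spec * np.clip(lin_gain, 0.0, 1.0)
```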