• Title/Summary/Keyword: Noise speech data

Blind Noise Separation Method of Convolutive Mixed Signals (컨볼루션 혼합신호의 암묵 잡음분리방법)

  • Lee, Haeng-Woo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.3
    • /
    • pp.409-416
    • /
    • 2022
  • This paper concerns a blind noise separation method for time-delayed convolutive mixed signals. Since the mixing model of acoustic signals in a closed space is multi-channel, a convolutive blind signal separation method is applied, and time-delayed data samples of the two microphone input signals are used. For signal separation, the mixing coefficients are calculated using an inverse model rather than by directly calculating the separation coefficients, and the coefficient update is performed by iterative calculations based on second-order statistical properties to estimate the speech signal. Many simulations were performed to verify the performance of the proposed blind signal separation. The simulations show that noise separation using this method operates stably regardless of the convolutive mixing, and that PESQ improves by 0.3 points compared to a general adaptive FIR filter structure.
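
The paper's inverse-model coefficient update for convolutive mixtures is not reproduced in the abstract, but the core idea of separating sources from second-order statistics alone can be illustrated with the classical AMUSE algorithm on an instantaneous two-channel mixture. This is a minimal sketch under that simplifying assumption, not the author's method; the toy signals and names are illustrative.

    import numpy as np

    def amuse(X, tau=1):
        """Second-order blind separation (AMUSE) for instantaneous mixtures.
        X: (n_channels, n_samples) mixed signals; returns estimated sources."""
        X = X - X.mean(axis=1, keepdims=True)
        # Whiten with the zero-lag covariance.
        C0 = X @ X.T / X.shape[1]
        d, E = np.linalg.eigh(C0)
        Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
        # Diagonalizing a symmetrized lagged covariance gives the rotation.
        C_tau = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
        _, W = np.linalg.eigh((C_tau + C_tau.T) / 2)
        return W.T @ Z

    # Toy demo: two sources with different spectra, instantaneously mixed.
    rng = np.random.default_rng(0)
    t = np.arange(8000)
    s = np.vstack([np.sin(2 * np.pi * 0.01 * t), rng.standard_normal(8000)])
    A = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown mixing matrix
    y = amuse(A @ s)                        # recovers sources up to scale/order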

Acoustic Characteristics on the Adolescent Period Aged from 16 to 18 Years (16~18세 청소년기 음성의 음향음성학적 특성)

  • Ko, Hye-Ju;Kang, Min-Jae;Kwon, Hyuk-Jae;Choi, Yaelin;Lee, Mi-Geum;Choi, Hong-Shik
    • Phonetics and Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.81-90
    • /
    • 2013
  • During adolescence, the mutational period is characterized by changes in the laryngeal structure, the length of the vocal folds, and the tone of voice. Adolescents usually reach an adult voice at 15 or 16, but the mutational period is sometimes delayed. Studies on the voices of adolescents aged 16 to 18, just after the mutational period, are therefore needed. Accordingly, this paper provides basic normative data for patients with voice disorders in this period by evaluating the vocal characteristics of males and females aged 16 to 18 with objective instruments and by comparing and analyzing the results by sex and age. The study was conducted on a total of 60 subjects, 10 of each sex at each age. The vocal analysis consisted of maximum phonation time (MPT) measurement, sustained vowels, and sentence reading. For the sustained vowel /a/, the fundamental frequency (F0), jitter, shimmer, and noise-to-harmonic ratio (NHR) were measured with the Multi-Dimensional Voice Program (MDVP) in the Multi-Speech program of the Computerized Speech Lab (Kay Elemetrics). For sentence reading, mean F0, maximum F0, and minimum F0 were measured with the Real-Time Pitch (RTP) Model 5121 in the same system. By sex, there were statistically significant differences in F0, jitter, shimmer, mean F0, maximum F0, and minimum F0; by age, there were statistically significant differences in MPT. In conclusion, the voices of adolescents aged 16 to 18 reached adult maturity, but the voice quality measures relevant to diagnosing voice disorders still showed a transition toward the adult voice during the mutational period.
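
MDVP's exact formulas are proprietary, but local jitter and shimmer are conventionally defined from cycle-to-cycle variation. A minimal sketch, assuming the pitch periods and per-cycle peak amplitudes of the sustained /a/ have already been extracted (the values below are hypothetical):

    import numpy as np

    def local_jitter(periods):
        """Jitter (%): mean absolute difference of consecutive pitch
        periods, relative to the mean period."""
        periods = np.asarray(periods, dtype=float)
        return 100 * np.mean(np.abs(np.diff(periods))) / periods.mean()

    def local_shimmer(amplitudes):
        """Shimmer (%): the same measure applied to per-cycle peak amplitudes."""
        amplitudes = np.asarray(amplitudes, dtype=float)
        return 100 * np.mean(np.abs(np.diff(amplitudes))) / amplitudes.mean()

    periods_ms = [8.01, 8.05, 7.98, 8.02, 8.04]  # F0 = 1000 / mean ≈ 125 Hz
    peaks = [0.81, 0.79, 0.80, 0.82, 0.80]
    print(local_jitter(periods_ms), local_shimmer(peaks))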

Acoustic Qualities of Phonation in Hearing-impaired Male Adults (청각장애 성인 남성의 음성 특성)

  • Sehr, Kyoung-Hee
    • MALSORI
    • /
    • no.65
    • /
    • pp.37-49
    • /
    • 2008
  • The purposes of this experiment were to compare and analyze voice parameters of hearing-impaired male adults and to provide basic data for speech intervention for the hearing impaired. Voice analysis of four sustained vowels (/a/, /i/, /ə/, /u/) for fundamental frequency (F0), jitter percent, shimmer percent, and noise-to-harmonic ratio (NHR) was conducted for deaf young male adults using sign language (N=5, aged 16-20) and normal-hearing young male adults (N=10, aged 18-20) with the MDVP (Multi-Dimensional Voice Program) in CSL. F0, jitter, and shimmer in the deaf group were significantly higher than in the normal-hearing group. The average F0 was 151 Hz, lower than in previous studies, and there were no significant differences among the sustained vowels. In both groups, the values of the voice parameters were most stable on /a/ or /ə/, the vowels closest to the standard scores.

The acoustic realization of the Korean sibilant fricative contrast in Seoul and Daegu

  • Holliday, Jeffrey J.
    • Phonetics and Speech Sciences
    • /
    • v.4 no.1
    • /
    • pp.67-74
    • /
    • 2012
  • The neutralization of /sʰ/ and /s*/ in Gyeongsang dialects is a culturally salient stereotype that has received relatively little attention in the phonetic literature. The current study is a more extensive acoustic comparison of the sibilant fricative productions of Seoul and Gyeongsang dialect speakers. The data presented here suggest that, at least for young Seoul and Daegu speakers, there are few inter-dialectal differences in sibilant fricative production. These conclusions are supported by the output of mixed effects logistic regression models that used aspiration duration, spectral mean of the frication noise, and H1-H2 of the following vowel to predict fricative type in each dialect. The clearest dialect difference was that Daegu speakers' /sʰ/ and /s*/ productions had overall shorter aspiration durations than those of Seoul speakers, suggesting the opposite of the traditional "/s*/ produced as [sʰ]" stereotype of Gyeongsang dialects. Further work is needed to investigate whether /sʰ/-/s*/ neutralization in Daegu is perceptual rather than acoustic in nature.
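
The paper fits mixed effects logistic regression models (with speaker random effects) to predict fricative type from the three cues. A fixed-effects sketch showing only the predictor setup, on fabricated data, might look like this:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-token measurements: aspiration duration (ms),
    # spectral mean of the frication noise (Hz), H1-H2 of the vowel (dB).
    rng = np.random.default_rng(1)
    n = 200
    X = np.column_stack([
        rng.normal(60, 20, n),     # aspiration duration
        rng.normal(7000, 900, n),  # spectral mean
        rng.normal(3, 2, n),       # H1-H2
    ])
    y = (X[:, 0] > 60).astype(int)  # pretend longer aspiration marks /sʰ/

    model = LogisticRegression().fit(X, y)
    print(model.coef_)  # sign and size of each cue's contribution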

Speech Recognition Performance Improvement using a convergence of GMM Phoneme Unit Parameter and Vocabulary Clustering (GMM 음소 단위 파라미터와 어휘 클러스터링을 융합한 음성 인식 성능 향상)

  • Oh, SangYeob
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.8
    • /
    • pp.35-39
    • /
    • 2020
  • Compared with conventional speech recognition systems, DNN-based systems achieve lower error rates, but DNNs are difficult to train in parallel, computationally expensive, and dependent on large amounts of training data. In this paper, we address this problem efficiently by generating phoneme units and estimating the GMM parameters of each phoneme model, and we propose a way to improve performance through clustering of specific vocabularies so that the models can be applied effectively. To this end, a vocabulary model was built from three kinds of word-speech databases, and Wiener filters were used for noise processing during feature extraction in the speech recognition experiments. The proposed method achieved a 97.9% recognition rate. Further study is needed to address the overfitting problem of the improved model.
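
The phoneme-unit GMM step can be sketched as one mixture model per phoneme over frame-level features, with classification by the highest total log-likelihood. This is a generic illustration on random stand-in features, not the authors' system:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    train = {  # stand-ins for MFCC frames of each phoneme
        "a": rng.normal(0.0, 1.0, (500, 13)),
        "i": rng.normal(2.0, 1.0, (500, 13)),
    }
    models = {p: GaussianMixture(n_components=4, covariance_type="diag",
                                 random_state=0).fit(f)
              for p, f in train.items()}

    frames = rng.normal(2.0, 1.0, (40, 13))  # frames of an unknown phoneme
    scores = {p: m.score(frames) for p, m in models.items()}
    print(max(scores, key=scores.get))       # -> "i"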

Normalization of Spectral Magnitude and Cepstral Transformation for Compensation of Lombard Effect (롬바드 효과의 보정을 위한 스펙트럼 크기의 정규화와 켑스트럼 변환)

  • Chi, Sang-Mun;Oh, Yung-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4
    • /
    • pp.83-92
    • /
    • 1996
  • This paper describes Lombard effect compensation and noise suppression to reduce speech recognition errors in noisy environments. The Lombard effect is represented by the variation of the spectral envelope of an energy-normalized word and the variation of overall vocal intensity. The variation of the spectral envelope can be compensated by a linear transformation in the cepstral domain. The variation of vocal intensity is canceled by spectral magnitude normalization. Spectral subtraction is used to suppress noise contamination, and band-pass filtering is used to emphasize dynamic features. To understand the Lombard effect and verify the effectiveness of the proposed method, speech data were collected in simulated noisy environments. Recognition experiments were conducted with contamination by noise from automobile cabins, an exhibition hall, telephone booths in downtown areas, crowded streets, and computer rooms. The experiments confirmed the effectiveness of the proposed method.
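
The spectral subtraction step can be sketched in the magnitude domain, assuming a noise-only segment is available for estimating the noise spectrum (a minimal overlap-add version, not the authors' implementation):

    import numpy as np

    def spectral_subtraction(noisy, noise, frame=256, hop=128, floor=0.01):
        """Subtract an average noise magnitude spectrum frame by frame."""
        win = np.hanning(frame)
        mags = [np.abs(np.fft.rfft(noise[i:i + frame] * win))
                for i in range(0, len(noise) - frame, hop)]
        noise_mag = np.mean(mags, axis=0)  # average noise magnitude spectrum

        out = np.zeros(len(noisy))
        for i in range(0, len(noisy) - frame, hop):
            spec = np.fft.rfft(noisy[i:i + frame] * win)
            mag = np.maximum(np.abs(spec) - noise_mag, floor * noise_mag)
            # Keep the noisy phase; resynthesize and overlap-add.
            out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)),
                                             frame)
        return out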

Voiced-Unvoiced-Silence Detection Algorithm using Perceptron Neural Network (퍼셉트론 신경회로망을 사용한 유성음, 무성음, 묵음 구간의 검출 알고리즘)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.2
    • /
    • pp.237-242
    • /
    • 2011
  • This paper proposes a detection algorithm that classifies each frame as a voiced, unvoiced, or silence section using a multi-layer perceptron neural network. First, the power spectrum and FFT (fast Fourier transform) coefficients obtained for each frame are used as the input to the neural network, and the network is trained on these features. In the experiments, the performance of the proposed algorithm in detecting voiced, unvoiced, and silence sections was evaluated by the detection rates for various speech signals degraded by white noise and used as input to the neural network. The detection rates were 92% or higher for such speech and white noise even when the training and evaluation data were different.
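
A minimal version of such a per-frame classifier, using scikit-learn's multi-layer perceptron on placeholder spectral features (a real setup would extract them from labeled speech), might look like:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)
    frames = rng.standard_normal((1000, 129))  # e.g. |rFFT| of 256-sample frames
    labels = rng.integers(0, 3, 1000)          # 0 silence, 1 unvoiced, 2 voiced

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(frames, labels)
    print(clf.predict(frames[:5]))             # per-frame section decisions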

Sound System Analysis for Health Smart Home

  • Castelli, Eric;Istrate, Dan;Nguyen, Cong-Phuong
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.237-243
    • /
    • 2004
  • A multichannel smart sound sensor capable of detecting and identifying sound events in noisy conditions is presented in this paper. Sound information extraction is a complex task, and the main difficulty is the extraction of high-level information from a one-dimensional signal. The input of the smart sound sensor is composed of data collected by 5 microphones, and its output data is sent through a network. For real-time operation, the sound analysis is divided into three steps: sound event detection on each sound channel, fusion of simultaneous events, and sound identification. The event detection module finds impulsive signals in the noise and extracts them from the signal flow. The smart sensor must be able to identify not only impulsive signals but also the presence of speech in a noisy environment. The classification module is launched as a parallel task on the channel chosen by the data fusion process. It identifies the event sound among seven predefined sound classes using a Gaussian Mixture Model (GMM) method. Mel-frequency cepstral coefficients are used in combination with additional features such as the zero-crossing rate, spectral centroid, and roll-off point. This smart sound sensor is part of a medical telemonitoring project aimed at detecting serious accidents.
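
The feature vector described (MFCCs plus zero-crossing rate, spectral centroid, and roll-off point) can be assembled per frame with librosa; the bundled example clip below stands in for one sensor channel, and the GMM classification would proceed as in the phoneme-model sketch above:

    import numpy as np
    import librosa

    y, sr = librosa.load(librosa.ex("trumpet"))  # stand-in for a microphone channel
    feats = np.vstack([
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13),
        librosa.feature.zero_crossing_rate(y),
        librosa.feature.spectral_centroid(y=y, sr=sr),
        librosa.feature.spectral_rolloff(y=y, sr=sr),
    ]).T  # (n_frames, 16) matrix, ready for GMM training per sound class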

Text-Independent Speaker Verification Using Variational Gaussian Mixture Model

  • Moattar, Mohammad Hossein;Homayounpour, Mohammad Mehdi
    • ETRI Journal
    • /
    • v.33 no.6
    • /
    • pp.914-923
    • /
    • 2011
  • This paper concerns robust and reliable speaker model training for text-independent speaker verification. The baseline speaker modeling approach is the Gaussian mixture model (GMM). In text-independent speaker verification, the amount of speech data may differ across speakers, yet we still wish the modeling approach to perform equally well for all of them. The modeling technique must also be as robust as possible to unseen data. The traditional approach to GMM training is the expectation-maximization (EM) method, which is known for its overfitting problem and its weakness in handling insufficient training data. To tackle these problems, variational approximation is proposed. Variational approaches are known to be robust against overtraining and data insufficiency. We evaluated the proposed approach on two different databases, KING and TFarsdat. The experiments show that the proposed approach improves performance on the TFarsdat and KING databases by 0.56% and 4.81%, respectively. They also show that the variationally optimized GMM is more robust against noise: the verification error rate in noisy environments for the TFarsdat dataset decreases by 1.52%.
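
The EM-versus-variational contrast the paper exploits can be sketched with scikit-learn's two mixture classes on deliberately scarce stand-in features (this uses the library's variational implementation, not the authors'):

    import numpy as np
    from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

    rng = np.random.default_rng(4)
    feats = rng.standard_normal((60, 13))  # only 60 frames of fake features

    em = GaussianMixture(n_components=8, covariance_type="diag",
                         random_state=0).fit(feats)
    vb = BayesianGaussianMixture(n_components=8, covariance_type="diag",
                                 random_state=0).fit(feats)
    # The variational prior shrinks the weights of unneeded components
    # instead of overfitting them to the scarce data.
    print(em.weights_.round(3), vb.weights_.round(3))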

Voice Packet Processing Scheme for Voice Quality and Bandwidth Efficiency in VoIP (VoIP의 음성품질/대역효율 개선을 위한 음성패킷 처리)

  • Kim, Jae-Won;Sohn, Dong-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.7
    • /
    • pp.896-904
    • /
    • 2004
  • In this paper, we present an efficient variable-rate speech coder for spectral efficiency and a packet processing technique that compensates for packet loss in a voice codec with 10 ms frames in VoIP service. By disconnecting users from the spectral resource during silence intervals, which occupy about 60% of the time, a variable-rate voice coder based on voice activity detection (VAD) can roughly double the spectral gain. The performance of the method was analyzed through the variation of the detected voice activity factor and the degraded speech frame ratio under various background noise levels, and compared with those of G.729B, the ITU-T 8 kbps standard speech codec. The method for compensating lost packets adds recovery data to the main stream and applies an error concealment scheme to enhance speech quality; its performance is verified by the quality of the reconstructed speech. The proposed scheme can double spectral efficiency or enhance speech quality by 3 dB through the bandwidth reserved by VAD. Therefore, the proposed method can improve either the spectral efficiency or the speech quality of VoIP.
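
G.729B's actual VAD combines several features, but the gating idea behind a variable-rate coder can be sketched with a simple per-frame energy decision (10 ms frames at 8 kHz; the threshold is illustrative):

    import numpy as np

    def simple_vad(x, frame=80, threshold_db=-40.0):
        """Energy-based speech/silence decision per 10 ms frame at 8 kHz."""
        n = len(x) // frame
        frames = x[:n * frame].reshape(n, frame)
        power_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
        return power_db > threshold_db  # True = speech frame, transmit

    # Frames flagged False need not be transmitted (or carry only
    # comfort-noise parameters), reclaiming the roughly 60% silent time.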
