Title/Summary/Keyword: Non-speech

Search results: 468

Harmonics-based Spectral Subtraction and Feature Vector Normalization for Robust Speech Recognition

  • Beh, Joung-Hoon; Lee, Heung-Kyu; Kwon, Oh-Il; Ko, Han-Seok
    • Speech Sciences / v.11 no.1 / pp.7-20 / 2004
  • In this paper, we propose a two-step noise compensation algorithm in feature extraction for achieving robust speech recognition. The proposed method requires no a priori information about the noisy environment and is simple to implement. First, in the frequency domain, Harmonics-based Spectral Subtraction (HSS) is applied to reduce the additive background noise and make the shape of the harmonics in the speech spectrum more pronounced. We then apply a judiciously weighted variance Feature Vector Normalization (FVN) to compensate for both channel distortion and additive noise; the weighted variance FVN compensates for the variance mismatch in the speech and non-speech regions respectively. A representative performance evaluation using the Aurora 2 database shows that the proposed method yields a 27.18% relative improvement in accuracy under a multi-noise training task and a 57.94% relative improvement under a clean training task.

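A minimal Python sketch of the two stages described above, assuming plain magnitude-domain subtraction with the noise spectrum estimated from leading speech-free frames; the harmonic emphasis that gives HSS its name, the paper's separate speech/non-speech variance weights, and all parameter values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def spectral_subtraction(stft_mag, noise_frames=10, alpha=2.0, beta=0.01):
    """Subtract an averaged noise magnitude spectrum from every frame.

    stft_mag has shape (freq_bins, frames); the first `noise_frames`
    frames are assumed to be speech-free. alpha/beta are illustrative.
    """
    noise_est = stft_mag[:, :noise_frames].mean(axis=1, keepdims=True)
    cleaned = stft_mag - alpha * noise_est        # oversubtraction
    return np.maximum(cleaned, beta * stft_mag)   # spectral floor

def weighted_variance_fvn(features, weight=1.0):
    """Normalize each feature dimension to zero mean and weighted unit
    variance. The paper weights speech and non-speech regions separately;
    this single-weight version only gestures at that idea."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-8           # guard against zero variance
    return weight * (features - mu) / sigma
```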

Feature Vector Processing for Speech Emotion Recognition in Noisy Environments (잡음 환경에서의 음성 감정 인식을 위한 특징 벡터 처리)

  • Park, Jeong-Sik; Oh, Yung-Hwan
    • Phonetics and Speech Sciences / v.2 no.1 / pp.77-85 / 2010
  • This paper proposes an efficient feature vector processing technique to guard Speech Emotion Recognition (SER) systems against a variety of noises. In the proposed approach, emotional feature vectors are extracted from speech processed by comb filtering, and these features are then used to construct robust models based on feature vector classification. We modify conventional comb filtering by using the speech presence probability, to minimize the drawbacks of incorrect pitch estimation under background noise. The modified comb filtering correctly enhances the harmonics, an important factor in SER. The feature vector classification technique categorizes feature vectors as either discriminative or non-discriminative based on a log-likelihood criterion, selecting the discriminative vectors while preserving the correct emotional characteristics; robust emotion models can then be constructed from the discriminative vectors alone. In SER experiments on an emotional speech corpus contaminated by various noises, the proposed approach outperformed the baseline system.

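One plausible reading of the harmonic enhancement step, sketched in Python: a comb filter reinforces the signal at the estimated pitch period and is applied only where the speech presence probability is high, so unreliable pitch estimates do no harm. The per-frame scalar probability, the fixed threshold, and the gain are assumed simplifications, not the paper's formulation.

```python
import numpy as np

def comb_filter(frame, pitch_period, gain=0.8):
    """Reinforce harmonics by adding a copy of the frame delayed by one
    pitch period (a positive number of samples)."""
    y = frame.astype(float).copy()
    y[pitch_period:] += gain * frame[:-pitch_period]
    return y

def gated_comb_filter(frames, pitch_periods, presence_probs, threshold=0.5):
    """Apply the comb filter only to frames whose speech presence
    probability clears the threshold; other frames pass through."""
    return [comb_filter(f, t) if p >= threshold else f
            for f, t, p in zip(frames, pitch_periods, presence_probs)]
```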

An Experimental Study on Barging-In Effects for Speech Recognition Using Three Telephone Interface Boards

  • Park, Sung-Joon; Kim, Ho-Kyoung; Koo, Myoung-Wan
    • Speech Sciences / v.8 no.1 / pp.159-165 / 2001
  • In this paper, we experiment with speech recognition on barging-in and non-barging-in utterances. Barging-in capability, which lets a user speak a voice command while a voice announcement is still playing, is an important element of practical speech recognition systems; it can be realized with echo cancellation techniques based on the LMS (least-mean-square) algorithm. We use three telephone interface boards with barging-in capability, made by Dialogic, Natural MicroSystems, and Korea Telecom respectively. A speech database was collected using these three boards, and a comparative recognition experiment was conducted on it.

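A minimal sketch of the LMS echo cancellation the abstract mentions: an FIR filter adapts so that the filtered announcement (far-end) signal predicts its echo in the microphone signal, and the residual carries the caller's barged-in speech. The tap count and step size are illustrative choices.

```python
import numpy as np

def lms_echo_canceller(far_end, mic, taps=128, mu=0.005):
    """Cancel the announcement echo from the microphone signal.

    far_end and mic are equal-length 1-D arrays; mu must be small
    enough for the adaptation to stay stable.
    """
    w = np.zeros(taps)            # FIR weights modeling the echo path
    e = np.zeros(len(mic))        # residual = near-end (barge-in) speech
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]   # latest far-end samples first
        y = w @ x                       # echo estimate
        e[n] = mic[n] - y               # cancellation residual
        w = w + 2.0 * mu * e[n] * x     # LMS weight update
    return e
```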

Feature Parameter Extraction and Analysis in the Wavelet Domain for Discrimination of Music and Speech (음악과 음성 판별을 위한 웨이브렛 영역에서의 특징 파라미터)

  • Kim, Jung-Min; Bae, Keun-Sung
    • MALSORI / no.61 / pp.63-74 / 2007
  • Discrimination of music and speech in multimedia signals is an important task in audio coding and broadcast monitoring systems. This paper deals with feature parameter extraction for discriminating music from speech. The wavelet transform is a multi-resolution analysis method useful for analyzing the temporal and spectral properties of non-stationary signals such as speech and audio. We propose new feature parameters, extracted from the wavelet-transformed signal, for discriminating music from speech. First, wavelet coefficients are obtained on a frame-by-frame basis, with an analysis frame size of 20 ms. A parameter $E_{sum}$ is then defined by summing the magnitude differences between adjacent wavelet coefficients in each scale. The maximum and minimum values of $E_{sum}$ over a 2-second period, corresponding to the discrimination duration, are used as feature parameters. To evaluate the proposed feature parameters, discrimination accuracy is measured for various types of music and speech signals. In the experiment, each 2-second segment is classified as music or speech, and about 93% of the music and speech segments were detected correctly.

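A sketch of the $E_{sum}$ feature as the abstract defines it, assuming PyWavelets for the decomposition; the wavelet family and decomposition level are guesses rather than the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets

def e_sum(frame, wavelet="db4", level=4):
    """Sum, over every scale, the absolute differences in magnitude
    between adjacent wavelet coefficients of one 20 ms frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return sum(np.abs(np.diff(np.abs(c))).sum() for c in coeffs)

def window_features(frames):
    """Max and min of E_sum over the 2 s discrimination window, the two
    feature parameters used for the music/speech decision."""
    values = [e_sum(f) for f in frames]
    return max(values), min(values)
```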

Robust Speech Enhancement Based on Soft Decision Employing Spectral Deviation (스펙트럼 변이를 이용한 Soft Decision 기반의 음성향상 기법)

  • Choi, Jae-Hun; Chang, Joon-Hyuk; Kim, Nam-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.222-228 / 2010
  • In this paper, we propose a new approach to noise estimation that incorporates spectral deviation into a soft-decision scheme, to enhance the intelligibility of degraded speech signals in non-stationary noisy environments. The conventional soft-decision noise estimation technique updates the noise power spectrum with a fixed smoothing parameter chosen under a stationarity assumption, so it has difficulty producing robust noise power spectrum estimates in non-stationary environments, such as a restaurant, where the spectral characteristics of the noise change constantly. Here, we first classify the environment as stationary or non-stationary based on an analysis of the spectral deviation of the noise signal, and then adaptively estimate and update the noise power spectrum according to the classified noise type. The performance of the proposed algorithm, evaluated with ITU-T P.862 perceptual evaluation of speech quality (PESQ) under various ambient noise environments, is better than that of the conventional method.
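
One plausible shape for the adaptive update, sketched in Python under stated assumptions: the smoothing parameter is switched by the measured spectral deviation, and the update is weighted by the per-frame speech absence probability. The constants and the exact blending rule are illustrative, not the paper's.

```python
import numpy as np

def update_noise_psd(noise_psd, frame_psd, speech_absence_prob, deviation,
                     dev_threshold=0.5, alpha_stat=0.98, alpha_nonstat=0.80):
    """Soft-decision noise power update: slow tracking when the spectral
    deviation says the noise is stationary, faster tracking otherwise."""
    alpha = alpha_stat if deviation < dev_threshold else alpha_nonstat
    # Update more aggressively where speech is likely absent.
    gain = (1.0 - alpha) * speech_absence_prob
    return (1.0 - gain) * noise_psd + gain * frame_psd
```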

Effects of Continuous Speech Therapy in Patients with Non-fluent Aphasia Using kMIT (kMIT를 이용한 비유창성 실어증 환자 음성 언어의 치료효과 연구)

  • Lee, Ju-Hee; Ko, Myun-Hwan; Kim, Hyun-Gi; Hong, Ki-Hwan
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.16 no.2 / pp.158-164 / 2005
  • Melodic Intonation Therapy (MIT) aims to improve the linguistic aspects of verbal utterances in aphasic patients by making use of the intact right hemisphere. Aphasic patients with good comprehension, poor fluency, and little available speech are thought to be ideal candidates. The purpose of this study was to investigate the effects of Korean Melodic Intonation Therapy (kMIT) in patients with non-fluent aphasia. Five male non-fluent aphasic patients participated in this study; their average age was 49.9 years. Each therapy session took 45 to 50 minutes, once a week for six months. An aphasia screening test (RISS) was used to assess language parameters such as auditory comprehension, oral expression, reading, writing, and calculation ability before and after kMIT. Mean length of utterance, verbal intelligibility, and articulation disorder were also assessed. The Computerized Speech Lab was used to assess the acoustic characteristics of the patients before and after kMIT. The results are as follows: 1) Auditory comprehension, oral expression, reading, writing, and calculation ability increased after kMIT, but only oral expression showed a significant difference (p<0.05). 2) Mean length of utterance of the five patients generally increased after kMIT. 3) Verbal intelligibility increased after kMIT and showed a significant difference (p<0.05). 4) The misarticulation rate generally decreased after kMIT. 5) Voice onset time of the alveolar lenis /t/ and the velar lenis /k/ gradually decreased after kMIT. 6) Intonation patterns in yes/no questions gradually improved after kMIT.

The Effect of Visual Cues in the Identification of the English Consonants /b/ and /v/ by Native Korean Speakers (한국어 화자의 영어 양순음 /b/와 순치음 /v/ 식별에서 시각 단서의 효과)

  • Kim, Yoon-Hyun; Koh, Sung-Ryong; Hazan, Valerie
    • Phonetics and Speech Sciences / v.4 no.3 / pp.25-30 / 2012
  • This study investigated whether native Korean listeners can use visual cues to identify the English consonants /b/ and /v/. Both auditory and audiovisual tokens of minimal word pairs, with the target phonemes in word-initial or word-medial position, were used. Participants were instructed to decide which consonant they heard under $2{\times}2$ conditions: cue (audio-only, audiovisual) and location (word-initial, word-medial). Mean identification scores were significantly higher for the audiovisual than the audio-only condition, and for word-initial than word-medial position. Following signal detection theory, sensitivity (d') and response bias (c) were calculated from the hit and false alarm rates. These measures showed that the higher identification rate in the audiovisual condition was related to an increase in sensitivity; there were no significant differences in response bias across conditions. The results suggest that native Korean speakers can use visual cues while identifying confusable non-native phonemic contrasts, and that visual cues can enhance non-native speech perception.
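
The standard signal-detection computations behind d' and c, as a short sketch; the clipping constant is an illustrative guard against infinite z-scores, not a detail from the paper.

```python
from scipy.stats import norm

def d_prime_and_c(hit_rate, fa_rate, eps=1e-4):
    """Sensitivity d' = z(H) - z(F); response bias c = -(z(H) + z(F)) / 2.
    Rates are clipped away from 0 and 1 so the inverse normal is finite."""
    h = min(max(hit_rate, eps), 1.0 - eps)
    f = min(max(fa_rate, eps), 1.0 - eps)
    z_h, z_f = norm.ppf(h), norm.ppf(f)
    return z_h - z_f, -(z_h + z_f) / 2.0
```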

A Corpus-based study on the Effects of Gender on Voiceless Fricatives in American English

  • Yoon, Tae-Jin
    • Phonetics and Speech Sciences / v.7 no.1 / pp.117-124 / 2015
  • This paper investigates the acoustic characteristics of English fricatives in the TIMIT corpus, with a special focus on the role of gender in the production of fricatives in American English. The TIMIT database includes 630 talkers and 2342 different sentences, comprising over five hours of speech. Acoustic analyses of spectral and temporal properties are conducted with gender as an independent factor. The analyses revealed that most acoustic properties of voiceless sibilants differ between male and female speakers, whereas those of voiceless non-sibilants do not. A classification experiment using linear discriminant analysis (LDA) showed that 85.73% of voiceless fricatives are correctly classified: 88.61% of sibilants, but only 57.91% of non-sibilants. The majority of the errors come from the misclassification of /θ/ as [f]. The average accuracy of gender classification is 77.67%, with most errors arising from the classification of female speakers on non-sibilants. The results are accounted for in terms of biological differences as well as macro-social factors. The paper contributes to the understanding of the role of gender in a large-scale speech corpus.
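
A sketch of the LDA classification step using scikit-learn on synthetic stand-in data; the feature set, class labels, and cross-validation scheme are illustrative, and real use would substitute spectral and temporal measurements taken from the TIMIT tokens.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: rows are fricative tokens, columns are acoustic
# measurements (e.g. spectral peak, centroid, duration); labels are the
# four voiceless fricative classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = rng.integers(0, 4, size=400)

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```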

Comparison of Self-Reporting Voice Evaluations between Professional and Non-Professional Voice Users with Voice Disorders by Severity and Type (음성장애가 있는 직업적 음성사용자와 비직업적 음성사용자의 음성장애 중증도와 유형에 따른 자기보고식 음성평가 차이)

  • Kim, Jaeock
    • Phonetics and Speech Sciences / v.7 no.4 / pp.67-76 / 2015
  • The purpose of this study was to compare professional (Pro) and non-professional (Non-pro) voice users with voice disorders on self-reported voice evaluations, using the Korean Voice Handicap Index (K-VHI) and the Korean Voice-Related Quality of Life (K-VRQOL). The scores were also compared by voice quality and voice disorder type. Ninety-four Pro and 106 Non-pro voice users were asked to fill out the K-VHI and K-VRQOL; they were evaluated perceptually on the GRBAS scales and assigned to three voice disorder types (functional, organic, and neurologic) by an experienced speech-language pathologist and an otolaryngologist. The results showed that the functional (F) and physical (P) scores of the K-VHI were significantly higher in the Pro group than in the Non-pro group. As voice quality rated on the G scale worsened, the scores of all aspects except the emotional (E) aspect of the K-VHI and the social-emotional (SE) aspect of the K-VRQOL rose. All K-VHI and K-VRQOL scores were significantly higher for neurologic voice disorders than for functional and organic voice disorders. In conclusion, professional voice users are more sensitive to the functional and physical handicap resulting from their voice problems, all the more so for patients with severe or neurologic voice disorders.

Cross-Generational Differences of /o/ and /u/ in Informal Text Reading (편지글 읽기에 나타난 한국어 모음 /오/-/우/의 세대간 차이)

  • Han, Jeong-Im; Kang, Hyunsook; Kim, Joo-Yeon
    • Phonetics and Speech Sciences / v.5 no.4 / pp.201-207 / 2013
  • This study is a follow-up to Han and Kang (2013) and Kang and Han (2013), which examined cross-generational changes in the Korean vowels /o/ and /u/ through acoustic analyses of the two vowels' formants, their Euclidean distances, and the overlap fraction values generated by SOAM 2D (Wassink, 2006). Those results showed an ongoing approximation of /o/ and /u/, more evident in female speakers and in non-initial vowels; however, the studies employed non-words in a frame sentence. To see the extent to which the two vowels are merged in real words in spontaneous speech, we conducted an acoustic analysis of the formants of /o/ and /u/ produced by two age groups of female speakers while reading a sample letter. The results demonstrate that 1) the younger speakers relied mostly on F2, not F1, differences in producing /o/ and /u/; 2) the Euclidean distance between the two vowels was shorter in non-initial than in initial position, but did not differ between the two age groups (20s vs. 40s-50s); and 3) overall, /o/ and /u/ overlapped more in non-initial than in initial position, with the younger speakers showing a more congested distribution of the vowels in non-initial position than the older speakers.
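
A sketch of the Euclidean-distance measure on the vowel formants; the array layout (one (F1, F2) row per token) and the unit choice are illustrative assumptions.

```python
import numpy as np

def f1f2_distance(o_tokens, u_tokens):
    """Euclidean distance between the mean (F1, F2) points of /o/ and /u/.
    Each argument has shape (n_tokens, 2), holding F1 and F2 per token in
    Hz or any normalized scale; a shrinking distance across generations
    would indicate /o/-/u/ approximation."""
    return float(np.linalg.norm(o_tokens.mean(axis=0) - u_tokens.mean(axis=0)))
```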