• Title/Summary/Keyword: Non-speech

Search Results: 468

Various Approaches to Improve Exclusion Performance of Non-similar Candidates from N-best Recognition Results on Isolated Word Recognition (고립 단어 인식 결과의 비유사 후보 단어 제외 성능을 개선하기 위한 다양한 접근 방법 연구)

  • Yun, Young-Sun
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.153-161
    • /
    • 2010
  • Many isolated word recognition systems may generate non-similar words as recognition candidates because they use only acoustic information. Previous studies [1,2] investigated several techniques that can exclude non-similar words from the N-best candidate list by applying the Levenshtein distance measure. This paper discusses various techniques for improving the removal of non-similar recognition results. The methods considered include comparison penalties or weights, phone accuracy based on confusion information, weighting candidates by rank order, and partial comparisons. Experimental results show that some of the proposed methods retain more accurate recognition results than the previous methods.
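As a minimal sketch of the kind of N-best filtering the abstract describes (the fixed distance threshold and the comparison against the top hypothesis are illustrative assumptions, not the paper's exact scheme):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def filter_candidates(nbest, max_dist=2):
    """Keep only candidates within max_dist edits of the top hypothesis."""
    top = nbest[0]
    return [w for w in nbest if levenshtein(top, w) <= max_dist]
```

A candidate list such as `["cat", "cart", "dog"]` would then be pruned to the words close to the top hypothesis, dropping `"dog"`.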


Target Speaker Speech Restoration via Spectral Bases Learning (주파수 특성 기저벡터 학습을 통한 특정화자 음성 복원)

  • Park, Sun-Ho;Yoo, Ji-Ho;Choi, Seung-Jin
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.3
    • /
    • pp.179-186
    • /
    • 2009
  • This paper proposes a target speech extraction method that restores the speech signal of a target speaker from a noisy convolutive mixture of speech and an interference source. We assume that the target speaker is known and that his/her utterances are available at training time. Incorporating the additional information extracted from the training utterances into the separation, we combine convolutive blind source separation (CBSS) with non-negative decomposition techniques, e.g., a probabilistic latent variable model. The non-negative decomposition is used to learn a set of bases from the spectrogram of the training utterances, where the bases represent the spectral information corresponding to the target speaker. Based on the learned spectral bases, our method provides two post-processing steps for CBSS. The channel selection step finds the output channel of CBSS that dominantly contains the target speech. The reconstruction step recovers the original spectrogram of the target speech from the selected output channel so that the remaining interference and background noise are suppressed. Experimental results show that our method substantially improves the separation results of CBSS and, as a result, successfully recovers the target speech.
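The non-negative decomposition step can be sketched with plain multiplicative-update NMF on a magnitude spectrogram; this is a generic stand-in, not the paper's probabilistic latent variable model, and the basis count and iteration budget are arbitrary:

```python
import numpy as np

def learn_spectral_bases(V, n_bases=8, n_iter=500, eps=1e-9):
    """Factor a non-negative magnitude spectrogram V (freq x time) as
    V ~ W @ H with non-negative W (spectral bases) and H (activations),
    using multiplicative updates that minimise the Euclidean error."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, n_bases)) + eps
    H = rng.random((n_bases, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spectral bases
    return W, H
```

The columns of `W` play the role of the speaker-specific spectral bases that the post-processing steps rely on.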

Comparison of voice range profiles of modal and falsetto register in dysphonic and non-dysphonic adult women (음성장애 성인 여성과 정상음성 성인 여성 간 진성구와 가성구의 음성범위프로파일 비교)

  • Jaeock Kim;Seung Jin Lee
    • Phonetics and Speech Sciences
    • /
    • v.14 no.4
    • /
    • pp.67-75
    • /
    • 2022
  • This study compared the voice range profiles (VRPs) of the modal and falsetto registers in 53 dysphonic and 53 non-dysphonic adult women gliding on the vowel /a/. The results show that maximum fundamental frequency (F0MAX), maximum intensity (IMAX), F0 range (F0RANGE), and intensity range (IRANGE) are lower in the dysphonic group than in the non-dysphonic group. F0MAX and F0RANGE are significantly higher in the falsetto register than in the modal register in both groups. IMAX and IRANGE are significantly higher in the falsetto register in the non-dysphonic group, but do not differ between the two registers in the dysphonic group. There was no statistically significant difference in minimum F0 (F0MIN) or minimum intensity (IMIN) between the two groups. The modal-falsetto register transition occurred at 378.86 Hz (F#4) in the dysphonic group and 557.79 Hz (C#5) in the non-dysphonic group, significantly lower in the dysphonic group. Both the modal and falsetto registers of dysphonic adult women are reduced compared to those of non-dysphonic adult women, indicating that the vocal folds of dysphonic women do not vibrate easily at high pitches. The results of this study provide basic data for understanding the acoustic features of voice disorders.
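The VRP summary measures above reduce to extremes and ranges over the F0 and intensity tracks; a minimal sketch (reporting the F0 range in semitones is a common convention, not necessarily the paper's):

```python
import math

def vrp_metrics(f0_values, intensity_values):
    """Summarise a voice range profile: F0 and intensity extremes and
    ranges. f0_values in Hz, intensity_values in dB SPL."""
    f0_min, f0_max = min(f0_values), max(f0_values)
    i_min, i_max = min(intensity_values), max(intensity_values)
    # F0 range is often reported in semitones: 12 * log2(max / min)
    f0_range_st = 12 * math.log2(f0_max / f0_min)
    return {"F0MIN": f0_min, "F0MAX": f0_max, "F0RANGE_ST": f0_range_st,
            "IMIN": i_min, "IMAX": i_max, "IRANGE": i_max - i_min}
```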

Adaptive Feedback Cancellation Using Independent Component Analysis for Digital Hearing Aid (독립성분분석을 이용한 디지털 보청기용 적응형 궤환 제거)

  • Ji, Yoon-Sang;Lee, Sang-Min;Jung, Sae-Young;Kim, In-Young;Kim, Sun-I
    • Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.79-89
    • /
    • 2005
  • Acoustic feedback between the microphone and receiver can be effectively cancelled by an adaptive feedback cancellation algorithm. Although many speech sounds have a non-Gaussian distribution, most algorithms have been tested with speech-like sounds whose distribution was Gaussian. In this paper, we propose an adaptive feedback cancellation algorithm based on independent component analysis (ICA) for digital hearing aids. The algorithm was tested with not only a Gaussian distribution but also a Laplacian distribution. We verified that the proposed algorithm has better acoustic feedback cancelling performance than the conventional normalized least mean square (NLMS) algorithm, especially for speech-like sounds with a Laplacian distribution.
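For reference, the conventional NLMS baseline mentioned in the abstract can be sketched as follows (a generic adaptive filter, not the proposed ICA-based method; the filter order and step size are illustrative):

```python
import numpy as np

def nlms_cancel(mic, ref, order=16, mu=0.5, eps=1e-8):
    """Normalised LMS adaptive filter: estimate the feedback path from
    ref (receiver output) to mic and subtract the estimate.
    Returns the error signal, i.e. the feedback-cancelled microphone signal."""
    w = np.zeros(order)          # adaptive FIR weights
    x = np.zeros(order)          # most-recent-first input buffer
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        x = np.roll(x, 1)
        x[0] = ref[n]
        y = w @ x                # feedback estimate
        e = mic[n] - y           # cancelled output
        w += mu * e * x / (x @ x + eps)   # normalised weight update
        out[n] = e
    return out
```

With a white reference signal and a short simulated feedback path, the residual after convergence is a small fraction of the feedback energy.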


Performance Evaluation of English Word Pronunciation Correction System (한국인을 위한 외국어 발음 교정 시스템의 개발 및 성능 평가)

  • Kim Mu Jung;Kim Hyo Sook;Kim Sun Ju;Kim Byoung Gi;Ha Jin-Young;Kwon Chul Hong
    • MALSORI
    • /
    • no.46
    • /
    • pp.87-102
    • /
    • 2003
  • In this paper, we present an English pronunciation correction system for Korean speakers and show some experimental results. The aim of the system is to detect mispronounced phonemes in spoken words and to give appropriate correction comments to users. Several English pronunciation correction systems adopt speech recognition technology; however, most of them use conventional speech recognition engines and therefore cannot give phoneme-based correction comments to users. In our system, we build two kinds of phoneme models: standard native-speaker models and Koreans' error models. We also design a phoneme-based recognition network to detect Koreans' common mispronunciations. We achieved a 90% detection rate for insertions, deletions, and replacements of phonemes, but could not achieve a high detection rate for diphthong splits and accent errors.
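Phoneme-level detection of insertions, deletions, and replacements can be sketched as an edit-distance alignment between the reference and recognised phoneme sequences (a generic alignment, not the paper's recognition network):

```python
def diagnose(reference, spoken):
    """Align a reference phoneme sequence with the recognised one and
    report insertions, deletions and substitutions."""
    m, n = len(reference), len(spoken)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (reference[i - 1] != spoken[j - 1]))
    # backtrace, preferring matches/substitutions
    errors, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           d[i][j] == d[i - 1][j - 1] + (reference[i - 1] != spoken[j - 1]):
            if reference[i - 1] != spoken[j - 1]:
                errors.append(("sub", reference[i - 1], spoken[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            errors.append(("del", reference[i - 1], None))
            i -= 1
        else:
            errors.append(("ins", None, spoken[j - 1]))
            j -= 1
    return list(reversed(errors))
```

For example, an /r/-for-/l/ replacement is reported as a single substitution against the reference sequence.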


A Study on the Vowel Duration of the Buckeye Corpus (벅아이 코퍼스의 모음 길이 연구)

  • Chung, Hyejung;Yoon, Kyuchul
    • Phonetics and Speech Sciences
    • /
    • v.7 no.4
    • /
    • pp.103-110
    • /
    • 2015
  • The purpose of this study is to assess vowel properties by examining the durations of the American English vowels found in the Buckeye corpus [6]. The vowel durations were analyzed in terms of various linguistic factors, including the number of syllables of the word containing the vowel, the location of the vowel in the word, type of stress, function versus content word, word frequency in the corpus, and the speech rate calculated from three consecutive words. The findings from this work mostly agreed with those from earlier studies, with some exceptions. The relationship between speech rate and vowel duration proved non-linear.
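Factor-wise duration analysis of this kind boils down to grouping token durations by a linguistic factor and averaging; a minimal sketch with hypothetical record fields (`stress`, `duration`):

```python
from collections import defaultdict
from statistics import mean

def mean_duration_by(records, factor):
    """Average vowel duration (seconds) grouped by one linguistic factor,
    e.g. 'stress' or 'word_class'. Each record is a dict of token fields."""
    groups = defaultdict(list)
    for r in records:
        groups[r[factor]].append(r["duration"])
    return {k: mean(v) for k, v in groups.items()}
```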

Vowel Fundamental Frequency in Manner Differentiation of Korean Stops and Affricates

  • Jang, Tae-Yeoub
    • Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.217-232
    • /
    • 2000
  • In this study, I investigate the role of post-consonantal fundamental frequency (F0) as a cue for the automatic distinction of types of Korean stops and affricates. Rather than examining data obtained by restricting contexts to a minimum to prevent the interference of irrelevant factors, a relatively natural speaker-independent speech corpus is analysed. Automatic and statistical approaches are adopted to annotate the data, minimise speaker variability, and evaluate the results. In spite of the possible loss of information during these automatic analyses, the statistics obtained suggest that vowel F0 is a useful cue for distinguishing the manner of articulation of Korean non-continuant obstruents sharing the same place of articulation, especially lax and aspirated stops and affricates. On the basis of these statistics, automatic classification is attempted over the relevant consonants in a specific context where the micro-prosodic effects appear to be maximised. The results confirm the usefulness of this effect for Korean phone recognition.
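Classification from post-consonantal F0 alone can be sketched as a one-dimensional Gaussian classifier per manner class (an illustrative stand-in for the paper's statistical approach; the F0 values below are made up):

```python
from statistics import mean, stdev
from math import log, pi

def fit_gaussian(values):
    """Fit a 1-D Gaussian (mean, sample standard deviation) to F0 samples."""
    return mean(values), stdev(values)

def log_likelihood(x, params):
    mu, sd = params
    return -0.5 * log(2 * pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

def classify_f0(x, models):
    """Pick the manner class whose Gaussian over post-consonantal F0
    gives the highest likelihood for the observed value x."""
    return max(models, key=lambda c: log_likelihood(x, models[c]))
```

Given toy models for lax versus aspirated consonants, a high post-consonantal F0 is assigned to the aspirated class.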


A Noise Reduction Method Combined with HMM Composition for Speech Recognition in Noisy Environments

  • Shen, Guanghu;Jung, Ho-Youl;Chung, Hyun-Yeol
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.1
    • /
    • pp.1-7
    • /
    • 2008
  • In this paper, an MSS-NOVO method that combines the HMM composition method with a noise reduction method is proposed for speech recognition in noisy environments. The combined method starts with noise reduction by modified spectral subtraction (MSS) to enhance the input noisy speech; the noise and voice composition (NOVO) method is then applied to make noise-adapted models using the noise in the non-utterance regions of the enhanced speech. To evaluate the effectiveness of the proposed method, we compare MSS-NOVO with other methods, i.e., SS-NOVO and MWF-NOVO. To set up the noisy speech for testing, we add white noise to the KLE 452 database at SNRs ranging from 0 dB to 15 dB in 5 dB intervals. In the tests, the MSS-NOVO method shows average improvements of 66.5% and 13.6% over the existing SS-NOVO and MWF-NOVO methods, respectively. In particular, the proposed MSS-NOVO method shows a large improvement at low SNRs.
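The spectral subtraction front end can be sketched in its basic magnitude-domain form (generic textbook SS with an over-subtraction factor and a spectral floor, not the paper's modified variant; the frame size and factors are illustrative):

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, frame=256, alpha=2.0, beta=0.01):
    """Basic magnitude spectral subtraction: subtract an over-estimated
    noise magnitude spectrum from each frame, flooring at beta*|noise|,
    and resynthesise with the noisy phase."""
    # average noise magnitude spectrum from a noise-only segment
    n_frames = len(noise_est) // frame
    noise_mag = np.mean([np.abs(np.fft.rfft(noise_est[i*frame:(i+1)*frame]))
                         for i in range(n_frames)], axis=0)
    out = np.zeros(len(noisy))
    for i in range(len(noisy) // frame):
        seg = noisy[i*frame:(i+1)*frame]
        spec = np.fft.rfft(seg)
        mag = np.abs(spec) - alpha * noise_mag      # over-subtraction
        mag = np.maximum(mag, beta * noise_mag)     # spectral floor
        out[i*frame:(i+1)*frame] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), frame)
    return out
```

On a tone buried in white noise, the enhanced signal is closer to the clean one than the noisy input is.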


Automatic Pronunciation Diagnosis System of Korean Students' English Using Purification Algorithm (정제 알고리즘을 이용한 한국인 화자의 영어 발화 자동 진단 시스템)

  • Yang, Il-Ho;Kim, Min-Seok;Yu, Ha-Jin;Han, Hye-Seung;Lee, Joo-Kyeong
    • Phonetics and Speech Sciences
    • /
    • v.2 no.2
    • /
    • pp.69-75
    • /
    • 2010
  • We propose an automatic pronunciation diagnosis system that evaluates the pronunciation of a foreign language without the uttered text. We recorded English utterances spoken by native and Korean speakers, and the utterances spoken by Koreans were evaluated by native speakers based on three criteria: fluency, accuracy of phones, and intonation. The system evaluates the utterances of test Korean speakers based on the difference of log-likelihoods given two models: one trained on English speech uttered by native speakers, and the other trained on English speech uttered by Korean speakers. We also applied a purification algorithm to increase class differentiability. The purification detects and eliminates non-speech frames, such as short pauses and occlusive silences, that do not help to discriminate between utterances. As a result, our proposed system has a higher correlation with the human scores than the baseline system.
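The purification and log-likelihood-difference scoring can be sketched with an energy-based frame filter and single diagonal Gaussians standing in for the trained acoustic models (the threshold and model form are illustrative assumptions):

```python
import math

def purify(frames, energy_floor=1e-3):
    """Drop near-silent frames (short pauses, occlusive silences) that
    carry no discriminative information."""
    return [f for f in frames if sum(x * x for x in f) / len(f) > energy_floor]

def frame_ll(frame, mu, sd):
    """Log-likelihood of a feature frame under a diagonal Gaussian."""
    return sum(-0.5 * math.log(2 * math.pi * s * s) - (x - m) ** 2 / (2 * s * s)
               for x, m, s in zip(frame, mu, sd))

def pronunciation_score(frames, native_model, learner_model):
    """Average per-frame log-likelihood difference: positive means the
    utterance is closer to the native model than to the learner model."""
    frames = purify(frames)
    diffs = [frame_ll(f, *native_model) - frame_ll(f, *learner_model)
             for f in frames]
    return sum(diffs) / len(diffs)
```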


Sentence Type Identification in Korean: Applications to Korean-Sign Language Translation and Korean Speech Synthesis (한국어 문장 유형의 자동 분류: 한국어-수화 변환 및 한국어 음성 합성에의 응용)

  • Chung, Jin-Woo;Lee, Ho-Joon;Park, Jong-C.
    • Journal of the HCI Society of Korea
    • /
    • v.5 no.1
    • /
    • pp.25-35
    • /
    • 2010
  • This paper proposes a method for automatically identifying sentence types in Korean and for improving naturalness in sign language generation and speech synthesis using the identified sentence type information. In Korean, sentences are usually categorized into five types: declarative, imperative, propositive, interrogative, and exclamatory. However, these types are known to be quite ambiguous to identify in dialogues. In this paper, we present additional morphological and syntactic clues for the sentence type and propose a rule-based procedure for identifying the sentence type using these clues. The experimental results show that our method gives reasonable performance. We also describe how the sentence type is used to generate non-manual signals in Korean-to-Korean sign language translation and appropriate intonation in Korean speech synthesis. Since the use of sentence type information in speech synthesis and sign language generation has not been much studied previously, we anticipate that our method will contribute to research on generating more natural speech and sign language expressions.
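A rule-based sentence-type identifier of the kind proposed can be sketched from sentence-final endings and punctuation; the specific endings below are a toy illustration, not the paper's clue set:

```python
def sentence_type(sentence):
    """Toy rule-based identification of the five Korean sentence types
    from sentence-final endings and punctuation (illustrative rules only)."""
    s = sentence.strip().rstrip("?.!")
    if sentence.strip().endswith("?") or s.endswith(("니", "까", "나요")):
        return "interrogative"
    if s.endswith(("구나", "군요")):
        return "exclamatory"
    if s.endswith(("자", "시다")):
        return "propositive"
    if s.endswith(("라", "세요", "십시오")):
        return "imperative"
    return "declarative"
```

A real system would disambiguate further with the morphological and syntactic clues the paper describes, since the same ending can mark several types in dialogue.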
