• Title/Summary/Keyword: Korean speech


A Validity Study on Measurement of Mental Fatigue Using Speech Technology (음성기술을 이용한 정신피로 측정에 관한 타당성 연구)

  • Song, Seungkyu; Kim, Jongyeol; Jang, Junsu; Kwon, Chulhong
    • Phonetics and Speech Sciences, v.5 no.1, pp.3-10, 2013
  • This study proposes a method to measure mental fatigue using speech technology, an approach not used in previous research and easier than existing complex and difficult methods. It aims at establishing a relationship between the human voice and mental fatigue based on experiments that measure the influence of mental fatigue on the human voice. Two monotonous tasks of simple calculation, such as finding the sum of three one-digit numbers, were used to induce the feeling of monotony, and two sets of subjective questionnaires were used to measure mental fatigue. While thirty subjects performed the experiment, responses to the questionnaires and speech data were collected. Speech features related to the speech source and the vocal tract filter were extracted from the speech data. According to the results, the speech parameters most deeply related to mental fatigue are the mean and standard deviation of fundamental frequency (F0), jitter, and shimmer (a minimal extraction sketch follows). This study shows that speech technology is a useful method for measuring mental fatigue.
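As an illustration only (not the authors' exact pipeline), the fatigue-related measures named above can be extracted with the parselmouth Python bindings to Praat; the file name speech.wav and the pitch floor/ceiling of 75-500 Hz are assumptions:

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("speech.wav")   # hypothetical recording
pitch = snd.to_pitch()                  # F0 track

f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")
f0_std = call(pitch, "Get standard deviation", 0, 0, "Hertz")

# Glottal pulse point process for the perturbation measures
pulses = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer = call([snd, pulses], "Get shimmer (local)",
               0, 0, 0.0001, 0.02, 1.3, 1.6)

print(f0_mean, f0_std, jitter, shimmer)
```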

Application of Shape Analysis Techniques for Improved CASA-Based Speech Separation (CASA 기반 음성분리 성능 향상을 위한 형태 분석 기술의 응용)

  • Lee, Yun-Kyung; Kwon, Oh-Wook
    • MALSORI, no.65, pp.153-168, 2008
  • We propose a new method that applies shape analysis techniques to a computational auditory scene analysis (CASA)-based speech separation system. The conventional CASA-based speech separation system extracts speech signals from a mixture of speech and noise signals. In the proposed method, we complement the missing speech signals by applying shape analysis techniques such as labelling and distance functions (a minimal sketch follows). In the speech separation experiment, the proposed method improves the signal-to-noise ratio by 6.6 dB. When the proposed method is used as a front-end for speech recognizers, it improves recognition accuracy by 22% for the speech-shaped stationary noise condition and 7.2% for the two-talker noise condition at target-to-masker ratios greater than or equal to -3 dB.
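A minimal sketch of the labelling and distance-function idea (not the paper's exact procedure), assuming a binary time-frequency mask from a CASA front-end:

```python
import numpy as np
from scipy import ndimage

def complement_tf_mask(mask, max_dist=3):
    """Reclaim missing time-frequency units near labelled speech regions.
    mask: binary T-F mask from a CASA front-end (assumed input)."""
    labels, _ = ndimage.label(mask)                      # connected speech regions
    dist = ndimage.distance_transform_edt(labels == 0)   # distance to nearest region
    return mask | (dist <= max_dist)                     # fill nearby missing units
```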


Performance Analysis of Noisy Speech Recognition Depending on Parameters for Noise and Signal Power Estimation in MMSE-STSA Based Speech Enhancement (MMSE-STSA 기반의 음성개선 기법에서 잡음 및 신호 전력 추정에 사용되는 파라미터 값의 변화에 따른 잡음음성의 인식성능 분석)

  • Park, Chul-Ho; Bae, Keun-Sung
    • MALSORI, no.57, pp.153-164, 2006
  • The MMSE-STSA based speech enhancement algorithm is widely used as a preprocessing step for noise-robust speech recognition. It weights the gain of each spectral bin of the noisy speech using estimates of the noise and signal power spectra. In this paper, we investigate the influence of the parameters used to estimate the speech signal and noise power in MMSE-STSA upon the recognition performance for noisy speech (a gain-function sketch follows). For the experiments, we use the Aurora2 DB, which contains noisy speech with subway, babble, car, and exhibition noises. An HTK-based continuous HMM system is constructed for the recognition experiments. Experimental results are presented and discussed together with our findings.
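For reference, the MMSE-STSA gain of Ephraim and Malah, together with the decision-directed a priori SNR estimate whose smoothing constant is one of the parameters varied in such studies, can be sketched as follows (variable names are ours):

```python
import numpy as np
from scipy.special import i0e, i1e

def mmse_stsa_gain(xi, gamma):
    """Ephraim-Malah MMSE-STSA gain per spectral bin.
    xi: a priori SNR, gamma: a posteriori SNR."""
    v = xi / (1.0 + xi) * gamma
    # i0e/i1e are exponentially scaled Bessel functions, so the exp(-v/2)
    # factor of the closed-form gain is already absorbed.
    return (np.sqrt(np.pi) / 2.0) * (np.sqrt(v) / gamma) * (
        (1.0 + v) * i0e(v / 2.0) + v * i1e(v / 2.0))

def decision_directed_xi(prev_gain, prev_gamma, gamma, alpha=0.98):
    """Decision-directed a priori SNR estimate; the smoothing parameter
    alpha is one of the values whose setting affects recognition."""
    return alpha * (prev_gain ** 2) * prev_gamma + \
           (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0)
```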


Speech Active Interval Detection Method in Noisy Speech (잡음음성에서의 음성 활성화 구간 검출 방법)

  • Lee, Kwang-Seok; Choo, Yeon-Gyu; Kim, Hyun-Deok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2008.10a, pp.779-782, 2008
  • Detecting speech activity intervals in noisy speech is important for speech communication and speech recognition. In this research, we propose a characteristic parameter that incorporates spectral entropy for detecting speech activity intervals in noisy speech (a spectral entropy sketch follows), and compare its performance with that of an energy-based detector. The results show that analysis using the proposed characteristic parameter outperforms the others in noisy environments.
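A minimal numpy sketch of the frame-level spectral entropy feature of the kind the paper combines with energy (frame size and FFT length are assumptions):

```python
import numpy as np

def spectral_entropy(frame, n_fft=512, eps=1e-12):
    """Shannon entropy of the normalized magnitude spectrum of one frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft)) ** 2
    p = spec / (spec.sum() + eps)        # spectrum as a probability distribution
    return -np.sum(p * np.log(p + eps))

# Voiced speech concentrates energy in a few harmonics, giving lower entropy
# than broadband noise; thresholding the entropy (alone or combined with frame
# energy) marks candidate speech-active intervals.
```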


The Use of a Temporary Speech Aid Prosthesis to Treat Speech in Velopharyngeal Insufficiency (VPI) (비인강폐쇄부전 환자의 언어교정을 위해 발음 보조장치를 이용한 증례)

  • Kim, Eun-Ju; Ko, Seung-O; Shin, Hyo-Keun; Kim, Hyun-Gi
    • Speech Sciences, v.9 no.4, pp.3-14, 2002
  • VPI occurs when the velum and the lateral and posterior pharyngeal walls fail to separate the nasal cavity from the oral cavity during deglutition and speech. A number of congenital and acquired conditions result in VPI. Congenital conditions include cleft palate, submucous cleft palate, and congenital palatal insufficiency (CPI). Acquired conditions include carcinoma of the palate or pharynx and neurologic disorders. The speech of VPI patients is characterized by hypernasality, nasal air emission, decreased intraoral air pressure, increased nasal airflow, and decreased intelligibility. VPI can be treated with various methods, including speech therapy, surgical procedures to reduce the velopharyngeal gap, speech aid prostheses, and combinations of surgery and prosthesis. This article describes four cases of VPI treated by speech aid prosthesis and speech therapy with satisfactory results.


Extraction of Speech Features for Emotion Recognition (감정 인식을 위한 음성 특징 도출)

  • Kwon, Chul-Hong; Song, Seung-Kyu; Kim, Jong-Yeol; Kim, Keun-Ho; Jang, Jun-Su
    • Phonetics and Speech Sciences, v.4 no.2, pp.73-78, 2012
  • Emotion recognition is an important technology in the field of human-machine interfaces. To apply speech technology to emotion recognition, this study aims to establish a relationship between emotional groups and their corresponding voice characteristics by investigating various speech features. Speech features related to both the speech source and the vocal tract filter are included. Experimental results show that the statistically significant speech parameters for classifying the emotional groups are mainly related to the speech source, such as jitter, shimmer, F0 (F0_min, F0_max, F0_mean, F0_std), harmonic parameters (H1, H2, HNR05, HNR15, HNR25, HNR35), and SPI (a screening sketch follows).
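As a hedged illustration of how statistically significant parameters might be screened (the paper does not publish its code, and the data layout here is hypothetical), a one-way ANOVA across emotional groups per feature:

```python
from scipy.stats import f_oneway

def significant_features(features, alpha=0.05):
    """features: {name: [group1_values, group2_values, ...]}, one array of
    per-utterance measurements per emotional group (hypothetical layout)."""
    hits = []
    for name, groups in features.items():
        _, p = f_oneway(*groups)   # one-way ANOVA across emotional groups
        if p < alpha:
            hits.append((name, p))
    return sorted(hits, key=lambda t: t[1])   # most significant first
```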

Modelling and Synthesis of Emotional Speech on Multimedia Environment (멀티미디어 환경을 위한 정서음성의 모델링 및 합성에 관한 연구)

  • Jo, Cheol-Woo; Kim, Dae-Hyun
    • Speech Sciences, v.5 no.1, pp.35-47, 1999
  • This paper describes procedures to model and synthesize emotional speech in a multimedia environment. First, procedures to model the visual representation of emotional speech are proposed. To display sequences of images in synchronized form with speech, an MSF (Multimedia Speech File) format is proposed and the display software is implemented. Then emotional speech signals are collected and analysed to obtain the prosodic characteristics of emotional speech in a limited domain. Multi-emotional sentences are spoken by actors. From the emotional speech signals, prosodic structures are compared in terms of pseudo-syntactic structure. Based on the analysis results, neutral speech is transformed into a specific emotional state by modifying its prosodic structures (a crude sketch of such a transform follows).
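The paper modifies prosodic structure in a fine-grained way; as a much cruder stand-in, global pitch and tempo scaling can illustrate the direction of such a transform (the file name and scale factors are assumptions, not the authors' values):

```python
import librosa
import soundfile as sf

y, sr = librosa.load("neutral.wav", sr=None)   # hypothetical neutral utterance

# Raise F0 by ~2 semitones and speed delivery up ~10%: a crude global
# approximation of shifting neutral prosody toward an "excited" rendering.
y = librosa.effects.pitch_shift(y, sr=sr, n_steps=2.0)
y = librosa.effects.time_stretch(y, rate=1.1)

sf.write("emotional.wav", y, sr)
```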


A Robust Speech Recognition Method Combining the Model Compensation Method with the Speech Enhancement Algorithm (음질향상 기법과 모델보상 방식을 결합한 강인한 음성인식 방식)

  • Kim, Hee-Keun; Chung, Yong-Joo; Bae, Keun-Seung
    • Speech Sciences, v.14 no.2, pp.115-126, 2007
  • There have been many research efforts to improve the performance of speech recognizers in noisy conditions. Among them, the model compensation method and the speech enhancement approach have been widely used. In this paper, we propose to combine the two different approaches to further enhance recognition rates in noisy speech recognition. For speech enhancement, the minimum mean square error short-time spectral amplitude (MMSE-STSA) estimator has been adopted, and parallel model combination (PMC) and Jacobian adaptation (JA) have been used as the model compensation approaches. From the experimental results, we find that the hybrid approach that applies the model compensation methods to the enhanced speech produces better results than using only one of the two approaches (a PMC sketch follows).
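A hedged sketch of the log-add approximation to parallel model combination on static cepstral means (a textbook simplification, not necessarily this paper's exact variant):

```python
import numpy as np
from scipy.fftpack import dct, idct

def pmc_log_add(mu_speech_cep, mu_noise_cep, g=1.0):
    """Log-add PMC approximation: map cepstral mean vectors to the
    log-spectral domain, combine speech and noise energies there, and map
    back. Full PMC also compensates variances and dynamic coefficients,
    which this sketch omits."""
    s = idct(mu_speech_cep, norm='ortho')   # cepstrum -> log spectrum
    n = idct(mu_noise_cep, norm='ortho')
    y = np.logaddexp(s, np.log(g) + n)      # log(exp(s) + g * exp(n))
    return dct(y, norm='ortho')             # back to the cepstral domain
```

In the hybrid scheme described above, this compensation would be applied to models matched against MMSE-STSA-enhanced speech rather than the raw noisy input.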


Statistical Model-Based Voice Activity Detection Using Spatial Cues for Dual-Channel Noisy Speech Recognition (이중채널 잡음음성인식을 위한 공간정보를 이용한 통계모델 기반 음성구간 검출)

  • Shin, Min-Hwa; Park, Ji-Hun; Kim, Hong-Kook; Lee, Yeon-Woo; Lee, Seong-Ro
    • Phonetics and Speech Sciences, v.2 no.3, pp.141-148, 2010
  • In this paper, voice activity detection (VAD) for dual-channel noisy speech recognition is proposed in which spatial cues are employed. In the proposed method, a probability model for speech presence/absence is constructed using spatial cues obtained from the dual-channel input signal, and speech activity intervals are detected through this probability model. In particular, the spatial cues consist of the interaural time differences and interaural level differences of the dual-channel speech signals, and the probability model for speech presence/absence is based on a Gaussian kernel density (a cue-extraction sketch follows). In order to evaluate the performance of the proposed VAD method, speech recognition is performed on speech segments that only include the speech intervals detected by the proposed VAD method. The performance of the proposed method is compared with that of several other methods, such as an SNR-based method, a direction of arrival (DOA) based method, and a phase vector based method. The speech recognition experiments show that the proposed method outperforms the conventional methods, providing relative word error rate reductions of 11.68%, 41.92%, and 10.15% compared with the SNR-based, DOA-based, and phase vector based methods, respectively.
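A minimal sketch of extracting the two spatial cues for one frame pair (GCC-PHAT time difference and an energy-based level difference); the framing and any calibration are assumptions:

```python
import numpy as np

def spatial_cues(left, right, sr, eps=1e-12):
    """ITD via GCC-PHAT and an energy-based ILD for one dual-channel frame."""
    n_samp = len(left)
    n = 2 * n_samp
    L, R = np.fft.rfft(left, n), np.fft.rfft(right, n)
    cross = L * np.conj(R)
    cc = np.fft.irfft(cross / (np.abs(cross) + eps), n)     # PHAT weighting
    cc = np.concatenate((cc[-(n_samp - 1):], cc[:n_samp]))  # center the lags
    itd = (np.argmax(cc) - (n_samp - 1)) / sr               # seconds
    ild = 10 * np.log10((np.sum(left ** 2) + eps) /
                        (np.sum(right ** 2) + eps))         # dB
    return itd, ild

# A Gaussian kernel density over (itd, ild) pairs from labelled speech and
# non-speech frames (e.g., scipy.stats.gaussian_kde) would then supply the
# presence/absence likelihoods used for detection.
```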


A comparison of phonological error patterns in the single word and spontaneous speech of children with speech sound disorders (말소리장애 아동의 단어와 자발화 문맥의 음운오류패턴 비교)

  • Park, Kayeon; Kim, Soo-Jin
    • Phonetics and Speech Sciences, v.7 no.3, pp.165-173, 2015
  • This study aimed to compare the phonological error patterns and the percentage of correct consonants (PCC) derived from the single-word and spontaneous speech contexts of children with speech sound disorders (SSD) of unknown origin. The study examines the developmental and non-developmental phonological error patterns of the target children according to speech context. The subjects were 15 children with SSD between 3 and 5 years of age. The research used the 37 words of the APAC (Assessment of Phonology & Articulation for Children) for the single-word context and 100 eojeol (word units) for the spontaneous speech context. There was no difference in PCC between the single-word and spontaneous speech contexts (a PCC sketch follows). Developmental phonological error patterns that differed significantly between the two contexts were syllable deletion, word-medial onset deletion, liquid deletion, gliding, affrication, other fricative errors, tensing, and regressive assimilation. Non-developmental phonological error patterns that differed significantly were backing, addition of a phoneme, and aspirating. The study showed that while PCC did not differ between the elicited single-word and spontaneous conversational contexts, some phonological error patterns did differ between the two contexts. The more important intervention target is the error patterns of the spontaneous speech context, for immediate generalization and improved overall intelligibility.
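For reference, PCC itself is a simple ratio; a minimal sketch assuming target and produced consonant sequences that are already aligned (the alignment step, which is the hard part, is omitted):

```python
def percentage_correct_consonants(targets, productions):
    """PCC = 100 * (correctly produced consonants / target consonants).
    targets/productions: lists of aligned consonant sequences per word
    or eojeol (hypothetical data layout)."""
    correct = total = 0
    for t_seq, p_seq in zip(targets, productions):
        for t, p in zip(t_seq, p_seq):
            total += 1
            correct += (t == p)
    return 100.0 * correct / total if total else 0.0

# e.g. percentage_correct_consonants([["k", "m", "l"]], [["k", "m", "j"]])
# -> about 66.7 (hypothetical transcriptions)
```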