• Title/Abstract/Keyword: Korean speech

Search results: 5,286 items

Implementation and Evaluation of an HMM-Based Speech Synthesis System for the Tagalog Language

  • 김경태;김종진
    • 대한음성학회지:말소리, Vol. 68, pp.49-63, 2008
  • This paper describes the development and assessment of a hidden Markov model (HMM) based Tagalog speech synthesis system, where Tagalog is the most widely spoken indigenous language of the Philippines. Several aspects of the design process are discussed. In order to build the synthesizer, a speech database is recorded and phonetically segmented. The constructed speech corpus contains approximately 89 minutes of Tagalog speech organized in 596 spoken utterances. Furthermore, contextual information is determined. The quality of the synthesized speech is assessed through subjective tests with 25 native Tagalog speakers as respondents. Experimental results show that the new system achieves a mean opinion score (MOS) of 3.29, which indicates that the developed system can produce highly intelligible, neutral Tagalog speech with stable quality even when a small amount of speech data is used for HMM training.
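
The reported score is a standard mean opinion score. Below is a minimal sketch of how such a score, with a normal-approximation 95% interval, is computed from listener ratings; the 25 ratings are placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical 5-point ratings from 25 listeners for one synthesized utterance.
rng = np.random.default_rng(0)
ratings = rng.integers(2, 6, size=25)  # placeholder scores drawn from [2, 5]

mos = ratings.mean()                              # mean opinion score
sem = ratings.std(ddof=1) / np.sqrt(len(ratings))
ci95 = 1.96 * sem                                 # normal-approximation 95% interval
print(f"MOS = {mos:.2f} +/- {ci95:.2f}")
```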

입술정보를 이용한 음성 특징 파라미터 추정 및 음성인식 성능향상 (Estimation of speech feature vectors and enhancement of speech recognition performance using lip information)

  • 민소희;김진영;최승호
    • 대한음성학회지:말소리, No. 44, pp.83-92, 2002
  • Speech recognition performance is severely degraded under noisy environments. One approach to cope with this problem is audio-visual speech recognition. In this paper, we discuss experimental results of bimodal speech recognition based on speech feature vectors enhanced with lip information. We tried various kinds of speech features, such as linear prediction coefficients, cepstrum, and log area ratio, for transforming lip information into speech parameters. The experimental results show that the cepstrum parameter is the best feature in terms of recognition rate. We also present desirable weighting values for the audio and visual information depending on the signal-to-noise ratio.
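
One common realization of SNR-dependent audio-visual weighting is stream-weighted score fusion. The sketch below illustrates that idea only; the weight-vs-SNR mapping and all names are assumptions, not the weighting values reported in the paper:

```python
import numpy as np

def fuse_scores(audio_logp: np.ndarray, visual_logp: np.ndarray, snr_db: float) -> np.ndarray:
    """Stream-weighted fusion: the audio weight shrinks as SNR drops,
    so the visual stream dominates in heavy noise. The linear mapping
    below is an illustrative choice."""
    w_audio = np.clip(0.2 + 0.7 * (snr_db + 5.0) / 25.0, 0.2, 0.9)
    return w_audio * audio_logp + (1.0 - w_audio) * visual_logp

# Per-class log-likelihoods from each modality at 0 dB SNR (made-up numbers).
audio = np.array([-12.3, -10.1, -15.4])
visual = np.array([-11.0, -13.2, -9.8])
print(fuse_scores(audio, visual, snr_db=0.0).argmax())
```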

스펙트럼의 변동계수를 이용한 잡음에 강인한 음성 구간 검출 (Noise-Robust Speech Detection Using The Coefficient of Variation of Spectrum)

  • 김영민;한민수
    • 대한음성학회지:말소리, No. 48, pp.107-116, 2003
  • This paper deals with a new parameter for voice detection, which is used in many areas of speech engineering such as speech synthesis, speech recognition, and speech coding. The coefficient of variation (CV) of the speech spectrum, together with other feature parameters, is used for speech detection. The CV is calculated only over a specific range of the speech spectrum. Average magnitude and spectral magnitude are also employed to improve the detector's performance. Experimental results show that the proposed voice detector outperforms the conventional energy-based detector in terms of error measurements.
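
A minimal sketch of a CV-based detector in the spirit of the abstract: the coefficient of variation (std/mean) of the magnitude spectrum is computed inside one band and combined with an energy check. The band edges and thresholds are assumptions that would be tuned on labeled data:

```python
import numpy as np

def spectral_cv(frame: np.ndarray, sr: int, band=(300.0, 3400.0)) -> float:
    """Coefficient of variation of the magnitude spectrum within a band."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    mag = spec[(freqs >= band[0]) & (freqs <= band[1])]
    return mag.std() / (mag.mean() + 1e-12)

def is_speech(frame: np.ndarray, sr: int, cv_thresh=1.0, energy_thresh=1e-4) -> bool:
    # Voiced speech has a peaky, harmonic (high-CV) spectrum, while
    # broadband noise has a comparatively flat (low-CV) one.
    return spectral_cv(frame, sr) > cv_thresh and np.mean(frame ** 2) > energy_thresh

# A 25 ms white-noise frame at 16 kHz: flat spectrum, so low CV -> non-speech.
sr = 16000
frame = np.random.default_rng(1).standard_normal(int(0.025 * sr)) * 0.01
print(is_speech(frame, sr))
```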

Acoustic Analysis of Speech Disorder Associated with Motor Aphasia - A Case Report -

  • Ko, Myung-Hwan;Kim, Hyun-Ki;Kim, Yun-Hee
    • 음성과학, Vol. 7, No. 1, pp.97-107, 2000
  • Motor aphasia is a disorder frequently caused by an insult to the left middle cerebral artery and is usually accompanied by a large lesion involving Broca's area and the adjacent motor and premotor areas. Therefore, a patient with motor aphasia commonly shows articulatory disturbances due to failure of the motor programming of speech sounds. Objective assessment and treatment of phonologic programming is one of the important aspects of speech therapy for aphasic patients. We analyzed the speech disorders accompanying motor aphasia in a 45-year-old man using a computerized sound spectrograph, Visi-Pitch®, and the Multi-Dimensional Voice Program®. We concluded that a computerized speech analysis system is a useful tool to visualize and quantitatively analyze the severity and progression of dysarthria, as well as the effect of speech therapy.
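
Among the measures an MDVP-style tool reports is local jitter, the cycle-to-cycle pitch-period perturbation. A minimal sketch with hypothetical pitch periods, not the patient's data:

```python
import numpy as np

def jitter_percent(periods_ms: np.ndarray) -> float:
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, relative to the mean period."""
    return 100.0 * np.abs(np.diff(periods_ms)).mean() / periods_ms.mean()

# Hypothetical cycle lengths (ms) extracted from a sustained vowel.
periods = np.array([8.0, 8.2, 7.9, 8.4, 8.1, 8.3])
print(f"jitter = {jitter_percent(periods):.2f}%")
```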

Determining the Relative Differences of Emotional Speech Using Vocal Tract Ratio

  • Wang, Jianglin;Jo, Cheol-Woo
    • 음성과학, Vol. 13, No. 1, pp.109-116, 2006
  • In this paper, our study focuses on the differences of emotional speech across three vocal tract sections. The vocal tract area was computed from the area function of the emotional speech. The vocal tract was divided into three sections (vocal fold section, middle section, and lip section) to obtain the differences in each section for emotional speech. The experimental data comprise speech in six emotions from three males and three females: neutral, happiness, anger, sadness, fear, and boredom. The difference is measured as the ratio obtained by comparing each emotional speech with normal speech. The experimental results show no remarkable difference at the lip section, whereas fear and sadness produce a large change at the vocal fold section.
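
A hedged sketch of the analysis pipeline such a study implies: LPC reflection coefficients give a lossless-tube area function, which is split into three sections and compared section by section against neutral speech. The LPC order, the tube-orientation convention, and the synthetic test frame are all assumptions:

```python
import numpy as np

def reflection_coeffs(frame: np.ndarray, order: int = 12) -> np.ndarray:
    """PARCOR (reflection) coefficients via the Levinson-Durbin recursion."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0], e, ks = 1.0, r[0], []
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[1:i][::-1]) / e
        a[1:i] = a[1:i] + k * a[1:i][::-1]
        a[i] = k
        e *= 1.0 - k * k
        ks.append(k)
    return np.array(ks)

def section_means(frame: np.ndarray, order: int = 12) -> np.ndarray:
    """Mean tube area in three sections (vocal fold, middle, lip).
    The orientation of the tube model depends on sign conventions."""
    ks = reflection_coeffs(frame, order)
    area = np.cumprod((1.0 - ks) / (1.0 + ks))   # areas relative to the first tube
    fold, middle, lip = np.array_split(area, 3)
    return np.array([fold.mean(), middle.mean(), lip.mean()])

# Hypothetical use: per-section ratio of emotional to neutral speech, e.g.
#   ratios = section_means(fear_frame) / section_means(neutral_frame)
frame = np.random.default_rng(0).standard_normal(400) * np.hanning(400)
print(section_means(frame))
```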

Acoustic Driving Simulator Design for Evaluating an In-car Speech Recognizer

  • Lee, Seongjae;Kang, Sunmee
    • 말소리와 음성과학, Vol. 5, No. 2, pp.93-97, 2013
  • This paper describes the design of an indoor driving simulator for evaluating the performance of an in-car speech recognizer under the factors that lower the speech recognition rate. The proposed simulator reproduces vehicle noise pre-recorded in diverse driving environments together with the driver's speech. In addition, the simulator's Lombard effect conversion module enables speech recorded in a studio environment to be converted to match various driving scenarios. Experimental results confirm that the proposed simulator is a feasible and effective approach, as it achieved speech recognition results similar to those obtained in a real driving environment.
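
The core acoustic step in such a simulator is replaying speech against recorded driving noise at a controlled level. A minimal sketch of scaling noise to a target SNR; the synthetic signals stand in for real recordings, and the simulator's full playback chain and Lombard-effect conversion are not shown:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add pre-recorded driving noise to studio speech at a target SNR."""
    noise = np.resize(noise, speech.shape)               # loop/trim noise to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in for speech
noise = rng.standard_normal(8000)                            # stand-in for car noise
noisy = mix_at_snr(speech, noise, snr_db=5.0)
```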

음성기반 멀티모달 사용자 인터페이스의 사용성 평가 방법론 (Usability Test Guidelines for Speech-Oriented Multimodal User Interface)

  • 홍기형
    • 대한음성학회지:말소리, No. 67, pp.103-120, 2008
  • Basic components of multimodal interfaces, such as speech recognition, speech synthesis, gesture recognition, and multimodal fusion, have their own technological limitations. For example, the accuracy of speech recognition decreases for large vocabularies and in noisy environments. In spite of those technological limitations, there are many applications in which speech-oriented multimodal user interfaces are very helpful to users. However, in order to expand the application areas of speech-oriented multimodal interfaces, we have to design the interfaces with a focus on usability. In this paper, we introduce usability and user-centered design methodology in general. There has been much work on evaluating spoken dialogue systems; we summarize PARADISE (PARAdigm for DIalogue System Evaluation) and PROMISE (PROcedure for Multimodal Interactive System Evaluation), which are generalized evaluation frameworks for voice and multimodal user interfaces. Then, we present usability components for speech-oriented multimodal user interfaces and usability testing guidelines that can be used in a user-centered multimodal interface design process.
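
PARADISE combines a task-success measure with dialogue costs in one performance function, performance = alpha * N(kappa) - sum_i w_i * N(c_i), where N is z-score normalization. A minimal sketch with placeholder numbers; in PARADISE itself the weights come from regressing user-satisfaction ratings on these predictors:

```python
import numpy as np

def zscore(x: np.ndarray) -> np.ndarray:
    return (x - x.mean(axis=0)) / (x.std(axis=0, ddof=1) + 1e-12)

# Per-dialogue task success (kappa) and costs (e.g., turns, ASR errors);
# all values and weights below are placeholders.
kappa = np.array([0.9, 0.7, 0.8, 0.5])
costs = np.array([[12.0, 3.0], [20.0, 6.0], [15.0, 4.0], [30.0, 9.0]])
alpha, w = 0.5, np.array([0.3, 0.2])

performance = alpha * zscore(kappa) - zscore(costs) @ w
print(performance)  # higher is better, per dialogue
```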

ETRI 방송뉴스음성인식시스템 소개 (Introduction of ETRI Broadcast News Speech Recognition System)

  • 박준
    • 대한음성학회:학술대회논문집, 대한음성학회 2006년도 춘계 학술대회 발표논문집, pp.89-93, 2006
  • This paper presents the ETRI broadcast news speech recognition system. There are two major issues in broadcast news speech recognition: 1) real-time processing and 2) out-of-vocabulary handling. For real-time processing, we devised a dual-decoder architecture. The input speech signal is segmented based on the long pauses between utterances, and the two decoders process the speech segments alternately. One decoder can start to recognize the current speech segment without waiting for the other decoder to finish recognizing the previous segment, so the processing delay does not accumulate. For out-of-vocabulary handling, we updated both the vocabulary and the language model based on recent news articles from the internet. By updating the language model as well as the vocabulary, we improved the performance by up to 17.2% ERR.
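
A minimal sketch of the dual-decoder idea: pause-segmented utterances are dispatched to two decoders in alternation, so one decoder can start on the next segment while the other is still finishing the previous one. The `recognize` stub and all names are illustrative, not ETRI's implementation:

```python
import queue
import threading

def recognize(decoder_id: int, segment: str) -> str:
    # Stand-in for a real speech decoder.
    return f"[decoder {decoder_id}] hypothesis for {segment}"

def worker(decoder_id: int, jobs: "queue.Queue", results: dict) -> None:
    while True:
        item = jobs.get()
        if item is None:                       # sentinel: no more segments
            break
        idx, segment = item
        results[idx] = recognize(decoder_id, segment)

segments = [f"segment_{i}" for i in range(6)]  # output of pause-based segmentation
queues = [queue.Queue(), queue.Queue()]
results: dict = {}
threads = [threading.Thread(target=worker, args=(i, queues[i], results)) for i in range(2)]
for t in threads:
    t.start()
for idx, seg in enumerate(segments):
    queues[idx % 2].put((idx, seg))            # alternate segments between decoders
for q in queues:
    q.put(None)
for t in threads:
    t.join()
print([results[i] for i in range(len(segments))])
```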

Spectral Folding방법과 GMM 변환을 이용한 대역폭 확장의 Hybrid 방법 (The Hybrid Bandwidth Extension Method Using Spectral Folding and GMM Transformation)

  • 최무열;김형순
    • 대한음성학회:학술대회논문집, 대한음성학회 2006년도 춘계 학술대회 발표논문집, pp.131-134, 2006
  • Narrowband speech over the telephone network lacks the low-band (0-300 Hz) and high-band (3400-8000 Hz) information found in wideband speech (0-8000 Hz). As a result, narrowband speech is characterized by reduced intelligibility, muffled quality, and degraded speaker identification. Spectral folding is the easiest way to reconstruct the missing high band; however, the reconstructed speech still sounds band-limited because of the absence of low-band and mid-band frequency components. To compensate for these shortcomings, we propose combining the spectral folding method with the GMM transformation method, a statistical method for reconstructing wideband speech. In the reconstructed wideband speech, the missing frequency components were filled in with relatively low spectral mismatch. According to subjective speech quality evaluations, the proposed method was preferred over the other methods.
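
Spectral folding itself is essentially a one-line operation: upsampling by zero insertion mirrors the narrowband spectrum into the high band. A minimal sketch demonstrating the folded image; the GMM transformation stage is not shown:

```python
import numpy as np

def spectral_fold(narrowband: np.ndarray) -> np.ndarray:
    """Zero-insertion upsampling by 2: the 0-4 kHz spectrum reappears
    mirrored in 4-8 kHz, the 'folded' high band. A real system would
    shape this image and, as the paper proposes, fill the low/mid band
    with a GMM-based transformation."""
    wideband = np.zeros(2 * len(narrowband))
    wideband[::2] = narrowband
    return wideband

# A 1 kHz tone sampled at 8 kHz gains a folded image at 7 kHz at 16 kHz.
t = np.arange(8000) / 8000.0
nb = np.sin(2 * np.pi * 1000 * t)
wb = spectral_fold(nb)
spec = np.abs(np.fft.rfft(wb))
peaks = sorted(np.argsort(spec)[-2:] * 16000 // len(wb))
print(peaks)  # [1000, 7000]
```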

Segmental timing of young children and adults

  • Kim Min-Jung;Carol Stoel-Gammon
    • 대한음성학회:학술대회논문집, 대한음성학회 2006년도 춘계 학술대회 발표논문집, pp.59-62, 2006
  • Young children's speech is compared with adult-to-adult speech and adult-to-child speech by measuring the duration and variability of each segment in CVC words. The results demonstrate that child speech exhibits an inconsistent timing relationship between consonants and vowels within a word. In contrast, consonant and vowel durations in adult-to-adult and adult-to-child speech exhibit significant relationships across segments, despite segment variability when the speaking rate decreases. The results suggest that the temporal patterns of young children are quite different from those of adults, and provide some evidence for a lack of motor control capability and great variance in articulatory coordination.
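
One simple way to quantify the timing variability such a comparison relies on is the coefficient of variation of segment durations across repetitions of a word. The durations below are invented for illustration, not the study's measurements:

```python
import numpy as np

# Hypothetical durations (ms) of each segment in one CVC word, repeated 4 times.
child = {"C1": [95, 140, 80, 120], "V": [210, 150, 260, 180], "C2": [110, 70, 130, 90]}
adult = {"C1": [100, 105, 98, 102], "V": [200, 195, 210, 205], "C2": [90, 94, 88, 92]}

for label, word in (("child", child), ("adult", adult)):
    cvs = {seg: np.std(d, ddof=1) / np.mean(d) for seg, d in word.items()}
    print(label, {seg: round(float(cv), 2) for seg, cv in cvs.items()})
```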
