• Title/Summary/Keyword: continuous speech


A Study on the Continuous Speech Recognition for the Automatic Creation of International Phonetics (국제 음소의 자동 생성을 활용한 연속음성인식에 관한 연구)

  • Kim, Suk-Dong; Hong, Seong-Soo; Shin, Chwa-Cheul; Woo, In-Sung; Kang, Heung-Soon
    • Journal of Korea Game Society / v.7 no.2 / pp.83-90 / 2007
  • One result of the trend towards globalization is an increased number of projects that focus on natural language processing. Automatic speech recognition (ASR) technologies, for example, hold great promise in facilitating global communications and collaborations. Unfortunately, to date, most research projects focus on single widely spoken languages, so the cost of adapting a particular ASR tool for use with other languages is often prohibitive. This work takes a more general approach. We propose an International Phoneticizing Engine (IPE) that interprets input files supplied in our Phonetic Language Identity (PLI) format to build a dictionary. IPE is language independent and rule based. It operates by decomposing the dictionary creation process into a set of well-defined steps. These steps reduce rule conflicts, allow rule creation by people without linguistics training, and optimize run-time efficiency. Dictionaries created by IPE can be used with the speech recognition system. IPE defines an easy-to-use, systematic approach that achieved a recognition rate of 92.55% for Korean speech and 89.93% for English.

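The entry above describes a rule-based, language-independent grapheme-to-phoneme engine that builds a pronunciation dictionary in ordered steps. Below is a minimal Python sketch of that general idea; the rule table, phoneme symbols, and greedy longest-match strategy are illustrative assumptions, not the paper's actual PLI format or IPE steps.

```python
# Minimal sketch of a rule-based grapheme-to-phoneme dictionary builder.
# The rules below are illustrative assumptions, not the paper's PLI rules.

# Each rule maps a grapheme substring to a phoneme symbol; longer (more
# specific) patterns are applied first to reduce rule conflicts.
RULES = [
    ("ng", "N"),   # hypothetical digraph rule
    ("sh", "S"),
    ("a",  "AA"),
    ("e",  "EH"),
    ("i",  "IY"),
    ("n",  "N"),
    ("g",  "G"),
    ("s",  "S"),
    ("t",  "T"),
]

def to_phonemes(word: str) -> list[str]:
    """Greedy left-to-right application of the longest matching rule."""
    rules = sorted(RULES, key=lambda r: len(r[0]), reverse=True)
    phones, i = [], 0
    while i < len(word):
        for pattern, phone in rules:
            if word.startswith(pattern, i):
                phones.append(phone)
                i += len(pattern)
                break
        else:
            i += 1            # skip characters no rule covers
    return phones

def build_dictionary(words: list[str]) -> dict[str, list[str]]:
    return {w: to_phonemes(w.lower()) for w in words}

if __name__ == "__main__":
    print(build_dictionary(["sing", "test", "nagging"]))
```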

Emotion recognition in speech using hidden Markov model (은닉 마르코프 모델을 이용한 음성에서의 감정인식)

  • 김성일; 정현열
    • Journal of the Institute of Convergence Signal Processing / v.3 no.3 / pp.21-26 / 2002
  • This paper presents a new approach to identifying human emotional states such as anger, happiness, normal, sadness, or surprise. This is accomplished by using discrete duration continuous hidden Markov models (DDCHMM). For this, emotional feature parameters are first defined from the input speech signals. In this study, we used prosodic parameters such as pitch, energy, and their derivatives, which were then modeled by HMMs for recognition. Speaker-adapted emotion models based on maximum a posteriori (MAP) estimation were also considered for speaker adaptation. Simulation results showed that vocal emotion recognition rates gradually increased as the number of adaptation samples grew.

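The entry above models prosodic features (pitch, energy, and their derivatives) with per-emotion HMMs and classifies by likelihood. Below is a minimal Python sketch of that pipeline using librosa and hmmlearn; the discrete-duration modelling and MAP speaker adaptation from the paper are not reproduced, and the file paths, label set, and model sizes are illustrative assumptions.

```python
# A minimal sketch of prosody-based emotion recognition with per-emotion
# Gaussian HMMs; settings below are assumptions, not the paper's.
import numpy as np
import librosa
from hmmlearn import hmm

def prosodic_features(path: str) -> np.ndarray:
    """Pitch, energy, and their deltas, one row per frame."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)          # pitch contour
    rms = librosa.feature.rms(y=y)[0]                       # frame energy
    n = min(len(f0), len(rms))
    f0, rms = f0[:n], rms[:n]
    return np.vstack([f0, rms,
                      librosa.feature.delta(f0),
                      librosa.feature.delta(rms)]).T

def train_models(train_files: dict[str, list[str]]) -> dict[str, hmm.GaussianHMM]:
    """train_files maps an emotion label to a list of wav paths."""
    models = {}
    for emotion, paths in train_files.items():
        feats = [prosodic_features(p) for p in paths]
        X = np.vstack(feats)
        lengths = [len(f) for f in feats]
        m = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=30)
        m.fit(X, lengths)
        models[emotion] = m
    return models

def classify(path: str, models: dict[str, hmm.GaussianHMM]) -> str:
    X = prosodic_features(path)
    return max(models, key=lambda e: models[e].score(X))    # max log-likelihood
```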

Performance Improvement of Packet Loss Concealment Algorithm in G.711 Using Adaptive Signal Scale Estimation (적응적 신호 크기 예측을 이용한 G.711 패킷 손실 은닉 알고리즘의 성능향상)

  • Kim, Tae-Ha; Lee, In-Sung
    • The Journal of the Acoustical Society of Korea / v.34 no.5 / pp.403-409 / 2015
  • In this paper, we propose a Packet Loss Concealment (PLC) method using adaptive signal scale estimation to improve the performance of the G.711 PLC. The conventional method controls the gain with a fixed 20% attenuation factor when consecutive losses occur. However, this leads to quality deterioration because it does not take changes in the signal into account. We therefore propose gain control based on adaptive signal scale estimation, using a Least Mean Square (LMS) predictor driven by information from the preceding and following frames. The performance of the proposed algorithm is evaluated with the Perceptual Evaluation of Speech Quality (PESQ) measure.
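The entry above contrasts a fixed 20%-per-frame attenuation with LMS-based prediction of the signal scale during consecutive frame losses. Below is a minimal numpy sketch of that contrast; it is not the G.711 Appendix I algorithm (which repeats pitch-aligned waveform segments), and the predictor order, step size, and frame length are illustrative assumptions.

```python
# A minimal sketch: fixed attenuation vs. LMS-predicted scale for lost frames.
import numpy as np

FRAME = 80  # 10 ms at 8 kHz, as in G.711

def conceal_fixed(last_good: np.ndarray, n_lost: int) -> list[np.ndarray]:
    """Conventional scheme: repeat the last frame, attenuating 20% per loss."""
    return [last_good * (0.8 ** (k + 1)) for k in range(n_lost)]

class LMSScalePredictor:
    """Predict the next frame's RMS from the last `order` RMS values."""
    def __init__(self, order: int = 4, mu: float = 0.05):
        self.w = np.zeros(order)
        self.hist = np.zeros(order)
        self.mu = mu

    def update(self, rms: float) -> None:
        pred = float(self.w @ self.hist)
        err = rms - pred
        norm = float(self.hist @ self.hist) + 1e-8
        self.w += self.mu * err * self.hist / norm     # normalised LMS step
        self.hist = np.roll(self.hist, 1)
        self.hist[0] = rms

    def predict(self) -> float:
        return max(float(self.w @ self.hist), 0.0)

def conceal_adaptive(last_good: np.ndarray, n_lost: int,
                     predictor: LMSScalePredictor) -> list[np.ndarray]:
    """Adaptive scheme: rescale the repeated frame to the predicted RMS."""
    out = []
    ref_rms = np.sqrt(np.mean(last_good ** 2)) + 1e-8
    for _ in range(n_lost):
        target = predictor.predict()
        out.append(last_good * (target / ref_rms))
        predictor.update(target)                       # run open-loop during loss
    return out
```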

Segmentation of Continuous Korean Speech Based on Boundaries of Voiced and Unvoiced Sounds (유성음과 무성음의 경계를 이용한 연속 음성의 세그먼테이션)

  • Yu, Gang-Ju; Sin, Uk-Geun
    • The Transactions of the Korea Information Processing Society / v.7 no.7 / pp.2246-2253 / 2000
  • In this paper, we show that the performance of blind segmentation of phoneme boundaries can be enhanced by adopting knowledge of Korean syllabic structure and of the regions of voiced/unvoiced sounds. The proposed method consists of three processes: extracting candidate phoneme boundaries, detecting the boundaries of voiced/unvoiced sounds, and selecting the final phoneme boundaries. The candidate phoneme boundaries are extracted by a clustering method based on the similarity between two adjacent clusters, where the similarity measure is the ratio of the probability densities of the adjacent clusters. To detect the boundaries of voiced/unvoiced sounds, we first compute the power density spectrum of the speech signal in the 0~400 Hz band; the points where the variation of this power density spectrum exceeds a threshold are then chosen as voiced/unvoiced boundaries. The final phoneme boundaries consist of all candidate boundaries in voiced regions and a limited number of candidate boundaries in unvoiced regions. Experimental results showed about a 40% decrease in insertion rate compared with the blind segmentation method we adopted.

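The entry above detects voiced/unvoiced boundaries at points where the 0-400 Hz power density changes sharply from frame to frame. Below is a minimal Python sketch of that cue; the window, hop, and threshold values are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of the low-band (0-400 Hz) power-variation boundary cue.
import numpy as np
import librosa

def vu_boundaries(path: str, threshold_db: float = 10.0) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    n_fft, hop = 512, 160                      # 32 ms window, 10 ms hop
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
    low = S[freqs <= 400].sum(axis=0)          # 0-400 Hz band power per frame
    low_db = 10 * np.log10(low + 1e-10)
    jump = np.abs(np.diff(low_db))             # frame-to-frame variation
    frames = np.where(jump > threshold_db)[0] + 1
    return librosa.frames_to_time(frames, sr=sr, hop_length=hop)
```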

The Continuous Speech Recognition with Prosodic Phrase Unit (운율구 단위의 연속음 인식)

  • 강지영; 엄기완; 김진영; 최승호
    • The Journal of the Acoustical Society of Korea / v.18 no.8 / pp.9-16 / 1999
  • Generally, a speaker structures utterances very clearly by grouping words into phrases. This helps the listener recover the meaning of the utterance and the speaker's intention. For this purpose, a speaker uses, among other things, prosodic information such as intonation, pause, duration, and intensity. The research described here concerns the relationship between prosodic information and the strength of prosodic boundaries in spoken utterances as perceived by untrained listeners (perceptual boundary strength, PBS); in this paper, perceptual boundary strength is used interchangeably with prosodic boundary strength. We formulated a rule for determining prosodic boundaries and verified the usefulness of the prosodic phrase as a recognition unit. Experimental results showed that speech recognition (SR) performance improved, in both recognition rate and recognition time, compared with using sentences as the recognition unit. In future work we will propose methods that estimate more appropriate boundaries and study further approaches to prosody-assisted SR.

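The entry above relies on prosodic cues (pause, intonation, duration, intensity) to place prosodic phrase boundaries. Below is a minimal Python sketch of just the pause cue, marking a boundary in the middle of any sufficiently long silent stretch; the energy threshold and minimum pause length are illustrative assumptions, and the paper's actual rule combines more cues than this.

```python
# A minimal pause-based prosodic phrase boundary detector (illustrative only).
import numpy as np
import librosa

def prosodic_boundaries(path: str, min_pause: float = 0.20,
                        silence_db: float = -35.0) -> list[float]:
    y, sr = librosa.load(path, sr=16000)
    hop = 160                                   # 10 ms frames
    rms = librosa.feature.rms(y=y, frame_length=400, hop_length=hop)[0]
    level = librosa.amplitude_to_db(rms, ref=np.max(rms))
    silent = level < silence_db
    boundaries, run_start = [], None
    for i, s in enumerate(silent):
        if s and run_start is None:
            run_start = i
        elif not s and run_start is not None:
            if (i - run_start) * hop / sr >= min_pause:
                # place the boundary at the middle of the pause
                boundaries.append((run_start + i) / 2 * hop / sr)
            run_start = None
    return boundaries
```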

A Design of Lowpass Active Filter for ADSL Tx/Rx Stage (ADSL 송수신단용 저역통과 능동필터 설계)

  • Lee Geun-Ho
    • The Journal of the Acoustical Society of Korea / v.24 no.1 / pp.38-42 / 2005
  • CMOS analog lowpass filters covering the speech-signal bandwidth for an Asymmetric Digital Subscriber Line (ADSL) modem are presented. The designed active lowpass filters are built around a CMOS complementary high-swing cascode stage, which increases the transconductance of the active element. The resulting cutoff frequencies are 138 kHz and 1,100 kHz, respectively. A low-voltage, high-swing cascode integrator with improved gain and unity-gain frequency was used to design the filters. The designed filters are verified by HSPICE simulation with 0.25 μm CMOS n-well process parameters and a single 2.5 V power supply.
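The entry above targets two analog lowpass cutoffs, 138 kHz and 1,100 kHz. The small Python sketch below only illustrates those target cutoffs with ideal analog Butterworth prototypes via scipy; it says nothing about the CMOS high-swing cascode circuits the paper actually designs, and the filter order is an illustrative assumption.

```python
# Illustrative analog Butterworth prototypes at the two target cutoffs.
import numpy as np
from scipy.signal import butter, freqs

for fc in (138e3, 1100e3):
    b, a = butter(4, 2 * np.pi * fc, btype="low", analog=True)
    w, h = freqs(b, a, worN=np.logspace(4, 8, 500) * 2 * np.pi)
    gain_db = 20 * np.log10(np.abs(h))
    # report the frequency where the response first drops 3 dB
    f3db = w[np.argmax(gain_db <= -3)] / (2 * np.pi)
    print(f"target {fc/1e3:.0f} kHz -> -3 dB at {f3db/1e3:.0f} kHz")
```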

Improving the Performance of the Continuous Speech Recognition by Estimating Likelihoods of the Phonetic Rules (음소변동규칙의 적합도 조정을 통한 연속음성인식 성능향상)

  • Na, Min-Soo; Chung, Min-Hwa
    • Proceedings of the KSPS conference / 2006.11a / pp.80-83 / 2006
  • The purpose of this paper is to build a pronunciation lexicon with estimated likelihoods of the phonetic rules based on phonetic realizations, and thereby to improve the performance of CSR using that dictionary. In the baseline system, the phonetic rules and their application probabilities are defined using knowledge of Korean phonology and experimental tuning. The advantage of this approach is that the phonetic rules are easy to implement and give stable results on general domains. A possible drawback, however, is that it is hard to reflect the characteristics of the phonetic realizations on a specific domain. In order to make the system reflect phonetic realizations, the likelihoods of the phonetic rules are re-estimated from the statistics of the realized phonemes obtained by forced alignment. In our experiment, we generate new lexica that include pronunciation variants created by the re-estimated phonetic rules, and their performance is tested with 12-Gaussian-mixture HMMs and back-off bigrams. The proposed method reduced the WER by 0.42%.

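The entry above re-estimates each phonetic rule's likelihood by counting how often forced alignment actually selected the rule's variant, then keeps only the variants whose estimated probability is high enough. Below is a minimal Python sketch of that counting step; the data structures and threshold are illustrative assumptions, not the paper's lexicon format.

```python
# A minimal sketch of re-estimating rule likelihoods from forced alignments.
from collections import Counter

def reestimate_rule_likelihoods(alignments, applicable, applied):
    """
    alignments : list of (word, chosen_variant) pairs from forced alignment
    applicable : dict rule -> set of words the rule could apply to
    applied    : dict (rule, word) -> variant produced by applying the rule
    returns    : dict rule -> estimated application probability
    """
    hits, totals = Counter(), Counter()
    for word, variant in alignments:
        for rule, words in applicable.items():
            if word in words:
                totals[rule] += 1
                if applied.get((rule, word)) == variant:
                    hits[rule] += 1
    return {r: hits[r] / totals[r] for r in totals if totals[r] > 0}

def build_lexicon(base_pron, applied, likelihoods, threshold=0.1):
    """Keep a rule's variant only if its re-estimated likelihood is high enough."""
    lexicon = {w: {p} for w, p in base_pron.items()}
    for (rule, word), variant in applied.items():
        if likelihoods.get(rule, 0.0) >= threshold:
            lexicon.setdefault(word, set()).add(variant)
    return lexicon
```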

Development of a Foreign Language Speaking Training System Based on Speech Recognition Technology (음성 인식 테크놀로지 기반의 외국어 말하기 훈련 시스템 개발)

  • Koo, Dukhoi
    • Journal of The Korean Association of Information Education / v.23 no.5 / pp.491-497 / 2019
  • As the world develops into a global society, more and more people want to speak foreign languages fluently. Speaking fluently requires sufficient speaking practice, which in turn requires a conversation partner. Recent advances in speech recognition technology are expected to enable systems for foreign language speaking training that do not require a human partner. In this study, a test-bed system for foreign language speaking training was developed and applied in elementary school classes. Elementary school students were given English conversation situations and carried out speaking training with the system; their satisfaction with the system and its potential for continued use were then surveyed. The system developed in this study was found to be helpful for foreign language speaking training.
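The entry above builds speaking practice around a speech recognizer standing in for the conversation partner. Below is a minimal Python sketch of the core interaction such a system needs, recognising the learner's utterance and scoring it against a target sentence; it uses the SpeechRecognition package's Google Web Speech backend purely as an example, since the paper does not state which recognizer its test-bed used, and the file name and target sentence are illustrative assumptions.

```python
# A minimal sketch: recognise a learner's recording and score it against a target.
import difflib
import speech_recognition as sr

def score_utterance(wav_path: str, target: str) -> float:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        heard = recognizer.recognize_google(audio, language="en-US")
    except sr.UnknownValueError:
        return 0.0                                   # nothing recognisable
    return difflib.SequenceMatcher(None, heard.lower(), target.lower()).ratio()

if __name__ == "__main__":
    print(score_utterance("learner.wav", "How much is this bag?"))
```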

Effects of general and oral health on quality of life in the elderly living alone and with family (독거노인과 가족동거노인의 건강 및 구강건강이 건강 관련 삶의 질에 미치는 영향)

  • Jung, Eun-Ju
    • Journal of Korean society of Dental Hygiene / v.19 no.4 / pp.577-589 / 2019
  • Objectives: The purpose of this study was to investigate the effects of general and oral health on quality of life in the elderly living alone and with family. Methods: We analyzed data from the 6th Korea National Health and Nutrition Examination Survey. The distribution of the elderly living alone and with family by general characteristics and by general and oral health was analyzed using complex-sample chi-square tests. Multiple logistic regression was used to analyze the factors affecting quality of life, with 95% confidence intervals. Results: In the elderly living alone, quality of life significantly correlated with restriction of activity, perceived general and oral health status, perceived stress, and speech difficulties. In the elderly living with family, lower quality of life significantly correlated with restriction of activity, perceived health status, walking days per week, lifetime smoking history, Community Periodontal Index, and chewing and speech difficulties. Conclusions: The elderly are concerned with self-maintenance of general and oral health, so systematic health-service policies need to be developed and operated at the national level. Particular social attention should be paid to the elderly living alone, together with a more continuous and professional approach to their health care.
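The entry above reports its associations as results of multiple logistic regression with 95% confidence intervals. Below is a minimal Python sketch of that kind of analysis, fitting a logistic model and reporting odds ratios with confidence intervals; the variable names and data frame are illustrative assumptions, and a simple unweighted fit like this does not reproduce the complex-sample KNHANES procedures the study used.

```python
# A minimal sketch: logistic regression of a binary quality-of-life indicator,
# reported as odds ratios with 95% confidence intervals (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_qol_model(df: pd.DataFrame) -> pd.DataFrame:
    """df needs a binary 'low_qol' outcome and the listed predictors."""
    model = smf.logit(
        "low_qol ~ activity_restriction + perceived_health"
        " + perceived_oral_health + stress + speech_difficulty",
        data=df,
    ).fit(disp=False)
    ci = model.conf_int()
    return pd.DataFrame({
        "odds_ratio": np.exp(model.params),
        "ci_low": np.exp(ci[0]),
        "ci_high": np.exp(ci[1]),
    })
```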

Pitch trajectories of English vowels produced by American men, women, and children

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.10 no.4 / pp.31-37 / 2018
  • Pitch trajectories reflect the continuous variation of vocal fold movements over time. This study examined the pitch trajectories of English vowels produced by 139 American English speakers, analyzing them statistically with Generalized Additive Mixed Models (GAMMs). First, Praat was used to read the sound data of Hillenbrand et al. (1995). A pitch analysis script was then prepared, and six pitch values at corresponding time points within each vowel segment were collected and checked. The results showed that the group of men produced the lowest pitch trajectories, followed by the groups of women, boys, then girls. The density line showed a bimodal distribution. The pitch values at the six corresponding time points formed a single dip, changing gradually across the vowel segment from 204 to 193 to 196 Hz. The normality tests performed on the pitch data rejected the null hypothesis, so nonparametric tests were conducted to identify significant differences in the values among the four groups. The GAMMs, which analyzed all the pitch data, produced significant results among the pitch values at the six corresponding time points but not between the two groups of boys and girls; they also revealed that the two groups differed significantly only at the first and second time points. Accordingly, the methodology of this study and its findings may be applicable to future studies comparing curvilinear data sets elicited by experimental conditions.
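The entry above samples each vowel's pitch contour at six corresponding time points with a Praat script before the GAMM analysis. Below is a minimal Python sketch of that measurement step using parselmouth (a Python interface to Praat) instead of a Praat script; the pitch range, time step, and edge margins are illustrative assumptions, not the study's settings.

```python
# A minimal sketch: sample a vowel token's pitch contour at six time points.
import numpy as np
import parselmouth
from parselmouth.praat import call

def six_point_pitch(wav_path: str) -> list[float]:
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch(time_step=0.005, pitch_floor=60.0, pitch_ceiling=500.0)
    # six equidistant points strictly inside the segment (avoiding the edges)
    times = np.linspace(0.05 * snd.duration, 0.95 * snd.duration, 6)
    # values are in Hz; unvoiced points come back as NaN
    return [call(pitch, "Get value at time", float(t), "Hertz", "linear")
            for t in times]
```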