• Title/Summary/Keyword: evaluation of speech sub-systems


Speech Evaluation Variables Related to Speech Intelligibility in Children with Spastic Cerebral Palsy (경직형 뇌성마비아동의 말명료도 및 말명료도와 관련된 말 평가 변인)

  • Park, Ji-Eun;Kim, Hyang-Hee;Shin, Ji-Cheol;Choi, Hong-Shik;Sim, Hyun-Sub;Park, Eun-Sook
    • Phonetics and Speech Sciences / v.2 no.4 / pp.193-212 / 2010
  • The purpose of our study was to identify effective speech evaluation items by examining which speech variables successfully predict speech intelligibility in children with cerebral palsy (CP). The subjects were 55 children with spastic cerebral palsy. The speech evaluation comprised a speech subsystem evaluation and a speech intelligibility test. The results are as follows. First, the speech subsystem evaluation task consisted of 48 items across an observational evaluation stage and three severity levels; the levels correlated with gross motor function, fine motor function, and age. Second, the speech subsystem evaluation items were grouped into seven factors. Third, 34 of the 48 task items correlated positively with the syllable intelligibility rating: four items in the observational evaluation stage and, among the nonverbal articulatory function evaluation items, 11 in level one, 12 in level two, and eight in level three. Fourth, 23 of the 48 items correlated with the sentence intelligibility rating: one item in the observational evaluation stage (in the articulatory structure evaluation task), six in level one, eight in level two, and eight in level three. Fifth, a total of 14 items influenced the syllable intelligibility rating. Sixth, a total of 13 items influenced the sentence intelligibility rating. According to these results, the variables influencing the speech intelligibility of children with CP among the articulatory function tasks lay in the respiratory function task, the phonatory function task, and the lip- and chin-related tasks; we found no correlation for tongue function.
These results could be applied to speech evaluation, setting therapy goals, and tracking progress in children with CP. We studied only children with the spastic type of cerebral palsy, and severe cases were underrepresented relative to moderate ones; when evaluating children of other severities, their characteristics may need to be taken more into account. Further study of speech evaluation variables in relation to the severity of speech intelligibility and other types of cerebral palsy may be necessary.
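The item-by-item correlation analysis described above can be sketched as follows. This is a minimal illustration with random placeholder data, not the study's data or criteria; the sample and item counts match the abstract, but the scoring scale and the r > 0.3 cutoff are assumptions for demonstration only.

```python
import numpy as np

# Placeholder data: 55 spastic CP children x 48 speech subsystem task items,
# plus one intelligibility rating per child (all values are random, for
# illustration only).
rng = np.random.default_rng(0)
n_children, n_items = 55, 48
item_scores = rng.random((n_children, n_items))   # hypothetical item scores
intelligibility = rng.random(n_children)          # hypothetical syllable ratings

# Pearson correlation of each task item with the intelligibility rating
corrs = np.array([
    np.corrcoef(item_scores[:, i], intelligibility)[0, 1]
    for i in range(n_items)
])

# Illustrative selection of positively correlated items (threshold assumed)
positively_correlated = np.flatnonzero(corrs > 0.3)
```

With real scores, the indices in `positively_correlated` would identify which evaluation items predict intelligibility, analogous to the 34 syllable-level and 23 sentence-level items reported.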


Speech Recognition in Noisy Environments using Wiener Filtering (Wiener Filtering을 이용한 잡음환경에서의 음성인식)

  • Kim, Jin-Young;Eom, Ki-Wan;Choi, Hong-Sub
    • Speech Sciences / v.1 / pp.277-283 / 1997
  • In this paper, we present a robust recognition algorithm based on Wiener filtering as a research tool for developing a Korean speech recognition system. We apply the Wiener filtering method in the cepstrum domain, because the frequency-domain method is computationally expensive and complex. The method's effectiveness was evaluated on speaker-independent isolated Korean digit recognition tasks using discrete HMM speech recognition systems. In these tasks, we used 12th-order weighted cepstra as feature vectors and added computer-simulated white Gaussian noise at different levels to clean speech signals for recognition experiments under noisy conditions. Experimental results show that the presented algorithm improves recognition by 5% to 20% compared to the spectral subtraction method.
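A minimal sketch of the classic Wiener gain H = S/(S+N) applied to one noisy frame, with the enhanced log spectrum converted to a cepstrum for HMM features. This is a frequency-domain illustration under strong assumptions (known noise level, power-spectral-subtraction estimate of the clean PSD); the paper's actual cepstrum-domain formulation is not reproduced here.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Wiener gain H = S/(S+N); the clean PSD S is estimated by power
    spectral subtraction, floored to stay positive."""
    clean_est = np.maximum(noisy_psd - noise_psd, floor * noisy_psd)
    return clean_est / (clean_est + noise_psd)

def real_cepstrum(log_magnitude):
    """Real cepstrum of one frame from its log-magnitude spectrum."""
    return np.fft.irfft(log_magnitude)

# Toy frame: a sinusoid plus additive white Gaussian noise (mirroring the
# paper's simulated-noise setup; levels here are arbitrary).
rng = np.random.default_rng(1)
n = 256
clean = np.sin(2 * np.pi * 10 * np.arange(n) / n)
noisy = clean + 0.3 * rng.standard_normal(n)

noisy_psd = np.abs(np.fft.rfft(noisy)) ** 2
noise_psd = np.full_like(noisy_psd, 0.3 ** 2 * n)   # assumed-known noise level

gain = wiener_gain(noisy_psd, noise_psd)
enhanced_log_mag = 0.5 * np.log(np.maximum(gain * noisy_psd, 1e-10))
ceps = real_cepstrum(enhanced_log_mag)  # cepstral features for the recognizer
```

In practice the first dozen or so cepstral coefficients (here, a 12th-order weighted cepstrum, as in the paper) would be weighted and fed to the discrete HMM recognizer.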


Human Laughter Generation using Hybrid Generative Models

  • Mansouri, Nadia;Lachiri, Zied
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1590-1609 / 2021
  • Laughter is one of the most important nonverbal sounds humans generate and a means of expressing emotion. Its acoustic and contextual features differ from those of speech, and many difficulties arise when modeling it. In this work, we propose an audio laughter generation system based on unsupervised generative models: the autoencoder (AE) and its variants. The procedure combines three main sub-processes: (1) analysis, which extracts log-magnitude spectrograms from the laughter database; (2) training of the generative models; and (3) synthesis, which involves an intermediate mechanism, the vocoder. To improve synthesis quality, we propose three hybrid models (LSTM-VAE, GRU-VAE, and CNN-VAE) that combine the representation learning capacity of the variational autoencoder (VAE) with the temporal modeling ability of long short-term memory (LSTM) recurrent networks and the CNN's ability to learn invariant features. To assess the performance of the proposed audio laughter generation process, an objective evaluation (RMSE) and a perceptual audio quality test (listening test) were conducted. According to these evaluation metrics, the GRU-VAE outperforms the other VAE models.
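The objective evaluation step can be sketched as an RMSE between reference and generated log-magnitude spectrograms. The frame and bin counts, noise level, and normalization below are placeholder assumptions; the paper's exact RMSE computation is not specified in the abstract.

```python
import numpy as np

def spectrogram_rmse(ref_logmag, gen_logmag):
    """RMSE between a reference and a generated log-magnitude spectrogram,
    averaged over all time-frequency bins (normalization assumed)."""
    return float(np.sqrt(np.mean((ref_logmag - gen_logmag) ** 2)))

# Placeholder spectrograms: 100 frames x 257 frequency bins; the "generated"
# one is the reference plus small Gaussian error, standing in for VAE output.
rng = np.random.default_rng(2)
ref = rng.standard_normal((100, 257))
gen = ref + 0.1 * rng.standard_normal(ref.shape)

rmse = spectrogram_rmse(ref, gen)  # close to the injected error std (0.1)
```

A lower RMSE on held-out laughter spectrograms is how one model variant (here, the GRU-VAE in the paper's results) would be ranked above another.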