• Title/Summary/Keyword: speech situation

Search Result 122, Processing Time 0.028 seconds

Mother-Child Interactions in Preschool Children Who Stutter (학령전기 말더듬아동의 어머니-아동 상호작용 행동특성)

  • Kim, Jeong-Mee;Sim, Hyun-Sub;Lee, Eun-Ju
    • Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.35-48
    • /
    • 2005
  • This study examined the relationship between maternal interactive behaviors and stuttering behaviors in preschool children who stutter. Participants were twenty-four children who stutter and their mothers. For the purposes of the current study, 5~10-minute segments were re-recorded from 50-minute videotaped sessions originally collected to develop a fluency assessment instrument. The segments included mother-child interactions in toy-play and book-reading situations. Mother-child interactive behaviors were assessed with the Maternal Behavior Rating Scales (MBRS) and the Child Behavior Rating Scales (CBRS), and the children's stuttering was assessed with the Paradise-Fluency Assessment (P-FA). The results were as follows: 1) maternal interactive behavior did not differ significantly between situations, but scores on the maternal responsiveness factor were higher in the play situation than in the reading situation; 2) maternal responsiveness appeared to promote pivotal behavior in children who stutter; and 3) the level of maternal responsiveness was a predictor of the children's stuttering behaviors. The therapeutic implications of these results are discussed.

  • PDF

The Effect of Noise on the Normal and Pathological Voice (소음환경이 정상 및 병적음성에 미치는 영향)

  • Hong, Ki-Hwan;Yang, Yoon-Soo;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.9 no.4
    • /
    • pp.27-38
    • /
    • 2002
  • The purpose of this article is to present the acoustic parameters (VOT, jitter, shimmer, vF0, vAm, NHR, SPI, VTI, DVB, DSH) for consonants (/pipi/, /pʰipʰi/, /p'ip'i/) and sustained vowels (/a/, /e/, /i/) produced by normal subjects and dysphonia patients at two vocal effort levels (normal, high), induced via the Lombard effect using 60 dB white noise. The Lombard effect refers to the increase in vocal effort in noisy situations. At normal vocal effort, the acoustic parameter values of the patients are generally greater than those of the normal subjects, and in the noisy situation a significant decrease in acoustic parameter values is seen in normal subjects compared with dysphonia patients. The clinical implication of this finding is that vocal quality in dysphonia is not compensated by vocal effort as well as it is in normal subjects, because of the inefficiency caused by abnormal vocal fold appearance and function. On this basis, patients can be counseled that their voice quality cannot be improved as much as they may expect.

  • PDF
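
Two of the perturbation parameters listed in the abstract above, jitter and shimmer, have standard local definitions: the mean absolute difference between consecutive pitch periods (or cycle peak amplitudes), relative to the mean. A minimal sketch with illustrative cycle measurements (not data from the study):

```python
import numpy as np

def jitter_percent(periods):
    """Mean absolute difference between consecutive pitch periods,
    relative to the mean period (local jitter, in percent)."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.abs(np.diff(periods)).mean() / periods.mean()

def shimmer_percent(amplitudes):
    """Mean absolute difference between consecutive cycle peak
    amplitudes, relative to the mean amplitude (local shimmer, %)."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.abs(np.diff(amplitudes)).mean() / amplitudes.mean()

# Illustrative cycle measurements (periods in seconds, amplitudes in
# arbitrary units), not taken from the paper
periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]
amps = [1.00, 0.97, 1.02, 0.99, 1.01]
jit = jitter_percent(periods)
shim = shimmer_percent(amps)
```

Pathological voices typically show elevated values of both measures relative to normal voices.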

Speaker Identification Using Augmented PCA in Unknown Environments (부가 주성분분석을 이용한 미지의 환경에서의 화자식별)

  • Yu, Ha-Jin
    • MALSORI
    • /
    • no.54
    • /
    • pp.73-83
    • /
    • 2005
  • The goal of our research is to build a text-independent speaker identification system that can be used in any condition without an additional adaptation process. The performance of speaker recognition systems can be severely degraded under unknown, mismatched microphone and noise conditions. In this paper, we show that PCA (principal component analysis) can improve performance in such situations. We also propose an augmented PCA process, which augments the original feature vectors with class-discriminative information before the PCA transformation and selects the best direction for each pair of highly confusable speakers. The proposed method reduced the relative recognition error by 21%.

  • PDF
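
The abstract above does not give implementation details for the augmented PCA process; the following is only a minimal sketch of the general idea (appending a class-discriminative component to each feature vector before a standard PCA transform), using synthetic two-speaker data and hypothetical feature dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic MFCC-like features for two speakers (hypothetical dimensions)
n_per_spk, dim = 100, 12
feats = np.vstack([rng.normal(0.0, 1.0, (n_per_spk, dim)),
                   rng.normal(0.5, 1.0, (n_per_spk, dim))])
labels = np.repeat([0, 1], n_per_spk)

# Augment each vector with a crude class-discriminative component:
# the projection onto the difference of the two class means
mean0 = feats[labels == 0].mean(axis=0)
mean1 = feats[labels == 1].mean(axis=0)
disc = mean1 - mean0
disc /= np.linalg.norm(disc)
aug = np.hstack([feats, feats @ disc[:, None]])   # (200, 13)

# Standard PCA on the augmented vectors
centered = aug - aug.mean(axis=0)
cov = centered.T @ centered / (len(aug) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
projected = centered @ eigvecs[:, order[:dim]]    # keep top `dim` directions
```

The paper's per-pair direction selection for confusable speakers is not reproduced here; the sketch only shows the augment-then-transform structure.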

Performance Improvement of Speech Recognition Based on Independent Component Analysis (독립성분분석법을 이용한 음성인식기의 성능향상)

  • 김창근;한학용;허강인
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2001.06a
    • /
    • pp.285-288
    • /
    • 2001
  • In this paper, we propose a new speech feature extraction method using ICA (Independent Component Analysis), which minimizes the dependency and correlation among speech signal components in order to separate each component of the speech signal. ICA removes redundancy in the data after finding the axis directions with the greatest variance in the input dimensions. Training and recognition experiments using an HMM verified that the ICA features improve recognition performance compared with conventional mel-cepstrum features. We also observed that ICA mitigated the decline in recognition accuracy caused by environmental noise.

  • PDF
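
The abstract above mentions both decorrelation (finding greatest-variance axes) and independence; a common way to realize this is PCA whitening followed by FastICA iterations. A minimal sketch on a synthetic two-channel mixture (the nonlinearity and iteration count are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic source signals and a random mixture (stand-ins for
# correlated speech feature components)
n = 2000
s = np.vstack([np.sin(np.linspace(0, 40, n)),
               rng.uniform(-1, 1, n)])           # (2, n) sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # mixing matrix
x = A @ s                                        # observed mixtures

# Step 1: whitening (decorrelate, normalize variance) -- the
# "greatest-variance axis" step the abstract refers to
x = x - x.mean(axis=1, keepdims=True)
cov = x @ x.T / n
d, E = np.linalg.eigh(cov)
W_white = E @ np.diag(d ** -0.5) @ E.T
z = W_white @ x

# Step 2: symmetric FastICA iterations with a tanh nonlinearity
W = rng.normal(size=(2, 2))
for _ in range(200):
    g = np.tanh(W @ z)
    g_prime = 1.0 - g ** 2
    W_new = (g @ z.T) / n - np.diag(g_prime.mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)              # symmetric decorrelation
    W = u @ vt
y = W @ z   # estimated independent components
```

After whitening, the components are exactly uncorrelated; the ICA rotation then searches for statistically independent directions.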

Translation and Adaptation of the Children's Home Inventory for Listening Difficulties (CHILD) into Korean (가정환경 아동듣기평가(CHILD) 부모용 설문지의 한국어 번역 및 적용 연구)

  • Choi, Jae Hee;Seo, Young Ran;Jang, Hyun Sook
    • 재활복지
    • /
    • v.20 no.4
    • /
    • pp.247-264
    • /
    • 2016
  • The Children's Home Inventory for Listening Difficulties (CHILD) questionnaire has been used to assess listening and communication difficulties in various home situations for children with hearing loss. The purpose of this study was to translate the CHILD questionnaire for parents into Korean and to verify the reliability and validity of the Korean version (CHILD-K). CHILD-K was completed by 55 parents of children (ages 3~12 years) using cochlear implants (CI); 27 of the children were in preschool and 28 in elementary school. Internal consistency reliability of CHILD-K was verified by Cronbach's alpha, and a mixed factorial ANOVA was conducted to compare the effects of age group and situation factors (Quiet, Noise, Distance, Social, and Media) on the CHILD score. The results indicated that CHILD-K showed excellent internal consistency reliability (α = .96). CHILD scores differed significantly between age groups, with the older group scoring higher in all situations except Distance. For both groups the mean scores for the Quiet situation were significantly higher, and those for the Social situation significantly lower, than for the other situations. Moreover, the analysis showed that children with CI had difficulties in the Social situation combined with other situation factors. The results indicate that the Korean version of the CHILD questionnaire is a reliable tool for assessing communication abilities in home situations for Korean-speaking children using CI.
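
Cronbach's alpha, used above to verify internal consistency, is computed from the item variances and the variance of the total score. A minimal sketch with made-up questionnaire responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scores.
    alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Made-up responses: 6 respondents x 4 items on a 1-8 scale
scores = np.array([[7, 8, 7, 8],
                   [5, 5, 6, 5],
                   [3, 2, 3, 3],
                   [6, 6, 7, 6],
                   [2, 3, 2, 2],
                   [8, 7, 8, 8]])
alpha = cronbach_alpha(scores)
```

Highly correlated items, as in this made-up matrix, push alpha toward 1; the study's reported α = .96 indicates similarly strong internal consistency.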

A Study on Phased Reading Techniques of Mathematical Expression in the Digital Talking Book (디지털 음성 도서에서 MathML 수식의 수준별 독음 변환 기법)

  • Hwang, Jungsoo;Lim, Soon-Bum
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.8
    • /
    • pp.1025-1032
    • /
    • 2014
  • Until now there has been little support for reading mathematical expressions aloud, beyond text-based expressions, so providing spoken rendering of mathematical expressions is important. Reading mathematical expressions also presents obstacles for people who are not visually impaired, such as presbyopia or reading while in a vehicle, so support for reading mathematical expressions in various situations is needed. Previous research focused on transforming mathematical expressions into Korean text based on Content MathML. In this paper, we expand the scope from readers with reading disabilities to those without. We tested the appropriateness of our rules for converting MathML-based expressions into speech and defined three level-based Korean math-to-speech rules. We implemented spoken mathematical expressions using these three rules and conducted a comprehension test to determine whether they are well defined.
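
The paper's level-based Korean math-to-speech rules are not reproduced here; as a minimal sketch of the general conversion idea, the following walks a small Content MathML tree and emits simple English spoken text (operator wordings are illustrative):

```python
import xml.etree.ElementTree as ET

# Hypothetical spoken renderings for a few Content MathML operators;
# the paper defines level-based Korean rules, which this sketch does
# not reproduce.
OP_WORDS = {"plus": "plus", "minus": "minus", "times": "times",
            "divide": "divided by"}

def speak(node):
    """Recursively turn a Content MathML <apply> tree into spoken text."""
    tag = node.tag.split("}")[-1]          # drop any XML namespace
    if tag == "apply":
        op = node[0].tag.split("}")[-1]    # first child names the operator
        args = [speak(child) for child in node[1:]]
        return (" " + OP_WORDS.get(op, op) + " ").join(args)
    if tag in ("ci", "cn"):                # identifier or number leaf
        return node.text.strip()
    return ""

mathml = ("<apply><plus/><ci>a</ci>"
          "<apply><times/><cn>2</cn><ci>b</ci></apply></apply>")
spoken = speak(ET.fromstring(mathml))
# spoken == "a plus 2 times b"
```

A level-based rule set, as in the paper, would vary the verbosity of these renderings (e.g., adding grouping cues) according to the reader's needs.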

Real Time Environmental Classification Algorithm Using Neural Network for Hearing Aids (인공 신경망을 이용한 보청기용 실시간 환경분류 알고리즘)

  • Seo, Sangwan;Yook, Sunhyun;Nam, Kyoung Won;Han, Jonghee;Kwon, See Youn;Hong, Sung Hwa;Kim, Dongwook;Lee, Sangmin;Jang, Dong Pyo;Kim, In Young
    • Journal of Biomedical Engineering Research
    • /
    • v.34 no.1
    • /
    • pp.8-13
    • /
    • 2013
  • Persons with sensorineural hearing impairment have trouble hearing in noisy environments because of their deteriorated hearing levels and the low spectral resolution of their auditory system, and they therefore use hearing aids to compensate for their weakened hearing. Various algorithms for hearing loss compensation and environmental noise reduction have been implemented in hearing aids; however, the performance of these algorithms varies with the external sound situation, so it is important to tune the operation of the hearing aid appropriately across a wide variety of sound situations. In this study, a sound classification algorithm that can be applied to the hearing aid is proposed. The algorithm classifies speech situations into four categories: 1) speech-only, 2) noise-only, 3) speech-in-noise, and 4) music-only. It consists of two sub-parts: a feature extractor and a speech situation classifier. The former extracts seven characteristic features - short-time energy and zero crossing rate in the time domain; spectral centroid, spectral flux, and spectral roll-off in the frequency domain; and mel frequency cepstral coefficients and mel-band power values - from the recent input signals of two microphones, and the latter classifies the current speech situation. Experimental results showed that the proposed algorithm classified speech situations with an accuracy of over 94.4%. Based on these results, we believe the algorithm can be applied to the hearing aid to improve speech intelligibility in noisy environments.
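
Three of the seven features named above (short-time energy, zero crossing rate, and spectral centroid) can be sketched as follows; the frame length, sampling rate, and test tone are illustrative, not taken from the paper:

```python
import numpy as np

def frame_features(frame, sr):
    """Compute three classic frame-level audio features."""
    frame = np.asarray(frame, dtype=float)
    # short-time energy: mean squared amplitude of the frame
    energy = np.sum(frame ** 2) / len(frame)
    # zero crossing rate: fraction of consecutive samples changing sign
    zcr = np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))
    # spectral centroid: magnitude-weighted mean frequency
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = np.sum(freqs * mags) / np.sum(mags)
    return energy, zcr, centroid

sr = 16000
t = np.arange(1024) / sr
tone = np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone
energy, zcr, centroid = frame_features(tone, sr)
```

For a pure 1 kHz tone at 16 kHz sampling, the centroid sits near 1000 Hz and the zero crossing rate near 2 crossings per cycle; noise-like frames would show a flatter spectrum and a higher zcr, which is what makes these features useful for situation classification.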

Study about Windows System Control Using Gesture and Speech Recognition (제스처 및 음성 인식을 이용한 윈도우 시스템 제어에 관한 연구)

  • 김주홍;진성일;이남호;이용범
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1289-1292
    • /
    • 1998
  • HCI (human-computer interface) technologies have often been implemented using a mouse, keyboard, and joystick. Because the mouse and keyboard can be used only in limited situations, more natural HCI methods, such as speech-based and gesture-based methods, have recently attracted wide attention. In this paper, we present a multi-modal input system to control the Windows system for practical use of a multimedia computer. Our multi-modal input system consists of three parts. The first is a virtual-hand mouse, intended to replace mouse control with a set of gestures. The second controls Windows using speech recognition, and the third controls Windows using gesture recognition. We introduce neural network and HMM methods to recognize speech and gestures. The outputs of the three parts interface with the CPU directly and through Windows.

  • PDF
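
The abstract above does not specify the gesture set or speech vocabulary; as a minimal sketch of the multi-modal dispatch idea, the following maps hypothetical recognizer outputs (modality, label) to window actions:

```python
# Hypothetical recognized labels mapped to window actions; the actual
# gesture set, speech vocabulary, and recognizers (HMM / neural
# network) are not specified in the abstract.
ACTIONS = {
    ("speech", "open"): "open window",
    ("speech", "close"): "close window",
    ("gesture", "point"): "move cursor",
    ("gesture", "grab"): "drag window",
}

def dispatch(modality, label):
    """Route one recognizer output (modality, label) to a window action."""
    return ACTIONS.get((modality, label), "ignore")

result = dispatch("gesture", "grab")
```

In a real system each modality's recognizer would emit such labels asynchronously, and a shared dispatcher of this shape is one simple way to merge them into a single control stream.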

A Emergency Sound Detecting Method for Smarter City (스마트 시티에서의 이머전시 사운드 감지방법)

  • Cho, Young-Im
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.12
    • /
    • pp.1143-1149
    • /
    • 2010
  • Because noise is the main cause of degraded performance in speech recognition, the place or environment matters greatly. To improve speech recognition performance in real situations where various extraneous noises abound, a novel combination of FIR and Wiener filters is proposed and tested. The combination yields improved accuracy and reduced processing time, enabling fast analysis and response in emergency situations. City life involves many dangerous situations, so a smarter city needs to detect many types of sound in various environments. This paper therefore addresses how to detect many types of sound in a real city, especially via CCTV: it implements the smarter city by detecting many types of sound and filtering the emergency sounds out of the sound stream, making it possible to handle emergency or dangerous situations.
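
The paper's exact FIR design and Wiener filter are not given in the abstract; the following is a minimal sketch of the general two-stage idea, an FIR pre-filter followed by a frequency-domain Wiener-style gain, with all signals and parameters synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
sr, n = 8000, 4096
t = np.arange(n) / sr
clean = np.sin(2 * np.pi * 440 * t)          # stand-in for the wanted sound
noisy = clean + 0.5 * rng.normal(size=n)     # broadband noise added

# Stage 1: simple FIR pre-filter (moving-average low-pass; a
# hypothetical design, not the paper's coefficients)
taps = np.ones(5) / 5
pre = np.convolve(noisy, taps, mode="same")

# Stage 2: frequency-domain Wiener-style gain H = max(P - N, 0) / P,
# with the noise floor crudely estimated from the median bin power
# (an assumption made for this sketch)
spec = np.fft.rfft(pre)
power = np.abs(spec) ** 2
noise_power = np.median(power)
gain = np.maximum(power - noise_power, 0.0) / (power + 1e-12)
enhanced = np.fft.irfft(gain * spec, n=n)

snr_before = 10 * np.log10(np.sum(clean**2) / np.sum((noisy - clean)**2))
snr_after = 10 * np.log10(np.sum(clean**2) / np.sum((enhanced - clean)**2))
```

The FIR stage cheaply removes out-of-band noise, and the Wiener-style gain then suppresses the remaining bins that fall below the estimated noise floor, which is what makes the combination fast enough for real-time emergency detection.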

A Study on the Design of Integrated Speech Enhancement System for Hands-Free Mobile Radiotelephony in a Car

  • Park, Kyu-Sik;Oh, Sang-Hun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.2E
    • /
    • pp.45-52
    • /
    • 1999
  • This paper presents an integrated speech enhancement system for hands-free mobile communication. The proposed system incorporates both acoustic echo cancellation and an engine noise reduction device to enhance the desired speech signal in an echoed and noisy environment. A delayless subband adaptive structure is used for the acoustic echo cancellation operation. An NLMS-based adaptive noise canceller is then applied to the noisy signal remaining after residual echo removal, to achieve selective attenuation of the engine noise at its dominant frequency components. Two sets of computer simulations demonstrate the effectiveness of the system: one for a fixed acoustical environment, and the other, a more realistic situation, for robustness under changes in the acoustic transmission environment. Simulation results confirm a system performance of 20-25 dB ERLE in acoustic echo cancellation and 9-19 dB engine noise attenuation at the dominant frequency components in both cases.

  • PDF
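
The NLMS-based adaptive noise canceller described above can be sketched as follows; the engine-noise path, filter length, and step size are synthetic stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, taps = 5000, 8

# Reference engine-noise signal and its (unknown) path to the microphone
ref = rng.normal(size=n)
path = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])
interference = np.convolve(ref, path, mode="full")[:n]
speech = np.sin(2 * np.pi * np.arange(n) * 0.01)   # stand-in speech
mic = speech + interference

# NLMS adaptive noise canceller: adapt FIR weights so the filtered
# reference matches the interference, leaving speech in the error
w = np.zeros(taps)
mu, eps = 0.05, 1e-8
out = np.zeros(n)
for i in range(taps, n):
    x = ref[i - taps + 1:i + 1][::-1]   # most recent samples first
    y = w @ x                           # filtered reference
    e = mic[i] - y                      # error = enhanced output
    w += mu * e * x / (x @ x + eps)     # normalized LMS weight update
    out[i] = e
```

Because the speech is uncorrelated with the noise reference, the weights converge toward the noise path and the error signal converges toward the speech, which is the selective-attenuation behavior the paper reports.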