• Title/Summary/Keyword: Speech problem


Speech pathologic evaluation of children with ankyloglossia (설유착증 환자의 언어병리학적 평가)

  • Lee, Ju-Kyung
    • Proceedings of the KSPS conference / 2007.05a / pp.155-157 / 2007
  • Objective: There is a close relationship between abnormal intraoral structure and speech-functional problems; patients with cleft palate and ankyloglossia are typical examples. An abnormal structure can be surgically repaired toward a normal one, but ankyloglossia may leave functional limitations, such as a speech disorder, even after adequate surgical treatment, and each individual presents a different speech disorder. The objective of this study is to evaluate the speech of children with ankyloglossia and to determine whether ankyloglossia is associated with articulation problems, in order to present criteria for the indication of frenectomy. Study design: The experimental group comprised 10 children who visited our department of oral and maxillofacial surgery, dental hospital, Chonbuk University, because of ankyloglossia and articulation problems. The average age at the time of the speech test was 5 years 7 months, with an M:F ratio of 4:1. VPI consonant discrimination, PPVT, PCAT, Nasometer II, and Visi-Pitch results were obtained for each group. Result: There was a significant difference in language development as measured by the PPVT; except for 3 members of the experimental group, all the remainder showed delayed language development. The consonant error rate was higher for alveolar consonants; consonant errors in the experimental group occurred mostly on alveolar consonants, and the major error type was distortion. Conclusion: The severity of ankyloglossia can be judged by examining the degree of language development together with a speech test of alveolar consonants, and a decision for frenulotomy can be made using these results.


The Robot Speech Recognition using TMS320VC5510 DSK (TMS320VC5510 DSK를 이용한 음성인식 로봇)

  • Choi, Ji-Hyun;Chung, Ik-Joo
    • Journal of Industrial Technology / v.27 no.A / pp.211-218 / 2007
  • As demands for human-robot interaction increase, robots are expected to be equipped with the kind of intelligence humans have. For natural communication, hearing capabilities are essential, so speech recognition technology for robots is becoming increasingly important. In this paper, we implement a speech recognizer suitable for robot applications. One of the major problems in robot speech recognition is the poor quality of speech captured when a speaker talks at a distance from the microphone mounted on the robot. To cope with this problem, we use wireless transmission of the commands recognized by the speech recognizer implemented on the TMS320VC5510 DSK. In addition, since the TMS320VC5510 DSP is a fixed-point device, we present an efficient realization of the HMM algorithm using fixed-point arithmetic (an illustrative sketch follows this entry).

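The fixed-point detail above is only named, not specified; the following is a minimal Python sketch of the general idea, under the assumption that HMM decoding is done in the log domain with log-probabilities pre-scaled to Q8.8 integers so that only integer additions and comparisons are needed. The toy model values and the Q8.8 scaling are illustrative, not taken from the paper.

```python
# Minimal sketch: log-domain Viterbi with Q8.8 fixed-point log-probabilities.
# All values and the Q8.8 scaling are illustrative assumptions, not from the paper.
import numpy as np

Q = 8  # fractional bits (Q8.8 fixed-point)

def to_fixed(x):
    """Convert float log-probabilities to Q8.8 integers."""
    return np.round(np.asarray(x, dtype=np.float64) * (1 << Q)).astype(np.int32)

def viterbi_fixed(log_pi, log_A, log_B, obs):
    """Viterbi decoding using only integer additions and comparisons.
    log_pi: (N,) initial log-probs, log_A: (N, N) transition log-probs,
    log_B: (N, M) emission log-probs, obs: sequence of observation indices."""
    N = log_pi.shape[0]
    delta = log_pi + log_B[:, obs[0]]                # int32 additions only
    psi = np.zeros((len(obs), N), dtype=np.int32)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_A              # (N, N) candidate path scores
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(N)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta))]                   # backtrack the best state sequence
    for t in range(len(obs) - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy 2-state, 2-symbol model with illustrative numbers only.
log_pi = to_fixed(np.log([0.6, 0.4]))
log_A = to_fixed(np.log([[0.7, 0.3], [0.4, 0.6]]))
log_B = to_fixed(np.log([[0.9, 0.1], [0.2, 0.8]]))
print(viterbi_fixed(log_pi, log_A, log_B, [0, 1, 1, 0]))
```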

Post-Processing of IVA-Based 2-Channel Blind Source Separation for Solving the Frequency Bin Permutation Problem (IVA 기반의 2채널 암묵적신호분리에서 주파수빈 뒤섞임 문제 해결을 위한 후처리 과정)

  • Chu, Zhihao;Bae, Keunsung
    • Phonetics and Speech Sciences / v.5 no.4 / pp.211-216 / 2013
  • IVA (Independent Vector Analysis) is a well-known frequency-domain ICA method used to solve the frequency permutation problem. It generally works quite well for blind source separation, but the frequency bin permutation problem is not always fully resolved. This paper proposes a post-processing method that improves the source separation performance of IVA by fixing the remaining frequency permutations. The proposed method uses the correlation coefficient of the power ratio between frequency bins of the signals separated by IVA-based 2-channel source separation (an illustrative sketch follows this entry). Experimental results verified that the proposed method fixes the remaining frequency permutations in IVA and improves the speech quality of the separated signals.
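
The abstract names the criterion (correlation of power ratios between frequency bins) without giving the procedure. The sketch below is one illustrative reading of that idea for two separated STFT outputs: compare each bin's power-ratio envelope with a whole-band reference envelope and swap the two sources where the crossed pairing correlates better. The reference envelope and the swap rule are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def fix_permutation(Y):
    """Y: separated STFT signals, shape (2 sources, F bins, T frames).
    Swap the two sources in any bin whose power-ratio envelope matches
    the other source's whole-band reference envelope better."""
    power = np.abs(Y) ** 2
    ratio = power / (power.sum(axis=0, keepdims=True) + 1e-12)   # power ratio per bin/frame
    ref = ratio.mean(axis=1)                                     # (2, T) reference envelopes
    Y_fixed = Y.copy()
    for f in range(Y.shape[1]):
        keep = (np.corrcoef(ratio[0, f], ref[0])[0, 1] +
                np.corrcoef(ratio[1, f], ref[1])[0, 1])
        swap = (np.corrcoef(ratio[0, f], ref[1])[0, 1] +
                np.corrcoef(ratio[1, f], ref[0])[0, 1])
        if swap > keep:                                          # this bin looks permuted
            Y_fixed[:, f, :] = Y[::-1, f, :]
    return Y_fixed
```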

Speech Enhancement Based on Psychoacoustic Model (심리음향모델에 근거한 음성개선)

  • Lee Jingeol
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.337-338 / 2000
  • The perceptual filter for speech enhancement was analytically derived so that the frequency content of the input noisy signal matches that of the estimated clean signal in the auditory domain. However, the analytical derivation relies on deconvolution of the spreading function in the psychoacoustic model, which results in an ill-conditioned problem. To cope with this, we propose a novel psychoacoustic-model-based speech enhancement filter whose principle is the same as that of the perceptual filter, but which is derived by a constrained optimization that provides solutions to the ill-conditioned problem.


Speech Enhancement Based on Psychoacoustic Model

  • Lee, Jingeol;Kim, Soowon
    • The Journal of the Acoustical Society of Korea / v.19 no.3E / pp.12-18 / 2000
  • Psychoacoustic-model-based methods have recently been introduced to enhance speech signals corrupted by ambient noise. In particular, the perceptual filter is analytically derived so that the frequency content of the input noisy signal matches that of the estimated clean signal in the auditory domain. However, the analytical derivation relies on deconvolution of the spreading function in the psychoacoustic model, which results in an ill-conditioned problem. To cope with this, we propose a novel psychoacoustic-model-based speech enhancement filter whose principle is the same as that of the perceptual filter, but which is derived by a constrained optimization that provides solutions to the ill-conditioned problem (an illustrative sketch follows this entry). It is demonstrated with artificially generated signals that the proposed filter operates according to this principle, and superior performance over the perceptual filter is shown, provided that the clean speech signal is separable from the noise.

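Both abstracts above replace an ill-conditioned deconvolution of the psychoacoustic spreading function with a constrained optimization, without giving the formulation. Below is a minimal sketch of that general idea, assuming a toy spreading matrix and a non-negativity constraint solved with SciPy's bounded least squares; the matrix, signal, and constraint are illustrative, not the authors' derivation.

```python
# Sketch: recover band excitations x from a spread pattern b ~ S @ x, where S is a
# toy psychoacoustic spreading matrix. The unconstrained inverse amplifies noise and
# can produce negative band energies; a bounded least-squares solve keeps them >= 0.
import numpy as np
from scipy.optimize import lsq_linear

n_bands = 24
idx = np.arange(n_bands)
# Toy spreading matrix: excitation leaks into neighbouring critical bands (assumed shape).
S = np.exp(-0.25 * np.abs(idx[:, None] - idx[None, :]))

x_true = np.zeros(n_bands)
x_true[[5, 12, 18]] = [1.0, 0.6, 0.3]                 # sparse "clean" band energies
b = S @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(n_bands)  # noisy spread pattern

x_naive = np.linalg.solve(S, b)                       # unconstrained inverse
res = lsq_linear(S, b, bounds=(0.0, np.inf))          # non-negativity-constrained least squares
print("naive min:", x_naive.min(), " constrained min:", res.x.min())
```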

Robust Speech Detection Based on Useful Bands for Continuous Digit Speech over Telephone Networks

  • Ji, Mi-Kyongi;Suh, Young-Joo;Kim, Hoi-Rin;Kim, Sang-Hun
    • The Journal of the Acoustical Society of Korea / v.22 no.3E / pp.113-123 / 2003
  • One of the most important problems in speech recognition is detecting the presence of speech in adverse environments; accurate detection of speech boundaries is critical to recognition performance. The speech detection problem becomes more severe when recognition systems are used over the telephone network, especially over wireless networks and in noisy environments. This paper describes various speech detection algorithms for a continuous digit recognition system used over wired/wireless telephone networks, and proposes an algorithm that improves the robustness of speech detection through useful-band selection under noisy telephone networks (an illustrative sketch follows this entry). We compare several speech detection algorithms with the proposed one and present experimental results at various SNRs, which show that the new algorithm outperforms the other speech detection methods.
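
The abstract does not specify how the useful bands are chosen; the sketch below assumes they are simply the bands with the highest estimated SNR (noise taken from the leading frames), after which a summed-band energy threshold gives the frame-level speech decision. Frame sizes, the SNR estimate, and the threshold margin are illustrative assumptions.

```python
import numpy as np

def detect_speech(x, n_fft=256, hop=128, n_bands=16, n_keep=8, noise_frames=10, margin_db=6.0):
    """Frame-level speech/non-speech decisions from the energy of the 'useful' bands."""
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2     # (T, bins)
    edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)
    band = np.stack([spec[:, a:b].sum(axis=1) for a, b in zip(edges[:-1], edges[1:])], axis=1)
    noise = band[:noise_frames].mean(axis=0) + 1e-12        # noise profile from leading frames
    snr = band.mean(axis=0) / noise                         # crude per-band SNR estimate
    useful = np.argsort(snr)[-n_keep:]                      # keep the n_keep highest-SNR bands
    energy_db = 10.0 * np.log10(band[:, useful].sum(axis=1) + 1e-12)
    floor_db = 10.0 * np.log10(band[:noise_frames][:, useful].sum(axis=1).mean() + 1e-12)
    return energy_db > floor_db + margin_db                 # True where speech is detected
```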

Chinese Prosody Generation Based on C-ToBI Representation for Text-to-Speech (음성합성을 위한 C-ToBI기반의 중국어 운율 경계와 F0 contour 생성)

  • Kim, Seung-Won;Zheng, Yu;Lee, Gary-Geunbae;Kim, Byeong-Chang
    • MALSORI / no.53 / pp.75-92 / 2005
  • Prosody modeling is critical in developing text-to-speech (TTS) systems, where speech synthesis is used to automatically generate natural speech. In this paper, we present a prosody generation architecture based on the Chinese Tone and Break Index (C-ToBI) representation. ToBI is a multi-tier representation system based on linguistic knowledge for transcribing events in an utterance. A TTS system that adopts ToBI as an intermediate representation is known to exhibit higher flexibility, modularity, and domain/task portability compared with direct prosody generation TTS systems. However, corpus preparation is very expensive for practical-level performance, because a ToBI-labeled corpus must be constructed manually by prosody experts and accurate statistical prosody modeling normally requires a large amount of data. This paper proposes a new method that transcribes C-ToBI labels automatically in Chinese speech. We model Chinese prosody generation as a classification problem and apply conditional Maximum Entropy (ME) classification to it (an illustrative sketch follows this entry). We empirically verify the usefulness of various natural language and phonology features in building well-integrated features for the ME framework.

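The abstract casts C-ToBI label prediction as conditional Maximum Entropy classification but does not list the features. The sketch below uses multinomial logistic regression (the same exponential-family form as a conditional ME model) over one-hot lexical/phonological context features; the feature names, labels, and training data are invented for illustration and are not the paper's feature set.

```python
# Sketch: conditional Maximum Entropy-style classification of prosodic break indices,
# implemented as multinomial logistic regression over one-hot context features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each sample: simple lexical/phonological context features at one syllable boundary
# (invented for illustration).
X = [
    {"pos": "NN", "next_pos": "VV", "tone": "T3", "punct_follows": False},
    {"pos": "VV", "next_pos": "DE", "tone": "T2", "punct_follows": False},
    {"pos": "NN", "next_pos": "PU", "tone": "T4", "punct_follows": True},
    {"pos": "AD", "next_pos": "VV", "tone": "T1", "punct_follows": False},
]
y = ["B1", "B1", "B3", "B0"]          # toy C-ToBI-style break indices

model = make_pipeline(DictVectorizer(sparse=True),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([{"pos": "NN", "next_pos": "PU", "tone": "T2", "punct_follows": True}]))
```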

A Simple Speech/Non-speech Classifier Using Adaptive Boosting

  • Kwon, Oh-Wook;Lee, Te-Won
    • The Journal of the Acoustical Society of Korea / v.22 no.3E / pp.124-132 / 2003
  • We propose a new speech/non-speech classification method based on the adaptive boosting (AdaBoost) algorithm in order to detect speech for robust speech recognition. The method combines simple base classifiers through AdaBoost with a set of optimized speech features and spectral subtraction (an illustrative sketch follows this entry). The key benefits of this method are its simple implementation, low computational complexity, and avoidance of over-fitting. We checked the validity of the method by comparing its performance with the speech/non-speech classifier used in a standard voice activity detector. For speech recognition purposes, additional performance improvements were achieved by adopting new features, including speech band energies and MFCC-based spectral distortion. At the same false alarm rate, the method reduced miss errors by 20-50%.
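
The base classifiers and features in the abstract are only named; the sketch below shows the overall shape of such a detector, boosting decision stumps with AdaBoost over per-frame feature vectors. The random placeholder features stand in for band energies and MFCC-based distortion after spectral subtraction and are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_frames, n_features = 2000, 8
X = rng.normal(size=(n_frames, n_features))           # placeholder frame features
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n_frames) > 0).astype(int)   # 1 = speech

# AdaBoostClassifier's default base estimator is a depth-1 decision tree (a stump),
# matching the "combination of simple base classifiers" described in the abstract.
clf = AdaBoostClassifier(n_estimators=50)
clf.fit(X[:1500], y[:1500])
print("held-out frame accuracy:", clf.score(X[1500:], y[1500:]))
```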

Microphone Array Based Speech Enhancement Using Independent Vector Analysis (마이크로폰 배열에서 독립벡터분석 기법을 이용한 잡음음성의 음질 개선)

  • Wang, Xingyang;Quan, Xingri;Bae, Keunsung
    • Phonetics and Speech Sciences / v.4 no.4 / pp.87-92 / 2012
  • Speech enhancement aims to improve speech quality by removing background noise from noisy speech. Independent vector analysis is a frequency-domain independent component analysis method that is known to be free from the frequency bin permutation problem in blind source separation from multi-channel inputs. This paper proposes a new method of microphone array based speech enhancement that combines independent vector analysis and beamforming techniques: independent vector analysis separates the speech and noise components from multi-channel noisy speech, and delay-sum beamforming is used to determine which of the separated signals is the enhanced speech (an illustrative sketch follows this entry). To verify the effectiveness of the proposed method, experiments on computer-simulated multi-channel noisy speech at various signal-to-noise ratios were carried out, with PESQ and output signal-to-noise ratio as objective speech quality measures. Experimental results show that the proposed method is superior to conventional microphone array based noise removal approaches such as GSC beamforming.
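
Delay-and-sum beamforming itself is standard; the sketch below shows a basic integer-delay version and a simple way to use its output to decide which IVA-separated signal is the enhanced speech, as one reading of the abstract. The delays, geometry, and correlation-based selection rule are assumptions, not the authors' exact procedure.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """channels: list of (T,) microphone signals; delays: integer sample delays per channel."""
    T = min(len(c) for c in channels)
    out = np.zeros(T)
    for c, d in zip(channels, delays):
        out += np.roll(c[:T], -d)              # advance each channel to align the target
    return out / len(channels)

def pick_enhanced(separated, channels, delays):
    """Return the separated signal most correlated with the steered beamformer output."""
    beam = delay_and_sum(channels, delays)
    scores = [abs(np.corrcoef(s[:len(beam)], beam)[0, 1]) for s in separated]
    return separated[int(np.argmax(scores))]
```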

The Study on the Exponential Smoothing Method of the Concatenation Parts in the Speech Waveform (음성 파형분절의 지수함수 스므딩 기법에 관한 연구)

  • 박찬수
    • Proceedings of the Acoustical Society of Korea Conference / 1991.06a / pp.7-10 / 1991
  • In a text-to-speech system, sound units (phonemes, words, phrases, etc.) can be concatenated to produce the required utterance. The quality of the resulting speech depends on factors including the phonological/prosodic contour, the quality of the basic concatenation units, and how well the units join together. Even if the quality of each basic sound unit is high, a discontinuity at the concatenation point decreases the quality of the synthesized speech. To solve this problem, a smoothing operation should be carried out at the concatenation points, but as yet no established method of parameter smoothing is available for joining the segments together. In this paper, we propose a new algorithm that smooths the unnatural discontinuities that can occur in speech waveform editing, using an exponential smoothing method (an illustrative sketch follows this entry).

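The abstract does not give the smoothing formula; the sketch below shows one common reading of exponential smoothing at a waveform join: measure the amplitude jump at the boundary and subtract an exponentially decaying copy of it from the second segment. The decay length and the form of the correction are assumptions, not the paper's method.

```python
# Sketch: smooth the discontinuity where two waveform segments are concatenated by
# subtracting an exponentially decaying copy of the boundary jump from the second
# segment. The decay length is an illustrative assumption, not the paper's value.
import numpy as np

def exponential_join(seg_a, seg_b, decay_samples=64):
    jump = seg_b[0] - seg_a[-1]                      # amplitude discontinuity at the join
    n = np.arange(len(seg_b))
    correction = jump * np.exp(-n / float(decay_samples))
    return np.concatenate([seg_a, seg_b - correction])

# Toy example: two sinusoid pieces whose phases do not line up at the boundary.
t = np.arange(0, 400)
a = 0.5 * np.sin(2 * np.pi * 0.01 * t)
b = 0.5 * np.sin(2 * np.pi * 0.01 * t + 2.0)
joined = exponential_join(a, b)
print("jump before:", b[0] - a[-1], " jump after:", joined[len(a)] - joined[len(a) - 1])
```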