
Knowledge-driven speech features for detection of Korean-speaking children with autism spectrum disorder

  • Seonwoo Lee;Eun Jung Yeo;Sunhee Kim;Minhwa Chung
    • 말소리와 음성과학 / Vol. 15, No. 2 / pp.53-59 / 2023
  • Detection of children with autism spectrum disorder (ASD) based on speech has relied on predefined feature sets because of their ease of use and the capabilities of existing speech-analysis tools. However, such sets may not capture clinical impressions adequately because of the broad range and large number of features they include. This paper demonstrates that knowledge-driven speech features (KDSFs), tailored specifically to the speech traits of ASD, are more effective and efficient for distinguishing the speech of ASD children from that of children with typical development (TD) than a predefined feature set, the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS). The KDSFs cover speech characteristics related to frequency, voice quality, speech rate, and spectral features that have been identified as distinctive attributes of ASD speech. The dataset used for the experiments consists of 63 ASD children and 9 TD children. To alleviate the imbalance in the number of training utterances, a data augmentation technique was applied to the TD children's utterances. A support vector machine (SVM) classifier trained with the KDSFs achieved an accuracy of 91.25%, surpassing the 88.08% obtained with the predefined set. This result underscores the importance of incorporating domain knowledge into the development of speech technologies for individuals with speech disorders.
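
The classifier side of this setup can be sketched as a linear SVM trained on per-utterance feature vectors. The four feature dimensions, the synthetic data, and the Pegasos training method are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for four knowledge-driven feature dimensions:
# mean F0 (Hz), jitter, speech rate (syll/s), spectral tilt (dB/oct)
X_asd = rng.normal([230, 0.020, 3.5, -8.0], [25, 0.008, 0.6, 1.5], size=(60, 4))
X_td  = rng.normal([205, 0.010, 4.5, -11.0], [25, 0.005, 0.6, 1.5], size=(60, 4))
X = np.vstack([X_asd, X_td])
y = np.array([1] * 60 + [-1] * 60)            # +1 = ASD, -1 = TD

# standardize features before SVM training, as is customary
X = (X - X.mean(axis=0)) / X.std(axis=0)

def pegasos_svm(X, y, lam=0.01, epochs=200):
    """Linear SVM trained by Pegasos stochastic subgradient descent."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:          # hinge loss is active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only the regularizer shrinks w
                w = (1 - eta * lam) * w
    return w

w = pegasos_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)             # training accuracy on toy data
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is the pipeline shape (feature vectors, standardization, margin-based classifier), not the numbers, which come from synthetic data.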

한국어 방언 음성의 실험적 연구 (An Experimental Study of Korean Dialectal Speech)

  • 김현기;최영숙;김덕수
    • 음성과학 / Vol. 13, No. 3 / pp.49-65 / 2006
  • Recent advances in digital speech signal processing have drastically expanded the communication boundary between human beings and machines. The aim of this study is to collect dialectal speech across Korea on a large scale and to establish a digital speech database that can support further research on Korean dialects and the creation of value-added networks. 528 informants from across the country participated in this study. Acoustic characteristics of vowels and consonants were analyzed using the power spectrum and spectrogram functions of CSL. Test words were presented on picture cards and letter cards containing each vowel and each consonant in word-initial position. Formants were plotted on a vowel chart, and diphthong transitions were compared across dialects. Spectral timing, VOT, VD, and TD were measured on spectrograms for stop consonants, and frication frequency, intensity, and lateral formants (LF1, LF2, LF3) for fricatives. Nasal formants (NF1, NF2, NF3) were analyzed for the different nasalities of nasal consonants. The acoustic analysis showed that younger-generation speakers did not distinguish close-mid /e/ from open-mid /ɛ/. The diphthongs /we/ and /wj/ were realized as monophthongs or diphthongs depending on the dialect. The sibilant /s/ showed aspiration preceding the frication noise. The lateral /l/ was realized as the variant /r/ in Kyungsang dialectal speech. The duration of nasal consonants was longest in Chungchong dialectal speech.


음성 향상을 위한 NPHMM을 갖는 IMM 알고리즘 (IMM Algorithm with NPHMM for Speech Enhancement)

  • 이기용
    • 음성과학 / Vol. 11, No. 4 / pp.53-66 / 2004
  • A nonlinear speech enhancement method with interacting parallel extended Kalman filters is applied to speech contaminated by additive white noise. To represent the nonlinear and nonstationary nature of speech, we assume that speech is the output of a nonlinear prediction HMM (NPHMM) combining a neural network with an HMM. The NPHMM is a nonlinear autoregressive process whose time-varying parameters are controlled by a hidden Markov chain. The simulation results show that the proposed method offers better performance than the previous results [6], at slightly increased complexity.
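
As a drastically simplified building block of this approach, a single linear Kalman filter tracking an AR(1) signal in additive white noise can be sketched. The full method runs several extended Kalman filters in parallel under NPHMM control and mixes their outputs, which this sketch does not attempt; the AR coefficient and noise levels are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.95, 0.1, 1.0                 # AR coeff, process var, measurement var
n = 500

x = np.zeros(n)                          # clean "speech-like" AR(1) signal
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
z = x + rng.normal(0, np.sqrt(r), n)     # observation in additive white noise

xh, p = 0.0, 1.0                         # state estimate and its variance
est = np.zeros(n)
for t in range(n):
    xh, p = a * xh, a * a * p + q        # predict step
    k = p / (p + r)                      # Kalman gain
    xh = xh + k * (z[t] - xh)            # update with the noisy observation
    p = (1 - k) * p
    est[t] = xh

mse_noisy = np.mean((z - x) ** 2)
mse_filt = np.mean((est - x) ** 2)
print(f"noisy MSE {mse_noisy:.2f} -> filtered MSE {mse_filt:.2f}")
```

The IMM extension would maintain one such filter per HMM state and weight their estimates by the state posteriors.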


음질 개선을 통한 음성의 인식 (Speech Recognition through Speech Enhancement)

  • 조준희;이기성
    • 대한전기학회:학술대회논문집 / 대한전기학회 2003년도 학술회의 논문집 정보 및 제어부문 B / pp.511-514 / 2003
  • Human beings use speech signals to exchange information. When background noise is present, speech recognizers suffer performance degradation. Speech recognition through speech enhancement in noisy environments was studied. A histogram method was introduced as a reliable noise-estimation approach for spectral subtraction, combined with MFCC features. The experimental results show the effectiveness of the proposed algorithm.
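
The enhancement scheme above, histogram-based noise estimation feeding spectral subtraction, can be illustrated as follows. The test signal, frame parameters, oversubtraction factor (1.5), and spectral floor (0.05) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, nfft, hop = 8000, 256, 128
t = np.arange(fs)                                  # 1 s of "speech": a tone burst
clean = np.where((t > 2000) & (t < 6000), np.sin(2 * np.pi * 440 * t / fs), 0.0)
noisy = clean + rng.normal(0, 0.3, fs)

# frame, window, and take the STFT magnitude/phase
frames = np.stack([noisy[i:i + nfft] * np.hanning(nfft)
                   for i in range(0, fs - nfft, hop)])
spec = np.fft.rfft(frames, axis=1)
mag, phase = np.abs(spec), np.angle(spec)

# histogram noise estimate: per-bin mode of the magnitude distribution,
# on the assumption that noise-only frames dominate each bin's histogram
noise = np.empty(mag.shape[1])
for b in range(mag.shape[1]):
    hist, edges = np.histogram(mag[:, b], bins=30)
    noise[b] = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])

# spectral subtraction with oversubtraction and a spectral floor
enhanced_mag = np.maximum(mag - 1.5 * noise, 0.05 * mag)
enhanced = np.fft.irfft(enhanced_mag * np.exp(1j * phase), n=nfft, axis=1)
```

In the paper's pipeline the enhanced spectra would then feed MFCC extraction for the recognizer; here the sketch stops at the enhanced frames.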


음성신호로 인한 잡음전달경로의 오조정을 감소시킨 적응잡음제거 알고리듬 (Adaptive noise cancellation algorithm reducing path misadjustment due to speech signal)

  • 박장식;김형순;김재호;손경식
    • 한국통신학회논문지 / Vol. 21, No. 5 / pp.1172-1179 / 1996
  • A general adaptive noise canceller (ANC) suffers from misadjustment of the adaptive filter weights because of gradient-estimation noise at steady state. In this paper, an adaptive noise cancellation algorithm is proposed with a speech detector that distinguishes speech from silence and adaptation-transient regions. The speech detector exploits the fact that an adaptive prediction-error filter can predict, and hence filter out, highly correlated speech. To detect speech regions, the estimation error at the output of the adaptive filter is fed to the adaptive prediction-error filter. When a speech signal appears at the input of the prediction-error filter, the ratio of its output energy to its input energy becomes relatively low; when white noise appears at the input, the ratio becomes large. Speech regions are therefore detected from this ratio. In speech regions, the sign algorithm is applied to keep the weights from being perturbed by the speech component of the ANC output. Computer simulations show that the proposed algorithm improves segmental SNR and SNR by up to about 4 dB and 11 dB, respectively.
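
The energy-ratio idea behind the speech detector can be sketched as follows. The normalized-LMS predictor, filter order, step size, threshold, and test signals are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

def prediction_error_ratio(x, order=8, mu=0.05):
    """Ratio of prediction-error energy to input energy for an adaptive
    (normalized-LMS) linear predictor. Low for correlated signals such as
    speech, near 1 for white noise."""
    w = np.zeros(order)
    err = np.zeros(len(x))
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]
        e = x[n] - w @ past                          # prediction error
        w += mu * e * past / (past @ past + 1e-8)    # NLMS weight update
        err[n] = e
    return np.sum(err ** 2) / np.sum(x ** 2)

speechlike = np.sin(2 * np.pi * 0.02 * np.arange(2000))   # highly correlated
white = rng.normal(0, 1, 2000)                            # unpredictable

r_speech = prediction_error_ratio(speechlike)
r_white = prediction_error_ratio(white)
is_speech = r_speech < 0.5        # the 0.5 threshold is an illustrative choice
```

A low ratio flags a speech region, where the canceller would switch to the sign algorithm for its weight updates.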


Detection and Synthesis of Transition Parts of The Speech Signal

  • Kim, Moo-Young
    • 한국통신학회논문지 / Vol. 33, No. 3C / pp.234-239 / 2008
  • For efficient coding and transmission, the speech signal can be classified into three distinctive classes: voiced, unvoiced, and transition. At low bit rates below 4 kbit/s, conventional sinusoidal transform coders synthesize high-quality speech for the purely voiced and unvoiced classes, but not for the transition class. The transition class, which includes plosive sounds and abrupt voiced onsets, lacks periodicity and is thus often classified and synthesized as unvoiced. In this paper, an efficient algorithm for transition-class detection is proposed that demonstrates superior detection performance not only for clean speech but also for noisy speech. For detected transition frames, phase information is transmitted instead of magnitude information for speech synthesis. Listening tests showed that the proposed algorithm produces better speech quality than the conventional one.

음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현 (Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command)

  • 심병균;한성현
    • 한국생산제조학회지 / Vol. 20, No. 2 / pp.207-213 / 2011
  • In this paper, we present a real-time implementation of a mobile robot controlled by interactive voice recognition. Speech commands are uttered as sentential connected words and transmitted through the wireless remote control system. We implement an automatic distant-speech command recognition system for interactive voice-enabled services. We first construct a baseline automatic speech command recognition system whose acoustic models are trained on utterances recorded through a close-talking microphone. To improve on this baseline, the acoustic models are then adapted to the spectral characteristics of different microphones and to the environmental mismatch between close-talking and distant speech. Experiments illustrate the performance of the developed system: the average recognition rate of the proposed system is about 95% or higher.

다학문적 접근법의 구개열 말-언어 관리 (Cleft Palate Speech - Language Management based on the Multidisciplinary Approach)

  • 양지형
    • 대한구순구개열학회지 / Vol. 8, No. 2 / pp.95-105 / 2005
  • Cleft lip and palate is a congenital deformity that requires professional and consistent management from birth and throughout the patient's physical growth. Even after successful primary surgical management, patients with cleft lip and palate can have speech problems involving resonance disorders, voice disorders, and articulation disorders. Cleft palate speech is characterized by hypernasality, nasal air emission, increased nasal airflow, and aberrant articulations that decrease intelligibility. These speech problems can be treated with secondary surgical procedures, the application of temporary prostheses, and effective, well-timed speech therapy. This paper introduces the speech and language problems of cleft lip and palate, and the general procedures and schedules of speech assessment and therapy based on a multidisciplinary approach, for patients with cleft lip and palate, their families, and the other members of the cleft palate treatment team.


The Optimum Fuzzy Vector Quantizer for Speech Synthesis

  • Lee, Jin-Rhee;Kim, Hyung-Seuk;Ko, Nam-kon;Lee, Kwang-Hyung
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 1993년도 Fifth International Fuzzy Systems Association World Congress 93 / pp.1321-1325 / 1993
  • This paper investigates the use of a fuzzy vector quantizer (FVQ) in speech synthesis. To compress speech data, we employ the K-means algorithm to design a codebook, and an FVQ technique is then used to analyze input speech vectors against the codebook in the analysis stage. In the FVQ synthesis stage, the analysis vectors generated by the FVQ analysis are used to synthesize the speech. We found that the synthesized speech quality depends on the fuzziness value of the FVQ, and that the optimum fuzziness value, which maximizes the SQNR of the synthesized speech, is related to the variance of the input speech vectors. The approach is tested on a sentence, and speech synthesized with a conventional VQ is compared against speech synthesized with an FVQ using the optimum fuzziness value.
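
The analysis-synthesis idea can be sketched as follows: a K-means codebook is designed, each input vector receives fuzzy memberships to all codewords, and the vector is reconstructed as the membership-weighted sum of codewords. The codebook size, fuzziness value m, and random data are illustrative assumptions, and the memberships follow the standard fuzzy c-means form:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 8))                 # stand-in "speech vectors"
K, m = 16, 1.5                                # codebook size, fuzziness value

# --- K-means codebook design ---
C = X[rng.choice(len(X), K, replace=False)].copy()
for _ in range(20):
    d = np.linalg.norm(X[:, None] - C[None], axis=2)   # (N, K) distances
    labels = d.argmin(axis=1)
    for k in range(K):
        if np.any(labels == k):
            C[k] = X[labels == k].mean(axis=0)

# --- fuzzy memberships (fuzzy c-means form) and reconstruction ---
d = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-9
u = d ** (-2 / (m - 1))
u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1 per vector
X_fvq = u @ C                                 # fuzzy (soft) reconstruction
X_vq = C[d.argmin(axis=1)]                    # conventional hard-VQ reconstruction

def sqnr(x, xr):
    """Signal-to-quantization-noise ratio in dB."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - xr) ** 2))
```

As m approaches 1 the memberships concentrate on the nearest codeword and FVQ degenerates into conventional VQ, which is why the fuzziness value is the tuning knob the paper optimizes.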


SiTEC의 STiLL관련 음성 코퍼스의 구축 현황 (Creation of Speech Corpora for STiLL at SiTEC)

  • 김영일;김봉완;최대림;이광현;정은순;이용주
    • 대한음성학회:학술대회논문집 / 대한음성학회 2005년도 추계 학술대회 발표논문집 / pp.13-16 / 2005
  • As language learning that utilizes speech and information processing technology becomes popular, the Speech Information Technology & Promotion Center (SiTEC) has created and is distributing speech corpora for STiLL in order to support basic research and product development. We introduce the Korean and English corpora that we have created and are distributing.
