• Title/Summary/Keyword: speech factors

Correlation analysis of linguistic factors in non-native Korean speech and proficiency evaluation (비원어민 한국어 말하기 숙련도 평가와 평가항목의 상관관계)

  • Yang, Seung Hee; Chung, Minhwa
    • Phonetics and Speech Sciences, v.9 no.3, pp.49-56, 2017
  • Much research attention has been directed toward identifying how native speakers perceive non-native speakers' oral proficiency. To investigate the generalizability of previous findings, this study examined segmental, phonological, accentual, and temporal correlates of native speakers' evaluation of L2 Korean proficiency produced by learners of various levels and nationalities. Our experimental results show that proficiency ratings by native speakers correlate significantly not only with speech rate but also with segmental accuracies. Segmental error rate shows the highest correlation with L2 Korean proficiency, a finding we further verified separately for substitution, deletion, and insertion error rates. Although phonological accuracy was expected to be highly correlated with the proficiency score, it was the least influential measure. Another finding of this study is that the role of pitch and accent has so far been underemphasized in studies of non-native Korean speech perception. This work will serve as the groundwork for the development of an automatic assessment module in a Korean CAPT system.
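
A minimal sketch of the kind of correlation analysis described above, assuming each speaker's temporal and segmental measures and mean native-rater proficiency scores have already been computed; the variable names and values below are hypothetical placeholders, not the authors' data.

```python
# Hypothetical: correlate per-speaker speech measures with native-rater proficiency.
import numpy as np
from scipy.stats import pearsonr

# Each position corresponds to one L2 Korean speaker (made-up values).
measures = {
    "speech_rate":        np.array([3.1, 4.2, 2.8, 3.9, 4.5]),   # syllables/sec
    "segmental_error":    np.array([0.21, 0.08, 0.30, 0.12, 0.05]),
    "phonological_error": np.array([0.15, 0.10, 0.22, 0.09, 0.07]),
}
proficiency = np.array([2.0, 4.0, 1.5, 3.5, 4.5])  # mean rating per speaker

for name, values in measures.items():
    r, p = pearsonr(values, proficiency)
    print(f"{name:>18}: r = {r:+.2f} (p = {p:.3f})")
```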

Speech Recognition of Multi-Syllable Words Using Soft Computing Techniques (소프트컴퓨팅 기법을 이용한 다음절 단어의 음성인식)

  • Lee, Jong-Soo; Yoon, Ji-Won
    • Transactions of the Society of Information Storage Systems, v.6 no.1, pp.18-24, 2010
  • The performance of speech recognition depends mainly on uncertain factors such as the speaker's condition and environmental effects. The present study deals with the recognition of a number of multi-syllable isolated Korean words using soft computing techniques such as a back-propagation neural network, a fuzzy inference system, and a fuzzy neural network. Feature patterns for recognition are 12th-order coefficients over thirty frames, normalized via linear predictive coding and cepstrum analysis. Using four speech recognizer models, experiments are conducted for both single speakers and multiple speakers. The recognizers combining fuzzy logic with a back-propagation neural network, as well as the fuzzy neural network, show better recognition performance.
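
A rough sketch of the feature layout described in the abstract, i.e., 12th-order LPC-derived cepstral coefficients over thirty time-normalized frames per word; the framing scheme and the use of librosa are assumptions for illustration, not the authors' implementation.

```python
# Illustrative only: 12th-order LPC cepstra over 30 time-normalized frames per word.
import numpy as np
import librosa

def lpc_to_cepstrum(a, n_ceps=12):
    """Convert LPC polynomial coefficients [1, a1, ..., ap] to cepstral coefficients."""
    p = len(a) - 1
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = -a[n] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc -= (k / n) * c[k - 1] * a[n - k]
        c[n - 1] = acc
    return c

def word_features(y, n_frames=30, order=12):
    """Split one isolated word into 30 equal chunks and compute 12 cepstra per chunk."""
    chunks = np.array_split(y.astype(np.float64), n_frames)
    feats = [lpc_to_cepstrum(librosa.lpc(x, order=order)) for x in chunks]
    return np.stack(feats)  # shape (30, 12); flattened, this is the recognizer input
```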

SPEECH SYNTHESIS USING LARGE SPEECH DATA-BASE

  • Lee, Kyu-Keon; Mochida, Takemi; Sakurai, Naohiro; Shirai, Katsuhiko
    • Proceedings of the Acoustical Society of Korea Conference, 1994.06a, pp.949-956, 1994
  • In this paper, we introduce a new speech synthesis method for arbitrary Japanese and Korean sentences using a natural speech database, and discuss the application of this method to a CAI system. In our synthesis method, a basic sentence and basic accent phrases are selected from the database for a target sentence. The factors for those selections are phrase dependency structure (separation degree), number of morae, accent type, and phonemic labels. The target pitch pattern and phonemic parameter series are generated using the selected basic units. Because the pitch pattern is generated from patterns directly extracted from real speech, it is expected to be more natural than patterns estimated by a model. So far, we have examined this method on Japanese sentence speech and confirmed that the synthetic sound preserves human-like features fairly well. We now extend the method to Korean sentence speech synthesis. Furthermore, we are trying to apply this synthesis unit to a CAI system.
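
To make the selection factors above concrete, here is a hypothetical sketch of scoring candidate accent-phrase units from a database against a target phrase by accent type, mora count, dependency separation degree, and phonemic labels; the class, cost function, and weights are illustrative assumptions, not the original system.

```python
# Hypothetical unit-selection scoring for accent phrases (illustrative only).
from dataclasses import dataclass

@dataclass
class AccentPhrase:
    phonemes: list[str]   # phonemic labels
    n_morae: int          # number of morae
    accent_type: int      # e.g., accent nucleus position (0 = unaccented)
    separation: int       # dependency "separation degree" in the sentence

def mismatch_cost(target: AccentPhrase, candidate: AccentPhrase) -> float:
    """Lower cost = better basic unit for the target phrase."""
    cost = 0.0
    cost += 3.0 * (target.accent_type != candidate.accent_type)   # accent type
    cost += 1.0 * abs(target.n_morae - candidate.n_morae)         # mora count
    cost += 0.5 * abs(target.separation - candidate.separation)   # dependency structure
    # phonemic label overlap (crude position-wise match)
    matches = sum(t == c for t, c in zip(target.phonemes, candidate.phonemes))
    cost += 2.0 * (1.0 - matches / max(len(target.phonemes), 1))
    return cost

def select_basic_unit(target, database):
    """Pick the database accent phrase with the lowest mismatch cost."""
    return min(database, key=lambda cand: mismatch_cost(target, cand))
```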

Comparisons of Utility of Various Speech Intelligibility Evaluations of Adults with Hearing Impairment (청각장애 성인의 말명료도 평가방법의 비교)

  • Do, Yeon-Ji; Kim, Soo-Jin
    • Speech Sciences, v.11 no.4, pp.173-184, 2004
  • This study discusses test methodologies that evaluate the speech intelligibility of hearing-impaired adults using various contexts. Seven adults with severe hearing loss participated in the experiment. The intelligibility materials consist of 77 pairs of one-syllable words with phonemic contrasts, 30 two-syllable words, and two sentence lists of 12 and 10 sentences each. Intelligibility scores across the various contexts were significantly correlated, and both the one-syllable words with phonemic contrasts and sentence list 1 showed higher correlations than the other tests. The one-syllable words with phonemic contrasts took longer to administer than the other tests and demanded more effort in selecting the word pairs. However, for identifying segmental difficulties, the one-syllable words with phonemic contrasts, which reflect the segmental factors contributing to intelligibility, were useful.
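
As a simple illustration of how such intelligibility scores can be computed before comparing contexts (a sketch under assumed scoring rules, not the study's actual protocol): word-level tests are scored as the percent of items identified, and sentence lists as the percent of key words transcribed correctly.

```python
# Hypothetical intelligibility scoring for word-level and sentence-level materials.
def word_score(responses, targets):
    """Percent of target words a listener identified correctly."""
    correct = sum(r == t for r, t in zip(responses, targets))
    return 100.0 * correct / len(targets)

def sentence_score(transcripts, key_words_per_sentence):
    """Percent of key words correctly transcribed across a sentence list."""
    hit = total = 0
    for transcript, key_words in zip(transcripts, key_words_per_sentence):
        words = set(transcript.split())
        hit += sum(w in words for w in key_words)
        total += len(key_words)
    return 100.0 * hit / total

# e.g., word_score(["발", "물", "손"], ["발", "불", "손"]) -> 66.7
```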

The Factors Affecting Job Satisfaction in Speech-Language Pathologists

  • Moon, Kyung-Im; Cho, In-Sook; Park, Woong-Sik
    • Journal of the Korea Society of Computer and Information, v.24 no.11, pp.263-270, 2019
  • The purpose of this study was to investigate the factors affecting job satisfaction in speech-language pathologists and to identify the relationships among job satisfaction, self-efficacy, and job stress. The participants were 145 speech-language pathologists. The results are as follows. The mean score for job satisfaction was 3.62. Job satisfaction had a positive correlation with self-efficacy and a negative correlation with job stress. The factors influencing job satisfaction were self-efficacy and job stress, which together explained 46.8% of the variance in job satisfaction. Conclusion: this study is expected to serve as basic data for improving job satisfaction by increasing self-efficacy, decreasing job stress, and developing intervention programs.
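
A small sketch of the kind of multiple regression behind the 46.8%-of-variance figure, using simulated survey data; the two predictors follow the paper's factors, but the numbers themselves are invented.

```python
# Hypothetical multiple regression: job satisfaction ~ self-efficacy + job stress.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 145                                   # sample size reported in the paper
self_efficacy = rng.normal(3.5, 0.5, n)
job_stress    = rng.normal(3.0, 0.6, n)
satisfaction  = 2.0 + 0.6 * self_efficacy - 0.4 * job_stress + rng.normal(0, 0.4, n)

X = sm.add_constant(np.column_stack([self_efficacy, job_stress]))
model = sm.OLS(satisfaction, X).fit()
print(model.rsquared)   # proportion of variance explained (cf. 46.8% in the study)
print(model.params)     # intercept and coefficients
```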

Robust Distributed Speech Recognition under noise environment using MESS and EH-VAD (멀티밴드 스펙트럼 차감법과 엔트로피 하모닉을 이용한 잡음환경에 강인한 분산음성인식)

  • Choi, Gab-Keun; Kim, Soon-Hyob
    • Journal of the Institute of Electronics Engineers of Korea CI, v.48 no.1, pp.101-107, 2011
  • Background noise and channel distortion are major factors that hinder the practical use of speech recognition. Noise usually reduces the performance of speech recognition systems, and DSR (Distributed Speech Recognition)-based recognition has difficulty improving performance for the same reason. Therefore, to improve DSR-based speech recognition in noisy environments, this paper proposes a method that detects speech regions accurately in order to extract accurate features. The proposed method distinguishes speech from noise using entropy and detection of the spectral energy of speech. Speech detection based on spectral energy performs well at relatively high SNR (15 dB), but when the noise environment varies, the threshold between speech and noise also varies, and detection performance degrades at low SNR (0 dB). The proposed method therefore uses the spectral entropy and harmonics of speech for better speech detection, and the performance of the AFE is also improved by precise speech detection. Experimental results show that the proposed method achieves better recognition performance in noisy environments.
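
A minimal sketch of entropy-based speech detection in the spirit of the method described above: frames whose normalized spectral entropy is low (a peaky, harmonic-like spectrum) are marked as speech. This is an illustrative baseline only, not the paper's full multi-band spectral subtraction plus entropy-harmonic (MESS + EH-VAD) front end.

```python
# Illustrative frame-level VAD using spectral entropy (low entropy ~ speech-like spectrum).
import numpy as np

def spectral_entropy_vad(signal, sr, frame_ms=25, hop_ms=10, threshold=0.85):
    """Return a boolean speech/non-speech decision per frame."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    decisions = []
    for start in range(0, len(signal) - frame + 1, hop):
        x = signal[start:start + frame] * np.hamming(frame)
        spec = np.abs(np.fft.rfft(x)) ** 2
        p = spec / (spec.sum() + 1e-12)              # normalized spectrum as a pmf
        entropy = -np.sum(p * np.log2(p + 1e-12))
        entropy /= np.log2(len(p))                   # normalize to [0, 1]
        decisions.append(entropy < threshold)        # speech: peaky spectrum, low entropy
    return np.array(decisions)
```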

Acoustic Variation Conditioned by Prosody in English Motherese

  • Choi, Han-Sook
    • Phonetics and Speech Sciences, v.2 no.1, pp.41-50, 2010
  • The current study explores acoustic variation induced by prosodic contexts in different speech styles, with a focus on motherese or child-directed speech (CDS). The patterns of variation in the acoustic expression of the voicing contrast in English stops, and the role of prosodic factors in governing such variation, are investigated in CDS. Prosody-induced acoustic strengthening reported for adult-directed speech (ADS) is examined in speech data directed to infants at the one-word stage. The target consonants are collected from utterance-initial and -medial positions, with or without focal accent. Overall, CDS shows that the prosodic prominence of constituents under focal accent conditions varies in the acoustic correlates of the stop laryngeal contrasts. The initial position is not found to have enhanced acoustic values in the current study, which is similar to findings from ADS (Choi, 2006; Cole et al., 2007). Individualized statistical results, however, indicate that the effect of accent on the acoustic measures is not very robust compared to the effect of accent in ADS. Enhanced distinctiveness under focal accent is observed in only a limited number of subjects' acoustic measures in CDS. The results indicate dissimilar strategies for marking prosodic structure in different speech styles, as well as a consistent prosodic effect across styles. The stylistic variation is discussed in relation to the listener undergoing linguistic development in CDS.

Overlapping of /o/ and /u/ in modern Seoul Korean: focusing on speech rate in read speech

  • Igeta, Takako; Hiroya, Sadao; Arai, Takayuki
    • Phonetics and Speech Sciences, v.9 no.1, pp.1-7, 2017
  • Previous studies have reported overlapping F1 and F2 distributions for the vowels /o/ and /u/ produced by young Korean speakers of the Seoul dialect, and it has been suggested that this overlapping occurs due to sound change. However, few studies have examined whether speech rate influences the overlapping of /o/ and /u/. Previous studies have also reported that the overlapping of /o/ and /u/ in syllables produced by male speakers is smaller than in those produced by female speakers, but few reports have investigated the overlapping of the two vowels in read speech produced by male speakers. In the current study, we examined whether speech rate affects the overlapping of /o/ and /u/ in read speech by male and female speakers. Read speech produced by twelve young adult native speakers of the Seoul dialect was recorded at three speech rates. For female speakers, discriminant analysis showed that the discriminant rate became lower as the speech rate increased from slow to fast, indicating that speech rate is one of the factors affecting the overlapping of /o/ and /u/. For male speakers, on the other hand, the discriminant rate was not correlated with speech rate, but the overlapping was larger than that of female speakers in read speech. Moreover, read speech by male speakers was less clear than that by female speakers, which indicates that the overlapping may be related to unclear speech arising from sociolinguistic factors for male speakers.
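
A hypothetical sketch of the discriminant analysis used above: classify /o/ and /u/ tokens from their F1/F2 values and report the discriminant (correct-classification) rate, where a lower rate indicates greater overlap. The formant values below are placeholders, not the study's measurements.

```python
# Hypothetical: discriminant rate for /o/ vs. /u/ from F1/F2 (Hz).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Made-up formant measurements: columns are F1, F2.
tokens_o = np.array([[420, 780], [440, 810], [430, 760], [450, 820]])
tokens_u = np.array([[380, 740], [370, 700], [390, 770], [360, 690]])

X = np.vstack([tokens_o, tokens_u]).astype(float)
y = np.array(["o"] * len(tokens_o) + ["u"] * len(tokens_u))

lda = LinearDiscriminantAnalysis()
rate = lda.fit(X, y).score(X, y)         # proportion of tokens classified correctly
print(f"discriminant rate: {rate:.0%}")  # lower rate = more /o/-/u/ overlap
```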

Affixation effects on word-final coda deletion in spontaneous Seoul Korean speech

  • Kim, Jungsun
    • Phonetics and Speech Sciences, v.8 no.4, pp.9-14, 2016
  • This study investigated the patterns of coda deletion in spontaneous Seoul Korean speech. More specifically, it focused on three factors promoting coda deletion: word position, consonant type, and morpheme type. The results revealed that, first, coda deletion frequently occurred when affixes were attached to the ends of words, rather than in affixes in word-internal positions or in roots. Second, the alveolar consonants [n] and [l] in the coda positions of the high-frequency affixes [nɨn] and [lɨl] were most likely to be deleted. Additionally, regarding affix reduction in word-final position, all subjects seemed to depend on this articulatory strategy to a similar degree. In sum, the current study found that affixes without primary semantic content tend to undergo reduction in spontaneous speech, favoring the occurrence of specific pronunciation variants.
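
A small sketch of the kind of factor-based tally implied above: deletion rates of codas grouped by word position, consonant type, and morpheme type. The rows and column names are hypothetical, not the corpus coding scheme.

```python
# Hypothetical corpus tally: coda deletion rate by conditioning factor.
import pandas as pd

tokens = pd.DataFrame([
    # word_position, consonant, morpheme_type, deleted (1 = coda deleted)
    ("word-final",    "n", "affix", 1),
    ("word-final",    "l", "affix", 1),
    ("word-internal", "n", "affix", 0),
    ("word-final",    "k", "root",  0),
    ("word-final",    "n", "root",  0),
], columns=["word_position", "consonant", "morpheme_type", "deleted"])

for factor in ["word_position", "consonant", "morpheme_type"]:
    print(tokens.groupby(factor)["deleted"].mean().rename("deletion_rate"), "\n")
```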

Review And Challenges In Speech Recognition (ICCAS 2005)

  • Ahmed, M. Masroor; Ahmed, Abdul Manan Bin
    • Institute of Control, Robotics and Systems Conference Proceedings, 2005.06a, pp.1705-1709, 2005
  • This paper reviews the state of and challenges in speech recognition, taking into account different classes of recognition mode. The recognition mode can be either speaker-independent or speaker-dependent. Vocabulary size and input mode are two crucial factors for a speech recognizer: the input mode refers to continuous or isolated speech recognition, and the vocabulary size can be small (fewer than a hundred words) or large (a few thousand words), varying according to system design and objectives [2]. The paper is organized as follows: it first covers fundamental methods of speech recognition, then examines deficiencies in existing systems, and finally discusses probable application areas.
