• Title/Summary/Keyword: Korean speech

Search Results: 5,286

XML Based Meta-data Specification for Industrial Speech Databases (산업용 음성 DB를 위한 XML 기반 메타데이터)

  • Joo Young-Hee;Hong Ki-Hyung
    • MALSORI
    • /
    • v.55
    • /
    • pp.77-91
    • /
    • 2005
  • In this paper, we propose an XML-based meta-data specification for industrial speech databases. Building speech databases is very time-consuming and expensive. Recently, with government support, a huge amount of speech corpora has been collected into speech databases. However, the formats and meta-data of these databases differ depending on the constructing institution. To advance the reusability and portability of speech databases, a standard representation scheme should be adopted by all speech database construction institutions. ETRI proposed an XML-based annotation scheme [5] for speech databases, but that scheme has an overly simple and flat modeling structure and may cause duplicated information. To overcome these disadvantages of the previous scheme, we first define the speech database more formally and then identify the objects appearing in speech databases. We then design the data model for speech databases in an object-oriented way. Based on the designed data model, we develop the meta-data specification for industrial speech databases.

  • PDF
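As a rough illustration of what such an XML meta-data record might look like, the sketch below builds a hypothetical entry with Python's standard library. All element and attribute names here are invented for illustration; they are not the specification proposed in the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata record for one recording session; every element
# and attribute name below is illustrative, not the paper's actual schema.
db = ET.Element("SpeechDB", id="ksc-telephone-01")
session = ET.SubElement(db, "Session")
ET.SubElement(session, "Speaker", sex="F", age="34", dialect="Seoul")
rec = ET.SubElement(session, "Recording", channel="telephone", rate="8000")
ET.SubElement(rec, "Transcription").text = "안녕하세요"

xml_text = ET.tostring(db, encoding="unicode")
print(xml_text)
```

An object-oriented data model of this kind lets speaker, session, and recording objects be nested rather than flattened, which is the duplication problem the paper attributes to the earlier flat scheme.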

The Change of Acceptability for the Mild Dysarthric Speakers' Speech due to Speech Rate and Loudness Manipulation (말속도와 강도 변조에 따른 경도 마비말장애 환자의 말 용인도 변화)

  • Kim, Jiyoun;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.47-55
    • /
    • 2015
  • This study examined whether speech acceptability changed under various conditions of prosodic manipulation. Both speech rate and voice loudness are reportedly associated with acceptability and intelligibility. Speech samples from twelve speakers with mild dysarthria were recorded, and speech rate and loudness changes were made by digitally manipulating habitual sentences. Three loudness levels (70, 75, & 80 dB) and four speech rates (normal, 20% faster, 20% slower, & 40% slower) were presented to 12 SLPs (speech-language pathologists), who evaluated sentence acceptability on a 7-point Likert scale. Repeated-measures ANOVAs were conducted to determine whether the prosodic type of the resynthesized cue resulted in a significant change in speech acceptability. A faster speech rate (20% faster), rather than the habitual and slower rates (20%, 40% slower), resulted in a significant improvement in acceptability ratings (p < .001). Increased vocal loudness (up to 80 dB) also resulted in a significant improvement in acceptability ratings (p < .05). Changes to the speech rate and loudness properties of speech may therefore contribute to improved acceptability.

Variables for Predicting Speech Acceptability of Children with Cochlear Implants (인공와우이식 아동 말용인도의 예측 변인)

  • Yoon, Mi Sun
    • Phonetics and Speech Sciences
    • /
    • v.6 no.4
    • /
    • pp.171-179
    • /
    • 2014
  • Purposes: Speech acceptability refers to listeners' subjective judgement of the naturalness and normality of speech. The purpose of this study was to determine the variables predicting the speech acceptability of children with cochlear implants. Methods: Twenty-seven children with CIs participated. They had profound pre-lingual hearing loss without any additional disabilities. The mean chronological age was 8;9, and the mean age at implantation was 2;11. Speech samples of reading and spontaneous speech were recorded separately. Twenty college students who were not familiar with the speech of deaf children evaluated speech acceptability using a visual analog scale. One segmental feature (articulation) and six suprasegmental features (pitch, loudness, quality, resonance, intonation, and speaking rate) were perceptually evaluated by three SLPs. Correlation and multiple regression analyses were performed to identify the predicting variables. Results: The mean speech acceptability for reading and spontaneous speech was 73.47 and 71.96, respectively. Speech acceptability of reading was predicted by the severity of intonation and articulation, while speech acceptability of spontaneous speech was predicted by the severity of intonation and loudness. Discussion and conclusion: Severity of intonation was the most effective variable for predicting speech acceptability in both reading and spontaneous speech. Further study would be necessary to generalize the results and apply them to intervention in clinical settings.

How Korean Learner's English Proficiency Level Affects English Speech Production Variations

  • Hong, Hye-Jin;Kim, Sun-Hee;Chung, Min-Hwa
    • Phonetics and Speech Sciences
    • /
    • v.3 no.3
    • /
    • pp.115-121
    • /
    • 2011
  • This paper examines how L2 speech production varies according to learners' L2 proficiency level. L2 speech production variations are analyzed by quantitative measures at the word and phone levels using a Korean learners' English corpus. Word-level variations are analyzed using correctness to explain how speech realizations differ from the canonical forms, while accuracy is used at the phone level to reflect phone insertions and deletions together with substitutions. The results show that the speech production of learners with different L2 proficiency levels differs considerably in terms of performance and individual realizations at the word and phone levels. These results confirm that the speech production of non-native speakers varies according to their L2 proficiency level, even when they share the same L1 background. Furthermore, the results will contribute to improving the non-native speech recognition performance of ASR-based English language educational systems for Korean learners of English.

  • PDF
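The correctness and accuracy measures used above follow the usual recognition-scoring convention (as in HTK's HResults): correctness ignores insertions, while accuracy penalizes them. A minimal sketch with invented error counts:

```python
def correctness(n_ref, subs, dels):
    # word correctness ignores insertions: (N - S - D) / N
    return (n_ref - subs - dels) / n_ref

def accuracy(n_ref, subs, dels, ins):
    # phone accuracy also penalizes insertions: (N - S - D - I) / N
    return (n_ref - subs - dels - ins) / n_ref

# e.g., 100 reference phones with 8 substitutions, 3 deletions, 4 insertions
print(correctness(100, 8, 3))     # 0.89
print(accuracy(100, 8, 3, 4))     # 0.85
```

Because accuracy subtracts insertions as well, it can fall below correctness (and even below zero for very noisy hypotheses), which is why it is the stricter of the two measures for phone-level analysis.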

Overlapping of /o/ and /u/ in modern Seoul Korean: focusing on speech rate in read speech

  • Igeta, Takako;Hiroya, Sadao;Arai, Takayuki
    • Phonetics and Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.1-7
    • /
    • 2017
  • Previous studies have reported on the overlapping $F_1$ and $F_2$ distributions of the vowels /o/ and /u/ produced by young Korean speakers of the Seoul dialect, and it has been suggested that this overlap is due to sound change. However, few studies have examined whether speech rate influences the overlapping of /o/ and /u/. Previous studies have also reported that the overlap of /o/ and /u/ in syllables produced by male speakers is smaller than that of female speakers, but few reports have investigated the overlap of the two vowels in read speech produced by male speakers. In the current study, we examined whether speech rate affects the overlapping of /o/ and /u/ in read speech by male and female speakers. Read speech produced by twelve young adult native speakers of the Seoul dialect was recorded at three speech rates. For female speakers, discriminant analysis showed that the discriminant rate became lower as the speech rate increased from slow to fast, indicating that speech rate is one of the factors affecting the overlapping of /o/ and /u/. For male speakers, on the other hand, the discriminant rate was not correlated with speech rate, but the overlap was larger than that of female speakers in read speech. Moreover, read speech by male speakers was less clear than that by female speakers, which suggests that the overlap may be related to unclear speech produced for sociolinguistic reasons by male speakers.
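The discriminant rate reported above can be illustrated with a deliberately simplified stand-in: a nearest-centroid classifier over invented (F1, F2) tokens, whose classification rate plays the role of the discriminant-analysis rate. The formant values are made up for illustration and do not reproduce the study's data or its actual discriminant analysis.

```python
import math

# Hypothetical (F1, F2) tokens in Hz; values invented for illustration only.
o_tokens = [(450, 800), (470, 850), (440, 780), (480, 900)]
u_tokens = [(350, 700), (360, 760), (340, 680), (370, 820)]

def centroid(tokens):
    """Mean (F1, F2) point of a set of vowel tokens."""
    return tuple(sum(v) / len(tokens) for v in zip(*tokens))

def discriminant_rate(a, b):
    """Fraction of tokens closer to their own category centroid than to
    the other category's centroid; a crude proxy for the discriminant
    rate, so heavily overlapping categories score near chance."""
    ca, cb = centroid(a), centroid(b)
    correct = 0
    for t in a:
        correct += math.dist(t, ca) < math.dist(t, cb)
    for t in b:
        correct += math.dist(t, cb) < math.dist(t, ca)
    return correct / (len(a) + len(b))

print(discriminant_rate(o_tokens, u_tokens))
```

With well-separated invented clusters like these the rate is 1.0; as the two vowel clouds merge, more tokens fall closer to the wrong centroid and the rate drops, which is the pattern the study reports for faster female speech.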

Common Speech Database Collection and Validation for Communications (한국어 공통 음성 DB구축 및 오류 검증)

  • Lee Soo-jong;Kim Sanghun;Lee Youngjik
    • MALSORI
    • /
    • no.46
    • /
    • pp.145-157
    • /
    • 2003
  • In this paper, we briefly introduce the Korean common speech database project, which was started in 2002 to construct a large-scale speech database. The project aims to support the R&D environment for speech technology in industry, encouraging domestic speech companies and stimulating the domestic speech technology market. In the first year, the resulting common speech database consisted of 25 kinds of databases covering various recording conditions such as telephone, PC, and VoIP. The speech database will be widely used for speech recognition, speech synthesis, and speaker identification. On the other hand, although the database was originally corrected manually, it still retains unknown errors and human errors. In order to minimize the errors in the database, we tried to find errors based on recognition errors and to classify them into several kinds. To be more effective than typical recognition techniques, we will develop an automatic error detection method. In the future, we will construct new databases reflecting the needs of companies and universities.

  • PDF

Students' Perception of Linked or Clear English Speech (대학생의 연음 또는 비연음 영문 지각)

  • Hwang, Sun-Yi;Yang, Byung-Gon
    • Speech Sciences
    • /
    • v.13 no.3
    • /
    • pp.107-117
    • /
    • 2006
  • This study examined how well Korean undergraduate students perceived linked or clear English speech and attempted to find areas of difficulty in their English listening caused by phonological variations. Thirty-nine undergraduate students participated in listening sessions. They were divided into high and low groups by their TOEIC listening scores. Samples of linked speech included such phonological processes as linking, palatalization, flapping, and deletion. Results showed that the students had more problems perceiving linked speech than perceiving clear speech. Secondly, both the higher and lower groups scored low on the linked speech, and the lower group showed a larger score difference between linked and clear speech. Thirdly, the students' scores increased in order from speech with flapping, through deletion and palatalization, to linking. Finally, there was a strong positive correlation between the TOEIC listening scores and the perception scores. Further studies would be desirable on how much students' TOEIC scores improve when their listening ability is trained using linked speech.

  • PDF

Digital enhancement of pronunciation assessment: Automated speech recognition and human raters

  • Miran Kim
    • Phonetics and Speech Sciences
    • /
    • v.15 no.2
    • /
    • pp.13-20
    • /
    • 2023
  • This study explores the potential of automated speech recognition (ASR) in assessing English learners' pronunciation. We employed ASR technology, acknowledged for its impartiality and consistent results, to analyze speech audio files, including synthesized speech, both native-like English and Korean-accented English, and speech recordings from a native English speaker. Through this analysis, we establish baseline values for the word error rate (WER). These were then compared with those obtained for human raters in perception experiments that assessed the speech productions of 30 first-year college students before and after taking a pronunciation course. Our sub-group analyses revealed positive training effects for Whisper, an ASR tool, and human raters, and identified distinct human rater strategies in different assessment aspects, such as proficiency, intelligibility, accuracy, and comprehensibility, that were not observed in ASR. Despite such challenges as recognizing accented speech traits, our findings suggest that digital tools such as ASR can streamline the pronunciation assessment process. With ongoing advancements in ASR technology, its potential as not only an assessment aid but also a self-directed learning tool for pronunciation feedback merits further exploration.
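The word error rate (WER) used as the baseline measure above is conventionally the word-level Levenshtein (edit) distance divided by the reference length. A minimal textbook sketch follows; it is a generic implementation, not the study's Whisper-based pipeline.

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance between a
    reference and a hypothesis transcript, divided by reference length."""
    r, h = ref.split(), hyp.split()
    # single-row dynamic programming over the edit-distance table
    prev = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, hw in enumerate(h, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (rw != hw)))   # substitution/match
        prev = cur
    return prev[-1] / len(r)

# one substitution ("sat"->"sit") and one deletion ("the"): 2 errors / 6 words
print(wer("the cat sat on the mat", "the cat sit on mat"))
```

Note that because insertions count as errors while the denominator stays fixed at the reference length, WER can exceed 100% for very noisy hypotheses.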

Intonation Patterns of Korean Spontaneous Speech (한국어 자유 발화 음성의 억양 패턴)

  • Kim, Sun-Hee
    • Phonetics and Speech Sciences
    • /
    • v.1 no.4
    • /
    • pp.85-94
    • /
    • 2009
  • This paper investigates the intonation patterns of Korean spontaneous speech through an analysis of four dialogues in the domain of travel planning. The speech corpus, a subset of a spontaneous speech database recorded and distributed by ETRI, is labeled with APs and IPs based on the K-ToBI system using Momel, an intonation stylization algorithm. It was found that, unlike in English, a significant number of APs and IPs include hesitation lengthening, which is known to be a disfluency phenomenon due to speech planning. This paper also claims that hesitation lengthening differs from IP-final lengthening and should be treated as a new category, as it greatly affects the intonation patterns of the language. Apart from the fact that 19.09% of APs show hesitation lengthening, the spontaneous speech shows the same AP patterns as read speech, but with a higher frequency of falling patterns such as LHL, whereas read speech shows more LH and LHLH patterns. The IP boundary tones of spontaneous speech show the same five patterns as read speech (L%, HL%, LHL%, H%, and LH%), but with a higher frequency of rising patterns (H%, LH%) and contour tones (HL%, LH%, LHL%), whereas read speech shows a higher frequency of falling patterns and simple tones at the end of IPs.

  • PDF

Google speech recognition of an English paragraph produced by college students in clear or casual speech styles (대학생들이 또렷한 음성과 대화체로 발화한 영어문단의 구글음성인식)

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.9 no.4
    • /
    • pp.43-50
    • /
    • 2017
  • These days, the voice models of speech recognition software are sophisticated enough to process natural speech without any previous training. However, little research has been reported on the use of speech recognition tools in the field of pronunciation education. This paper examined Google speech recognition of a short English paragraph produced by Korean college students in clear and casual speech styles in order to diagnose and resolve students' pronunciation problems. Thirty-three Korean college students participated in the recording of the English paragraph. The Google soundwriter was employed to collect data on the word recognition rates of the paragraph. Results showed that the total word recognition rate was 73% with a standard deviation of 11.5%. The word recognition rate of clear speech was around 77.3%, while that of casual speech amounted to 68.7%. The low recognition rate of casual speech was attributed both to individual pronunciation errors and to the software itself, as shown in its fricative recognition. Various distributions of unrecognized words were observed depending on the participant and proficiency group. From these results, the author concludes that speech recognition software is useful for diagnosing an individual's or a group's pronunciation problems. Further studies on progressive improvement of learners' erroneous pronunciations would be desirable.