• Title/Summary/Keyword: Transcriptions

A Study on Functions and Transcriptions of Anchogongs in Yeonggeonuigwes of Late Joseon Period (조선 후기 영건의궤에 실린 안초공의 기능과 표기법 연구)

  • Lee, Woo-Jong
    • Journal of architectural history
    • /
    • v.27 no.4
    • /
    • pp.7-16
    • /
    • 2018
  • This study focuses on anchogongs (按草工) in yeonggeonuigwes (營建儀軌), which were recorded with few details and in unsettled transcriptions. First, the positions and functions of anchogongs in the 18th century are analyzed by comparison with anchogongs in the more detailed early 19th-century yeonggeonuigwes and with those in extant buildings. Second, based on this result, the historical significance of the changing transcriptions of anchogong terms in those uigwes is inferred. In the 18th-century uigwes, most anchogongs functioned as matbo-anchogongs, and only four anchogongs in a gate building were used as jongryang-anchogongs, mainly because the 18th-century yeonggeonuigwes covered only a few building types, most of them royal shrines. The transcriptions of anchogong terms were changed to reflect the functional development of anchogongs during the 18th century, but these changes in transcription lagged well behind the changes in actual function.

Voice Dialing system using Stochastic Matching (확률적 매칭을 사용한 음성 다이얼링 시스템)

  • 김원구
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.04a
    • /
    • pp.515-518
    • /
    • 2004
  • This paper presents a method that improves the performance of a personal voice dialing system in which speaker-independent phoneme HMMs are used. Since the speaker-independent phoneme HMM based voice dialing system stores only the phone transcription of the input sentence, the storage space can be reduced greatly. However, the performance of the system is worse than that of a system using speaker-dependent models, because of the phone recognition errors generated when speaker-independent models are used. To solve this problem, a new method is presented that jointly estimates transformation vectors for speaker adaptation and the transcriptions from training utterances. The biases and transcriptions are estimated iteratively from each user's training data with a maximum likelihood approach to stochastic matching using speaker-independent phone models. Experimental results show that the proposed method is superior to the conventional method, which uses transcriptions only.
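
As a rough illustration of the joint bias/transcription estimation described in this abstract, the sketch below reduces the speaker-independent phoneme HMMs to per-phone Gaussian means and a single global bias vector; the data, model, and function names are all assumptions for illustration, not the authors' implementation.

```python
# Simplified sketch of maximum-likelihood stochastic matching: alternately
# (1) decode a phone-level transcription with the current bias and
# (2) re-estimate a global bias vector from the decoded alignment.
# The per-phone "models" are plain Gaussian means with identity covariance,
# a toy stand-in for speaker-independent phoneme HMMs.
import numpy as np

def decode(frames, phone_means, bias):
    """Assign each frame to the closest phone mean after bias compensation."""
    comp = frames - bias                                   # remove the current speaker/channel bias
    dists = ((comp[:, None, :] - phone_means[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)                            # frame-level 'transcription'

def estimate_bias(frames, phone_means, labels):
    """ML estimate of a single bias vector given the current alignment."""
    return (frames - phone_means[labels]).mean(axis=0)

def stochastic_matching(frames, phone_means, n_iter=5):
    bias = np.zeros(frames.shape[1])
    for _ in range(n_iter):                                # joint, iterative re-estimation
        labels = decode(frames, phone_means, bias)
        bias = estimate_bias(frames, phone_means, labels)
    return labels, bias

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phone_means = rng.normal(size=(10, 13))                # 10 toy 'phones', 13-dim features
    true_labels = rng.integers(0, 10, size=200)
    frames = phone_means[true_labels] + 0.8 + 0.1 * rng.normal(size=(200, 13))
    labels, bias = stochastic_matching(frames, phone_means)
    print("estimated bias (first 3 dims):", np.round(bias[:3], 2))
```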

Phonology of Transcription (음운표기의 음운론)

  • Chung, Kook
    • Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.23-40
    • /
    • 2003
  • This paper examines the transcription of sounds from a phonological perspective. It finds that most transcriptions have been done on a segmental basis alone, without consideration of the whole phonological system and its levels, and without a full understanding of the nature of linguistic and phonetic alphabets. In a word, sound transcriptions have not been based on the phonology of the language and of the alphabet. This study presents a phonological model for transcribing foreign and native sounds, suggesting ways of improving some current transcription systems, such as the Hangeul transcription of loanwords and the romanization of Hangeul, as well as the phonetic transcription of English and other foreign languages.

Reduction and Frequency Analyses of Vowels and Consonants in the Buckeye Speech Corpus

  • Yang, Byung-Gon
    • Phonetics and Speech Sciences
    • /
    • v.4 no.3
    • /
    • pp.75-83
    • /
    • 2012
  • The aims of this study were threefold: first, to examine the degree of deviation between dictionary-prescribed symbols and the actual speech of American English speakers; second, to measure the frequency of vowel and consonant production by American English speakers; and third, to investigate gender differences in the segmental sounds of a speech corpus. The Buckeye Speech Corpus was recorded by forty American male and female subjects, one hour per subject. The vowels and consonants in both the phonemic and the phonetic transcriptions were extracted from the original corpus files, and their frequencies were obtained using code written in the free software R. The results were as follows. First, the American English speakers produced a reduced number of vowels and consonants in daily conversation; the reduction rate from the dictionary transcriptions to the actual transcriptions was around 38.2%. Second, the speakers used more front high and back low vowels, while stops, fricatives, and nasals accounted for three-fourths of the consonants, indicating that the segmental inventory has a nonlinear frequency distribution in the speech corpus. Third, the two gender groups produced vowels and consonants similarly, although there were a few noticeable differences in their speech. From these results we propose that English teachers consider pronunciation education that reflects actual speech sounds and that linguists find a way to establish unmarked segmentals from speech corpora.
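
The counting step described in this abstract is straightforward to reproduce in outline; the following is a minimal sketch in Python rather than the author's R code, with invented toy transcriptions standing in for the Buckeye label files.

```python
# Toy illustration of the frequency/reduction analysis: compare the number of
# segments in dictionary (phonemic) transcriptions with those actually realised
# (phonetic transcriptions), and tabulate segment frequencies.
from collections import Counter

# Hypothetical word entries: (dictionary transcription, observed transcription)
words = [
    (["p", "r", "aa", "b", "ax", "b", "l", "iy"], ["p", "r", "aa", "b", "l", "iy"]),
    (["ae", "n", "d"],                            ["ae", "n"]),
    (["g", "ow", "ih", "ng"],                     ["g", "ow", "n"]),
]

dict_segments = [seg for phonemic, _ in words for seg in phonemic]
real_segments = [seg for _, phonetic in words for seg in phonetic]

reduction = 1 - len(real_segments) / len(dict_segments)
print(f"reduction rate: {reduction:.1%}")          # share of dictionary segments not realised

print("most frequent realised segments:", Counter(real_segments).most_common(3))
```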

Three-Stage Framework for Unsupervised Acoustic Modeling Using Untranscribed Spoken Content

  • Zgank, Andrej
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.810-818
    • /
    • 2010
  • This paper presents a new framework for integrating untranscribed spoken content into the acoustic training of an automatic speech recognition system. Untranscribed spoken content plays a very important role for under-resourced languages because the production of manually transcribed speech databases still represents a very expensive and time-consuming task. We propose two new methods as part of the training framework. The first method focuses on combining initial acoustic models using a data-driven metric. The second method proposes an improved acoustic training procedure based on unsupervised transcriptions, in which word endings are modified using broad phonetic classes. The training framework was applied to baseline acoustic models using untranscribed spoken content from parliamentary debates. We include three types of acoustic models in the evaluation: baseline, reference content, and framework content models. The best overall result of 18.02% word error rate was achieved with the third type. This result demonstrates statistically significant improvement over the baseline and reference acoustic models.
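
The word error rate quoted in this abstract is the standard edit-distance measure over word sequences; as a point of reference, a minimal implementation (not tied to the paper's evaluation setup) looks like this.

```python
# Word error rate (WER): the Levenshtein distance between reference and
# hypothesis word sequences, divided by the reference length.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat mat"))  # 0.333...
```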

Considering Dynamic Non-Segmental Phonetics

  • Fujino, Yoshinari
    • Proceedings of the KSPS conference
    • /
    • 2000.07a
    • /
    • pp.312-320
    • /
    • 2000
  • This presentation aims to explore some possibilities of non-segmental phonetics that are usually ignored in phonetics education. In pedagogical phonetics, especially ESL/EFL-oriented phonetics, speech sounds tend to be classified according to two criteria: 1) 'pronunciation', which deals with segments, and 2) 'prosody' or 'suprasegmentals', which deals with non-segmental elements such as stress and intonation. However, speech involves more dynamic processing: it is non-linear and multi-dimensional, in spite of the linear sequence of symbols in phonetic/phonological transcriptions. No word is without pitch or voice quality apart from its segmental characteristics, whether it is spoken in isolation or cut out from continuous speech. This simply shows that the dichotomy of pronunciation and prosody is merely a useful convention, and that there is room to consider dynamic non-segmental phonetics. Examples of non-segmental phonetic investigation, some of them analyses conducted within the frame of Firthian Prosodic Analysis, especially of the relation between vowel variants and foot types, are examined, and we consider what kind of auditory phonetic training is required to understand the impressionistic transcriptions which lie behind non-segmental phonetics.

Rich Transcription Generation Using Automatic Insertion of Punctuation Marks (자동 구두점 삽입을 이용한 Rich Transcription 생성)

  • Kim, Ji-Hwan
    • MALSORI
    • /
    • no.61
    • /
    • pp.87-100
    • /
    • 2007
  • A punctuation generation system which combines prosodic information with acoustic and language model information is presented. Experiments were first conducted on reference text transcriptions. In these experiments, prosodic information was shown to be more useful than language model information. When these information sources were combined, an F-measure of up to 0.7830 was obtained for adding punctuation to a reference transcription. This method of punctuation generation can also be applied to the 1-best output of a speech recogniser. The 1-best output is first time-aligned; based on the time alignment information, prosodic features are generated. As in the approach applied to punctuation generation for reference transcriptions, the best sequence of punctuation marks for this 1-best output is found using the prosodic feature model and a language model trained on texts containing punctuation marks.
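
The F-measure reported above can be computed by treating punctuation insertion as a retrieval task over (position, mark) pairs; a minimal sketch with toy data, not the paper's transcriptions, follows.

```python
# Precision and recall are computed over (word-boundary index, punctuation mark)
# pairs, then combined into the usual harmonic-mean F-measure.
def punctuation_f_measure(reference, hypothesis):
    """reference/hypothesis: dicts mapping word-boundary index -> punctuation mark."""
    ref_items = set(reference.items())
    hyp_items = set(hypothesis.items())
    correct = len(ref_items & hyp_items)
    if not ref_items or not hyp_items or correct == 0:
        return 0.0
    precision = correct / len(hyp_items)
    recall = correct / len(ref_items)
    return 2 * precision * recall / (precision + recall)

# Toy example: the reference has marks after words 3 and 7; the system inserts
# one of them correctly and one spuriously.
ref = {3: ",", 7: "."}
hyp = {3: ",", 5: "."}
print(punctuation_f_measure(ref, hyp))   # 0.5
```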

On the Regulation for Pronunciation of Loanwords in Korean (외래어의 표준 발음과 어문 규범)

  • Yi, Eun-gyeong
    • Cross-Cultural Studies
    • /
    • v.38
    • /
    • pp.405-431
    • /
    • 2015
  • The purpose of this paper is to investigate how to determine the pronunciation of loanwords in the Korean language. There has been no regulation for the pronunciation of loanwords in Korean; even the dictionary published by the government does not provide any information about their pronunciation. In this paper, some practical solutions for the pronunciation of loanwords are suggested. Korean has the Regulations of Standard Korean, Korean Orthography, Regulations on Hangeul Transcriptions on Loanwords, and Pronunciation Methods of Standard Korean, and these language standards could help determine the pronunciation of loanwords. Pronunciations which cannot be regulated by them must be presented in the standard pronunciation dictionary; for example, the glottalization of 's' in many loanwords could be noted in the description of each loanword in the dictionary. However, the pronunciation of loanwords must stay close to the spelling: if various pronunciations are allowed for one spelling, people will be confused by the discrepancy between the pronunciation and spelling of loanwords.

A Phonetic Study of German (2) (독어음의 음성학적 고찰(2) - 현대독어의 복모음에 관하여 -)

  • Yun Jong-sun
    • MALSORI
    • /
    • no.19_20
    • /
    • pp.33-42
    • /
    • 1990
  • Those who are interested in the German diphthongs will find that they are classified into three kinds of forms in accordance with their gliding directions: closing, centring and rising. The German [aɪ], for example, which derives from [i:] of Middle High German, is regarded as a distinctive feature that distinguishes New High German from Middle High German. The diphthong [aɪ] is called a falling one, because the sonority of the sound undergoes a diminution as the articulation proceeds: the end part of the diphthong [aɪ] is less sonorous than the beginning part. In most of the German diphthongs the diminution of prominence is caused by the fact that the end part is inherently less sonorous than the beginning, and this applies to the other closing and centring diphthongs as well. This kind of diminution of sonority influences how systems of phonetic notation are constructed. The above-mentioned less sonorous end part [ɪ] of the diphthong differs from analogous sounds in other contexts, and it is useful to demonstrate the occurrence of such particular allophones by introducing special symbols to denote them (here: aɪ → ae). Forms of transcription embodying extra symbols are called narrow. But since strict adherence to the principle 'one sound, one symbol' would involve the introduction of a large number of symbols, it would render phonetic transcriptions cumbrous and difficult to read. A broad style of transcription provides 'one symbol for each phoneme' of the language that is transcribed; phonemic transcriptions are simple and unambiguous to everyone who knows the principles governing the use of allophones in the language transcribed. Among the German ways of transcribing the diphthongs (aɪ, aʊ, ɔʏ: ae, ao, ɔø; ae, ao, ɔø), the phonemic (broad) transcription is generally to be recommended, for instance in teaching the pronunciation of a foreign language, since it combines accuracy with the greatest measure of simplicity (some passages and terms from Daniel Jones).

Speaker Adaptation for Voice Dialing (음성 다이얼링을 위한 화자적응)

  • ;Chin-Hui Lee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.5
    • /
    • pp.455-461
    • /
    • 2002
  • This paper presents a method that improves the performance of a personal voice dialing system in which speaker-independent phoneme HMMs are used. Since the speaker-independent phoneme HMM based voice dialing system stores only the phone transcription of the input sentence, the storage space can be reduced greatly. However, the performance of the system is worse than that of a system using speaker-dependent models, because of the phone recognition errors generated when speaker-independent models are used. To solve this problem, a new method is presented that jointly estimates transformation vectors for speaker adaptation and the transcriptions from training utterances. The biases and transcriptions are estimated iteratively from each user's training data with a maximum likelihood approach to stochastic matching using speaker-independent phone models. Experimental results show that the proposed method is superior to the conventional method, which uses transcriptions only.
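
To illustrate why storing only phone transcriptions suffices for dialing, the sketch below matches a recognised phone string against stored contact transcriptions with a generic string-similarity measure; this is only a stand-in for the HMM-based scoring used in the paper, and the contact data are invented.

```python
# Toy illustration of the dialing step: each contact is stored only as a phone
# transcription, and the recognised phone string of the spoken name is matched
# to the most similar stored entry.
from difflib import SequenceMatcher

contacts = {                                  # hypothetical stored phone transcriptions
    "office": "o f i s",
    "home":   "h o m",
}

def dial(recognised_phones):
    """Return the contact whose stored transcription best matches the input."""
    return max(contacts,
               key=lambda name: SequenceMatcher(None, contacts[name],
                                                recognised_phones).ratio())

print(dial("h o m e"))    # 'home', despite a phone recognition error in the input
```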