• Title/Summary/Keyword: Korean phoneme

A study of reciting the formal poetries of Korea and French in digital era - Shijo(Korean verse) vs Sonnet (French) (콘텐츠를 위한 한ㆍ불 정형시가 낭송법의 비교 고찰)

  • 이산호
    • Sijohaknonchong
    • /
    • v.19 no.1
    • /
    • pp.85-106
    • /
    • 2003
  • Recently, the sonnet and the shijo, representing French and Korean formal poetry respectively, tend to be read with the eyes only, as we have grown more accustomed to written literature. But even after almost three millennia of written literature and the increased use of digitalized poems, poetry retains its appeal to the ear as well as to the eye. To read a poem with the eyes alone may be wrong, because a poem is designed to be read aloud by the mouth and understood by the ear, and its aesthetic sense diminishes otherwise. It is essential to find the right way to recite a poem in this dramatically changed society, especially now that many shijos are being digitalized to adapt to the new wave of our society. The sonnet and the shijo both emphasize the harmony of sounds and rhythms within a fixed structure, and each has its own prosody. The emotions of the speaker in a poem are expressed in words, and when these are pronounced, each phoneme carries its own phonemic characteristics. When comparing The Broken Bell (Baudelaire) and Chopoong ga (Jong Seo Kim) in terms of prosody and phonetics, the speaker's emotions are closely related to the phonetic structure of each word. In The Broken Bell, the phonetic value of the rhymes, the repeated phonemes, the concentration of front and back vowels, and the rhythms of one-syllable words shape the overall image of the poem, which contrasts the productivity of bells with the sterility of the soul. Chopoong ga likewise shows the determined and strong will of the speaker through frequent glottalized sounds, the distribution and concentration of certain vowels, and the frequent use of plosives. As these examples show, phones, beats, and rhythms are not mere transmitters of meaning but possess expressive values of their own, and they should be the first consideration when reciting a poem.

English Pronunciation Education (영어 발음 교육)

  • 이영길
    • Proceedings of the KSPS conference
    • /
    • 1997.07a
    • /
    • pp.258-259
    • /
    • 1997
  • 1. In teaching English as a foreign language, pronunciation should be taught directly from the earliest grades, without waiting for learners to reach a certain level of English; that is, English education is best begun with pronunciation education. Even (university) students with a fairly advanced knowledge of grammar often need much practice where pronunciation is concerned. In other words, such students should be able to command pronunciation as actively as they command their grammatical knowledge. If sentence structure and vocabulary are emphasized in English education, pronunciation must likewise be recognized as an indispensable element of the program from the early stages. What, then, should be taught first, and how? A common sequence is to start with the phoneme, the minimal unit of speech, then teach sound combinations such as consonant clusters, then word stress, and finally connected speech, including sentence stress, rhythm, and intonation. Although this order is logical in theory, it is highly doubtful how effective it is for our students learning English as a foreign language. A more profitable order is to teach meaningful sentence stress in a given context, together with appropriate features such as basic intonation, and then to point out the pronunciation of the important sounds involved. For example, when a teacher presents a structure such as "Give it to him" orally, speaking too slowly in an effort to emphasize each word actually makes the pronunciation of the whole sentence more difficult. What matters is to focus on what is needed for basic communication; problems attached to individual words can be corrected by remedial teaching. 2. Considering the current state of English education in our elementary schools, pronunciation teaching is not easy, but with future results in mind two points deserve attention. First, the English curriculum of the teacher-training universities must be considered. Although the universities of education became four-year institutions in 1981, they have operated English departments worthy of the name for only a few years. The present curriculum is not only insufficient for teaching English in the classroom but also lacks any clear course on English pronunciation. It is tempting to assume that the so-called English conversation classes taught by foreign instructors can double as pronunciation classes, but that is an entirely separate matter. A curriculum that allows systematic pronunciation education is therefore needed. 3. As noted above, teachers who graduated before the four-year system received no instruction in English pronunciation during their studies. What matters here is an in-service training process that gives these teachers adequate and sufficient pronunciation education. In elementary English education, which must be heard and spoken, the importance of the teacher's knowledge of pronunciation can hardly be overstated. The problem is the content of the training: the subjects covered in the in-service courses for elementary English teachers offered so far are so varied and diffuse that it is hard to find a focus, and a course on teaching English pronunciation is squeezed in almost grudgingly, with at most about four hours allotted to it. What, and how, can one teach in four hours, to elementary teachers with no background knowledge, a subject for which even a full university semester is insufficient?

A Study on Regression Class Generation of MLLR Adaptation Using State Level Sharing (상태레벨 공유를 이용한 MLLR 적응화의 회귀클래스 생성에 관한 연구)

  • 오세진;성우창;김광동;노덕규;송민규;정현열
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.8
    • /
    • pp.727-739
    • /
    • 2003
  • In this paper, we propose a method of generating regression classes for adaptation in the HM-Net (Hidden Markov Network) system. The MLLR (Maximum Likelihood Linear Regression) adaptation approach is applied to the HM-Net speech recognition system to express speaker characteristics effectively and to allow the use of HM-Net in various tasks. For state-level sharing, the context-domain state splitting of the PDT-SSS (Phonetic Decision Tree-based Successive State Splitting) algorithm, which performs clustering in both the contextual and the time domain, is adopted. In each state of the contextual domain, the desired phoneme classes are determined by splitting the context information (classes), including the target speaker's speech data. The number of adaptation parameters, such as means and variances, is controlled autonomously by the context-domain state splitting of PDT-SSS, depending on the context information and the amount of adaptation utterances from a new speaker. Experiments were performed to verify the effectiveness of the proposed method on the KLE (Center for Korean Language Engineering) 452 data and the YNU (Yeungnam Univ.) 200 data. The experimental results show that the accuracies of phone, word, and sentence recognition increased by 34∼37%, 9%, and 20%, respectively. Compared across adaptation-utterance lengths, performance is also significantly improved even with short adaptation utterances. We therefore argue that the proposed regression class method is well suited to an HM-Net speech recognition system employing MLLR speaker adaptation.

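As background for the MLLR entry above: MLLR adapts the Gaussian means of an acoustic model with an affine transform μ̂ = Aμ + b that is shared by all Gaussians in a regression class. The sketch below illustrates only that idea; it substitutes an ordinary least-squares fit for the paper's maximum-likelihood solution (which weights by state occupancies and covariances), and all names and shapes are illustrative.

```python
import numpy as np

def estimate_mllr_transform(means, frames, assignments):
    """Estimate one shared transform W = [b A] for a regression class.

    Simplification: ordinary least squares on (extended mean, frame) pairs
    instead of the full ML solution with occupancies and covariances.
    means: (G, D) Gaussian means in the class
    frames: (T, D) adaptation frames from the new speaker
    assignments: (T,) index of the Gaussian each frame aligns to
    """
    ones = np.ones((len(frames), 1))
    X = np.hstack([ones, means[assignments]])        # extended means, (T, D+1)
    W, _, _, _ = np.linalg.lstsq(X, frames, rcond=None)
    return W.T                                       # (D, D+1): mu_hat = W @ [1; mu]

def adapt_means(means, W):
    """Apply the class transform to every Gaussian mean in the class."""
    ext = np.hstack([np.ones((len(means), 1)), means])
    return ext @ W.T

# Toy usage: 3 Gaussians in one regression class, 2-dim features.
rng = np.random.default_rng(0)
means = rng.normal(size=(3, 2))
assignments = rng.integers(0, 3, size=50)
frames = means[assignments] + 0.5 + 0.1 * rng.normal(size=(50, 2))  # shifted speaker
W = estimate_mllr_transform(means, frames, assignments)
adapted = adapt_means(means, W)
```
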
English Phoneme Recognition using Segmental-Feature HMM (분절 특징 HMM을 이용한 영어 음소 인식)

  • Yun, Young-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.3
    • /
    • pp.167-179
    • /
    • 2002
  • In this paper, we propose a new acoustic model for characterizing segmental features, together with an algorithm based on the general framework of hidden Markov models (HMMs), in order to compensate for the weaknesses of the HMM assumptions. Because a single frame feature cannot effectively represent the temporal dynamics of speech signals, the segmental features are represented as a trajectory of the observed vector sequence by a polynomial regression function. To apply the segmental features to pattern classification, we adopted the segmental HMM (SHMM), which is known to be an effective method of representing the trend of speech signals. The SHMM separates the observation probability of a given state into extra- and intra-segmental variations, which capture long-term and short-term variability, respectively. To incorporate segmental characteristics into the acoustic model, we present the segmental-feature HMM (SFHMM) by modifying the SHMM. The SFHMM thus represents the external and internal variation as, respectively, the observation probability of the trajectory in a given state and the trajectory estimation error for the given segment. We conducted several experiments on the TIMIT database to establish the effectiveness of the proposed method and the characteristics of the segmental features. From the experimental results, we conclude that, although its number of parameters is greater than that of the conventional HMM, the proposed method is valuable for its flexible and informative feature representation and its performance improvement.

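The segmental-feature idea above can be made concrete: each segment's frames are summarized by a polynomial regression trajectory over normalized time, and the fit residual plays the role of the intra-segmental (trajectory estimation) error. A minimal numpy sketch, with illustrative names and a second-order polynomial assumed:

```python
import numpy as np

def fit_segment_trajectory(frames, order=2):
    """Fit a polynomial regression trajectory to one speech segment.

    frames: (T, D) feature vectors (e.g., MFCCs) of the segment.
    Returns the (order+1, D) coefficients, the fitted trajectory, and the
    per-frame residuals, which act as the intra-segmental variation here.
    """
    T = len(frames)
    t = np.linspace(0.0, 1.0, T)                 # normalized time in the segment
    coeffs = np.polyfit(t, frames, order)        # (order+1, D), highest power first
    fitted = np.vander(t, order + 1) @ coeffs    # evaluate trajectory per frame
    return coeffs, fitted, frames - fitted

# Toy usage: a 12-frame, 13-dim segment.
seg = np.random.default_rng(1).normal(size=(12, 13))
coeffs, traj, err = fit_segment_trajectory(seg)
print(coeffs.shape, err.shape)                   # (3, 13) (12, 13)
```
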
Detecting Spelling Errors by Comparison of Words within a Document (문서내 단어간 비교를 통한 철자오류 검출)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.12
    • /
    • pp.83-92
    • /
    • 2011
  • Contrary to usual publications, typographical errors caused by the author's mistyping occur frequently in documents prepared with word processors. In such online documents, the most common orthographical errors are spelling errors that result from hitting keys adjacent to the intended keys on the keyboard. Typical spelling checkers detect and correct these errors with a morphological analyzer: the analysis module of a speller checks the well-formedness of input words, and all words rejected by the analyzer are regarded as misspelled. However, if the morphological analyzer accepts even a mistyped word, it is treated as correctly spelled. In this paper, I propose a simple method capable of detecting and correcting errors that previous methods cannot detect. The proposed method is based on the observation that typographical errors are generally not repeated and therefore tend to have very low frequency. If words generated by deletion, exchange, and transposition operations on each phoneme of a low-frequency word appear in the list of high-frequency words, some of them are considered the correctly spelled forms. Some heuristic rules are also presented to reduce the number of candidates. The proposed method detects not syntactic errors but some semantic (real-word) errors, and it is also useful for scoring candidates.

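A minimal sketch of the frequency-based detection idea described above, operating on characters as a stand-in for the paper's phoneme-level operations (Korean text would first be decomposed into jamo); the names and the frequency threshold are illustrative:

```python
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def candidate_corrections(word, high_freq):
    """Corrections of a suspicious low-frequency word found among frequent words.

    Character-level deletion, exchange (substitution), and transposition,
    standing in for the paper's phoneme-level operations.
    """
    cands = set()
    for i in range(len(word)):
        cands.add(word[:i] + word[i + 1:])                     # deletion
        for c in ALPHABET:                                     # exchange
            cands.add(word[:i] + c + word[i + 1:])
        if i + 1 < len(word):                                  # transposition
            cands.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])
    cands.discard(word)
    return cands & high_freq

# Usage: words seen once are checked against words seen often in the document.
freq = Counter("the speech model the model the speech teh".split())
high = {w for w, n in freq.items() if n >= 2}
for w in (w for w, n in freq.items() if n == 1):
    print(w, "->", candidate_corrections(w, high))             # teh -> {'the'}
```
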
Coarticulation Model of Hangul Visual Speech for Lip Animation (입술 애니메이션을 위한 한글 발음의 동시조음 모델)

  • Gong, Gwang-Sik;Kim, Chang-Heon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.9
    • /
    • pp.1031-1041
    • /
    • 1999
  • The existing lip-animation methods for Hangul define a few key lip shapes for the phonemes and animate the lips by interpolating between them. This does not produce natural lip animation, because the actual motion of the lips during articulation is neither a linear nor a simple non-linear function, and, since coarticulation is ignored, such methods cannot represent the lip motion that varies between phonemes. In this paper we present a new coarticulation model for natural lip animation of Hangul. Using two video cameras, we film the speaker's lips and extract lip control parameters. Each lip control parameter is defined as a dominance function, following Löfqvist's speech production gesture theory; the dominance function approximates the real lip motion of a phoneme during its articulation and is used when the lip animation is generated. The dominance functions are combined by blending functions, using a Hangul composition rule based on demi-syllables, so that the synthesized speech is articulated with coarticulation. The lip animation of our coarticulation model therefore approximates real lip motion more closely than the existing interpolation-based methods and shows motion that takes coarticulation into account.

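The dominance/blending mechanism above can be sketched with a common textbook formulation (an exponentially decaying dominance function and a normalized weighted blend, in the spirit of Cohen-Massaro); the paper derives its own functions from measured lip data, so everything below is illustrative:

```python
import numpy as np

def dominance(t, center, strength=1.0, rate=8.0):
    """Exponentially decaying dominance of one phoneme's lip target over time."""
    return strength * np.exp(-rate * np.abs(t - center))

def blend_lip_parameter(t, targets, centers):
    """Normalized weighted blend of per-phoneme lip targets (blending function)."""
    d = np.array([dominance(t, c) for c in centers])     # (P, len(t))
    return (d * np.asarray(targets)[:, None]).sum(axis=0) / d.sum(axis=0)

# Toy usage: three phonemes with lip-opening targets 0.8, 0.2, 0.6
# centered at 0.2 s, 0.5 s, and 0.8 s of a one-second utterance.
t = np.linspace(0.0, 1.0, 100)
track = blend_lip_parameter(t, targets=[0.8, 0.2, 0.6], centers=[0.2, 0.5, 0.8])
print(track.min(), track.max())
```
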
Analysis of Acoustic Characteristics of Vowel and Consonants Production Study on Speech Proficiency in Esophageal Speech (식도발성의 숙련 정도에 따른 모음의 음향학적 특징과 자음 산출에 대한 연구)

  • Choi, Seong-Hee;Choi, Hong-Shik;Kim, Han-Soo;Lim, Sung-Eun;Lee, Sung-Eun;Pyo, Hwa-Young
    • Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.7-27
    • /
    • 2003
  • Esophageal speech uses esophageal air for phonation. Fluent esophageal speakers take in air readily during oral communication, whereas unskilled esophageal speakers have difficulty swallowing large amounts of air. The purpose of this study was to investigate how the acoustic characteristics of vowel and consonant production differ with the level of speech proficiency in esophageal speech. Thirteen normal male speakers and 13 male esophageal speakers (5 unskilled, 8 skilled), aged 50 to 70 years, participated. The stimuli were the sustained vowel /a/ and 36 meaningless two-syllable words. The vowel used was /a/, and the 18 consonants were /k, n, t, m, p, s, c, cʰ, kʰ, tʰ, pʰ, h, l, k', t', p', s', c'/. Fundamental frequency (Fx), jitter, shimmer, HNR, and MPT were measured by electroglottography using Lx Speech Studio (Laryngograph Ltd, London, UK). The 36 meaningless words produced by the esophageal speakers were presented to three speech-language pathologists, who phonetically transcribed the responses. The Fx, jitter, and HNR parameters differed significantly between skilled and unskilled esophageal speakers (p<.05). Considering manner of articulation, ANOVA showed significant differences between the two proficiency groups: in the unskilled group, glides showed the highest number of confusions with other phoneme classes and affricates were the most intelligible, whereas in the skilled group fricatives produced the highest number of confusions and nasals were the most intelligible. For place of articulation, the glottal /h/ was the most confused consonant in both groups; bilabials were the most intelligible in the skilled group and velars in the unskilled group. For syllable structure, 'CV+V' caused more confusion in the skilled group, while the unskilled group showed similar confusion for both structures. In unskilled esophageal speech, the significantly different Fx, jitter, and HNR values of the vowel and the high confusion rates for liquids and nasals can be attributed to unstable, improper contact of the neoglottis as the vibratory source, insufficiency of the phonatory air supply, and the higher motoric demands on the remaining articulators due to the morphological characteristics of the vocal tract after laryngectomy.

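For reference, the jitter and shimmer parameters reported above have standard "local" definitions: the mean absolute difference between consecutive pitch periods (or peak amplitudes) divided by the mean. A minimal sketch of those textbook formulas (not the internals of Lx Speech Studio):

```python
import numpy as np

def jitter_local(periods_ms):
    """Local jitter: mean |difference of consecutive periods| / mean period."""
    p = np.asarray(periods_ms, dtype=float)
    return np.abs(np.diff(p)).mean() / p.mean()

def shimmer_local(amplitudes):
    """Local shimmer: mean |difference of consecutive amplitudes| / mean amplitude."""
    a = np.asarray(amplitudes, dtype=float)
    return np.abs(np.diff(a)).mean() / a.mean()

# Toy usage on a few cycles of a sustained /a/.
print(jitter_local([8.0, 8.2, 7.9, 8.1]))        # ~0.029, i.e., 2.9% jitter
print(shimmer_local([1.00, 0.96, 1.04, 1.00]))
```
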
A Study on the Continuous Speech Recognition for the Automatic Creation of International Phonetics (국제 음소의 자동 생성을 활용한 연속음성인식에 관한 연구)

  • Kim, Suk-Dong;Hong, Seong-Soo;Shin, Chwa-Cheul;Woo, In-Sung;Kang, Heung-Soon
    • Journal of Korea Game Society
    • /
    • v.7 no.2
    • /
    • pp.83-90
    • /
    • 2007
  • One result of the trend toward globalization is an increased number of projects that focus on natural language processing. Automatic speech recognition (ASR) technologies, for example, hold great promise for facilitating global communication and collaboration. Unfortunately, to date, most research projects focus on single, widely spoken languages, so the cost of adapting a particular ASR tool to other languages is often prohibitive. This work takes a more general approach. We propose an International Phoneticizing Engine (IPE) that interprets input files supplied in our Phonetic Language Identity (PLI) format to build a dictionary. IPE is language-independent and rule-based. It operates by decomposing the dictionary creation process into a set of well-defined steps. These steps reduce rule conflicts, allow rules to be created by people without linguistic training, and optimize run-time efficiency. Dictionaries created by the IPE can be used with the speech recognition system. IPE defines an easy-to-use, systematic approach that achieved recognition rates of 92.55% for Korean speech and 89.93% for English.

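The rule-based dictionary building described above can be illustrated with a toy grapheme-to-phoneme pass that applies an ordered rule list greedily, most specific rule first, which is one simple way to reduce rule conflicts. The rules and representation below are invented for illustration and are not the paper's PLI format:

```python
# Ordered rewrite rules: longer, more specific patterns come first so they
# win over single-letter rules, mirroring stepwise rule application.
RULES = [
    ("ch", "tʃ"),   # illustrative English rules only
    ("sh", "ʃ"),
    ("ng", "ŋ"),
    ("a",  "æ"),
    ("e",  "ɛ"),
    ("i",  "ɪ"),
]

def to_phonemes(word):
    """Greedy left-to-right rule application; unmatched letters pass through."""
    out, i = [], 0
    while i < len(word):
        for src, tgt in RULES:
            if word.startswith(src, i):
                out.append(tgt)
                i += len(src)
                break
        else:
            out.append(word[i])
            i += 1
    return out

print(to_phonemes("changing"))   # ['tʃ', 'æ', 'ŋ', 'ɪ', 'ŋ']
```
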
Meta-analysis of the effectiveness of speech processing analysis methods: Focus on phonological encoding, phonological short-term memory, articulation transcoding (메타분석을 통한 말 처리 분석방법의 효과 연구: 음운부호화, 음운단기기억, 조음전환을 중심으로)

  • Eun-Joo Ryu;Ji-Wan Ha
    • Phonetics and Speech Sciences
    • /
    • v.16 no.3
    • /
    • pp.71-78
    • /
    • 2024
  • This study aimed to establish evaluation methods for the speech processing stages of phonological encoding, phonological short-term memory, and articulation transcoding from a psycholinguistic perspective. A meta-analysis of 21 studies published between 2000 and 2024, involving 1,442 participants, was conducted. Participants were divided into six groups: general, dyslexia, speech sound disorder, language delay, apraxia+aphasia, and childhood apraxia of speech. The analysis revealed effect sizes of g=.46 for phonological encoding errors, g=.57 for phonological short-term memory errors, and g=.63 for articulation transcoding errors. These results suggest that substitution errors, order and repetition errors, and phoneme addition and voicing substitution errors are key indicators for assessing these abilities. This study contributes to a comprehensive understanding of speech and language disorders by providing a methodological framework for evaluating speech processing stages and a detailed analysis of error characteristics. Future research should involve non-word repetition tasks across various speech and language disorder groups to further validate these methods, offering valuable data for the assessment and treatment of these disorders.

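The effect sizes reported above (g=.46, .57, .63) are Hedges' g, the bias-corrected standardized mean difference commonly used in meta-analysis. A minimal sketch of the textbook formula:

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: Cohen's d with the small-sample bias correction factor J."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))              # pooled standard deviation
    d = (x.mean() - y.mean()) / pooled             # Cohen's d
    J = 1.0 - 3.0 / (4.0 * (nx + ny) - 9.0)        # correction, df = nx + ny - 2
    return J * d

# Toy usage: error counts of a clinical group vs. controls.
print(hedges_g([12, 15, 11, 14, 13], [9, 10, 8, 11, 9]))
```
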
A quantitative study on the minimal pair of Korean phonemes: Focused on syllable-initial consonants (한국어 음소 최소대립쌍의 계량언어학적 연구: 초성 자음을 중심으로)

  • Jung, Jieun
    • Phonetics and Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.29-40
    • /
    • 2019
  • This paper investigates the minimal pairs of Korean phonemes quantitatively. To achieve this goal, I calculated the number of consonant minimal pairs in the syllable-initial position, as both raw counts and relative counts, and analyzed the part-of-speech relations between the two words of each minimal pair. "Urimalsaem" was chosen as the object of this study because minimal pair analysis should be done through a dictionary and it is the largest of the Korean dictionaries. The results are summarized as follows. First, there were 153 types of minimal pairs across 337,135 examples. The ranking of phoneme pairs from highest to lowest was 'ㅅ-ㅈ, ㄱ-ㅅ, ㄱ-ㅈ, ㄱ-ㅂ, ㄱ-ㅎ, …, ㅆ-ㅋ, ㄸ-ㅋ, ㅉ-ㅋ, ㄹ-ㅃ, ㅃ-ㅋ'. The phonemes that played the largest role in forming minimal pairs were /ㄱ, ㅅ, ㅈ, ㅂ, ㅊ/, in that order, which shows a high proportion of palatals. The correlation between the raw and the relative counts of minimal pairs was quite high (r=0.937). Second, 87.91% of the minimal pairs shared the same part of speech (syntactic category); the most frequent type was the 'noun-noun' pair (70.25%), followed by the 'verb-verb' pair (14.77%). This indicates that the two words of a minimal pair tend to belong to similar categories. The results of this study can serve as basic data on Korean phonemes for research in Korean linguistics, speech-language pathology, language education, language acquisition, speech synthesis, and artificial intelligence/machine learning.
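The syllable-initial counting described above can be sketched directly from Unicode Hangul arithmetic: a precomposed syllable encodes as 0xAC00 + onset*588 + rime, so two equal-length words that differ in exactly one syllable, with the same rime but different onsets, form an initial-consonant minimal pair. A toy sketch under that assumption (dictionary handling, relative counts, and part-of-speech analysis omitted):

```python
from collections import Counter
from itertools import combinations

CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"   # onset order in Unicode

def split_syllable(ch):
    """Split a precomposed Hangul syllable into (onset, rime code)."""
    code = ord(ch) - 0xAC00                    # syllables start at U+AC00
    return CHOSEONG[code // 588], code % 588   # 588 = 21 vowels * 28 codas

def initial_minimal_pairs(words):
    """Count word pairs differing in exactly one syllable-initial consonant."""
    pairs = Counter()
    for w1, w2 in combinations(words, 2):
        if len(w1) != len(w2):
            continue
        diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
        if len(diffs) != 1:
            continue
        (a, b), = diffs
        on_a, rime_a = split_syllable(a)
        on_b, rime_b = split_syllable(b)
        if rime_a == rime_b and on_a != on_b:  # same rime, different onset
            pairs[tuple(sorted((on_a, on_b)))] += 1
    return pairs

# Toy usage: 달 (moon), 탈 (mask), 말 (horse) are pairwise minimal pairs.
print(initial_minimal_pairs(["달", "탈", "말"]))
```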