• Title/Summary/Keyword: 음소 오류 (phoneme errors)

Search results: 61

A COMPARATIVE STUDY ON AUDITORY ATTENTION AND PHONEME DIFFERENTIAL ABILITY AMONG CHILDREN WITH READING DISABILITY AND WITH ATTENTION DEFICIT/HYPERACTIVITY (읽기 장애와 주의력 결핍/과잉 운동 장애아동의 주의력 과제와 음소 변별 과제 수행 비교 - 청각 과제를 중심으로 -)

  • Lee, Kyung-Hee;Shin, Min-Sup;Kim, Boong-Nyun;Cho, Soo-Churl
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.14 no.2 / pp.197-208 / 2003
  • Objective: In this study, we hypothesized that a deficit in processing rapid linguistic stimuli is at the heart of Reading Disability (RD) and that a deficit in response inhibition is at the heart of Attention-Deficit/Hyperactivity Disorder (ADHD). We conducted experiments to identify the core cognitive characteristics of children with RD, with ADHD, or with both, using attentional tasks and phoneme differentiation tests. Method: In Study 1, 28 children with ADHD and 16 children with RD+ADHD were individually administered visual/auditory performance tests. The two groups' performance on the attentional tasks was then compared while controlling for IQ. In Study 2, 13 children with RD+ADHD/RD, 13 children with ADHD, and 13 normal children were administered computerized phoneme differentiation tests. Result: Visual attentional tasks did not distinguish the ADHD group from the RD+ADHD group. On auditory attentional tasks, however, the comorbid group showed significantly more difficulty, with a large variance in reaction time. The RD, RD+ADHD, and ADHD groups made more errors on the phoneme differentiation tests than the normal control group, and each group showed a distinctive performance pattern. Discussion: The ADHD group had difficulty with response inhibition and sustained attention, and in children who had RD along with ADHD the auditory attentional difficulties were magnified. Even though children with RD had more trouble responding correctly to target stimuli, their responses did not differ significantly from those of children with ADHD.


Conformer with lexicon transducer for Korean end-to-end speech recognition (Lexicon transducer를 적용한 conformer 기반 한국어 end-to-end 음성인식)

  • Son, Hyunsoo;Park, Hosung;Kim, Gyujin;Cho, Eunsoo;Kim, Ji-Hwan
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.530-536 / 2021
  • Recently, due to the development of deep learning, end-to-end speech recognition, which directly maps speech signals to graphemes, shows good performance. Among end-to-end models, the conformer shows the best performance. However, end-to-end models only consider the probability of which grapheme will appear at each time step, and the decoding process uses a greedy search or beam search, which is easily affected by the final probability output of the model. In addition, end-to-end models cannot use external pronunciation and language information due to their structure. Therefore, in this paper a conformer with a lexicon transducer is proposed. We compare a phoneme-based model with a lexicon transducer against a grapheme-based model with beam search. The test set consists of words that do not appear in the training data. The grapheme-based conformer with beam search shows a CER of 3.8 %; the phoneme-based conformer with a lexicon transducer shows a CER of 3.4 %.
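The greedy-versus-beam decoding contrast this abstract mentions can be sketched with a generic frame-synchronous search over toy per-frame grapheme log-probabilities. This is illustrative only: the symbol inventory and scores are invented, and real conformer/CTC decoding additionally handles blanks and repeated symbols, which this sketch omits.

```python
import numpy as np

def greedy_decode(log_probs):
    """Pick the highest-probability symbol at each frame (the simple baseline)."""
    return [int(np.argmax(frame)) for frame in log_probs]

def beam_search_decode(log_probs, beam_width=3):
    """Keep the beam_width best partial hypotheses at each frame."""
    beams = [([], 0.0)]  # (symbol sequence, cumulative log-probability)
    for frame in log_probs:
        candidates = []
        for seq, score in beams:
            for sym, lp in enumerate(frame):
                candidates.append((seq + [sym], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]

# Toy 4-frame, 3-symbol log-probability matrix
log_probs = np.log(np.array([
    [0.6, 0.3, 0.1],
    [0.4, 0.5, 0.1],
    [0.2, 0.7, 0.1],
    [0.5, 0.4, 0.1],
]))
print(greedy_decode(log_probs))  # [0, 1, 1, 0]
```

Because the toy scores are per-frame independent, beam search returns the same sequence here; the two diverge once an external language model or lexicon constraint rescores hypotheses, which is the gap the paper's lexicon transducer addresses.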

Performance Improvement of Connected Digit Recognition by Considering Phonemic Variations in Korean Digit and Speaking Styles (한국어 숫자음의 음운변화 및 화자 발성특성을 고려한 연결숫자 인식의 성능향상)

  • 송명규;김형순
    • The Journal of the Acoustical Society of Korea / v.21 no.4 / pp.401-406 / 2002
  • Each Korean digit is a single syllable, so recognizers, and even Koreans, often have difficulty recognizing them. When digit strings are pronounced, the original pronunciation of each digit changes considerably due to the co-articulation effect. In addition to these problems, distortion caused by various channels and noises degrades the recognition performance for Korean connected digit strings. This paper deals with techniques to improve recognition performance, including defining a set of PLUs (phone-like units) that considers phonemic variations in Korean digits and constructing a recognizer that handles speakers' various speaking styles. In speaker-independent connected digit recognition experiments using telephone speech, the proposed techniques with 1 Gaussian/state gave a string accuracy of 83.2%, i.e., a 7.2% error rate reduction relative to the baseline system. With 11 Gaussians/state, we achieved the highest string accuracy of 91.8%, i.e., a 4.7% error rate reduction.

STANDARDIZATION OF WORD/NONWORD READING TEST AND LETTER-SYMBOL DISCRIMINATION TASK FOR THE DIAGNOSIS OF DEVELOPMENTAL READING DISABILITY (발달성 읽기 장애 진단을 위한 단어/비단어 읽기 검사와 글자기호감별검사의 표준화 연구)

  • Cho, Soo-Churl;Lee, Jung-Bun;Chungh, Dong-Seon;Shin, Sung-Woong
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.14 no.1 / pp.81-94 / 2003
  • Objectives: Developmental reading disorder is a condition manifesting significant developmental delay in reading ability or persistent reading errors; about 3-7% of school-age children have this condition. The purpose of the present study was to validate the diagnostic value of the Word/Nonword Reading Test and the Letter-Symbol Discrimination Task, in order to overcome the limitations of the Basic Learning Skills Test. Methods: Sixty-three reading-disordered patients (mean age 10.48 years) and 77 sex- and age-matched normal children (mean age 10.33 years) were selected by clinical evaluation and DSM-IV criteria. Reading I and II of the Basic Learning Skills Test, the Word/Nonword Reading Test, and the Letter-Symbol Discrimination Task were administered to them. Word/Nonword Reading Test: one hundred common high-frequency words and one hundred meaningless nonwords were presented to the subjects within 1.2 and 2.4 seconds, respectively. From these results, automatized phonological processing ability and conscious letter-sound matching ability were estimated. Letter-Symbol Discrimination Task: mirror-image letters that reading-disordered patients are apt to confuse were used. Reliability, concurrent validity, construct validity, and discriminant validity tests were conducted. Results: Word/Nonword Reading Test: the reliability (alpha) was 0.96, and the concurrent validity with the Basic Learning Skills Test was 0.94. Patients with developmental reading disorder differed significantly from normal children in Word/Nonword Reading Test performance. Through discriminant analysis, 83.0% of the original cases were correctly classified by this test. Letter-Symbol Discrimination Task: the reliability (alpha) was 0.86, and the concurrent validity with the Basic Learning Skills Test was 0.86. There were significant differences in scores between the patients and normal children. Factor analysis revealed that this test was composed of saccadic mirror-image processing, global accuracy, mirror-image processing deficit, static image processing, global vigilance deficit, and inattention-impulsivity factors. By discriminant analysis, 87.3% of the patients and normal children were correctly classified. Conclusion: Patients with developmental reading disorder had deficits in the automatized visual-lexical route, the morpheme-phoneme conversion mechanism, and visual information processing. These deficits were reliably and validly evaluated by the Word/Nonword Reading Test and the Letter-Symbol Discrimination Task.


English Pronunciation and Listening Education in Elementary School (초등학교에서의 영어 발음 및 청취 교육)

  • 정인교
    • Proceedings of the KSPS conference / 1997.07a / pp.248-248 / 1997
  • When we ask how English has actually been taught under the legitimate and universally accepted rationale of the four skills explicitly stated in the national curriculum, namely listening, speaking, reading, and writing, much is left to be desired. It is an undeniable fact that among those who have studied English through six years of secondary school, and even a year or two more at university, many can understand written English at a considerable level, sometimes to the surprise of native speakers, yet when listening, to say nothing of speaking, they struggle to follow even very simple spoken English. The reason is clear: when reading, the form of the visual stimulus matches the information stored in the brain (formidable grammatical knowledge), so comprehension comes easily; when listening, the form of the auditory stimulus differs from the stored information (a highly incomplete pronunciation lexicon, or English pronunciation filtered through the phonological system of the mother tongue). Therefore, at least for communication through speech, the most important task is to acquire (habituate) and store in the brain native English pronunciation accurately, or at least very closely. Accordingly, if English teachers teach pronunciation through a "linguistically significant" contrastive analysis of the English and Korean phonological systems, grounded in accurate and detailed knowledge of the mother tongue's phonology, better learning outcomes can be expected. In general, the pronunciation of the mother tongue interferes with foreign-language pronunciation in the following cases: 1. when the segmental systems differ; 2. when a phoneme of one language is an allophone of the other; 3. when the place and manner of articulation of similar sounds differ; 4. when the distribution or arrangement of segments differs; 5. when phonological processes differ; 6. when the rhythm of the languages differs. Contrasting the pronunciation characteristics of English and Korean around these six cases to minimize "foreign accent" or pronunciation errors is the primary goal of the English teacher.


Korean Sentiment Analysis using Multi-channel and Densely Connected Convolution Networks (Multi-channel과 Densely Connected Convolution Networks을 이용한 한국어 감성분석)

  • Yoon, Min-Young;Koo, Min-Jae;Lee, Byeong Rae
    • Annual Conference of KIPS / 2019.05a / pp.447-450 / 2019
  • This paper proposes a Text Multi-channel DenseNet model for sentiment classification of Korean sentences, which applies DenseNet to convolutional layers that take the morphemes, syllables, and jamo (graphemes) of a sentence as input. Because of the many expressions that violate grammatical rules, such as spelling errors, contraction and elision of phonemes or syllables, heavy use of slang and profanity, and mimetic words, there are features that a word-based CNN cannot extract but that can be extracted at the syllable or jamo level. Morpheme-based CNNs are widely used for Korean sentiment analysis, but the proposed Text Multi-channel DenseNet model considers morphemes, syllables, and jamo simultaneously and densely propagates information through the DenseNet, improving the accuracy of sentence sentiment classification. In experiments on the Naver movie review dataset, the proposed model achieved 85.96% accuracy, classifying sentence sentiment 1.45% more accurately than a multi-channel CNN.
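The jamo (자소) input channel described above relies on decomposing composed Hangul syllables into their constituent letters. A minimal sketch, assuming the standard Unicode Hangul syllable block (U+AC00..U+D7A3) and its arithmetic composition rule:

```python
# Jamo decomposition of Hangul syllables: each composed syllable encodes
# a leading consonant, a vowel, and an optional trailing consonant.
CHO = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")            # 19 leads
JUNG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")       # 21 vowels
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 tails + none

def to_jamo(text):
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:                 # composed Hangul syllable
            out.append(CHO[code // 588])       # 588 = 21 vowels * 28 tails
            out.append(JUNG[(code % 588) // 28])
            if code % 28:                      # skip the empty tail
                out.append(JONG[code % 28])
        else:                                  # pass through non-Hangul characters
            out.append(ch)
    return out

print(to_jamo("감성"))  # ['ㄱ', 'ㅏ', 'ㅁ', 'ㅅ', 'ㅓ', 'ㅇ']
```

Feeding these jamo sequences (alongside morpheme and syllable sequences) into per-channel convolutions is what makes the model robust to spelling errors that corrupt whole words but leave most jamo intact.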

A Performance Improvement Method using Variable Break in Corpus Based Japanese Text-to-Speech System (가변 Break를 이용한 코퍼스 기반 일본어 음성 합성기의 성능 향상 방법)

  • Na, Deok-Su;Min, So-Yeon;Lee, Jong-Seok;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.28 no.2 / pp.155-163 / 2009
  • In text-to-speech systems, the conversion of text into prosodic parameters necessarily comprises three steps: the placement of prosodic boundaries, the determination of segmental durations, and the specification of fundamental frequency contours. Prosodic boundaries, as the most important and basic parameter, affect the estimation of durations and fundamental frequency. Break prediction is an important step in text-to-speech systems since break indices (BIs) have a great influence on how correctly prosodic phrase boundaries are represented. However, accurate prediction is difficult since BIs are often chosen according to the meaning of a sentence or the reading style of the speaker. In Japanese, the prediction of accentual phrase boundaries (APBs) and major phrase boundaries (MPBs) is particularly difficult. Thus, this paper presents a method to compensate for APB and MPB prediction errors. First, we define a subtle BI, for which it is difficult to decide clearly between an APB and an MPB, as a variable break (VB), and an explicit BI as a fixed break (FB). The VB is chosen using a classification and regression tree, and multiple prosodic targets relating to pitch and duration are then generated. Finally, unit selection is conducted using the multiple prosodic targets. In the MOS test, the original speech scored 4.99, while the proposed method scored 4.25 and the conventional method scored 4.01. The experimental results show that the proposed method improves the naturalness of synthesized speech.

Changes of Speech Discrimination Score Depending on Inter-syllable Pause Duration in Normal Hearing Children (정상 청력 아동의 음절 간 쉼 간격에 따른 어음이해도 변화)

  • Park, J.I.;Lee, J.Y.;Heo, S.D.
    • Journal of rehabilitation welfare engineering & assistive technology / v.8 no.2 / pp.139-144 / 2014
  • Speech discrimination is affected by the speed of speech. The speed of speech can be adjusted through pause duration, and pauses give the listener resting time that avoids information overload. This study examines inter-syllable pause duration as basic research toward normative data for studying aging, audiological rehabilitation, and auditory processing. Seven boys and eight girls participated; all were normal speech-language pathologically and audiologically. There were four test sets, each made up of 20 three-syllable words. The pause duration of these words was adjusted to normal (250 ms), slow (500 ms), and very slow (1,000 ms). For each item, four words were given as multiple choices: one written correctly and three with one phoneme written incorrectly. Participants heard a word and then had to choose one. The speech discrimination scores at pause durations of 250, 500, and 1,000 ms were 73±19.4%, 84±12.2%, and 88±8.8%, respectively.
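Scoring in a closed-set task like this reduces to percent correct per 20-word list, with group results reported as mean ± standard deviation. A minimal sketch (the response data below are invented for illustration, not the study's data):

```python
import statistics

def discrimination_score(responses, answers):
    """Percent of items identified correctly in one word list."""
    correct = sum(r == a for r, a in zip(responses, answers))
    return 100.0 * correct / len(answers)

# Toy example: one participant gets 17 of 20 items right
answers = [f"word{i}" for i in range(20)]
responses = answers[:17] + ["foil"] * 3
print(discrimination_score(responses, answers))  # 85.0

# Group results are then summarized as mean and standard deviation
scores = [55, 70, 94]  # hypothetical per-participant scores
print(round(statistics.mean(scores), 1), round(statistics.stdev(scores), 1))
```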


Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • The Korean language has the characteristic that the pronunciation of phoneme units such as vowels and consonants is fixed, and the pronunciation associated with a given notation does not change, so foreign learners can approach Korean rather easily. However, when one pronounces words, phrases, or sentences, the pronunciation changes widely and in complex ways at syllable boundaries, and the association between notation and pronunciation no longer holds. Consequently, it is very difficult for foreign learners to learn standard Korean pronunciation. Despite these difficulties, systematic analysis of pronunciation errors in Korean words is possible, based on the advantageous observation that, unlike other languages including English, the relationship between Korean notation and pronunciation can be described as a set of firm rules without exceptions. In this paper, we propose a visualization framework that shows the differences between standard and erroneous pronunciations as quantitative measures on the computer screen. Previous research only shows color representations or 3D graphics of speech properties, or an animated view of the changing shapes of the lips and oral cavity; moreover, the features used in such analyses are only point data, such as the average of a speech range. In this study, we propose a method that can directly use time-series data instead of summary or distorted data. This is realized with a deep learning-based technique combining a self-organizing map, a variational autoencoder model, and a Markov model, and we achieve a superior performance improvement compared to the method using point-based data.
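The core quantity being visualized is a distance between a standard pronunciation and a learner's pronunciation, computed over time-series acoustic features rather than point summaries. As a stand-in sketch only: the snippet below uses a plain mean per-frame Euclidean distance over equal-length feature sequences, not the paper's SOM/VAE/Markov pipeline, and the feature matrices are dummy data.

```python
import numpy as np

def pronunciation_distance(standard, learner):
    """Mean per-frame Euclidean distance between two equal-length
    feature sequences of shape (frames, coefficients)."""
    standard = np.asarray(standard, dtype=float)
    learner = np.asarray(learner, dtype=float)
    return float(np.mean(np.linalg.norm(standard - learner, axis=1)))

# Dummy 5-frame, 3-coefficient feature matrices
std = np.zeros((5, 3))
err = np.ones((5, 3))
print(pronunciation_distance(std, err))  # sqrt(3) ≈ 1.732
```

In practice the two utterances differ in length, so an alignment step (or a learned sequence model, as in the paper) is needed before frame-wise distances are meaningful.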

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that makes up Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was carried out on Old Testament texts using the deep learning package Keras, based on Theano. After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and outputs of the following 21st character. In total, 1,023,411 input-output vector pairs were included in the dataset, which we divided into training, validation, and test sets in the proportion 70:15:15. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest training time for both the 3- and 4-LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved, and even became worse under certain conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which are the bases of artificial intelligence systems.
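The sliding-window dataset construction described above, 20 consecutive characters as input and the 21st as the target, can be sketched directly (the sample string below merely stands in for the preprocessed corpus):

```python
# Build (input, target) pairs: a window of 20 characters predicts the next one.
def make_windows(text, window=20):
    return [(text[i:i + window], text[i + window])
            for i in range(len(text) - window)]

sample = "태초에 하나님이 천지를 창조하시니라 땅이 혼돈하고"  # 27-character toy corpus
pairs = make_windows(sample)
print(len(pairs))   # 7 = len(sample) - 20
print(pairs[0][1])  # the 21st character of the sample
```

Each input window would then be one-hot encoded over the 74-symbol vocabulary before being fed to the LSTM stack; this snippet covers only the windowing step.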