• Title/Summary/Keyword: Syllable Model


Brain-Operated Typewriter using the Language Prediction Model

  • Lee, Sae-Byeok;Lim, Heui-Seok
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.10
    • /
    • pp.1770-1782
    • /
    • 2011
  • A brain-computer interface (BCI) is a communication system that translates brain activity into commands for computers or other devices. In other words, BCIs create a new communication channel between the brain and an output device by bypassing conventional motor output pathways consisting of nerves and muscles. This is particularly useful for facilitating communication for people suffering from paralysis. Because of their low bit rate, however, BCIs need considerable time to translate brain activity into commands, and entering characters with a BCI-based typewriter is especially slow. In this paper, we propose a brain-operated typewriter accelerated by a language prediction model. The proposed system uses three strategies to improve entry speed: word completion, next-syllable prediction, and next-word prediction. In our demonstration, the language prediction model roughly doubled the entry speed of the BCI-based typewriter.
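The abstract does not specify the prediction model itself, but the next-syllable strategy can be sketched with a simple syllable-bigram frequency table. The toy corpus and the function name below are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter, defaultdict

# Syllable-bigram frequency table built from a toy corpus; a real
# typewriter would train on a large Korean corpus.
corpus = ["안녕하세요", "안녕히계세요", "안심하세요"]

bigrams = defaultdict(Counter)
for sent in corpus:
    for prev, nxt in zip(sent, sent[1:]):
        bigrams[prev][nxt] += 1

def predict_next(syllable, k=3):
    """Return the k most frequent next syllables after `syllable`."""
    return [s for s, _ in bigrams[syllable].most_common(k)]

print(predict_next("안"))  # → ['녕', '심']
```

Offering the top-k candidates on screen is what lets the user skip whole selection steps, which is where the speed-up comes from.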

HANDWRITTEN HANGUL RECOGNITION MODEL USING MULTI-LABEL CLASSIFICATION

  • HANA CHOI
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.27 no.2
    • /
    • pp.135-145
    • /
    • 2023
  • Recently, as deep learning technology has developed, various deep learning techniques have been introduced into handwriting recognition, contributing greatly to performance improvement. The accuracy of handwritten Hangul recognition has also improved significantly, but prior research has focused on recognizing 520 or 2,350 Hangul characters using the SERI95 or PE92 datasets. In the past, most text could be expressed with 2,350 Hangul characters, but as globalization progresses and information and communication technology develops, many foreign words now need to be expressed in Hangul. In this paper, we propose a model that recognizes and combines the initial consonants, medial vowels, and final consonants of a Korean syllable using multi-label classification, achieving a high recognition accuracy of 98.38% when trained on PE92, a public dataset of Korean handwritten characters. Moreover, although the model was trained only on the 2,350 Hangul characters, it can recognize characters that are not included in that set.
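The jamo-combination idea rests on Unicode's syllable arithmetic: every modern precomposed syllable is determined by its initial (choseong), medial (jungseong), and final (jongseong) indices, so a classifier that predicts the three indices can cover syllables absent from the 2,350-character training set. A minimal sketch of the arithmetic (the helper names are ours, not the paper's):

```python
# Decompose/compose a precomposed Hangul syllable via Unicode arithmetic.
# Any of the 11,172 modern syllables is reachable from three jamo indices.
BASE = 0xAC00          # '가', first precomposed syllable
N_JUNG, N_JONG = 21, 28  # 21 medials, 27 finals + "no final"

def decompose(syllable):
    idx = ord(syllable) - BASE
    cho, rest = divmod(idx, N_JUNG * N_JONG)
    jung, jong = divmod(rest, N_JONG)
    return cho, jung, jong

def compose(cho, jung, jong):
    return chr(BASE + (cho * N_JUNG + jung) * N_JONG + jong)

print(decompose("한"))           # → (18, 0, 4): ㅎ, ㅏ, ㄴ
print(compose(*decompose("한")))  # → '한'
```

Because `compose` accepts any valid index triple, three jamo-level predictions yield a character even if that exact syllable never appeared in training.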

Hangul Encoding Standard based on Unicode (유니코드의 한글 인코딩 표준안)

  • Ahn, Dae-Hyuk;Park, Young-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.12
    • /
    • pp.1083-1092
    • /
    • 2007
  • In Unicode, two types of Hangul encoding schemes are currently in use, namely, the "precomposed modern Hangul syllables" model and the "conjoining Hangul characters" model. The current Unicode Hangul conjoining rules allow a precomposed Hangul syllable to be a member of a syllable that also includes conjoining Hangul characters; this has resulted in a number of different Hangul encoding implementations. This unfortunate problem stems from an incomplete understanding of the Hangul writing system when the normalization and encoding schemes were originally designed. In particular, the extended use of old Hangul was not taken into consideration. As a result, there are different ways to represent the same Hangul syllable, which causes problems in processing Hangul text, for instance in searching, comparison, and sorting. In this paper, we discuss the problems with the normalization of current Hangul encodings and suggest a single efficient rule to correctly process Hangul encoding in Unicode.

An Implementation of a Lightweight Spacing-Error Correction System for Korean (한국어 경량형 띄어쓰기 교정 시스템의 구현)

  • Song, Yeong-Kil;Kim, Hark-Soo
    • The Journal of Korean Association of Computer Education
    • /
    • v.12 no.2
    • /
    • pp.87-96
    • /
    • 2009
  • We propose a Korean spacing-error correction system that requires little memory even though it mixes rule-based and statistical methods. In addition, to make the model robust on mobile colloquial sentences, in which spelling errors and omissions of functional words frequently occur, we propose a method to automatically transform a typical colloquial corpus into a mobile colloquial corpus. The proposed system uses statistical information from syllable uni-grams to increase coverage of new syllable patterns, and then applies error-correction rules over two or more syllables to increase accuracy. In experiments on simulated mobile colloquial sentences, the system showed a relatively high accuracy of 92.10% (93.80% on a typical colloquial corpus, 94.07% on a typical balanced corpus) despite a small memory footprint of about 1 MB.
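The rule-plus-statistics mixture can be illustrated with a deliberately tiny sketch: a unigram pass inserts a space wherever the per-syllable spacing probability is high, and n-gram rules then correct known mistakes. The probabilities, rule table, and function below are invented for illustration and do not reproduce the paper's model:

```python
# Toy statistics: P(space follows this syllable); a real system learns
# these from a corpus.
space_prob = {"다": 0.9}
# Toy correction rules over syllable n-grams.
rules = {"다 니": "다니"}

def correct_spacing(text):
    out = []
    for ch in text.replace(" ", ""):          # strip existing spaces
        out.append(ch)
        if space_prob.get(ch, 0.0) > 0.5:     # statistical pass
            out.append(" ")
    result = "".join(out).strip()
    for wrong, right in rules.items():        # rule-based pass
        result = result.replace(wrong, right)
    return result

print(correct_spacing("밥먹었다갈게"))  # → '밥먹었다 갈게' (unigram pass inserts a space)
print(correct_spacing("했다니까"))      # → '했다니까' (rule undoes a wrong insertion)
```

The division of labor mirrors the abstract: unigrams generalize to unseen syllable patterns, rules recover accuracy on the frequent cases they get wrong.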


Analysis of the Timing of Spoken Korean Using a Classification and Regression Tree (CART) Model

  • Chung, Hyun-Song;Huckvale, Mark
    • Speech Sciences
    • /
    • v.8 no.1
    • /
    • pp.77-91
    • /
    • 2001
  • This paper investigates the timing of Korean spoken in a news-reading speech style in order to improve the naturalness of durations used in Korean speech synthesis. Each segment in a corpus of 671 read sentences was annotated with 69 segmental and prosodic features so that the measured duration could be correlated with the context in which it occurred. A CART model based on the features showed a correlation coefficient of 0.79 with an RMSE (root mean squared prediction error) of 23 ms between actual and predicted durations on held-out test data. These results are comparable with recently published results for Korean and similar to results found in other languages. An analysis of the classification tree shows that phrasal structure has the greatest effect on segment duration, followed by syllable structure and the manner features of surrounding segments. The place features of surrounding segments have only small effects. The model has application in Korean speech synthesis systems.
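The core of a CART learner is choosing, at each node, the feature whose split best reduces residual error in the target (here, duration). The toy features and durations below are invented to illustrate that mechanism, with a phrase-position feature winning as in the paper's analysis; this is not the authors' 69-feature model:

```python
# Each row: (phrase_final, open_syllable, duration_ms) -- invented data.
data = [
    (1, 0, 120), (1, 1, 130), (0, 0, 60),
    (0, 1, 70),  (1, 0, 125), (0, 1, 65),
]

def sse(durs):
    """Sum of squared errors around the mean (CART's regression criterion)."""
    if not durs:
        return 0.0
    mean = sum(durs) / len(durs)
    return sum((d - mean) ** 2 for d in durs)

def best_split(rows):
    """Pick the binary feature whose split minimizes total residual SSE."""
    scores = {}
    for feat in (0, 1):
        left = [r[2] for r in rows if r[feat] == 0]
        right = [r[2] for r in rows if r[feat] == 1]
        scores[feat] = sse(left) + sse(right)
    return min(scores, key=scores.get)

print(best_split(data))  # → 0: the phrase-position feature explains durations best
```

Recursing on each side of the chosen split, until leaves are small, yields the full regression tree whose leaf means become the predicted durations.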


Generative Chatting Model based on Index-Term Encoding and Syllable Decoding (색인어 인코딩과 음절 디코딩에 기반한 생성 채팅 모델)

  • Kim, JinTae;Kim, Sihyung;Kim, HarkSoo;Lee, Yeonsoo;Choi, Maengsic
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.125-129
    • /
    • 2017
  • A chatting system is a system that converses with a computer in the natural language people use. Owing to the characteristics of Korean, colloquial expressions often share the same meaning but take different surface forms. In this paper, using an attention-based encoder-decoder model, we propose input and output units suited to the characteristics of Korean that enable an effective generative model. In experiments with qualitative evaluation and ROUGE and BLEU scores, the index-term input units proposed in this paper outperformed morpheme-level input, and the system with syllable-level output produced fewer grammatical errors and more appropriate responses than pseudo-morpheme-level output.


Japanese Adults' Perceptual Categorization of Korean Three-way Distinction (한국어 3중 대립 음소에 대한 일본인의 지각적 범주화)

  • Kim, Jee-Hyun;Kim, Jung-Oh
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2005.05a
    • /
    • pp.163-167
    • /
    • 2005
  • Current theories of cross-language speech perception claim that patterns of perceptual assimilation of non-native segments to native categories predict relative difficulties in learning to perceive (and produce) non-native phones. Perceptual assimilation patterns by Japanese listeners of the three-way voicing distinction in Korean syllable-initial obstruent consonants were assessed directly. According to the Speech Learning Model (SLM) and the Perceptual Assimilation Model (PAM), the resulting perceptual assimilation pattern predicts relative difficulty in discriminating between lenis and aspirated consonants, and relative ease in discriminating fortis consonants. This study compared the effects of two different training conditions on Japanese adults' perceptual categorization of the Korean three-way distinction. In one condition, participants were trained to discriminate the lenis and aspirated consonants that were predicted to be problematic, whereas in the other condition participants were trained with all three classes of consonants. The observed 'learnability' did not seem to depend lawfully on the perceived cross-language similarity of Korean and Japanese consonants.

Letter Perception in Two-Syllable Korean Words: Comparison of an Interactive Activation Model to an Elementary Perceiver and Memorizer model (II) (두 음절 한글단어에 있어서 낱자의 지각: 상호작용활성화 모형과 초보지각자-기억자 모형의 비교 검증 (II))

  • Kim, Jung-Oh;Kim, Jae-Kap
    • Annual Conference on Human and Language Technology
    • /
    • 1990.11a
    • /
    • pp.235-246
    • /
    • 1990
  • Building on the experiments of Kim and Kim (1990), which tested McClelland and Rumelhart's (1981) interactive activation model and Feigenbaum and Simon's (1984) Elementary Perceiver and Memorizer (EPAM) model on letter perception in two-syllable Korean words, we conducted two experiments that introduced new stimulus words and variables. In the second syllable position of six-letter words, letters within a word were identified less accurately than letters presented alone (a word-inferiority effect); this result was not observed in five-letter two-syllable words, and the findings suggest that the underlying mechanism is the allocation of limited attentional capacity to the three-letter part of the word. The results of this study were handled better by the EPAM model than by the interactive activation model.


A Comparative study on the Effectiveness of Segmentation Strategies for Korean Word and Sentence Classification tasks (한국어 단어 및 문장 분류 태스크를 위한 분절 전략의 효과성 연구)

  • Kim, Jin-Sung;Kim, Gyeong-min;Son, Jun-young;Park, Jeongbae;Lim, Heui-seok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.12
    • /
    • pp.39-47
    • /
    • 2021
  • Constructing high-quality input features through effective segmentation is essential for increasing a language model's sentence comprehension, and improving their quality directly affects downstream-task performance. This paper comparatively studies segmentation strategies that effectively reflect the linguistic characteristics of Korean with regard to word and sentence classification. Four segmentation types are defined: eojeol, morpheme, syllable, and subchar, and pre-training is carried out using the RoBERTa model structure. By dividing tasks into a sentence group and a word group, we analyze the tendency within each group and the differences between the groups. The experimental results confirm the effectiveness of these schemes: the model with subchar-level segmentation outperforms the other strategies in sentence classification by up to +0.62% on NSMC, +2.38% on KorNLI, and +2.41% on KorSTS, while the model with syllable-level segmentation performs best in word classification, by up to +0.7% on NER and +0.61% on SRL.
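Three of the four segmentation granularities can be produced with the standard library alone; morpheme segmentation requires a Korean morphological analyzer, so it appears below only as a hand-segmented placeholder. The example sentence is our own, not from the paper:

```python
import unicodedata

sentence = "나는 학교에 간다"

# eojeol: whitespace-delimited units.
eojeol = sentence.split()
# morpheme: needs an analyzer (e.g. a MeCab-ko wrapper); hand-segmented here.
morpheme = ["나", "는", "학교", "에", "가", "ㄴ다"]
# syllable: individual Hangul syllable characters.
syllable = [ch for ch in sentence if ch != " "]
# subchar: syllables decomposed into conjoining jamo via NFD.
subchar = list(unicodedata.normalize("NFD", sentence.replace(" ", "")))

print(eojeol)        # → ['나는', '학교에', '간다']
print(syllable)      # → ['나', '는', '학', '교', '에', '간', '다']
print(len(subchar))  # → 17 jamo
```

Finer granularities shrink the vocabulary and reduce out-of-vocabulary tokens, which is one plausible reason the syllable- and subchar-level models do well in the paper's experiments.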