• Title/Summary/Keyword: Korean grammar correction (한국어 문법 교정)


Improving a Korean Spell/Grammar Checker for the Web-Based Language Learning System (웹기반 언어 학습시스템을 위한 한국어 철자/문법 검사기의 성능 향상)

  • 남현숙;김광영;권혁철
    • Korean Journal of Cognitive Science / v.12 no.3 / pp.1-18 / 2001
  • The goal of this paper is the pedagogical application of a Korean Spell/Grammar Checker to a web-based language learning system for Korean writing. To maximize the instructional efficiency of our learning system 'Urimal Baeumteo', we have to improve our Korean Spell/Grammar Checker. Today an NLP system's performance depends on its semantic processing capability. In our Korean Spell/Grammar Checker, the tasks accomplished at the semantic level are the detection and correction of misused derived and compound nouns in the Korean spell-checking component, and the detection and correction of syntactic and semantic errors in the Korean grammar-checking component. We describe a common approach to partial parsing using collocation rules based on dependency grammar. To provide more detailed semantic rules, we classified nouns according to their concepts and subcategorized verbs according to their syntactic and semantic features. Improving the Korean Spell/Grammar Checker makes our learning system active and intelligent in a web-based environment. We acknowledge the flaws in our system: the classification of nouns based on their meanings and concepts is a time-consuming task, and the analytic unit of this study is principally limited to phrases within a sentence, so the accurate parsing of embedded sentences remains a difficult problem to solve. Concerning the web-based language learning system, it is critically important to consider its interface design and the structure of its contents.

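To make the collocation-rule idea above concrete, here is a minimal sketch of how rules over (argument, case, predicate) dependency triples might flag semantic misuse. The rule table, concept dictionary, and example triples are hypothetical illustrations, not data from the paper.

```python
# Hypothetical collocation rules: predicate -> {case particle: allowed noun concepts}.
COLLOCATION_RULES = {
    "마시다": {"을/를": {"액체"}},   # 'drink' expects a liquid as its object
    "신다":   {"을/를": {"신발"}},   # 'put on (footwear)' expects footwear
}

# Toy concept dictionary standing in for the paper's concept-based noun classification.
NOUN_CONCEPTS = {"물": "액체", "커피": "액체", "구두": "신발", "모자": "모자류"}

def check_triple(noun: str, case: str, predicate: str):
    """Return None if the collocation is acceptable, otherwise an error message."""
    frame = COLLOCATION_RULES.get(predicate)
    if frame is None or case not in frame:
        return None                       # no rule applies -> assume acceptable
    concept = NOUN_CONCEPTS.get(noun)
    if concept in frame[case]:
        return None
    return f"'{noun} {case} {predicate}': concept '{concept}' not in {frame[case]}"

if __name__ == "__main__":
    # '모자를 신다' ("put on a hat" with the footwear verb) should be flagged.
    for triple in [("커피", "을/를", "마시다"), ("모자", "을/를", "신다")]:
        print(triple, "->", check_triple(*triple) or "OK")
```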

Analysis of Predicate/Arguments Syntactico-Semantic Relation for the Extension of a Korean Grammar Checker (한국어 문법 검사기의 기능 확장을 위한 서술어와 논항의 통사.의미적 관계 분석)

  • Nam, Hyeon-Suk;Son, Hun-Seok;Choi, Seong-Pil;Park, Yong-Uk;So, Gil-Ja;Gwon, Hyeok-Cheol
    • Annual Conference on Human and Language Technology / 1997.10a / pp.403-408 / 1997
  • Checking and correcting semantic and stylistic errors, which reflect the internal properties of a language, is far more difficult and complex than simple spelling checking and correction, which concerns only the morphological side of the language. The noun classification method using semantic information proposed in this paper is one way to improve the detection and correction of semantic and stylistic errors. This paper describes the process of building semantic rules by applying the noun semantic classification to the analysis of syntactico-semantic relations between predicates and their arguments, in order to correct predicates whose usage is contextually inappropriate. Thesaurus techniques and a semantic network are applied to classify the semantic information of argument nouns systematically. By checking and correcting semantic and stylistic errors according to the syntactico-semantic relations between predicates and arguments, rules can be built in a generalized form and existing rules can be simplified, thereby extending the functionality of the Korean grammar checker.

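A minimal sketch of the generalization idea described above: a rule names a semantic class rather than individual nouns, and the checker walks a thesaurus-style hypernym chain to decide whether an argument noun belongs to that class. The toy hierarchy and the single rule are assumptions for illustration.

```python
# Toy thesaurus: child -> parent (hypernym). Not the thesaurus or semantic network
# actually used in the paper.
HYPERNYM = {
    "소주": "술", "맥주": "술", "술": "음료", "물": "음료", "음료": "물질",
    "의자": "가구", "가구": "물질",
}

def has_class(noun: str, sem_class: str) -> bool:
    """True if sem_class appears on the noun's hypernym chain (including itself)."""
    node = noun
    while node is not None:
        if node == sem_class:
            return True
        node = HYPERNYM.get(node)
    return False

# One generalized semantic rule: the object of '마시다' (drink) must be a 음료 (beverage).
RULES = [("마시다", "object", "음료")]

def check(predicate: str, role: str, noun: str) -> str:
    for pred, r, sem_class in RULES:
        if pred == predicate and r == role and not has_class(noun, sem_class):
            return f"semantic error: '{noun}' is not a {sem_class}, so it cannot be the {r} of '{pred}'"
    return "OK"

if __name__ == "__main__":
    print(check("마시다", "object", "맥주"))   # OK: 맥주 -> 술 -> 음료
    print(check("마시다", "object", "의자"))   # flagged: furniture, not a beverage
```

One class-level rule covers every noun under that class, which is exactly what lets existing noun-by-noun rules be simplified.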

A Korean Grammar Checker based on the Trees Resulted from a Full Parser (전체 문장 분석에 기반한 한국어 문법 검사기)

  • 이공주;황선영;김지은
    • Journal of KIISE:Software and Applications / v.30 no.10 / pp.992-999 / 2003
  • The purpose of a grammar checker is to find grammatically erroneous expressions in a sentence and to provide appropriate suggestions for them. To find these errors, a grammar checker should parse the whole input sentence, which is a highly time-consuming job. For this reason, most Korean grammar checkers adopt a partial parser that can analyze a fragment of a sentence without ambiguity. This paper presents a Korean grammar checker that uses a full parser to find grammatical errors. This approach allows the grammar checker to critique errors between two words in a long-distance relationship within a sentence. As a result, this approach improves the accuracy of error correction, but it may come at the expense of a decrease in performance. The Korean grammar checker described in this paper is implemented with 65 rules for checking and correcting grammatical errors. The grammar checker shows 96.49% checking accuracy against a test corpus of 7 million words.
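
The following sketch illustrates, under assumed data structures, how a checking rule can exploit a full parse tree to relate two words that are far apart in the surface sentence; the tree format and the single honorific-agreement rule are hypothetical stand-ins for the paper's 65 rules.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    word: str
    role: str                                    # e.g. "SUBJ", "OBJ", "PRED", "MOD"
    feats: set = field(default_factory=set)
    children: List["Node"] = field(default_factory=list)

def find(node: Node, role: str) -> Optional[Node]:
    """Depth-first search for the first node with the given grammatical role."""
    if node.role == role:
        return node
    for child in node.children:
        hit = find(child, role)
        if hit is not None:
            return hit
    return None

def honorific_agreement(root: Node) -> Optional[str]:
    """Flag an honorific subject whose (possibly distant) predicate lacks '-si-'."""
    subj, pred = find(root, "SUBJ"), find(root, "PRED")
    if subj and pred and "honorific" in subj.feats and "si_ending" not in pred.feats:
        return f"'{subj.word}' is honorific but '{pred.word}' has no honorific ending"
    return None

if __name__ == "__main__":
    # Subject and predicate can be far apart on the surface; the tree links them directly.
    tree = Node("잔다", "PRED", feats=set(), children=[
        Node("할아버지께서", "SUBJ", feats={"honorific"}),
        Node("어제", "MOD"), Node("소파에서", "MOD"),
    ])
    print(honorific_agreement(tree) or "OK")
```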

The error character Revision System of the Korean using Sememe (의미소를 이용한 한국어 오류 문자 교정 시스템)

  • 박현재;박해선;강원일;손영선
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09b / pp.31-34 / 2003
  • Existing Korean spelling correction systems handle errors in a sentence using the grammatical information of the sentence or collocation relationships. In this paper, we propose a system that corrects mistyped characters in a simple sentence using the relationships among sememes and, when a typo causes a semantic error, substitutes an appropriate word matching the intended meaning. Substantives form a meaning tree according to word senses, and predicates define semantic relations with the substantives of the subject and the object. In a sentence containing an error, the system compares and analyzes these semantic relations: when a substantive of the subject or object is wrong, the mutual semantic relations defined from the predicate are used; when the predicate is wrong, those defined from the subject and object substantives are used; and when a modifier is wrong, those defined from the substantives or the predicate are used to correct a single-character typo. When a semantic error caused by a typo is detected, the same spelling correction method is applied.

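A minimal sketch of the single-character correction idea: when an object noun violates the predicate's semantic constraint, candidates differing by one syllable are generated and the first candidate that satisfies the constraint is adopted. The lexicon, constraint table, and syllable inventory are made-up examples, not the system's data.

```python
LEXICON = {"사과": "과일", "사자": "동물", "수박": "과일"}   # noun -> semantic class
OBJECT_CONSTRAINT = {"먹다": {"과일"}}                      # toy constraint: 'eat' expects fruit here
SYLLABLES = ["과", "자", "박", "수", "사"]                   # toy replacement inventory

def satisfies(noun: str, predicate: str) -> bool:
    allowed = OBJECT_CONSTRAINT.get(predicate)
    return allowed is None or LEXICON.get(noun) in allowed

def one_char_candidates(word: str):
    """All in-lexicon words obtained by changing exactly one syllable of `word`."""
    for i in range(len(word)):
        for syl in SYLLABLES:
            cand = word[:i] + syl + word[i + 1:]
            if cand != word and cand in LEXICON:
                yield cand

def correct_object(noun: str, predicate: str) -> str:
    if satisfies(noun, predicate):
        return noun                                  # nothing to do
    for cand in one_char_candidates(noun):
        if satisfies(cand, predicate):
            return cand                              # first semantically valid repair
    return noun                                      # give up, keep the original

if __name__ == "__main__":
    # "사자를 먹다" (eat a lion) is semantically odd; the repair yields "사과" (apple).
    print(correct_object("사자", "먹다"))
```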

Context-sensitive Spelling Correction using Measuring Relationship between Words (단어 간 연관성 측정을 통한 문맥 철자오류 교정)

  • Choi, Sung-Ki;Kim, Minho;Kwon, Hyuk-Chul
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1362-1365 / 2013
  • Error words appearing in Korean text can be broadly divided into simple spelling errors and context-sensitive spelling errors. Among these, context-sensitive spelling errors can only be identified by considering the semantic and syntactic relations of the context, which makes them the hardest spelling errors to correct. Context-sensitive spelling errors can be further classified into errors caused by phonetic similarity, typing errors, grammatical errors, and spacing errors. In this study, we regard context-sensitive spelling errors caused by typing errors as a problem analogous to word-sense disambiguation and propose a statistical correction method based on correction word pairs. For each pre-built correction word pair, the semantic association between each word of the pair and the surrounding context is measured statistically to detect and correct context-sensitive spelling errors. Applying the proposed method, all three correction word pairs showed accuracy exceeding 90%.
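
The sketch below illustrates the correction-word-pair idea with a PMI-style association score: each member of a pre-built pair is scored against the surrounding context words and the stronger match is kept. The counts, the smoothing constant, and the example pair 결재/결제 are assumptions; the paper's actual association measure and statistics are not shown.

```python
import math

# Toy statistics: candidate -> {context word: co-occurrence count}, plus marginals.
COOC = {
    "결재": {"서류": 40, "상사": 25, "승인": 30, "카드": 2},
    "결제": {"카드": 50, "금액": 35, "온라인": 20, "서류": 3},
}
CAND_FREQ = {"결재": 120, "결제": 150}
CTX_FREQ = {"서류": 80, "상사": 40, "승인": 45, "카드": 70, "금액": 50, "온라인": 30}
TOTAL = 10_000   # pretend corpus size

def pmi(cand: str, ctx: str) -> float:
    """Pointwise mutual information with a crude 0.5 smoothing on joint counts."""
    p_joint = (COOC[cand].get(ctx, 0) + 0.5) / TOTAL
    p_cand, p_ctx = CAND_FREQ[cand] / TOTAL, CTX_FREQ[ctx] / TOTAL
    return math.log2(p_joint / (p_cand * p_ctx))

def choose(pair, context_words):
    """Keep the pair member with the larger summed association with the context."""
    scores = {c: sum(pmi(c, w) for w in context_words if w in CTX_FREQ) for c in pair}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    # Context about cards and amounts should favour '결제' (payment) over '결재' (approval).
    best, scores = choose(("결재", "결제"), ["카드", "금액", "온라인"])
    print(best, {k: round(v, 2) for k, v in scores.items()})
```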

PEEP-Talk: Deep Learning-based English Education Platform for Personalized Foreign Language Learning (PEEP-Talk: 개인화 외국어 학습을 위한 딥러닝 기반 영어 교육 플랫폼)

  • Lee, SeungJun;Jang, Yoonna;Park, Chanjun;Kim, Minwoo;Yahya, Bernardo N;Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2021.10a / pp.293-299 / 2021
  • This paper proposes PEEP-Talk (Personalized English Education Platform), a deep learning-based English education platform for foreign language learning. PEEP-Talk is an educational platform with a built-in deep learning-based persona dialogue system and English grammar correction feedback. Unlike existing persona dialogue systems, it also introduces a CD (Context Detector) module that automatically detects when the conversation drifts off topic and changes the dialogue topic in real time, giving users the feeling of talking with a real person. The paper verifies the robustness of PEEP-Talk through a quantitative analysis of each module and through CDM (Context Detector Metric), a new evaluation metric for objectively assessing the CD module. In addition, PEEP-Talk has been deployed as a KakaoTalk channel.

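As a rough illustration of what a context detector does, the sketch below compares the latest utterance with the current topic and switches topics when the match is weak. PEEP-Talk's CD module is a deep learning model; this bag-of-words cosine similarity, the topic keyword lists, and the threshold are only hypothetical stand-ins.

```python
from collections import Counter
import math

TOPICS = {
    "travel": ["trip", "flight", "hotel", "airport", "vacation"],
    "food":   ["restaurant", "menu", "delicious", "recipe", "dinner"],
}
SWITCH_THRESHOLD = 0.10   # assumed cutoff below which the utterance counts as off topic

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def detect(utterance: str, current_topic: str) -> str:
    """Return the topic to continue with after seeing `utterance`."""
    words = Counter(utterance.lower().split())
    if cosine(words, Counter(TOPICS[current_topic])) >= SWITCH_THRESHOLD:
        return current_topic                          # conversation is still on topic
    # Off topic: move to whichever topic matches the utterance best.
    return max(TOPICS, key=lambda t: cosine(words, Counter(TOPICS[t])))

if __name__ == "__main__":
    print(detect("the hotel near the airport was great", "travel"))   # stays on travel
    print(detect("that restaurant had a delicious menu", "travel"))   # switches to food
```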

English Sentence Critique Using Extended Verb Pattern (확장된 동사형을 이용한 영어문장 검사기)

  • Cha, Eui-Young;Kim, Young-Taek
    • Annual Conference on Human and Language Technology / 1992.10a / pp.491-501 / 1992
  • In transfer-based machine translation, the most important part is the transfer stage, where the transfer dictionary plays a crucial role. The quality and accuracy of English sentences produced by humans or by machine translators are therefore determined by the contents of their verb dictionaries and by efficient generation algorithms. Existing English grammar checkers that check such sentences were built mainly for native English speakers and omit important grammatical rules, which makes them unsuitable for non-native users. This paper proposes a verb dictionary based on extended verb patterns and implements an English sentence checker that uses it to check and correct sentences translated by humans or generated by machine translators.

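A minimal sketch of checking a sentence against a verb-pattern dictionary: each verb lists the complement patterns it allows, and a mismatch is reported. The pattern notation, the dictionary entries, and the crude complement guesser are illustrative assumptions, not the extended verb patterns proposed in the paper.

```python
VERB_PATTERNS = {
    # verb -> allowed complement patterns (very coarse notation)
    "enjoy":   {"V-ing", "NP"},       # enjoy swimming / enjoy the party
    "discuss": {"NP"},                # discuss the plan (not "discuss about ...")
    "want":    {"to-V", "NP"},        # want to go / want a coffee
}

def complement_pattern(tokens):
    """Very rough guess at the complement pattern right after the verb."""
    if not tokens:
        return "NONE"
    if tokens[0] == "to":
        return "to-V"
    if tokens[0].endswith("ing"):
        return "V-ing"
    if tokens[0] in {"about", "with", "on", "in", "at"}:
        return f"PP({tokens[0]})"
    return "NP"

def check_sentence(tokens):
    for i, tok in enumerate(tokens):
        if tok in VERB_PATTERNS:
            pattern = complement_pattern(tokens[i + 1:])
            if pattern not in VERB_PATTERNS[tok]:
                return f"'{tok}' does not take {pattern}; expected one of {sorted(VERB_PATTERNS[tok])}"
    return "OK"

if __name__ == "__main__":
    print(check_sentence("they discuss about the schedule".split()))  # flagged
    print(check_sentence("we enjoy swimming in summer".split()))      # OK
```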

The error character Revision System of the Korean using Semantic relationship of sentence component (문장 성분의 의미 관계를 이용한 한국어 오류 문자 교정 시스템)

  • Park, Hyun-Jae;Park, Hae-Sun;Kang, One-Il;Sohn, Young-Sun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.1 / pp.28-32 / 2004
  • Until now, Korean spelling proofreading systems have corrected the words of a sentence using collocation relationships or the grammatical information of the sentence. In this paper, we propose a system that corrects a word using the relationships among the sememes in a single sentence and substitutes an apt word for a word whose meaning is wrong because of mistyping. The proposed system makes several sentences that are able to communicate with each sememe. The substantives form a meaning tree according to the meaning of the word, and the predicate of a sentence defines the semantic relationship with the substantives of the subject and the object. After the system compares and analyzes these semantic relationships, it corrects the mistyped character of a word in a single sentence that includes an error. If the system finds a semantic error caused by mistyping, it applies the spelling proofreading method proposed in this paper.
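
A small sketch of the "judge the faulty component from the others" idea described above: the predicate constrains its subject and object, and whichever single component disagrees with the other two is treated as the error. The frames and word classes are hypothetical examples, not the system's data.

```python
# Hypothetical predicate frames: predicate -> (allowed subject classes, allowed object classes).
PRED_FRAMES = {
    "먹다": ({"사람", "동물"}, {"음식"}),
    "읽다": ({"사람"}, {"문서"}),
}
NOUN_CLASS = {"아이": "사람", "개": "동물", "사과": "음식", "신문": "문서"}

def fits(noun: str, allowed: set) -> bool:
    return NOUN_CLASS.get(noun) in allowed

def diagnose(subject: str, obj: str, predicate: str) -> str:
    """Decide which single component looks wrong, judged against the other two."""
    frame = PRED_FRAMES.get(predicate)
    if frame is None:
        # Unknown or garbled predicate: if subject and object jointly fit some
        # known frame, the predicate itself is the likely error.
        for pred, (subj_ok, obj_ok) in PRED_FRAMES.items():
            if fits(subject, subj_ok) and fits(obj, obj_ok):
                return f"predicate looks wrong; '{pred}' would fit this subject and object"
        return "cannot diagnose"
    subj_ok, obj_ok = frame
    if fits(subject, subj_ok) and fits(obj, obj_ok):
        return "OK"
    if fits(subject, subj_ok):
        return "object looks wrong (judged from the predicate)"
    if fits(obj, obj_ok):
        return "subject looks wrong (judged from the predicate)"
    return "multiple components look inconsistent"

if __name__ == "__main__":
    print(diagnose("아이", "사과", "먹다"))    # OK
    print(diagnose("아이", "사과", "읽다"))    # object looks wrong
    print(diagnose("아이", "신문", "보았닥"))  # unknown predicate -> suggest '읽다'
```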

Generalization of error decision rules in a grammar checker using Korean WordNet, KorLex (명사 어휘의미망을 활용한 문법 검사기의 문맥 오류 결정 규칙 일반화)

  • So, Gil-Ja;Lee, Seung-Hee;Kwon, Hyuk-Chul
    • The KIPS Transactions:PartB / v.18B no.6 / pp.405-414 / 2011
  • Korean grammar checkers typically detect context-dependent errors by employing heuristic rules that are manually formulated by a language expert. These rules are appended each time a new error pattern is detected. However, such grammar checkers are not consistent. In order to resolve this shortcoming, we propose a new method for generalizing the error decision rules that detect these errors. For this purpose, we use the existing thesaurus KorLex, the Korean version of Princeton WordNet. KorLex has hierarchical word senses for nouns, but does not contain any information about the relationships between cases in a sentence. Through the Tree Cut Model and the MDL (minimum description length) model based on information theory, we extract noun classes from KorLex and generalize error decision rules from these noun classes. In order to verify the accuracy of the new method in an experiment, we extracted nouns used as objects of four frequently confused predicates from a large corpus, and subsequently extracted noun classes from these nouns. We found that the number of error decision rules generalized from these noun classes decreased to about 64.8%. In conclusion, the precision of our grammar checker exceeds that of conventional ones by 6.2%.
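
The sketch below shows, in simplified form, how noun classes can be chosen by cutting a hypernym tree under an MDL criterion, in the spirit of the Tree Cut Model the paper applies to KorLex. The tiny hierarchy, the counts, and the description-length formula are simplified assumptions, not the paper's exact formulation.

```python
import math

# Toy hypernym tree: node -> children; leaves are nouns observed as objects of a
# predicate, with their (made-up) corpus counts.
TREE = {
    "entity": ["beverage", "furniture"],
    "beverage": ["water", "beer", "juice"],
    "furniture": ["chair", "desk"],
}
COUNTS = {"water": 30, "beer": 25, "juice": 20, "chair": 1, "desk": 0}

def leaves(node):
    kids = TREE.get(node)
    return [node] if not kids else [l for k in kids for l in leaves(k)]

N = sum(COUNTS.values())

def class_data_cost(node):
    """Code length of the data if `node` is one class (uniform probability within the class)."""
    ls = leaves(node)
    count = sum(COUNTS[l] for l in ls)
    if count == 0:
        return 0.0
    p_leaf = (count / N) / len(ls)               # class probability spread over its members
    return -count * math.log2(p_leaf)

def best_cut(node):
    """Return (cut, description length) minimizing data cost + a per-class model cost."""
    model_cost_per_class = math.log2(N) / 2
    keep = ([node], class_data_cost(node) + model_cost_per_class)
    kids = TREE.get(node)
    if not kids:
        return keep
    child_cuts = [best_cut(k) for k in kids]
    split = ([c for cut, _ in child_cuts for c in cut],
             sum(dl for _, dl in child_cuts))
    return min(keep, split, key=lambda x: x[1])

if __name__ == "__main__":
    cut, dl = best_cut("entity")
    # The frequent beverage nouns are generalized to 'beverage'; the rare furniture
    # nouns are not worth separate classes either, so the cut stays high in the tree.
    print("chosen noun classes:", cut, "description length:", round(dl, 1))
```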

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis to grasp the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In the sentiment analysis of English texts by deep learning, natural language sentences included in training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. They have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, which is a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit of Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word-vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise at this point. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word-vector model, which primarily relies on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying various deep learning models to Korean texts. As a starting point, we summarized these issues into three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme-vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we get a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these research questions, we generate various types of morpheme vectors reflecting the research questions and then compare the classification accuracy through a non-static CNN (Convolutional Neural Network) model taking in the morpheme vectors. As training and test datasets, Naver Shopping's 17,260 cosmetics product reviews are used.
To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, the latter arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of data preprocessing, namely only splitting sentences or additionally correcting spelling and spacing after sentence separation. Third, they vary in the form of input fed into the word-vector model: whether the morphemes are entered by themselves or with their POS tags attached. The morpheme vectors further vary depending on the considered range of POS tags, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. It seems that utilizing same-domain text even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS-tag attachment, which is devised for the high proportion of homonyms in Korean, and the minimum-frequency threshold for a morpheme to be included do not seem to have any definite influence on the classification accuracy.
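
A minimal sketch of deriving morpheme vectors as the study describes: each review is split into morphemes (optionally with POS tags attached) and a CBOW Word2Vec model is trained with a context window of 5 and 300 dimensions. The Okt analyzer, the input file name, and the min_count cutoff are assumptions for illustration, not the study's actual setup.

```python
from konlpy.tag import Okt          # assumed analyzer; any Korean morphological analyzer works
from gensim.models import Word2Vec

okt = Okt()

def to_morphemes(sentence: str, attach_pos: bool = True):
    """Tokenize a sentence into morphemes, optionally as 'morpheme/POS' strings."""
    pairs = okt.pos(sentence)
    return [f"{m}/{t}" for m, t in pairs] if attach_pos else [m for m, _ in pairs]

# Hypothetical input file: one product review per line.
with open("reviews.txt", encoding="utf-8") as f:
    corpus = [to_morphemes(line.strip()) for line in f if line.strip()]

# CBOW (sg=0), context window 5, 300-dimensional vectors, as described in the study;
# min_count=5 is an assumed cutoff for rare morphemes.
model = Word2Vec(sentences=corpus, vector_size=300, window=5, sg=0, min_count=5)

# The resulting morpheme vectors (model.wv) can then initialize the embedding layer
# of a CNN sentiment classifier. Example lookup with the first morpheme seen:
first = corpus[0][0]
if first in model.wv:
    print(model.wv.most_similar(first, topn=5))
```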