• Title/Summary/Keyword: 어절분석 (eojeol analysis)


Improvement of Korean Homograph Disambiguation using Korean Lexical Semantic Network (UWordMap) (한국어 어휘의미망(UWordMap)을 이용한 동형이의어 분별 개선)

  • Shin, Joon-Choul;Ock, Cheol-Young
    • Journal of KIISE / v.43 no.1 / pp.71-79 / 2016
  • Disambiguation of homographs is an important task in Korean semantic processing and has been researched for a long time. Recently, machine learning approaches have demonstrated good results in both accuracy and speed, while knowledge-based approaches are being researched to handle untrained words. This paper proposes a hybrid method, based on the machine learning approach, that uses a lexical semantic network: it creates an additional corpus from subcategorization information and trains on this additional corpus, and the homograph tagging phase uses the hypernyms of the homograph together with the additional corpus. Experiments with the Sejong Corpus and UWordMap demonstrate that the hybrid method is effective, increasing accuracy from 96.51% to 96.52%.
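The backoff idea described above, falling back to hypernym-level information from the lexical network when a homograph's context was never seen in training, can be sketched minimally. All words, senses, and counts below are invented for illustration; the real method uses UWordMap and a trained corpus model:

```python
# Hedged sketch: choose a homograph's sense by direct co-occurrence counts,
# backing off to counts aggregated at the sense's hypernym when the direct
# context was never seen in training. All data here is toy/illustrative.

# Toy training statistics: sense -> {context word: count}
direct_counts = {
    "bae/ship": {"sail": 4, "sea": 6},
    "bae/pear": {"eat": 5, "sweet": 3},
}
# Counts aggregated at each sense's hypernym (e.g. "vehicle", "fruit")
hypernym_counts = {
    "bae/ship": {"sail": 9, "sea": 12, "harbor": 7},
    "bae/pear": {"eat": 11, "sweet": 8, "juice": 4},
}

def disambiguate(context_words):
    """Score each sense by direct counts; back off to hypernym counts."""
    best, best_score = None, -1
    for sense in direct_counts:
        score = sum(direct_counts[sense].get(w, 0) for w in context_words)
        if score == 0:  # untrained context: back off to the hypernym level
            score = sum(hypernym_counts[sense].get(w, 0) for w in context_words)
        if score > best_score:
            best, best_score = sense, score
    return best

print(disambiguate(["sea", "sail"]))  # resolved by direct counts
print(disambiguate(["juice"]))        # resolved only via hypernym backoff
```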

Morphology Representation using STT API in Rasbian OS (Rasbian OS에서 STT API를 활용한 형태소 표현에 대한 연구)

  • Woo, Park-jin;Im, Je-Sun;Lee, Sung-jin;Moon, Sang-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.373-375 / 2021
  • In the case of Korean, tagging through word-level tokenization as in English offers less room for improvement than it does for English. Although a corpus tokenized into morpheme units via KoNLPy can be represented as a graph database, converting between the graph database and the corpus requires full separation of the voice files and verification of practicality. In this paper, morphology representation using an STT API is demonstrated on a Raspberry Pi. A voice file is converted to a corpus, analyzed with KoNLPy, and tagged. The analyzed results are represented as graph databases and can be divided into morpheme-level tokens; by assessing the practicality and the degree of separation, we judge that data mining extraction for specific purposes is possible.


Exploiting Chunking for Dependency Parsing in Korean (한국어에서 의존 구문분석을 위한 구묶음의 활용)

  • Namgoong, Young;Kim, Jae-Hoon
    • KIPS Transactions on Software and Data Engineering / v.11 no.7 / pp.291-298 / 2022
  • In this paper, we present a method for dependency parsing with chunking in Korean. Dependency parsing is the task of determining the governor of every word in a sentence. In Korean, parsers usually determine the syntactic governor, and the syntactic structure must then be transformed into a semantic structure for further processing, such as semantic analysis, in natural language processing. Deciding between the syntactic and the semantic governor is a notorious problem. For example, the syntactic governor of the word "먹고 (eat)" in the sentence "밥을 먹고 싶다 (would like to eat)" is "싶다 (would like to)", which is an auxiliary verb and therefore cannot be a semantic governor. To mitigate this somewhat, we propose Korean dependency parsing after chunking, the process of segmenting a sentence into constituents. A constituent is a word or a group of words that functions as a single unit within a dependency structure, called a chunk in this paper. Compared to traditional dependency parsing, the proposed method has two advantages: (1) the number of input units in parsing is reduced, so parsing can be faster; (2) parsing effectiveness can be improved by considering the relation between the two head words of chunks. Through experiments on the Sejong dependency corpus, we show that the UAS and LAS of the proposed method are 86.48% and 84.56%, respectively, and that the number of input units is reduced by about 22%p.
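Advantage (1) above, fewer input units after chunking, can be illustrated with a toy chunker that merges an auxiliary verb into a single chunk with its preceding main verb before parsing. The POS tags and the single merge rule here are simplified inventions, not the paper's actual chunk grammar:

```python
# Hedged sketch: merge a main verb followed by an auxiliary verb into one
# chunk, reducing the number of units the dependency parser must attach.
# Tags (NOUN, VERB, AUX) and the rule itself are simplified for illustration.

def chunk(tagged):  # tagged: list of (word, tag) pairs
    chunks, i = [], 0
    while i < len(tagged):
        if (tagged[i][1] == "VERB" and i + 1 < len(tagged)
                and tagged[i + 1][1] == "AUX"):
            # "먹고 싶다" -> one chunk whose head is the main verb
            chunks.append((tagged[i][0] + " " + tagged[i + 1][0], "VP"))
            i += 2
        else:
            chunks.append(tagged[i])
            i += 1
    return chunks

sent = [("밥을", "NOUN"), ("먹고", "VERB"), ("싶다", "AUX")]
print(chunk(sent))                        # [('밥을', 'NOUN'), ('먹고 싶다', 'VP')]
print(len(sent), "->", len(chunk(sent)))  # 3 -> 2 input units
```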

An Analysis of Linguistic Features in Science Textbooks across Grade Levels: Focus on Text Cohesion (과학교과서의 학년 간 언어적 특성 분석 -텍스트 정합성을 중심으로-)

  • Ryu, Jisu;Jeon, Moongee
    • Journal of The Korean Association For Science Education / v.41 no.2 / pp.71-82 / 2021
  • Learning efficiency can be maximized by carefully matching text features to expected reader features (i.e., linguistic and cognitive abilities, and background knowledge). The present study explores whether this principle is reflected in the development of science textbooks. We examined science textbook texts on 20 measures provided by Auto-Kohesion, a Korean language analysis tool. In addition to the surface-level features commonly used in previous text analysis studies (basic counts, word-related measures, syntactic complexity measures), the study also included cohesion-related features (noun overlap ratios, connectives, pronouns). The main findings show that the surface measures (e.g., word and sentence length, word frequency) generally increased in complexity with grade level, whereas most of the other measures, particularly the cohesion-related ones, did not vary systematically across grade levels. These results suggest that students in lower grades may experience learning difficulties and lowered motivation due to overly challenging texts, and that the textbooks are also unlikely to help students in higher grades develop the ability to process the demanding texts required for higher education. The study suggests that various text-related features, including cohesion-related measures, should be carefully considered in the textbook development process.
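One of the cohesion measures mentioned above, the noun overlap ratio, can be sketched as the proportion of adjacent sentence pairs that share at least one noun. Both this definition and the toy pre-extracted noun sets are illustrative assumptions, not the exact Auto-Kohesion metric:

```python
# Hedged sketch of a noun-overlap cohesion measure: the fraction of
# adjacent sentence pairs sharing at least one noun. Nouns are assumed
# to be pre-extracted; Auto-Kohesion's definition may differ.

def noun_overlap_ratio(sentences):
    """sentences: list of noun sets, one per sentence."""
    if len(sentences) < 2:
        return 0.0
    pairs = list(zip(sentences, sentences[1:]))
    overlapping = sum(1 for a, b in pairs if a & b)
    return overlapping / len(pairs)

text = [{"cell", "membrane"}, {"membrane", "protein"}, {"energy"}]
print(noun_overlap_ratio(text))  # 1 of 2 adjacent pairs overlap -> 0.5
```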

A Method of Word Sense Disambiguation for Korean Complex Noun Phrase Using Verb-Phrase Pattern and Predicative Noun (기계 번역 의미 대역 패턴을 이용한 한국어 복합 명사 의미 결정 방법)

  • Yang, Seong-Il;Kim, Young-Kil;Park, Sang-Kyu;Ra, Dong-Yul
    • Annual Conference on Human and Language Technology / 2003.10d / pp.246-251 / 2003
  • Due to the linguistic characteristics of Korean, sequences of nouns and function words appear frequently, and the frequent omission of function words and connective constructions gives rise to compound nouns. The handling of compound nouns has therefore been recognized as a very important problem in Korean analysis and has been actively studied. Determining the meaning of a compound noun requires selecting the head and its sense together, considering the semantic modification relations between the unit nouns within the compound noun phrase. In this paper, drawing on the treatment of predicative nouns within compound noun phrases used in index-term extraction for information retrieval, we present a method that determines the meaning of a compound noun using semantic co-occurrence information based on syntactic relations, rather than co-occurrence information of adjacent nouns. When a predicative noun appears within a compound noun phrase, the syntactic relation inside the phrase can be converted into a case-determination problem over a complement-predicate relation. This syntactic structure serves as additional information for determining noun senses, exploiting the semantic constraints built for syntactic structure analysis. The case-frame information used in structural analysis is built with semantic information as constraints in order to analyze the syntactic relations between a verb and its co-occurring nouns; this semantic case-frame information is then used to determine the cases of nouns within a simple clause and the senses of the nouns filling those cases. In this paper, we use verb-phrase patterns, the semantic transfer patterns built for clause-level target-word selection in Tellus-KE, a Korean-to-English machine translation system under development. When the clause-level semantic case information of Korean described in the verb-phrase patterns is used, the semantic constraints employed for case determination can be reused for head selection and sense determination of compound nouns, and by using semantic transfer patterns built semi-automatically from parallel corpora, we aim to alleviate the difficulty of data construction.
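The conversion described above, from a compound noun phrase containing a predicative noun into a complement-predicate case decision, can be sketched with a toy pattern table. The semantic classes and patterns below are invented for illustration; the actual system uses Tellus-KE verb-phrase patterns:

```python
# Hedged sketch: when a compound noun phrase ends in a predicative noun,
# look up a verb-phrase pattern keyed by (semantic class of the modifier
# noun, predicative noun) to decide the implicit case and the resolved
# sense. Semantic classes and patterns are toy values, not Tellus-KE data.

semantic_class = {"환경": "domain", "정부": "agent"}  # modifier noun -> class

# (modifier class, predicative noun) -> (implicit case, resolved sense)
vp_patterns = {
    ("domain", "오염"): ("object", "pollution-of-domain"),
    ("agent", "조사"): ("subject", "investigation-by-agent"),
}

def resolve_compound(modifier, predicative):
    """Return (case, sense) for a two-noun compound, or None if no match."""
    cls = semantic_class.get(modifier)
    return vp_patterns.get((cls, predicative))

print(resolve_compound("환경", "오염"))  # ('object', 'pollution-of-domain')
print(resolve_compound("정부", "조사"))  # ('subject', 'investigation-by-agent')
```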


A Comparative Study on Optimal Feature Identification and Combination for Korean Dialogue Act Classification (한국어 화행 분류를 위한 최적의 자질 인식 및 조합의 비교 연구)

  • Kim, Min-Jeong;Park, Jae-Hyun;Kim, Sang-Bum;Rim, Hae-Chang;Lee, Do-Gil
    • Journal of KIISE:Software and Applications / v.35 no.11 / pp.681-691 / 2008
  • In this paper, we have evaluated and compared the individual features and feature combinations necessary for statistical Korean dialogue act classification. We implemented a Korean dialogue act classification system using the Support Vector Machine method. The experimental results show that the POS bigram does not work well, and that the morpheme-POS pair and the other features are complementary to each other. In addition, a small number of features, selected by a feature selection technique such as chi-square, is enough to yield steady dialogue act classification performance. We also found that the last eojeol plays an important role in classifying an entire sentence, and that Korean characteristics such as free word order and frequent subject ellipsis can affect the performance of dialogue act classification.
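The chi-square feature selection mentioned above scores how strongly a binary feature is associated with a class from its 2x2 contingency table. A minimal sketch of that score (the paper's exact selection procedure is not specified here, and the counts are toy values):

```python
# Hedged sketch of chi-square feature scoring over a 2x2 contingency table:
# a = docs with the feature in the class, b = with the feature, other class,
# c = without the feature, in the class, d = without the feature, other class.

def chi_square(a, b, c, d):
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Toy counts: a hypothetical last-eojeol feature vs. the "question" act
print(round(chi_square(30, 5, 10, 55), 2))  # strongly associated feature
print(round(chi_square(25, 25, 25, 25), 2))  # independent feature -> 0.0
```

Features would then be ranked by this score and only the top few kept, matching the paper's observation that a small feature set suffices.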

A Study on Error Correction Using Phoneme Similarity in Post-Processing of Speech Recognition (음성인식 후처리에서 음소 유사율을 이용한 오류보정에 관한 연구)

  • Han, Dong-Jo;Choi, Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.6 no.3 / pp.77-86 / 2007
  • Recently, systems based on speech recognition interfaces, such as telematics terminals, are being developed. However, many errors still occur in speech recognition, and studies on error correction are being actively conducted. This paper proposes an error-correction method for the post-processing of speech recognition based on the features of Korean phonemes. To support this algorithm, we used a phoneme similarity measure that reflects these features. The phoneme similarity used in this paper is trained on mono-phoneme data, using MFCC and LPC to extract the features of each Korean phoneme, and uses the Bhattacharyya distance measure to compute the similarity between one phoneme and another. Using this phoneme similarity, an eojeol that could not be morphologically analyzed can be corrected, after which syllable recovery and morphological analysis are performed again. The experimental results show improvements of 7.5% and 5.3% for MFCC and LPC, respectively.
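The Bhattacharyya distance used above has a closed form for Gaussian feature distributions with diagonal covariance, a common simplification when modeling MFCC or LPC features. A sketch under that Gaussian assumption, with toy two-dimensional "phoneme" statistics:

```python
import math

# Hedged sketch: Bhattacharyya distance between two Gaussians with
# diagonal covariances (an assumed simplification for MFCC/LPC features):
# D = 1/8 (m1-m2)^T S^-1 (m1-m2) + 1/2 ln(det S / sqrt(det S1 * det S2)),
# where S = (S1 + S2) / 2.

def bhattacharyya(mean1, var1, mean2, var2):
    d = 0.0
    log_det_avg = log_det1 = log_det2 = 0.0
    for m1, v1, m2, v2 in zip(mean1, var1, mean2, var2):
        v = (v1 + v2) / 2
        d += (m1 - m2) ** 2 / v / 8
        log_det_avg += math.log(v)
        log_det1 += math.log(v1)
        log_det2 += math.log(v2)
    return d + 0.5 * (log_det_avg - 0.5 * (log_det1 + log_det2))

# Toy 2-D feature statistics (mean, variance) for two "phonemes"
p1 = ([1.0, 2.0], [0.5, 0.5])
p2 = ([1.5, 2.5], [0.6, 0.4])
print(bhattacharyya(*p1, *p2))  # small distance: acoustically similar
print(bhattacharyya(*p1, *p1))  # identical distributions -> 0.0
```

A small distance marks two phonemes as confusable, which is exactly what the correction step needs when substituting phonemes in an unanalyzable eojeol.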


Rule-based Speech Recognition Error Correction for Mobile Environment (모바일 환경을 고려한 규칙기반 음성인식 오류교정)

  • Kim, Jin-Hyung;Park, So-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.10 / pp.25-33 / 2012
  • In this paper, we propose a rule-based model to correct errors in speech recognition results in the mobile device environment. The proposed model accounts for the limited resources of mobile devices, such as processing time and memory, as follows. To minimize error-correction time, the model removes processing steps such as morphological analysis and syllable composition/decomposition. It also uses a longest-match rule selection method to generate one correction candidate per point at which an error is assumed to occur. To economize on memory, the model uses neither an eojeol dictionary nor a morphological analyzer, and stores a combined rule list without any classification. For ease of modification and maintenance, the error-correction rules are extracted automatically from a training corpus. Experimental results show that the proposed model improves precision by 5.27% and recall by 5.60%, measured at the eojeol level, on the speech recognition results.
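The longest-match rule selection described above can be sketched as follows: at each position in the recognized text, apply the longest stored substitution rule that matches, yielding exactly one candidate per error point. The rules here are invented toy examples, not ones extracted from the paper's training corpus:

```python
# Hedged sketch of longest-match, rule-based error correction: at each
# character position the longest matching rule wins, producing exactly
# one correction candidate. Rules below are toy examples.

rules = {"갔슴다": "갔습니다", "슴다": "습니다", "되요": "돼요"}
max_len = max(map(len, rules))

def correct(text):
    out, i = [], 0
    while i < len(text):
        for n in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + n] in rules:  # longest match first
                out.append(rules[text[i:i + n]])
                i += n
                break
        else:
            out.append(text[i])  # no rule applies: keep the character
            i += 1
    return "".join(out)

print(correct("집에 갔슴다"))  # '갔슴다' matches before the shorter '슴다'
```

Note that no dictionary lookup or morphological analysis is involved, which is the point of the design: a flat rule list and substring matching are cheap on a mobile device.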

Using Syntactic Unit of Morpheme for Reducing Morphological and Syntactic Ambiguity (형태소 및 구문 모호성 축소를 위한 구문단위 형태소의 이용)

  • Hwang, Yi-Gyu;Lee, Hyun-Young;Lee, Yong-Seok
    • Journal of KIISE:Software and Applications / v.27 no.7 / pp.784-793 / 2000
  • Conventional morphological analysis of Korean produces many morphological ambiguities because of the language's agglutinative nature. These ambiguities cause syntactic ambiguities and make it difficult to select the correct parse tree. The problem is mainly related to auxiliary predicates and bound nouns in Korean, which have a strong relationship with the surrounding morphemes, mostly functional morphemes that cannot stand alone; the combined morphemes play a syntactic or semantic role in the sentence. We extracted such morpheme sequences from 0.2 million tagged words and classified them into three types. We call such a sequence a syntactic morpheme and regard it as an input unit for syntactic analysis. This paper shows that the syntactic morpheme is an efficient means of 1) reducing morphological ambiguities, 2) eliminating unnecessary partial parse trees during parsing, and 3) reducing syntactic ambiguity. Finally, the experimental results show that the syntactic morpheme is an essential unit for reducing morphological and syntactic ambiguity.
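The merging idea above, collapsing a bound noun together with its surrounding functional morphemes into one unit before parsing, can be sketched like this. The tags follow Sejong-style conventions (ETM = adnominal ending, NNB = bound noun, JX = auxiliary particle), but the single merge pattern is a simplified assumption, not the paper's three-type classification:

```python
# Hedged sketch: collapse a bound noun plus the functional morphemes around
# it into one "syntactic morpheme" so the parser sees a single unit.
# The ETM + NNB (+ JX) pattern here is simplified for illustration.

def merge_syntactic_morphemes(morphs):  # morphs: list of (form, tag)
    out, i = [], 0
    while i < len(morphs):
        if (i + 1 < len(morphs) and morphs[i][1] == "ETM"
                and morphs[i + 1][1] == "NNB"):
            j = i + 2
            if j < len(morphs) and morphs[j][1] == "JX":
                j += 1  # absorb a trailing particle into the unit
            out.append(("".join(m[0] for m in morphs[i:j]), "SYN"))
            i = j
        else:
            out.append(morphs[i])
            i += 1
    return out

# "갈 수 있다" analyzed as 가/VV + ㄹ/ETM + 수/NNB + 있/VA + 다/EF
morphs = [("가", "VV"), ("ㄹ", "ETM"), ("수", "NNB"), ("있", "VA"), ("다", "EF")]
print(merge_syntactic_morphemes(morphs))  # 'ㄹ수' becomes one SYN unit
```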


Categorization of Korean News Articles Based on Convolutional Neural Network Using Doc2Vec and Word2Vec (Doc2Vec과 Word2Vec을 활용한 Convolutional Neural Network 기반 한국어 신문 기사 분류)

  • Kim, Dowoo;Koo, Myoung-Wan
    • Journal of KIISE / v.44 no.7 / pp.742-747 / 2017
  • In this paper, we propose a novel approach to improving the performance of a Convolutional Neural Network (CNN) word-embedding model built on top of word2vec, with the result of performing like doc2vec in a document classification task. The Word Piece Model (WPM) is empirically shown to outperform other tokenization methods, such as phrase units and a part-of-speech tagger, with substantial experimental evidence (classification rate: 79.5%). We then conducted an experiment classifying ten categories of news articles written in Korean, feeding the word and document vectors generated by applying WPM to the baseline model and to the proposed model. The proposed model showed a higher classification rate (89.88%) than its counterpart (86.89%), a 22.80% relative reduction in error rate. This research demonstrates that applying doc2vec to the document classification task yields more effective results, because doc2vec generates similar document vector representations for documents belonging to the same category.
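The closing claim above, that same-category documents get similar document vectors, can be illustrated with a much simpler stand-in: averaging toy word vectors into a document vector and classifying by cosine similarity to per-category centroids. This is neither doc2vec nor the paper's CNN, just a sketch of the underlying intuition with invented 2-D vectors:

```python
import math

# Hedged sketch: represent a document as the mean of its word vectors and
# classify by cosine similarity to category centroid vectors.
# The 2-D word vectors below are toy values, not trained embeddings.

word_vecs = {
    "goal": [1.0, 0.1], "match": [0.9, 0.2],    # sports-like words
    "stock": [0.1, 1.0], "market": [0.2, 0.9],  # economy-like words
}

def doc_vec(words):
    vs = [word_vecs[w] for w in words if w in word_vecs]
    return [sum(c) / len(vs) for c in zip(*vs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

centroids = {"sports": [1.0, 0.15], "economy": [0.15, 1.0]}

def classify(words):
    v = doc_vec(words)
    return max(centroids, key=lambda c: cosine(v, centroids[c]))

print(classify(["goal", "match"]))    # sports
print(classify(["stock", "market"]))  # economy
```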