• Title/Summary/Keyword: Lexical Ambiguity Resolution (어휘 중의성 해소)


A Study of Prosodic Features of Causative and Passive Verbs in Kyungsang Dialect (경상도말 피사동어휘의 운율 특징)

  • Park Hansang
    • Proceedings of the KSPS conference / 1996.02a / pp.183-191 / 1996
  • Korean causative and passive sentences are realized mainly by causative and passive verbs. A peculiarity of Korean syntax is that passive and causative sentences can take the same form, giving rise to ambiguous sentences. Causative and passive verbs are derived by attaching causative/passive suffixes to adjective and verb stems; what is notable about this derivation is that causative verbs can be formed by attaching a causative suffix to adjectives, and passive verbs by attaching a passive suffix to intransitive verbs. The purpose of this paper is to examine what prosodic characteristics these syntactic and morphological properties of causative and passive verbs show in the Kyungsang dialect. Five students in their mid-twenties, born and raised in Busan, were asked to read five pairs of causative/passive sentences, and the results were examined. Causative verbs in the Kyungsang dialect show an H+H(M)+L tone pattern, while passive verbs show an M+H+L pattern; in addition, the passive suffix in the Kyungsang dialect is relatively longer than the causative suffix. These findings support the account that treats passive and causative as one category, with the passive verb analyzed as a causative verb plus a passive suffix, and it is noteworthy that the ambiguity of formally identical sentences is resolved by prosody. Whether there is a systematic correspondence diachronically with the tonal change from Middle Korean to Modern Korean, and synchronically between the causatives/passives of Seoul Korean and the Kyungsang dialect, remains a topic for future research. (A minimal illustrative sketch follows this entry.)

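The abstract above reports concrete prosodic cues: an H+H(M)+L tone pattern for causatives, an M+H+L pattern for passives, and a relatively longer passive suffix. As a rough illustration only, the following Python sketch shows how such cues could drive a rule-based causative/passive classifier; the tone labels, the duration unit, and the threshold are assumptions for the example, not the paper's measurements.

```python
# Hypothetical rule-based classifier built from the reported prosodic cues.
from dataclasses import dataclass

@dataclass
class VerbToken:
    tones: tuple             # per-syllable tone labels, e.g. ("H", "H", "L")
    suffix_duration: float   # suffix length in ms (assumed unit)

def classify_voice(token: VerbToken, duration_threshold: float = 120.0) -> str:
    """Guess causative vs. passive from the tone pattern, then from suffix length."""
    if token.tones[:1] == ("H",) and token.tones[-1] == "L":
        return "causative"          # H+H(M)+L pattern
    if token.tones[:1] == ("M",) and token.tones[-1] == "L":
        return "passive"            # M+H+L pattern
    # Tone pattern inconclusive: passive suffixes were reported as relatively longer.
    return "passive" if token.suffix_duration > duration_threshold else "causative"

print(classify_voice(VerbToken(("H", "H", "L"), 90.0)))   # -> causative
print(classify_voice(VerbToken(("M", "H", "L"), 150.0)))  # -> passive
```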

Development and Automatic Extraction of Subcategorization Dictionary (하위범주화 사전의 구축 및 자동 확장)

  • 이수선;박현재;우요섭
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.179-181 / 2000
  • A subcategorization dictionary was constructed to resolve syntactic and semantic ambiguity in Korean. A standard for the sentence patterns that each predicate can restrict and for semantic role information was defined and added to the entries, and semantic markers were assigned according to the semantic collocation relations between predicates and nouns so that the subcategorization dictionary interoperates with a hierarchical thesaurus-based semantic dictionary holding noun senses. To verify to what extent the implemented subcategorization dictionary resolves syntactic and lexical ambiguity, it was evaluated against a semi-automatically sense-tagged corpus and a syntactically parsed corpus. In this process, frequency information on subcategorization patterns, collocation information, and statistical co-occurrence information between each semantic role and predicate were extracted automatically and added to the dictionary, and an algorithm that automatically extends the subcategorization dictionary on the basis of this information was applied. (A toy sketch of such a dictionary and its extension follows this entry.)

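To make the dictionary structure and the automatic extension step concrete, here is a toy Python sketch under assumed data formats: seed subcategorization entries, frequency and collocation counts gathered from a tiny "parsed corpus", and the addition of unseen patterns that pass a frequency threshold. The frame notation, role labels, and threshold are illustrative, not the paper's standard.

```python
# Toy subcategorization dictionary plus automatic extension from corpus counts.
from collections import Counter, defaultdict

# Hand-built seed entries: verb -> list of (pattern, semantic roles).
subcat_dict = {
    "먹다": [("NP-가 NP-를 V", ("Agent", "Theme"))],
    "가다": [("NP-가 NP-에 V", ("Agent", "Goal"))],
}

# Toy "parsed corpus": (verb, observed pattern, head nouns of the arguments).
parsed_corpus = [
    ("먹다", "NP-가 NP-를 V", ("아이", "밥")),
    ("먹다", "NP-가 NP-를 V", ("고양이", "생선")),
    ("가다", "NP-가 NP-에 V", ("학생", "학교")),
    ("가다", "NP-가 NP-로 V", ("버스", "부산")),   # pattern not yet in the dictionary
]

pattern_freq = defaultdict(Counter)     # verb -> pattern -> frequency
collocations = defaultdict(Counter)     # verb -> argument noun -> frequency

for verb, pattern, args in parsed_corpus:
    pattern_freq[verb][pattern] += 1
    collocations[verb].update(args)

# Automatic extension: add any sufficiently frequent unseen pattern.
MIN_FREQ = 1  # illustrative threshold
for verb, counts in pattern_freq.items():
    known = {p for p, _ in subcat_dict.get(verb, [])}
    for pattern, freq in counts.items():
        if pattern not in known and freq >= MIN_FREQ:
            subcat_dict.setdefault(verb, []).append((pattern, ("?",)))  # roles filled later

print(subcat_dict["가다"])
print(collocations["먹다"].most_common(2))
```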

Improvement of Korean Homograph Disambiguation using Korean Lexical Semantic Network (UWordMap) (한국어 어휘의미망(UWordMap)을 이용한 동형이의어 분별 개선)

  • Shin, Joon-Choul;Ock, Cheol-Young
    • Journal of KIISE / v.43 no.1 / pp.71-79 / 2016
  • Disambiguation of homographs is an important task in Korean semantic processing and has been researched for a long time. Recently, machine learning approaches have demonstrated good results in accuracy and speed, while knowledge-based approaches are being studied for untrained words. This paper proposes a hybrid method, based on the machine learning approach, that uses a lexical semantic network. The hybrid approach creates an additional corpus from subcategorization information and trains on this additional corpus, and the homograph tagging phase uses the hypernym of the homograph together with the additional corpus. Experiments with the Sejong Corpus and UWordMap show that the hybrid method is effective, with an increase in accuracy from 96.51% to 96.52%. (A minimal sketch of hypernym back-off follows this entry.)
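
The sketch below illustrates the general idea of hypernym back-off with a lexical semantic network: when the trained statistics for a homograph sense and a context word are missing, counts keyed by the context word's hypernym are used instead. It is a generic illustration rather than the paper's exact procedure, and the counts and hypernym table are invented for the example.

```python
# Sense-tagged co-occurrence counts: (homograph sense, context word) -> count (toy data).
sense_context_counts = {
    ("배_01/fruit", "먹다"): 12,
    ("배_02/ship", "타다"): 9,
}

# Counts generalised to the context word's hypernym (toy data).
sense_hypernym_counts = {
    ("배_01/fruit", "섭취하다"): 15,   # "ingest" as a hypernym of eating/drinking verbs
    ("배_02/ship", "이동하다"): 11,    # "move" as a hypernym of riding verbs
}

hypernym_of = {"먹다": "섭취하다", "마시다": "섭취하다", "타다": "이동하다"}

def score(sense: str, context_word: str) -> int:
    """Use direct counts when available; otherwise back off to the hypernym counts."""
    direct = sense_context_counts.get((sense, context_word))
    if direct is not None:
        return direct
    hyper = hypernym_of.get(context_word)
    return sense_hypernym_counts.get((sense, hyper), 0)

def disambiguate(senses, context_word):
    return max(senses, key=lambda s: score(s, context_word))

# "마시다" (drink) was never seen next to 배 in training, but its hypernym was.
print(disambiguate(["배_01/fruit", "배_02/ship"], "마시다"))  # -> 배_01/fruit
```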

Construction and application of Korean Semantic-Network based on Korean Dictionary (사전을 기반으로 한 한국어 의미망 구축과 활용)

  • 최호섭;옥철영;장문수;장명길
    • Proceedings of the Korean Information Science Society Conference / 2002.04b / pp.448-450 / 2002
  • Knowledge bases such as thesauri, semantic networks, and ontologies serve as important language resources in many areas related to natural language processing. However, because they have been built differently for each particular application, such as information retrieval or machine translation, these knowledge bases have not been very effective for practical Korean language processing. This paper points out methodological problems in building thesauri and semantic networks for Korean, and explores a method of constructing a semantic network needed for corpus-centered text processing, together with a comprehensive plan for its use. Various dictionaries were used as the knowledge underlying the semantic network, and to evaluate its applicability, the paper examines how it is used in ETRI's semantics-based information retrieval and in word sense disambiguation (WSD), one of the major problems of language processing. By constructing a semantic network as one way of handling language resources, the paper aims to provide a basic but important lexical database for effective language processing and, at the same time, to suggest a direction for building language resources. (A toy sketch of dictionary-based network construction follows this entry.)

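As a concrete illustration of deriving semantic-network edges from dictionary definitions, the toy sketch below treats the final noun of a definition as the headword's genus (hypernym) and then follows the resulting hypernym chain. The entries and the crude genus heuristic are assumptions for illustration only, not the paper's construction method.

```python
# Toy dictionary: headword -> definition string.
toy_dictionary = {
    "사과": "장미과의 낙엽 교목의 열매",   # apple: the fruit of a deciduous tree ...
    "배": "배나무의 열매",                 # pear: the fruit of the pear tree
    "열매": "식물이 맺는 부분",            # fruit: the part that a plant bears
}

def genus_of(definition: str) -> str:
    # Crude heuristic: treat the final space-separated token as the genus noun.
    return definition.split()[-1]

# Semantic network as a headword -> hypernym mapping.
semantic_network = {head: genus_of(defn) for head, defn in toy_dictionary.items()}

def hypernym_chain(word: str, network: dict, limit: int = 5):
    """Follow hypernym links upward until no entry remains or the limit is hit."""
    chain, current = [word], word
    while current in network and len(chain) < limit:
        current = network[current]
        chain.append(current)
    return chain

print(semantic_network)                           # {'사과': '열매', '배': '열매', '열매': '부분'}
print(hypernym_chain("사과", semantic_network))   # ['사과', '열매', '부분']
```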

A Study on the Computational Model of Word Sense Disambiguation, based on Corpora and Experiments on Native Speaker's Intuition (직관 실험 및 코퍼스를 바탕으로 한 의미 중의성 해소 계산 모형 연구)

  • Kim, Dong-Sung;Choe, Jae-Woong
    • Korean Journal of Cognitive Science / v.17 no.4 / pp.303-321 / 2006
  • According to Harris's (1966) distributional hypothesis, understanding the meaning of a word is thought to depend on its context. Under this hypothesis about human language ability, this paper proposes a computational model of the native speaker's language processing mechanism for word sense disambiguation, based on two sets of experiments. Among the three computational models discussed in this paper, namely the logic model, the probabilistic model, and the probabilistic inference model, the experiments show that the logic model is applied first for semantic disambiguation of the key word; if the logic model fails to apply, the probabilistic model becomes most relevant. The three models were also compared with the test results in terms of the Pearson correlation coefficient. It turns out that the logic model best explains human decision behaviour on the ambiguous words, and the probabilistic inference model comes next. The experiment consists of two parts: one involves 30 sentences extracted from a one-million-graphic-word corpus, where the agreement rate among native speakers on word sense disambiguation is 98%; the other part, designed to exclude the logic model effect, is composed of 50 cleft sentences. (A minimal sketch of the logic-then-probabilistic decision follows this entry.)

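The decision procedure described above, logic model first and probabilistic model as a fallback, can be sketched as follows. The rules, co-occurrence counts, and example words are invented for illustration and do not come from the paper's experiments.

```python
# Two-stage word sense disambiguation: logic model first, then probabilistic fallback.
from collections import Counter

# Logic model: hard constraints that, when matched, fully determine the sense.
logic_rules = {
    ("눈", "내리다"): "눈/snow",    # "눈이 내리다" -> snow
    ("눈", "감다"): "눈/eye",       # "눈을 감다" -> eye
}

# Probabilistic model: sense-context co-occurrence counts.
cooc = {
    "눈/snow": Counter({"겨울": 8, "하얗다": 5}),
    "눈/eye": Counter({"보다": 9, "얼굴": 4}),
}

def disambiguate(target: str, context: list, senses: list) -> str:
    # Stage 1: logic model.
    for word in context:
        if (target, word) in logic_rules:
            return logic_rules[(target, word)]
    # Stage 2: probabilistic model over the remaining context.
    return max(senses, key=lambda s: sum(cooc[s][w] for w in context))

print(disambiguate("눈", ["내리다", "겨울"], ["눈/snow", "눈/eye"]))  # logic rule fires
print(disambiguate("눈", ["겨울", "하얗다"], ["눈/snow", "눈/eye"]))  # probabilistic fallback
```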

Context-sensitive Spelling Error Correction using Eojeol N-gram (어절 N-gram을 이용한 문맥의존 철자오류 교정)

  • Kim, Minho;Kwon, Hyuk-Chul;Choi, Sungki
    • Journal of KIISE / v.41 no.12 / pp.1081-1089 / 2014
  • Context-sensitive spelling-error correction methods are largely classified into rule-based methods and statistical methods, the latter of which is often preferred in research. Statistical error correction methods treat context-sensitive spelling errors as a word-sense disambiguation problem: given a correction vocabulary pair, consisting of a correction-target vocabulary and a replacement-candidate vocabulary, the method chooses between the two according to the context. This paper proposes a method that integrates an eojeol (word-phrase) n-gram model into the conventional model, the result of a previous study by this research team, in order to improve the performance of the probability model based on correction vocabulary pairs. The integrated model comes in two forms: one interpolates the sentence probabilities calculated by the two models, and the other applies the two models sequentially. Both types of integrated model show higher accuracy and recall than the conventional model or a model that uses the n-gram alone. (A minimal interpolation sketch follows this entry.)
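
A minimal sketch of the interpolation variant follows: each member of a correction pair is scored by a toy eojeol bigram model and a toy stand-in for the conventional model, and the two probabilities are combined linearly. The probabilities, back-off floor, and interpolation weight are illustrative assumptions.

```python
# Toy models; all probabilities and the weight are assumptions for the example.
bigram_prob = {("약을", "먹었다"): 0.04, ("약을", "마셨다"): 0.002}   # P(eojeol | previous eojeol)
conventional_prob = {"먹었다": 0.010, "마셨다": 0.008}                # stand-in for the existing model
FLOOR = 1e-6                                                          # back-off for unseen events

def interpolated_prob(prev_eojeol: str, candidate: str, lam: float = 0.7) -> float:
    """Linearly interpolate the two models' probabilities for one candidate."""
    bg = bigram_prob.get((prev_eojeol, candidate), FLOOR)
    cv = conventional_prob.get(candidate, FLOOR)
    return lam * bg + (1.0 - lam) * cv

def correct(prev_eojeol: str, written: str, alternative: str) -> str:
    """Keep the written eojeol unless its correction-pair partner scores higher."""
    scores = {written: interpolated_prob(prev_eojeol, written),
              alternative: interpolated_prob(prev_eojeol, alternative)}
    return max(scores, key=scores.get)

# "약을 마셨다" ("drank the medicine") is corrected to the idiomatic "약을 먹었다".
print(correct("약을", "마셨다", "먹었다"))   # -> 먹었다
```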

A Development of the Automatic Predicate-Argument Analyzer for Construction of Semantically Tagged Korean Corpus (한국어 의미 표지 부착 말뭉치 구축을 위한 자동 술어-논항 분석기 개발)

  • Cho, Jung-Hyun;Jung, Hyun-Ki;Kim, Yu-Seop
    • The KIPS Transactions:PartB / v.19B no.1 / pp.43-52 / 2012
  • Semantic role labeling analyzes the semantic relationships between elements in a sentence and, like word sense disambiguation, is considered one of the most important semantic analysis tasks in natural language processing. However, due to the lack of related linguistic resources, Korean semantic role labeling research has not been sufficiently developed. In this paper, we propose an automatic predicate-argument analyzer as a first step towards constructing a Korean PropBank, the kind of resource widely used in semantic role labeling. The analyzer has two main components: a semantic lexical dictionary and an automatic predicate-argument extractor. The dictionary holds case-frame information for verbs, and the extractor is a module that decides the semantic class of each argument of a given predicate in a syntactically annotated corpus. The analyzer developed in this research will aid the construction of the Korean PropBank and ultimately play a major role in Korean semantic role labeling. (A toy case-frame lookup sketch follows this entry.)
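
The two components named above, a case-frame dictionary and a predicate-argument extractor, are sketched on toy data below; the frame format, role names, and example sentence are assumptions for illustration, not the analyzer's actual resources.

```python
# Toy case-frame dictionary: verb -> case particle -> semantic role.
case_frame_dict = {
    "주다": {"가": "Agent", "에게": "Recipient", "를": "Theme"},   # give
    "가다": {"가": "Agent", "에": "Goal"},                          # go
}

# Toy syntactically annotated sentence: arguments as (head noun, case particle) pairs.
parsed_sentence = {
    "predicate": "주다",
    "arguments": [("철수", "가"), ("영희", "에게"), ("책", "를")],
}

def label_arguments(parse: dict) -> list:
    """Assign each argument a semantic class from the predicate's case frame."""
    frame = case_frame_dict.get(parse["predicate"], {})
    labeled = []
    for noun, particle in parse["arguments"]:
        role = frame.get(particle, "ARG-UNKNOWN")   # unseen particle: left for manual annotation
        labeled.append((noun, role))
    return labeled

print(label_arguments(parsed_sentence))
# [('철수', 'Agent'), ('영희', 'Recipient'), ('책', 'Theme')]
```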

Generalized LR Parser with Conditional Action Model(CAM) using Surface Phrasal Types (표층 구문 타입을 사용한 조건부 연산 모델의 일반화 LR 파서)

  • 곽용재;박소영;황영숙;정후중;이상주;임해창
    • Journal of KIISE:Software and Applications / v.30 no.1_2 / pp.81-92 / 2003
  • Generalized LR (GLR) parsing is an enhanced LR parsing method that overcomes the limitation of the traditional LR parser's one-way linear stack by using a graph-structured stack, and it has served as a firm starting point for many natural-language parsing variants equipped with various mechanisms. In this paper, we propose a Conditional Action Model (CAM) that addresses the problems of conventional probabilistic GLR methods. Previous probabilistic GLR parsers have used relatively limited contextual information for disambiguation because of the high complexity of the internal GLR stack. The proposed model uses surface phrasal types, which represent the structural characteristics of the parse, as additional contextual information, so that more specific structural preferences can be reflected in the parser. Experimental results show that a GLR parser with the proposed Conditional Action Model outperforms previous methods by about 6-7% without any lexical information, and that the model can exploit the rich stack information for syntactic disambiguation in a probabilistic LR parser. (A minimal conflict-resolution sketch follows this entry.)
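
The sketch below isolates the disambiguation step only: when the parsing table offers several actions for a conflict, the action with the highest conditional probability given contextual features, including a surface phrasal type of the partial parse, is chosen. The feature values and probabilities are invented for illustration, and no actual GLR stack is modelled.

```python
# Toy conditional action model:
# (lookahead POS, surface phrasal type of the top subtree) -> {action: P(action | context)}.
conditional_action_model = {
    ("JOSA", "NP[N+N]"): {"shift": 0.35, "reduce:NP->N N": 0.65},
    ("VERB", "NP[N+N]"): {"shift": 0.20, "reduce:NP->N N": 0.80},
}

def choose_action(candidates: list, lookahead_pos: str, phrasal_type: str) -> str:
    """Resolve a shift/reduce or reduce/reduce conflict with the conditional model."""
    dist = conditional_action_model.get((lookahead_pos, phrasal_type), {})
    # Fall back to the first table action when this context was never observed.
    return max(candidates, key=lambda a: dist.get(a, 0.0)) if dist else candidates[0]

conflict = ["shift", "reduce:NP->N N"]
print(choose_action(conflict, "JOSA", "NP[N+N]"))   # -> reduce:NP->N N
```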

A Korean Homonym Disambiguation System Using Refined Semantic Information and Thesaurus (정제된 의미정보와 시소러스를 이용한 동형이의어 분별 시스템)

  • Kim Jun-Su;Ock Cheol-Young
    • The KIPS Transactions:PartB / v.12B no.7 s.103 / pp.829-840 / 2005
  • Word sense disambiguation (WSD) is one of the most difficult problems in Korean information processing. We propose a WSD model that filters semantic information using the specific characteristics of dictionary definitions, with added information useful for sense determination, such as statistical, distance, and case information. The model addresses the scarcity of semantic information data by using the word hierarchy (thesaurus) developed by Ulsan University's UOU Word Intelligent Network, a dictionary-based lexicological database. Among the WSD models examined in this study, the one using statistical information, distance, and case information together with the thesaurus (hereinafter the 'SDJ-X model') performed best. In an experiment on the sense-tagged corpus of 1,500,000 eojeols provided by the Sejong project, the SDJ-X model improved on the maximum-frequency sense determination baseline (MFC) by 18.87% (21.73% for nouns) and on inter-eojeol distance weights by 10.49% (8.84% for nouns, 11.51% for verbs). Finally, the accuracy of the SDJ-X model was higher than that of the model using only statistical, distance, and case information without the thesaurus by a margin of 6.12% (5.29% for nouns, 6.64% for verbs). (A minimal scoring sketch follows this entry.)
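
The information sources listed above (sense frequencies, inter-eojeol distance weights, case information, and a thesaurus back-off for sparse data) can be combined into a score roughly as sketched below; the weights, counts, and thesaurus classes are illustrative assumptions, not the SDJ-X model's parameters.

```python
# Toy resources: sense frequencies, sense/word co-occurrences, thesaurus classes,
# and sense/class co-occurrences used as a back-off for unseen word pairs.
sense_freq = {"말/horse": 40, "말/language": 160}
cooc = {("말/horse", "타다"): 25, ("말/language", "하다"): 90}
thesaurus_class = {"타다": "이동", "달리다": "이동", "하다": "행위"}
class_cooc = {("말/horse", "이동"): 30, ("말/language", "행위"): 95}

def score(sense: str, context: list) -> float:
    """context: list of (word, distance in eojeols from the target, case particle or '')."""
    total = 0.1 * sense_freq.get(sense, 0)           # prior from sense frequency
    for word, dist, case in context:
        weight = 1.0 / dist                          # closer eojeols count more
        if case == "이":                             # crude case-information bonus
            weight *= 1.5
        direct = cooc.get((sense, word))
        if direct is None:                           # thesaurus back-off for unseen pairs
            direct = class_cooc.get((sense, thesaurus_class.get(word, "")), 0)
        total += weight * direct
    return total

senses = ["말/horse", "말/language"]
# "달리다" (run) was never seen with 말 directly, so the thesaurus class decides.
context = [("달리다", 1, ""), ("사람", 3, "이")]
print(max(senses, key=lambda s: score(s, context)))  # -> 말/horse
```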