• Title/Summary/Keyword: Dependency-based parsing

Search Results: 32

Proper Noun Embedding Model for the Korean Dependency Parsing

  • Nam, Gyu-Hyeon;Lee, Hyun-Young;Kang, Seung-Shik
    • Journal of Multimedia Information System / v.9 no.2 / pp.93-102 / 2022
  • Dependency parsing is the problem of deciding the syntactic relations between the words in a sentence. Recently, deep learning models have been used for dependency parsing based on word representations in a continuous vector space. However, this causes a mislabeling problem for proper nouns that rarely appear in the training corpus, because out-of-vocabulary (OOV) words are difficult to express in a continuous vector space. To solve the OOV problem in dependency parsing, we explored proper noun embedding methods according to the embedding unit. Before representing words in a continuous vector space, we replace proper nouns with a special token and learn their contextual features using a multi-layer bidirectional LSTM. Two models, syllable-based and morpheme-based, are proposed for proper noun embedding, and dependency parsing performance improves more with the ensemble model than with either the syllable or the morpheme embedding model alone. The experimental results showed that our ensemble model improved UAS by 1.69%p and LAS by 2.17%p over an arc-eager Malt parser.
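
  A minimal sketch (not the paper's code) of the token-replacement step described above: proper nouns are mapped to one shared special token before embedding, so rare names no longer fall out of vocabulary. The tag name "NNP" (proper noun in the Sejong tagset) and the token string are assumptions for illustration; the BiLSTM encoder itself is not reproduced.

      # Replace proper nouns with a special token before building word representations.
      PROPER_NOUN_TAGS = {"NNP"}      # assumed proper-noun POS tag
      SPECIAL_TOKEN = "[PROPN]"       # assumed placeholder token

      def replace_proper_nouns(morphemes):
          """morphemes: list of (surface form, POS tag) pairs for one sentence."""
          return [SPECIAL_TOKEN if tag in PROPER_NOUN_TAGS else form
                  for form, tag in morphemes]

      sentence = [("김철수", "NNP"), ("가", "JKS"), ("학교", "NNG"),
                  ("에", "JKB"), ("가", "VV"), ("ㄴ다", "EF")]
      print(replace_proper_nouns(sentence))
      # ['[PROPN]', '가', '학교', '에', '가', 'ㄴ다'] -> input to the BiLSTM encoder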

Korean Dependency Parsing using Pointer Networks (포인터 네트워크를 이용한 한국어 의존 구문 분석)

  • Park, Cheoneum;Lee, Changki
    • Journal of KIISE / v.44 no.8 / pp.822-831 / 2017
  • In this paper, we propose a Korean dependency parsing model using multi-task learning based on pointer networks. Multi-task learning improves performance by learning two or more problems at the same time. Using this method, we perform dependency parsing with pointer networks while simultaneously obtaining the dependency relation and the dependency label of each word. We define five input criteria for applying morpheme-level multi-task learning with pointer networks to word-level dependency parsing, and we apply a fine-tuning method to further improve performance. Our experiments show that the proposed model achieves UAS 91.79% and LAS 89.48%, outperforming conventional Korean dependency parsers.
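
  A minimal sketch (generic, not the paper's exact architecture) of one multi-task pointer-network decoding step: an attention score over the encoder states selects the head position, and a classifier over the decoder state paired with the selected head state predicts the dependency label. The weight shapes and scoring functions are assumptions for illustration.

      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max())
          return e / e.sum()

      def pointer_step(dec_state, enc_states, W_attn, W_label):
          """dec_state: (d,) state for the current word; enc_states: (n, d) states
          for all words; W_attn: (d, d); W_label: (num_labels, 2d)."""
          head_probs = softmax(enc_states @ (W_attn @ dec_state))   # point to the head
          head = int(head_probs.argmax())
          pair = np.concatenate([dec_state, enc_states[head]])
          label = int(softmax(W_label @ pair).argmax())             # predict the label
          return head, label

      # Toy usage with random parameters (d = 4, 3 words, 2 labels).
      rng = np.random.default_rng(0)
      print(pointer_step(rng.normal(size=4), rng.normal(size=(3, 4)),
                         rng.normal(size=(4, 4)), rng.normal(size=(2, 8))))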

Structural Disambiguation of Korean Adverbs Based on Correlative Relation and Morphological Context

  • Seo, Young-Ae;Park, Sang-Kyu;Choi, Key-Sun
    • ETRI Journal / v.28 no.6 / pp.803-806 / 2006
  • This letter addresses a structural disambiguation method for Korean adverbs based on correlative relation constraints between adverbs and their modifiees and on the morphological context of sentences. Using the proposed method, we improved the dependency parsing accuracy of adverbs from 79.2% to 89%. The experimental results show that the proposed method is especially effective at parsing adverbs that can modify multiple word classes or have long-distance dependency relations to their modifiees.
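
  A rough illustration of how a correlative-relation constraint could prune the candidate modifiees of an adverb; the table entries and class names below are invented for illustration and are not the paper's resource.

      # Map an adverb to the word classes it is allowed to modify (invented entries).
      CORRELATIVE_TABLE = {
          "결코": {"VP_NEG"},       # "never" must modify a negated predicate
          "매우": {"VP", "ADJP"},   # "very" modifies predicates or adjectives
      }

      def candidate_modifiees(adverb, candidates):
          """Keep only the candidates whose class the adverb may modify."""
          allowed = CORRELATIVE_TABLE.get(adverb)
          if allowed is None:
              return candidates                  # no constraint known for this adverb
          return [c for c in candidates if c["class"] in allowed]

      print(candidate_modifiees("결코", [{"word": "가지", "class": "VP"},
                                         {"word": "않았다", "class": "VP_NEG"}]))
      # [{'word': '않았다', 'class': 'VP_NEG'}]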


Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.161-177 / 2019
  • In this paper, we study improving answer extraction performance in a question-answering system by using sentence dependency parsing results. A question-answering (QA) system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents; various studies have been conducted on both. To improve answer extraction performance, the grammatical information of sentences must be reflected accurately. Because Korean has free word order and frequent omission of sentence components, dependency parsing is a good way to analyze Korean syntax. Therefore, in this study, we improved answer extraction performance by adding features generated from dependency parsing results to the inputs of the answer extraction model (Bidirectional LSTM-CRF). We compared the performance of the answer extraction model given only basic word features generated without dependency parsing against its performance when the Eojeol tag feature and the dependency graph embedding feature are added. Since dependency parsing is performed on the Eojeol, the basic sentence unit separated by spaces, the tag information of each Eojeol is obtained as a result of parsing; the Eojeol tag feature is this tag information. Generating the dependency graph embedding consists of building the dependency graph from the parsing result and learning an embedding of the graph. From the parsing result, a graph is constructed by mapping each Eojeol to a node, each dependency between Eojeols to an edge, and each Eojeol tag to a node label. Depending on whether the direction of the dependency relation is considered, either a directed or an undirected graph is generated. To obtain the graph embedding, we used Graph2Vec, which learns a graph embedding from the subgraphs constituting the graph. The maximum path length between nodes can be specified when finding the subgraphs of a graph: if it is 1, the graph embedding is generated only from direct dependencies between Eojeols, and as the maximum path length grows, indirect dependencies are also included. In the experiments, the maximum path length between nodes was varied from 1 to 3, with and without considering the direction of the dependency, and answer extraction performance was measured. The results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction performance. In particular, the highest performance was obtained when the direction of the dependency relation was considered and the dependency graph embedding was extracted with a maximum path length of 1 during Graph2Vec subgraph extraction. From these experiments, we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words.
The significance of this study is as follows. First, we improved answer extraction performance by adding features based on dependency parsing results, taking into account the characteristics of Korean, which has free word order and frequent omission of sentence components. Second, we generated features from the dependency parsing result with a learning-based graph embedding method, without manually defining patterns of dependency between Eojeols. Future research directions are as follows. In this study, the features generated from dependency parsing were applied only to the answer extraction model. In the future, if the performance gain is confirmed when applying these features to other natural language processing models such as sentiment analysis or named entity recognition, their validity can be verified more accurately.
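
  A minimal sketch (not the authors' code) of the graph-construction step described above: each Eojeol becomes a node, each dependency an edge, and each Eojeol tag a node label. The parse format and tag names are assumptions; the resulting sentence graphs would then be passed to a Graph2Vec implementation with the maximum subgraph path length set as discussed.

      import networkx as nx

      # Toy parse: (dependent Eojeol index, head index, Eojeol tag); -1 marks the root.
      parse = [(0, 2, "NP_SBJ"), (1, 2, "NP_OBJ"), (2, -1, "VP")]

      def build_dependency_graph(parse, directed=True):
          """Node = Eojeol, edge = dependency, node label = Eojeol tag."""
          g = nx.DiGraph() if directed else nx.Graph()
          for dep, head, tag in parse:
              g.add_node(dep, feature=tag)
              if head >= 0:                      # skip the artificial root
                  g.add_edge(dep, head)
          return g

      graph = build_dependency_graph(parse, directed=True)
      print(graph.nodes(data=True), graph.edges())
      # The sentence graphs are then embedded with Graph2Vec; limiting the maximum
      # subgraph path length to 1 keeps only direct dependencies, as in the best setting.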

Korean Transition-based Dependency Parsing with Recurrent Neural Network (순환 신경망을 이용한 전이 기반 한국어 의존 구문 분석)

  • Li, Jianri;Lee, Jong-Hyeok
    • KIISE Transactions on Computing Practices / v.21 no.8 / pp.567-571 / 2015
  • Transition-based dependency parsing requires much time and effort to design and select features from a very large number of possible combinations. Recent studies have successfully applied Multi-Layer Perceptrons (MLPs) to address this problem and to reduce data sparseness. However, most of these methods adopt greedy search and can only consider a limited amount of information from the context window. In this study, we use a Recurrent Neural Network to handle long dependencies between the sub-dependency trees of the current state and the current transition action. The results indicate that our method provides higher accuracy (UAS) than an MLP-based model.
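
  For reference, a minimal arc-standard shift-reduce loop with a pluggable greedy classifier, the kind of transition-based setting the paper improves on; the action-scoring function is a placeholder, not the paper's RNN.

      def parse_greedy(n_words, choose_action):
          """choose_action(stack, buffer) -> 'SHIFT', 'LEFT_ARC', or 'RIGHT_ARC'.
          Returns a list of (head, dependent) arcs over word positions 0..n_words-1."""
          stack, buffer, arcs = [], list(range(n_words)), []
          while buffer or len(stack) > 1:
              if len(stack) < 2:                      # must SHIFT when the stack is short
                  stack.append(buffer.pop(0))
                  continue
              action = choose_action(stack, buffer)
              if action == "SHIFT" and buffer:
                  stack.append(buffer.pop(0))
              elif action == "LEFT_ARC":              # top governs second-top
                  dep = stack.pop(-2)
                  arcs.append((stack[-1], dep))
              else:                                   # RIGHT_ARC (also the fallback)
                  dep = stack.pop()
                  arcs.append((stack[-1], dep))
          return arcs

      # Toy policy: shift while possible, then attach everything to the left.
      print(parse_greedy(3, lambda s, b: "SHIFT" if b else "RIGHT_ARC"))
      # [(1, 2), (0, 1)]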

Modification Distance Model using Headible Path Contexts for Korean Dependency Parsing (지배가능 경로 문맥을 이용한 의존 구문 분석의 수식 거리 모델)

  • Woo, Yeon-Moon;Song, Young-In;Park, So-Young;Rim, Hae-Chang
    • Journal of KIISE: Software and Applications / v.34 no.2 / pp.140-149 / 2007
  • This paper presents a statistical model for Korean dependency-based parsing. Although Korean is a free-word-order language, certain word orders are preferred in local contexts, and earlier works proposed parsing models using modification lengths based on this property. Our model uses headible path contexts for modification length probabilities. Using the headible path of a dependent is effective for long-distance relations because the large surface context of a dependent is abbreviated as its headible path. Combined with lexical bigram dependency, our probabilistic model achieves 86.9% accuracy in Eojeol analysis on the KAIST corpus, with particular improvement for long-distance dependencies.
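
  The abstract does not give the model's exact form; the sketch below shows one plausible count-based formulation, assumed for illustration, in which a lexical bigram dependency probability is multiplied by a smoothed modification-distance probability conditioned on the headible path context.

      from collections import Counter

      dist_counts = Counter()   # (path_context, distance) -> count
      ctx_counts = Counter()    # path_context -> count

      def observe(path_context, distance):
          """Collect counts of (headible path context, modification distance) pairs."""
          dist_counts[(path_context, distance)] += 1
          ctx_counts[path_context] += 1

      def p_distance(path_context, distance, max_distance=10, alpha=1.0):
          """Add-alpha smoothed P(distance | headible path context)."""
          return ((dist_counts[(path_context, distance)] + alpha)
                  / (ctx_counts[path_context] + alpha * max_distance))

      def dependency_score(p_bigram, path_context, distance):
          """Combine the lexical bigram probability with the distance probability."""
          return p_bigram * p_distance(path_context, distance)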

Dependency Grammar and the Parsing of Chinese Sentences

  • Lai, Bong-Yeung-Tom;Huang, Changning
    • Proceedings of the Korean Society for Language and Information Conference / 1994.02a / pp.63-72 / 1994
  • Dependency Grammar has been used by linguists as the basis of the syntactic components of their grammar formalisms. It has also been used in natural language parsing. In China, attempts have been made to use this grammar formalism to parse Chinese sentences using corpus-based techniques. This paper reviews the properties of Dependency Grammar as embodied in four axioms for the well-formedness conditions of dependency structures. It is shown that allowing multiple governors, as done by some followers of this formalism, is unnecessary. The practice of augmenting Dependency Grammar with functional labels is discussed in light of building functional structures when a sentence is parsed, which will also facilitate semantic interpretation.
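
  The abstract does not enumerate the four axioms; the sketch below checks a commonly cited subset of well-formedness conditions (exactly one root, exactly one governor per word, no cycles) and omits projectivity, so it is an illustration rather than the paper's formulation.

      def is_well_formed(n, arcs):
          """n: number of words; arcs: (head, dependent) pairs, head == -1 for the root."""
          heads = {}
          for head, dep in arcs:
              if dep in heads:
                  return False                  # a word with more than one governor
              heads[dep] = head
          roots = [d for d, h in heads.items() if h == -1]
          if len(heads) != n or len(roots) != 1:
              return False                      # every word needs a head; exactly one root
          for dep in heads:                     # no cycles: every word must reach the root
              seen, cur = set(), dep
              while cur != -1:
                  if cur in seen:
                      return False
                  seen.add(cur)
                  cur = heads[cur]
          return True

      print(is_well_formed(3, [(-1, 0), (0, 1), (1, 2)]))   # True
      print(is_well_formed(2, [(1, 0), (0, 1)]))            # False: cycle, no root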


Korean Dependency Parsing Using Sequential Parsing Method Based on Pointer Network (순차적 구문 분석 방법을 반영한 포인터 네트워크 기반의 한국어 의존 구문 분석기)

  • Han, Janghoon;Park, Yeongjoon;Jeong, Younghoon;Lee, Inkwon;Han, Jungwook;Park, Seojun;Kim, Juae;Seo, Jeongyeon
    • Annual Conference on Human and Language Technology / 2019.10a / pp.533-536 / 2019
  • Dependency parsing is the task of analyzing the dependency relations between the constituents of a sentence, and it is one of the representative tasks in natural language understanding. In this paper, to improve the performance of Korean dependency parsing, we apply a Deep Bi-Affine Network and a Left-to-Right Dependency Parser, and we newly propose a Right-to-Left Dependency Parser model that reflects the linguistic characteristics of Korean. We apply ELMo and BERT embeddings as word representations for the three dependency parsing models, and by ensembling several kinds of models we obtain UAS 94.50 and LAS 92.46 on the Sejong dependency parsing data.
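
  The abstract does not describe the combination scheme; one simple way to ensemble the left-to-right and right-to-left parsers, assumed here for illustration, is to average each model's per-word head-probability distributions before picking the head.

      import numpy as np

      def ensemble_heads(prob_matrices):
          """prob_matrices: list of (n, m) arrays, one per parser, where row i holds
          word i's probability distribution over candidate heads (column 0 = root)."""
          avg = np.mean(np.stack(prob_matrices), axis=0)
          return avg.argmax(axis=1)             # predicted head index per word

      # Toy usage: two parsers, two words, three head candidates each.
      left_to_right = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
      right_to_left = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
      print(ensemble_heads([left_to_right, right_to_left]))   # -> [0 1]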


Automatic Acquisition of Paraphrases Using Bilingual Dependency Relations

  • Hwang, Young-Sook;Kim, Young-Kil
    • ETRI Journal / v.30 no.1 / pp.155-157 / 2008
  • This letter introduces a new method to automatically acquire paraphrases using bilingual corpora. It utilizes the bilingual dependency relations obtained by projecting a monolingual dependency parse onto the other language's sentence based on statistical alignment techniques. Since the proposed paraphrasing method can clearly disambiguate the sense of the original phrases using the bilingual context of dependency relations, it would be possible to obtain interchangeable paraphrases under a given context. Through experiments with parallel corpora of Korean and English language pairs, we demonstrate that our method effectively extracts paraphrases with high precision, achieving success rates of 94.3% and 84.6%, respectively, for Korean and English.
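
  A minimal sketch (assumed setup, not the letter's implementation) of the projection step: source-language dependency triples are mapped onto the target sentence through word alignments to obtain bilingual dependency relations.

      def project_dependencies(src_deps, alignment):
          """src_deps: (head, dependent, relation) triples over source word positions.
          alignment: dict mapping a source position to its aligned target position."""
          projected = []
          for head, dep, rel in src_deps:
              if head in alignment and dep in alignment:
                  projected.append((alignment[head], alignment[dep], rel))
          return projected

      # Toy example: "그는 사과를 먹었다" aligned to "He ate an apple".
      src_deps = [(2, 0, "SBJ"), (2, 1, "OBJ")]          # 먹었다 -> 그는, 먹었다 -> 사과를
      alignment = {0: 0, 1: 3, 2: 1}                     # 그는->He, 사과를->apple, 먹었다->ate
      print(project_dependencies(src_deps, alignment))   # [(1, 0, 'SBJ'), (1, 3, 'OBJ')]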
