• Title/Summary/Keyword: zero pronoun


Deep Neural Architecture for Recovering Dropped Pronouns in Korean

  • Jung, Sangkeun; Lee, Changki
    • ETRI Journal / v.40 no.2 / pp.257-265 / 2018
  • Pronouns are frequently dropped in Korean sentences, especially in text messages in the mobile phone environment. Restoring dropped pronouns can be a beneficial preprocessing task for machine translation, information extraction, spoken dialog systems, and many other applications. In this work, we address the problem of dropped pronoun recovery by resolving two simultaneous subtasks: detecting zero-pronoun sentences and determining the type of dropped pronouns. The problems are statistically modeled by encoding the sentence and classifying types of dropped pronouns using a recurrent neural network (RNN) architecture. Various RNN-based encoding architectures were investigated, and the stacked RNN was shown to be the best model for Korean zero-pronoun recovery. The proposed method does not require any manual features to be implemented; nevertheless, it shows good performance.
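The two subtasks the abstract describes (detecting whether a sentence drops a pronoun, then classifying the dropped pronoun's type) can both be framed as classification over an RNN encoding of the sentence. Below is a minimal pure-Python sketch of that framing, with toy weights and a hypothetical four-way type inventory standing in for the paper's actual stacked-RNN model and tag set:

```python
import math

# Hypothetical label inventory; the paper's actual tag set differs.
# A "none" label covers the zero-pronoun detection subtask.
PRONOUN_TYPES = ["none", "1st_person", "2nd_person", "3rd_person"]

def rnn_encode(token_ids, emb, w_xh, w_hh, hidden_dim):
    """Run a simple Elman RNN over embedded tokens; return the final hidden state."""
    h = [0.0] * hidden_dim
    for t in token_ids:
        x = emb[t]
        h = [math.tanh(sum(w_xh[i][j] * x[j] for j in range(len(x)))
                       + sum(w_hh[i][j] * h[j] for j in range(hidden_dim)))
             for i in range(hidden_dim)]
    return h

def classify(h, w_out):
    """Softmax over pronoun types from the sentence encoding."""
    logits = [sum(w[i] * h[i] for i in range(len(h))) for w in w_out]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Stacking, as in the paper's best model, would simply feed each layer's hidden-state sequence into the next RNN layer before classification.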

Automatic Acquisition of Lexical-Functional Grammar Resources from a Japanese Dependency Corpus

  • Oya, Masanori; Genabith, Josef Van
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.375-384 / 2007
  • This paper describes a method for automatic acquisition of wide-coverage treebank-based deep linguistic resources for Japanese, as part of a project on treebank-based induction of multilingual resources in the framework of Lexical-Functional Grammar (LFG). We automatically annotate LFG f-structure functional equations (i.e. labelled dependencies) to the Kyoto Text Corpus version 4.0 (KTC4) (Kurohashi and Nagao 1997) and the output of the Kurohashi-Nagao Parser (KNP) (Kurohashi and Nagao 1998), a dependency parser for Japanese. The original KTC4 and KNP provide unlabelled dependencies. Our method also includes zero pronoun identification. The performance of the f-structure annotation algorithm with zero-pronoun identification for KTC4 is evaluated against a manually-corrected Gold Standard of 500 sentences randomly chosen from KTC4, and results in a pred-only dependency f-score of 94.72%. The parsing experiments on KNP output yield a pred-only dependency f-score of 82.08%.


Computational Approach to Zero Pronoun Resolution in Korean Encyclopedia (한국어 백과사전에 등장하는 영대명사(Zero Pronoun)의 복원에 관한 전산학적 연구)

  • Shin, Hyo-Shik; Kang, Young-Soo; Choi, Key-Sun; Song, Man-Suk
    • Annual Conference on Human and Language Technology / 2001.10d / pp.239-243 / 2001
  • As part of generating summaries of disease entries in a Korean encyclopedia, this paper addresses the recovery of zero pronouns, which is essential when converting text into logical representations in order to compare content and remove redundancy. For recovering zero pronouns, which occur frequently owing to the descriptive style of encyclopedia text, we propose a knowledge-based method that restores them on the basis of a concept map of diseases, rather than relying on syntactic-semantic or discourse-level linguistic knowledge.


Generation of Zero Pronouns using Center Transition of Preceding Utterances (선행 발화의 중심 전이를 이용한 영형 생성)

  • Roh, Ji-Eun; Na, Seung-Hoon; Lee, Jong-Hyeok
    • Journal of KIISE:Software and Applications / v.32 no.10 / pp.990-1002 / 2005
  • To generate coherent texts, it is important to produce appropriate pronouns to refer to previously-mentioned things in a discourse. Specifically, we focus on pronominalization by zero pronouns, which frequently occur in Korean. This paper investigates zero pronouns in Korean based on the cost-based centering theory, especially focusing on the center transitions of adjacent utterances. In previous centering works, only one type of nominal entity has been considered as the target of pronominalization, even though other entities are frequently pronominalized as zero pronouns. To resolve this problem, and to explain the reference phenomena of real texts, four types of nominal entity (Npair, Ninter, Nintra, and Nnon) from centering theory are defined with the concept of inter-, intra-, and pairwise salience. For each entity type, a case study of zero phenomena is performed through corpus analysis and the building of a pronominalization model. This study shows that the zero phenomena of entities which have been neglected in previous centering works are explained via the center transition of the second previous utterance. We also show that for Ninter, Nintra, and Nnon, the pronominalization accuracy achieved by a complex combination of several feature types can be fully or nearly matched by using only the second previous utterance's transition, across genres.
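The center transitions this line of work builds on come from classic centering theory: given the backward-looking center (Cb) of the current and previous utterances and the preferred center (Cp) of the current one, each adjacent utterance pair is classified into one of four transition types. A small sketch of that standard four-way classification (the paper's cost-based extension is not reproduced here):

```python
def center_transition(cb_cur, cb_prev, cp_cur):
    """Classify the centering transition between two adjacent utterances.

    cb_cur / cb_prev: backward-looking centers of the current and previous
    utterances; cp_cur: preferred (highest-ranked forward) center of the
    current utterance. Returns one of the four classic transition types.
    """
    if cb_cur == cb_prev or cb_prev is None:
        # Cb is maintained: CONTINUE if it is also the preferred center.
        return "CONTINUE" if cb_cur == cp_cur else "RETAIN"
    # Cb changed between utterances: a shift, smooth if Cb == Cp.
    return "SMOOTH-SHIFT" if cb_cur == cp_cur else "ROUGH-SHIFT"
```

In the paper's setting, the transition of the second previous utterance (not just the immediately preceding one) is the signal used to predict whether an entity surfaces as a zero pronoun.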

Antecedent Identification of Zero Subjects using Anaphoricity Information and Centering Theory (조응성 정보와 중심화 이론에 기반한 영형 주어의 선행사 식별)

  • Kim, Kye-Sung; Park, Seong-Bae; Lee, Sang-Jo
    • KIPS Transactions on Software and Data Engineering / v.2 no.12 / pp.873-880 / 2013
  • This paper approaches the problem of resolving Korean zero pronouns using Centering Theory, which models local coherence. Centering Theory has been widely used to resolve English pronouns. However, it is much more difficult to apply the centering framework to zero pronoun resolution in languages such as Japanese and Korean. In particular, since the Centering Theory of Grosz et al. does not consider non-anaphoric zero pronouns without explicit antecedents, the presence of non-anaphoric cases negatively affects the performance of a resolution system based on Centering Theory. To overcome this, this paper presents a method which determines the intra-sentential anaphoricity of zero pronouns in subject position by using relationships between clauses, and then identifies the antecedents of zero subjects. In our experiments, the proposed method outperforms the baseline method relying solely on Centering Theory.

Generation of Natural Referring Expressions by Syntactic Information and Cost-based Centering Model (구문 정보와 비용기반 중심화 이론에 기반한 자연스러운 지시어 생성)

  • Roh, Ji-Eun; Lee, Jong-Hyeok
    • Journal of KIISE:Software and Applications / v.31 no.12 / pp.1649-1659 / 2004
  • Text generation is the process of generating comprehensible texts in human languages from some underlying non-linguistic representation of information. Among the several sub-processes required to generate coherent texts, this paper concerns referring expression generation, which produces different types of expressions to refer to previously-mentioned things in a discourse. Specifically, we focus on pronominalization by zero pronouns, which frequently occur in Korean. To build a generation model of referring expressions for Korean, several features are identified based on grammatical information and a cost-based centering model, and these are applied to various machine learning techniques. We demonstrate that our proposed features are well defined to explain pronominalization, especially pronominalization by zero pronouns in Korean, through 95 texts from three genres: descriptive texts, news, and short Aesop's fables. We also show that our model significantly outperforms previous ones at a 99.9% confidence level by a t-test.

Anaphoricity Determination of Zero Pronouns for Intra-sentential Zero Anaphora Resolution (문장 내 영 조응어 해석을 위한 영대명사의 조응성 결정)

  • Kim, Kye-Sung; Park, Seong-Bae; Park, Se-Young; Lee, Sang-Jo
    • Journal of KIISE:Software and Applications / v.37 no.12 / pp.928-935 / 2010
  • Identifying the referents of omitted elements in a text is an important task for many natural language processing applications such as machine translation and information extraction. These omitted elements are often called zero anaphors or zero pronouns, and are regarded as one of the most common forms of reference. However, since not all zero elements refer to explicit objects occurring in the same text, recent work on zero anaphora resolution has attempted to identify the anaphoricity of zero pronouns. This paper focuses on intra-sentential anaphoricity determination of subject zero pronouns, which frequently occur in Korean. Unlike previous studies based on pair-wise comparisons, this study attempts to determine the intra-sentential anaphoricity of zero pronouns by directly learning the structure of clauses in which either non-anaphoric or inter-sentential subject zero pronouns occur. The proposed method outperforms baseline methods, and anaphoricity determination of zero pronouns will play an important role in resolving zero anaphora.

Zero Pronoun Resolution for Korean-English Spoken Language MT (한국어-영어 대화체 번역시스템을 위한 영형 대명사 해소)

  • Park, Arum; Ji, Eun-Byul; Hong, Munpyo
    • Annual Conference on Human and Language Technology / 2011.10a / pp.98-101 / 2011
  • This paper presents a new method for zero pronoun resolution in a Korean-English spoken language translation system. A zero pronoun is an element omitted from a sentence that can be inferred from context, situation, or world knowledge. This paper deals in particular with subject-pronoun omission, because it is a very common phenomenon in Korean spoken language such as drama scripts and instant messenger chat. We present a simple method that does not require a large amount of knowledge. In evaluation, our method achieved an F-measure of 0.79 and improved the overall translation rate by about 4.1%.


Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook; Kim, Youngtae; Ra, Dongyul; Lim, Soojong; Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in sentences of Korean and Japanese that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases makes the quality of information extraction poor. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, which is one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent. An antecedent must be co-referential with the zero anaphor. While in zero anaphora resolution the candidates for the antecedent are only noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification for all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected.
However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM which receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent, and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; the performance of our system is therefore dependent on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor; this is based on binary classification using a regular SVM. The experiment showed that our system's performance is F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
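The contrast the authors draw, independent per-candidate binary classification versus labeling the whole noun-phrase sequence jointly, can be illustrated without the structural SVM itself. The sketch below scores every label sequence that marks exactly one noun phrase as the antecedent and returns the best-scoring one; the scoring function is a hypothetical stand-in for the trained model, not the paper's learned weights:

```python
def best_antecedent_sequence(nps, score):
    """Sequence-labeling view of antecedent search.

    nps: noun phrases preceding the zero anaphor, in text order.
    score: function mapping a full 0/1 label sequence to a real-valued
    score (a stand-in for the structural SVM's scoring function).
    Searches over all label sequences with exactly one antecedent label,
    scoring each sequence as a whole rather than each NP independently.
    """
    best_labels, best_score = None, float("-inf")
    for i in range(len(nps)):
        labels = [1 if j == i else 0 for j in range(len(nps))]
        s = score(labels)
        if s > best_score:
            best_labels, best_score = labels, s
    return best_labels

# Toy scorer preferring a later (more recent) antecedent -- a hypothetical
# recency bias for illustration only.
def recency_score(labels):
    return sum(i * lab for i, lab in enumerate(labels))
```

The point of the sequence view is that the score sees the whole labeling at once, so the model can weigh candidates against each other instead of making isolated yes/no decisions per noun phrase.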