• Title/Summary/Keyword: Pointer networks

Korean Coreference Resolution using Stacked Pointer Networks based on Position Encoding (포지션 인코딩 기반 스택 포인터 네트워크를 이용한 한국어 상호참조해결)

  • Park, Cheoneum; Lee, Changki
    • KIISE Transactions on Computing Practices / v.24 no.3 / pp.113-121 / 2018
  • Position encoding is a method of weighting words according to their position in a sentence. Pointer networks are a deep learning model that outputs indices corresponding to elements of the input sequence, a property that makes them applicable to coreference resolution. However, their performance degrades when the input sequence is long. To address this problem, we make two contributions to coreference resolution. First, we apply position encoding and dynamic position encoding to pointer networks. Second, we stack encoder layers deeply to obtain higher-level abstractions. The position encoding based stacked pointer network model proposed in this paper achieves a CoNLL F1 of 71.78%, an improvement of 6.01% over vanilla pointer networks.
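As a rough illustration of the mechanism described in the abstract above, the sketch below adds a sinusoidal position encoding to the encoder states and computes an additive pointer attention that selects an input index. It is a minimal sketch only: the paper's dynamic position encoding and stacked encoder are not reproduced, and all dimensions and weights are made up.

```python
# Minimal sketch: sinusoidal position encoding added to encoder states,
# followed by one additive (Bahdanau-style) pointer attention step.
import numpy as np

def position_encoding(seq_len, dim):
    """Standard sinusoidal position encoding (one plausible choice)."""
    pe = np.zeros((seq_len, dim))
    pos = np.arange(seq_len)[:, None]
    div = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)
    pe[:, 0::2] = np.sin(pos * div)
    pe[:, 1::2] = np.cos(pos * div)
    return pe

def pointer_attention(enc_states, dec_state, W1, W2, v):
    """Returns a distribution over input positions; the argmax is the pointed-to index."""
    scores = np.tanh(enc_states @ W1 + dec_state @ W2) @ v  # (seq_len,)
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

rng = np.random.default_rng(0)
seq_len, dim = 10, 16
enc = rng.normal(size=(seq_len, dim)) + position_encoding(seq_len, dim)
dec = rng.normal(size=dim)
W1, W2, v = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim)), rng.normal(size=dim)
probs = pointer_attention(enc, dec, W1, W2, v)
print("pointed index:", int(probs.argmax()))
```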

Pointer Networks based on Skip Pointing Model (스킵 포인팅 모델 기반 포인터 네트워크)

  • Park, Cheoneum; Lee, Changki
    • KIISE Transactions on Computing Practices / v.22 no.12 / pp.625-631 / 2016
  • Pointer networks are an attention-based model that generates an output sequence whose elements correspond to positions in the input sequence. Because the model computes attention over every input at each decoding step, its time complexity is $O(N^2)$ for an input sequence of size N, which leads to long decoding times. In this paper, we propose pointer networks based on a skip pointing model, which checks only the necessary input vectors at decoding time in order to reduce the decoding time. We evaluate the proposed method on pronoun coreference resolution. Our results show that processing time per sentence was approximately 1.15 times faster and the MUC F1 was 83.60%, an improvement of approximately 2.17% over the original pointer networks.
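The abstract above does not spell out the skip criterion, so the toy sketch below only conveys the general idea of pruning the candidate set during decoding: positions that have already been pointed to are skipped, so each step attends over fewer inputs. The scoring function and sizes are placeholders.

```python
# Toy sketch of skipping unnecessary inputs during pointer decoding.
import numpy as np

def decode_with_skipping(enc_states, dec_states, score_fn):
    remaining = list(range(len(enc_states)))   # candidate input positions
    outputs = []
    for dec in dec_states:
        cand = enc_states[remaining]           # attend only over remaining candidates
        scores = score_fn(cand, dec)
        pick = remaining[int(scores.argmax())]
        outputs.append(pick)
        remaining.remove(pick)                 # skip this input from now on
    return outputs

rng = np.random.default_rng(1)
enc = rng.normal(size=(8, 4))
dec = rng.normal(size=(3, 4))
print(decode_with_skipping(enc, dec, lambda c, d: c @ d))
```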

Coreference Resolution using Hierarchical Pointer Networks (계층적 포인터 네트워크를 이용한 상호참조해결)

  • Park, Cheoneum; Lee, Changki
    • KIISE Transactions on Computing Practices / v.23 no.9 / pp.542-549 / 2017
  • Sequence-to-sequence models, and pointer networks like them, suffer performance degradation when the input consists of multiple sentences or when the input sentence is long. To solve this problem, this paper proposes a hierarchical pointer network model that encodes an input sequence composed of several sentences at both the word level and the sentence level, and applies it to coreference resolution over all mentions. Experimental results show that the proposed model achieves a precision of 87.07%, a recall of 65.39%, and a CoNLL F1 of 74.61%, an improvement of 21.83% over an existing rule-based model.
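A compact sketch of the hierarchical encoding idea is given below: a word-level GRU encodes each sentence and a sentence-level GRU runs over the resulting sentence vectors, producing the two levels of states a decoder could attend over. The choice of GRUs and all layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Sketch of word-level plus sentence-level encoding for a multi-sentence input.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, emb_dim=32, hid=64):
        super().__init__()
        self.word_rnn = nn.GRU(emb_dim, hid, batch_first=True)
        self.sent_rnn = nn.GRU(hid, hid, batch_first=True)

    def forward(self, sentences):
        # sentences: list of (num_words, emb_dim) tensors, one per sentence
        word_states, sent_vecs = [], []
        for s in sentences:
            out, h = self.word_rnn(s.unsqueeze(0))      # (1, num_words, hid)
            word_states.append(out.squeeze(0))          # word-level memory
            sent_vecs.append(h.squeeze(0).squeeze(0))   # last hidden as sentence vector
        sent_out, _ = self.sent_rnn(torch.stack(sent_vecs).unsqueeze(0))
        return word_states, sent_out.squeeze(0)         # word- and sentence-level states

enc = HierarchicalEncoder()
doc = [torch.randn(5, 32), torch.randn(7, 32), torch.randn(4, 32)]
word_states, sent_states = enc(doc)
print(len(word_states), sent_states.shape)  # 3 sentences, torch.Size([3, 64])
```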

Korean Dependency Parsing using Pointer Networks (포인터 네트워크를 이용한 한국어 의존 구문 분석)

  • Park, Cheoneum; Lee, Changki
    • Journal of KIISE / v.44 no.8 / pp.822-831 / 2017
  • In this paper, we propose a Korean dependency parsing model that uses pointer networks with multi-task learning. Multi-task learning improves performance by learning two or more tasks at the same time; here, the pointer networks simultaneously predict the dependency relation (head) and the dependency label of each word. We define five input criteria for representing the morphemes of a word when applying the multi-task pointer networks to dependency parsing, and we apply fine-tuning to further improve performance. Experimental results show that the proposed model achieves a UAS of 91.79% and an LAS of 89.48%, outperforming conventional Korean dependency parsers.
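The sketch below illustrates the multi-task setup in its simplest form: from one decoder state per word, an attention head points at the head word while a classifier over the decoder state and the soft-selected head representation predicts the dependency label. The additive scoring form, dimensions, and label count are assumptions; the paper's five input criteria and fine-tuning are not modeled.

```python
# Sketch of a multi-task pointer step: point at the head word and classify the label.
import torch
import torch.nn as nn

class MultiTaskPointer(nn.Module):
    def __init__(self, hid=64, num_labels=40):
        super().__init__()
        self.w_enc = nn.Linear(hid, hid, bias=False)
        self.w_dec = nn.Linear(hid, hid, bias=False)
        self.v = nn.Linear(hid, 1, bias=False)
        self.label_clf = nn.Linear(hid * 2, num_labels)  # label from decoder + pointed head

    def forward(self, enc, dec):
        # enc: (seq_len, hid) encoder states, dec: (hid,) decoder state for one word
        scores = self.v(torch.tanh(self.w_enc(enc) + self.w_dec(dec))).squeeze(-1)
        head_dist = torch.softmax(scores, dim=-1)        # task 1: head (dependency relation)
        head_vec = head_dist @ enc                       # soft-selected head representation
        label_logits = self.label_clf(torch.cat([dec, head_vec]))  # task 2: dependency label
        return head_dist, label_logits

model = MultiTaskPointer()
head_dist, label_logits = model(torch.randn(12, 64), torch.randn(64))
print(int(head_dist.argmax()), int(label_logits.argmax()))
```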

Mention Detection Using Pointer Networks for Coreference Resolution

  • Park, Cheoneum; Lee, Changki; Lim, Soojong
    • ETRI Journal / v.39 no.5 / pp.652-661 / 2017
  • A mention has a noun or noun phrase as its head and forms a chunk, including any modifiers, that conveys a meaning. Mention detection refers to extracting mentions from a document, and coreference resolution refers to determining which mentions refer to the same entity. Pointer networks are recurrent neural network encoder-decoder models that output a list of positions in the input sequence. In this paper, we propose mention detection using pointer networks. This approach can detect overlapping mentions, which a sequence labeling approach cannot. Experimental results show that the proposed mention detection achieves an F1 of 80.75%, 8% higher than rule-based mention detection, and that coreference resolution based on it achieves a CoNLL F1 of 56.67% (mention boundary), 7.68% higher than coreference resolution using rule-based mention detection.
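The toy sketch below shows why pointing side-steps the overlap problem: each query independently points at a start and an end position, so the predicted spans are free to nest or overlap. The dot-product scoring and random weights are placeholders, not the paper's architecture.

```python
# Toy span pointing: each query selects a (start, end) pair independently.
import numpy as np

def point_span(enc_states, query, start_w, end_w):
    start = int((enc_states @ (start_w @ query)).argmax())
    end = int((enc_states @ (end_w @ query)).argmax())
    return min(start, end), max(start, end)

rng = np.random.default_rng(2)
enc = rng.normal(size=(9, 8))
start_w, end_w = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
# two different queries (e.g., two candidate heads) may yield overlapping spans
for q in rng.normal(size=(2, 8)):
    print(point_span(enc, q, start_w, end_w))
```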

Coreference Resolution for Korean Pronouns using Pointer Networks (포인터 네트워크를 이용한 한국어 대명사 상호참조해결)

  • Park, Cheoneum; Lee, Changki
    • Journal of KIISE / v.44 no.5 / pp.496-502 / 2017
  • Pointer networks are a deep learning model, based on a recurrent neural network (RNN) with an attention mechanism, that outputs a list of elements corresponding to the input sequence. Pronoun coreference resolution is the natural language processing (NLP) task of grouping into a single entity the antecedents that correspond to the pronouns in a document. In this paper, we propose a pronoun coreference resolution method that finds the relation between antecedents and pronouns using pointer networks, together with several input methods for the pointer networks, i.e., chaining orders between the words in an entity. Among the proposed methods, the chaining order Coref2 showed the best performance with a MUC F1 of 81.40%, which is 31.00% and 19.28% better than rule-based (50.40%) and statistics-based (62.12%) coreference resolution systems for Korean pronouns, respectively.
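A toy illustration of the core selection step is sketched below: for a pronoun, attention scores over the preceding positions are computed and the argmax is taken as the antecedent. The restriction to preceding positions and the dot-product scoring are simplifying assumptions; the chaining-order variants (Coref2 and others) compared in the paper concern how the input is arranged and are not modeled.

```python
# Toy antecedent selection: point at the best-scoring position before the pronoun.
import numpy as np

def select_antecedent(enc_states, pronoun_idx, dec_state):
    scores = enc_states @ dec_state
    scores[pronoun_idx:] = -np.inf        # assume the antecedent precedes the pronoun
    return int(scores.argmax())

rng = np.random.default_rng(3)
enc = rng.normal(size=(10, 6))
print(select_antecedent(enc, pronoun_idx=7, dec_state=rng.normal(size=6)))
```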

Mention Detection with Pointer Networks (포인터 네트워크를 이용한 멘션탐지)

  • Park, Cheoneum; Lee, Changki
    • Journal of KIISE / v.44 no.8 / pp.774-781 / 2017
  • A mention has a noun or noun phrase as its head and forms a chunk of text, including any modifiers, that conveys a meaning. Mention detection refers to extracting mentions from a document, and coreference resolution refers to determining whether various mentions refer to the same entity. A pointer network is a recurrent neural network (RNN) encoder-decoder model that outputs a list of elements corresponding to the input sequence. In this paper, we propose mention detection using pointer networks. By applying a pointer network to mention detection, the proposed model can handle overlapping mentions, a problem that sequence labeling cannot solve. Experimental results show that the proposed mention detection model achieves an F1 of 80.07%, 7.65%p higher than rule-based mention detection, and that coreference resolution using this mention detection achieves a CoNLL F1 of 52.67% (mention boundary) and 60.11% (head boundary), which are 7.68%p and 1.5%p higher, respectively, than coreference resolution using rule-based mention detection.
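The snippet below makes the overlap limitation concrete: BIO-style sequence labeling assigns one tag per token, so two mentions that share tokens cannot both be encoded in a single tag layer, whereas a list of (start, end) spans, which is what pointing produces, keeps both. The example tokens and spans are made up.

```python
# Why overlapping mentions break one-layer BIO labeling but not span lists.
tokens = ["the", "president", "of", "the", "company"]
mentions = [(0, 4), (3, 4)]          # "the president of the company", "the company"

def to_bio(num_tokens, spans):
    tags = ["O"] * num_tokens
    for start, end in spans:
        if any(tags[i] != "O" for i in range(start, end + 1)):
            return None              # overlap: cannot be represented with one BIO layer
        tags[start] = "B"
        for i in range(start + 1, end + 1):
            tags[i] = "I"
    return tags

print(to_bio(len(tokens), mentions))  # None: the nested mention is lost under BIO
print(mentions)                       # the span list keeps both mentions
```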

Pointer-Generator Networks for Community Question Answering Summarization (Pointer-Generator Networks를 이용한 cQA 시스템 질문 요약)

  • Kim, Won-Woo; Kim, Seon-Hoon; Jang, Heon-Seok; Kang, In-Ho; Park, Kwang-Hyun
    • Annual Conference on Human and Language Technology / 2018.10a / pp.126-131 / 2018
  • A cQA (Community-based Question Answering) system is a system in which users post questions and write answers. For user convenience, cQA provides functions for searching previously accumulated questions and for classifying them into categories. When questions are long, the accuracy of search and category classification drops; to overcome this limitation, a model that summarizes cQA questions is needed. Building such a model, however, requires a large amount of summarization data, which is difficult to obtain. To overcome this difficulty, in this paper we collected data from cQA question titles and bodies and built a summarization dataset through filtering. We then used a deep learning based Pointer-generator model to perform abstractive summarization using representative words from the question body. Experimental results show that the deep learning based abstractive summarization approach outperformed the existing extractive summarization approach, and that the Pointer-generator model achieved the best performance.
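A minimal sketch of the pointer-generator mixture used for abstractive summarization is given below: a generation probability p_gen interpolates between the vocabulary distribution and the copy (attention) distribution over source tokens. The shapes, token ids, and p_gen value are placeholders; the cQA-specific data collection and filtering are not shown.

```python
# One decoding step of a pointer-generator mixture: generate vs. copy.
import numpy as np

def pointer_generator_step(vocab_logits, attn, src_token_ids, p_gen):
    """Combine generating from the vocabulary with copying from the source."""
    vocab_dist = np.exp(vocab_logits - vocab_logits.max())
    vocab_dist /= vocab_dist.sum()
    final = p_gen * vocab_dist
    for pos, tok in enumerate(src_token_ids):        # scatter copy probabilities
        final[tok] += (1.0 - p_gen) * attn[pos]
    return final

rng = np.random.default_rng(4)
vocab_size, src_len = 50, 6
attn = rng.dirichlet(np.ones(src_len))               # attention over source tokens
dist = pointer_generator_step(rng.normal(size=vocab_size), attn,
                              src_token_ids=[3, 8, 8, 15, 20, 7], p_gen=0.6)
print(dist.sum())                                     # sums to 1.0
```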

Korean Dependency Parsing Using Stack-Pointer Networks and Subtree Information (스택-포인터 네트워크와 부분 트리 정보를 이용한 한국어 의존 구문 분석)

  • Choi, Yong-Seok; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / v.10 no.6 / pp.235-242 / 2021
  • In this work, we develop a Korean dependency parser based on a stack-pointer network, which consists of a pointer network and an internal stack. The parser has an encoder and a decoder and builds a dependency tree for an input sentence in a depth-first manner. The encoder encodes the input sentence, and at each step the decoder selects a child for the word at the top of the stack. Since the parser stores the search path on its internal stack, it can utilize information about previously derived subtrees when selecting a child node. Previous studies used only the grandparent and the most recently visited sibling, without considering the subtree structure. In this paper, we introduce graph attention networks that can represent a previously derived subtree, and we modify the stack-pointer-network parser to utilize the subtree information they produce. After training the dependency parser on the Sejong and Everyone's corpora, we evaluate its performance. Experimental results show that the proposed parser outperforms previous approaches in sentence-level accuracy when 2-depth graph attention networks are adopted.
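The schematic below shows the depth-first stack-pointer decoding loop described above: the decoder repeatedly looks at the word on top of the stack, points to one of its children, pushes that child, and pops when a word has no more children. The attention-based child selection and the graph-attention subtree representation are abstracted into a `choose_child` callback, here driven by a made-up gold tree.

```python
# Schematic depth-first stack-pointer decoding.
def stack_pointer_decode(root, choose_child):
    """choose_child(word, attached) -> index of a new child, or None."""
    stack, arcs = [root], []
    attached = set()
    while stack:
        head = stack[-1]
        child = choose_child(head, attached)
        if child is None:
            stack.pop()                      # no more children: backtrack
        else:
            arcs.append((head, child))       # dependency arc head -> child
            attached.add(child)
            stack.append(child)              # descend depth-first
    return arcs

# toy gold tree used by a fake chooser: word 0 is the root
gold_children = {0: [2], 2: [1, 3], 1: [], 3: [4], 4: []}
def chooser(head, attached):
    for c in gold_children[head]:
        if c not in attached:
            return c
    return None

print(stack_pointer_decode(0, chooser))  # [(0, 2), (2, 1), (2, 3), (3, 4)]
```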

CEM-PF: Cost-Effective Mobility Management Scheme Based on Pointer Forwarding in Proxy Mobile IPv6 Networks (프록시 모바일 IPv6 네트워크에서 포인터 포워딩에 기반한 비용효과적인 이동성관리 기법)

  • Park, Seung-Yoon; Jeong, Jong-Pil
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.4 / pp.81-93 / 2012
  • We propose efficient mobility management schemes based on pointer forwarding for Proxy Mobile IPv6 (PMIPv6) networks, with the objective of reducing the overall network traffic incurred by mobility management and packet delivery. The proposed schemes are per-user based: the optimal threshold of the forwarding chain length that minimizes the overall network traffic is dynamically determined for each individual mobile user, based on that user's specific mobility and service patterns. We demonstrate that an optimal threshold of the forwarding chain length exists, given a set of parameters characterizing a mobile user's mobility and service patterns, and that our schemes perform significantly better than schemes that apply a static threshold to all mobile users. A comparative analysis shows that our pointer forwarding schemes outperform routing-based mobility management protocols for PMIPv6.
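As a purely illustrative aid, the sketch below shows the kind of trade-off the per-user threshold resolves: a short forwarding chain means frequent (expensive) binding updates, a long chain means cheaper handoffs but longer forwarding paths for packet delivery, and the threshold K that minimizes the total cost can be found by enumeration. The cost expressions and parameters are generic placeholders, not the formulas derived in the paper.

```python
# Illustrative-only cost model for choosing a forwarding chain threshold K.
def total_cost(K, mobility_rate, session_rate, c_update, c_forward_hop):
    # location management: a full binding update is needed once every K handoffs
    update_cost = mobility_rate * c_update / K
    # packet delivery: sessions traverse, on average, about K/2 forwarding hops
    delivery_cost = session_rate * c_forward_hop * K / 2.0
    return update_cost + delivery_cost

def optimal_threshold(max_K, **params):
    return min(range(1, max_K + 1), key=lambda K: total_cost(K, **params))

# a user with high mobility relative to session arrivals favors a longer chain
print(optimal_threshold(10, mobility_rate=5.0, session_rate=1.0,
                        c_update=20.0, c_forward_hop=2.0))  # -> 10 with these parameters
```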