• Title/Summary/Keyword: LSTM CRF

Korean Dependency Parsing using Second-Order TreeCRF (Second-Order TreeCRF를 이용한 한국어 의존 파싱)

  • Min, Jinwoo; Na, Seung-Hoon; Shin, Jong-Hoon; Kim, Young-Kil
    • Annual Conference on Human and Language Technology / 2020.10a / pp.108-111 / 2020
  • Korean dependency parsing has been studied along two lines, transition-based and graph-based. The Biaffine attention model, currently the best-performing graph-based parser, encodes the input sequence with a multi-layer LSTM, applies separate MLPs to obtain representations of dependents and heads, and then uses Biaffine attention to score every head candidate for each dependent. This Biaffine attention model is a first-order parser that uses no additional high-order information and receives no tree-structured loss during training. In this work, we apply a Second-Order TreeCRF model to Korean dependency parsing, which additionally scores sibling nodes sharing the same parent and models the conditional probability of the gold tree, and we report experimental results.

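  • For orientation, the biaffine arc scoring referred to above can be sketched in a few lines of PyTorch; the module name, dimensions, and initialization below are illustrative assumptions, not the authors' code, and the second-order TreeCRF would add sibling scores and a tree-structured partition function on top of scores like these.

    import torch
    import torch.nn as nn

    class BiaffineArcScorer(nn.Module):
        """Minimal first-order arc scorer: separate MLPs produce head and
        dependent representations, then a biaffine product scores all pairs."""
        def __init__(self, enc_dim=400, arc_dim=256):
            super().__init__()
            self.head_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ReLU())
            self.dep_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ReLU())
            # extra column on the head side acts as a bias term
            self.W = nn.Parameter(torch.randn(arc_dim, arc_dim + 1) * 0.01)

        def forward(self, enc):                       # enc: (batch, seq, enc_dim) from a BiLSTM
            head = self.head_mlp(enc)                 # (batch, seq, arc_dim)
            dep = self.dep_mlp(enc)                   # (batch, seq, arc_dim)
            ones = torch.ones(head.size(0), head.size(1), 1, device=head.device)
            head = torch.cat([head, ones], dim=-1)    # append bias feature
            # scores[b, d, h] = dep[b, d] @ W @ head[b, h]
            return torch.einsum("bdi,ij,bhj->bdh", dep, self.W, head)

    # usage (illustrative): head scores over 10 tokens for a batch of 2 sentences
    scores = BiaffineArcScorer()(torch.randn(2, 10, 400))   # (2, 10, 10)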

Named-entity Recognition Using Bidirectional LSTM CRFs (Bidirectional LSTM CRFs를 이용한 한국어 개체명 인식)

  • Song, Chi-Yun; Yang, Sung-Min; Kang, Sangwoo
    • Annual Conference on Human and Language Technology / 2017.10a / pp.321-323 / 2017
  • Named entity recognition means extracting expressions with distinct meanings in a document, such as person names, organization names, place names, times, and dates, and determining their types. The Bidirectional LSTM CRFs model is an RNN-based deep learning model well suited to sequential data and shows the strongest performance in named entity recognition research. In this paper, we use the Bidirectional LSTM CRFs model for Korean named entity recognition and construct input features not only from words but also from a part-of-speech embedding model and a named entity dictionary. We also show that optimizing the size of the input feature vectors improves performance over the baseline model.

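  • A minimal PyTorch sketch of a Bidirectional LSTM CRFs tagger with concatenated word, POS, and dictionary features of the kind described above (layer sizes, names, and the pytorch-crf dependency are assumptions, not the authors' implementation):

    import torch
    import torch.nn as nn
    from torchcrf import CRF   # pytorch-crf package, assumed available

    class BiLSTMCRFTagger(nn.Module):
        """BiLSTM-CRF tagger whose input concatenates word, POS-tag, and
        named-entity-dictionary embeddings."""
        def __init__(self, n_words, n_pos, n_dict, n_tags,
                     w_dim=100, p_dim=20, d_dim=10, hidden=200):
            super().__init__()
            self.word_emb = nn.Embedding(n_words, w_dim)
            self.pos_emb = nn.Embedding(n_pos, p_dim)
            self.dict_emb = nn.Embedding(n_dict, d_dim)   # e.g. B/I/O match against an NE dictionary
            self.lstm = nn.LSTM(w_dim + p_dim + d_dim, hidden,
                                batch_first=True, bidirectional=True)
            self.emit = nn.Linear(2 * hidden, n_tags)
            self.crf = CRF(n_tags, batch_first=True)

        def _emissions(self, words, pos, dic):
            x = torch.cat([self.word_emb(words), self.pos_emb(pos),
                           self.dict_emb(dic)], dim=-1)
            h, _ = self.lstm(x)
            return self.emit(h)

        def loss(self, words, pos, dic, tags, mask):
            return -self.crf(self._emissions(words, pos, dic), tags,
                             mask=mask, reduction="mean")

        def predict(self, words, pos, dic, mask):
            return self.crf.decode(self._emissions(words, pos, dic), mask=mask)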

Korean Semantic Role Labeling using Stacked Bidirectional LSTM-CRFs (Stacked Bidirectional LSTM-CRFs를 이용한 한국어 의미역 결정)

  • Bae, Jangseong; Lee, Changki
    • Journal of KIISE / v.44 no.1 / pp.36-43 / 2017
  • Syntactic information represents the dependency relation between predicates and arguments and is helpful for improving the performance of Semantic Role Labeling (SRL) systems. However, syntactic analysis adds computational overhead and can propagate incorrect syntactic information. To avoid this problem, we exclude syntactic information and use only morpheme information to build Semantic Role Labeling systems. In this study, we propose an end-to-end SRL system that uses only morpheme information with a Stacked Bidirectional LSTM-CRFs model, extending the LSTM RNN, which is well suited to sequence labeling problems. Our experimental results show that the proposed model performs better than the other models compared.
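
  • A sketch of the "stacked" part of such a model, assuming a plain deep BiLSTM over morpheme embeddings whose per-token outputs become emission scores for a CRF layer (depth and sizes are illustrative, not the paper's configuration):

    import torch.nn as nn

    class StackedBiLSTMEncoder(nn.Module):
        """Deep (stacked) BiLSTM over morpheme-level embeddings; each layer
        reads the output of the previous one, and the final linear layer
        produces per-token label scores for a CRF."""
        def __init__(self, n_morph, n_labels, emb_dim=100, hidden=150, depth=3):
            super().__init__()
            self.emb = nn.Embedding(n_morph, emb_dim)
            self.layers = nn.ModuleList()
            in_dim = emb_dim
            for _ in range(depth):
                self.layers.append(nn.LSTM(in_dim, hidden, batch_first=True,
                                           bidirectional=True))
                in_dim = 2 * hidden
            self.out = nn.Linear(2 * hidden, n_labels)

        def forward(self, morphs):                # morphs: (batch, seq) of morpheme ids
            h = self.emb(morphs)
            for lstm in self.layers:
                h, _ = lstm(h)
            return self.out(h)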

Korean Named-entity Recognition Using CNN-CRFs (CNN-CRFs를 이용한 한국어 개체명 인식기)

  • You, Yeon-Soo; Park, Hyuk-Ro
    • Annual Conference on Human and Language Technology / 2019.10a / pp.78-80 / 2019
  • The bi-LSTM-CRFs model, which shows strong performance in named entity recognition research, has the drawback of slow processing, and the CNN-CRFs model had not been properly evaluated on Korean corpora. In this paper, we propose a syllable-level Korean named entity recognition method using a CNN-CRFs model trained on a Korean named entity recognition corpus. Experimental results show that the CNN-CRFs model achieved an F1 score 0.4% higher than the bi-LSTM-CRFs model while running 27.5% faster.

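  • A sketch of the syllable-level CNN encoder implied above, producing per-syllable emission scores that a CRF layer would decode (kernel size, depth, and dimensions are assumptions, not the paper's exact architecture):

    import torch.nn as nn

    class SyllableCNNEncoder(nn.Module):
        """Stacked 1-D convolutions over syllable embeddings; the outputs are
        per-syllable tag scores to be decoded by a CRF layer."""
        def __init__(self, n_syllables, n_tags, emb_dim=100,
                     channels=200, kernel=3, depth=4):
            super().__init__()
            self.emb = nn.Embedding(n_syllables, emb_dim)
            convs, in_ch = [], emb_dim
            for _ in range(depth):
                convs += [nn.Conv1d(in_ch, channels, kernel, padding=kernel // 2),
                          nn.ReLU()]
                in_ch = channels
            self.convs = nn.Sequential(*convs)
            self.emit = nn.Linear(channels, n_tags)

        def forward(self, syllables):                 # (batch, seq) of syllable ids
            x = self.emb(syllables).transpose(1, 2)   # (batch, emb_dim, seq) for Conv1d
            h = self.convs(x).transpose(1, 2)         # (batch, seq, channels)
            return self.emit(h)                       # emission scores per syllable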

Encoding Dictionary Feature for Deep Learning-based Named Entity Recognition

  • Ronran, Chirawan; Unankard, Sayan; Lee, Seungwoo
    • International Journal of Contents / v.17 no.4 / pp.1-15 / 2021
  • Named entity recognition (NER) is a crucial NLP task that aims to extract information from text. To build NER systems, deep learning (DL) models are often trained with dictionary features obtained by mapping each word in the dataset to dictionary entries and assigning it a unique index. However, this technique can generate noisy labels, which pose significant challenges for NER. In this paper, we propose DL-dictionary features and evaluate them on two datasets: the OntoNotes 5.0 dataset and our new infectious disease outbreak dataset, GFID. We concatenate (1) a character-level Bidirectional Long Short-Term Memory (BiLSTM) embedding and (2) a pre-trained word embedding with (3) our proposed dictionary features, built with a Convolutional Neural Network (CNN), a BiLSTM, and self-attention, respectively. The combined features (1-3) are fed through a BiLSTM-Conditional Random Field (CRF) layer to predict named entity classes. We compared these predictions with those of models from previous research that used the BiLSTM character embedding, pre-trained embedding, and dictionary features based on exact and partial dictionary matching. The findings show that the model employing our dictionary features outperformed models using existing dictionary features. We also computed the F1 score on the GFID dataset to apply this technique to the extraction of medical and healthcare information.
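
  • One common way to turn a gazetteer into token-level dictionary features is greedy longest matching, sketched below; this is an illustrative baseline scheme, not the exact encoding proposed in the paper:

    def dictionary_features(tokens, gazetteer):
        """Tag each token with a B/I/O dictionary-match feature by greedy
        longest-match lookup against a set of entity phrases (tuples)."""
        feats = ["O"] * len(tokens)
        max_len = max((len(p) for p in gazetteer), default=1)
        i = 0
        while i < len(tokens):
            matched = 0
            for n in range(min(max_len, len(tokens) - i), 0, -1):
                if tuple(tokens[i:i + n]) in gazetteer:
                    matched = n
                    break
            if matched:
                feats[i] = "B"
                for j in range(i + 1, i + matched):
                    feats[j] = "I"
                i += matched
            else:
                i += 1
        return feats

    # usage (illustrative)
    gaz = {("New", "York"), ("World", "Health", "Organization")}
    print(dictionary_features(["He", "visited", "New", "York"], gaz))  # ['O', 'O', 'B', 'I']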

Expansion of Word Representation for Named Entity Recognition Based on Bidirectional LSTM CRFs (Bidirectional LSTM CRF 기반의 개체명 인식을 위한 단어 표상의 확장)

  • Yu, Hongyeon; Ko, Youngjoong
    • Journal of KIISE / v.44 no.3 / pp.306-313 / 2017
  • Named entity recognition (NER) seeks to locate and classify named entities in text into pre-defined categories such as names of persons, organizations, locations, and expressions of time. Recently, many state-of-the-art NER systems have been implemented with bidirectional LSTM CRFs. Deep learning models based on long short-term memory (LSTM) generally depend on word representations as input. In this paper, we propose an approach to expand the word representation by combining a pre-trained word embedding with part-of-speech (POS) tag embedding, syllable embedding, and named entity dictionary feature vectors. Our experiments show that the proposed approach creates useful word representations as input to bidirectional LSTM CRFs. Our final results are 8.05%p higher than a baseline NER system that uses only the pre-trained word embedding vector.
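
  • A sketch of the expanded word representation described above, assuming simple mean pooling for the syllable component (dimensions and pooling are illustrative; the paper's exact construction may differ):

    import torch
    import torch.nn as nn

    class ExpandedWordRepresentation(nn.Module):
        """Concatenates a pre-trained word vector with POS-tag, syllable
        (mean-pooled), and named-entity-dictionary embeddings; the result is
        the per-token input to a bidirectional LSTM CRF tagger."""
        def __init__(self, pretrained, n_pos, n_syll, n_dict,
                     pos_dim=25, syll_dim=30, dict_dim=10):
            super().__init__()
            # pretrained: FloatTensor of shape (vocab_size, word_dim)
            self.word_emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
            self.pos_emb = nn.Embedding(n_pos, pos_dim)
            self.syll_emb = nn.Embedding(n_syll, syll_dim, padding_idx=0)
            self.dict_emb = nn.Embedding(n_dict, dict_dim)

        def forward(self, words, pos, syllables, dict_feat):
            # syllables: (batch, seq, max_syll), zero-padded; mean-pool per word
            syll = self.syll_emb(syllables).mean(dim=2)
            return torch.cat([self.word_emb(words), self.pos_emb(pos),
                              syll, self.dict_emb(dict_feat)], dim=-1)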

Neural Model for Named Entity Recognition Considering Aligned Representation

  • Sun, Hongyang; Kim, Taewhan
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.613-616 / 2018
  • Sequence tagging is an important task in Natural Language Processing (NLP), in which Named Entity Recognition (NER) is a key problem. To date, the most widely adopted model for NER combines a bidirectional long short-term memory (BiLSTM) neural network with the statistical sequence prediction method of Conditional Random Fields (CRF). In this work, we improve the prediction accuracy of the BiLSTM by adding an aligned word representation mechanism. We performed experiments on multilingual (English, Spanish, and Dutch) datasets and confirmed that our proposed model outperforms the existing state-of-the-art models.
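
  • In the BiLSTM + CRF combination mentioned above, the BiLSTM supplies per-token emission scores and the CRF supplies tag-transition scores; the best tag sequence is recovered with Viterbi decoding, sketched below in NumPy with illustrative scores:

    import numpy as np

    def viterbi_decode(emissions, transitions):
        """Viterbi decoding for a linear-chain CRF.
        emissions: (seq_len, n_tags) per-token scores from the BiLSTM.
        transitions: (n_tags, n_tags) score of moving from tag i to tag j."""
        seq_len, n_tags = emissions.shape
        score = emissions[0].copy()                  # best score ending in each tag
        backptr = np.zeros((seq_len, n_tags), dtype=int)
        for t in range(1, seq_len):
            # total[i, j]: best path ending in tag i at t-1, then tag j at t
            total = score[:, None] + transitions + emissions[t][None, :]
            backptr[t] = total.argmax(axis=0)
            score = total.max(axis=0)
        best = [int(score.argmax())]
        for t in range(seq_len - 1, 0, -1):
            best.append(int(backptr[t, best[-1]]))
        return best[::-1]

    # usage (illustrative): 4 tokens, 3 tags
    path = viterbi_decode(np.random.randn(4, 3), np.random.randn(3, 3))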

A Study on the Identification Method of Security Threat Information Using AI Based Named Entity Recognition Technology (인공지능 기반 개체명 인식 기술을 활용한 보안 위협 정보 식별 방안 연구)

  • Taehyeon Kim; Joon-Hyung Lim; Taeeun Kim; Ieck-chae Euom
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.4 / pp.577-586 / 2024
  • As new technologies are developed, new security threats are also increasing, such as AI technologies that generate ransomware. New security appliances such as XDR have been developed to cope with these threats, but when multiple security appliances are used together rather than a single one, it is difficult to write the many regular expressions needed to identify and classify the essential data. To solve this problem, this paper proposes a method for identifying the information essential to threat identification by introducing AI-based named entity recognition in environments that use various security appliances. After analyzing security appliance log data to select the essential information, we define the storage format and the tag list for the AI model, and propose a method for identifying and extracting the essential data with AI-based named entity recognition. In experiments on log data from various security appliances with 23 tags, the tag-wise weighted average F1 score was 0.44 for Bi-LSTM-CRF and 0.99 for BERT-CRF. In future work, we plan to study a process that integrates regular-expression-based and AI-based threat information identification and extraction, and to apply this process to new data.
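
  • A minimal sketch of the BERT-CRF architecture compared above, assuming the Hugging Face transformers and pytorch-crf packages; the checkpoint name, tag count, and layer sizes are placeholders, not the authors' configuration:

    import torch.nn as nn
    from torchcrf import CRF                      # pytorch-crf package, assumed available
    from transformers import AutoModel

    class BertCRFTagger(nn.Module):
        """BERT encoder with a linear emission layer and a CRF on top."""
        def __init__(self, n_tags, model_name="bert-base-multilingual-cased"):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(model_name)
            self.emit = nn.Linear(self.encoder.config.hidden_size, n_tags)
            self.crf = CRF(n_tags, batch_first=True)

        def loss(self, input_ids, attention_mask, tags):
            h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
            return -self.crf(self.emit(h), tags, mask=attention_mask.bool(),
                             reduction="mean")

        def predict(self, input_ids, attention_mask):
            h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
            return self.crf.decode(self.emit(h), mask=attention_mask.bool())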

Bi-directional LSTM-CNN-CRF for Korean Named Entity Recognition System with Feature Augmentation (자질 보강과 양방향 LSTM-CNN-CRF 기반의 한국어 개체명 인식 모델)

  • Lee, DongYub; Yu, Wonhee; Lim, HeuiSeok
    • Journal of the Korea Convergence Society / v.8 no.12 / pp.55-62 / 2017
  • A Named Entity Recognition system recognizes words or phrases in a document that denote entities such as person names (PS), place names (LC), and organization names (OG) and labels them with the corresponding entity types. Traditional approaches to named entity recognition include statistical models trained on hand-crafted features. Recently, deep learning models such as Recurrent Neural Networks (RNN) and long short-term memory (LSTM) have been proposed to build sentence representations and solve the sequence labeling problem. In this research, to improve the performance of a Korean named entity recognition system, we augment the sentence representation with hand-crafted features, part-of-speech tagging information, and pre-built lexicon information. Experimental results show that the proposed method improves the performance of the Korean named entity recognition system. The results of this study are released through GitHub for future collaborative research with researchers studying Korean Natural Language Processing (NLP) and named entity recognition systems.
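
  • A sketch of the CNN component of a BiLSTM-CNN-CRF tagger of the kind described above: a convolution over the characters (or syllables) of each word, max-pooled into a fixed-size vector that is concatenated with the word embedding before the BiLSTM (sizes are illustrative, not the paper's configuration):

    import torch.nn as nn

    class CharCNNWordFeature(nn.Module):
        """Character-level CNN feature per word: embed characters, convolve,
        then max-pool over character positions."""
        def __init__(self, n_chars, char_dim=30, n_filters=30, kernel=3):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
            self.conv = nn.Conv1d(char_dim, n_filters, kernel, padding=kernel // 2)

        def forward(self, char_ids):                       # (batch, seq, max_chars)
            b, s, c = char_ids.shape
            x = self.char_emb(char_ids.view(b * s, c))     # (b*s, max_chars, char_dim)
            x = self.conv(x.transpose(1, 2))               # (b*s, n_filters, max_chars)
            x = x.max(dim=2).values                        # pool over character positions
            return x.view(b, s, -1)                        # per-word character feature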