Bi-directional LSTM-CNN-CRF for Korean Named Entity Recognition System with Feature Augmentation

Korean Named Entity Recognition Model Based on Feature Augmentation and Bi-directional LSTM-CNN-CRF

  • Received : 2017.10.24
  • Accepted : 2017.12.20
  • Published : 2017.12.28

Abstract

A named entity recognition (NER) system identifies words or phrases in a document that denote entities such as person names (PS), location names (LC), and organization names (OG), and labels them with the corresponding entity type. Traditional approaches to named entity recognition use statistical models trained on hand-crafted features. More recently, deep-learning models such as recurrent neural networks (RNNs) and long short-term memory (LSTM) networks have been proposed to build feature representations of a sentence and to treat NER as a sequence-labeling problem. In this research, to improve the performance of a Korean named entity recognition system, we augment the sentence representation with hand-crafted features, part-of-speech tagging information, and pre-built lexicon information. Experimental results show that the proposed method improves the performance of the Korean named entity recognition system. The results of this study are released through GitHub to support future collaborative research with researchers working on Korean natural language processing (NLP) and named entity recognition.
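The feature augmentation described above, concatenating each token's word embedding with part-of-speech and lexicon information, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the embedding table, POS tag set, and gazetteer below are toy assumptions.

```python
# Sketch of per-token feature augmentation: word embedding +
# one-hot POS vector + binary lexicon (gazetteer) match flags
# for each entity type (PS, LC, OG). All resources here are toy
# examples for illustration only.

EMB_DIM = 4
WORD_EMB = {                      # toy pretrained word embeddings
    "김연아": [0.1, 0.2, 0.3, 0.4],
    "서울":   [0.5, 0.1, 0.0, 0.2],
}
POS_TAGS = ["NNP", "NNG", "JKS", "VV"]   # hypothetical POS tag set
LEXICON = {                              # hypothetical gazetteer
    "PS": {"김연아"},   # person names
    "LC": {"서울"},     # location names
    "OG": set(),        # organization names
}
ENTITY_TYPES = sorted(LEXICON)           # ["LC", "OG", "PS"]

def augment(token, pos):
    """Return the augmented feature vector for one token."""
    word_vec = WORD_EMB.get(token, [0.0] * EMB_DIM)  # OOV -> zero vector
    pos_vec = [1.0 if t == pos else 0.0 for t in POS_TAGS]
    lex_vec = [1.0 if token in LEXICON[e] else 0.0 for e in ENTITY_TYPES]
    return word_vec + pos_vec + lex_vec              # concatenation

if __name__ == "__main__":
    vec = augment("서울", "NNP")
    print(len(vec))       # EMB_DIM + |POS_TAGS| + |ENTITY_TYPES| = 11
    print(vec[EMB_DIM:])  # POS one-hot followed by lexicon flags
```

In the full model these augmented vectors (together with CNN-derived character features) would feed the bi-directional LSTM, whose outputs are decoded by a CRF layer.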

Keywords

Named Entity Recognition; Natural Language Processing; Deep Learning; Feature Augmentation
