Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents

  • Park, Jongin (Graduate School of Business IT, Kookmin University)
  • Kim, Namgyu (School of Management Information Systems, Kookmin University)
  • Received : 2019.06.26
  • Accepted : 2019.09.19
  • Published : 2019.09.30

Abstract

According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally performed in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are carried out according to the purpose of the analysis. Until recently, text mining research focused mainly on this second step. However, with the recognition that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form that a computer can understand. Mapping arbitrary objects into a space of a given dimension while maintaining their algebraic properties is called "embedding," and it is used to structure text data. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document-level analysis grows rapidly, many algorithms have been developed to support document embedding. Among them, doc2Vec, which extends word2Vec and embeds each document into a single vector, is the most widely used. However, traditional document embedding methods represented by doc2Vec generate the vector for each document from all the words the document contains, so the document vector is affected not only by core words but also by miscellaneous words. In addition, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords. For a document without keywords, the method can still be applied after keywords are extracted through various analysis techniques; since keyword extraction is not the core subject of the proposed method, however, we describe its application to documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. First, all text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation of traditional document embedding, which is affected by miscellaneous words as well as core words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors per document. Next, clustering is performed on each document's keyword vectors to identify the multiple subjects the document contains. Finally, multiple vectors are generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the traditional single-vector approach cannot properly map complex documents because the subjects interfere with one another within a single vector. With the proposed multi-vector method, we confirmed that complex documents can be vectorized more accurately by eliminating this interference among subjects.
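A minimal sketch of the five steps above is shown below. It assumes a gensim Word2Vec model for the word-embedding step and scikit-learn KMeans for keyword clustering; the tokenization, the number of clusters per document, and the use of cluster centroids as the final vectors are illustrative assumptions rather than details fixed by the paper.

```python
# Minimal sketch of the proposed multi-vector document embedding pipeline.
# Assumptions (not fixed by the paper): gensim Word2Vec for word embedding,
# k-means for keyword clustering, cluster centroids as the final vectors.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def multi_vector_embedding(documents, keywords_per_doc, dim=100, n_clusters=3):
    """documents: list of token lists (tokenized document bodies).
    keywords_per_doc: list of keyword lists, one per document.
    Returns a list with one (k x dim) array of subject vectors per document."""
    # (1)-(2) Parsing and word embedding: every token in the corpus is mapped
    # to an N-dimensional real-valued vector.
    w2v = Word2Vec(sentences=documents, vector_size=dim, window=5, min_count=1)

    multi_vectors = []
    for keywords in keywords_per_doc:
        # (3) Keyword vector extraction: keep only the vectors of the document's
        # keywords so that miscellaneous words do not affect the result.
        kw_vecs = np.array([w2v.wv[k] for k in keywords if k in w2v.wv])
        if len(kw_vecs) == 0:          # no usable keywords: fall back to zeros
            multi_vectors.append(np.zeros((1, dim)))
            continue

        # (4) Keyword clustering: group keyword vectors to identify the
        # multiple subjects contained in the document.
        k = min(n_clusters, len(kw_vecs))
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(kw_vecs)

        # (5) Multiple-vector generation: one vector per subject cluster
        # (here, the centroid of the keyword vectors in that cluster).
        multi_vectors.append(np.array([kw_vecs[labels == c].mean(axis=0)
                                       for c in range(k)]))
    return multi_vectors
```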

To enable various analyses of text data, methods for structuring unstructured text data have recently been studied actively. Traditional document embedding, represented by doc2Vec, builds the vector of a document from every word the document contains, so the document vector is affected not only by core words but also by peripheral words. Moreover, because traditional document embedding represents each document as a single vector, it has difficulty accurately mapping complex documents that combine multiple subjects. To overcome these two limitations of existing document embedding, this paper proposes a new multi-vector document embedding methodology. Specifically, the proposed methodology vectorizes a document using only its core words rather than all of its words, and decomposes the various subjects contained in a document so that one document is represented as a set of multiple vectors. Through experiments on 3,147 papers collected from KISS, we confirmed the vector distortion that occurs when a complex document is expressed as a single vector, and verified that the proposed methodology, which semantically decomposes a complex document into multiple vectors, corrects this distortion and embeds each document more accurately.
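To illustrate the distortion argument above, the sketch below compares two documents under both representations. The scoring rule for the multi-vector case (best cosine match between any pair of subject vectors) is an illustrative assumption; the paper's experiments may define document similarity differently.

```python
# Illustrative comparison (not the paper's exact measure): similarity between
# two documents under the single-vector vs. multi-vector representations.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def single_vector_similarity(vecs_a, vecs_b):
    # Single-vector baseline: all subject vectors are averaged into one vector,
    # so unrelated subjects interfere with (distort) the comparison.
    return cosine(vecs_a.mean(axis=0), vecs_b.mean(axis=0))

def multi_vector_similarity(vecs_a, vecs_b):
    # Multi-vector scheme: compare subject vectors pairwise and keep the best
    # match, so a shared subject is not diluted by the documents' other subjects.
    return max(cosine(a, b) for a in vecs_a for b in vecs_b)
```

With the outputs of multi_vector_embedding above, two papers that share one subject but differ in their remaining subjects can still score high under multi_vector_similarity, whereas their averaged single vectors may not.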
