HTML Tag Depth Embedding: An Input Embedding Method of the BERT Model for Improving Web Document Reading Comprehension Performance

  • Mok, Jin-Wang (Division of Computer Engineering, Baekseok University)
  • Jang, Hyun Jae (Division of Computer Engineering, Baekseok University)
  • Lee, Hyun-Seob (Division of Advanced IT, Baekseok University)
  • Received : 2022.07.18
  • Accepted : 2022.09.06
  • Published : 2022.10.31

Abstract

Recently, the growing number of edge devices has generated a massive amount of data, and in particular the volume of raw, unstructured HTML documents is increasing. Machine Reading Comprehension (MRC), in which a natural language processing model finds important information within an HTML document, is therefore becoming more important. In this paper, we propose HTDE (HTML Tag Depth Embedding), an input embedding method that allows BERT (Bidirectional Encoder Representations from Transformers), which has shown solid performance in many MRC studies, to effectively learn the depth of the HTML document structure. For each BERT input token, HTDE builds a tag stack from the HTML document and extracts the token's depth information. An HTML embedding that takes this depth as input is then added to BERT's input embedding. Because this method represents the document structure at the token level, relationships with surrounding tokens can be identified, which improves BERT's accuracy on HTML documents. Finally, experiments demonstrate that the proposed method achieves higher prediction accuracy on HTML structures than BERT's conventional input embedding.
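To make the mechanism concrete, the sketch below illustrates one way the idea could be realized in PyTorch; it is not the authors' implementation. The names DepthTracker, HtdeEmbedding, and max_depth are assumptions, the tokenizer is a simple whitespace split rather than BERT's WordPiece, and the dimensions follow BERT-base defaults. What it demonstrates is the core of the abstract: a per-token tag depth, obtained from a stack of open HTML tags, is looked up in an extra learned embedding table and summed with BERT's word, position, and segment embeddings.

    # Illustrative sketch of an HTDE-style depth extraction and input embedding.
    # Assumed names (DepthTracker, HtdeEmbedding) are not from the paper.
    from html.parser import HTMLParser

    import torch
    import torch.nn as nn

    class DepthTracker(HTMLParser):
        """Records the open-tag-stack depth of each text token in an HTML document."""
        def __init__(self):
            super().__init__()
            self.depth = 0                 # current size of the open-tag stack
            self.tokens, self.depths = [], []

        def handle_starttag(self, tag, attrs):
            self.depth += 1                # push one level (void tags like <br> not special-cased)

        def handle_endtag(self, tag):
            self.depth = max(0, self.depth - 1)  # pop one level

        def handle_data(self, data):
            for tok in data.split():       # whitespace tokenization, for illustration only
                self.tokens.append(tok)
                self.depths.append(self.depth)

    class HtdeEmbedding(nn.Module):
        """BERT-style input embedding extended with a learned tag-depth term."""
        def __init__(self, vocab_size=30522, hidden=768, max_pos=512, max_depth=64):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, hidden)
            self.pos_emb = nn.Embedding(max_pos, hidden)
            self.seg_emb = nn.Embedding(2, hidden)
            self.depth_emb = nn.Embedding(max_depth, hidden)   # the added HTDE term
            self.norm = nn.LayerNorm(hidden)

        def forward(self, input_ids, segment_ids, depth_ids):
            positions = torch.arange(input_ids.size(1), device=input_ids.device)
            x = (self.word_emb(input_ids) + self.pos_emb(positions)
                 + self.seg_emb(segment_ids) + self.depth_emb(depth_ids))
            return self.norm(x)

    parser = DepthTracker()
    parser.feed("<html><body><div><p>hello world</p></div></body></html>")
    print(list(zip(parser.tokens, parser.depths)))   # [('hello', 4), ('world', 4)]

    emb = HtdeEmbedding()
    ids = torch.randint(0, 30522, (1, 2))            # two token ids
    seg = torch.zeros(1, 2, dtype=torch.long)        # single segment
    dep = torch.tensor([[4, 4]])                     # depths from the parser above
    print(emb(ids, seg, dep).shape)                  # torch.Size([1, 2, 768])

In a real pipeline the whitespace split would be replaced by BERT's WordPiece tokenizer, with each subword inheriting the depth of its source token; the depth embedding then participates in the usual sum-and-normalize step of the input embedding unchanged.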

Keywords

Funding

This work was supported by the Regional Innovation Strategy (RIS) project based on local government-university cooperation (2021RIS-004) and the Basic Science Research Program (NRF-2021R1I1A3061020) through the National Research Foundation of Korea (NRF) funded by the Ministry of Education in 2022, and by an NRF grant funded by the Ministry of Science and ICT (NRF-2021R1C1C2012843).
