
Automated Fact Checking Model Using Efficient Transformer

  • Yun, Hee Seung (Department of Computer Engineering, Chung-Ang University)
  • Jung, Jason J. (Department of Computer Engineering, Chung-Ang University)
  • Received : 2021.07.16
  • Accepted : 2021.07.29
  • Published : 2021.09.30

Abstract

Nowadays, fake news from newspapers and social media poses a serious threat to news credibility. Several machine learning methods (such as LSTM, logistic regression, and the Transformer) have been applied to fact checking. In this paper, we present a Transformer-based fact-checking model that improves computational efficiency. Locality Sensitive Hashing (LSH) is employed to compute attention values efficiently, reducing computation time. With LSH, the model can group semantically similar words and compute attention values within each group. The proposed model achieves 75% accuracy, with F1 micro and F1 macro scores of 42.9% and 75%, respectively.
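The grouping step described above can be illustrated with a short sketch. The Python snippet below is an illustrative approximation, not the authors' implementation: it hashes token vectors with random hyperplane projections (one common LSH family) and restricts softmax attention to tokens that share a hash bucket, assuming shared query/key vectors as in Reformer. The function names (`lsh_buckets`, `bucketed_attention`) and parameters are hypothetical.

```python
# Minimal sketch of LSH-bucketed attention (illustrative, not the paper's code).
import numpy as np

def lsh_buckets(x, n_hashes=4, seed=0):
    """Assign each row of x to a bucket via random hyperplane projections."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((x.shape[1], n_hashes))  # random hyperplanes
    bits = (x @ planes) > 0                               # sign of each projection
    return bits.dot(1 << np.arange(n_hashes))             # pack sign bits into a bucket id

def bucketed_attention(q, k, v):
    """Compute softmax attention only among tokens in the same bucket.

    Assumes shared query/key vectors (q == k), as in Reformer, so hashing
    the queries also determines the key buckets.
    """
    out = np.zeros_like(v)
    buckets = lsh_buckets(q)
    for b in np.unique(buckets):
        idx = np.where(buckets == b)[0]
        scores = q[idx] @ k[idx].T / np.sqrt(q.shape[1])           # scaled dot product
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)              # softmax within bucket
        out[idx] = weights @ v[idx]
    return out

# Toy usage: 16 tokens with 8-dimensional embeddings.
x = np.random.default_rng(1).standard_normal((16, 8))
print(bucketed_attention(x, x, x).shape)  # (16, 8)
```

Because each token attends only within its bucket, the quadratic cost of full attention shrinks to roughly the sum of squared bucket sizes; Reformer additionally repeats the hashing several rounds to reduce the chance that semantically similar tokens land in different buckets.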

Keywords

Acknowledgement

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2017S1A6A3A01078538).
