Acknowledgement
This research was supported by the Fourth Stage of the Brain Korea 21 Project (BK21 FOUR) funded by the Ministry of Education and the National Research Foundation of Korea. This work was also supported by the 'High-Performance Computing Support' project of the Ministry of Science and ICT and the National IT Industry Promotion Agency (NIPA).
References
- Y. Liu, "Fine-tune BERT for extractive summarization," arXiv:1903.10318, 2019.
- J. Xu and G. Durrett, "Neural extractive text summarization with syntactic compression," arXiv:1902.00863, 2019.
- M. Zhong, P. Liu, Y. Chen, D. Wang, X. Qiu, and X. Huang, "Extractive summarization as text matching," arXiv:2004.08795, 2020.
- R. Nallapati, B. Zhou, C. Gulcehre, and B. Xiang, "Abstractive text summarization using sequence-to-sequence RNNs and beyond," arXiv:1602.06023, 2016.
- A. M. Rush, S. Chopra, and J. Weston, "A neural attention model for abstractive sentence summarization," arXiv:1509.00685, 2015.
- A. See, P. J. Liu, and C. D. Manning, "Get to the point: Summarization with pointer-generator networks," arXiv:1704.04368, 2017.
- A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
- J. Zhang, Y. Zhao, M. Saleh, and P. Liu, "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization," Proceedings of the 37th International Conference on Machine Learning, vol. 119, pp. 11328-11339, 2020.
- C.-Y. Lin, "Rouge: A package for automatic evaluation of summaries," Text Summarization Branches Out, pp. 74-81, Barcelona, Spain, Jul. 2004.
- H. P. Luhn, "A statistical approach to mechanized encoding and searching of literary information," IBM Journal of Research and Development, vol. 1, no. 4, pp. 309-317, 1957. https://doi.org/10.1147/rd.14.0309
- M. A. Fattah and F. Ren, "GA, MR, FFNN, PNN and GMM based models for automatic text summarization," Comput. Speech Lang., vol. 23, no. 1, pp. 126-144, 2009. https://doi.org/10.1016/j.csl.2008.04.002
- R. Mihalcea and P. Tarau, "Textrank: Bringing order into text," Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pp. 404-411, Barcelona, Spain, Jul. 2004.
- L. Page, S. Brin, R. Motwani, and T. Winograd, "The PageRank Citation Ranking: Bringing Order to the Web," Stanford University technical report, 1998.
- J. Cha, J. Kim, and P. Kim, "An improved lexical-chain-based automatic document summarization method considering semantic relatedness between words," Smart Media Journal, vol. 6, no. 1, pp. 22-29, 2017.
- R. Nallapati, B. Zhou, and M. Ma, "Classify or select: Neural architectures for extractive document summarization," arXiv:1611.04244, 2016.
- A. Khan and N. Salim, "A review on abstractive summarization methods," Journal of Theoretical and Applied Information Technology, vol. 59, no. 1, pp. 64-72, 2014.
- T. Lee, C. Seon, Y. Jung, and S. Kang, "Automatic document summarization with selective copying for out-of-vocabulary words," Smart Media Journal, vol. 8, no. 2, pp. 58-65, 2019.
- S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735-1780, 1997. https://doi.org/10.1162/neco.1997.9.8.1735
- I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," Advances in Neural Information Processing Systems, vol. 27, 2014.
- D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv:1409.0473, 2014.
- A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, "Improving language understanding by generative pre-training," 2018.
- A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," OpenAI Blog, vol. 1, no. 8, p. 9, 2019.
- T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, and A. Askell, "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020.
- R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H. Cheng, A. Jin, T. Bos, L. Baker, and Y. Du, "Lamda: Language models for dialog applications," arXiv:2201.08239, 2022.
- H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Roziere, N. Goyal, E. Hambro, and F. Azhar, "Llama: Open and efficient foundation language models," arXiv:2302.13971, 2023.
- A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H.W. Chung, C. Sutton, and S. Gehrmann, "Palm: Scaling language modeling with pathways," arXiv:2204.02311, 2022.
- J. Devlin, M. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv:1810.04805, 2018.
- M. Joshi, D. Chen, Y. Liu, D. S. Weld, L. Zettlemoyer, and O. Levy, "Spanbert: Improving pre-training by representing and predicting spans," Transactions of the Association for Computational Linguistics, vol. 8, pp. 64-77, 2020. https://doi.org/10.1162/tacl_a_00300
- E. Kim, J. Shin, and M. Lim, "A key sentence extraction method considering sentence importance based on ELMo embedding," Smart Media Journal, vol. 10, no. 1, pp. 39-46, 2021.
- N. Reimers and I. Gurevych, "Sentence-bert: Sentence embeddings using siamese bert-networks," arXiv:1908.10084, 2019.
- C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," The Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485-5551, 2020.