• Title/Summary/Keyword: Single-Document Summarization

Cross-Lingual Style-Based Title Generation Using Multiple Adapters (다중 어댑터를 이용한 교차 언어 및 스타일 기반의 제목 생성)

  • Yo-Han Park;Yong-Seok Choi;Kong Joo Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.341-354 / 2023
  • The title of a document is a brief summary of the document. Readers can understand a document more easily if its title is provided in their preferred style and language. In this research, we propose a cross-lingual and style-based title generation model that uses multiple adapters. Training such a model would ordinarily require a parallel corpus covering several languages and styles, which is quite difficult to construct; a monolingual title-generation corpus of a single style, however, can be built easily. We therefore apply a zero-shot strategy to generate a title in a different language and with a different style for an input document. The baseline model is a Transformer with an encoder and a decoder, pre-trained on several languages. The model is then equipped with multiple adapters for translation, languages, and styles. The model first learns a translation task from a parallel corpus and then learns a title-generation task from a monolingual title-generation corpus. When training the model on a task, we activate only the adapter that corresponds to that task; when generating a cross-lingual, style-based title, we activate only the adapters that correspond to the target language and target style. Experimental results show that our proposed model performs only as well as a pipeline model that first translates the document into the target language and then generates a title. Although the emergence of large-scale language models has brought significant changes to natural language generation, research on improving generation performance with limited resources and limited data remains necessary, and this study explores the significance of such research.
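The adapter mechanism the abstract describes, activating and training only the adapter matched to the current task while the rest of the network stays fixed, can be illustrated with a short sketch. The snippet below is a minimal illustration, assuming the standard bottleneck adapter design (down-projection, nonlinearity, up-projection, residual); the class names, the `set_active` helper, and the adapter names are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    plus a residual connection around the whole module."""
    def __init__(self, hidden_dim, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

class MultiAdapterBlock(nn.Module):
    """Holds one adapter per task/language/style; only the active one
    runs, and only the active one receives gradient updates."""
    def __init__(self, hidden_dim, adapter_names):
        super().__init__()
        self.adapters = nn.ModuleDict(
            {name: Adapter(hidden_dim) for name in adapter_names})
        self.active = None

    def set_active(self, name):
        self.active = name
        # Freeze every adapter except the one matched to the current task.
        for adapter_name, adapter in self.adapters.items():
            for p in adapter.parameters():
                p.requires_grad = (adapter_name == name)

    def forward(self, x):
        if self.active is None:
            return x  # no adapter selected: plain pass-through
        return self.adapters[self.active](x)

# Usage: activate the translation adapter for the translation phase,
# then switch to language/style adapters for title generation.
block = MultiAdapterBlock(hidden_dim=768,
                          adapter_names=["translation", "lang_ko", "style_headline"])
block.set_active("translation")
h = torch.randn(2, 10, 768)   # (batch, sequence length, hidden size)
out = block(h)                # same shape as h
```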

English-Korean Neural Machine Translation using MASS (MASS를 이용한 영어-한국어 신경망 기계 번역)

  • Jung, Young-Jun;Park, Cheon-Eum;Lee, Chang-Ki;Kim, Jun-Seok
    • Annual Conference on Human and Language Technology / 2019.10a / pp.236-238 / 2019
  • Research on neural machine translation has mainly followed an end-to-end approach based on supervised learning. Because supervised methods perform poorly when training data are scarce, however, transfer learning, in which a model such as BERT is pre-trained on large amounts of monolingual data and then fine-tuned, has become a major line of research in natural language processing. The recently proposed MASS model achieved strong performance in machine translation and document summarization through a pre-training method designed for language-generation tasks. In this paper, we apply MASS to neural machine translation to improve English-Korean translation performance. Experimental results show that the English-Korean machine translation model based on MASS outperforms the existing models.
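The MASS objective the abstract refers to masks one contiguous span of the source sentence on the encoder side and trains the decoder to regenerate exactly that span. Below is a minimal sketch of that masking step, assuming the roughly 50% span fraction reported in the MASS paper; the function name `mass_mask` and the whitespace tokenization are illustrative only.

```python
import random

MASK = "[MASK]"

def mass_mask(tokens, span_frac=0.5):
    """MASS-style masking: replace one contiguous span of the encoder
    input with [MASK] tokens; the decoder's target is that hidden span."""
    n = len(tokens)
    span_len = max(1, int(n * span_frac))
    start = random.randrange(0, n - span_len + 1)
    enc_input = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    dec_target = tokens[start:start + span_len]
    return enc_input, dec_target

enc, tgt = mass_mask("the quick brown fox jumps over the lazy dog".split())
print(enc)  # source with a masked span, e.g. [..., '[MASK]', '[MASK]', ...]
print(tgt)  # the contiguous fragment the decoder must reconstruct
```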
