Deep Learning-based Korean Dialect Machine Translation Research Considering Linguistic Features and Service

  • Lim, Sangbeom (Department of Software Application, Kangnam University)
  • Park, Chanjun (Department of Computer Science and Engineering, Korea University)
  • Yang, Yeongwook (Department of Computer Science and Engineering, Hanshin University)
  • Received : 2021.11.23
  • Accepted : 2022.02.20
  • Published : 2022.02.28

Abstract

Motivated by the importance of dialect research, preservation, and communication, this paper presents a study on Korean dialect machine translation for dialect speakers who might otherwise be marginalized. The dialect data used is the AIHUB dialect corpus, which is distributed by top-level administrative district. Building on this data, we pursue two directions: a modeling study that applies a copy mechanism to a Transformer-based model to improve dialect machine translation performance, and a many-to-one dialect machine translation model that improves the efficiency of model deployment. We compare the performance of the one-to-one and many-to-one models in terms of BLEU score and analyze the results from various linguistic perspectives. The experimental results show that the proposed method improves the performance of the one-to-one machine translation model and that the many-to-one model achieves meaningful performance.
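The abstract does not spell out how a single many-to-one model distinguishes its source dialects or how the one-to-one and many-to-one systems are scored. A common approach in multilingual NMT, and one plausible reading here, is to prepend a dialect tag to each source sentence and to compare systems with corpus-level BLEU. The minimal Python sketch below illustrates that setup; the tag names, the toy sentence pair, and the use of the `sacrebleu` library are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (assumptions): prepare many-to-one training pairs by
# prepending a dialect tag to each source sentence, then score outputs
# with corpus-level BLEU. Tag names and sample data are hypothetical.
import sacrebleu

DIALECT_TAGS = {
    "gyeongsang": "<2std-gs>",
    "jeolla": "<2std-jl>",
    "jeju": "<2std-jj>",
}

def tag_pairs(pairs, dialect):
    """Yield (tagged dialect sentence, standard Korean sentence) pairs."""
    tag = DIALECT_TAGS[dialect]
    for src, tgt in pairs:
        yield f"{tag} {src.strip()}", tgt.strip()

# Toy Gyeongsang-dialect pair (illustrative only).
sample = [("밥 뭇나?", "밥 먹었니?")]
tagged = list(tag_pairs(sample, "gyeongsang"))
print(tagged[0][0])  # -> "<2std-gs> 밥 뭇나?"

# Corpus-level BLEU between system outputs and references,
# e.g. to compare one-to-one vs. many-to-one translations.
hypotheses = ["밥 먹었니?"]    # model outputs (hypothetical)
references = [["밥 먹었니?"]]  # one aligned reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```

The tag-prepending step is what lets a single model serve several dialects at once, which is the deployment-efficiency argument made for the many-to-one system in the abstract.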

Keywords

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1C1C2004868).
