Funding
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-2018-0-01405) and the ICT Creative Consilience program (IITP-2021-0-01819), both supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
References
- L. Specia, F. Blain, V. Logacheva, R. Astudillo & A. Martins. (2018). Findings of the WMT 2018 shared task on quality estimation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers. Association for Computational Linguistics. DOI : 10.18653/v1/W18-6451
- E. Fonseca, L. Yankovskaya, A. F. Martins, M. Fishel & C. Federmann. (2019). Findings of the WMT 2019 shared tasks on quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2). (pp. 1-10). DOI : 10.18653/v1/W19-5401
- L. Specia, K. Shah, J. G. De Souza & T. Cohn. (2013). QuEst - A translation quality estimation framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations. (pp. 79-84).
- L. Specia, C. Scarton & G. H. Paetzold. (2018). Quality estimation for machine translation. Synthesis Lectures on Human Language Technologies, 11(1), 1-162. DOI : 10.2200/S00854ED1V01Y201805HLT039
- L. Specia, D. Raj & M. Turchi. (2010). Machine translation evaluation versus quality estimation. Machine Translation, 24(1), 39-50. DOI : 10.1007/s10590-010-9077-2
- D. Lee. (2020). Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation. (pp. 1024-1028).
- Y. Baek, Z. M. Kim, J. Moon, H. Kim & E. Park. (2020). PATQUEST: Papago translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation. (pp. 991-998).
- G. Lample & A. Conneau. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
- J. Devlin, M. W. Chang, K. Lee & K. Toutanova. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. DOI : 10.18653/v1/N19-1423
- A. Conneau et al. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. DOI : 10.18653/v1/2020.acl-main.747
- Y. Liu et al. (2020). Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8, 726-742. DOI : 10.1162/tacl_a_00343
- E. Bicici & A. Way. (2014). Referential translation machines for predicting translation quality. Association for Computational Linguistics. DOI : 10.18653/v1/W15-3035
- R. Soricut, N. Bach & Z. Wang. (2012). The SDL Language Weaver systems in the WMT12 quality estimation shared task. In Proceedings of the Seventh Workshop on Statistical Machine Translation. (pp. 145-151).
- N. Q. Luong, B. Lecouteux & L. Besacier. (2013). LIG system for WMT13 QE task: Investigating the usefulness of features in word confidence estimation for MT. In Proceedings of the Eighth Workshop on Statistical Machine Translation. (pp. 386-391).
- C. Hardmeier, J. Nivre & J. Tiedemann. (2012). Tree kernels for machine translation quality estimation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, Montreal, Canada. (pp. 109-113). Association for Computational Linguistics.
- R. N. Patel. (2016). Translation quality estimation using recurrent neural network. arXiv preprint arXiv:1610.04841. DOI : 10.18653/v1/W16-2389
- H. Kim & J. H. Lee. (2016). Recurrent neural network based translation quality estimation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers. (pp. 787-792). DOI : 10.18653/v1/W16-2384
- K. Cho et al. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. DOI : 10.3115/v1/D14-1179
- S. Hochreiter & J. Schmidhuber. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. DOI : 10.1162/neco.1997.9.8.1735
- H. Kim, J. H. Lee & S. H. Na. (2017). Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Conference on Machine Translation. (pp. 562-568). DOI : 10.18653/v1/W17-4763
- J. Wang, K. Fan, B. Li, F. Zhou, B. Chen, Y. Shi & L. Si. (2018). Alibaba submission for WMT18 quality estimation task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers. (pp. 809-815). DOI : 10.18653/v1/W18-6465
- A. Vaswani et al. (2017). Attention is all you need. In Advances in Neural Information Processing Systems. (pp. 5998-6008).
- F. Kepler et al. (2019). Unbabel's Participation in the WMT19 Translation Quality Estimation Shared Task. arXiv preprint arXiv:1907.10352. DOI : 10.18653/v1/W19-5406
- H. Kim, J. H. Lim, H. K. Kim & S. H. Na. (2019). QE BERT: Bilingual BERT using multi-task learning for neural quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2). (pp. 85-89). DOI : 10.18653/v1/W19-5407
- T. Ranasinghe, C. Orasan & R. Mitkov. (2020). TransQuest at WMT2020: Sentence-Level Direct Assessment. arXiv preprint arXiv:2010.05318.
- M. Wang et al. (2020). HW-TSC's participation at WMT 2020 quality estimation shared task. In Proceedings of the Fifth Conference on Machine Translation. (pp. 1056-1061).
- H. Wu et al. (2020). Tencent submission for WMT20 quality estimation shared task. In Proceedings of the Fifth Conference on Machine Translation. (pp. 1062-1067).
- M. Snover, B. Dorr, R. Schwartz, L. Micciulla & J. Makhoul. (2006). A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas. (Vol. 200, No. 6).
- G. Wenzek et al. (2019). CCNet: Extracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359.
- T. Pires, E. Schlinger & D. Garrette. (2019). How multilingual is multilingual BERT? arXiv preprint arXiv:1906.01502. DOI : 10.18653/v1/P19-1493
- M. Lewis et al. (2019). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. DOI : 10.18653/v1/2020.acl-main.703
- T. Wolf et al. (2019). HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
- C. Park & H. Lim. (2020). A study on the performance improvement of machine translation using public Korean-English parallel corpus. Journal of Digital Convergence, 18(6), 271-277. DOI : 10.14400/JDC.2020.18.6.271
- C. Park, Y. Yang, K. Park & H. Lim. (2020). Decoding strategies for improving low-resource machine translation. Electronics, 9(10), 1562. DOI : 10.3390/electronics9101562