• Title/Summary/Keyword: corpus-based translation studies


Translating English By-Phrase Passives into Korean: A Parallel Corpus Analysis (영한 병렬 코퍼스에 나타난 영어 수동문의 한국어 번역)

  • Lee, Seung-Ah
    • Journal of English Language & Literature
    • /
    • v.56 no.5
    • /
    • pp.871-905
    • /
    • 2010
  • This paper is motivated by Watanabe's (2001) observation that English by-phrase passives are sometimes translated into Japanese object topicalization constructions. That is, an English sentence in the passive voice may be translated into the active voice with the logical object topicalized. A number of scholars, including Chomsky (1981) and Baker (1992), have remarked that languages have various ways of avoiding focus on the logical subject. The aim of the present study is to examine the translation equivalents of English by-phrase passives in an English-Korean parallel corpus compiled by the author. A small sample of articles from Newsweek magazine and its published Korean translation reveals that there are indeed many ways to translate English by-phrase passives, including object topicalization (12.5%). Among the 64 translated sentences analyzed and classified, 12 (18.8%) were problematic in terms of agent defocusing, the primary function of passives. Of these 12 instances, five cases were identified where an alternative translation would be more suitable. The results suggest that the functional characteristics of English by-phrase passives should be highlighted in translator training as well as language teaching.
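
The classification described above is a manual corpus annotation; as a rough illustration, the tallying step might look like the following Python sketch, where the example sentence pairs and the strategy labels are invented for illustration and are not taken from the paper's data.

```python
# Minimal sketch: tallying manually assigned translation-strategy labels
# for English by-phrase passives in an aligned parallel corpus.
# The label set and the example pairs are illustrative assumptions,
# not the paper's actual annotation scheme or data.
from collections import Counter

# Each aligned pair: (English source, Korean translation, annotated strategy)
annotated_pairs = [
    ("The bill was passed by the senate.", "상원은 그 법안을 통과시켰다.", "active_voice"),
    ("The city was destroyed by the storm.", "도시는 폭풍에 의해 파괴되었다.", "passive"),
    ("The novel was written by a teacher.", "그 소설은 교사가 쓴 것이다.", "object_topicalization"),
]

def strategy_distribution(pairs):
    """Return each strategy's share of the total translated sentences."""
    counts = Counter(strategy for _, _, strategy in pairs)
    total = sum(counts.values())
    return {strategy: count / total for strategy, count in counts.items()}

print(strategy_distribution(annotated_pairs))
```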

Filter-mBART Based Neural Machine Translation Using Parallel Corpus Filtering (병렬 말뭉치 필터링을 적용한 Filter-mBART기반 기계번역 연구)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Park, JeongBae;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.5
    • /
    • pp.1-7
    • /
    • 2021
  • In the latest trend of machine translation research, a model is pre-trained on a large monolingual corpus and then fine-tuned with a parallel corpus. Although many studies tend to increase the amount of data used in the pre-training stage, it is hard to say that the amount of data must be increased to improve machine translation performance. In this study, through an experiment based on the mBART model using parallel corpus filtering, we show that high-quality data can yield better machine translation performance even with a smaller amount of data. We argue that it is important to consider the quality of data rather than the quantity, and our findings can serve as a guideline for building a training corpus.
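
The abstract does not spell out the filtering rules, so the following is only a minimal sketch of what rule-based parallel corpus filtering typically looks like (length bounds, length-ratio checks, copy and duplicate removal); the thresholds and the `filter_parallel_corpus` helper are assumptions, not the paper's method.

```python
# Minimal sketch of heuristic parallel corpus filtering; thresholds are
# illustrative assumptions, not the paper's actual criteria.
def filter_parallel_corpus(pairs, max_len_ratio=1.5, min_tokens=1, max_tokens=128):
    """Keep only source/target pairs that pass basic quality heuristics."""
    seen = set()
    kept = []
    for src, tgt in pairs:
        s_len, t_len = len(src.split()), len(tgt.split())
        if not (min_tokens <= s_len <= max_tokens and min_tokens <= t_len <= max_tokens):
            continue  # drop empty or overly long sentences
        if max(s_len, t_len) / min(s_len, t_len) > max_len_ratio:
            continue  # drop pairs with an implausible length mismatch
        if src.strip() == tgt.strip():
            continue  # drop untranslated (copied) pairs
        if (src, tgt) in seen:
            continue  # drop exact duplicates
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept
```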

A study on performance improvement considering the balance between corpus in Neural Machine Translation (인공신경망 기계번역에서 말뭉치 간의 균형성을 고려한 성능 향상 연구)

  • Park, Chanjun;Park, Kinam;Moon, Hyeonseok;Eo, Sugyeong;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.5
    • /
    • pp.23-29
    • /
    • 2021
  • Recent deep learning-based natural language processing studies improve performance by training on large amounts of data from various sources together. However, combining data from various sources into one may actually prevent performance improvement. In the case of machine translation, data deviation occurs due to differences in translation approach (liberal, literal), style (colloquial, written, formal, etc.), domain, and so on, and combining these corpora into one for training can adversely affect performance. In this paper, we propose a new Corpus Weight Balance (CWB) method that considers the balance between parallel corpora in machine translation. In our experiments, the model trained with the balanced corpus showed better performance than the existing model. In addition, we propose a corpus construction process that can build a high-quality parallel corpus even from a monolingual corpus, enabling coexistence with the human translation market.
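
The abstract does not define how CWB computes its weights, so the sketch below illustrates one common way to balance sub-corpora of different sizes: temperature-based sampling, as used in multilingual NMT. The `balanced_sample` helper and its parameters are assumptions, not the paper's method.

```python
# Minimal sketch of balancing sub-corpora by sampling weight. Temperature
# smoothing (temperature < 1 flattens the size distribution) is assumed
# here for illustration; the paper's CWB weighting may differ.
import random

def balanced_sample(corpora, n_samples, temperature=0.5):
    """Sample sentence pairs so small sub-corpora are not drowned out.

    corpora: dict mapping corpus name -> list of (src, tgt) pairs.
    """
    weights = {name: len(pairs) ** temperature for name, pairs in corpora.items()}
    total = sum(weights.values())
    names = list(corpora)
    probs = [weights[name] / total for name in names]
    sampled = []
    for _ in range(n_samples):
        name = random.choices(names, weights=probs)[0]  # pick a sub-corpus
        sampled.append(random.choice(corpora[name]))    # then a pair from it
    return sampled
```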

A Comparative Study on Korean Connective Morpheme '-myenseo' to the Chinese expression - based on Korean-Chinese parallel corpus (한국어 연결어미 '-면서'와 중국어 대응표현의 대조연구 -한·중 병렬 말뭉치를 기반으로)

  • YI, CHAO
    • Cross-Cultural Studies
    • /
    • v.37
    • /
    • pp.309-334
    • /
    • 2014
  • This study is based on a Korean-Chinese parallel corpus and contrasts the Korean connective morpheme '-myenseo' with its Chinese counterpart expressions. Korean learners often struggle with Korean connective morphemes, especially when there is a lexical gap with their mother tongue. '-myenseo' is one of the most frequently used Korean connective morphemes and is usually contrasted with Chinese coordinating conjunctions; according to the corpus, however, the Chinese expressions corresponding to '-myenseo' go well beyond coordinating conjunctions. This study can therefore make '-myenseo' easier for Chinese learners of Korean, because a variety of related Chinese expressions are found in the parallel corpus. The study first discusses the semantic features and syntactic characteristics of '-myenseo': its significant semantic features are 'simultaneity' and 'conflict', and usage examples are employed to analyze its specific uses. Its syntactic characteristics are then analyzed in terms of subject constraints, predicate constraints, temporal constraints, mood constraints, and negation constraints, and summarized in a table. The most important part of this study is Chapter 4, which contrasts '-myenseo' with its Chinese counterpart expressions by analyzing the Korean-Chinese parallel corpus. The analysis tabulates the frequency of each Chinese expression corresponding to '-myenseo', and the table shows that the most common counterpart is the non-marker pattern: whereas clause connection in Korean can use a connective morpheme as an explicit linguistic marker, Chinese often connects clauses through their intrinsic logical relationships. The conclusion of this chapter is that '-myenseo' corresponds to Chinese conjunctions, other expressions, non-marker patterns, and liberal translation patterns, a wider range than the conjunctions identified in previous work. The final chapter summarizes the study and discusses its limitations and directions for future research.
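
As a rough illustration of the frequency tabulation mentioned above, the following sketch counts hand-assigned counterpart-pattern labels; the label names and counts are invented for illustration and do not reproduce the paper's table.

```python
# Minimal sketch of the frequency table described above, assuming each
# aligned pair has been hand-annotated with the Chinese counterpart
# pattern of '-myenseo'; labels and counts are illustrative only.
from collections import Counter

counterpart_labels = [
    "non_marker", "non_marker", "coordinating_conjunction",
    "non_marker", "liberal_translation", "expression",
]

table = Counter(counterpart_labels)
for pattern, freq in table.most_common():
    print(f"{pattern}: {freq} ({freq / len(counterpart_labels):.1%})")
```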

Korean Text to Gloss: Self-Supervised Learning approach

  • Thanh-Vu Dang;Gwang-hyun Yu;Ji-yong Kim;Young-hwan Park;Chil-woo Lee;Jin-Young Kim
    • Smart Media Journal
    • /
    • v.12 no.1
    • /
    • pp.32-46
    • /
    • 2023
  • Natural Language Processing (NLP) has grown tremendously in recent years. Typically, bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. In contrast, few studies have focused on translating between spoken and sign languages, especially non-English languages. Prior works on Sign Language Translation (SLT) have shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, including 3828 pairs of Korean sentences and their corresponding sign glosses used in museum-commentary contexts. In addition, we propose a translation framework based on self-supervised learning, where the pretext task is a text-to-text mapping from a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Using self-supervised learning helps to overcome the shortage of sign language data. Experimental results show that our proposed model outperforms a baseline BERT model by 6.22%.
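
The back-translation pretext task can be pictured with the following sketch, assuming round-trip translation functions are available (they are passed in as callables here); the helper names are illustrative and not from the paper.

```python
# Minimal sketch of building (noisy input -> clean output) pairs for a
# back-translation pretext task; the translation callables are assumed
# to exist and are not part of the paper's published code.
def round_trip(sentence, translate_ko_en, translate_en_ko):
    """Produce a paraphrase of a Korean sentence via an English pivot."""
    return translate_en_ko(translate_ko_en(sentence))

def build_pretext_pairs(korean_sentences, translate_ko_en, translate_en_ko):
    """Pair each back-translated sentence with its original as a
    text-to-text pretraining example."""
    pairs = []
    for sent in korean_sentences:
        paraphrase = round_trip(sent, translate_ko_en, translate_en_ko)
        if paraphrase != sent:  # keep only pairs that add variation
            pairs.append((paraphrase, sent))
    return pairs
```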

A Corpus-based Study of Translation Universals in English Translations of Korean Newspaper Texts (한국 신문의 영어 번역에 나타난 번역 보편소의 코퍼스 기반 분석)

  • Goh, Gwang-Yoon;Lee, Younghee (Cheri)
    • Cross-Cultural Studies
    • /
    • v.45
    • /
    • pp.109-143
    • /
    • 2016
  • This article examines distinctive linguistic shifts of translational English in an effort to verify the validity of the translation universals hypotheses, including simplification, explicitation, normalization, and leveling-out, which have been the most heavily explored to date. A large-scale study involving comparable corpora of translated and non-translated English newspaper texts was carried out to typify particular linguistic attributes inherent in translated texts. The main findings are as follows. First, by employing the parameters of STTR, top-frequency words, and mean sentence lengths, translational instances of simplification were detected across the translated English newspaper corpora. In contrast, the proportion of function words produced contrary results, which in turn suggests that this feature might not constitute an effective test of the hypothesis. Second, it was found that the use of connectives was more salient in original English newspaper texts than in translated English texts, which is incompatible with the explicitation hypothesis. Third, as an indicator of translational normalization, lexical bundles were found to be more pervasive in translated texts than in non-translated texts, which is expected under and therefore supports the normalization hypothesis. Finally, the standard deviations of both STTR and mean sentence lengths turned out to be higher in translated texts, indicating that the translated English newspaper texts were less leveled out within the same corpus group, which is the opposite of what the leveling-out hypothesis postulates. Overall, the results suggest that not all four hypotheses qualify for the label of translation universals, or at least that some translational predictors are not feasible enough to evaluate the effectiveness of the translation universals hypotheses.
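
Two of the simplification indicators mentioned above, STTR and mean sentence length, are straightforward to compute; the sketch below assumes whitespace tokenization and a fixed chunk size, which may differ from the paper's settings.

```python
# Minimal sketch of two simplification indicators: standardized type-token
# ratio (STTR, averaged over fixed-size chunks) and mean sentence length.
# Whitespace tokenization is a simplifying assumption.
def sttr(tokens, chunk_size=1000):
    """Average type-token ratio over consecutive full chunks of equal size."""
    ratios = []
    for i in range(0, len(tokens) - chunk_size + 1, chunk_size):
        chunk = tokens[i:i + chunk_size]
        ratios.append(len(set(chunk)) / chunk_size)
    return sum(ratios) / len(ratios) if ratios else 0.0

def mean_sentence_length(sentences):
    """Average number of whitespace tokens per sentence."""
    lengths = [len(s.split()) for s in sentences]
    return sum(lengths) / len(lengths) if lengths else 0.0
```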

A Study on the Performance Analysis of Entity Name Recognition Techniques Using Korean Patent Literature

  • Gim, Jangwon
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.139-151
    • /
    • 2020
  • Entity name recognition is a part of information extraction that extracts entity names from documents and classifies the types of the extracted names. Entity name recognition technologies are widely used in natural language processing, such as information retrieval, machine translation, and question answering systems. Various deep learning-based models exist to improve entity name recognition performance, but studies that compare and analyze these models on Korean data are scarce. In this paper, we compare and analyze the performance of CRF, LSTM-CRF, BiLSTM-CRF, and BERT, which are actively used to identify entity names, using Korean data. We also evaluate whether embedding models, which are widely used in recent natural language processing tasks, affect the performance of entity name recognition models. Experiments on patent data and a Korean corpus confirmed that BiLSTM-CRF with FastText embeddings showed the highest performance.
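
Models like those compared above are typically scored with entity-level F1; the sketch below shows that metric under the assumption that gold and predicted entities have already been extracted as (start, end, type) spans.

```python
# Minimal sketch of entity-level precision/recall/F1 for NER evaluation;
# spans are assumed to be (start, end, type) tuples extracted from gold
# and predicted label sequences.
def entity_f1(gold_spans, pred_spans):
    """Exact-match entity F1: a prediction counts only if the span
    boundaries and the entity type both match the gold annotation."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(entity_f1({(0, 2, "ORG"), (5, 6, "PER")}, {(0, 2, "ORG"), (5, 7, "PER")}))
```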

Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.125-132
    • /
    • 2022
  • Sentence compression is a natural language processing task that generates concise sentences preserving the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies utilized human-defined linguistic rules. Furthermore, since sequence-to-sequence models perform well on various natural language processing tasks, such as machine translation, there have been studies that utilize them for sentence compression. However, rule-based studies require all rules to be defined by humans, and sequence-to-sequence approaches require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, has been proposed. Because Deleter compresses sentences using a perplexity-based score computed over BERT, no linguistic rules or parallel datasets are required. However, because Deleter considers only perplexity, it does not reflect the linguistic information of the words in a sentence when compressing it. Furthermore, since the data used to pre-train BERT are far from compressed sentences, this can lead to incorrect sentence compression. To address these problems, this paper proposes a method to quantify the importance of linguistic information and reflect it in the perplexity-based sentence scoring. Furthermore, by fine-tuning BERT with a corpus of news articles, which often contain proper nouns and omit unnecessary modifiers, we allow BERT to measure a perplexity appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the sentence compression performance of sentence-scoring-based models can be improved by the proposed method.
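
The deletion-by-scoring idea can be sketched as a greedy search over single-token deletions; the scorer is passed in as a callable (in Deleter it would be a BERT-based perplexity, replaced here by a toy stand-in), so the code below illustrates only the search loop, not the paper's model.

```python
# Minimal sketch of deletion-based compression driven by a sentence scorer.
# The scorer is a callable returning a cost to minimize; a real system
# would plug in a BERT perplexity here, which is assumed, not implemented.
def greedy_compress(tokens, score, target_len):
    """Repeatedly delete the single token whose removal yields the
    lowest-cost remaining sentence, until target_len is reached."""
    tokens = list(tokens)
    while len(tokens) > target_len:
        candidates = [tokens[:i] + tokens[i + 1:] for i in range(len(tokens))]
        tokens = min(candidates, key=score)
    return tokens

def toy_score(toks):
    """Toy stand-in for perplexity: prefer keeping longer, content-like words."""
    return -sum(len(t) for t in toks) / (len(toks) or 1)

print(greedy_compress("the very large cat sat on the mat".split(), toy_score, 4))
```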

