Title/Summary/Keyword: Model Translation


Simultaneous neural machine translation with a reinforced attention mechanism

  • Lee, YoHan;Shin, JongHun;Kim, YoungKil
    • ETRI Journal, v.43 no.5, pp.775-786, 2021
  • To translate in real time, a simultaneous translation system must determine when to stop reading source tokens and generate target tokens corresponding to the partial source sentence read up to that point. However, conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is complete before computing the alignment between source and target tokens. To address this issue, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to jointly train the stopping criterion and a partial translation model. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to address latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, the two modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality and comparable latency relative to previous models.
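The abstract does not give the reward function itself; the following is a minimal sketch of how a joint quality-latency reward for a simultaneous translation agent might look, assuming a sentence-level quality score and an average-lagging-style latency penalty. The function names and the weight `alpha` are illustrative, not from the paper.

```python
# Hypothetical sketch of a joint reward balancing translation quality and
# latency for a simultaneous NMT agent; the weighting and latency metric
# are illustrative assumptions, not the paper's actual formulation.

def average_lagging(read_steps, target_len, source_len):
    """Average-lagging-style latency: mean number of source tokens read
    before emitting each target token, relative to an ideal read ratio."""
    ratio = source_len / target_len
    return sum(g - t * ratio for t, g in enumerate(read_steps)) / target_len

def joint_reward(quality, read_steps, target_len, source_len, alpha=0.5):
    """Combine a quality score (e.g., sentence-level BLEU of the partial
    hypothesis) with a latency penalty so both modules train jointly."""
    latency = average_lagging(read_steps, target_len, source_len)
    return quality - alpha * max(latency, 0.0)

# Example: 4 target tokens emitted after reading 2, 3, 5, 6 source tokens
# of a 6-token source sentence, with quality 0.8.
print(joint_reward(0.8, [2, 3, 5, 6], target_len=4, source_len=6))
```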

A Statistical Model for Choosing the Best Translation of Prepositions (통계 정보를 이용한 전치사 최적 번역어 결정 모델)

  • 심광섭
    • Language and Information, v.8 no.1, pp.101-116, 2004
  • This paper proposes a statistical model for the translation of prepositions in English-Korean machine translation. In the proposed model, statistical information acquired from unlabeled Korean corpora is used to choose the best translation among several possible candidates. Such information includes functional word-verb co-occurrence information, functional word-verb distance information, and noun-postposition co-occurrence information. The model was evaluated on 443 sentences, each containing a prepositional phrase, and attained an accuracy of 71.3%.
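As a rough illustration of this kind of statistics-based selection, the sketch below scores candidate Korean postpositions for an English preposition by pointwise mutual information over postposition-verb co-occurrence counts; the counts, candidates, and PMI scoring are stand-ins, not the paper's actual data or formula.

```python
# Illustrative sketch of choosing the best Korean postposition for an
# English preposition from corpus statistics; all counts are made up.

import math

# Hypothetical (postposition, verb) co-occurrence counts collected from
# an unlabeled Korean corpus.
cooc = {("에", "가다"): 1200, ("로", "가다"): 800, ("에서", "가다"): 150}
verb_count = {"가다": 5000}
postp_count = {"에": 40000, "로": 25000, "에서": 30000}
total = 1_000_000

def pmi(postp, verb):
    """Pointwise mutual information between a postposition and a verb,
    used as the association score for candidate translations."""
    joint = cooc.get((postp, verb), 0.5) / total   # 0.5 = simple smoothing
    return math.log(joint / ((postp_count[postp] / total)
                             * (verb_count[verb] / total)))

# Choose the translation of "to" in "go to school", given the verb 가다.
candidates = ["에", "로", "에서"]
best = max(candidates, key=lambda p: pmi(p, "가다"))
print(best)
```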


Retrieval Model Based on Word Translation Probabilities and the Degree of Association of Query Concept (어휘 번역확률과 질의개념연관도를 반영한 검색 모델)

  • Kim, Jun-Gil;Lee, Kyung-Soon
    • The KIPS Transactions:PartB, v.19B no.3, pp.183-188, 2012
  • One of the major challenges for retrieval performance is the word mismatch between users' queries and documents in information retrieval. To solve the word mismatch problem, we propose a retrieval model that incorporates the degree of association of the query concept and word translation probabilities into a translation-based model. The word translation probabilities are calculated from pairs consisting of a sentence and its succeeding sentence. To validate the proposed method, we experimented on the TREC AP test collection. The experimental results show that the proposed model achieved a significant improvement over the language model and outperformed the translation-based language model.
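A minimal sketch of translation-based language-model scoring is shown below. In the paper the translation probabilities P(w|u) are estimated from pairs of a sentence and its succeeding sentence; the probabilities here are toy values.

```python
# Sketch of scoring a document under a translation language model: each
# query word is generated either by "translating" a document word or by
# a smoothing collection model. Probabilities below are illustrative.

import math
from collections import Counter

def score(query, doc, trans_prob, collection_prob, lam=0.5):
    """log P(query | doc): for each query word w, mix the translation
    probability sum over document words u with the collection model."""
    tf = Counter(doc)
    dlen = len(doc)
    logp = 0.0
    for w in query:
        p_translate = sum(trans_prob.get((w, u), 0.0) * tf[u] / dlen
                          for u in tf)
        logp += math.log((1 - lam) * p_translate
                         + lam * collection_prob.get(w, 1e-6))
    return logp

# Toy example: "car" in the query can be generated from "automobile",
# easing the word mismatch between query and document.
trans_prob = {("car", "automobile"): 0.3, ("car", "car"): 0.6}
collection_prob = {"car": 0.001, "fast": 0.002}
doc = ["the", "automobile", "was", "fast"]
print(score(["car", "fast"], doc, trans_prob, collection_prob))
```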

Optimized Chinese Pronunciation Prediction by Component-Based Statistical Machine Translation

  • Zhu, Shunle
    • Journal of Information Processing Systems, v.17 no.1, pp.203-212, 2021
  • To eliminate the ambiguities in existing methods and simplify Chinese pronunciation learning, we propose a model that predicts the pronunciation of Chinese characters automatically. The proposed model relies on a statistical machine translation (SMT) framework. In particular, we treat the components of Chinese characters as the basic unit and cast pronunciation prediction as a machine translation procedure (the component sequence as the source sentence and the pronunciation, pinyin, as the target sentence). In addition to traditional features such as bidirectional word translation and an n-gram language model, we implement a component-similarity feature to handle typos that arise in practical use. We incorporate these features into a log-linear model. The experimental results show that our approach significantly outperforms the baseline models.
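The following sketch shows the log-linear combination the abstract describes: each candidate pinyin is scored as a weighted sum of feature values, and the highest-scoring candidate wins. The weights and feature values are placeholders, since the abstract does not report the tuned values.

```python
# Minimal sketch of log-linear candidate scoring for pronunciation
# prediction; feature names follow the abstract, numbers are invented.

import math

def loglinear_score(features, weights):
    """score(e | f) = sum_i w_i * h_i(e, f); the best candidate pinyin
    maximizes this weighted feature sum."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature values for two candidate pronunciations of one
# character: forward/backward translation, language model, similarity.
weights = {"fwd_trans": 1.0, "bwd_trans": 1.0, "lm": 0.8, "comp_sim": 0.5}
candidates = {
    "zhu4": {"fwd_trans": math.log(0.6), "bwd_trans": math.log(0.5),
             "lm": math.log(0.2), "comp_sim": 0.9},
    "shu4": {"fwd_trans": math.log(0.1), "bwd_trans": math.log(0.2),
             "lm": math.log(0.3), "comp_sim": 0.4},
}
best = max(candidates, key=lambda c: loglinear_score(candidates[c], weights))
print(best)
```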

English-Korean speech translation corpus (EnKoST-C): Construction procedure and evaluation results

  • Jeong-Uk Bang;Joon-Gyu Maeng;Jun Park;Seung Yun;Sang-Hun Kim
    • ETRI Journal, v.45 no.1, pp.18-27, 2023
  • We present an English-Korean speech translation corpus, named EnKoST-C. End-to-end model training for speech translation tasks often suffers from a lack of parallel data, such as speech data in the source language and equivalent text data in the target language. Most available public speech translation corpora were developed for European languages, and there is currently no public corpus for English-Korean end-to-end speech translation. Thus, we created EnKoST-C centered on TED Talks. In this process, we enhanced the sentence alignment approach using subtitle time information and bilingual sentence-embedding information. As a result, we built a 559-h English-Korean speech translation corpus. The proposed sentence alignment approach showed excellent performance, with an f-measure of 0.96. We also report the baseline performance of an English-Korean speech translation model trained on EnKoST-C. EnKoST-C is freely available on a Korean government open data hub site.
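A plausible shape for the described alignment scoring, combining subtitle time overlap with bilingual sentence-embedding similarity, is sketched below; the 50/50 weighting and the stand-in embeddings are assumptions, not the paper's actual procedure.

```python
# Sketch of scoring a candidate English-Korean subtitle pair by mixing
# temporal overlap with embedding similarity; weights are assumptions.

def time_overlap(a, b):
    """Temporal intersection-over-union of two (start, end) spans."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(y * y for y in v) ** 0.5
    return dot / (nu * nv)

def alignment_score(span_en, span_ko, emb_en, emb_ko, w_time=0.5):
    """Candidate pairs above a threshold on this score would be kept."""
    return (w_time * time_overlap(span_en, span_ko)
            + (1 - w_time) * cosine(emb_en, emb_ko))

# Toy example with 2-d stand-in embeddings; a real system would use a
# bilingual sentence encoder (e.g., LaBSE or LASER).
print(alignment_score((10.0, 14.0), (10.5, 14.5), [0.9, 0.1], [0.8, 0.2]))
```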

Deep Learning-based Korean Dialect Machine Translation Research Considering Linguistics Features and Service (언어적 특성과 서비스를 고려한 딥러닝 기반 한국어 방언 기계번역 연구)

  • Lim, Sangbeom;Park, Chanjun;Yang, Yeongwook
    • Journal of the Korea Convergence Society, v.13 no.2, pp.21-29, 2022
  • Given the importance of dialect research, preservation, and communication, this paper presents a study on machine translation of Korean dialects for dialect users who might otherwise be marginalized. The AIHUB dialect data, distributed by top-level administrative district, was used. We propose a many-to-one dialect machine translation model that improves the efficiency of model deployment, and we apply a copy mechanism to improve translation performance. This paper evaluates one-to-one and many-to-one models by BLEU score and analyzes the performance of the many-to-one model on Korean dialects from a linguistic perspective. The proposed methodology improved the performance of one-to-one machine translation, and the many-to-one model achieved significantly high performance.
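The copy mechanism is not specified in the abstract; the sketch below follows the common pointer-generator formulation, where the output distribution mixes the decoder's vocabulary distribution with an attention-weighted copy distribution over the source tokens. All numbers are illustrative.

```python
# Minimal sketch of a pointer-generator-style copy mechanism: useful for
# dialect-to-standard translation because many surface tokens can be
# copied verbatim from the source. Values below are illustrative.

def copy_mix(p_vocab, attention, src_tokens, p_gen):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention mass
    on source positions holding w."""
    p_final = {w: p_gen * p for w, p in p_vocab.items()}
    for attn, tok in zip(attention, src_tokens):
        p_final[tok] = p_final.get(tok, 0.0) + (1 - p_gen) * attn
    return p_final

# Toy step: the Jeju dialect word 혼저 is out-of-vocabulary for the
# decoder but can still be copied straight from the source sentence.
p_vocab = {"어서": 0.6, "와요": 0.3}
attention = [0.7, 0.3]             # over the source tokens below
src_tokens = ["혼저", "옵서예"]
print(copy_mix(p_vocab, attention, src_tokens, p_gen=0.8))
```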

A System Model for Storage Independent Use of SPARQL-to-SQL Translation Algorithm (SPARQL-to-SQL 변환 알고리즘의 저장소 독립적 활용을 위한 시스템 모델)

  • Son, Ji-Seong;Jeong, Dong-Won;Baik, Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters, v.14 no.5, pp.467-471, 2008
  • With active research on Web ontology, various storages and query languages have been developed to store Web ontologies. As SPARQL usage increases and most storages are based on relational databases, the need to develop SPARQL-to-SQL translation algorithms has become an issue. Even though several translation algorithms have been proposed, the following problems remain: they do not fully support the SPARQL clauses, and they depend on a specific storage model. This paper proposes a new model that allows a specific translation algorithm to be used independently of the underlying storage.
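To make the translation step concrete, here is an illustrative sketch (not the paper's algorithm) of compiling a SPARQL basic graph pattern into SQL against a canonical triple table; in a storage-independent design, a per-storage adapter would then map this canonical schema onto each storage's actual tables and columns.

```python
# Illustrative SPARQL-to-SQL compilation over an assumed canonical
# triples(s, p, o) table; schema names are hypothetical.

def bgp_to_sql(patterns, table="triples", cols=("s", "p", "o")):
    """Each triple pattern becomes a self-join on the triple table;
    shared variables become join conditions, constants become filters,
    and each newly seen variable is projected."""
    selects, froms, wheres, bound = [], [], [], {}
    for i, (s, p, o) in enumerate(patterns):
        alias = f"t{i}"
        froms.append(f"{table} {alias}")
        for col, term in zip(cols, (s, p, o)):
            if term.startswith("?"):                   # variable
                if term in bound:
                    wheres.append(f"{alias}.{col} = {bound[term]}")
                else:
                    bound[term] = f"{alias}.{col}"
                    selects.append(f"{bound[term]} AS {term[1:]}")
            else:                                      # constant IRI
                wheres.append(f"{alias}.{col} = '{term}'")
    return (f"SELECT {', '.join(selects)} FROM {', '.join(froms)}"
            + (f" WHERE {' AND '.join(wheres)}" if wheres else ""))

# SPARQL: SELECT ?name WHERE { ?x foaf:knows ?y . ?y foaf:name ?name }
print(bgp_to_sql([("?x", "foaf:knows", "?y"),
                  ("?y", "foaf:name", "?name")]))
```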

English-to-Korean Machine Translation and the Problem of Anaphora Resolution (영한기계번역과 대용어 조응문제에 대한 고찰)

  • Ruslan Mitkov
    • Proceedings of the Acoustical Society of Korea Conference, 1994.06c, pp.351-357, 1994
  • At least two projects for English-to-Korean translation have already been in progress for the last few years, but so far no attention has been paid to the problem of resolving pronominal reference; instead, a default pronoun translation has been used. In this paper we argue that pronouns cannot be handled trivially in English-to-Korean translation, and that one cannot bypass the task of resolving anaphoric reference when aiming at a good and natural translation. In addition, we propose lexical transfer rules for English-to-Korean anaphor translation and outline an anaphora resolution model for an English-to-Korean MT system in operation.
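As a toy illustration of a lexical transfer rule of the kind proposed, the sketch below renders a resolved English pronoun in Korean either as a zero pronoun (omitted, since Korean freely drops topical subjects) or as a repetition of the antecedent noun with a postposition. The rule conditions and wordforms are hypothetical, and postposition allomorphy (는/은, 를/을) is ignored for brevity.

```python
# Hypothetical lexical transfer rule for a resolved English pronoun; a
# fuller rule set would also condition on the pronoun's person/number.

def transfer_pronoun(pronoun, antecedent_ko, role, topic_continuity):
    """Pick a Korean realization for a resolved English pronoun."""
    if topic_continuity and role == "subject":
        return ""                          # zero anaphora: drop entirely
    marker = {"subject": "는", "object": "를"}[role]  # allomorphy ignored
    return antecedent_ko + marker          # repeat the antecedent noun

# "Minsu came home. He was tired." -> antecedent of "he" is 민수.
print(repr(transfer_pronoun("he", "민수", "subject", topic_continuity=True)))
print(transfer_pronoun("he", "민수", "subject", topic_continuity=False))
```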


Translation: Mapping and Evaluation (번역: 대응과 평가)

  • 장석진
    • Language and Information, v.2 no.1, pp.1-41, 1998
  • Evaluation of multilingual translation fundamentally involves measuring meaning equivalences between the formally mapped discourses/texts of the SL (source language) and TL (target language), both represented by a metalanguage called IL (interlingua). Unlike the usual uni-directional MT (machine translation) model (e.g., SL $\rightarrow$ analysis $\rightarrow$ transfer $\rightarrow$ generation $\rightarrow$ TL), a bi-directional (by 'negotiation') model (i.e., SL $\rightarrow$ IL/S $\leftrightarrow$ IL $\leftrightarrow$ IL/T $\leftarrow$ TL) is proposed here for the purpose of evaluating multilingual, not merely bilingual, translation. The IL, as conceived of in this study, is an English-based predicate logic represented in the framework of MRS (minimal recursion semantics), an MT-oriented offshoot of HPSG (Head-driven Phrase Structure Grammar). In addition, a list of semantic and pragmatic checkpoints is set up, some being optional depending on the kind and use of the translation, so as to make the evaluation of translation fine-grained by computing the matching or mismatching of such checkpoints.
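The checkpoint-based scoring could be realized along the following lines: a weighted proportion of source-side checkpoints matched on the target side, with optional checkpoints down-weighted. The checkpoint names and weights are illustrative assumptions.

```python
# Sketch of fine-grained translation evaluation by checkpoint matching;
# checkpoint names and the optional-checkpoint weight are invented.

def checkpoint_score(src_checks, tgt_checks, optional=frozenset(),
                     w_optional=0.5):
    """Weighted proportion of source checkpoints matched on the target
    side; optional checkpoints contribute with reduced weight."""
    total = matched = 0.0
    for name, value in src_checks.items():
        w = w_optional if name in optional else 1.0
        total += w
        if tgt_checks.get(name) == value:
            matched += w
    return matched / total if total else 0.0

src = {"predicate": "give", "agent": "john", "recipient": "mary",
       "tense": "past", "honorific": "plain"}
tgt = {"predicate": "give", "agent": "john", "recipient": "mary",
       "tense": "past", "honorific": "formal"}
print(checkpoint_score(src, tgt, optional={"honorific"}))
```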


Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization (Self-Attention 시각화를 사용한 기계번역 서비스의 번역 오류 요인 설명)

  • Zhang, Chenglong;Ahn, Hyunchul
    • Journal of Information Technology Services, v.21 no.2, pp.85-95, 2022
  • This study analyzed the translation error factors of machine translation services such as Naver Papago and Google Translate through Self-Attention path visualization. Self-Attention is a key method of the Transformer and BERT NLP models and has recently been widely used in machine translation. We propose a method to explain the translation error factors of machine translation algorithms by comparing the Self-Attention paths of an ST (source text) and an ST' (transformed ST) whose meaning is unchanged but whose translation output is more accurate. Through this method, it is possible to gain the explainability needed to analyze a machine translation algorithm's internal process, which is otherwise invisible, like a black box. In our experiment, the factors that caused translation errors could be explored by analyzing the differences in the attention paths of key words. The study used the XLM-RoBERTa multilingual NLP model provided by exBERT for Self-Attention visualization and applied it to two examples of Korean-Chinese and Korean-English translation.
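A minimal scripted sketch of the comparison, assuming the Hugging Face transformers package and the xlm-roberta-base checkpoint: extract a keyword's attention distribution for ST and for ST' and measure how far apart they are. The paper itself uses exBERT's interactive visualization, so this batch comparison is an approximation, and the truncated distance is a crude proxy for a path difference.

```python
# Compare a keyword's self-attention between ST and a meaning-preserving
# ST'; model choice and the distance measure are assumptions.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_attentions=True)
model.eval()

def keyword_attention(text, keyword, layer=-1):
    """Attention (averaged over heads) that the keyword's first subword
    pays to every token in the chosen layer."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        attentions = model(**inputs).attentions     # tuple over layers
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    pos = next(i for i, t in enumerate(tokens) if keyword in t)
    return attentions[layer][0].mean(dim=0)[pos]    # shape: (seq_len,)

# ST and ST' with the ambiguous keyword 배 (pear/boat/belly); a large
# shift between the two attention paths flags a candidate error factor.
a = keyword_attention("그는 배를 먹었다.", "배")       # "He ate a pear."
b = keyword_attention("그는 배 하나를 먹었다.", "배")  # "He ate one pear."
n = min(a.shape[0], b.shape[0])
print(torch.dist(a[:n], b[:n]).item())              # crude path distance
```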