• Title/Summary/Keyword: Model Translation


Symbolizing Numbers to Improve Neural Machine Translation (숫자 기호화를 통한 신경기계번역 성능 향상)

  • Kang, Cheongwoong;Ro, Youngheon;Kim, Jisu;Choi, Heeyoul
    • Journal of Digital Contents Society, v.19 no.6, pp.1161-1167, 2018
  • The development of machine learning has enabled machines to perform delicate tasks that previously only humans could do, and many companies have accordingly introduced machine-learning-based translators. Existing translators perform well overall but have problems with number translation: they often mistranslate numbers when the input sentence includes a large number, and the output sentence structure can change completely even if only one number in the input changes. In this paper, we first optimize a neural machine translation architecture that uses a bidirectional RNN, LSTM, and the attention mechanism, through data cleansing and by changing the dictionary size. We then implement a number-processing algorithm specialized in number translation and apply it to the neural machine translation model to solve the problems above. The paper describes the data cleansing method, an optimal dictionary size, and the number-processing algorithm, along with experimental results on translation performance measured by BLEU score.
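The number-processing idea described above, replacing numbers with placeholder symbols before translation and restoring them afterwards, can be sketched roughly as follows. The regex, token format, and function names are illustrative assumptions, not the paper's actual implementation:

```python
import re

# Assumed pattern: integers with optional comma/decimal groups (e.g. "1,234,567").
NUM_RE = re.compile(r'\d+(?:[,.]\d+)*')

def symbolize(sentence):
    """Replace each number with a placeholder token, remembering the originals."""
    numbers = NUM_RE.findall(sentence)
    for i, num in enumerate(numbers):
        sentence = sentence.replace(num, f'<NUM{i}>', 1)
    return sentence, numbers

def desymbolize(sentence, numbers):
    """Restore the original numbers into the translated sentence."""
    for i, num in enumerate(numbers):
        sentence = sentence.replace(f'<NUM{i}>', num)
    return sentence

src, nums = symbolize("The company sold 1,234,567 units in 2018.")
# src  == "The company sold <NUM0> units in <NUM1>."
# nums == ["1,234,567", "2018"]
```

Because the translator only ever sees the fixed tokens `<NUM0>`, `<NUM1>`, ..., changing a digit in the input no longer changes the symbolized sentence, which addresses the structural-instability problem the abstract mentions.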

Automatic Post Editing Research (기계번역 사후교정(Automatic Post Editing) 연구)

  • Park, Chan-Jun;Lim, Heui-Seok
    • Journal of the Korea Convergence Society, v.11 no.5, pp.1-8, 2020
  • Machine translation refers to a system in which a computer translates a source sentence into a target sentence, and it has various subfields. APE (Automatic Post Editing) is a subfield of machine translation that produces better translations by editing the output of machine translation systems; that is, it is the process of correcting errors in the translations generated by a machine translation system, much like proofreading. Rather than changing the machine translation model itself, this research field improves translation quality by correcting the system's output sentences. Since 2015, APE has been included in the WMT Shared Task, with performance evaluated by TER (Translation Error Rate). As a result, various studies on APE models have been published recently, and this paper surveys the latest research trends in the field of APE.
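TER, the evaluation metric mentioned above, is the number of word-level edits needed to turn the system output into the reference, divided by the reference length. A minimal sketch that omits TER's block-shift operation (so it reduces to word-level Levenshtein distance) might look like this; the function and variable names are illustrative:

```python
def simplified_ter(hypothesis, reference):
    """Word-level edit distance divided by reference length.

    Full TER also counts block shifts as single edits; this sketch omits them,
    so it is an upper-approximation using plain Levenshtein distance.
    """
    hyp, ref = hypothesis.split(), reference.split()
    # Standard dynamic-programming table over word positions.
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(hyp)][len(ref)] / len(ref)

# One inserted word against a 4-word reference -> 0.25
# simplified_ter("this is a big test", "this is a test")
```

Lower TER is better, which is why APE systems are judged by how much they reduce it relative to the raw MT output.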

Translation Technique of Requirement Model using Natural Language (자연어를 이용한 요구사항 모델의 번역 기법)

  • Oh, Jung-Sup;Lee, Hye-Ryun;Yim, Kang-Bin;Choi, Kyung-Hee;Jung, Ki-Hyun
    • The KIPS Transactions:PartD, v.15D no.5, pp.647-658, 2008
  • Customers' requirements written in a natural language are rewritten in a modeling language during development, and in many cases those who participate in development cannot understand requirements written in the modeling language. This paper proposes a technique that translates a requirement model written with the REED (REquirement EDitor) tool back into natural language, in order to help customers understand the requirement model. The technique consists of three phases: the 1st phase generates the IORT (Input-Output Relation Tree), the 2nd phase generates the RTT (Requirement Translation Tree), and the 3rd phase translates the RTT into natural language.

The Construction of a German-Korean Machine Translation System for Nominal Phrases (독-한 명사구 기계번역시스템의 구축)

  • Lee, Minhaeng;Choi, Sung-Kwon;Choi, Kyung-Eun
    • Language and Information, v.2 no.1, pp.79-105, 1998
  • This paper describes a German-Korean machine translation system for nominal phrases. We also pursue two subgoals. First, we reveal linguistic differences between the two languages and propose a language-informational method to overcome those differences, based on an integrated model of translation knowledge, an efficient information structure, and concordance selection. We then present statistical results from a translation experiment and its evaluation as evidence for the adequacy of our linguistic method and of the translation system itself.


Automatically Extracting Unknown Translations Using Phrase Alignment (정렬기법을 이용한 미등록 대역어의 자동 추출)

  • Kim, Jae-Hoon;Yang, Sung-Il
    • The KIPS Transactions:PartB, v.14B no.3 s.113, pp.231-240, 2007
  • In this paper, we propose an automatic extraction model for unknown translations and implement an extraction system based on it. The proposed model is a phrase-alignment model that incorporates three components: a phrase-boundary model, a language model, and a translation model. The implemented system consists of three parts: construction of parallel corpora, alignment of Korean and English words, and extraction of unknown translations. To evaluate performance, we built a reference corpus of 2,220 parallel sentences containing about 1,500 unknown translations. Through several experiments, we observed that the proposed model is very useful for extracting unknown translations. In the future, research on objective evaluation and on building high-quality parallel corpora should be carried out, and work on improving the performance of unknown translation extraction should continue.
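As a rough illustration of how co-occurrence statistics over aligned sentence pairs can suggest translation candidates (a much simpler scheme than the paper's combined phrase-boundary, language, and translation models), one can score word pairs with the Dice coefficient; all names here are assumptions for the sketch:

```python
from collections import Counter

def dice_alignments(parallel_pairs):
    """Score source-target word pairs by co-occurrence across sentence pairs.

    Dice(s, t) = 2 * count(s and t co-occur) / (count(s) + count(t)).
    High-scoring pairs absent from the bilingual dictionary are candidate
    unknown translations.
    """
    src_counts, tgt_counts, pair_counts = Counter(), Counter(), Counter()
    for src_sent, tgt_sent in parallel_pairs:
        src_words, tgt_words = set(src_sent.split()), set(tgt_sent.split())
        src_counts.update(src_words)
        tgt_counts.update(tgt_words)
        pair_counts.update((s, t) for s in src_words for t in tgt_words)
    return {(s, t): 2 * c / (src_counts[s] + tgt_counts[t])
            for (s, t), c in pair_counts.items()}

corpus = [("서울 은 크다", "Seoul is big"),
          ("나 는 서울 에 간다", "I go to Seoul")]
scores = dice_alignments(corpus)
# scores[("서울", "Seoul")] == 1.0  (always co-occur)
```

Real alignment models weigh much more evidence than this, but the sketch shows why even simple statistics pull out pairs like 서울/Seoul that appear together consistently.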

Korean Text to Gloss: Self-Supervised Learning approach

  • Thanh-Vu Dang;Gwang-hyun Yu;Ji-yong Kim;Young-hwan Park;Chil-woo Lee;Jin-Young Kim
    • Smart Media Journal, v.12 no.1, pp.32-46, 2023
  • Natural Language Processing (NLP) has grown tremendously in recent years. Bilingual and multilingual translation models have been widely deployed in machine translation and have received vast attention from the research community. By contrast, few studies have focused on translating between spoken and sign languages, especially for non-English languages. Prior work on Sign Language Translation (SLT) has shown that a mid-level sign gloss representation enhances translation performance. This study therefore presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, comprising 3,828 pairs of Korean sentences and their corresponding sign glosses used in museum-commentary contexts. In addition, we propose a translation framework based on self-supervised learning, in which the pretext task is text-to-text translation from a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Using self-supervised learning helps overcome the shortage of sign language data. In experiments, our proposed model outperforms a baseline BERT model by 6.22%.

Study on Decoding Strategies in Neural Machine Translation (인공신경망 기계번역에서 디코딩 전략에 대한 연구)

  • Seo, Jaehyung;Park, Chanjun;Eo, Sugyeong;Moon, Hyeonseok;Lim, Heuiseok
    • Journal of the Korea Convergence Society, v.12 no.11, pp.69-80, 2021
  • Neural machine translation using deep neural networks has become mainstream research, and abundant investment and study on model structures and parallel language pairs have been undertaken in pursuit of the best performance. However, most recent neural machine translation studies leave the decoding strategy to future work, and lack varied experiments and specific analysis of how to generate language that maximizes quality during decoding. In machine translation, decoding strategies optimize the search paths followed while generating translated sentences, so performance improvements are possible without model modifications or data expansion. This paper compares and analyzes the effects of decoding strategies, from classical greedy decoding to the recent Dynamic Beam Allocation (DBA), in neural machine translation using a sequence-to-sequence model.
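The contrast between greedy decoding and beam search that the paper analyzes can be illustrated with a toy next-token model: greedy decoding is beam search with width 1, and a wider beam can recover a higher-probability sequence that greedy commits away from. The step-function interface below is an assumption for the sketch, not an actual NMT decoder, and it omits the length normalization a production decoder would use:

```python
import heapq
import math

def beam_search(step_fn, start, width, max_len, eos):
    """step_fn(prefix) -> {token: prob}. Returns the best-scoring sequence found."""
    beams = [(0.0, [start])]              # (log-probability, token sequence)
    completed = []
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == eos:            # finished hypotheses leave the beam
                completed.append((score, seq))
                continue
            for tok, prob in step_fn(seq).items():
                candidates.append((score + math.log(prob), seq + [tok]))
        if not candidates:
            break
        beams = heapq.nlargest(width, candidates, key=lambda c: c[0])
    completed.extend(b for b in beams if b[1][-1] == eos)
    return max(completed or beams, key=lambda c: c[0])[1]

# Toy next-token distributions where the locally best first token is not
# globally optimal.
table = {
    ("<s>",): {"a": 0.6, "b": 0.4},
    ("<s>", "a"): {"c": 0.6, "</s>": 0.4},
    ("<s>", "b"): {"</s>": 1.0},
    ("<s>", "a", "c"): {"</s>": 1.0},
}
step = lambda seq: table[tuple(seq)]

# Greedy decoding (width 1) commits to "a" and ends with probability 0.36;
# a beam of width 2 keeps "b" alive and finds the 0.4-probability sequence.
# beam_search(step, "<s>", 1, 5, "</s>") -> ['<s>', 'a', 'c', '</s>']
# beam_search(step, "<s>", 2, 5, "</s>") -> ['<s>', 'b', '</s>']
```

This is exactly why decoding strategy alone, with the model untouched, can change translation quality: the two calls above use the same "model" but return different sentences.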

A Speech Translation System for Hotel Reservation (호텔예약을 위한 음성번역시스템)

  • 구명완;김재인;박상규;김우성;장두성;홍영국;장경애;김응인;강용범
    • The Journal of the Acoustical Society of Korea, v.15 no.4, pp.24-31, 1996
  • In this paper, we present a speech translation system for hotel reservation, KT-STS (Korea Telecom Speech Translation System). KT-STS is a speech-to-speech translation system that translates a spoken utterance in Korean into one in Japanese. The system has been designed around the task of hotel reservation (dialogues between a Korean customer and a hotel reservation desk in Japan). It consists of a Korean speech recognition system, a Korean-to-Japanese machine translation system, and a Korean speech synthesis system. The Korean speech recognition system is an HMM (Hidden Markov Model)-based speaker-independent continuous speech recognizer with a vocabulary of about 300 words. A bigram language model is used as the forward language model, and a dependency grammar is used as the backward language model. For machine translation, we use dependency grammar and a direct transfer method. The Korean speech synthesizer uses demiphones as its synthesis unit, with a method of periodic waveform analysis and reallocation. KT-STS runs in nearly real time on a SPARC20 workstation with one TMS320C30 DSP board. In speech recognition tests we achieved a word recognition rate of 94.68% and a sentence recognition rate of 82.42%; in Korean-to-Japanese translation tests we achieved a translation success rate of 100%. We also conducted an international joint experiment in which our system was connected over a leased line with a system developed by KDD in Japan.
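The forward bigram language model mentioned above assigns each word a probability conditioned on the previous word. A minimal maximum-likelihood version, without the smoothing a real recognizer would need, can be sketched as follows; the function names and sentence markers are illustrative:

```python
from collections import Counter

def train_bigram_lm(sentences):
    """MLE bigram model: P(w2 | w1) = count(w1 w2) / count(w1)."""
    bigrams, unigrams = Counter(), Counter()
    for sent in sentences:
        words = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(words[:-1])                 # contexts (all but last)
        bigrams.update(zip(words[:-1], words[1:]))  # adjacent word pairs
    def prob(w1, w2):
        return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
    return prob

lm = train_bigram_lm(["i want a room", "i want breakfast"])
# lm("<s>", "i") -> 1.0, lm("want", "a") -> 0.5
```

In the recognizer, such a model scores candidate word sequences during the forward search, while the dependency grammar re-scores them in the backward pass.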


Third grade students' fraction concept learning based on Lesh translation model (Lesh 표상 변환(translation) 모델을 적용한 3학년 학생들의 분수개념 학습)

  • Han, Hye-Sook
    • Communications of Mathematical Education, v.23 no.1, pp.129-144, 2009
  • The purpose of this study was to investigate the effects of using the RNP curriculum, based on the Lesh translation model, on third-grade students' understanding of fraction concepts and their problem-solving ability. Students' conceptual understanding of fractions and their problem-solving ability improved with the use of the curriculum. Varied manipulative experiences and translation processes between and among representations facilitated students' conceptual understanding of fractions and contributed to the development of problem-solving strategies. Especially in problem situations involving fraction ordering, which was not covered during the study, mental images of fractions constructed through experiences with manipulatives played a central role as a problem-solving strategy.


A Survey of Machine Translation and Parts of Speech Tagging for Indian Languages

  • Khedkar, Vijayshri;Shah, Pritesh
    • International Journal of Computer Science & Network Security, v.22 no.4, pp.245-253, 2022
  • Commenced in 1954 by IBM, machine translation has expanded immensely, particularly in recent years. Machine translation can be broken into seven main steps, namely token generation, morphological analysis, lexeme analysis, part-of-speech tagging, chunking, parsing, and word-sense disambiguation. Morphological analysis plays a major role when translating Indian languages, in developing accurate part-of-speech taggers and word senses. The paper presents various machine translation methods used by different researchers for Indian languages, along with their performance and drawbacks. Further, the paper concentrates on part-of-speech (POS) tagging for the Marathi language using various methods such as rule-based tagging, unigram, bigram, and more. After careful study, it is concluded that part-of-speech tagging is a major step in machine translation. Also, for the Marathi language, the Hidden Markov Model gives the best results for part-of-speech tagging, with an accuracy of 93%, which can be further improved depending on the dataset.
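An HMM POS tagger like the one the survey found best for Marathi chooses, via the Viterbi algorithm, the tag sequence maximizing the product of start, transition, and emission probabilities. A small log-space sketch with an invented two-tag toy model (all probabilities are made up for illustration):

```python
import math

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Most likely tag sequence under an HMM, computed in log space."""
    tiny = 1e-12  # floor probability for unseen emissions
    V = [{t: (math.log(start_p[t]) + math.log(emit_p[t].get(words[0], tiny)), [t])
          for t in tags}]
    for word in words[1:]:
        V.append({})
        for t in tags:
            # Best predecessor tag for reaching tag t at this position.
            prev = max(tags, key=lambda p: V[-2][p][0] + math.log(trans_p[p][t]))
            score = (V[-2][prev][0] + math.log(trans_p[prev][t])
                     + math.log(emit_p[t].get(word, tiny)))
            V[-1][t] = (score, V[-2][prev][1] + [t])
    best = max(tags, key=lambda t: V[-1][t][0])
    return V[-1][best][1]

# Invented noun/verb toy model, purely for illustration.
tags = ["N", "V"]
start_p = {"N": 0.7, "V": 0.3}
trans_p = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit_p = {"N": {"dogs": 0.6, "bark": 0.2}, "V": {"dogs": 0.1, "bark": 0.7}}
# viterbi(["dogs", "bark"], tags, start_p, trans_p, emit_p) -> ['N', 'V']
```

A real Marathi tagger would estimate these probability tables from an annotated corpus and add smoothing for unseen words, but the dynamic-programming core is the same.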