• Title/Summary/Keyword: sentence translation

Sentence Translation and Vocabulary Retention in an EFL Reading Class

  • Kim, Boram
    • English Language & Literature Teaching / v.18 no.2 / pp.67-84 / 2012
  • The present study investigated the effect of sentence translation, as a production task, on short-term and long-term retention of foreign vocabulary. Eighty-seven beginning-level EFL university students enrolled in a reading class participated in the study. The study compared the vocabulary recall of three groups: (1) a control group, (2) a translation group, and (3) a copy group. During the treatment sessions, the translation group translated L1 sentences into English, while the copy group simply copied given English sentences containing each target word. Results of the immediate tests were collected each week from week 2 to week 5 and analyzed by one-way ANOVA. Regarding short-term vocabulary retention, participants in the rote-copy condition outperformed those in the translation group. Four weeks later, a delayed test was administered to measure long-term vocabulary retention. In contrast, a two-way repeated-measures ANOVA showed that the long-term vocabulary retention of the translation group was significantly greater than that of the copy group. The findings suggest that although sentence translation is rather challenging for low-level learners, it may facilitate long-term retention of new vocabulary, given the more elaborate and deeper processing the task entails.

English-Korean speech translation corpus (EnKoST-C): Construction procedure and evaluation results

  • Jeong-Uk Bang;Joon-Gyu Maeng;Jun Park;Seung Yun;Sang-Hun Kim
    • ETRI Journal / v.45 no.1 / pp.18-27 / 2023
  • We present an English-Korean speech translation corpus, named EnKoST-C. End-to-end model training for speech translation tasks often suffers from a lack of parallel data, such as speech data in the source language paired with equivalent text data in the target language. Most available public speech translation corpora were developed for European languages, and there is currently no public corpus for English-Korean end-to-end speech translation. Thus, we created EnKoST-C, centered on TED Talks. In this process, we enhanced the sentence alignment approach using subtitle time information and bilingual sentence embedding information. As a result, we built a 559-h English-Korean speech translation corpus. The proposed sentence alignment approach showed excellent performance, with an F-measure of 0.96. We also report the baseline performance of an English-Korean speech translation model trained on EnKoST-C. EnKoST-C is freely available on a Korean government open-data hub site. (A minimal illustrative sketch of the alignment idea follows this entry.)
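
A minimal sketch of the alignment idea described in this entry, combining subtitle time overlap with bilingual sentence-embedding similarity. The embedding model (LaBSE via the sentence-transformers library), the weights, and the threshold are assumptions for illustration; the authors' actual procedure is not reproduced here.

```python
# Hedged sketch (not the authors' code): align English and Korean subtitle
# sentences by combining subtitle time overlap with bilingual sentence-embedding
# similarity. Model choice, weights, and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

def time_overlap(a, b):
    """Fraction of subtitle a's duration that overlaps subtitle b."""
    start, end = max(a["start"], b["start"]), min(a["end"], b["end"])
    return max(0.0, end - start) / max(a["end"] - a["start"], 1e-6)

def align(en_subs, ko_subs, w_time=0.5, w_emb=0.5, threshold=0.6):
    """Each subtitle is a dict with 'start', 'end', and 'text' keys."""
    model = SentenceTransformer("sentence-transformers/LaBSE")  # multilingual embeddings
    en_vecs = model.encode([s["text"] for s in en_subs], convert_to_tensor=True)
    ko_vecs = model.encode([s["text"] for s in ko_subs], convert_to_tensor=True)
    sim = util.cos_sim(en_vecs, ko_vecs)  # similarity matrix, shape (len(en), len(ko))
    pairs = []
    for i, en in enumerate(en_subs):
        # score every Korean candidate by weighted time overlap plus embedding similarity
        scores = [w_time * time_overlap(en, ko) + w_emb * float(sim[i][j])
                  for j, ko in enumerate(ko_subs)]
        j_best = max(range(len(ko_subs)), key=lambda j: scores[j])
        if scores[j_best] >= threshold:
            pairs.append((en["text"], ko_subs[j_best]["text"]))
    return pairs
```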

Automatic Post Editing Research (기계번역 사후교정(Automatic Post Editing) 연구)

  • Park, Chan-Jun;Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.11 no.5 / pp.1-8 / 2020
  • Machine translation refers to a system in which a computer translates a source sentence into a target sentence, and it has various subfields. APE (Automatic Post Editing) is a subfield of machine translation that produces better translations by editing the output of machine translation systems; in other words, it is the process of correcting errors in the translations generated by a machine translation system, as a form of automated proofreading. Rather than changing the machine translation model itself, this research field aims to improve translation quality by correcting the output sentences of the machine translation system. APE has been included as a WMT Shared Task since 2015, and its performance is evaluated with TER (Translation Error Rate). As a result, various studies on APE models have been published recently, and this paper surveys the latest research trends in the field of APE. (A simplified sketch of TER-style scoring follows this entry.)
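
TER, the metric mentioned in this entry, is essentially the number of edits needed to turn the system output into the reference, divided by the reference length. The sketch below is a simplified word-level version that ignores the block-shift operation full TER also counts.

```python
# Simplified sketch of TER-style scoring (edit distance over reference length).
# Real TER also counts block shifts; this version omits them for brevity.
def edit_distance(hyp, ref):
    """Word-level Levenshtein distance between hypothesis and reference token lists."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[m][n]

def ter(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    return edit_distance(hyp, ref) / max(len(ref), 1)

# One substitution against a six-word reference gives TER of roughly 0.17.
print(ter("the cat sat on a mat", "the cat sat on the mat"))
```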

Simultaneous neural machine translation with a reinforced attention mechanism

  • Lee, YoHan;Shin, JongHun;Kim, YoungKil
    • ETRI Journal / v.43 no.5 / pp.775-786 / 2021
  • To translate in real time, a simultaneous translation system must determine when to stop reading source tokens and generate target tokens corresponding to the partial source sentence read up to that point. However, conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is complete before computing the alignment between source and target tokens. To address this issue, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to jointly train the stopping criterion and a partial translation model. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to address latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, the modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality and comparable latency compared with previous models. (A generic READ/WRITE decoding sketch follows this entry.)
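
The decision the stopping criterion makes can be pictured as a READ/WRITE loop over a partially read source sentence. The sketch below is a generic illustration under assumed `encoder`, `decoder`, and `policy` interfaces; it is not the proposed reinforced attention mechanism itself.

```python
# Illustrative sketch only: a generic READ/WRITE loop for simultaneous translation.
# The `policy`, `encoder`, and `decoder` objects are hypothetical stand-ins for the
# paper's RL-trained modules.
def simultaneous_translate(source_tokens, encoder, decoder, policy, max_len=128):
    read, target = 0, []
    state = decoder.initial_state()
    while len(target) < max_len:
        context = encoder.encode(source_tokens[:read])   # encode only the prefix read so far
        if read < len(source_tokens) and policy.should_read(state, context):
            read += 1                                    # READ: consume one more source token
            continue
        token, state = decoder.step(state, context)      # WRITE: emit one target token
        target.append(token)
        if token == "</s>":                              # stop at end-of-sentence
            break
    return target
```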

E-book to sign-language translation program based on morpheme analysis (형태소 분석 기반 전자책 수화 번역 프로그램)

  • Han, Sol-Ee;Kim, Se-A;Hwang, Gyung-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.2 / pp.461-467 / 2017
  • As the number of smart devices increases, e-book contents and services are proliferating. However, text-based e-books are difficult for hearing-impaired people to understand. In this paper, we developed an Android-based application in which the user can choose an e-book text file; each sentence is then translated into sign-language elements, which are shown in videos retrieved from a sign-language contents server. We used a Korean sentence-to-sign-language translation algorithm based on morpheme analysis. The proposed translation algorithm consists of three stages. First, elements of the sentence that have no counterpart in typical sign-language usage are removed. Second, the tense of the sentence and expression alterations are applied. Finally, honorific forms are handled and word positions in the sentence are revised. We also proposed a new method to evaluate the performance of the translation algorithm and demonstrated the superiority of the algorithm through the translation results of 100 reference sentences. (A skeleton of this three-stage pipeline follows this entry.)
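
A skeleton mirroring the three stages described in this entry. The morpheme tags, analyzer API, and rule bodies are assumptions for illustration only, not the authors' implementation.

```python
# Skeleton of the three-stage Korean-to-sign-gloss pipeline described above.
# Rule tables are placeholders and the analyzer interface is assumed.
from dataclasses import dataclass

@dataclass
class Morpheme:
    surface: str
    tag: str

def remove_untranslated_elements(morphemes):
    """Stage 1: drop elements with no sign-language counterpart (example tags assumed)."""
    dropped_tags = {"JKS", "JKO", "JX"}  # e.g., certain case/auxiliary particles
    return [m for m in morphemes if m.tag not in dropped_tags]

def apply_tense_and_expression(morphemes):
    """Stage 2: map tense markers and expression alterations to sign glosses (placeholder)."""
    return morphemes

def apply_honorifics_and_reorder(morphemes):
    """Stage 3: handle honorific forms and revise word order for sign grammar (placeholder)."""
    return [m.surface for m in morphemes]

def sentence_to_sign_glosses(sentence, analyzer):
    morphemes = analyzer.analyze(sentence)            # morpheme analysis (assumed API)
    morphemes = remove_untranslated_elements(morphemes)
    morphemes = apply_tense_and_expression(morphemes)
    glosses = apply_honorifics_and_reorder(morphemes)
    return glosses                                    # each gloss then maps to a sign video
```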

Sentence Type Identification in Korean: Applications to Korean-Sign Language Translation and Korean Speech Synthesis (한국어 문장 유형의 자동 분류: 한국어-수화 변환 및 한국어 음성 합성에의 응용)

  • Chung, Jin-Woo;Lee, Ho-Joon;Park, Jong-C.
    • Journal of the HCI Society of Korea / v.5 no.1 / pp.25-35 / 2010
  • This paper proposes a method of automatically identifying sentence types in Korean and of improving the naturalness of sign-language generation and speech synthesis using the identified sentence-type information. In Korean, sentences are usually categorized into five types: declarative, imperative, propositive, interrogative, and exclamatory. However, these types are known to be quite ambiguous to identify in dialogue. In this paper, we present additional morphological and syntactic clues for the sentence type and propose a rule-based procedure for identifying the sentence type using these clues. The experimental results show that our method gives reasonable performance. We also describe how the sentence type is used to generate non-manual signals in Korean-to-Korean sign language translation and appropriate intonation in Korean speech synthesis. Since the use of sentence-type information in speech synthesis and sign-language generation has not been studied much previously, we anticipate that our method will contribute to research on generating more natural speech and sign-language expressions. (A toy rule-based classifier follows this entry.)
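
To illustrate the rule-based idea, here is a toy classifier keyed on sentence-final endings. The ending lists are invented examples, not the paper's rule set, which additionally uses the morphological and syntactic clues mentioned in the entry to resolve ambiguous dialogue sentences.

```python
# Toy illustration of rule-based sentence type identification from final endings.
# The ending lists are illustrative only; real dialogue needs richer clues.
SENTENCE_TYPE_ENDINGS = {
    "interrogative": ("까?", "니?", "나요?"),
    "imperative":    ("어라.", "아라.", "세요.", "십시오."),
    "propositive":   ("자.", "시다."),
    "exclamatory":   ("구나!", "군요!"),
}

def classify_sentence_type(sentence: str) -> str:
    s = sentence.strip()
    for stype, endings in SENTENCE_TYPE_ENDINGS.items():
        if s.endswith(endings):      # str.endswith accepts a tuple of suffixes
            return stype
    return "declarative"             # default when no clue matches

print(classify_sentence_type("지금 갈까?"))   # -> interrogative
```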

The Expression of Ending Sentence in Family Conversations in the Virtual Language - Focusing on Politeness and Sentence-final Particle with Instructional Media - (가상세계 속에 보인 일본어의 가족 간의 문말 표현에 대해 - 교수매체로서의 문말의 정중체와 종조사 사용에 대해)

  • Yang, Jung-Soon
    • Cross-Cultural Studies / v.39 / pp.433-460 / 2015
  • This paper analyzed politeness and sentence-ending expressions in family conversations in the virtual language of cartoon characters. In the historical genre, younger speakers tend to attach sentence-final particles to the polite form, while older speakers tend to attach them to the plain form; in other fiction genres, both younger and older speakers attach sentence-final particles to the plain form. The use of terms of respect is determined by circumstances and by charactonym. Comparing the translated conversations with the originals, the translated works showed different aspects. When Japanese instructors use such material as an instructional medium for studying Japanese, they should give students supplementary explanations. In the virtual language of cartoons, 'WA' and 'KASIRA', which female speakers usually use, are also used by male speakers, and 'ZO' and 'ZE', which male speakers usually use, are also used by female speakers. In the translations, 'KANA' and 'KASIRA' are rendered as 'KA?'; 'WA', 'ZO', and 'ZE' as 'A(EO)?'; and 'WAYO' and 'ZEYO' as 'AYO(EOYO)'. When sentence-final particles from the virtual language of cartoons are used in teaching, supplementary explanations and further examination are needed.

Symbolizing Numbers to Improve Neural Machine Translation (숫자 기호화를 통한 신경기계번역 성능 향상)

  • Kang, Cheongwoong;Ro, Youngheon;Kim, Jisu;Choi, Heeyoul
    • Journal of Digital Contents Society / v.19 no.6 / pp.1161-1167 / 2018
  • The development of machine learning has enabled machines to perform delicate tasks that only humans could previously do, and thus many companies have introduced machine-learning-based translators. Existing translators perform well but have problems with number translation: they often mistranslate numbers when the input sentence includes a large number, and the output sentence structure can change completely even if only one number in the input sentence changes. In this paper, we first optimized a neural machine translation model architecture that uses a bidirectional RNN, LSTM, and the attention mechanism, through data cleansing and changes to the dictionary size. Then, we implemented a number-processing algorithm specialized for number translation and applied it to the neural machine translation model to solve the problems above. The paper presents the data-cleansing method, an optimal dictionary size, and the number-processing algorithm, as well as experimental results for translation performance based on the BLEU score. (A minimal sketch of the number-symbolization idea follows this entry.)
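
The number-symbolization idea can be sketched as replacing each number with a placeholder token before translation and restoring it afterwards. The token format and regular expression below are assumptions, not the paper's exact scheme.

```python
# Minimal sketch of number symbolization for NMT: substitute placeholder tokens
# for numbers before translation and restore them in the translated output.
import re

NUM_PATTERN = re.compile(r"\d+(?:[,.]\d+)*")

def symbolize(sentence):
    """Return the symbolized sentence and the mapping needed to restore it."""
    mapping = {}
    def repl(match):
        token = f"<NUM{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return NUM_PATTERN.sub(repl, sentence), mapping

def restore(translated, mapping):
    """Put the original numbers back into the translated sentence."""
    for token, number in mapping.items():
        translated = translated.replace(token, number)
    return translated

src, mapping = symbolize("The company sold 1,250,000 units in 2018.")
# src == "The company sold <NUM0> units in <NUM1>."
# After translating src, restore(output, mapping) reinserts the original numbers.
```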

A Design of Japanese Analyzer for Japanese to Korean Translation System (일반 번역시스탬을 위한 일본어 해석기 설계)

  • 강석훈;최병욱
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.1 / pp.136-146 / 1995
  • In this paper, a Japanese morphological analyzer for a Japanese-to-Korean machine translation system is designed. The analyzer reconstructs the Japanese input sentence into word phrases that include grammatical and dictionary information. We propose an algorithm that separates morphemes and then connects them with reference to the corresponding Korean word phrases, and we define a connector to control Japanese word phrases; it is used to control the start and end points of each word phrase in the Japanese sentence, which is written without spaces. The proposed analyzer uses an analysis dictionary to perform the analysis more efficiently than existing analyzers and decreases the number of dictionary searches. Since the proposed analyzer processes each word phrase in consideration of the corresponding Korean word phrase, it can generate more accurate Korean expressions than existing analyzers, which place great importance on generating the entire sentence structure. (A toy segmentation sketch follows this entry.)
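
As a rough illustration of separating morphemes in an unspaced Japanese sentence, here is a toy dictionary-based longest-match segmenter. The tiny dictionary is invented, and the connector-based phrase control and Korean word-phrase correspondence described in the entry are not modeled.

```python
# Toy sketch of dictionary-based longest-match segmentation for unspaced Japanese
# text; a simplification of the morphological analysis described above.
DICTIONARY = {"私", "は", "本", "を", "読む"}   # illustrative entries only

def segment(sentence, dictionary=DICTIONARY, max_len=8):
    morphemes, i = [], 0
    while i < len(sentence):
        # try the longest dictionary entry starting at position i
        for length in range(min(max_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if candidate in dictionary:
                morphemes.append(candidate)
                i += length
                break
        else:
            morphemes.append(sentence[i])   # unknown character: emit as-is
            i += 1
    return morphemes

print(segment("私は本を読む"))   # -> ['私', 'は', '本', 'を', '読む']
```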

Optimized Chinese Pronunciation Prediction by Component-Based Statistical Machine Translation

  • Zhu, Shunle
    • Journal of Information Processing Systems / v.17 no.1 / pp.203-212 / 2021
  • To eliminate ambiguities in existing methods for simplifying Chinese pronunciation learning, we propose a model that can predict the pronunciation of Chinese characters automatically. The proposed model relies on a statistical machine translation (SMT) framework. In particular, we treat the components of Chinese characters as the basic unit and cast pronunciation prediction as a machine translation procedure (the component sequence as the source sentence and the pronunciation, pinyin, as the target sentence). In addition to traditional features such as the bidirectional word translation and the n-gram language model, we also implement a component-similarity feature to overcome typos that occur in practical use. We incorporate these features into a log-linear model. The experimental results show that our approach significantly outperforms the baseline models. (A hedged sketch of log-linear scoring follows this entry.)
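
The log-linear combination can be sketched as a weighted sum of log feature values for each candidate pronunciation. The feature values and weights below are made-up placeholders, not the paper's trained parameters.

```python
# Hedged sketch of log-linear scoring over candidate pronunciations: each candidate
# is scored as a weighted sum of log feature values, as in a standard SMT log-linear model.
import math

def log_linear_score(features, weights):
    """Score = sum_i w_i * log f_i(candidate); feature values are probabilities/similarities."""
    return sum(w * math.log(max(f, 1e-12)) for f, w in zip(features, weights))

def best_pronunciation(candidates, weights):
    # Each candidate carries its feature values, e.g.
    # (translation prob, reverse translation prob, LM prob, component similarity).
    return max(candidates, key=lambda c: log_linear_score(c["features"], weights))

candidates = [
    {"pinyin": "qing2", "features": (0.40, 0.35, 0.20, 0.90)},
    {"pinyin": "jing1", "features": (0.25, 0.30, 0.10, 0.60)},
]
print(best_pronunciation(candidates, weights=(1.0, 1.0, 0.5, 0.8))["pinyin"])  # -> qing2
```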