• Title/Summary/Keyword: Rule-Based Machine Translation


An Analysis System of Prepositional Phrases in English-to-Korean Machine Translation (영한 기계번역에서 전치사구를 해석하는 시스템)

  • Gang, Won-Seok
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.7
    • /
    • pp.1792-1802
    • /
    • 1996
  • The analysis of prepositional phrases in English-to-Korean machine translation faces problems in PP-attachment resolution, semantic analysis, and the acquisition of analysis information. This paper presents an analysis system for prepositional phrases that addresses these problems. The system consists of a hybrid PP-attachment resolver, a semantic analysis system, and a semantic feature generator that automatically produces the required input information. The automatic generation of semantic features provides objectivity in analyzing prepositional phrases. The semantic analysis system generates natural Korean expressions by selecting the semantic roles of prepositional phrases. The hybrid PP-attachment resolver combines the merits of rule-based and neural network-based methods.
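The hybrid PP-attachment scheme described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the rule contents are invented, and the neural network is replaced by a stub scorer so the example runs self-contained.

```python
# Hybrid PP-attachment resolver sketch: hand-written rules decide
# clear-cut cases; a trained scorer (here a stub standing in for the
# paper's neural network) breaks the remaining ties.

def rule_based(verb, noun, prep):
    """Return 'verb', 'noun', or None when no rule fires."""
    if prep == "of":      # "of" phrases almost always modify the noun
        return "noun"
    if verb == "put":     # "put X in/on Y": locative argument of the verb
        return "verb"
    return None

def scorer(verb, noun, prep):
    """Stand-in for a neural scorer: returns P(attach to verb)."""
    return 0.8 if prep in ("in", "on", "with") else 0.3

def resolve_attachment(verb, noun, prep):
    decision = rule_based(verb, noun, prep)
    if decision is not None:
        return decision
    return "verb" if scorer(verb, noun, prep) > 0.5 else "noun"

print(resolve_attachment("saw", "man", "of"))      # rule fires: noun
print(resolve_attachment("ate", "pizza", "with"))  # scorer decides: verb
```

The design point is the fallback order: deterministic rules give predictable behavior where linguists are confident, and the statistical component covers the long tail.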


Building an Annotated English-Vietnamese Parallel Corpus for Training Vietnamese-related NLPs

  • Dien Dinh;Kiem Hoang
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.103-109
    • /
    • 2004
  • In NLP (Natural Language Processing) tasks, the greatest difficulty computers face is the inherent ambiguity of natural languages. To resolve it, systems formerly relied on human-devised rules. Building such a complete rule set is a time-consuming and labor-intensive task, and it still does not cover all cases. Moreover, as the scale of a system increases, the rule set becomes very difficult to maintain. Recently, therefore, many NLP tasks have shifted from rule-based approaches to corpus-based approaches using large annotated corpora. Corpus-based NLP for popular languages such as English and French has been well studied, with satisfactory achievements. In contrast, corpus-based NLP for Vietnamese is at a deadlock due to the absence of annotated training data. Furthermore, hand-annotation of even reasonably well-determined features such as part-of-speech (POS) tags has proved labor-intensive and costly. In this paper, we present the construction of an annotated English-Vietnamese parallel aligned corpus, named EVC, for training Vietnamese-related NLP tasks such as word segmentation, POS tagging, word-order transfer, word sense disambiguation, and English-to-Vietnamese machine translation.


Development of Korean-to-English and English-to-Korean Mobile Translator for Smartphone (스마트폰용 영한, 한영 모바일 번역기 개발)

  • Yuh, Sang-Hwa;Chae, Heung-Seok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.3
    • /
    • pp.229-236
    • /
    • 2011
  • In this paper we present lightweight English-to-Korean and Korean-to-English mobile translators for smartphones. For natural translation and higher translation quality, the translation engines hybridize Translation Memory (TM) with a rule-based translation engine. To maximize the usability of the system, we combined an Optical Character Recognition (OCR) engine and a Text-to-Speech (TTS) engine as the front-end and back-end of the mobile translators. Under the BLEU and NIST evaluation metrics, the experimental results show that our E-K and K-E mobile translation quality reaches 72.4% and 77.7% of that of Google's translators, respectively. This shows that the quality of our mobile translators almost reaches that of server-based machine translation, demonstrating their commercial usefulness.
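The TM/rule hybrid described above can be sketched as a two-stage lookup. Everything here is illustrative and not from the paper: the memory entries, the toy lexicon, and the naive SVO-to-SOV reordering only stand in for a real rule-based transfer engine.

```python
# Hybrid pipeline sketch: an exact-match translation memory is tried
# first; on a miss, a toy rule-based engine reorders and translates
# word by word with a bilingual lexicon (illustrative data only).

TRANSLATION_MEMORY = {
    "good morning": "좋은 아침",
}

LEXICON = {"i": "나는", "eat": "먹는다", "rice": "밥을"}

def rule_based_translate(sentence):
    # Toy SVO -> SOV reordering plus lexicon lookup.
    words = sentence.lower().split()
    if len(words) == 3:                         # subject verb object
        words = [words[0], words[2], words[1]]  # subject object verb
    return " ".join(LEXICON.get(w, w) for w in words)

def translate(sentence):
    hit = TRANSLATION_MEMORY.get(sentence.lower())
    return hit if hit is not None else rule_based_translate(sentence)

print(translate("Good morning"))  # TM hit
print(translate("I eat rice"))    # rule-based fallback
```

A TM hit returns a human-vetted translation verbatim, which is why TM-first ordering tends to raise perceived quality on repetitive domains.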

Using Machine Translation Agent Based on Ontology Study of Real Translation (온톨로지 기반의 지능형 번역 에이전트를 이용한 실시간 번역 연구)

  • Kim Su-Gyeong;Kim Gyeong-A;An Gi-Hong
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2006.06a
    • /
    • pp.229-233
    • /
    • 2006
  • Research on machine translation (MT), multilingual information retrieval, and semantic information retrieval has drawn on diverse resources such as thesauri, knowledge bases, dictionary lookup, semantic networks, and corpora. With the emergence of the Semantic Web and the development of Semantic Web technologies, the need to apply such research to the Semantic Web has also been raised. In particular, knowledge bases such as the Korean thesaurus, WordNet, the electronic Sejong dictionary, and the Kadokawa thesaurus have been developed, but because their construction methodologies differ by application area, no knowledge base that serves all of this research effectively has in practice been built. In this study, therefore, we define a dictionary ontology server for a Korean-Japanese knowledge base, built on the Sejong dictionary, the Kadokawa thesaurus, a Korean/Japanese machine translation dictionary, and a terminology dictionary, to organize semantic information, and we define syntactic rules using the Semantic Web Rule Markup Language (SWRL). We then construct the inference engine required for translation on an SWRL-based forward-chaining inference engine, and provide users with Korean-Japanese sentence structure conversion through a sentence-formation rule inference engine. This study proposes Semantic Web technology as a way to address many problems that machine translation has yet to solve, such as polysemy, differences in predicate word order, and honorifics.


Syntactic Rule Compiler in Rule-based English-Korean Machine Translation (규칙 기반 영한 기계번역에서의 구문 규칙 컴파일러)

  • Kim, Sung-Dong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.11a
    • /
    • pp.1315-1317
    • /
    • 2013
  • The syntactic analysis component of a rule-based English-Korean machine translation system consists of a rule part, which describes the syntactic structure of English, and an execution part, which performs chart parsing by applying the rules. The syntactic rules are written as a context-free grammar, while the execution part that applies them during parsing is expressed as C functions, so the rules must be converted into C functions. In this paper, we develop a syntactic rule compiler, a tool that converts syntactic rules written in context-free grammar form into C functions. By converting the rules automatically, the compiler makes it easy to create and modify syntactic rules, which happens frequently while improving an English-Korean machine translation system, and thus supports the work of improving translation quality.
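The compiler's job, turning one CFG rule into C function text, can be sketched as below. The C skeleton (`Node`, `is_cat`, `make_node`, the function signature) is hypothetical; the abstract does not specify the paper's actual parser interface.

```python
# Sketch of a syntactic rule compiler: turn a CFG rule such as
# "NP -> DET N" into the text of a C function a chart parser could call.
# The emitted C interface is an assumption for illustration.

def compile_rule(lhs, rhs):
    """Generate C source for one CFG rule lhs -> rhs[0] rhs[1] ..."""
    name = f"apply_{lhs}_{'_'.join(rhs)}"
    checks = " && ".join(
        f'is_cat(child[{i}], "{cat}")' for i, cat in enumerate(rhs)
    )
    return (
        f"int {name}(Node *child[], int n) {{\n"
        f"    if (n == {len(rhs)} && {checks})\n"
        f'        return make_node("{lhs}", child, n);\n'
        f"    return 0;\n"
        f"}}\n"
    )

print(compile_rule("NP", ["DET", "N"]))
```

Generating the functions mechanically is what makes frequent rule edits cheap: the grammar writer touches only the CFG text, never the C code.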

Implementing Korean Partial Parser based on Rules (규칙에 기반한 한국어 부분 구문분석기의 구현)

  • Lee, Kong-Joo;Kim, Jae-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.389-396
    • /
    • 2003
  • In this paper, we present a rule-based Korean partial parser, which is used in applications such as a grammar checker and machine translation. A partial parser groups one or more morphemes and/or words into a single syntactic unit rather than building complete syntactic trees, and performs some additional operations needed for syntactic parsing. The system described in this paper adopts a set of about 140 manually written rules for partial parsing. Each rule consists of conditional statements and an action statement that defines which node is the head and describes an additional action to perform if necessary. To confirm that this approach can improve the efficiency of overall processing, we conducted simple experiments. The results show that the average number of edges generated without the partial parser is about twice that generated with it.
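The condition/action rule format described above can be sketched as follows. The tag names and the two toy rules are illustrative, not drawn from the paper's 140-rule set; the action here is limited to chunking the match and naming its head.

```python
# Partial-parsing sketch: each rule is (condition: tag sequence,
# action: index of the head within the match). Matched spans are
# collapsed into one unit; unmatched tokens pass through untouched.

RULES = [
    (["DET", "ADJ", "NOUN"], 2),   # e.g. "the big dog", head = NOUN
    (["DET", "NOUN"], 1),          # e.g. "the dog",     head = NOUN
]

def partial_parse(tagged):
    """tagged: list of (word, tag) pairs. Returns chunks and tokens."""
    out, i = [], 0
    while i < len(tagged):
        for pattern, head in RULES:
            window = [t for _, t in tagged[i:i + len(pattern)]]
            if window == pattern:
                words = [w for w, _ in tagged[i:i + len(pattern)]]
                out.append({"unit": words, "head": words[head]})
                i += len(pattern)
                break
        else:
            out.append(tagged[i])
            i += 1
    return out

chunks = partial_parse([("the", "DET"), ("big", "ADJ"),
                        ("dog", "NOUN"), ("barked", "VERB")])
print(chunks)  # one chunk headed by "dog", then the verb token
```

Collapsing such spans before full parsing is exactly what reduces the chart's edge count: the parser sees one pre-built constituent instead of three separate words.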

Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.125-132
    • /
    • 2022
  • Sentence compression is a natural language processing task that generates a concise sentence preserving the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies utilized human-defined linguistic rules. More recently, since sequence-to-sequence models perform well on various natural language processing tasks such as machine translation, some studies have applied them to sentence compression. However, the linguistic-rule-based studies require all rules to be defined by humans, while the sequence-to-sequence studies require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed with BERT, it requires neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not reflect the linguistic information of the words in a sentence. Furthermore, since the corpora used for pre-training BERT are far from compressed sentences, this can lead to incorrect compression. To address these problems, this paper proposes a method to quantify the importance of linguistic information and reflect it in perplexity-based sentence scoring. In addition, by fine-tuning BERT with a corpus of news articles, which often contain proper nouns and omit unnecessary modifiers, we allow BERT to measure perplexity in a way appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the proposed method improves the compression performance of sentence-scoring-based models.
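The deletion-based scheme above can be sketched as a greedy loop over candidate deletions. A real system would score candidates with BERT perplexity; here `score()` is a toy stand-in that favors keeping rarer words, an assumption made only so the example runs self-contained.

```python
# Deletion-based compression sketch: repeatedly delete the word whose
# removal costs the least according to a sentence scorer. score() is a
# toy proxy for the BERT-based scoring described above.

COMMON = {"the", "a", "very", "really", "quite"}

def score(words):
    """Toy importance score: rarer words contribute more."""
    return sum(0.1 if w in COMMON else 1.0 for w in words)

def compress(words, target_len):
    """Greedily delete words until target_len remain."""
    words = list(words)
    while len(words) > target_len:
        best = min(
            range(len(words)),
            key=lambda i: score(words) - score(words[:i] + words[i + 1:]),
        )
        del words[best]
    return words

print(compress(["the", "very", "big", "dog", "barked"], 3))
# -> ['big', 'dog', 'barked']
```

The paper's contribution corresponds to changing what `score()` rewards: adding weights for linguistic importance and fine-tuning the underlying language model so low-cost deletions line up with genuinely dispensable words.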