• Title/Summary/Keyword: English sentence processing


English-Korean Transfer Dictionary Extension Tool in English-Korean Machine Translation System (영한 기계번역 시스템의 영한 변환사전 확장 도구)

  • Kim, Sung-Dong
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.35-42 / 2013
  • Developing an English-Korean machine translation system requires constructing information about both languages, and the amount of information in the English-Korean transfer dictionary is especially critical to translation quality. Newly created words are out-of-vocabulary words and appear untranslated in the output sentence, which lowers translation quality. Compound nouns also complicate lexical and syntactic analysis, and they are difficult to translate accurately because of missing information in the transfer dictionary. To improve the translation quality of English-Korean machine translation, the information in the English-Korean transfer dictionary must be continuously expanded by collecting out-of-vocabulary words and frequently used compound nouns. This paper proposes a method for expanding the transfer dictionary that consists of constructing a corpus from internet newspapers, extracting words not in the existing dictionary along with frequently used compound nouns, attaching meanings to the extracted words, and integrating them into the transfer dictionary. We also develop a tool supporting this expansion. Expanding the dictionary information is critical to improving the machine translation system but requires considerable human effort. The developed tool is useful for continuously expanding the transfer dictionary and is therefore expected to contribute to enhancing translation quality.
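The expansion pipeline described in this abstract (corpus construction, out-of-vocabulary and compound-noun extraction, then dictionary integration) could be sketched roughly as follows; the whitespace tokenization, the dictionary contents, and the `min_freq` threshold are illustrative assumptions, not details from the paper:

```python
from collections import Counter

def extract_candidates(corpus_sentences, dictionary, min_freq=2):
    """Collect OOV words and frequent adjacent-word compound candidates."""
    oov_counts = Counter()
    compound_counts = Counter()
    for sentence in corpus_sentences:
        tokens = sentence.lower().split()
        for token in tokens:
            if token not in dictionary:
                oov_counts[token] += 1
        # Treat adjacent token pairs as compound-noun candidates.
        for left, right in zip(tokens, tokens[1:]):
            compound_counts[(left, right)] += 1
    oov = [w for w, c in oov_counts.items() if c >= min_freq]
    compounds = [p for p, c in compound_counts.items() if c >= min_freq]
    return oov, compounds

corpus = [
    "metaverse platform launched today",
    "the metaverse platform grows fast",
]
dictionary = {"the", "platform", "launched", "today", "grows", "fast"}
oov, compounds = extract_candidates(corpus, dictionary)
```

In the full pipeline the extracted candidates would then be assigned meanings and merged into the transfer dictionary, which is the step the paper's tool supports interactively.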

Accuracy Improvement of an Automated Scoring System through Removing Duplicately Reported Errors (영작문 자동 채점 시스템에서의 중복 보고 오류 제거를 통한 성능 향상)

  • Lee, Hyun-Ah;Kim, Jee-Eun;Lee, Kong-Joo
    • The KIPS Transactions:PartB / v.16B no.2 / pp.173-180 / 2009
  • The purpose of developing an automated scoring system for English composition is to score English writing tests and to give diagnostic feedback to test-takers without human effort. The system developed through our research detects grammatical errors in a single sentence at the morphological, syntactic, and semantic stages, respectively, and those errors are reflected in the final score. The error-detecting stages are independent of one another, which causes identical errors to be duplicated with different labels at different stages. These duplicated errors hinder accurate score calculation. This paper presents a solution that detects the duplicated errors and improves the accuracy of the final score by eliminating the redundant reports.
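The duplicate-removal idea might be sketched as below, under the assumption (not stated in the abstract) that each stage reports errors as character spans and that the report from the earliest stage is kept:

```python
def remove_duplicate_errors(errors):
    """Keep one error per (start, end) span; earlier stages win."""
    seen_spans = set()
    unique = []
    for error in sorted(errors, key=lambda e: e["stage"]):
        span = (error["start"], error["end"])
        if span not in seen_spans:
            seen_spans.add(span)
            unique.append(error)
    return unique

reports = [
    {"start": 0, "end": 2, "stage": 1, "label": "morphological"},
    {"start": 0, "end": 2, "stage": 2, "label": "syntactic"},  # same span, later stage
    {"start": 5, "end": 7, "stage": 3, "label": "semantic"},
]
```

A real system would likely need a fuzzier notion of "same error" than exact span equality (e.g. overlapping spans), but the deduplication-before-scoring step is the same.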

Sentence-Frame based English-to-Korean Machine Translation (문틀기반 영한 자동번역 시스템)

  • Choi, Sung-Kwon;Seo, Kwang-Jun;Kim, Young-Kil;Seo, Young-Ae;Roh, Yoon-Hyung;Lee, Hyun-Keun
    • Annual Conference on Human and Language Technology / 2000.10d / pp.323-328 / 2000
  • Korea has been developing English-to-Korean machine translation systems since 1985, fifteen years ago. Despite fifteen years of development, the translation quality of English-Korean machine translation systems still does not exceed 40%. The reasons for this low quality can be summarized as follows: (1) structural ambiguity caused by misrecognizing the right boundary when parsing the input sentence, for example, the ambiguity of whether the right conjunct is included in a coordinate clause; (2) incorrect translations of whole sentences caused by using partial translation patterns, such as phrase or clause patterns, rather than translation patterns covering the entire sentence as the translation unit; (3) degradation of translation quality caused by conflicts between newly constructed translation knowledge and the previously built large-scale translation knowledge as the knowledge base grows. To overcome these serious problems, this paper introduces a new sentence-frame-based English-to-Korean machine translation methodology. This methodology is currently in practical use in an English-to-Korean machine translation system for CNN news broadcast captions. It is basically a data-driven methodology: sentence-frame-based translation operates at a lower level than rule-based machine translation but at a higher level than example-based machine translation. The methodology should be applicable not only to English-to-Korean translation but also to other language pairs.


Cognitive Individual Differences and L2 Learners' Processing of Korean Subject-Object Relative Clauses (인지능력의 개별차와 한국어 학습자의 주격-목적격 관계절 프로세싱)

  • Goo, Jaemyung
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.6 / pp.493-503 / 2018
  • The present study is a conceptual replication of O'Grady, Lee, and Choo's (2003) earlier study designed to investigate two hypotheses (the linear distance hypothesis vs. the structural distance hypothesis) concerning L2 Korean learners' processing of Korean subject and object relative clauses (RCs), in a scholarly attempt to explicate Keenan and Comrie's (1977) Noun Phrase Accessibility Hierarchy (NPAH). In addition, the current study explores any potential role of working memory capacity (WMC) in the processing of Korean subject and/or object RCs. Chinese-speaking learners of Korean taking a language course at a local university in Korea participated in this experimental study. Among those recruited, only 23 learners completed the experimental tasks according to the specific instructions provided for each task, and thus subsequent statistical analyses were conducted on their data. Fifteen Korean native speakers were also recruited as a control group. Two experimental tasks were administered to the participants: a picture selection task containing the same test items used in O'Grady et al.'s study, measuring their processing of subject and object RCs, and an operation span (OSPAN) task measuring their WMC. Somewhat differently from O'Grady et al.'s findings, the participating Chinese learners of Korean performed significantly better on object RCs than on subject RCs, seemingly lending support to the linear distance hypothesis. Further analyses, however, suggested that the results favoring object relative clauses, or their relative ease of processing, were most likely due to the learners' excessive use of the canonical sentence strategy, which also led to nonsignificant correlations between WMC and learner performance on the picture selection task.

An LSTM Method for Natural Pronunciation Expression of Foreign Words in Sentences (문장에 포함된 외국어의 자연스러운 발음 표현을 위한 LSTM 방법)

  • Kim, Sungdon;Jung, Jaehee
    • KIPS Transactions on Software and Data Engineering / v.8 no.4 / pp.163-170 / 2019
  • The Korean language has postpositions such as eul, reul, yi, ga, wa, and gwa, which attach to nouns and add meaning to the sentence. When foreign notations or abbreviations are included in a sentence, the postposition appropriate to the pronunciation of the foreign word may not be used. Sometimes, for natural expression, two postpositions are written with one in parentheses, as in "eul(reul)," so that either reading is acceptable. This study finds examples of unnatural postposition use when foreign words are included in Korean sentences and proposes a method for choosing natural postpositions by learning the final-consonant pronunciation of nouns. The proposed method uses a recurrent neural network model to naturally express postpositions attached to foreign words, and it is validated by training and testing. It will be useful for composing correct sentences in machine translation by attaching natural postpositions to English abbreviations or new foreign words included in Korean sentences.
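For ordinary Hangul nouns, the final-consonant (batchim) rule that governs eul/reul selection can be computed directly from Unicode syllable arithmetic, since precomposed Hangul syllables occupy U+AC00 through U+D7A3 and the final-consonant index is the code point offset modulo 28. A minimal sketch follows; the LSTM in the paper addresses the harder case this rule cannot handle, namely words written in Latin script whose pronunciation must be learned:

```python
def has_final_consonant(word):
    """True if the last Hangul syllable of `word` has a batchim."""
    code = ord(word[-1])
    if not 0xAC00 <= code <= 0xD7A3:
        # Not a precomposed Hangul syllable (e.g. a Latin-script foreign
        # word): the pronunciation, and hence the postposition, must be
        # predicted by a learned model instead.
        raise ValueError("not a Hangul syllable")
    # Offset within the syllable block, modulo 28, indexes the final
    # consonant; 0 means there is none.
    return (code - 0xAC00) % 28 != 0

def object_postposition(word):
    """Choose the object marker: 'eul' after a batchim, else 'reul'."""
    return "eul" if has_final_consonant(word) else "reul"
```

For example, 책 ("book") ends in a final consonant and takes eul, while 사과 ("apple") does not and takes reul.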

Resolving Grammatical Marking Ambiguities of Korean: An Eye-tracking Study (안구운동 추적을 통한 한국어 중의성 해소과정 연구)

  • Kim, Youngjin
    • Korean Journal of Cognitive Science / v.15 no.4 / pp.49-59 / 2004
  • An eye-tracking experiment was conducted to examine the processes of resolving grammatical marking ambiguities in Korean and to evaluate predictions from the garden-path model and constraint-based models on the processing of Korean morphological information. The complex NP clause structure that can be parsed according to the minimal attachment principle was compared to embedded relative clause structures that have the nominative marker (-ka), the delimiter (-man, which roughly corresponds to the English word 'only'), or the topic marker (-nun) on the first NPs. The results clearly showed that Korean marking ambiguities are resolved by the minimal attachment principle and that the topic marker affects reparsing procedures. The pattern of eye fixation times was more compatible with the garden-path model and was not consistent with the predictions of the constraint-based accounts. Suggestions for further studies are made.


Korean Text to Gloss: Self-Supervised Learning approach

  • Thanh-Vu Dang;Gwang-hyun Yu;Ji-yong Kim;Young-hwan Park;Chil-woo Lee;Jin-Young Kim
    • Smart Media Journal / v.12 no.1 / pp.32-46 / 2023
  • Natural Language Processing (NLP) has grown tremendously in recent years. Bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. In contrast, few studies have focused on translating between spoken and sign languages, especially for non-English languages. Prior work on Sign Language Translation (SLT) has shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, comprising 3828 pairs of Korean sentences and their corresponding sign glosses used in museum commentary contexts. In addition, we propose a translation framework based on self-supervised learning, in which the pretext task is a text-to-text task mapping a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Using self-supervised learning helps overcome the shortage of sign language data. In experiments, our proposed model outperforms a baseline BERT model by 6.22%.

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.1-25 / 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Because it has more practical value for business, ABSA is drawing attention from both academic and industrial organizations. For example, given a review that says "The restaurant is expensive but the food is really fantastic", general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. To perform ABSA, it is necessary to identify which aspect terms or aspect categories are included in the text and to judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect.
On the other hand, an aspect category like 'price', which has no specific aspect terms but can be indirectly inferred from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and use the word 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, whereas ACSC treats not only explicit aspects but also implicit aspects. This study seeks answers to the following issues ignored in previous studies when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair configuration? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models whose performance exceeds that of existing studies were derived without expanding the training dataset. In addition, it was found that reflecting the output vector of the aspect category token is more effective than using only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence containing the aspect category in the QA type is irrelevant to performance.
There may be some differences depending on the characteristics of the dataset, but when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The methodology for designing the ACSC models used in this study could be similarly applied to other tasks such as ATSC.
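The QA- versus NLI-type sentence-pair configurations compared in this study might look roughly like the sketch below; the template wording, function name, and parameters are assumptions for illustration, not the paper's exact formulation:

```python
def make_pair(review, aspect, style="QA", aspect_first=False):
    """Build a (sentence_a, sentence_b) pair for a BERT-style encoder.

    QA style phrases the aspect as a question; NLI style supplies the
    aspect as a bare pseudo-sentence. `aspect_first` controls which
    sentence comes first in the pair.
    """
    if style == "QA":
        aux = f"what do you think of the {aspect} of it ?"
    elif style == "NLI":
        aux = aspect
    else:
        raise ValueError(f"unknown style: {style}")
    return (aux, review) if aspect_first else (review, aux)

pair = make_pair(
    "The restaurant is expensive but the food is really fantastic",
    "price",
    style="QA",
)
```

The resulting pair would then be fed to the tokenizer as two segments, so the model sees both the review and the aspect it must classify.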

Korean Semantic Role Labeling Based on Suffix Structure Analysis and Machine Learning (접사 구조 분석과 기계 학습에 기반한 한국어 의미 역 결정)

  • Seok, Miran;Kim, Yu-Seop
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.555-562 / 2016
  • Semantic Role Labeling (SRL) is the task of determining the semantic relations between a predicate and its arguments in a sentence. Korean SRL is difficult because Korean's language structure differs greatly from English, which makes it hard to apply the approaches developed so far; methods proposed to date have not shown satisfactory performance compared to English and Chinese. To address these problems, we focus on suffix analysis, such as josa (case suffix) and eomi (verbal ending) analysis. Korean is an agglutinative language, like Japanese, with a well-defined suffix structure in its words; such languages allow relatively free word order because of this developed suffix structure. Arguments consisting of a single morpheme are then labeled using statistics. In addition, machine learning algorithms such as Support Vector Machines (SVM) and Conditional Random Fields (CRF) are used to model the SRL problem for arguments not labeled in the suffix analysis phase. The proposed method is intended to reduce the range of argument instances to which machine learning approaches must be applied, which would otherwise result in uncertain and inaccurate role labeling. In experiments on 15,224 arguments, we obtain an F1-score of approximately 83.24%, about 4.85 percentage points higher than the state-of-the-art Korean SRL research.
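The suffix-analysis phase could be sketched as a josa-to-role lookup that labels what it can and leaves the rest for the SVM/CRF models; the particular josa-to-role table below is a simplified illustrative assumption, not the paper's actual mapping:

```python
# Simplified mapping from romanized case suffixes (josa) to candidate
# semantic roles; a real table would be larger and context-sensitive.
JOSA_ROLE = {
    "i": "ARG0",        # -i/-ga: nominative, often the agent
    "ga": "ARG0",
    "eul": "ARG1",      # -eul/-reul: accusative, often the theme
    "reul": "ARG1",
    "eseo": "ARGM-LOC", # locative
}

def label_by_josa(arguments):
    """Label arguments whose josa is in the table.

    Returns (labeled, remaining); `remaining` holds the arguments that
    would be passed on to the machine learning phase.
    """
    labeled, remaining = [], []
    for word, josa in arguments:
        role = JOSA_ROLE.get(josa)
        if role is not None:
            labeled.append((word, role))
        else:
            remaining.append((word, josa))
    return labeled, remaining
```

The point of the design is that the cheap, high-precision suffix rules shrink the candidate set before the statistically trained models run.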

General Relation Extraction Using Probabilistic Crossover (확률적 교차 연산을 이용한 보편적 관계 추출)

  • Je-Seung Lee;Jae-Hoon Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.371-380 / 2023
  • Relation extraction extracts relationships between named entities from text. Traditionally, relation extraction methods only extract relations between predetermined subject and object entities. In end-to-end relation extraction, however, all possible relations must be extracted by considering the positions of the subject and object for each pair of entities, so this approach uses time and resources inefficiently. To alleviate this problem, this paper proposes a method that sets directions based on the positions of the subject and object and extracts relations according to those directions. The proposed method uses existing relation extraction data to generate direction labels indicating the direction in which the subject points to the object in the sentence, adds entity position tokens and entity types to sentences to predict the directions using a pre-trained language model (KLUE-RoBERTa-base, RoBERTa-base), and generates representations of subject and object entities through a probabilistic crossover operation. We then use these representations to extract relations. Experimental results show that the proposed model performs about 3~4%p better than a method predicting integrated labels. In addition, when training the proposed model on Korean and English data, performance was 1.7%p higher in English than in Korean, most likely because of differences in data size and word order, and the parameter values producing the best performance differed between the languages. By reducing the number of direction cases to consider, the proposed model can reduce the waste of resources in end-to-end relation extraction.
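The direction-label and entity-marker preprocessing described in this abstract might be sketched as follows; the marker and label names are assumptions, and the probabilistic crossover operation itself is not shown:

```python
def direction_label(subj_span, obj_span):
    """'forward' if the subject appears before the object, else 'backward'."""
    return "forward" if subj_span[0] < obj_span[0] else "backward"

def make_example(tokens, subj_span, obj_span):
    """Attach entity position markers and a direction label to a sentence.

    Spans are half-open (start, end) token indices.
    """
    marked = list(tokens)
    # Insert markers starting from the rightmost span so earlier
    # insertions do not shift the indices still to be used.
    for (start, end), tag in sorted(
        [(subj_span, "SUBJ"), (obj_span, "OBJ")], reverse=True
    ):
        marked.insert(end, f"[/{tag}]")
        marked.insert(start, f"[{tag}]")
    return " ".join(marked), direction_label(subj_span, obj_span)
```

The marked sentence would be fed to the pre-trained encoder, and the direction label serves as the auxiliary prediction target that lets one classifier handle either entity order.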