• Title/Summary/Keyword: 수정규칙 (correction rules)

Search results: 274

A study on the ambiguous adnominal constructions in product documentation (제품 설명서에 나타나는 중의적 명사 수식 구문 연구 - 통제 언어의 관점에서-)

  • Park, Arum;Ji, Eun-Byul;Hong, Munpyo / Annual Conference on Human and Language Technology / 2012.10a / pp.23-28 / 2012
  • To use a machine translation system effectively as a translation-support tool, it is important either to write the source text in a form suitable for machine translation or to pre-process source texts that have already been written. The ultimate goal of this research is to have product-documentation writers compose source texts by applying controlled-language rules through a controlled-language checker. As an intermediate step, this paper aims to identify how problematic constructions in product documentation affect translation quality. The object of study is ambiguous adnominal (noun-modifying) constructions, one of the factors in product documentation that degrade machine-translation performance. Because these constructions introduce structural ambiguity at the analysis stage and lower the accuracy of Korean parsing, they can ultimately degrade translation quality. To verify this, we first analyzed product-documentation data and classified the noun-modifying constructions that negatively affect machine-translation output into four types: (Type 1) adnominal (genitive) noun phrase + coordinated nouns; (Type 2) noun phrase modified by the adnominal form of a verb + coordinated nouns; (Type 3) repetition of the genitive particle '의'; (Type 4) misuse of coordinating connectives. For each type, we describe the problems that can arise at the Korean analysis stage and present controlled-language rules addressing them. After revising the ambiguous noun-modifying constructions according to the controlled-language rules, we confirmed that the translations of the revised Korean sentences reflected the writer's intent better than the translations of the original sentences.


Design of the Fuzzy Traffic Controller by the Input-Output Data Clustering (입출력 데이터 클러스터링에 의한 퍼지 교통 제어기의 설계)

  • 지연상;최완규;이성주 / Journal of the Korean Institute of Intelligent Systems / v.11 no.3 / pp.241-245 / 2001
  • Existing fuzzy traffic controllers construct their rule base either from intuitive knowledge and experience or from a standard rule base, but a rule base constructed in this way has difficulty representing the control knowledge of the expert and the operator exactly and in detail. In this paper, we therefore propose a method that improves fuzzy traffic control performance by designing a fuzzy traffic controller that represents the control knowledge more exactly. The proposed method modifies the position and shape of the fuzzy membership functions based on input-output data clustering so that the controller can represent the control knowledge more exactly. Our method uses rough control knowledge based on intuition and experience as the evaluation function for clustering the input-output data. The fuzzy traffic controller designed by our method represented the control knowledge of the expert and the operator more exactly, and it outperformed the existing controller in terms of the number of passed vehicles and the wasted green time.

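The membership-function tuning described in the abstract above operates on standard fuzzy sets. As a minimal sketch (not the paper's controller; the parameters and the "medium traffic" interpretation are hypothetical), a triangular membership function whose position and shape are fixed by three points can be written as:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership function: feet at a and c, peak at b.

    Clustering-based tuning, as in the paper, would adjust a, b, c;
    here they are fixed, illustrative values (requires a < b < c).
    """
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Example: a hypothetical "medium queue length" set peaking at 5 vehicles.
degree = tri_membership(2.5, 0.0, 5.0, 10.0)  # 0.5
```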

Automatic Generation of Information Extraction Rules Through User-interface Agents (사용자 인터페이스 에이전트를 통한 정보추출 규칙의 자동 생성)

  • 김용기;양재영;최중민 / Journal of KIISE:Software and Applications / v.31 no.4 / pp.447-456 / 2004
  • Information extraction is a process of recognizing and fetching particular information fragments from a document. In order to extract information uniformly from many heterogeneous information sources, it is necessary to produce a set of information extraction rules, called a wrapper, for each source. Previous methods of information extraction can be categorized into manual and automatic wrapper generation. In the manual method, since the wrapper is generated by a human expert who analyzes documents and writes rules, its precision is very high, but the method has problems with scalability and efficiency. In the automatic method, an agent program analyzes a set of example documents and produces a wrapper through learning. Although it is very scalable, this method has difficulty generating correct rules, and the generated rules are sometimes unreliable. This paper combines the manual and automatic methods by proposing a new method of learning information extraction rules. We adopt a supervised learning scheme in which a user-interface agent obtains information from the user about what to extract from a document, and XML-based information extraction rules are then generated by learning from these inputs. The interface agent is used not only to generate new extraction rules but also to modify and extend existing ones, enhancing the precision and recall of the extraction system. We have run a series of experiments to test the system, and the results are very promising. We hope that our system can be applied to practical systems such as information-mediator agents.
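The manual/automatic trade-off described above is easiest to see in the simplest wrapper form. The sketch below is not the paper's XML rule format; it only illustrates the underlying delimiter idea with a hypothetical HTML snippet: a rule learned from one user-labeled example records the text to the left and right of the target, then extracts the corresponding fragment from a similarly formatted page.

```python
def learn_rule(doc, target):
    """From one labeled example, record up to 7 characters of context
    immediately left and right of the target (a toy wrapper rule)."""
    i = doc.index(target)
    left = doc[max(0, i - 7):i]
    right = doc[i + len(target):i + len(target) + 7]
    return left, right

def extract(doc, rule):
    """Apply the learned delimiters to a new document of the same layout."""
    left, right = rule
    start = doc.index(left) + len(left)
    end = doc.index(right, start)
    return doc[start:end]

# Learn from one user-highlighted example, then reuse on a similar page.
rule = learn_rule("<b>Price:</b> $12<br>", "$12")
value = extract("<b>Price:</b> $45<br>", rule)  # "$45"
```

A real wrapper-induction system generalizes over many examples and handles missing delimiters; this sketch assumes a single example and an identical page layout.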

Correction for Hangul Normalization in Unicode (유니코드 환경에서의 올바른 한글 정규화를 위한 수정 방안)

  • Ahn, Dae-Hyuk;Park, Young-Bae / Journal of KIISE:Software and Applications / v.34 no.2 / pp.169-177 / 2007
  • Hangul text normalization in the current Unicode standard produces wrong Hangul syllables when precomposed modern Hangul syllables are used together with old Hangul composed from conjoining Hangul jamo and compatibility Hangul jamo. The problem arises because the Unicode normalization forms allow an incorrect normalization of compatibility Hangul jamo and Hangul symbols, and because the Unicode Hangul composition rules permit conjoining Hangul jamo to be mixed with precomposed Hangul syllables. This stems from a lack of consideration of old Hangul and an insufficient understanding of Hangul code processing when the Unicode normalization forms were specified. In this paper, we therefore study Hangul codes in the Unicode environment, specifically the normalization problems that arise today in the Web, XML, and IDN. We also propose modifications to the Hangul normalization methods and Hangul composition rules for correct Hangul normalization in Unicode.
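The composition behaviors the paper examines can be reproduced directly with Python's `unicodedata` module. This sketch only demonstrates the standard Unicode behavior that the paper criticizes, not the paper's proposed corrections:

```python
import unicodedata

# Conjoining jamo compose into a precomposed syllable under NFC:
# U+1100 (choseong kiyeok) + U+1161 (jungseong a) -> U+AC00 "가".
lv = unicodedata.normalize("NFC", "\u1100\u1161")

# A precomposed LV syllable followed by a conjoining trailing jamo also
# composes: U+AC00 "가" + U+11A8 (jongseong kiyeok) -> U+AC01 "각".
# This is the mixing of precomposed syllables with conjoining jamo that
# the paper discusses.
lvt = unicodedata.normalize("NFC", "\uAC00\u11A8")

# Compatibility jamo (U+3131, U+314F) do NOT compose under NFC; only NFKC
# first maps them to conjoining jamo and then composes them.
compat_nfc = unicodedata.normalize("NFC", "\u3131\u314F")
compat_nfkc = unicodedata.normalize("NFKC", "\u3131\u314F")
```

Running this shows `lv == "가"`, `lvt == "각"`, `compat_nfc` unchanged, and `compat_nfkc == "가"`, i.e. the same byte sequences normalize differently depending on which jamo block they come from.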

An Automatic Post-processing Method for Speech Recognition using CRFs and TBL (CRFs와 TBL을 이용한 자동화된 음성인식 후처리 방법)

  • Seon, Choong-Nyoung;Jeong, Hyoung-Il;Seo, Jung-Yun / Journal of KIISE:Software and Applications / v.37 no.9 / pp.706-711 / 2010
  • In applications with a human speech interface, reducing the recognition error rate is one of the main research issues. Many previous studies attempted to correct errors using post-processing that depends on a manually constructed corpus and correction patterns. We propose an automatically learnable post-processing method that is independent of the characteristics of both the domain and the speech recognizer. We divide the entire post-processing task into two steps: error detection and error correction. We treat the error detection step as a classification problem, to which we apply a conditional random fields (CRFs) classifier, and we apply transformation-based learning (TBL) to the error correction step. Our experimental results indicate that the proposed method corrects a speech recognizer's insertion, deletion, and substitution errors by 25.85%, 3.57%, and 7.42%, respectively.

An Experimental Study on the Application of NTCIP to Korean Traffic Signal Control System (교통신호제어시스템 NTCIP 통신규약 적용성 실험 연구)

  • Go, Gwang-Yong;Jeong, Jun-Ha;Lee, Seung-Hwan;An, Gye-Hyeong / Journal of Korean Society of Transportation / v.24 no.5 s.91 / pp.19-33 / 2006
  • This paper presents the results of an experimental study on the application of the NTCIP protocol to the Korean traffic signal control system. For this study, the communication protocol of the existing traffic signal control system was adjusted to meet the NTCIP standard. A management information base for the Korean real-time traffic signal control system, an OER message library, traffic control center management software supporting the SNMP/SFMP protocols, and agent software for local controllers were developed during the study. An applicability test of the adjusted system was then performed. Fifty-eight percent of communication packets were lost at a communication speed of 2,400 bps, which made operation impossible, while tests at 4,800 bps and 9,600 bps caused no problems. In conclusion, to apply the NTCIP standard to the domestic real-time traffic control system, communication environments need to be upgraded to 4,800 bps or higher.

Rule-based Speech Recognition Error Correction for Mobile Environment (모바일 환경을 고려한 규칙기반 음성인식 오류교정)

  • Kim, Jin-Hyung;Park, So-Young / Journal of the Korea Society of Computer and Information / v.17 no.10 / pp.25-33 / 2012
  • In this paper, we propose a rule-based model to correct errors in speech recognition results in the mobile device environment. The proposed model accounts for the limited resources of mobile devices, such as processing time and memory, as follows. To minimize error correction processing time, the model omits processing steps such as morphological analysis and syllable composition and decomposition. The model also uses the longest-match rule selection method to generate one correction candidate per point where an error is assumed to occur. To conserve memory, the model uses neither an Eojeol dictionary nor a morphological analyzer, and stores a combined rule list without any classification. For ease of modification and maintenance, the error correction rules are extracted automatically from a training corpus. Experimental results show that the proposed model improves precision by 5.27% and recall by 5.60%, measured on the Eojeol unit, for the speech recognition results.
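The longest-match rule selection step described above can be sketched as follows; the rules shown are hypothetical English examples, not the corpus-extracted Korean rules the paper uses:

```python
def correct(text, rules):
    """Left-to-right scan; at each position apply the longest matching rule,
    producing exactly one correction candidate per assumed error point."""
    out, i = [], 0
    while i < len(text):
        best = None
        for wrong, right in rules:
            if text.startswith(wrong, i) and (best is None or len(wrong) > len(best[0])):
                best = (wrong, right)
        if best:
            out.append(best[1])
            i += len(best[0])
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

# Hypothetical error->correction pairs; "teh re" wins over "teh" by length,
# so only one candidate is generated at that point.
rules = [("teh", "the"), ("teh re", "there")]
fixed = correct("teh re cat sat on teh mat", rules)  # "there cat sat on the mat"
```

Storing the rules as one flat list with no morphological analysis, as here, mirrors the paper's design choice of trading some linguistic generality for speed and a small memory footprint.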

Allocating CO2 Emission by Sector: A Claims Problem Approach (Claims problem을 활용한 부문별 온실가스 감축목표 분석)

  • Yunji Her / Environmental and Resource Economics Review / v.31 no.4 / pp.733-753 / 2022
  • The Korean government established its Nationally Determined Contribution (NDC) in 2015. After a revision in 2019, the government updated it with an enhanced target at the end of last year. When the NDC is set, emission targets for each sector, such as power generation, industry, and buildings, are also set. This paper analyzes the emission target of each sector by applying a claims problem, or bankruptcy problem, developed from cooperative game theory. Five allocation rules from the claims-problem literature are introduced and the properties of each rule are considered axiomatically. This study applies the five rules to allocating carbon emissions by sector under the NDC target and compares the results with the announced government targets. For the power generation sector, the government target is set lower than the emissions allocated by the five rules. On the other hand, the government target for the industry sector is higher than the results of the five rules. In the other sectors, the government's targets are similar to the results of the rule that allocates emissions in proportion to each claim.
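Two of the standard claims-problem rules are easy to state concretely. Below is a minimal sketch, with made-up numbers rather than the paper's NDC data, of the proportional rule (the one the abstract says tracks most government targets) and the constrained equal awards (CEA) rule:

```python
def proportional(claims, endowment):
    """Proportional rule: each claimant gets a share proportional to its claim."""
    total = sum(claims)
    return [endowment * c / total for c in claims]

def cea(claims, endowment):
    """Constrained equal awards: everyone receives min(claim, lam), with lam
    chosen by bisection so that the awards exactly exhaust the endowment."""
    lo, hi = 0.0, max(claims)
    for _ in range(200):
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < endowment:
            lo = lam
        else:
            hi = lam
    return [min(c, hi) for c in claims]

# Illustrative sectoral "claims" (e.g. baseline emissions) vs. a smaller cap.
awards_prop = proportional([100.0, 200.0, 300.0], 300.0)  # [50.0, 100.0, 150.0]
awards_cea = cea([100.0, 200.0, 300.0], 300.0)            # ~[100.0, 100.0, 100.0]
```

The contrast shows why the rule choice matters: the proportional rule scales every sector down equally in relative terms, while CEA protects small claimants at the expense of large ones.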

Generalization of error decision rules in a grammar checker using Korean WordNet, KorLex (명사 어휘의미망을 활용한 문법 검사기의 문맥 오류 결정 규칙 일반화)

  • So, Gil-Ja;Lee, Seung-Hee;Kwon, Hyuk-Chul / The KIPS Transactions:PartB / v.18B no.6 / pp.405-414 / 2011
  • Korean grammar checkers typically detect context-dependent errors by employing heuristic rules that are manually formulated by a language expert, and these rules are appended each time a new error pattern is detected. Such grammar checkers are therefore not consistent. To resolve this shortcoming, we propose a new method for generalizing the error decision rules that detect these errors. For this purpose, we use the existing thesaurus KorLex, the Korean version of Princeton WordNet. KorLex has hierarchical word senses for nouns, but does not contain any information about the relationships between cases in a sentence. Using the Tree Cut Model and the MDL (minimum description length) principle from information theory, we extract noun classes from KorLex and generalize error decision rules over these noun classes. To verify the accuracy of the new method, we extracted from a large corpus the nouns used as objects of four frequently confused predicates, and then extracted noun classes from these nouns. We found that the number of error decision rules generalized over these noun classes decreased to about 64.8%. In conclusion, the precision of our grammar checker exceeds that of conventional ones by 6.2%.
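The effect of generalizing rules over noun classes rather than individual nouns can be sketched with a toy hypernym hierarchy. The words, classes, and the predicate shown are hypothetical illustrations, not KorLex entries or the paper's four predicates:

```python
# Toy hypernym chain standing in for a WordNet/KorLex-style noun hierarchy.
hypernym = {"apple": "fruit", "pear": "fruit", "fruit": "food", "hammer": "tool"}

# Generalized error-decision rule: the verb "eat" accepts objects in the
# "food" class, instead of listing apple, pear, ... individually.
allowed_object_classes = {"eat": {"food"}}

def is_valid_object(verb, noun):
    """Walk the noun's hypernym chain; accept if any ancestor class
    (or the noun itself) is licensed for this verb."""
    cls = noun
    while cls is not None:
        if cls in allowed_object_classes.get(verb, set()):
            return True
        cls = hypernym.get(cls)
    return False

is_valid_object("eat", "apple")   # True: apple -> fruit -> food
is_valid_object("eat", "hammer")  # False: hammer -> tool -> (no match)
```

One class-level rule here covers every noun under "food", which is the same compression effect the paper reports when its rule count drops after generalization.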

Analysis of Predicate/Arguments Syntactico-Semantic Relation for the Extension of a Korean Grammar Checker (한국어 문법 검사기의 기능 확장을 위한 서술어와 논항의 통사.의미적 관계 분석)

  • Nam, Hyeon-Suk;Son, Hun-Seok;Choi, Seong-Pil;Park, Yong-Uk;So, Gil-Ja;Gwon, Hyeok-Cheol / Annual Conference on Human and Language Technology / 1997.10a / pp.403-408 / 1997
  • Checking and correcting semantics and style, which reflect the internal properties of language, is more difficult and complex than simple spelling checking and correction, which concerns the morphological side of language. The noun-classification method using semantic information proposed in this paper is one way to improve the detection and correction of semantic and stylistic errors. This paper describes the process of building semantic rules by applying a noun semantic-classification method to the analysis of syntactic-semantic relations between predicates and their arguments, in order to correct predicates whose usage is wrong in context. To classify the semantic information of argument nouns systematically, we apply thesaurus techniques and a semantic network. By checking and correcting semantic and stylistic errors according to the syntactic-semantic relations between predicates and arguments, we generalize the rules being built and simplify existing rules, thereby extending the functionality of the Korean grammar checker.
