Title/Summary/Keyword: Lexical Information


Lexical and Semantic Incongruities between the Lexicons of English and Korean

  • Lee, Yae-Sheik
    • Language and Information, v.5 no.2, pp.21-37, 2001
  • Pustejovsky (1995) rekindled debate on the dual problems of how to represent lexical meaning and what information is to be encoded in a lexicon. For natural language processing tasks such as machine translation, these are important issues. When a lexical-conceptual mismatch occurs in the translation of corresponding words from two different languages, the appropriate representation of their meanings is very important. This paper proposes a new formalism for representing lexical entries by first analysing observable mismatches in comparable pairs of nouns, verbs, and adjectives in English and Korean. Inherent mis-interpretations and mis-readings in each pair are identified. Then, concept theories such as those presented by Ganter and Wille (1996) and Priss (1998) are extended in order to reflect the cognitivist view that meaning resides in concept, and also to incorporate the propositions of the so-called 'multiple inheritance' system. An alternative to the formalisms of Pustejovsky (1995) and Pollard & Sag (1994) is then proposed. Finally, representative examples of lexical mismatches are analysed using the new model.


A Study of the Interface between Korean Sentence Parsing and Lexical Information (한국어 문장분석과 어휘정보의 연결에 관한 연구)

  • 최병진
    • Language and Information, v.4 no.2, pp.55-68, 2000
  • The efficiency and stability of an NLP system depends crucially on how its lexicon is organized. The lexicon ought to encode linguistic generalizations and the exceptions thereof. Nowadays many computational linguists tend to encode such lexical information in an inheritance hierarchy, and DATR is well suited to this purpose. In this research I construct a DATR lexicon in order to parse Korean sentences using QPATR, which is implemented on the basis of a unification-based grammar developed in Dusseldorf. In this paper I show the interface between a syntactic parser (QPATR) and the DATR formalism representing lexical information. The QPATR parser can extract lexical information from the DATR lexicon, which is organized hierarchically.

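The inheritance idea behind a DATR-style lexicon can be sketched in a few lines: a node states only its exceptional information and inherits everything else from a parent node. This is an illustrative simplification, not DATR itself (real DATR uses path-based equations); the classes, words, and endings below are invented.

```python
# Toy default-inheritance lexicon in the spirit of DATR (illustrative only).
class Node:
    def __init__(self, name, parent=None, **features):
        self.name = name
        self.parent = parent
        self.features = features  # only the local (exceptional) information

    def get(self, key):
        """Look up a feature, falling back along the parent chain (defaults)."""
        node = self
        while node is not None:
            if key in node.features:
                return node.features[key]
            node = node.parent
        raise KeyError(key)

# Generalization: verbs in this toy lexicon take the ending "-da" by default.
verb = Node("VERB", cat="verb", ending="-da")
# A regular entry states only its stem; an exceptional entry overrides the default.
regular = Node("gada", parent=verb, stem="ga-")
irregular = Node("meokda", parent=verb, stem="meok-", ending="-neunda")

print(regular.get("ending"))    # inherited from VERB
print(irregular.get("ending"))  # local override
```

A parser can then query the hierarchy for any entry and receive either the inherited generalization or the exception, which is exactly the division of labor the abstract describes.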

The Role of Pitch and Length in Spoken Word Recognition: Differences between Seoul and Daegu Dialects (말소리 단어 재인 시 높낮이와 장단의 역할: 서울 방언과 대구 방언의 비교)

  • Lee, Yoon-Hyoung;Pak, Hyen-Sou
    • Phonetics and Speech Sciences, v.1 no.2, pp.85-94, 2009
  • The purpose of this study was to see the effects of pitch and length patterns on spoken word recognition. In Experiment 1, a syllable monitoring task was used to see the effects of pitch and length on the pre-lexical level of spoken word recognition. For both Seoul dialect speakers and Daegu dialect speakers, pitch and length did not affect the syllable detection processes. This result implies that there is little effect of pitch and length in pre-lexical processing. In Experiment 2, a lexical decision task was used to see the effect of pitch and length on the lexical access level of spoken word recognition. In this experiment, word frequency (low and high) as well as pitch and length was manipulated. The results showed that pitch and length information did not play an important role for Seoul dialect speakers, but that it did affect lexical decision processing for Daegu dialect speakers. Pitch and length seem to affect lexical access during the word recognition process of Daegu dialect speakers.


A Study on the Utilization Plan of Lexical Resources for Disaster and Safety Information Management Based on Current Status Analysis (재난안전정보 관리를 위한 어휘자원 현황분석 및 활용방안)

  • Jeong, Him-Chan;Kim, Tae-Young;Kim, Yong;Oh, Hyo-Jung
    • Journal of the Korean Society for Information Management, v.34 no.2, pp.137-158, 2017
  • Disasters directly affect people's lives, health, and property. For an effective and rapid disaster response, a coordination process based on sharing and utilizing disaster information is an essential requirement. Disaster and safety control agencies produce and manage heterogeneous information, and they also develop and use word dictionaries individually. This is a major obstacle for practitioners retrieving and accessing disaster and safety information. To solve this problem, standardization of lexical resources related to disaster and safety is essential. In this paper, we conducted a current-status analysis of lexical resources in the disaster and safety domain. Consequently, we identified the characteristics of each lexical group, and we then proposed a utilization plan of lexical resources for disaster and safety information management.

Korean Lexical Disambiguation Based on Statistical Information (통계정보에 기반을 둔 한국어 어휘중의성해소)

  • 박하규;김영택
    • The Journal of Korean Institute of Communications and Information Sciences, v.19 no.2, pp.265-275, 1994
  • Lexical disambiguation is one of the most basic areas in natural language processing, underlying tasks such as speech recognition/synthesis, information retrieval, and corpus tagging. This paper describes a Korean lexical disambiguation mechanism in which disambiguation is performed on the basis of statistical information collected from corpora. In this mechanism, token tags corresponding to the results of morphological analysis are used instead of part-of-speech tags for the purpose of detailed disambiguation. The proposed lexical selection function shows considerably high accuracy, since lexical characteristics of Korean, such as the concordance of endings or postpositions, are well reflected in it. Two disambiguation methods, a unique selection method and a multiple selection method, are provided so that they can be chosen appropriately according to the application area.

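The two selection methods named in the abstract can be sketched as follows, assuming toy corpus counts over competing morphological analyses. The analyses, counts, and threshold are invented for illustration; the paper's actual selection function additionally exploits Korean-specific cues such as ending/postposition concordance.

```python
# Sketch of corpus-statistics-based lexical disambiguation with a unique
# and a multiple selection method (all data here is hypothetical).
from collections import Counter

# Hypothetical counts of analyses for an ambiguous surface form.
counts = {
    "가는": Counter({"ga/V+neun/END": 70, "gal/V+neun/END": 25, "ganeun/N": 5}),
}

def unique_select(form):
    """Unique selection: return only the single most frequent analysis."""
    return counts[form].most_common(1)[0][0]

def multiple_select(form, threshold=0.2):
    """Multiple selection: return every analysis whose relative frequency
    reaches the threshold, leaving the final choice to the application."""
    total = sum(counts[form].values())
    return [a for a, c in counts[form].items() if c / total >= threshold]
```

Unique selection suits applications that need a single answer (e.g. tagging), while multiple selection suits ones that can rank candidates later (e.g. retrieval), matching the abstract's point that the method should be chosen per application area.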

Spam-mail Filtering based on Lexical Information and Thesaurus (어휘정보와 시소러스에 기반한 스팸메일 필터링)

  • Kang Shin-Jae;Kim Jong-Wan
    • Journal of Korea Society of Industrial Information Systems, v.11 no.1, pp.13-20, 2006
  • In this paper, we constructed a spam-mail filtering system based on lexical and conceptual information. There are two kinds of information that can distinguish spam mail from legitimate mail: the definite information comprises the mail sender's information, URLs, and a certain spam keyword list, while the less definite information comprises the word lists and concept codes extracted from the mail body. We first classified the spam mail by using the definite information, and then used the less definite information. We used the lexical information and concept codes contained in the email body for SVM learning. According to our results, spam precision increased when more lexical information was used as features, and spam recall increased when concept codes were included in the features as well.

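The two-stage decision described in the abstract can be sketched as follows: the definite information acts as a set of hard rules, and the less definite information feeds a learned classifier. A hand-weighted linear scorer stands in here for the paper's SVM, and all senders, keywords, weights, and concept codes are invented.

```python
# Sketch of two-phase spam filtering: hard rules first, then a scored
# decision over lexical features and concept codes (toy stand-in for an SVM).

SPAM_SENDERS = {"ads@spam.example"}            # definite: known spam senders
SPAM_KEYWORDS = {"viagra", "lottery"}          # definite: spam keyword list
FEATURE_WEIGHTS = {"free": 0.9, "winner": 1.2, "meeting": -1.0}  # lexical features
CONCEPT_WEIGHTS = {"COMMERCE": 0.8, "WORK": -0.7}                # concept codes

def classify(sender, body_words, concept_codes, threshold=1.0):
    # Phase 1: definite information gives an immediate verdict.
    if sender in SPAM_SENDERS or SPAM_KEYWORDS & set(body_words):
        return "spam"
    # Phase 2: less definite information, scored linearly (SVM stand-in).
    score = sum(FEATURE_WEIGHTS.get(w, 0.0) for w in body_words)
    score += sum(CONCEPT_WEIGHTS.get(c, 0.0) for c in concept_codes)
    return "spam" if score >= threshold else "legitimate"
```

The concept codes act as a thesaurus-level generalization over individual words, which is why the abstract reports that adding them raises recall: mails using unseen words can still match a known concept.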

The Extraction of Head words in Definition for Construction of a Semi-automatic Lexical-semantic Network of Verbs (동사 어휘의미망의 반자동 구축을 위한 사전정의문의 중심어 추출)

  • Kim Hae-Gyung;Yoon Ae-Sun
    • Language and Information, v.10 no.1, pp.47-69, 2006
  • Recently, there has been a surge of interest concerning the construction and utilization of a Korean thesaurus. In this paper, a semi-automatic method for generating a lexical-semantic network of Korean '-ha' verbs is presented through an analysis of the lexical definitions of these verbs. Initially, through the use of several tools that can filter out and coordinate lexical data, word-definition pairs were prepared for treatment in a subsequent step. While inspecting the various definitions of each verb, we extracted and coordinated the head words from the sentences that constitute the definition of each word. These head words are regarded as the main conceptual words that represent the sense of the verb in question. Using these head words and related information, this paper shows that a thesaurus can be created without difficulty in a semi-automatic fashion.

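As a rough illustration of head-word extraction from a definition, consider a toy English-style verb definition such as "to move quickly on foot": the head word is the first predicate after "to". This is a drastic simplification of the paper's procedure, which works on Korean dictionary definitions with dedicated filtering and coordination tools; the heuristic and examples are invented.

```python
# Toy head-word extraction from a verb definition (illustrative only).
def head_word(definition):
    """Return the first content word after a leading 'to', taken here
    as the head (main conceptual word) of the definition."""
    tokens = [t.strip(".,;") for t in definition.lower().split()]
    if tokens and tokens[0] == "to":
        tokens = tokens[1:]
    return tokens[0] if tokens else None

print(head_word("to move quickly on foot"))  # head of a definition of "run"
```

Linking each verb to the head word of its definition yields exactly the kind of hypernym-like edges from which a lexical-semantic network can be assembled semi-automatically.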

Analyzing the correlation of Spam Recall and Thesaurus

  • Kang, Sin-Jae;Kim, Jong-Wan
    • Proceedings of the Korea Society of Information Technology Applications Conference, 2005.11a, pp.21-25, 2005
  • In this paper, we constructed a two-phase spam-mail filtering system based on lexical and conceptual information. There are two kinds of information that can distinguish spam mail from legitimate mail: the definite information comprises the mail sender's information, URLs, and a certain spam list, while the less definite information comprises the word list and concept codes extracted from the mail body. We first classified the spam mail by using the definite information, and then used the less definite information. We used the lexical information and concept codes contained in the email body for SVM learning in the second phase. According to our results, spam precision increased when more lexical information was used as features, and spam recall increased when concept codes were included in the features as well.


Corpus-Based Ambiguity-Driven Learning of Context- Dependent Lexical Rules for Part-of-Speech Tagging (품사태킹을 위한 어휘문맥 의존규칙의 말뭉치기반 중의성주도 학습)

  • 이상주;류원호;김진동;임해창
    • Journal of KIISE: Software and Applications, v.26 no.1, pp.178-178, 1999
  • Most stochastic taggers cannot resolve some morphological ambiguities that can be resolved only by referring to lexical contexts, because they use only contextual probabilities based on tag n-grams and lexical probabilities. Existing lexical rules are effective for resolving such ambiguities because they can refer to lexical contexts. However, they have two limitations. One is that human experts tend to make erroneous rules because the rules are deterministic. The other is that acquiring the rules is hard and time-consuming because they must be acquired manually. In this paper, we propose context-dependent lexical rules, which are lexical rules based on the statistics of a tagged corpus, and an ambiguity-driven learning method, which automatically acquires the proposed rules from a tagged corpus. By using the proposed rules, the proposed tagger can partially annotate an unseen corpus with high accuracy, because it is a kind of memorizing tagger that can annotate a training corpus with 100% accuracy. The proposed tagger is therefore useful for improving the accuracy of a stochastic tagger, and it is also effective for detecting and correcting tagging errors in a manually tagged corpus. Moreover, the experimental results show that the proposed method is also effective for English part-of-speech tagging.
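A minimal sketch of the idea, assuming rules keyed on a (previous word, current word) lexical context: the rules are derived from corpus statistics rather than written by a human expert, and the resulting memorizing tagger annotates only contexts it has seen, leaving the rest unannotated. All words and tags below are invented.

```python
# Sketch of learning context-dependent lexical rules from a tagged corpus
# and applying them as a partial, memorizing tagger (toy data).
from collections import defaultdict, Counter

def learn_rules(tagged_sentences):
    """Collect tag statistics keyed on the lexical context (prev word, word)
    and keep the majority tag per context as a rule."""
    stats = defaultdict(Counter)
    for sent in tagged_sentences:
        prev = "<s>"
        for word, tag in sent:
            stats[(prev, word)][tag] += 1
            prev = word
    return {ctx: c.most_common(1)[0][0] for ctx, c in stats.items()}

# "lead" is ambiguous between NN and VB; tag n-grams alone may miss this,
# but the lexical context ("the" vs "to") resolves it.
corpus = [
    [("the", "DT"), ("lead", "NN")],
    [("to", "TO"), ("lead", "VB")],
]
rules = learn_rules(corpus)

def tag(sentence, rules):
    """Memorizing tagger: emit a tag only when the context was seen."""
    out, prev = [], "<s>"
    for word in sentence:
        out.append(rules.get((prev, word)))  # None marks 'not annotated'
        prev = word
    return out
```

The `None` entries are where a stochastic tagger would take over, which is how such rules can be layered on top of an n-gram model to raise its accuracy.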

A Constraint on Lexical Transfer: Implications for Computer-Assisted Translation(CAT)

  • Park, Kabyong
    • Journal of the Korea Society of Computer and Information, v.21 no.11, pp.9-16, 2016
  • The central goal of the current paper is to investigate lexical transfer between Korean and English, to identify its rule-governed behavior, and to provide implications for the development of computer-assisted translation (CAT) software for the two languages. It will be shown that Sankoff and Poplack's Free Morpheme Constraint cannot account for the full range of data. A constraint is proposed that a set of case-assigners, such as verbs, INFL, prepositions, and the possessive marker, may not undergo lexical transfer. The translation software is also expected to be equipped with the proposed claim that English verbs are actually borrowed as nouns or as defective verbs to escape the direct attachment of inflectional morphemes.
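For a CAT system, the proposed constraint amounts to a simple licensing check before a lexical transfer is accepted, plus a repair strategy for verbs. The category labels and examples below are invented stand-ins for the paper's analysis.

```python
# Toy implementation of the proposed constraint: case-assigners
# (verbs, INFL, prepositions, the possessive marker) may not transfer.

CASE_ASSIGNERS = {"V", "INFL", "P", "POSS"}

def may_transfer(word, category):
    """A borrowed item is licensed only if it is not a case-assigner."""
    return category not in CASE_ASSIGNERS

def borrow_verb_as_noun(english_verb):
    """Repair strategy: an English verb is borrowed as a noun and paired
    with a Korean light verb, escaping direct inflectional attachment
    (schematically, 'download' -> 'download-hada')."""
    return english_verb + "-hada"
```

Under this check, a noun like "computer" transfers freely, while a bare English verb is rejected and must instead be re-packaged by `borrow_verb_as_noun`.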