• Title/Summary/Keyword: Lexical Information

Automatic Lexical Knowledge Acquisition Using Morpheme Information and Clustering Techniques (어절 내 형태소 출현 정보와 클러스터링 기법을 이용한 어휘지식 자동 획득)

  • Yu, Won-Hee; Suh, Tae-Won; Lim, Heui-Seok
    • The Journal of Korean Association of Computer Education / v.13 no.1 / pp.65-73 / 2010
  • This study proposes an unsupervised lexical knowledge acquisition model that overcomes the limitations of building lexical knowledge by hand, as in supervised approaches, for natural language processing research. The proposed model acquires lexical knowledge automatically from input lexical entries through a pipeline of vectorization, clustering, and lexical knowledge extraction. During acquisition, the model shows how the amount and the characteristics of the acquired lexical knowledge vary with parameter settings. The experimental results suggest that a machine-readable dictionary can be built automatically, since the number of lexical-class clusters was observed to remain constant, and that a lexical dictionary containing left- and right-morphosyntactic information reflects the characteristics of Korean.

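To make the pipeline concrete, here is a minimal Python sketch of the vectorize-cluster-acquire loop described in the abstract above. The bag-of-morphemes features, the KMeans clusterer, and the toy eojeol data are illustrative assumptions, not the paper's exact design.

```python
# A toy sketch of the vectorize -> cluster -> acquire pipeline described above.
# The morpheme-occurrence features and the KMeans choice are illustrative
# assumptions, not the paper's exact method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Hypothetical eojeols, pre-segmented into morphemes (space-separated).
eojeols = ["먹 었 다", "먹 는 다", "갔 다", "가 는 중", "학교 에", "집 에"]

# Vectorize: each eojeol becomes a bag-of-morphemes vector.
vectorizer = CountVectorizer(analyzer=lambda s: s.split())
X = vectorizer.fit_transform(eojeols)

# Cluster: each cluster is treated as one candidate lexical class.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Acquire: collect cluster members as machine-readable dictionary entries.
lexicon = {}
for eojeol, label in zip(eojeols, km.labels_):
    lexicon.setdefault(label, []).append(eojeol)
print(lexicon)
```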

Research on Keyword-Overlap Similarity Algorithm Optimization in Short English Text Based on Lexical Chunk Theory

  • Na Li; Cheng Li; Honglie Zhang
    • Journal of Information Processing Systems / v.19 no.5 / pp.631-640 / 2023
  • Short-text similarity calculation is one of the hot issues in natural language processing research. Conventional keyword-overlap similarity algorithms consider only lexical item information and neglect the effect of word order. Some optimized variants incorporate word order, but their weights are hard to determine. This paper proposes a short English text similarity algorithm based on lexical chunk theory (LC-SETSA), which for the first time introduces lexical chunk theory from cognitive psychology into short English text similarity calculation. Lexical chunks are used to segment short English texts; the segmentation results preserve the semantic connotation and the fixed word order of the chunks, and the overlap similarity of the chunks is then calculated accordingly. Finally, comparative experiments show that the proposed algorithm is feasible, stable, and effective to a large extent.
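
The chunk-based overlap can be sketched as follows. The chunk inventory, the greedy longest-match segmentation, and the Dice-style overlap score are assumptions for illustration; the paper's actual segmentation and weighting may differ.

```python
# A minimal sketch of chunk-based overlap similarity. The chunk inventory,
# the greedy longest-match segmentation, and the Dice-style score are
# illustrative assumptions, not the paper's exact formulation.
CHUNKS = {"as a result", "in terms of", "take part in"}  # hypothetical inventory

def segment(text: str) -> list[str]:
    """Greedy longest-match segmentation into chunks, falling back to words."""
    words, out, i = text.lower().split(), [], 0
    while i < len(words):
        for n in (4, 3, 2):  # try longer candidate chunks first
            cand = " ".join(words[i:i + n])
            if cand in CHUNKS:
                out.append(cand)
                i += n
                break
        else:
            out.append(words[i])
            i += 1
    return out

def overlap_similarity(a: str, b: str) -> float:
    """Dice-style overlap between the chunk sets of two short texts."""
    sa, sb = set(segment(a)), set(segment(b))
    return 2 * len(sa & sb) / (len(sa) + len(sb)) if sa or sb else 0.0

print(overlap_similarity("students take part in the event",
                         "many students take part in sports"))
```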

Lexical Semantic Information and Pitch Accent in English (영어 어휘 의미 정보와 피치 액센트)

  • Jeon, Yoon-Shil; Kim, Kee-Ho; Lee, Yong-Jae
    • Speech Sciences / v.10 no.3 / pp.187-209 / 2003
  • In this paper, we examine whether the lexical information of a verb and its noun object affects the pitch accent patterns of verb phrase focus. Three types of verb-object combinations with different semantic weights are discussed: verbs with optional direct objects, objects with greater semantic weight relative to their verbs, and verbs and objects of equal semantic weight. Argument-structure-based work notes that the pitch accent location in a focused phrase is closely related to argument structure and contextual information. For example, it has been argued that contextually new noun objects receive accent while given noun objects do not. Contrary to nouns, verbs can be accented or not in verb phrase focus regardless of whether they carry given or new information (Selkirk 1984, 1992). However, the production experiment in this paper shows that the accenting of verbs is not fully optional but is influenced by the lexical semantic information of the verbs. The accenting of noun objects with given information is possible, and the deaccenting of new noun objects also occurs, depending on the lexical information of the noun objects. The results demonstrate that, in addition to argument structure and information conveyed by context sentences, the lexical semantic information of words influences the pitch accent location in a focused phrase.

Linear Precedence in Morphosyntactic and Semantic Processes in Korean Sentential Processing as Revealed by Event-related Potential

  • Kim, Choong-Myung
    • International Journal of Contents / v.10 no.4 / pp.30-37 / 2014
  • The current study examined the temporal and spatial activation sequences related to morphosyntactic, semantic, and orthographic-lexical sentences, focusing on the morphological-orthographic and lexical-semantic deviation processes in Korean language processing. Event-related potentials (ERPs) of 15 healthy students were recorded to explore the processing of head-final critical words in a sentential plausibility task. Specifically, it was examined whether the ERP pattern for orthographic-lexical violations shows linear precedence over other processes, or whether there is additivity across combined processing components. Morphosyntactic violations elicited a fronto-central LAN followed by a P600, while semantic violations elicited an N400, as expected. P600 activation was distributed over the left frontal and central sites, while the N400 appeared even at frontal sites beyond the centro-parietal areas. Most importantly, the orthographic-lexical violation process, revealed by an earlier N2 with fronto-central activity, was shown to be a complex of morphological and semantic functions arising from the same critical word. The present study suggests that morphological deviation and its lexical-semantic processing take linear precedence, based on the immediate availability of lexical information, followed by sentential semantics. Finally, late syntactic integration processes were completed, showing different topographic activation in order of the importance of ongoing sentential information.

A Structural Analysis of Dictionary Text for the Construction of Lexical Data Base (어휘정보구축을 위한 사전텍스트의 구조분석 및 변환)

  • 최병진
    • Language and Information / v.6 no.2 / pp.33-55 / 2002
  • This research aims at transforming the definition text of an English-English-Korean Dictionary (EEKD), which is encoded in EST files for publishing purposes, into a structured format for a Lexical Data Base (LDB). Constructing an LDB is very time-consuming and expensive work. In order to save time and effort in building new lexical information, the present study tries to extract useful linguistic information from an existing printed dictionary. This paper describes the process of extracting and structuring lexical information from a printed dictionary (EEKD) as a lexical resource. The extracted information is represented in XML format, which can be transformed into other representations for different application requirements.

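As a rough illustration of the transformation the paper describes, the sketch below parses a flat definition line into XML with Python's standard library. The pipe-delimited input layout is a hypothetical stand-in for the EEKD source format, not the actual EST encoding.

```python
# A toy sketch of turning a flat dictionary definition line into structured XML.
# The input layout (headword | POS | English gloss | Korean gloss) is a
# hypothetical stand-in for the EEKD source format.
import xml.etree.ElementTree as ET

def entry_to_xml(line: str) -> ET.Element:
    headword, pos, gloss_en, gloss_ko = [f.strip() for f in line.split("|")]
    entry = ET.Element("entry")
    ET.SubElement(entry, "headword").text = headword
    ET.SubElement(entry, "pos").text = pos
    ET.SubElement(entry, "gloss", lang="en").text = gloss_en
    ET.SubElement(entry, "gloss", lang="ko").text = gloss_ko
    return entry

root = ET.Element("dictionary")
root.append(entry_to_xml("bake | v | to cook food in an oven | 굽다"))
print(ET.tostring(root, encoding="unicode"))
```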

An Effective Estimation Method for Lexical Probabilities in Korean Lexical Disambiguation (한국어 어휘 중의성 해소에서 어휘 확률에 대한 효과적인 평가 방법)

  • Lee, Ha-Gyu
    • The Transactions of the Korea Information Processing Society / v.3 no.6 / pp.1588-1597 / 1996
  • This paper describes an estimation method for lexical probabilities in Korean lexical disambiguation. In the stochastic approach to lexical disambiguation, lexical probabilities and contextual probabilities are generally estimated on the basis of statistical data extracted from corpora. For Korean, it is desirable to apply lexical probabilities in terms of word phrases, because sentences are spaced in units of word phrases. However, Korean word phrases are so multiform that lexical probabilities often cannot be estimated directly in terms of word phrases, even when fairly large corpora are used. To overcome this problem, this research defines a similarity measure for word phrases from the lexical-analysis point of view and proposes an estimation method for Korean lexical probabilities based on that similarity. In this method, when a lexical probability for a word phrase cannot be estimated directly, it is estimated indirectly through word phrases similar to the given one. Experimental results show that the proposed approach is effective for Korean lexical disambiguation.

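The back-off idea can be illustrated with a small sketch: when a word phrase is unseen, borrow the estimate of the most similar seen phrase. The edit-distance-style similarity and the toy counts below are assumptions; the paper defines its own similarity from the lexical-analysis point of view.

```python
# A minimal sketch of the back-off idea: when a word phrase (eojeol) is unseen,
# estimate its lexical probability from a similar seen phrase. The
# SequenceMatcher similarity is an illustrative assumption, not the paper's
# lexical-analysis-based measure.
from difflib import SequenceMatcher

counts = {"먹었다": 40, "먹는다": 35, "먹겠다": 25}  # hypothetical corpus counts
total = sum(counts.values())

def lexical_prob(phrase: str) -> float:
    if phrase in counts:  # seen: direct relative-frequency estimate
        return counts[phrase] / total
    # unseen: borrow the estimate of the most similar seen phrase
    nearest = max(counts, key=lambda p: SequenceMatcher(None, p, phrase).ratio())
    return counts[nearest] / total

print(lexical_prob("먹었다"))    # direct estimate
print(lexical_prob("먹었었다"))  # indirect estimate via a similar phrase
```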

A Model of a Multilingual Lexical Database of Korean, Chinese, Japanese, and English (한중일영 다국어 어휘 데이터베이스의 모형)

  • 차재은; 강범모
    • Proceedings of the Korean Society for Language and Information Conference / 2002.06a / pp.48-67 / 2002
  • This paper reports part of the results of a research project entitled "Research and Model Development for a Multi-Lingual Lexical Database". It is a six-year project in which we aim to construct a model of a multilingual lexical database of Korean, Chinese, Japanese, and English; we have now finished its first two-year stage. In this paper, we present the goal of the project, the construction model for items in the lexical database, and possible (semi-)automatic methods for acquiring lexical information. As an appendix, we present some sample items of the database as an illustration.

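One plausible shape for an item in such a database is sketched below. The field inventory and the concept-keyed design are assumptions for illustration, not the project's published schema.

```python
# A hypothetical sketch of one item in a four-language lexical database;
# the field inventory is an assumption, not the project's published model.
from dataclasses import dataclass, field

@dataclass
class LexicalItem:
    concept_id: str                    # language-independent concept key
    forms: dict[str, str]              # language code -> lexical form
    pos: str = ""
    senses: list[str] = field(default_factory=list)

item = LexicalItem(
    concept_id="SCHOOL-01",
    forms={"ko": "학교", "zh": "学校", "ja": "学校", "en": "school"},
    pos="noun",
    senses=["an institution for educating children"],
)
print(item.forms["en"])
```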

Analyzing the Effect of Lexical and Conceptual Information in Spam-mail Filtering System

  • Kang Sin-Jae; Kim Jong-Wan
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.2 / pp.105-109 / 2006
  • In this paper, we constructed a two-phase spam-mail filtering system based on lexical and conceptual information. Two kinds of information can distinguish spam mail from ham (non-spam) mail: definite information, namely the sender's information, URLs, and a certain spam keyword list, and less definite information, namely the word list and concept codes extracted from the mail body. We first classified spam mail using the definite information, and then used the less definite information. In the second phase, the lexical information and concept codes contained in the email body were used for SVM learning. According to our results, the ham misclassification rate was reduced when more lexical information was used as features, and the spam misclassification rate was reduced when concept codes were also included as features.
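
A minimal sketch of the two-phase architecture: a rule check over the definite information, then an SVM over features from the mail body. The rules, training data, and the use of word counts alone are illustrative; the paper's second phase also includes concept codes as features.

```python
# A toy sketch of the two-phase filter: definite-information rules first,
# then an SVM over lexical features. All data and rules here are hypothetical;
# the paper's second phase also uses concept codes as features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

BLOCKED_SENDERS = {"spam@example.com"}          # hypothetical definite info
SPAM_KEYWORDS = {"viagra", "lottery"}

def phase1_is_spam(sender: str, body: str) -> bool:
    return sender in BLOCKED_SENDERS or any(k in body.lower() for k in SPAM_KEYWORDS)

# Phase 2: SVM trained on word features from the mail body (toy data).
train_bodies = ["cheap pills buy now", "meeting agenda attached",
                "win money fast", "lunch tomorrow?"]
train_labels = [1, 0, 1, 0]                     # 1 = spam, 0 = ham
vec = CountVectorizer()
clf = LinearSVC().fit(vec.fit_transform(train_bodies), train_labels)

def classify(sender: str, body: str) -> int:
    if phase1_is_spam(sender, body):            # definite information
        return 1
    return int(clf.predict(vec.transform([body]))[0])  # less definite info

print(classify("friend@example.com", "win money fast today"))
```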

A Computational Model for Lexical Acquisition in Korean (한국어 어휘습득의 계산주의적 모델)

  • Yu, Won-Hee; Park, Ki-Nam; Lyu, Ki-Gon; Lim, Heui-Seok; Nam, Ki-Chun
    • Proceedings of the KSPS conference / 2007.05a / pp.135-137 / 2007
  • This study implemented and experimented with a computational lexical processing model that hybridizes the full-list model and the decomposition model, applying it to lexical acquisition, one of the early stages of human lexical processing, in Korean. As a result, we could simulate the lexical acquisition process from linguistic input through experiments and training, and suggest a theoretical foundation for the order in which certain grammatical categories are acquired. The model also provides evidence from which the form of the mental lexicon in the human cerebrum can be inferred, through the full-list dictionary and the decomposition dictionary that were produced automatically in the study.

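The hybrid of the full-list and decomposition models can be sketched as a two-route lookup. The toy dictionaries and the stem+ending split below are illustrative assumptions, not the study's trained components.

```python
# A toy sketch of hybrid lexical access: try the full-list dictionary first,
# then fall back to decomposition into stem + ending. Both dictionaries and
# the ending list are hypothetical illustrations.
FULL_LIST = {"갔다": ("가다", "PAST")}           # whole-form entries
STEMS = {"먹": "먹다", "가": "가다"}
ENDINGS = {"었다": "PAST", "는다": "PRESENT"}

def access(word: str):
    if word in FULL_LIST:                        # full-list route
        return FULL_LIST[word]
    for i in range(1, len(word)):                # decomposition route
        stem, ending = word[:i], word[i:]
        if stem in STEMS and ending in ENDINGS:
            return STEMS[stem], ENDINGS[ending]
    return None                                  # acquisition would add it here

print(access("갔다"))    # found via the full-list dictionary
print(access("먹었다"))  # found via decomposition
```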

Lexical Mismatches between English and Korean: with Particular Reference to Polysemous Nouns and Verbs

  • Lee, Yae-Sheik
    • Language and Information / v.4 no.1 / pp.43-65 / 2000
  • Along with the flourishing development of computational linguistics, research on the meanings of individual words has resumed. Polysemous words in particular have come into focus, since their multiple senses pose a real challenge to linguists and computer scientists. This paper mainly concerns the following three questions with regard to the treatment of such polysemous nouns and verbs in English and Korean. First, what types of information should be represented in the individual lexical entries of these polysemous words? Second, how different are the corresponding polysemous lexical entries in the two languages? Third, what does a mental lexicon look like with regard to polysemous lexical entries? For the first and second questions, Pustejovsky's (1995) Generative Lexicon Theory (hereafter GLT) is discussed in detail, with the main focus on developing an alternative way of representing (polysemous) lexical entries. For the third question, a brief discussion is made of the mapping between concepts and their lexicalizations, and a conceptual graph around the concept 'bake' is depicted in terms of Sowa (2000).

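For a sense of what a GLT-style entry might contain, the sketch below encodes an argument structure and a qualia structure, loosely following Pustejovsky (1995). The field names and the values for 'bake' are illustrative, not a formalization taken from the paper.

```python
# A rough sketch of a Generative-Lexicon-style entry with a qualia structure,
# loosely following Pustejovsky (1995); field names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Qualia:
    formal: str        # what kind of thing or event it is
    constitutive: str  # what it is made of or involves
    telic: str         # its purpose or function
    agentive: str      # how it comes into being

@dataclass
class GLEntry:
    lemma: str
    arg_structure: list[str]
    qualia: Qualia

bake = GLEntry(
    lemma="bake",
    arg_structure=["agent", "patient"],
    qualia=Qualia(formal="event", constitutive="heating process",
                  telic="produce or alter food", agentive="human action"),
)
print(bake.qualia.telic)
```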