• Title/Abstract/Keyword: Lexical Information

Search results: 323 items

어절 내 형태소 출현 정보와 클러스터링 기법을 이용한 어휘지식 자동 획득 (The Automatic Lexical Knowledge Acquisition Using Morpheme Information and Clustering Techniques)

  • 유원희;서태원;임희석
    • 컴퓨터교육학회논문지 / Vol. 13, No. 1 / pp. 65-73 / 2010
  • This paper proposes an unsupervised automatic lexical knowledge acquisition model that overcomes the limitations of manual, supervised-learning-based construction of lexical knowledge for natural language processing research. The proposed model acquires lexical knowledge automatically from an input word list through vectorization, clustering, and a knowledge acquisition step. For the acquisition step, we show how the number of acquired knowledge items changes as parameters vary, along with part of the resulting lexical knowledge dictionary and its characteristics. In experiments, the clusters of lexical category knowledge, one kind of acquired lexical knowledge, converged to a stable number, confirming the feasibility of automatically constructing electronic dictionaries that require lexical knowledge. We also built a lexical dictionary that reflects the characteristics of Korean and includes left and right syntactic information.

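To make the vectorize-cluster-acquire pipeline of the entry above concrete, here is a minimal Python sketch under stated assumptions: the toy corpus, the left/right-neighbor feature scheme, and the choice of k-means are illustrative stand-ins, not the authors' actual setup.

```python
from collections import defaultdict, Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer

# Toy corpus of spaced Korean eojeols (illustrative only).
corpus = [
    "나는 학교에 간다",
    "나는 집에 간다",
    "그는 학교에 갔다",
    "그는 집에 갔다",
]

# Vectorization: represent each eojeol by counts of its left/right neighbors,
# carrying the left/right syntactic context the abstract mentions.
contexts = defaultdict(Counter)
for sentence in corpus:
    eojeols = sentence.split()
    for i, w in enumerate(eojeols):
        if i > 0:
            contexts[w]["L:" + eojeols[i - 1]] += 1
        if i + 1 < len(eojeols):
            contexts[w]["R:" + eojeols[i + 1]] += 1

words = sorted(contexts)
X = DictVectorizer().fit_transform([contexts[w] for w in words])

# Clustering: each cluster approximates one unit of lexical-category
# knowledge; the number of clusters k is a free parameter, as in the paper.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(k, [w for w, label in zip(words, labels) if label == k])
```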

Research on Keyword-Overlap Similarity Algorithm Optimization in Short English Text Based on Lexical Chunk Theory

  • Na Li;Cheng Li;Honglie Zhang
    • Journal of Information Processing Systems / Vol. 19, No. 5 / pp. 631-640 / 2023
  • Short-text similarity calculation is one of the hot issues in natural language processing research. Conventional keyword-overlap similarity algorithms consider only lexical item information and neglect the effect of word order, and optimized variants that do incorporate word order leave the weights hard to determine. Starting from the keyword-overlap similarity algorithm, this paper proposes a short English text similarity algorithm based on lexical chunk theory (LC-SETSA), introducing lexical chunk theory from cognitive psychology into short English text similarity calculation for the first time. Lexical chunks are used to segment short English texts; the segmentation results preserve the semantic connotation and fixed word order of the chunks, and the overlap similarity of the lexical chunks is then calculated accordingly. Comparative experiments show that the proposed algorithm is feasible, stable, and effective to a large extent.
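As a rough illustration of chunk-based overlap matching (not the paper's LC-SETSA implementation), the sketch below greedily segments text against a tiny chunk inventory so that fixed word order inside chunks is preserved, then scores a Dice-style overlap; the inventory and the exact overlap formula are assumptions.

```python
# Inventory of multiword lexical chunks (toy; a real system would use a
# large chunk dictionary).
CHUNKS = {"in terms of", "as a result", "look forward to", "a number of"}

def segment(text, chunks=CHUNKS):
    """Greedy longest-match segmentation of a text into chunks and words."""
    words = text.lower().split()
    out, i = [], 0
    while i < len(words):
        for n in range(4, 1, -1):  # longest candidate first (chunks <= 4 words)
            candidate = " ".join(words[i:i + n])
            if candidate in chunks:
                out.append(candidate)
                i += n
                break
        else:
            out.append(words[i])
            i += 1
    return out

def overlap_similarity(a, b):
    """Dice-style overlap over chunk/word sets; word order is preserved
    inside chunks by construction."""
    sa, sb = set(segment(a)), set(segment(b))
    return 2 * len(sa & sb) / (len(sa) + len(sb)) if sa or sb else 0.0

print(overlap_similarity("We look forward to the results",
                         "They look forward to new results"))
```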

영어 어휘 의미 정보와 피치 액센트 (Lexical Semantic Information and Pitch Accent in English)

  • 전윤실;김기호;이용재
    • 음성과학 / Vol. 10, No. 3 / pp. 187-209 / 2003
  • In this paper, we examine whether the lexical information of a verb and its noun object affects the pitch accent patterns under verb phrase focus. Three types of verb-object combinations with different semantic weights are discussed: verbs with optional direct objects, objects with greater semantic weight relative to their verbs, and verbs and objects of equal semantic weight. Argument-structure-based work notes that the pitch accent location in a focused phrase is closely related to argument structure and contextual information. For example, it has been argued that contextually new noun objects receive accent while given noun objects do not. Contrary to nouns, verbs under verb phrase focus can be accented or not regardless of whether they convey given or new information (Selkirk 1984, 1992). However, the production experiment in this paper shows that the accenting of verbs is not fully optional but is influenced by their lexical semantic information. Accenting noun objects carrying given information is possible, and deaccenting of new noun objects also occurs, depending on the lexical information of the noun objects. The results demonstrate that, in addition to argument structure and information conveyed by context sentences, the lexical semantic information of words influences pitch accent location in a focused phrase.


Linear Precedence in Morphosyntactic and Semantic Processes in Korean Sentential Processing as Revealed by Event-related Potential

  • Kim, Choong-Myung
    • International Journal of Contents
    • /
    • 제10권4호
    • /
    • pp.30-37
    • /
    • 2014
  • The current study examined the temporal and spatial activation sequences related to morphosyntactic, semantic, and orthographic-lexical sentences, focusing on morphological-orthographic and lexical-semantic deviation processes in Korean language processing. Event-related potentials (ERPs) from 15 healthy students were recorded to explore the processing of head-final critical words in a sentential plausibility task. Specifically, it was examined whether the ERP pattern for orthographic-lexical violations shows linear precedence over other processes, or whether there is additivity across combined processing components. For the morphosyntactic violation, a fronto-central LAN followed by P600 was found, while the semantic violation elicited an N400, as expected. P600 activation was distributed over left frontal and central sites, while the N400 appeared even at frontal sites beyond the centro-parietal areas. Most importantly, the orthographic-lexical violation process, revealed by an earlier N2 with fronto-central activity, was shown to combine morphological and semantic functions from the same critical word. The present study suggests a linear precedence of morphological deviation and its lexical-semantic processing, based on the immediate availability of lexical information, followed by sentential semantics; late syntactic integration processes were completed last, showing different topographic activation in order of the importance of ongoing sentential information.

어휘정보구축을 위한 사전텍스트의 구조분석 및 변환 (A Structural Analysis of Dictionary Text for the Construction of Lexical Data Base)

  • 최병진
    • 한국언어정보학회지:언어와정보 / Vol. 6, No. 2 / pp. 33-55 / 2002
  • This research aims at transforming the definition text of an English-English-Korean Dictionary (EEKD), encoded in EST files for publishing purposes, into a structured format for a lexical database (LDB). The construction of an LDB is very time-consuming and expensive. In order to save time and effort in building new lexical information, the present study extracts useful linguistic information from an existing printed dictionary. This paper describes the process of extracting and structuring lexical information from a printed dictionary (EEKD) as a lexical resource. The extracted information is represented in XML, which can be transformed into other representations for different application requirements.

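The extract-and-structure step described above can be sketched in a few lines of Python. The flat input layout here (headword, bracketed POS, English gloss, Korean equivalent) is a hypothetical stand-in for the EEKD's actual typesetting format, not the paper's parser.

```python
import re
import xml.etree.ElementTree as ET

# One flat definition line in a hypothetical layout:
# headword [POS] English gloss, then a Korean equivalent.
line = "abandon [v] to leave completely 버리다"

m = re.match(r"(\w+) \[(\w+)\] (.+) (\S+)$", line)
headword, pos, gloss, korean = m.groups()

# Re-represent the extracted fields as structured XML.
entry = ET.Element("entry")
ET.SubElement(entry, "headword").text = headword
ET.SubElement(entry, "pos").text = pos
ET.SubElement(entry, "gloss", lang="en").text = gloss
ET.SubElement(entry, "equiv", lang="ko").text = korean
print(ET.tostring(entry, encoding="unicode"))
```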

한국어 어휘 중의성 해소에서 어휘 확률에 대한 효과적인 평가 방법 (An Effective Estimation Method for Lexical Probabilities in Korean Lexical Disambiguation)

  • 이하규
    • 한국정보처리학회논문지 / Vol. 3, No. 6 / pp. 1588-1597 / 1996
  • This paper describes a method for estimating lexical probabilities in Korean lexical disambiguation. Statistical approaches to lexical disambiguation generally estimate lexical and contextual probabilities from statistics extracted from a corpus. Since Korean text is spaced by eojeol (word phrase), it is desirable to apply lexical probabilities at the eojeol level. However, because Korean eojeols are highly diverse, lexical probabilities often cannot be estimated directly at the eojeol level even with a fairly large corpus. To overcome this problem, this study defines a similarity between eojeols from the perspective of lexical analysis and proposes a Korean lexical probability estimation method based on it. In this method, when the lexical probability of an eojeol cannot be estimated directly, it is estimated indirectly through eojeols with similar lexical analyses. Experimental results show that the proposed approach is effective for Korean lexical disambiguation.

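A minimal sketch of the indirect estimation idea follows: if an eojeol is unseen, back off to eojeols whose lexical (morphological) analyses are similar. The analysis table, the similarity notion, and the averaging scheme are toy assumptions, not the paper's estimator.

```python
from collections import Counter

# Hypothetical corpus counts of tag sequences per eojeol: P(tags | eojeol).
counts = {
    "학교에": Counter({"N+J": 9, "V+E": 1}),
    "집에": Counter({"N+J": 8, "V+E": 2}),
}

def analyses(eojeol):
    # Stand-in for a morphological analyzer; here two eojeols count as
    # similar when they share the same set of possible analyses.
    return frozenset({"N+J", "V+E"}) if eojeol.endswith("에") else frozenset({"N"})

def lexical_prob(eojeol, tag):
    if eojeol in counts:  # direct estimate from corpus counts
        c = counts[eojeol]
        return c[tag] / sum(c.values())
    # Indirect estimate: average over eojeols with a similar lexical analysis.
    similar = [e for e in counts if analyses(e) == analyses(eojeol)]
    if not similar:
        return 0.0
    return sum(lexical_prob(e, tag) for e in similar) / len(similar)

print(lexical_prob("바다에", "N+J"))  # unseen eojeol, estimated indirectly
```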

한중일영 다국어 어휘 데이터베이스의 모형 (A Model of a Multilingual Lexical Database of Korean, Chinese, Japanese, and English)

  • 차재은;강범모
    • 한국언어정보학회:학술대회논문집 / 한국언어정보학회 2002년도 학술대회 발표논문집 / pp. 48-67 / 2002
  • This paper reports part of the results of a research project entitled "Research and Model Development for a Multi-Lingual Lexical Database". It is a six-year project in which we aim to construct a model of a multilingual lexical database of Korean, Chinese, Japanese, and English; we have now finished the first two-year stage. In this paper, we present the goal of the project, the construction model of items in the lexical database, and possible (semi-)automatic methods for acquiring lexical information. As an appendix, we present some sample items of the database as an illustration.

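As a guess at what one item in such a multilingual database might look like, here is a small data-structure sketch; the field inventory (a language-independent concept key linking the four languages, plus POS and examples) is an illustrative assumption, not the project's published schema.

```python
from dataclasses import dataclass, field

@dataclass
class LexicalItem:
    concept_id: str  # language-independent key linking the four languages
    ko: str
    zh: str
    ja: str
    en: str
    pos: str = "noun"
    examples: dict = field(default_factory=dict)  # lang code -> example sentence

school = LexicalItem("C0001", ko="학교", zh="学校", ja="学校", en="school")
school.examples["en"] = "The school is closed today."
print(school)
```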

Analyzing the Effect of Lexical and Conceptual Information in Spam-mail Filtering System

  • Kang Sin-Jae;Kim Jong-Wan
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6, No. 2 / pp. 105-109 / 2006
  • In this paper, we constructed a two-phase spam-mail filtering system based on lexical and conceptual information. Two kinds of information can distinguish spam from ham (non-spam) mail: definite information, such as the sender's address, URLs, and a spam keyword list, and less definite information, such as the word list and concept codes extracted from the mail body. We first classify mail using the definite information, and then the less definite information; in the second phase, the lexical information and concept codes contained in the email body are used for SVM learning. According to our results, the ham misclassification rate was reduced when more lexical information was used as features, and the spam misclassification rate was reduced when concept codes were also included as features.
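A minimal sketch of the two-phase idea, under assumptions: a rule pass over definite cues (sender, spam keywords), then an SVM over lexical features of the body for the remainder. The training data, keyword list, and blocklist are toys, and concept codes are omitted for brevity.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

SPAM_KEYWORDS = {"viagra", "lottery"}
BLOCKED_SENDERS = {"spam@example.com"}

def phase1(mail):
    """Rule-based pass using the 'definite' information."""
    if mail["sender"] in BLOCKED_SENDERS:
        return "spam"
    if SPAM_KEYWORDS & set(mail["body"].lower().split()):
        return "spam"
    return None  # undecided -> fall through to phase 2

# Phase 2: SVM over lexical features of the mail body.
train_bodies = ["cheap pills buy now", "meeting agenda for monday",
                "win money fast", "lunch tomorrow?"]
train_labels = ["spam", "ham", "spam", "ham"]
vec = CountVectorizer()
clf = LinearSVC().fit(vec.fit_transform(train_bodies), train_labels)

def classify(mail):
    return phase1(mail) or clf.predict(vec.transform([mail["body"]]))[0]

print(classify({"sender": "a@b.com", "body": "buy cheap pills now"}))
```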

한국어 어휘습득의 계산주의적 모델 (A Computational Model for Lexical Acquisition in Korean)

  • 유원희;박기남;류기곤;임희석;남기춘
    • 대한음성학회:학술대회논문집 / 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집 / pp. 135-137 / 2007
  • This study implemented and tested a computational lexical processing model that hybridizes the full-list model and the decomposition model, applying lexical acquisition, one of the early stages of human lexical processing, to Korean. As a result, we could simulate the lexical acquisition process for linguistic input through experiments and training, and suggest a theoretical foundation for the order in which certain grammatical categories are acquired. The model also provides evidence from which we can infer the form of the mental lexicon in the human cerebrum, via the full-list dictionary and the decomposition dictionary that were produced automatically in the study.

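To illustrate the hybrid full-list/decomposition idea in code: try the whole form first, otherwise fall back to a stem-plus-suffix decomposition. Both toy dictionaries below are illustrative stand-ins for the model's learned lexicons, not the authors' implementation.

```python
FULL_LIST = {"갔다": ("가다", "past")}           # stored whole forms
STEMS, SUFFIXES = {"먹": "먹다"}, {"었다": "past"}

def access(word):
    if word in FULL_LIST:                        # full-list route
        return FULL_LIST[word]
    for i in range(1, len(word)):                # decomposition route
        stem, suffix = word[:i], word[i:]
        if stem in STEMS and suffix in SUFFIXES:
            return (STEMS[stem], SUFFIXES[suffix])
    return None

print(access("갔다"))    # whole-form hit
print(access("먹었다"))  # decomposed: ('먹다', 'past')
```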

Lexical Mismatches between English and Korean: with Particular Reference to Polysemous Nouns and Verbs

  • Lee, Yae-Sheik
    • 한국언어정보학회지:언어와정보 / Vol. 4, No. 1 / pp. 43-65 / 2000
  • Along with the flourishing development of computational linguistics, research on the meanings of individual words has resumed. Polysemous words are especially in focus, since their multiple senses pose a real challenge to linguists and computer scientists. This paper mainly concerns three questions regarding the treatment of polysemous nouns and verbs in English and Korean. First, what types of information should be represented in individual lexical entries for polysemous words? Second, how different are corresponding polysemous lexical entries in the two languages? Third, what does a mental lexicon look like with regard to polysemous lexical entries? For the first and second questions, Pustejovsky's (1995) Generative Lexicon Theory (hereafter GLT) is discussed in detail, with the main focus on developing an alternative way of representing (polysemous) lexical entries. For the third question, a brief discussion is made of the mapping between concepts and their lexicalizations, and a conceptual graph around the concept 'bake' is depicted in terms of Sowa (2000).

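As a rough rendering of a Generative-Lexicon-style entry as a data structure (argument structure plus the four qualia roles of Pustejovsky 1995), here is a sketch; the concrete values chosen for 'bake' are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class QualiaStructure:
    formal: str        # what kind of thing it is
    constitutive: str  # what it is made of / its parts
    telic: str         # its purpose or function
    agentive: str      # how it comes about

@dataclass
class LexicalEntry:
    lemma: str
    args: tuple
    qualia: QualiaStructure

bake = LexicalEntry(
    lemma="bake",
    args=("agent", "patient"),
    qualia=QualiaStructure(
        formal="event:change_of_state",
        constitutive="heat_application",
        telic="create_or_warm_food",
        agentive="human_action",
    ),
)
print(bake.lemma, bake.qualia.telic)
```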