• Title/Summary/Keyword: Lexical processing


Searching Animation Models with a Lexical Ontology for Text Animation (온톨로지를 이용한 텍스트 애니메이션 객체 탐색)

  • Chang, Eun-Young;Lee, Hee-Jin;Park, Jong-C.
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.469-474
    • /
    • 2007
  • In a text animation system, the entities expressed by natural-language words are rendered with a limited number of animation models. Because the number of models in an existing model database is generally far smaller than the number of natural-language words, there are cases in which no animation model corresponds to a given word. In such cases, a method is needed for finding a substitute model that preserves as much of the word's meaning as possible. For nouns that must be represented as characters or objects in the animation, this paper proposes searching an ontology for other words that stand in hypernym, hyponym, or member-meronym relations to the noun in order to find an appropriate model. A minimal code sketch of this substitution search follows this entry.

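The entry above describes walking hypernym, hyponym, and member-meronym neighbors of an unmodeled noun until a modeled word is found. The sketch below illustrates that search using NLTK's English WordNet as a stand-in for the paper's Korean lexical ontology; the `model_db` inventory and the function names are hypothetical.

```python
# Illustrative sketch only: English WordNet stands in for the Korean ontology,
# and model_db is a made-up inventory of nouns that have animation models.
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

model_db = {"dog", "vehicle", "bird", "chair"}  # hypothetical modeled nouns

def candidate_words(noun):
    """Yield words related by hypernym, hyponym, or member-meronym links."""
    for synset in wn.synsets(noun, pos=wn.NOUN):
        for related in synset.hypernyms() + synset.hyponyms() + synset.member_meronyms():
            for lemma in related.lemma_names():
                yield lemma.replace("_", " ")

def find_substitute_model(noun):
    """Return the noun itself if modeled, otherwise a related modeled noun."""
    if noun in model_db:
        return noun
    for candidate in candidate_words(noun):
        if candidate in model_db:
            return candidate
    return None  # no semantically close model found

print(find_substitute_model("poodle"))  # expected: "dog" via the hypernym link
```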

The Phonological and Orthographic activation in Korean Word Recognition(II) (한국어 단어 재인에서의 음운정보와 철자정보의 활성화(II))

  • Choi Wonil;Nam Kichun
    • Proceedings of the KSPS conference
    • /
    • 2003.10a
    • /
    • pp.33-36
    • /
    • 2003
  • Two experiments were conducted to support the suggestion made in Wonil Choi & Kichun Nam (2003) that the same information processing is used in both input modalities, visual and auditory. A primed lexical decision task was performed with pseudoword prime stimuli. The result was that no priming effect occurred in any experimental condition. This result might be interpreted as indicating that visual facilitative information and phonological inhibitory information cancelled each other out.


A study on Implementation of English Sentence Generator using Lexical Functions (언어함수를 이용한 영문 생성기의 구현에 관한 연구)

  • 정희연;김희연;이웅재
    • Journal of Internet Computing and Services
    • /
    • v.1 no.2
    • /
    • pp.49-59
    • /
    • 2000
  • The majority of work done to date on natural language processing has focused on the analysis and understanding of language, so natural language generation has received relatively less attention than understanding, and people even tend to regard natural language generation as a simple reverse of language understanding. However, the need for natural language generation is growing rapidly as application systems, especially multi-language machine translation systems on the web, natural language interface systems, and natural language query systems, need to generate ever more complex messages. In this paper, we propose an algorithm that generates more flexible and natural sentences using the lexical functions of Igor Mel'čuk (Mel'čuk & Zholkovsky, 1988) and systemic grammar. A brief illustration of lexical functions follows this entry.

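The entry above generates sentences with Mel'čuk-style lexical functions such as Magn (intensifier) and Oper1 (support verb). The sketch below shows, under a hand-built toy table, how such functions could pick collocates during realization; the table entries and the realization helper are illustrative assumptions, not the paper's lexicon or grammar.

```python
# Illustrative sketch only: a toy table of lexical functions and a naive realizer.
LEXICAL_FUNCTIONS = {
    ("rain", "Magn"): "heavy",       # Magn(rain) = heavy (intensifier)
    ("attention", "Oper1"): "pay",   # Oper1(attention) = pay (support verb)
    ("decision", "Oper1"): "make",   # Oper1(decision) = make
}

def apply_lf(function, keyword):
    """Return the collocate selected by a lexical function for a keyword."""
    return LEXICAL_FUNCTIONS.get((keyword, function), keyword)

def realize_support_verb_phrase(subject, noun):
    """Naively realize '<subject> <Oper1(noun)>s <noun>'."""
    return f"{subject} {apply_lf('Oper1', noun)}s {noun}"

print(apply_lf("Magn", "rain"))                         # heavy
print(realize_support_verb_phrase("She", "attention"))  # She pays attention
```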

Korean Probabilistic Syntactic Model using Head Co-occurrence (중심어 간의 공기정보를 이용한 한국어 확률 구문분석 모델)

  • Lee, Kong-Joo;Kim, Jae-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.6
    • /
    • pp.809-816
    • /
    • 2002
  • Since natural language has inherent structural ambiguities, one of the difficulties of parsing is resolving them. Recently, probabilistic approaches to this disambiguation problem have received considerable attention because they offer attractions such as automatic learning, wide coverage, and robustness. In this paper, we focus on a Korean probabilistic parsing model that uses head co-occurrence. Because head co-occurrence is lexical, the model easily runs into the data sparseness problem, so handling this problem matters more than anything else. To alleviate it, we use a restricted and simplified phrase-structure grammar together with a back-off model for smoothing. The proposed model shows an accuracy of about 84%. A toy back-off sketch follows this entry.
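
The back-off smoothing mentioned above can be pictured as falling back from a sparse lexical head-pair estimate to a coarser rule-level estimate. The sketch below uses made-up counts; neither the numbers nor the grammar rule come from the paper.

```python
# Illustrative sketch only: toy counts for back-off over head co-occurrence.
from collections import Counter

# Hypothetical counts of (governing head, dependent head) pairs and of heads.
pair_counts = Counter({("먹다", "밥"): 12, ("먹다", "사과"): 3})
head_counts = Counter({"먹다": 40})
# Hypothetical non-lexical back-off level: counts of a phrase-structure rule.
rule_count, total_rules = 25, 100

def p_attach(head, dep):
    """Use the lexical estimate when the pair was observed, else back off."""
    if pair_counts[(head, dep)] > 0:
        return pair_counts[(head, dep)] / head_counts[head]
    return rule_count / total_rules  # back off to the rule-level probability

print(p_attach("먹다", "사과"))  # seen pair: lexical estimate (3/40)
print(p_attach("먹다", "포도"))  # unseen pair: backs off to 25/100
```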

Implementation of Very Large Hangul Text Retrieval Engine HMG (대용량 한글 텍스트 검색 엔진 HMG의 구현)

  • 박미란;나연묵
    • Journal of Korea Multimedia Society
    • /
    • v.1 no.2
    • /
    • pp.162-172
    • /
    • 1998
  • In this paper, we implement a gigabyte-scale Hangul text retrieval engine, HMG (Hangul MG), based on the English text retrieval engine MG (Managing Gigabytes) and the Hangul lexical analyzer HAM (Hangul Analysis Module). To support Hangul, we use the KSC 5601 code in the database construction and query processing stages. The lexical analyzer, parser, and index construction module of the MG system are modified to handle Hangul. To show the usefulness of the HMG system, we implemented an NOD (Novel On Demand) system that supports the retrieval of Hangul novels on the WWW. The proposed HMG system can be used to build massive full-text information retrieval systems for Hangul. A toy sketch of the index-construction step follows this entry.

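As a companion to the entry above, the sketch below builds a toy inverted index over whitespace-tokenized Hangul text; the real HMG system would use the HAM morphological analyzer to produce index terms, so `str.split` here is only an illustrative stand-in.

```python
# Illustrative sketch only: whitespace tokens stand in for HAM morpheme analysis.
from collections import defaultdict

documents = {
    1: "한글 텍스트 검색 엔진",
    2: "대용량 텍스트 정보 검색 시스템",
}

inverted_index = defaultdict(set)
for doc_id, text in documents.items():
    for token in text.split():            # HAM would emit morpheme-level terms here
        inverted_index[token].add(doc_id)

def search(query):
    """Return the documents containing every query term (conjunctive query)."""
    postings = [inverted_index.get(term, set()) for term in query.split()]
    return set.intersection(*postings) if postings else set()

print(search("텍스트 검색"))  # -> {1, 2}
```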

Magnetoencephalographic Study on the cerebral neural activities related to the processing of lexically ambiguous words (뇌자도를 이용한 어휘적 중의성의 처리와 관련된 대뇌 신경활동 분석)

  • Yu, Gi-Soon;Kim, June-Sic;Chung, Chun-Kee;Nam, Ki-Chun
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.59-63
    • /
    • 2007
  • Neuromagnetic fields were recorded from 10 normal subjects to investigate the time course of cerebral neural activation during the resolution of lexical ambiguity. All recordings were made with a whole-head 306-channel MEG system (Elekta Neuromag Inc., Vectorview). The observed activity was localized with sLORETA (standardized low resolution brain electromagnetic tomography) as implemented in the CURRY software (Neuroscan). In the results, the occipito-temporal lobes were activated bilaterally at 170 ms. At 250 ms, activity was associated with the bilateral temporal lobes in the ambiguous condition, whereas it appeared in the left parietal and temporal lobes in the unambiguous condition. The left frontal and temporal lobes were activated at 350 ms in all conditions. At approximately 430 ms, activity appeared in the right frontal and temporal lobes in the ambiguity-resolving condition, and in the left parietal lobe and right temporal lobe in the ambiguity-preserving condition. In conclusion, the cerebral activations related to resolving lexical ambiguity were in the right frontal lobe, and those related to maintaining the ambiguity were in the left parietal lobe.


Conditional Random Fields based Named Entity Recognition Using Korean Lexical Semantic Network (한국어 어휘의미망을 활용한 Conditional Random Fields 기반 한국어 개체명 인식)

  • Park, Seo-Yeon;Ock, Cheol-Young;Shin, Joon-Choul
    • Annual Conference on Human and Language Technology
    • /
    • 2020.10a
    • /
    • pp.343-346
    • /
    • 2020
  • Named entity recognition is the task of classifying words with distinct meanings, which frequently appear as OOV (out-of-vocabulary) items in a given sentence, into predefined entity categories. Recently, many studies have used external resources to address the problem of named entities appearing as OOV items. Building on work that improved semantic role labeling and dependency parsing by adding features drawn from the Korean lexical semantic network, this paper applies such features to Korean named entity recognition and evaluates them. In experiments, the model trained with the additional lexical semantic network features outperformed the baseline model by an average of 1.83 percentage points, and it reached a high score of 87.25% even though only a single CRF model was used. A brief sketch of this kind of feature augmentation follows this entry.

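The feature augmentation described above can be pictured as adding a semantic-class lookup to each token's CRF feature set. In the sketch below, the `SEMANTIC_CLASS` dictionary is a hypothetical stand-in for the Korean lexical semantic network, and the feature names are illustrative rather than the paper's actual feature set.

```python
# Illustrative sketch only: SEMANTIC_CLASS stands in for the lexical semantic network.
SEMANTIC_CLASS = {
    "서울": "지명",    # place name
    "삼성": "기관명",  # organization name
    "사과": "과일",    # common noun (fruit)
}

def token_features(tokens, i):
    """Build a per-token CRF feature dict, including a semantic-class feature."""
    word = tokens[i]
    return {
        "word": word,
        "is_first": i == 0,
        "prev_word": tokens[i - 1] if i > 0 else "<BOS>",
        "semantic_class": SEMANTIC_CLASS.get(word, "NONE"),
    }

sentence = ["서울", "에서", "사과", "를", "먹었다"]
features = [token_features(sentence, i) for i in range(len(sentence))]
print(features[0])  # {'word': '서울', ..., 'semantic_class': '지명'}
# Feature dicts of this shape are what CRF toolkits such as sklearn-crfsuite consume.
```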

A Development of the Automatic Predicate-Argument Analyzer for Construction of Semantically Tagged Korean Corpus (한국어 의미 표지 부착 말뭉치 구축을 위한 자동 술어-논항 분석기 개발)

  • Cho, Jung-Hyun;Jung, Hyun-Ki;Kim, Yu-Seop
    • The KIPS Transactions:PartB
    • /
    • v.19B no.1
    • /
    • pp.43-52
    • /
    • 2012
  • Semantic role labeling is the research area that analyzes the semantic relationships between the elements of a sentence, and it is considered one of the most important semantic analysis areas in natural language processing, alongside word sense disambiguation. However, due to the lack of relevant linguistic resources, Korean semantic role labeling research has not been sufficiently developed. In this paper, we propose an automatic predicate-argument analyzer as a first step toward constructing a Korean PropBank, a resource type widely utilized in semantic role labeling. The analyzer has two main components: a semantic lexical dictionary and an automatic predicate-argument extractor. The dictionary holds the case-frame information of verbs, and the extractor is a module that decides the semantic class of each argument of a given predicate in a syntactically annotated corpus. The analyzer developed in this research will help the construction of a Korean PropBank and will ultimately play a large role in Korean semantic role labeling. A toy sketch of the case-frame lookup follows this entry.
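
The two components described above, a case-frame dictionary and an argument extractor, are illustrated below with a toy Korean example; the frame contents, particle keys, and role labels are assumptions for illustration, not the paper's actual dictionary.

```python
# Illustrative sketch only: a toy case-frame dictionary and role assignment.
CASE_FRAMES = {
    "먹다": {"이/가": "ARG0 (eater)", "을/를": "ARG1 (thing eaten)"},
}

def label_arguments(predicate, arguments):
    """Map (phrase, particle) pairs to semantic roles using the case frame."""
    frame = CASE_FRAMES.get(predicate, {})
    return [(phrase, frame.get(particle, "ARGM")) for phrase, particle in arguments]

# Parsed clause "철수가 사과를 먹었다", with the particle kept for each argument.
args = [("철수", "이/가"), ("사과", "을/를")]
print(label_arguments("먹다", args))
# -> [('철수', 'ARG0 (eater)'), ('사과', 'ARG1 (thing eaten)')]
```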

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee;Sangwoo Kang
    • Korean Journal of Cognitive Science
    • /
    • v.34 no.3
    • /
    • pp.197-226
    • /
    • 2023
  • Abstractive text summarization is a natural language processing task that generates a short summary while preserving the content of a long text. ROUGE, a lexical-overlap based metric, is widely used in abstractive summarization benchmarks to evaluate text summarization models. Although models score very highly on it, studies report that about 30% of generated summaries are still inconsistent with the source text. This paper proposes a methodology for evaluating the performance of a summarization model without using gold-standard summaries. AggreFACT is a human-annotated dataset that classifies the types of errors made by neural text summarization models. Among all the tested settings, the two cases of errors in the generated summary and of errors occurring throughout the summary showed the highest correlations. We also observed that the proposed evaluation score correlated highly for models fine-tuned from BART and PEGASUS, which are pretrained with large-scale Transformer architectures. A toy illustration of correlating such a score with human judgments follows this entry.
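
The evaluation described above amounts to correlating a reference-free score with human consistency annotations. The sketch below shows only that final correlation step, with made-up numbers; the scores and labels are not from AggreFACT or from the paper.

```python
# Illustrative sketch only: toy numbers, not results from the paper or AggreFACT.
from scipy.stats import spearmanr

# Hypothetical per-summary scores from a reference-free metric.
metric_scores = [0.91, 0.42, 0.77, 0.30, 0.85]
# Hypothetical human labels: 1 = factually consistent, 0 = contains an error.
human_labels = [1, 0, 1, 0, 1]

correlation, p_value = spearmanr(metric_scores, human_labels)
print(f"Spearman correlation: {correlation:.3f} (p = {p_value:.3f})")
```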

YDK : A Thesaurus Developing System for Korean Language (한국어 통합정보사전 시스템)

  • Hwang, Do-Sam;Choi, Key-Sun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.9
    • /
    • pp.2885-2893
    • /
    • 2000
  • Dictionaries are indispensable for NLP (natural language processing) systems. Sophisticated algorithms in NLP systems can be fully exploited only with matching dictionaries that are built systematically on the basis of computational linguistics. Only a few dictionaries have been developed for natural language processing, and the available ones fall far short of the complete specifications required for practical use. It is therefore necessary to develop an integrated information dictionary that includes the lexical information useful for processing and understanding natural language, such as morphological, syntactic, and semantic information. In this paper, we propose a method for building such an integrated dictionary and introduce a dictionary development system. A sketch of what one integrated entry might contain follows this entry.

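To make the idea of an integrated entry concrete, the sketch below shows one possible record that bundles morphological, syntactic, and semantic fields; the field names and the sample entry are assumptions for illustration, not the schema of the YDK system.

```python
# Illustrative sketch only: field names and content are assumed, not the YDK schema.
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    lemma: str
    pos: str                                        # part of speech
    morphology: dict = field(default_factory=dict)  # e.g. irregular conjugation flags
    subcat: list = field(default_factory=list)      # syntactic subcategorization frames
    senses: list = field(default_factory=list)      # semantic information / glosses

entry = DictionaryEntry(
    lemma="먹다",
    pos="동사",
    morphology={"irregular": False},
    subcat=["NP-이/가", "NP-을/를"],
    senses=["to eat", "to consume"],
)
print(entry.lemma, entry.subcat)
```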