• Title/Summary/Keyword: Lexical Analysis

Search Results: 174

Parser as An Analysis Finisher (분석의 최종 판단자로서의 구문 분석기)

  • Yuh, Sang Hwa
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2004.05a
    • /
    • pp.677-680
    • /
    • 2004
  • A typical language-analysis pipeline consists of several stages: preprocessing, morphological analysis, part-of-speech tagging, compound-unit recognition, syntactic analysis, and semantic analysis. Ambiguity arises at every stage, and to resolve it, lexical information, POS information, and even syntactic information are used to raise precision in the stages that precede parsing. Using syntactic information as high-level knowledge at each stage, however, duplicates both the parsing work and the analysis knowledge. Moreover, in the conventional pipeline the result of each stage is treated as final, so analysis errors propagate to the following stages. This paper proposes using the parser as the final judge of the analysis results: all analysis information from the stages before parsing is handed to the parser, which performs bottom-up parsing and selects the final, optimal analysis candidates from this information. To this end, the parser does not follow the conventional restriction of receiving exactly one sentence as input. The proposed method is robust to erroneous information supplied by earlier stages (e.g., sentence-segmentation errors, POS errors, compound-unit recognition errors), thereby minimizing the possibility of analysis failure.

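The selection scheme this abstract describes, keeping every candidate analysis alive until the parser decides, can be sketched as a best-path search over an analysis lattice. The segmentations and scores below are invented for illustration, not taken from the paper:

```python
# Minimal sketch (hypothetical segmentations and scores): keep every candidate
# analysis in a lattice keyed by start position, then pick the best-scoring
# cover with dynamic programming instead of committing before parsing.

def best_path(lattice, length):
    # best[i] = (score, analyses) for the best way to cover positions [0, i)
    best = {0: (0.0, [])}
    for start in range(length):
        if start not in best:
            continue
        base_score, base_path = best[start]
        for end, analysis, score in lattice.get(start, []):
            cand = (base_score + score, base_path + [analysis])
            if end not in best or cand[0] > best[end][0]:
                best[end] = cand
    return best[length][1]

# Toy lattice over a 4-character input: each edge is (end, analysis, score).
lattice = {
    0: [(2, "AB/noun", 1.0), (1, "A/prefix", 0.3)],
    1: [(4, "BCD/noun", 0.5)],
    2: [(4, "CD/verb", 1.2), (3, "C/noun", 0.4)],
    3: [(4, "D/suffix", 0.2)],
}
print(best_path(lattice, 4))  # → ['AB/noun', 'CD/verb']
```

A real parser would score edges with grammar and lexical knowledge rather than fixed numbers, but the shape of the search is the same: earlier stages propose, the final stage disposes.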

A Corpus-Based Study on the Accuracy of Lexical Definitions in Chinese-Korean Dictionaries: Focusing on the Vocabulary of the D3H1 Region (基于汉语语料库的中韩词典词汇释义的准确性研究 - 以D3H1区的词汇为中心)

  • Gwak, Jun-Hwa
    • 중국학논총
    • /
    • no.65
    • /
    • pp.23-38
    • /
    • 2020
  • The dictionary is the most important tool for every Chinese learner to confirm the meaning and usage of words, so the accuracy of headword interpretations in the dictionary is crucial. This study discusses the accuracy and adequacy of headword interpretations in the Chinese-Korean dictionary using a Chinese corpus and Baidu. The scope of the study is 3,000 words in the D3H1 region. According to the results, the main problems of the vocabulary in this region fall into three categories: problems of lexical interpretation, problems of missing interpretation, and other problems. The D3H1 region contains a total of 719 low-frequency vocabulary items, of which 54 headword interpretations are inaccurate or inappropriate. This study is a detailed investigation and analysis of the problems of these 54 items.

Meta Information Retrieval using Sentence Analysis of Korean Dialogue Style (한국어 대화체 문장 분석을 이용한 메타 정보검색)

  • Park, In-Chul (박인철)
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.10
    • /
    • pp.703-712
    • /
    • 2003
  • Today, with the development of communication networks, documents on the internet keep increasing in number, and an information retrieval system that can efficiently acquire the necessary information is required. Most information retrieval systems retrieve documents using a simple keyword or a boolean combination of keywords, but this method is hard for novice users and, compared with a dialogue-style query, falls short in both convenience and precise understanding of the query. This paper therefore proposes a method to cope with these problems and designs and implements a meta query processing system for information retrieval using Korean dialogue sentences. The implemented system generates a new boolean query from a given Korean dialogue sentence and resolves lexical ambiguities through morphological analysis, syntactic analysis, and thesaurus-based query expansion.

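The query-generation step described above can be sketched roughly as follows. The stopword list, toy thesaurus, and function name are hypothetical; the real system uses Korean morphological and syntactic analysis rather than whitespace tokenization:

```python
# Hypothetical sketch: turn a dialogue-style query into a boolean query by
# dropping function words and OR-expanding content words with a thesaurus.

STOPWORDS = {"please", "find", "me", "about", "some", "documents", "on", "the"}
THESAURUS = {"car": ["automobile"], "price": ["cost"]}  # toy thesaurus

def to_boolean_query(sentence):
    # keep content words only, then AND the OR-groups of synonyms
    terms = [w for w in sentence.lower().split() if w not in STOPWORDS]
    groups = []
    for term in terms:
        synonyms = [term] + THESAURUS.get(term, [])
        groups.append("(" + " OR ".join(synonyms) + ")")
    return " AND ".join(groups)

print(to_boolean_query("Please find me some documents about car price"))
# → (car OR automobile) AND (price OR cost)
```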

An analysis of Speech Acts for Korean Using Support Vector Machines (지지벡터기계(Support Vector Machines)를 이용한 한국어 화행분석)

  • En Jongmin;Lee Songwook;Seo Jungyun
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.365-368
    • /
    • 2005
  • We propose a speech act analysis method for Korean dialogue using Support Vector Machines (SVM). We use the lexical form of each word, its part-of-speech (POS) tag, and POS-tag bigrams as sentence features, and the context of the previous utterance as context features. We select informative features by the chi-square statistic. After training the SVM with the selected features, the SVM classifiers determine the speech act of each utterance. In experiments, we obtained an overall accuracy of 90.54% on a dialogue corpus for the hotel-reservation domain.
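The chi-square feature selection mentioned in the abstract can be sketched in a few lines. The contingency counts and feature names below are toy values, not the paper's hotel-reservation data:

```python
# Sketch of chi-square feature selection: score each candidate feature
# against the target class from a 2x2 contingency table and keep the
# top-scoring features. Counts are illustrative only.

def chi_square(n11, n10, n01, n00):
    """2x2 chi-square: n11 = feature present & class, n10 = present & other,
    n01 = absent & class, n00 = absent & other."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den if den else 0.0

def select_features(feature_counts, k):
    """feature_counts maps feature -> (n11, n10, n01, n00)."""
    scored = sorted(feature_counts.items(),
                    key=lambda kv: chi_square(*kv[1]), reverse=True)
    return [feature for feature, _ in scored[:k]]

counts = {
    "reserve/VV": (40, 5, 10, 45),   # strongly tied to one speech act
    "hello/IC":   (20, 22, 30, 28),  # nearly independent of the class
}
print(select_features(counts, 1))  # → ['reserve/VV']
```

After selection, only the surviving features would be fed to the SVM trainer.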

A Korean Mobile Conversational Agent System (한국어 모바일 대화형 에이전트 시스템)

  • Hong, Gum-Won;Lee, Yeon-Soo;Kim, Min-Jeoung;Lee, Seung-Wook;Lee, Joo-Young;Rim, Hae-Chang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.6
    • /
    • pp.263-271
    • /
    • 2008
  • This paper presents a Korean conversational agent system for a mobile environment built with natural language processing techniques. The aim of a conversational agent in a mobile environment is to provide a natural language interface and enable more natural interaction between a human and an agent. Constructing such an agent requires various natural language understanding components and effective utterance generation methods. To understand spoken-style utterances, we perform morphosyntactic analysis and shallow semantic analysis, including modality classification and predicate-argument structure analysis; to generate a system utterance, we perform example-based search that considers lexical, syntactic, and semantic similarity.


A Corpus-Based Study of the Use of HEART and HEAD in English

  • Oh, Sang-suk
    • Language and Information
    • /
    • v.18 no.2
    • /
    • pp.81-102
    • /
    • 2014
  • The purpose of this paper is to provide corpus-based quantitative analyses of HEART and HEAD in order to examine their actual usage and to consider some cognitive-linguistic aspects associated with their use. Two corpora, COCA and COHA, are used for the analysis. The analysis of the COCA corpus reveals that the total frequency of HEAD is much higher than that of HEART, and that the figurative use of HEART (60%) is nearly twice as common as its literal use (32%); by contrast, the figurative use of HEAD (41%) is only slightly higher than its literal use (38%). Among the four genres, both lexemes occur most frequently in fiction and then in magazines. Over the past two centuries the use of HEART has been steadily decreasing, while the use of HEAD has been steadily increasing. It is assumed that the decreasing use of HEART has partially to do with the decrease in its figurative use, and that the increasing use of HEAD is attributable to its diverse meanings, the increase in its lexical use, and the partial increase in its figurative use. The analysis of the verbs and adjectives collocating with HEART and HEAD, as well as their modifying and predicating forms, also provides relevant information about the usage of the two lexemes. This paper shows that the quantitative information aids understanding not only of the actual usage of the two lexemes but also of the cognitive forces working behind it.


A Study on Keywords Extraction based on Semantic Analysis of Document (문서의 의미론적 분석에 기반한 키워드 추출에 관한 연구)

  • Song, Min-Kyu;Bae, Il-Ju;Lee, Soo-Hong;Park, Ji-Hyung
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2007.11a
    • /
    • pp.586-591
    • /
    • 2007
  • Systems that handle documents, such as knowledge management systems, information retrieval systems, and digital library systems, need to structure and store documents. The first step in extracting the information a document contains is selecting its keywords. The algorithm most widely used in previous work is TF-IDF, which combines term frequency (TF) with inverse document frequency (IDF). TF-IDF, however, has the limitation that it does not reflect the meaning of the document. To compensate, this study employs three methods: first, a document-structure technique that weights a word by its position in the document and its use in particular sections such as the introduction and conclusion; second, a method that designates particular phrases, such as emphatic and comparative expressions, as a controlled vocabulary; and third, a linguistic technique that analyzes the dictionary meanings of words and uses them as metadata. In this way, semantic analysis of the document is performed during keyword extraction, improving its effectiveness.

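The TF-IDF baseline that the paper sets out to improve can be sketched as follows; the documents are toy data, and the structural, controlled-vocabulary, and dictionary-based cues the paper adds would be layered on top of these scores:

```python
# Plain TF-IDF keyword extraction: score each word of one document by
# term frequency times inverse document frequency, then keep the top k.

import math
from collections import Counter

def tf_idf_keywords(docs, doc_index, k=2):
    # document frequency: in how many documents does each word appear?
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    n_docs = len(docs)
    tf = Counter(docs[doc_index])
    scores = {w: (tf[w] / len(docs[doc_index])) * math.log(n_docs / df[w])
              for w in tf}
    return [w for w, _ in sorted(scores.items(),
                                 key=lambda kv: kv[1], reverse=True)[:k]]

docs = [
    ["keyword", "extraction", "semantic", "analysis", "semantic"],
    ["information", "retrieval", "system", "analysis"],
    ["digital", "library", "system"],
]
print(tf_idf_keywords(docs, 0))
```

Words frequent in one document but rare across the collection score highest, which is exactly the behavior the paper notes is blind to meaning.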

Analysis on Vocabulary Used in School Newsletters of Korean Elementary Schools: Focus on the Areas of Busan, Ulsan and Gyeongnam (한국 초등학교 가정통신문의 어휘 특성 연구 -부산·울산·경남 지역을 중심으로-)

  • Kang, Hyunju
    • Journal of Korean language education
    • /
    • v.29 no.2
    • /
    • pp.1-23
    • /
    • 2018
  • This study aims to analyze the words and phrases frequently used in newsletters from Korean elementary schools. To this end, high-frequency words in school newsletters were selected and classified into content and function words, and the domains of the words were examined. For this study, 1,000 school newsletters were collected in the areas of Busan, Ulsan, and Gyeongnam. In terms of parts of speech, nouns, especially common nouns, appeared most frequently in the newsletters, followed by verbs and adjectives. This result suggests that, for immigrant women with a basic knowledge of the Korean language, providing translated words is a useful way to convey the message of school newsletters. School-related terms, such as those for facilities, regulations, and school activities, as well as Chinese-derived vocabulary, are also common in the newsletters. In the case of verbs, words conveying requests and suggestions are used the most; frequent adjectives are those related to positive value and evaluation and those describing weather and seasons.

An analysis on streetscape using the Model of Emotion Evaluation (가로경관에 대한 감성평가모형 적용 분석 연구)

  • Lee, Jin-Sook;Kim, Ji-Hye
    • Science of Emotion and Sensibility
    • /
    • v.16 no.2
    • /
    • pp.149-156
    • /
    • 2013
  • In this study, the Model of Emotion Evaluation, an emotion-analysis method widely applied in environmental assessment, was divided into two parts, an abbreviated model and an inferential model, through a pilot study and an experiment. In addition, the attributes of the evaluation vocabularies of two further representative models, the EPA Model and the PAD Model, were analyzed experimentally, and the results show a substantial difference in the development approach and lexical constitution of the two models. Factor analysis also confirmed that the vocabularies were abbreviated according to the respective models. Similarity relationships were analyzed using multidimensional scaling, and the results show that a mutual relationship was established to some degree. Based on this, we conclude that, rather than applying the Model of Emotion Evaluation in a biased way, a more objective image analysis is possible by analyzing the characteristics of the model before applying it. In this study the evaluation target was confined to the environmental assessment of streetscapes; continued research on the Model of Emotion Evaluation that compares evaluation models in various areas is needed.


Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System
    • /
    • v.5 no.3
    • /
    • pp.147-154
    • /
    • 2018
  • Natural language processing (NLP) is an emerging research area that studies how machines can be used to perceive and process text written in natural languages. Different tasks can be performed on natural languages by analyzing them through annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis; these tasks depend on the morphological structure of the particular language. The focus of this work is part-of-speech (POS) tagging for Hindi. POS tagging, also known as grammatical tagging, is the process of assigning a grammatical category, such as noun, verb, time, date, or number, to each word of a given text. Hindi is the most widely used and official language of India, and it is among the top five most spoken languages of the world. For English and other languages a diverse range of POS taggers is available, but these cannot be applied to Hindi, since Hindi is one of the most morphologically rich languages and its morphological structure differs significantly from theirs. This work therefore presents a POS tagger for Hindi using a hybrid approach that combines probability-based and rule-based methods. Known words are tagged with a unigram probability model, whereas unknown words are tagged using various lexical and contextual features. Finite-state automata are constructed to express the rules, which are then implemented as regular expressions; all pattern-based tags, such as time, date, number, and special symbols, are handled this way. A tagset of 29 standard part-of-speech tags is prepared for the task, including two unique tags, a date tag and a time tag, which support all possible formats. The aim of the approach is to increase the correctness of automatic Hindi POS tagging while bounding the need for a large hand-built corpus: the probability-based model increases automatic tagging coverage, and the rule-based model bounds the need for an already-trained corpus. Trained on a very small labeled set (around 9,000 words), the approach yields a best precision of 96.54% (average 95.08%) and a best accuracy of 91.39% (average 88.15%).
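The hybrid scheme can be sketched roughly as below; the tag names, patterns, and unigram entries are illustrative, not the paper's actual 29-tag tagset or training corpus:

```python
# Sketch of the hybrid tagger: pattern-based tags (date, time, number) come
# from regular expressions, known words from a unigram model, and anything
# else falls back to a default tag. All data here is illustrative.

import re

PATTERNS = [
    (re.compile(r"^\d{1,2}[-/]\d{1,2}[-/]\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+$"), "NUM"),
]

# Unigram model: word -> its most probable tag in the (toy) training data.
UNIGRAM = {"raam": "NNP", "khaata": "VM", "seb": "NN"}

def tag(word):
    # rule-based pattern tags take priority over the probability model
    for pattern, label in PATTERNS:
        if pattern.match(word):
            return label
    # known words use the unigram model; a real system would score unknown
    # words with lexical and contextual features instead of a fixed tag
    return UNIGRAM.get(word, "UNK")

print([(w, tag(w)) for w in ["raam", "15/08/2018", "9:30", "seb", "khaayega"]])
```

Ordering the pattern list before the lexicon lookup is what keeps numeric strings from being mistagged by the probability model.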