• Title/Summary/Keyword: 어휘 통계 (lexical statistics)


The Study on the Model of Extracting Collocations from Corpus in Korean Using the Statistical Tools (통계 기법을 이용한 연어 추출 모형 연구)

  • Ahn, Sung-Min
    • Annual Conference on Human and Language Technology
    • /
    • 2010.10a
    • /
    • pp.162-165
    • /
    • 2010
  • Among the types of phrasal information that appear through co-occurrence, the study of collocations can contribute greatly to the development of applied linguistics. A collocation is a phrasal construction in which words stand in a restricted combinatory relation and co-occur with high probability. Research on such collocational constructions is drawing growing interest, particularly in fields such as machine translation and lexicography. This study proposes the use of several statistical techniques, including the t-test, mutual information, and conditional probability, for extracting collocations. We examine how collocation extraction changes as each technique is applied and, by exploring which technique is most appropriate, aim to suggest a direction for future collocation extraction.

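The abstract above proposes scoring candidate collocations with a t-test, mutual information, and conditional probability. Below is a minimal sketch of how such bigram scores might be computed from raw token counts; the toy corpus and the exact score formulas (standard PMI and t-score approximations) are illustrative assumptions, not the authors' actual setup.

```python
import math
from collections import Counter

def bigram_scores(tokens):
    """Score each adjacent word pair with PMI, t-score, and P(w2|w1)."""
    n = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    scores = {}
    for (w1, w2), f12 in bigrams.items():
        p1, p2 = unigrams[w1] / n, unigrams[w2] / n
        p12 = f12 / (n - 1)
        pmi = math.log2(p12 / (p1 * p2))                 # mutual information
        t = (p12 - p1 * p2) / math.sqrt(p12 / (n - 1))   # t-score approximation
        cond = f12 / unigrams[w1]                        # conditional probability P(w2|w1)
        scores[(w1, w2)] = (pmi, t, cond)
    return scores

# Toy corpus; in practice the counts would come from a large Korean corpus.
corpus = "강한 비 가 내리다 강한 바람 이 불다 강한 비 가 그치다".split()
for pair, (pmi, t, cond) in sorted(bigram_scores(corpus).items(),
                                   key=lambda kv: -kv[1][0]):
    print(pair, f"PMI={pmi:.2f} t={t:.2f} P(w2|w1)={cond:.2f}")
```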

Analysis of the English Textbooks in North Korean First Middle School (북한 제1중학교 영어교과서 분석)

  • Hwang, Seo-yeon;Kim, Jeong-ryeol
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.11
    • /
    • pp.242-251
    • /
    • 2017
  • For the purposes of this research, a corpus was created from the English textbooks of the "First Middle School" for the gifted in North Korea, and their linguistic characteristics were analyzed using that corpus. Although there have been many studies identifying the traits of English textbooks in North Korea's general middle schools, not much focus has been placed on the English textbooks used at North Korea's First Middle School. First, the structure of the English textbooks for the first, second, fourth, and sixth grades procured from the Information Center on North Korea was reviewed, after which their corpus was created. Then, using WordSmith Tools 7.0, the linguistic properties and high-frequency content words that appeared in the first-grade English textbook were analyzed in detail. The basic statistical data indicated that while vocabulary size did not increase as students progressed through the grades, the words used tended to diversify incrementally. Meanwhile, the distribution of high-frequency content words by grade showed that the content words used in the English texts differed substantially across grades, and that this difference was determined by the subject matter of the texts.
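The analysis above relies on descriptive corpus statistics (token and type counts, vocabulary diversity, high-frequency content words) produced with WordSmith Tools 7.0. A rough Python equivalent of those statistics is sketched below; the sample text and stopword list are placeholders, not data from the North Korean textbooks.

```python
from collections import Counter

def corpus_stats(text, stopwords=frozenset()):
    """Return token count, type count, type-token ratio, and top content words."""
    tokens = [w.lower() for w in text.split() if w.isalpha()]
    types = set(tokens)
    content = Counter(w for w in tokens if w not in stopwords)
    return {
        "tokens": len(tokens),
        "types": len(types),
        "ttr": len(types) / len(tokens) if tokens else 0.0,
        "top_content_words": content.most_common(5),
    }

# Placeholder text standing in for one grade's textbook corpus.
sample = "The farmers grow rice and the workers build machines for the country"
print(corpus_stats(sample, stopwords={"the", "and", "for"}))
```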

A Model of English Part-Of-Speech Determination for English-Korean Machine Translation (영한 기계번역에서의 영어 품사결정 모델)

  • Kim, Sung-Dong;Park, Sung-Hoon
    • Journal of Intelligence and Information Systems
    • /
    • v.15 no.3
    • /
    • pp.53-65
    • /
    • 2009
  • Part-of-speech determination is necessary for resolving part-of-speech ambiguity in English-Korean machine translation. Part-of-speech ambiguity causes high parsing complexity and makes accurate translation difficult. To solve this problem, the ambiguity must be resolved after lexical analysis and before parsing. This paper proposes the CatAmRes model, which resolves part-of-speech ambiguity, and compares its performance with that of other part-of-speech tagging methods. The CatAmRes model determines the part of speech using a probability distribution obtained from Bayesian network training and statistical information, both based on the Penn Treebank corpus. The proposed CatAmRes model consists of a Calculator and a POSDeterminer. The Calculator computes the degree of appropriateness of each part of speech, and the POSDeterminer determines the part of speech of the word based on the calculated values. In the experiments, we measure the performance using sentences from the WSJ, Brown, and IBM corpora.

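CatAmRes computes an appropriateness score for each candidate part of speech and then picks the best one. The sketch below is a hypothetical stand-in for the Calculator/POSDeterminer split that uses simple lexical and tag-bigram probabilities instead of the paper's Bayesian network; the counts are invented, not estimated from the Penn Treebank.

```python
from collections import Counter

# Hypothetical counts that would normally be estimated from the Penn Treebank.
lexical = {"saw": Counter({"VBD": 90, "NN": 10})}
tag_bigram = {("PRP", "VBD"): 0.6, ("PRP", "NN"): 0.1}

def appropriateness(prev_tag, word, tag):
    """Calculator: combine lexical probability with tag-transition probability."""
    total = sum(lexical.get(word, Counter()).values()) or 1
    p_lex = lexical.get(word, Counter())[tag] / total
    p_trans = tag_bigram.get((prev_tag, tag), 0.01)  # small default for unseen pairs
    return p_lex * p_trans

def determine_pos(prev_tag, word, candidates):
    """POSDeterminer: pick the candidate tag with the highest appropriateness."""
    return max(candidates, key=lambda tag: appropriateness(prev_tag, word, tag))

print(determine_pos("PRP", "saw", ["VBD", "NN"]))  # -> VBD
```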

Building Korean Multi-word Expression Lexicons and Grammars Represented by Finite-State Graphs for FbSA of Cosmetic Reviews (화장품 후기글의 자질기반 감성분석을 위한 다단어 표현의 유한그래프 사전 및 문법 구축)

  • Hwang, Chang-Hoe;Yoo, Gwang-Hoon;Choi, Seong-Yong;Shin, Dong-Heouk;Nam, Jee-Sun
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.400-405
    • /
    • 2018
  • This study aims to present a methodology for building finite-state graph lexicons and grammars of the important multi-word expressions (MWEs) realized in the domain of Korean cosmetics reviews, for feature-based sentiment analysis (FbSA), and to evaluate the performance of the constructed lexicons and grammars. We formalize the lexico-syntactic properties of MWEs, a long-standing issue in natural language processing (NLP), as local grammar graphs (LGG). By applying the DECO Korean electronic dictionary to the cosmetics review corpus, we obtained lexical frequency statistics and, through linguistic analysis, classified the expressions into four subcategories of polarity MWEs (Polarity-MWE) and topic MWEs (Topic-MWE). We also expanded the resources through double propagation, iteratively applying the lexico-syntactic properties of the interrelations between the modules. The resulting large-scale MWE finite-state graph lexicon, DECO-MWE, achieved harmonic means of 0.844 (Pol-MWE) and 0.742 (Top-MWE) in testing. These results confirm that the proposed methodology for building MWE language resources can be applied to various domains and will serve as an important resource for feature-based sentiment analysis.

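The resource described above encodes MWEs as finite-state graphs (LGG) built on the DECO dictionary. As a loose illustration only, the sketch below matches a few hypothetical polarity MWE patterns against review text with plain regular expressions, which is far simpler than the LGG formalism and uses made-up patterns rather than entries from DECO-MWE.

```python
import re

# Hypothetical polarity MWE patterns for cosmetics reviews (not from DECO-MWE).
POLARITY_MWE = {
    r"속당김\s*이?\s*없": "positive",     # "no inner dryness"
    r"발림성\s*이?\s*좋": "positive",     # "spreads well"
    r"트러블\s*이?\s*올라오": "negative",  # "breakouts appear"
}

def match_polarity_mwe(sentence):
    """Return (pattern, polarity) pairs whose MWE pattern occurs in the sentence."""
    return [(pat, pol) for pat, pol in POLARITY_MWE.items()
            if re.search(pat, sentence)]

print(match_polarity_mwe("이 크림은 발림성이 좋고 속당김이 없어요"))
```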

Analysis of affective words on photographic images and the effects of color on the images (사진 이미지와 관련된 감성 어휘 분석 및 색 유무에 따른 감성 반응 비교)

  • 박수진;정우현;한재현;신수진
    • Science of Emotion and Sensibility
    • /
    • v.7 no.1
    • /
    • pp.41-49
    • /
    • 2004
  • The affective words associated with photographic images were analyzed and a model was constructed. Based on this model, the effects of color on affective responses were studied. In Study 1, photographic images with various subjects and techniques were presented and the affective responses were collected. A factor analysis using the principal axis method showed that about 42% of the variance in the affective words could be explained by three factors, which were named positive-negative, dynamic-static, and light-heavy, respectively. In Study 2, the effects of color on affective responses were evaluated on these three basic dimensions. Ninety representative color images were converted to black-and-white images, and each of the 180 images was rated on the three affective scales. The t-tests showed that the effect of color was statistically significant on all three affective scales. The achromatic images were perceived as more negative, more static, and heavier than the chromatic images.

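Study 2 compares affective ratings of the same images in color and in black and white using t-tests on three bipolar scales. A minimal sketch of that comparison with SciPy follows; the rating arrays are randomly generated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder ratings on the positive-negative scale (higher = more positive),
# one value per image; the real study used 90 color / 90 black-and-white images.
chromatic = rng.normal(loc=0.3, scale=1.0, size=90)
achromatic = rng.normal(loc=-0.2, scale=1.0, size=90)

# Paired t-test, since each black-and-white image is a converted color image.
t, p = stats.ttest_rel(chromatic, achromatic)
print(f"t = {t:.2f}, p = {p:.4f}")
```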

Applying Randomization Tests to Collocation Analyses in Large Corpora (언어의 공기관계 분석을 위한 임의화검증의 응용)

  • Yang Kyung-Sook;Kim HeeYoung
    • The Korean Journal of Applied Statistics
    • /
    • v.18 no.3
    • /
    • pp.583-595
    • /
    • 2005
  • Contingency tables are used to compare counts of n-grams to determine whether an n-gram is a true collocation, meaning that the words that make up the n-gram are highly associated in the text. Several statistical measures are used for identifying collocations, such as the Kulczynski coefficient, Ochiai coefficient, Fager and McGowan coefficient, Yule coefficient, mutual information, and chi-square. The main problem is that these measures are based on the assumption of a normal or approximately normal distribution of the variables being sampled. While this assumption is valid in most instances, it is not valid when comparing the rates of occurrence of rare events, and texts are composed mostly of rare events. In this paper we briefly review some statistics for testing the association of two words. We propose randomization tests for evaluating significance levels when analyzing collocations in large corpora. A related graph can be used to compare different test statistics that can be applied to the same contingency table.
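The core idea is to replace normal-theory significance tests with a randomization (permutation) test when co-occurrence counts are sparse. The sketch below shuffles token positions to build a null distribution for the observed count of an adjacent word pair; the toy corpus and number of trials are illustrative choices.

```python
import random

def randomization_test(tokens, w1, w2, trials=2000, seed=0):
    """Estimate how often a shuffled corpus yields at least the observed
    number of adjacent (w1, w2) pairs."""
    def count_pairs(seq):
        return sum(1 for a, b in zip(seq, seq[1:]) if a == w1 and b == w2)

    observed = count_pairs(tokens)
    rng = random.Random(seed)
    shuffled = list(tokens)
    at_least = 0
    for _ in range(trials):
        rng.shuffle(shuffled)
        if count_pairs(shuffled) >= observed:
            at_least += 1
    return observed, at_least / trials  # observed count, empirical p-value

corpus = ("strong tea " * 5 + "strong wind " * 2 + "hot tea " * 3).split()
print(randomization_test(corpus, "strong", "tea"))
```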

Noun Sense Disambiguation Based-on Corpus and Conceptual Information (말뭉치와 개념정보를 이용한 명사 중의성 해소 방법)

  • 이휘봉;허남원;문경희;이종혁
    • Korean Journal of Cognitive Science
    • /
    • v.10 no.2
    • /
    • pp.1-10
    • /
    • 1999
  • This paper proposes a noun sense disambiguation method based on corpus and conceptual information. Previous research has restricted the use of linguistic knowledge to the lexical level. Since knowledge extracted from a corpus is stored in the words themselves, such methods require a large amount of space for the knowledge and suffer from a low recall rate. In contrast, we resolve noun sense ambiguity using concept co-occurrence information extracted from an automatically sense-tagged corpus. In an experimental evaluation the method achieved, on average, a precision of 82.4%, an improvement over the baseline of 14.6%. Considering that the test corpus is completely unrelated to the learning corpus, this is a promising result.

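The method resolves noun senses by comparing concept co-occurrence information learned from a sense-tagged corpus against the concepts found in the current context. A toy sketch of that idea follows; the sense labels, concept inventory, and counts are invented for illustration.

```python
from collections import Counter

# Hypothetical concept co-occurrence counts from a sense-tagged corpus:
# sense -> Counter of concepts observed in its contexts.
CONCEPT_COOC = {
    "bank/finance": Counter({"money": 40, "loan": 25, "river": 1}),
    "bank/river": Counter({"water": 30, "fishing": 12, "money": 2}),
}

def disambiguate(context_concepts, senses=CONCEPT_COOC):
    """Pick the sense whose concept co-occurrence counts best cover the context."""
    def score(sense):
        counts = senses[sense]
        total = sum(counts.values())
        return sum(counts[c] / total for c in context_concepts)
    return max(senses, key=score)

print(disambiguate({"water", "fishing"}))  # -> bank/river
```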

Two-Level Part-of-Speech Tagging for Korean Text Using Hidden Markov Model (은닉 마르코프 모델을 이용한 두단계 한국어 품사 태깅)

  • Lee, Sang-Zoo;Lim, Heui-Suk;Rim, Hae-Chang
    • Annual Conference on Human and Language Technology
    • /
    • 1994.11a
    • /
    • pp.305-312
    • /
    • 1994
  • Part-of-speech tagging is the task of adding accurate part-of-speech information to a corpus. Many words are ambiguous, having more than one part of speech, and tagging resolves this ambiguity using local context. In Korean, part-of-speech ambiguity arises from various causes. In general, ambiguity caused by homographic morphemes with different parts of speech can be resolved with contextual and lexical probabilities, whereas ambiguity caused by allomorphic morphemes with the same part of speech can be resolved only with mutual information or semantic information. Existing Korean tagging methods, however, have tried to resolve all part-of-speech ambiguity using only contextual and lexical probabilities. This paper presents a two-level tagging method that minimizes ambiguity at the eojeol (word-phrase) tagging level and then selects one of the remaining candidates at the morpheme tagging level. Because the proposed eojeol tagging method uses simplified eojeol tags, it is independent of the tag set, and because it maps a large number of eojeols onto a small number of pseudo-classes, it requires only a small amount of statistical information. Moreover, since it uses a hidden Markov model, it can be trained from untagged raw corpora, and since it uses a small number of parameters and the Viterbi algorithm, tagging is efficient.

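The tagger is a hidden Markov model decoded with the Viterbi algorithm over simplified tags. Below is a compact, generic Viterbi sketch; the two-tag model and its hand-set probabilities are placeholders, not the paper's eojeol or morpheme tag sets.

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for the observations."""
    V = [{s: (start_p[s] * emit_p[s].get(observations[0], 1e-6), [s])
          for s in states}]
    for obs in observations[1:]:
        layer = {}
        for s in states:
            # Best previous state for reaching s with this observation.
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s].get(obs, 1e-6),
                 V[-1][prev][1] + [s])
                for prev in states)
            layer[s] = (prob, path)
        V.append(layer)
    return max(V[-1].values())[1]

# Placeholder two-tag model (N = noun-like eojeol, V = verb-like eojeol).
states = ["N", "V"]
start_p = {"N": 0.6, "V": 0.4}
trans_p = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.6, "V": 0.4}}
emit_p = {"N": {"학교": 0.5, "밥": 0.4}, "V": {"간다": 0.5, "먹는다": 0.4}}

print(viterbi(["학교", "간다"], states, start_p, trans_p, emit_p))  # -> ['N', 'V']
```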

Automatic Generation of Multiple-Choice Questions Based on Statistical Language Model (통계 언어모델 기반 객관식 빈칸 채우기 문제 생성)

  • Park, Youngki
    • Journal of The Korean Association of Information Education
    • /
    • v.20 no.2
    • /
    • pp.197-206
    • /
    • 2016
  • Fill-in-the-blank questions with multiple choices are widely used in classrooms to check whether students understand what is being taught. Although many algorithms have been proposed for generating this type of question, most of them focus on preparing sentences with blanks rather than on generating the choices. In this paper, we propose a novel algorithm for generating multiple choices given a sentence with a blank. Because the algorithm is based on a statistical language model, it produces relatively unbiased results and can adjust the level of difficulty with ease. The experimental results show that our approach automatically produces multiple choices similar to those written by exam writers.
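The algorithm ranks candidate distractors for a blank with a statistical language model, so that distractors can be made more or less plausible to tune difficulty. The toy sketch below uses a bigram model; the vocabulary, counts, and example sentence are made up for illustration.

```python
from collections import Counter

# Hypothetical bigram counts standing in for a language model trained on a corpus.
BIGRAMS = Counter({("strong", "coffee"): 50, ("strong", "tea"): 30,
                   ("strong", "rain"): 2, ("strong", "computer"): 0})
LEFT_COUNT = Counter({"strong": 100})

def rank_distractors(left_word, answer, candidates):
    """Order distractors by bigram probability after the word left of the blank:
    plausible enough to be tempting, but never the correct answer itself."""
    def prob(w):
        return BIGRAMS[(left_word, w)] / LEFT_COUNT[left_word]
    return sorted((w for w in candidates if w != answer), key=prob, reverse=True)

# Sentence: "I'd like a strong ___ ." with the answer "coffee".
print(rank_distractors("strong", "coffee", ["tea", "rain", "computer"]))
```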

User Behavior Classification for Contents Configuration of Life-logging Application (라이프로깅 애플리케이션 콘텐츠 구성을 위한 사용자 행태 분류)

  • Kwon, Jieun;Kwak, Sojung;Lim, Yoon Ah;Whang, Min Cheol
    • Science of Emotion and Sensibility
    • /
    • v.19 no.4
    • /
    • pp.13-20
    • /
    • 2016
  • Recently, life-logging services, which measure and record users' daily lives and share them with others, have been increasing. In particular, as application-based life-logging services have become popular with the development of wearable devices and smartphones, the contents of these services are produced from user behavior and provided in infographic menu form. The purpose of this paper is to extract and classify user behaviors for composing the content items of a life-logging service. First, we discuss the definition and characteristics of life-logging and survey content based on user behavior related to life-logging in publications including theses, articles, and books. Second, we extract and classify user behaviors to build the contents of a life-logging service. We gather users' action words from published materials, prior research, and the contents of existing life-logging services. The collected words are then analyzed through focus group interviews (FGI) and a survey. As a result, 39 words suitable for life-logging service contents are extracted after verifying their suitability. Finally, the 39 extracted words are classified into 19 categories suggested by the surveys, statistical analysis, and FGI: 'Eat', 'Keep house', 'Diet', 'Travel', 'Work out', 'Transit', 'Shoot', 'Meet', 'Feel', 'Talk', 'Care for', 'Drive', 'Listen', 'Go online', 'Sleep', 'Go', 'Work', 'Learn', and 'Watch'. We also discuss the role and limitations of these results for configuring the contents of life-logging applications.