• Title/Summary/Keyword: 어절 분석 (eojeol analysis)

Search Results: 280

The Sensitivity Analysis for Customer Feedback on Social Media (소셜 미디어 상 고객피드백을 위한 감성분석)

  • Song, Eun-Jee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.4
    • /
    • pp.780-786
    • /
    • 2015
  • Social media such as social network services contain a large volume of spontaneous customer opinions, so companies now collect and analyze customer feedback with systems that process big data from social media in order to run their businesses efficiently. However, data collected from online sites are difficult to analyze accurately with existing morpheme analyzers because they contain spacing and spelling errors. In addition, many online sentences are short and do not carry enough candidate senses, so established sense selection methods such as mutual information and the chi-square statistic cannot perform emotional classification on them. To solve these problems, this paper proposes a module that revises words using initial consonants/vowels and a phrase pattern dictionary, together with a sense selection method that uses the priority of word classes in a sentence. Based on the word classes extracted by a morpheme analyzer, the proposed mechanisms separate and analyze predicates and substantives, build a property database attached to the relevant word classes, and extract positive/negative emotions from the accumulated property database.
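
As a rough illustration of the kind of pipeline this abstract describes, the sketch below scores a sentence as positive or negative from a small polarity lexicon keyed by word class, letting predicates take priority over substantives when both carry polarity. The lexicon entries, the tagger output format, and the priority order are hypothetical stand-ins, not the authors' actual module.

```python
# Hypothetical sketch of polarity extraction driven by word-class priority.
# The tiny lexicon and the (word, word_class) input format are illustrative only.

POLARITY_LEXICON = {
    # (word, word_class) -> polarity score
    ("좋다", "predicate"): +1.0,
    ("나쁘다", "predicate"): -1.0,
    ("최고", "substantive"): +0.5,
    ("불만", "substantive"): -0.5,
}

# Assumed priority: predicates outweigh substantives when both are present.
CLASS_PRIORITY = {"predicate": 2, "substantive": 1}

def classify_sentence(tagged_words):
    """tagged_words: list of (word, word_class) pairs from a morpheme analyzer."""
    best_priority, score = 0, 0.0
    for word, word_class in tagged_words:
        polarity = POLARITY_LEXICON.get((word, word_class))
        if polarity is None:
            continue
        priority = CLASS_PRIORITY.get(word_class, 0)
        if priority > best_priority:      # a higher-priority word class overrides
            best_priority, score = priority, polarity
        elif priority == best_priority:   # same class: accumulate
            score += polarity
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify_sentence([("서비스", "substantive"), ("좋다", "predicate")]))  # positive
```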

A Korean Grammar Checker based on the Trees Resulted from a Full Parser (전체 문장 분석에 기반한 한국어 문법 검사기)

  • 이공주;황선영;김지은
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.10
    • /
    • pp.992-999
    • /
    • 2003
  • The purpose of a grammar checker is to find grammatically erroneous expressions in a sentence and to provide appropriate suggestions for them. To find those errors, a grammar checker should parse the whole input sentence, which is a highly time-consuming job. For this reason, most Korean grammar checkers adopt a partial parser that can analyze a fragment of a sentence without ambiguity. This paper presents a Korean grammar checker that uses a full parser to find grammatical errors. This approach allows the grammar checker to critique errors between two words in a long-distance relationship within a sentence. As a result, it improves the accuracy of error correction, but it may come at the expense of a decrease in performance. The Korean grammar checker described in this paper is implemented with 65 rules for checking and correcting grammatical errors, and it shows 96.49% checking accuracy on a test corpus of 7 million words.
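
To make the contrast with partial parsing concrete, here is a minimal, hypothetical sketch of a rule that can only fire once the whole sentence has been parsed: it checks agreement between two words that may be far apart in the surface string but are directly linked in the parse tree. The tree representation and the single rule are invented for illustration; the paper's 65 rules are not reproduced here.

```python
# Hypothetical sketch: a checking rule applied over a full dependency parse.
# A parse is modeled as a list of (head_index, dependent_index, relation) arcs
# over the token list; both the format and the rule are illustrative.

def check_honorific_agreement(tokens, arcs):
    """Flag subject-predicate pairs where an honorific subject marker ('께서')
    is not matched by an honorific '시' in the predicate."""
    errors = []
    for head, dep, relation in arcs:
        if relation != "subject":
            continue
        subject, predicate = tokens[dep], tokens[head]
        if subject.endswith("께서") and "시" not in predicate:
            errors.append(
                f"honorific subject '{subject}' but plain predicate '{predicate}'"
            )
    return errors

tokens = ["할머니께서", "어제", "시장에", "가다"]
arcs = [(3, 0, "subject"), (3, 1, "adverbial"), (3, 2, "adverbial")]
print(check_honorific_agreement(tokens, arcs))  # flags the subject-predicate pair
```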

Modification Distance Model using Headible Path Contexts for Korean Dependency Parsing (지배가능 경로 문맥을 이용한 의존 구문 분석의 수식 거리 모델)

  • Woo, Yeon-Moon;Song, Young-In;Park, So-Young;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.2
    • /
    • pp.140-149
    • /
    • 2007
  • This paper presents a statistical model for Korean dependency parsing. Although Korean is a free word order language, certain word orders are preferred in local contexts. Earlier works exploited this property by proposing parsing models based on modification lengths. Our model uses headible path contexts for the modification length probabilities. Using the headible path of a dependent is effective for long-distance relations because the large surface context of the dependent is abbreviated into its headible path. Combined with lexical bigram dependency, our probabilistic model achieves 86.9% accuracy in eojeol analysis on the KAIST corpus, with particular improvement for long-distance dependencies.
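
The abstract describes a probabilistic model that combines a modification-distance term conditioned on context with a lexical bigram term. A minimal sketch of how such a combined score might pick the head of a dependent eojeol is given below; the probability tables, the context key, and the smoothing constant are all hypothetical, not the authors' actual estimates.

```python
# Hypothetical sketch of scoring candidate heads for one dependent eojeol by
# combining a modification-distance probability with a lexical bigram probability.
# Both probability tables below contain made-up numbers for illustration.

P_DISTANCE = {  # P(distance | context of the dependent), hypothetical
    ("NP+JKO", 1): 0.6, ("NP+JKO", 2): 0.3, ("NP+JKO", 3): 0.1,
}
P_BIGRAM = {    # P(head | dependent), hypothetical lexical bigram table
    ("책을", "읽었다"): 0.7, ("책을", "샀다"): 0.2,
}

def score(dependent, context, head, distance, smoothing=1e-4):
    p_dist = P_DISTANCE.get((context, distance), smoothing)
    p_lex = P_BIGRAM.get((dependent, head), smoothing)
    return p_dist * p_lex  # independence assumption, as a simplification

candidates = [("읽었다", 1), ("샀다", 2)]
best = max(candidates, key=lambda c: score("책을", "NP+JKO", c[0], c[1]))
print(best)  # ('읽었다', 1) under these made-up tables
```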

Analysis of the Directives and Wh-words in the Directives of Elementary Korean Textbooks (초등 국어교과서 지시문과 의문사 분석)

  • Lee, Suhyang
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.3
    • /
    • pp.134-140
    • /
    • 2022
  • The purpose of this study was to investigate the directives and Wh-words in the directives of elementary 2nd, 4th, and 6th grade Korean textbooks. After entering all directives into Microsoft Office Excel, directives containing Wh-words were separated. The analysis program Natmal was used to analyze the directives and Wh-words, and criteria from previous studies were applied in the analysis process. The study found many nouns and verbs in the directives, which consisted of sentences with an average of 6.9 eojeol. There were 11 types of Wh-words in total, and 'Mueot (what)', 'Eotteon (which)', and 'Eotteohge (how)' appeared most frequently in all grades. For question types, all grades had more inferential questions than literal information questions. These results are expected to serve as basic data for language interventions with school-aged children who have language disorders.
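
The eojeol count reported above (an average of 6.9 per directive) amounts to counting whitespace-separated units, and the Wh-word tally is a lexical match against a closed list. The short sketch below reproduces that kind of bookkeeping on toy data; the directive strings and the Wh-word list are illustrative, not the study's materials.

```python
# Illustrative sketch: average eojeol length and Wh-word counts over directives.
from collections import Counter

WH_WORDS = ["무엇", "어떤", "어떻게", "왜", "누구", "언제", "어디"]  # illustrative closed list

directives = [
    "이 글을 읽고 무엇을 느꼈는지 써 봅시다.",
    "그림을 보고 어떤 장면인지 말해 봅시다.",
]

eojeol_counts = [len(d.split()) for d in directives]  # an eojeol is a space-separated unit
avg_eojeol = sum(eojeol_counts) / len(eojeol_counts)

wh_counter = Counter()
for d in directives:
    for wh in WH_WORDS:
        wh_counter[wh] += d.count(wh)

print(f"average eojeol per directive: {avg_eojeol:.1f}")
print(wh_counter.most_common(3))
```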

An Analysis of Messages Produced by Participants in the Agenda Setting Process during a Government's Crisis Situation: Focusing on the Ministry of Drug and Food Safety's Response to Paraben Toothpaste Issue (정부의 위기 상황에서 의제설정과정 참여자들의 메시지 분석: 파라벤 치약 논란과 정부의 대응을 중심으로)

  • Lee, Mina;Hong, Ju-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.7
    • /
    • pp.460-476
    • /
    • 2015
  • The purpose of this study is to provide practical implications for government crisis management strategies and for the use of SNS in crisis management. Specifically, this study analyzed the Ministry of Drug and Food Safety's responses to the paraben toothpaste issue, media coverage of the issue, and public responses to it. Through textual analysis and network analysis of 45 news articles and 645 tweets, this study found that the Ministry used a one-way communication strategy and that mostly negative issues regarding the Ministry's crisis response strategies were diffused via the media and Twitter. This study is meaningful in that it highlights the importance of media relations and the use of SNS in crisis management. The findings provide useful implications for government officials and PR practitioners in their crisis management and communication strategies.

Aspects of Language Use in Newspaper Articles: A Corpus Linguistic Perspective (신문 기사의 언어 사용 양상: 코퍼스언어학적 접근)

  • Song, Kyung-Hwa;Kang, Beom-Mo
    • Korean Journal of Cognitive Science
    • /
    • v.17 no.4
    • /
    • pp.255-269
    • /
    • 2006
  • The purpose of this study is to analyze newspaper articles from a corpus linguistic point of view. We used a large corpus of newspaper articles built from the <21st Century Sejong Project> and counted occurrences of certain expressions. A newspaper article is divided into the headline, the lead, and the body. We tried to figure out how to measure the characteristics of indication and compression that are typical of headlines. Then we focused on the differences between the headline and the lead. Finally, we analyzed the sentence structure and measured the frequency ratio of common nouns in the body. This study verifies existing stylistic theories of newspapers and shows new aspects of language use in newspaper articles. Texts like newspaper articles are the results of human language processing, and they in turn affect the development of the cognitive ability of language.
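
One of the measurements mentioned above, the frequency ratio of common nouns in the body text, reduces to counting POS tags. The sketch below shows that computation on a few Sejong-style '형태/태그' tokens; the assumption that common nouns are tagged NNG follows the Sejong tagset convention, but the example tokens are invented.

```python
# Illustrative sketch: ratio of common nouns (Sejong tag NNG) among tagged morphemes.

tagged_body = [
    "정부/NNG", "는/JX", "오늘/NNG", "새/MM", "정책/NNG", "을/JKO",
    "발표/NNG", "하/XSV", "았/EP", "다/EF",
]

def common_noun_ratio(tagged_tokens, noun_tag="NNG"):
    nouns = sum(1 for t in tagged_tokens if t.endswith("/" + noun_tag))
    return nouns / len(tagged_tokens) if tagged_tokens else 0.0

print(f"common noun ratio: {common_noun_ratio(tagged_body):.2f}")  # 4/10 = 0.40
```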

Shallow Parsing on Grammatical Relations in Korean Sentences (한국어 문법관계에 대한 부분구문 분석)

  • Lee, Song-Wook;Seo, Jung-Yun
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.10
    • /
    • pp.984-989
    • /
    • 2005
  • This study aims to identify grammatical relations (GRs) in Korean sentences. The key task is to find the GRs in sentences in terms of GR categories such as subject, object, and adverbial, a problem in which we face many ambiguities. We propose a statistical model that resolves the grammatical relational ambiguity first and then finds the correct noun phrase (NP) arguments of given verb phrases (VPs) by using the probabilities of the GRs given the NPs and VPs in a sentence. The proposed model uses characteristics of the Korean language such as distance, no-crossing, and case property. We estimate the probability of a GR given an NP and a VP with Support Vector Machine (SVM) classifiers. In an experiment with a tree- and GR-tagged corpus for training the model, we achieved an overall accuracy of 84.8%, 94.1%, and 84.8% in identifying subject, object, and adverbial relations in sentences, respectively.
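
As a rough sketch of the classification step described above, the code below trains a linear SVM to label (NP, VP) pairs with a grammatical relation from simple features such as a crude case-marker proxy and the distance between the phrases. The feature set and the toy training pairs are invented, and scikit-learn's LinearSVC stands in for whatever SVM implementation the authors used.

```python
# Hypothetical sketch: classifying the grammatical relation of an (NP, VP) pair
# with an SVM over simple features (case-marker proxy, distance).
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def features(np_eojeol, vp_eojeol, distance):
    return {
        "np_last_syllable": np_eojeol[-1],  # crude proxy for the case marker
        "vp_last_syllable": vp_eojeol[-1],
        "distance": distance,
    }

# Toy training pairs: (NP eojeol, VP eojeol, distance) -> grammatical relation
train = [
    (("학생이", "읽었다", 1), "subject"),
    (("책을", "읽었다", 1), "object"),
    (("도서관에서", "읽었다", 2), "adverbial"),
    (("아이가", "달린다", 1), "subject"),
    (("밥을", "먹었다", 1), "object"),
]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform([features(*x) for x, _ in train])
y = [label for _, label in train]

classifier = LinearSVC()
classifier.fit(X, y)

test = features("빵을", "먹었다", 1)
print(classifier.predict(vectorizer.transform([test]))[0])
```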

A Study on the Computational Model of Word Sense Disambiguation, based on Corpora and Experiments on Native Speaker's Intuition (직관 실험 및 코퍼스를 바탕으로 한 의미 중의성 해소 계산 모형 연구)

  • Kim, Dong-Sung;Choe, Jae-Woong
    • Korean Journal of Cognitive Science
    • /
    • v.17 no.4
    • /
    • pp.303-321
    • /
    • 2006
  • According to Harris' (1966) distributional hypothesis, understanding the meaning of a word is thought to depend on its context. Under this hypothesis about human language ability, this paper proposes a computational model of the native speaker's language processing mechanism for word sense disambiguation, based on two sets of experiments. Among the three computational models discussed in this paper, namely the logic model, the probabilistic model, and the probabilistic inference model, the experiments show that the logic model is applied first for semantic disambiguation of the key word; if the logic model fails to apply, the probabilistic model becomes most relevant. The three models were also compared with the test results in terms of the Pearson correlation coefficient. It turns out that the logic model best explains human decision behaviour on the ambiguous words, and the probabilistic inference model comes next. The experiment consists of two parts: one involves 30 sentences extracted from a 1-million graphic-word corpus, and its result shows that the agreement rate among native speakers is 98% in terms of word sense disambiguation; the other part, designed to exclude the logic model effect, is composed of 50 cleft sentences.
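
The model comparison above rests on the Pearson correlation coefficient between each model's predictions and native speakers' judgments. A minimal sketch of that comparison is given below with invented numbers; only the formula, not the paper's data, is reproduced.

```python
# Illustrative sketch: comparing models to human judgments via Pearson's r.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-item agreement scores (human vs. each model) for illustration.
human          = [0.95, 0.80, 0.99, 0.60, 0.90]
logic_model    = [0.97, 0.75, 0.98, 0.55, 0.93]
prob_inference = [0.90, 0.85, 0.80, 0.70, 0.85]

print("logic model      r =", round(pearson_r(human, logic_model), 3))
print("prob. inference  r =", round(pearson_r(human, prob_inference), 3))
```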

Study on the realization of pause groups and breath groups (휴지 단위와 호흡 단위의 실현 양상 연구)

  • Yoo, Doyoung;Shin, Jiyoung
    • Phonetics and Speech Sciences
    • /
    • v.12 no.1
    • /
    • pp.19-31
    • /
    • 2020
  • The purpose of this study is to observe the realization of pause groups and breath groups in adult speakers and to examine how gender, generation, and task affect this realization. For this purpose, we analyzed forty-eight male and female speakers whose generation was divided into two groups: young and old. Task and gender affected the realization of both pause groups and breath groups. The length of the pause groups was longer in read speech than in spontaneous speech and longer in female speech. On the other hand, the length of the breath groups was longer in spontaneous speech and in male speech. In spontaneous speech, which requires planning, the speakers produced shorter pause groups, and the short sentence length of the reading material explains why the breath groups were shorter in read speech. The gender difference resulted from different pause patterns between genders: in the case of the breath groups, the male speakers produced longer pauses than the female speakers did, which may be due to differences in lung capacity. On the other hand, generation affected neither the pause groups nor the breath groups; the generation factor only influenced the number of syllables and eojeols, which can be interpreted as the result of the difference in speech rate between generations.
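
As an illustration of how pause groups and breath groups can be delimited once pauses are time-aligned, the sketch below segments a word list at pauses longer than a threshold and treats the subset of those pauses marked as inhalations as breath-group boundaries. The threshold value and the annotation format are invented, not the study's measurement protocol.

```python
# Hypothetical sketch: splitting a pause-annotated utterance into pause groups
# and breath groups. Each item is (eojeol, following_pause_seconds, breath_taken).

utterance = [
    ("오늘은", 0.05, False), ("날씨가", 0.30, False), ("정말", 0.02, False),
    ("좋아서", 0.65, True),  ("산책을", 0.10, False), ("나갔다", 0.0, False),
]

def segment(utterance, pause_threshold=0.25):
    pause_groups, breath_groups = [[]], [[]]
    for eojeol, pause, breath in utterance:
        pause_groups[-1].append(eojeol)
        breath_groups[-1].append(eojeol)
        if pause >= pause_threshold:
            pause_groups.append([])   # any long pause closes a pause group
        if breath:
            breath_groups.append([])  # only an inhalation closes a breath group
    return [g for g in pause_groups if g], [g for g in breath_groups if g]

pg, bg = segment(utterance)
print("pause groups:", pg)   # 3 groups under these made-up numbers
print("breath groups:", bg)  # 2 groups
```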

Automatic Word Spacing of the Korean Sentences by Using End-to-End Deep Neural Network (종단 간 심층 신경망을 이용한 한국어 문장 자동 띄어쓰기)

  • Lee, Hyun Young;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.11
    • /
    • pp.441-448
    • /
    • 2019
  • Previous research on automatic word spacing of Korean sentences has tried to correct spacing errors by using n-gram based statistical techniques or a morpheme analyzer to insert blanks at word boundaries. In this paper, we propose end-to-end automatic word spacing using a deep neural network. The automatic word spacing problem can be defined as a tag classification problem at the level of the syllable rather than the word. For the contextual representation between syllables, a Bi-LSTM encodes the dependency relationships between syllables into fixed-length vectors in a continuous vector space using forward and backward LSTM cells. To perform automatic word spacing of Korean sentences, the fixed-length contextual vector from the Bi-LSTM is classified into an auto-spacing tag (B or I), and a blank is inserted in front of each B tag. For the tag classification step, we compose three types of classification networks: a feedforward neural network, a neural network language model, and a linear-chain CRF. To compare our models, we measure the performance of automatic word spacing with each of the three classification networks. Among them, the linear-chain CRF shows better performance than the other models. We used the KCC150 corpus as training and test data.
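
To make the B/I tagging scheme concrete, the sketch below defines a small Bi-LSTM syllable tagger in PyTorch and shows how predicted tags are turned back into spaced text by inserting a blank before every B tag. The layer sizes, vocabulary handling, and example sentence are illustrative; this is not the authors' trained KCC150 model.

```python
# Illustrative sketch of the B/I auto-spacing scheme with a Bi-LSTM syllable tagger.
import torch
import torch.nn as nn

class SpacingTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # per-syllable B/I scores

    def forward(self, syllable_ids):                    # (batch, seq_len)
        h, _ = self.bilstm(self.embed(syllable_ids))    # (batch, seq_len, 2*hidden)
        return self.out(h)                              # (batch, seq_len, num_tags)

def apply_tags(syllables, tags):
    """Insert a blank before every B tag (except at the start of the sentence)."""
    out = []
    for i, (syl, tag) in enumerate(zip(syllables, tags)):
        if tag == "B" and i > 0:
            out.append(" ")
        out.append(syl)
    return "".join(out)

# Tag-to-space conversion on gold tags, independent of the (untrained) model above.
syllables = list("나는밥을먹었다")
tags      = ["B", "I", "B", "I", "B", "I", "I"]
print(apply_tags(syllables, tags))  # 나는 밥을 먹었다

# Forward pass shape check with random syllable ids (untrained weights).
model = SpacingTagger(vocab_size=100)
scores = model(torch.randint(0, 100, (1, 7)))
print(scores.shape)  # torch.Size([1, 7, 2])
```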