• Title/Summary/Keyword: Lexical Analysis

Search Results: 174

Government and Derivation in Korean Phonology

  • Park, Hee-Heon;David Michaels
    • Proceedings of the KSPS conference / 1996.10a / pp.117-122 / 1996
  • This paper proposes a derivational account of tensing and neutralization of obstruents in Korean within the theory of Government Phonology (GP) (Kaye, Lowenstamm and Vergnaud 1990, henceforth KLV; Park 1996). We begin by outlining the relevant tensing and neutralization data in Korean and point out several problems that any account of these data must address. We then set out the central notions of GP, showing how adherence to the Projection Principle's requirement that government relations remain constant throughout a derivation blocks a GP account of tensing and neutralization in Korean, which requires government relations to switch between lexical and phonetic representations. To address this problem, we propose abandoning the Projection Principle, extending lexical representations in GP along the lines of the Markedness Theory approach (Michaels 1989), and adopting the economy principles of derivation from the Minimalist approach (Chomsky 1993; Chomsky & Lasnik 1991). Finally, we summarize the analysis of obstruent phenomena in Korean within GP extended in these ways.

  • PDF

A Hybrid Approach for the Morpho-Lexical Disambiguation of Arabic

  • Bousmaha, Kheira Zineb;Rahmouni, Mustapha Kamel;Kouninef, Belkacem;Hadrich, Lamia Belguith
    • Journal of Information Processing Systems / v.12 no.3 / pp.358-380 / 2016
  • In order to considerably reduce the ambiguity rate, this article proposes a disambiguation approach based on selecting the right diacritics at different levels of analysis. This hybrid approach combines a linguistic approach with a multi-criteria decision approach and can be considered an alternative for solving the morpho-lexical ambiguity problem regardless of the diacritic rate of the processed text. For evaluation, we applied the disambiguation to the online Alkhalil morphological analyzer (the proposed approach can be used with any morphological analyzer for Arabic) and obtained encouraging results, with an F-measure of more than 80%.

Neuroanatomical analysis for onomatopoeia : fMRI study

  • Han, Jong-Hye;Choi, Won-Il;Chang, Yong-Min;Jeong, Ok-Ran;Nam, Ki-Chun
    • Annual Conference on Human and Language Technology / 2004.10d / pp.315-318 / 2004
  • The purpose of this study is to examine the neuroanatomical areas related to onomatopoeia (sound-imitating words). Using block-designed fMRI, whole-brain images (N=11) were acquired during lexical decision tasks. We examined how lexical information initiates brain activation during visual word recognition. Onomatopoeic word recognition activated the bilateral occipital lobes and the superior mid-temporal gyrus.

  • PDF

A Hidden Markov Model Imbedding Multiword Units for Part-of-Speech Tagging

  • Kim, Jae-Hoon;Jungyun Seo
    • Journal of Electrical Engineering and Information Science / v.2 no.6 / pp.7-13 / 1997
  • Morphological analysis of Korean is known to be a very complicated problem. In particular, the degree of part-of-speech (POS) ambiguity is much higher than in English. Many researchers have applied a hidden Markov model (HMM) to the POS tagging problem and reported around 95% accuracy. However, the lack of lexical information makes it difficult to improve the performance of an HMM-based POS tagger further. To alleviate this problem, this paper proposes a method for incorporating multiword units, one type of lexical information, into a hidden Markov model for POS tagging, along with a method for extracting multiword units from a POS-tagged corpus. In this paper, a multiword unit is defined as a unit consisting of more than one word; we found that such units are a major source of POS tagging errors. Our experiment shows that the error reduction rate of the proposed method is about 13%.

  • PDF
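The multiword-unit idea in the abstract above can be sketched as follows: known multiword sequences are merged into single tokens before a standard Viterbi HMM tagger runs, so each unit receives a single tag. This is an illustrative toy with an invented lexicon and probabilities, not the authors' implementation.

```python
def merge_multiword_units(tokens, mwu_lexicon):
    """Greedily merge adjacent word pairs found in the multiword-unit lexicon."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in mwu_lexicon:
            out.append(tokens[i] + "_" + tokens[i + 1])  # treat the pair as one token
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def viterbi(tokens, tags, trans, emit, start):
    """Standard Viterbi decoding over the (possibly merged) token sequence."""
    V = [{t: start.get(t, 1e-6) * emit[t].get(tokens[0], 1e-6) for t in tags}]
    back = []
    for w in tokens[1:]:
        col, ptr = {}, {}
        for t in tags:
            best = max(tags, key=lambda p: V[-1][p] * trans[p].get(t, 1e-6))
            col[t] = V[-1][best] * trans[best].get(t, 1e-6) * emit[t].get(w, 1e-6)
            ptr[t] = best
        V.append(col)
        back.append(ptr)
    last = max(tags, key=lambda t: V[-1][t])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In the paper's setting the multiword units and their tag distributions would be estimated from the POS-tagged corpus rather than listed by hand.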

An Analysis of the Internet Buzzword Pattern "X+ren": Starting from "dagongren", "weikuanren", and "gongjuren" (网络流行语"X+人"探析 - 从"打工人", "尾款人", "工具人"等谈起)

  • Yu, Cheol
    • 중국학논총 / no.71 / pp.41-59 / 2021
  • With the progress of the social economy, science, and technology, network media has developed rapidly, China has entered the network information age, and network buzzwords have emerged that reflect the interaction between language and society. The "X+ren" buzzwords indirectly reveal the social psychology and value orientation of modern people through their distinctive structural characteristics, semantic connotations, and cultural deposits. On this basis, we conducted a multi-angle investigation of the "X+ren" buzzwords. This paper first analyzes the structural types and syntactic functions of the "X+ren" lexical model, then carries out a semantic analysis of the model, and finally examines the causes and effects of its popularity. Based on this investigation, we believe that "X+ren" forms will continue to grow and to attract the attention of the academic community.

The Effect of Expert Reviews on Consumer Product Evaluations: A Text Mining Approach (전문가 제품 후기가 소비자 제품 평가에 미치는 영향: 텍스트마이닝 분석을 중심으로)

  • Kang, Taeyoung;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.63-82 / 2016
  • Individuals gather information online to resolve problems in their daily lives and to make decisions about purchasing products or services. With the revolutionary development of information technology, Web 2.0 has allowed more people to easily generate and use online reviews, so the volume of such information is rapidly increasing, and the usefulness and significance of analyzing this unstructured data have also grown. This paper presents an analysis of the lexical features of expert product reviews to determine their influence on consumers' purchasing decisions, focusing on how unstructured data can be organized and used in diverse contexts through text mining. In addition, diverse lexical features were extracted and defined from expert reviews provided by a third-party review site. Expert reviews, also called critic reviews, are evaluations published in newspapers or magazines by people with expert knowledge of specific products or services. Before the widespread use of the Internet, consumers could access expert reviews only through newspapers or magazines, and thus had access to relatively few of them. Recently, however, major media outlets also provide online services, so people can access expert reviews more easily and affordably than in the past. Reviews from experts in diverse fields matter because of information asymmetry: some information is not shared between consumers and sellers. This asymmetry can be resolved by information that third parties with expertise provide to consumers, who can then read expert reviews and make purchasing decisions based on abundant information about products or services. Expert reviews therefore play an important role in consumers' purchasing decisions and in the performance of companies across diverse industries.
If the influence of qualitative data, such as reviews or post-purchase assessments, can be identified separately from quantitative factors such as actual product quality or price, it becomes possible to identify which aspects of product reviews hamper or promote product sales. Previous studies have focused on characteristics of the experts themselves, such as the expertise and credibility of the source, but did not address how the linguistic features of experts' product reviews influence consumers' overall evaluations. This study instead examined experts' recommendations and evaluations to reveal the lexical features of expert reviews and to test whether those features influence consumers' overall evaluations and purchasing decisions. Real expert product reviews were analyzed with the proposed methodology, and five lexical features were ultimately determined. Specifically, "review depth" (the degree of detail in the expert's product analysis) and "lack of assurance" (the degree of confidence the expert has in the evaluation) have statistically significant effects on consumers' product evaluations. "Positive polarity" (the degree of positivity of an expert's evaluations) has an insignificant effect, while "negative polarity" (the degree of negativity of an expert's evaluations) has a significant negative effect. Finally, "social orientation" (the degree to which experts include social expressions in their reviews) does not have a significant effect on consumers' product evaluations. In summary, the lexical properties of the product reviews were defined for each relevant factor, and the influence of each linguistic factor of expert reviews on consumers' final evaluations was tested.
In addition, a test was performed on whether each linguistic factor influencing consumers' product evaluations differs depending on the lexical features. The results should provide guidelines on how individuals process massive volumes of unstructured data with different lexical features in various contexts, and on how companies can exploit this mechanism. This paper thus makes several theoretical and practical contributions, including the proposal of a new methodology and its application to real data.
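The five review features named above can be approximated with simple surface counts. The word lists below are hypothetical stand-ins for the study's actual lexicons, and review depth is proxied by length; the sketch only illustrates the kind of feature extraction involved.

```python
# Hypothetical word lists; the study's actual lexicons are not reproduced here.
HEDGES = {"maybe", "perhaps", "seems", "might", "possibly"}
POSITIVE = {"excellent", "great", "solid", "impressive"}
NEGATIVE = {"poor", "weak", "disappointing", "flawed"}
SOCIAL = {"we", "us", "you", "everyone"}

def lexical_features(review):
    """Compute rough proxies for the five lexical features of an expert review."""
    words = review.lower().split()
    n = max(len(words), 1)
    return {
        "review_depth": len(words),  # proxy: longer review = more detailed analysis
        "lack_of_assurance": sum(w in HEDGES for w in words) / n,
        "positive_polarity": sum(w in POSITIVE for w in words) / n,
        "negative_polarity": sum(w in NEGATIVE for w in words) / n,
        "social_orientation": sum(w in SOCIAL for w in words) / n,
    }
```

In the study these feature values would then serve as regressors against consumers' overall product evaluations.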

Implementation of Very Large Hangul Text Retrieval Engine HMG (대용량 한글 텍스트 검색 엔진 HMG의 구현)

  • 박미란;나연묵
    • Journal of Korea Multimedia Society / v.1 no.2 / pp.162-172 / 1998
  • In this paper, we implement a gigabyte-scale Hangul text retrieval engine, HMG (Hangul MG), based on the English text retrieval engine MG (Managing Gigabytes) and the Hangul lexical analyzer HAM (Hangul Analysis Module). To support Hangul, we use the KSC 5601 code in the database construction and query processing stages. The lexical analyzer, parser, and index construction module of the MG system were modified to handle Hangul. To show the usefulness of the HMG system, we implemented an NOD (Novel On Demand) system supporting the retrieval of Hangul novels on the WWW. The proposed HMG system can be used to build massive full-text information retrieval systems supporting Hangul.

  • PDF

Disambiguation of Homograph Suffixes using Lexical Semantic Network(U-WIN) (어휘의미망(U-WIN)을 이용한 동형이의어 접미사의 의미 중의성 해소)

  • Bae, Young-Jun;Ock, Cheol-Young
    • KIPS Transactions on Software and Data Engineering / v.1 no.1 / pp.31-42 / 2012
  • To process suffix-derived nouns in Korean, most Korean processing systems register them in a dictionary. This approach is limited, however, because suffixes are highly productive, so unregistered suffix-derived nouns must be analyzed semantically. In this paper, we propose a method to disambiguate homograph suffixes using the Korean lexical semantic network U-WIN for the semantic analysis of suffix-derived nouns. 33,104 suffix-derived nouns, including homograph suffixes, in the morphologically and semantically tagged Sejong Corpus were used for the experiments. We first tagged the homograph suffixes semantically, extracted the roots of the suffix-derived nouns, and mapped each root to nodes in U-WIN. We then assigned a distance weight to the U-WIN nodes that can combine with each homograph suffix and used this weight to disambiguate the suffixes. Experiments on the 35 homograph suffixes occurring in the Sejong Corpus, out of the 49 homograph suffixes listed in a Korean dictionary, resulted in 91.01% accuracy.
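The distance-weight idea can be sketched with a toy semantic network: for each candidate sense of a homograph suffix, measure the graph distance between the extracted root and the sense's anchor node, and pick the closest. U-WIN itself is not available here, so the graph, nodes, and suffix senses below are invented for illustration.

```python
from collections import deque

# Toy is-a graph standing in for U-WIN: node -> list of parent nodes.
GRAPH = {
    "doctor": ["person"], "teacher": ["person"], "person": ["entity"],
    "fee": ["money"], "money": ["entity"], "entity": [],
}

def distance(a, b):
    """Shortest undirected path length between two nodes (BFS)."""
    edges = {n: set(ps) for n, ps in GRAPH.items()}
    for n, ps in GRAPH.items():
        for p in ps:
            edges.setdefault(p, set()).add(n)  # add reverse edges
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def choose_sense(root, senses):
    """Pick the suffix sense whose anchor node lies closest to the root noun."""
    return min(senses, key=lambda s: distance(root, senses[s]))
```

For a root mapped to "doctor", `choose_sense("doctor", {"person-suffix": "person", "fee-suffix": "money"})` selects the person reading, since "doctor" is one edge from "person" but three from "money".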

Predicate Recognition Method using BiLSTM Model and Morpheme Features (BiLSTM 모델과 형태소 자질을 이용한 서술어 인식 방법)

  • Nam, Chung-Hyeon;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.24-29 / 2022
  • Semantic role labeling, used in various natural language processing fields such as information extraction and question answering systems, is the task of identifying the arguments of a given sentence and predicate. Predicates used as input for semantic role labeling are extracted from lexical analysis results such as POS tagging, but such extraction cannot cover all linguistic patterns, because Korean predicates take various forms depending on the meaning of the sentence. In this paper, we propose a Korean predicate recognition method using a neural network model with pre-trained embedding models and lexical features. The experiments compare performance across model hyperparameters and with or without the embedding models and lexical features. As a result, we confirm that the proposed neural network model achieved a performance of 92.63%.
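The input side of such a model can be illustrated by concatenating a word embedding with a one-hot morpheme (POS) feature per token, the kind of vector a BiLSTM sequence labeler consumes. The tagset, embedding values, and dimensions below are invented; this is not the authors' network.

```python
POS_TAGS = ["NNG", "VV", "JKS", "EF"]  # tiny illustrative Korean POS tagset

def one_hot(tag):
    """One-hot morpheme feature over the illustrative tagset."""
    return [1.0 if t == tag else 0.0 for t in POS_TAGS]

def build_inputs(tokens, pos_tags, embed):
    """Concatenate each token's embedding with its POS one-hot feature.

    embed maps token -> list of floats, standing in for a pre-trained
    embedding model; unknown tokens fall back to a zero vector.
    """
    dim = len(next(iter(embed.values()))) if embed else 0
    return [embed.get(tok, [0.0] * dim) + one_hot(tag)
            for tok, tag in zip(tokens, pos_tags)]
```

The resulting per-token vectors would be fed to a bidirectional LSTM whose outputs are classified as predicate or non-predicate.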

A Corpus Analysis of British-American Children's Adventure Novels: Treasure Island (영미 아동 모험 소설에 관한 코퍼스 분석 연구: 『보물섬』을 중심으로)

  • Choi, Eunsaem;Jung, Chae Kwan
    • The Journal of the Korea Contents Association / v.21 no.1 / pp.333-342 / 2021
  • In this study, we analyzed the vocabulary, lemmas, keywords, and n-grams in 『Treasure Island』 to identify the linguistic features of this British-American children's adventure novel. Contrary to the popular claim that frequently used words are important and essential to a story, we found that the most frequent words in 『Treasure Island』 were mostly function words and proper nouns not directly related to the plot. We also found that a keyword list produced purely by the statistical method of a corpus program was not sufficient to infer the story of 『Treasure Island』. However, we were able to extract 30 keywords through a first quantitative keyword analysis followed by a second qualitative keyword analysis. We also carried out a series of n-gram analyses and discovered lexical bundles preferred and frequently used by the author of 『Treasure Island』. We hope that the results of this study will help broaden interest in British-American children's literature and advance corpus stylistic theory.
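The n-gram step of such a corpus analysis can be sketched as a frequency count of fixed-length word sequences (lexical bundles). The sample sentence below is a stand-in; the study's corpus is the text of Treasure Island itself, and a real analysis would also apply tokenization beyond whitespace splitting.

```python
from collections import Counter

def ngrams(tokens, n):
    """Yield all contiguous n-token sequences."""
    return zip(*(tokens[i:] for i in range(n)))

def top_bundles(text, n=3, k=5):
    """Return the k most frequent n-grams (lexical bundles) in the text."""
    tokens = text.lower().split()
    return Counter(ngrams(tokens, n)).most_common(k)
```

For example, `top_bundles("to be or not to be", n=2, k=1)` returns `[(("to", "be"), 2)]`, the sentence's only repeated bigram.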