• Title/Summary/Keyword: 통사적 특징 (syntactic features)


De re context and some semantic traits of 'rago' (대물(de re) 문맥과 '-라고'의 몇 가지 의미론적 특성)

  • Min, Chanhong
    • Korean Journal of Logic / v.16 no.1 / pp.61-85 / 2013
  • The author, after introducing the concept of de re belief and discussing de re/de dicto ambiguity in belief contexts and modal contexts, concludes that Korean modal sentences do not show any distinctive traits compared with English. After discussing this ambiguity in negative sentences a la Russell, he tries to show that Korean provides two ways of constructing negation, one of which corresponds to de re negation (primary occurrence in Russell's terms). The de re reading creates a referentially transparent context and thus permits substitution of identicals salva veritate; the de dicto reading does not. The Korean ending 'rago', used with quotation verbs, speech act verbs, and cognitive attitude verbs, deserves attention in that it permits de re sentences in addition to de re/de dicto ambiguous sentences. 'Rago' also makes the speaker's commitment to the content of the intensionally embedded clause 'neutral', in contrast with other Korean endings such as 'um/im' and 'raneun gut', which mark the speaker's positive commitment. This explains why the maxim of Western epistemology that knowledge presupposes truth does not hold for Korean 'rago' sentences.


Recognition and Narrative Aspects of the History of Korean Classic Literature from Two Korean Literature History Works Written in China (중국 한국문학사 2종의 한국고전문학사 인식과 서술 양상: 남북한문학사와 자국문학사의 수용과 변용을 중심으로)

  • Lee, Deung-yearn
    • Cross-Cultural Studies / v.48 / pp.67-106 / 2017
  • This study focuses on two histories of Korean literature written in Chinese: the outline of The History of Joseon Literature (2010) by Li Yan and The History of Joseon Literature (1988, 2008) by Wei Xu-sheng. It compares their narrative viewpoints with those of South and North Korean literary histories in order to identify their distinguishing characteristics. The following conclusions were reached. First, The History of Korean Literature by Cho Dong-il and The History of Korean Literature in North Korea (15 volumes) include thorough discussions of the division of historical eras and the concept of genres as well as of individual literary works, and they apply those discussions to the writing of literary history. However, Wei Xu-sheng's and Li Yan's histories do not engage with these theoretical discussions from South and North Korea. Li Yan's outline of The History of Joseon Literature was published in 2010, and the first edition of Wei Xu-sheng's The History of Joseon Literature was published in 1986 and later republished in revised editions in 2000 and 2008. Given these publication dates, it would have been natural to reference Cho Dong-il's The History of Korean Literature, published in the 1980s, or The History of Korean Literature in North Korea (15 volumes), published in the 1990s; nevertheless, neither Wei Xu-sheng nor Li Yan used those texts. Their works were heavily influenced by the narrative tradition of national literary history and therefore offer unsophisticated discussions of the division of historical eras and the concept of genres. Second, the two texts also emphasize external factors such as politics, society, economy, and culture, and explicitly mention these factors in the historical overview of each chapter. Such an approach is common in literary histories written under socialist regimes, including The History of Korean Literature in North Korea (15 volumes). Accordingly, evaluations based on 'political standards' - emphasis on the people, nationality, practicality, and so forth - are particularly accentuated in the main texts, akin to literary histories under socialist regimes. Finally, since these two histories of Korean literature were written by Chinese scholars, they focus on the correlation between Chinese and Korean literary history. However, several genre-related terms, such as Xiaopin (a kind of essay), Yuefu (a kind of popular song/poem), Yuyan (fable), Shuochang (the telling of popular stories interspersed with songs), and Shizhuan (biographies and/or memoirs in histories), were adopted directly from Chinese literature. In analyzing Korean literature with terminology introduced from Chinese literature, the differences between the original and transferred meanings were not examined in detail. While some terms and concepts were adopted directly without further consideration of the circumstances of the two nations, it is also notable that the dichotomy mainly used in Korean literary history, rather than the traditions of Chinese literary history, was followed in discussing the genre of Cheonki (romance tale) in relation to Suyichon and Keumosinhua.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok; Yang, Seok Woo; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one method of handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many computations, which can lead to high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, ranging from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of classifiers for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words capturing semantic and syntactic information from data, are also used. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once a feature selection algorithm marks certain words as unimportant, we assume that words similar to those words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and build word embeddings. Second, we additionally remove words that are similar to the words with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses Kindle customer reviews from Amazon.com, IMDB reviews, and Yelp reviews as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews; since Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes from among 750,000 reviews using random sampling. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance with that of Word2Vec and GloVe word embeddings that used all the words, and showed that one of the proposed methods outperforms the embeddings using all the words: by removing unimportant words, we can obtain better performance. However, removing too many words lowered the performance.
For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence should be considered when measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed methods, making it possible to explore combinations of word embedding methods and elimination methods.
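
The following is a minimal sketch of the selective word elimination idea described in this abstract, assuming scikit-learn and gensim as tooling; the function name select_vocabulary, the thresholds, and the hyperparameters are illustrative assumptions, not values reported in the paper. Mutual information with the class label stands in for the information gain score, and Word2Vec cosine similarity is used to expand the set of removed words.

# Sketch of selective word elimination: (1) score words by an
# information-gain-style measure, (2) drop low-scoring words,
# (3) also drop words whose Word2Vec vectors are close to the
# dropped ones. Parameters below are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

def select_vocabulary(texts, labels, ig_quantile=0.2, topn_similar=5):
    # Binary bag-of-words features for the importance scoring step.
    vectorizer = CountVectorizer(binary=True)
    X = vectorizer.fit_transform(texts)
    vocab = vectorizer.get_feature_names_out()

    # Mutual information between each word and the class label is
    # used here as the information gain score.
    ig = mutual_info_classif(X, labels, discrete_features=True)

    # Words in the lowest ig_quantile are marked unimportant.
    threshold = sorted(ig)[int(len(ig) * ig_quantile)]
    unimportant = {w for w, score in zip(vocab, ig) if score <= threshold}

    # Train Word2Vec on the corpus and expand the removal set with
    # words whose vectors are similar to the removed words.
    tokenized = [t.split() for t in texts]
    w2v = Word2Vec(tokenized, vector_size=100, window=5, min_count=2)
    expanded = set(unimportant)
    for word in unimportant:
        if word in w2v.wv:
            for similar, _ in w2v.wv.most_similar(word, topn=topn_similar):
                expanded.add(similar)

    # Keep only the surviving vocabulary; the filtered texts and the
    # trained vectors would then feed the downstream classifier.
    keep = set(vocab) - expanded
    filtered = [" ".join(w for w in t.split() if w in keep) for t in texts]
    return filtered, keep, w2v

In this sketch, the surviving vocabulary and the trained Word2Vec vectors would then be used to build the embedding layer of the downstream classifier, a CNN or an attention-based bidirectional LSTM in the study's setup.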