• Title/Summary/Keyword: Corpus Network Analysis


Korean and English Sentiment Analysis Using the Deep Learning

  • Ramadhani, Adyan Marendra;Choi, Hyung Rim;Lim, Seong Bae
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.23 no.3
    • /
    • pp.59-71
    • /
    • 2018
  • Social media has immense popularity among all services today. Data from social network services (SNSs) can be used for various objectives, such as text prediction or sentiment analysis. A great deal of Korean and English data on social media can be used for sentiment analysis, but handling such huge amounts of unstructured data is difficult, and machine learning is needed to do so. This research focuses on predicting Korean and English sentiment using a deep feedforward neural network (DNN) and compares it with other methods, such as LDA, MLP, and GENSIM with logistic regression. The findings indicate an accuracy of approximately 75% when predicting sentiment with the DNN, approximately 81% with latent Dirichlet allocation (LDA), and approximately 64% on the combined English and Korean corpus.
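
A minimal sketch (not the paper's implementation) of a feedforward sentiment classifier of the kind the abstract describes, using scikit-learn; the toy texts and labels are hypothetical stand-ins for the SNS corpora:

```python
# Sketch of a small feedforward ("DNN") sentiment classifier over character
# n-gram features, which handle both English and Korean text uniformly.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical bilingual toy data; the paper uses social media corpora.
texts = ["great movie", "terrible service", "정말 좋아요", "너무 싫어요"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["정말 훌륭해요", "awful"]))
```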

Automatic Generation of Training Corpus for a Sentiment Analysis Using a Generative Adversarial Network (생성적 적대 네트워크를 이용한 감성인식 학습데이터 자동 생성)

  • Park, Cheon-Young;Choi, Yong-Seok;Lee, Kong Joo
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.389-393
    • /
    • 2018
  • Advances in deep learning have brought great progress to natural language processing fields such as machine translation and dialogue systems. Improving the performance of deep learning models requires large amounts of data, but collecting such data takes a great deal of time and effort. In this study, we apply a generative adversarial network (GAN), which has shown good performance as an image generation model, to sentence generation. We modify and use the SeqGAN model to automatically generate sentences conditioned on positive/negative polarity, and test whether a SeqGAN that incorporates a classifier can automatically generate training data for positive/negative sentiment recognition. Experimental results show that training on a mixture of the sentences generated by the classifier-augmented SeqGAN and the real training data yields better accuracy than training on the real training data alone.
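
A full SeqGAN is beyond a short sketch; the snippet below only illustrates the evaluation setup the abstract reports, i.e. training a sentiment classifier on real data alone versus real data mixed with generated sentences. The "generated" sentences and all names here are hypothetical placeholders, not SeqGAN output:

```python
# Compare a classifier trained on real data only with one trained on
# real data plus (placeholder) generator output, as in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

real_texts, real_labels = ["이 영화 정말 좋다", "서비스가 너무 별로다"], [1, 0]
gen_texts, gen_labels = ["배우 연기가 훌륭하다", "음식이 최악이었다"], [1, 0]  # stand-ins for SeqGAN samples

def train(texts, labels):
    clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
                        LogisticRegression())
    return clf.fit(texts, labels)

baseline = train(real_texts, real_labels)                             # real data only
augmented = train(real_texts + gen_texts, real_labels + gen_labels)   # real + generated
```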


An Analysis of the process acting as a driver of the expansion of meanings in the synonym-antonym net: the meanings of '틀리다' ranging from "be wrong" to "be different" ([다름]의 '틀리다'를 형성하는 유의-반의 관계망 분석)

  • Shin, Jung-Jin
    • Korean Linguistics
    • /
    • v.78
    • /
    • pp.31-54
    • /
    • 2018
  • '맞다 (right)', which is the antonym of 'teullida (틀리다)', has a synonymic relationship with '같다 (same)' depending on the sense. Naturally, '같다' is in turn usually the antonym of the symmetric verb '다르다 (be different)', so the meaning of '다르다' and that of 'teullida' stand in a close relational network of words. In other words, the process driving the expansion of meaning rests on the antonym relation (1) '틀리다 ↔ 맞다' and the synonym relation (2) '맞다 = 같다', which together form a network whose opposite-semantics counterpart is (3) '같다 = 맞다 ↔ 다르다'. Many of today's speakers consequently use (4) 'teullida' in the sense of [difference]. Therefore, once the synonymic analogy has been applied, an antonymic analogy is eventually formed, and the word so formed is 'teullida' in the sense of [difference]. This, of course, constitutes another type of meaning extension.
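
A toy rendering (not from the paper) of the relation chain described above, using a labeled graph so the analogical link between '틀리다' and '다르다' can be read off the network; the relation labels are the only assumption:

```python
# Build the small synonym-antonym network 틀리다 <-> 맞다 = 같다 <-> 다르다
# and list the relations along the path connecting its two ends.
import networkx as nx

g = nx.Graph()
g.add_edge("틀리다", "맞다", relation="antonym")
g.add_edge("맞다", "같다", relation="synonym")
g.add_edge("같다", "다르다", relation="antonym")

path = nx.shortest_path(g, "틀리다", "다르다")
for a, b in zip(path, path[1:]):
    print(a, f"--{g[a][b]['relation']}--", b)
```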

Burmese Sentiment Analysis Based on Transfer Learning

  • Mao, Cunli;Man, Zhibo;Yu, Zhengtao;Wu, Xia;Liang, Haoyuan
    • Journal of Information Processing Systems
    • /
    • v.18 no.4
    • /
    • pp.535-548
    • /
    • 2022
  • Using a rich-resource language to classify sentiment in a language with few resources is a popular subject of research in natural language processing. Burmese is a low-resource language. In light of the scarcity of labeled training data for sentiment classification in Burmese, we propose a transfer learning method for sentiment analysis that transfers sentiment features learned from English. The method generates a cross-language word-embedding representation of the Burmese vocabulary that maps Burmese text into the semantic space of English text. A model for classifying English sentiment is then pre-trained using a convolutional neural network and an attention mechanism, and the parameters of its network layers, which capture the cross-language sentiment features, are transferred to the model for classifying Burmese sentiment. Finally, the model is fine-tuned on the labeled Burmese data. Experimental results show that the proposed method significantly improves sentiment classification in Burmese compared with a model trained only on a Burmese corpus.
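
A minimal sketch, under the assumption of a small seed dictionary of Burmese-English translation pairs, of the cross-language embedding alignment step the abstract describes; the random vectors stand in for real pretrained embeddings, and the classifier itself is omitted:

```python
# Align Burmese word vectors to the English embedding space with an
# orthogonal (Procrustes) mapping so an English-trained sentiment model
# can score projected Burmese text.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
X_burmese = rng.normal(size=(100, dim))   # vectors of dictionary words (Burmese side)
Y_english = rng.normal(size=(100, dim))   # vectors of their English translations

# Orthogonal Procrustes solution: W = U V^T, where U S V^T = SVD(Y^T X).
u, _, vt = np.linalg.svd(Y_english.T @ X_burmese)
W = u @ vt

# Project Burmese vectors into the English semantic space.
mapped = X_burmese @ W.T
print(mapped.shape)
```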

Disambiguation of Homograph Suffixes using Lexical Semantic Network(U-WIN) (어휘의미망(U-WIN)을 이용한 동형이의어 접미사의 의미 중의성 해소)

  • Bae, Young-Jun;Ock, Cheol-Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.1
    • /
    • pp.31-42
    • /
    • 2012
  • In order to process the suffix-derived nouns of Korean, most Korean language processing systems register suffix-derived nouns in their dictionaries. However, this approach is limited because suffixes are highly productive, so it is necessary to analyze unregistered suffix-derived nouns semantically. In this paper, we propose a method for disambiguating homograph suffixes using the Korean lexical semantic network (U-WIN) for the purpose of semantic analysis of suffix-derived nouns. 33,104 suffix-derived nouns containing homograph suffixes in the morphologically and semantically tagged Sejong Corpus were used for the experiments. We first tagged the homograph suffixes semantically, extracted the roots of the suffix-derived nouns, and mapped the roots to nodes in U-WIN. We then assigned distance weights to the U-WIN nodes that can combine with each homograph suffix and used these weights to disambiguate the suffixes. Experiments on the 35 homograph suffixes occurring in the Sejong Corpus, out of the 49 homograph suffixes in a Korean dictionary, result in 91.01% accuracy.
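
U-WIN itself is not reproduced here; the following is a toy sketch of the distance-weight idea using a hand-made semantic network, where each (hypothetical) suffix sense is anchored to the nodes it tends to combine with and the sense closest to the root word's node wins:

```python
# Choose the homograph-suffix sense whose anchor nodes are nearest to the
# root word in a small semantic network (a stand-in for U-WIN).
import networkx as nx

net = nx.Graph()
net.add_edges_from([
    ("person", "teacher"), ("person", "worker"),
    ("place", "school"), ("place", "factory"),
    ("school", "teacher"), ("factory", "worker"),
])

# Hypothetical senses of one homograph suffix and their anchor nodes.
suffix_senses = {"-sa (person)": ["person"], "-sa (place)": ["place"]}

def best_sense(root_node):
    def distance(anchors):
        return min(nx.shortest_path_length(net, root_node, a) for a in anchors)
    return min(suffix_senses, key=lambda s: distance(suffix_senses[s]))

print(best_sense("teacher"))   # the "person" sense is nearer
```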

Machine Learning Algorithm Accuracy for Code-Switching Analytics in Detecting Mood

  • Latib, Latifah Abd;Subramaniam, Hema;Ramli, Siti Khadijah;Ali, Affezah;Yulia, Astri;Shahdan, Tengku Shahrom Tengku;Zulkefly, Nor Sheereen
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.334-342
    • /
    • 2022
  • Nowadays, as we can see on social media, most users choose to use more than one language in their online postings. Thus, social media analytics needs to be reviewed as code-switching analytics instead of traditional analytics. This paper aims to present evidence comparing the accuracy of code-switching analytics techniques in analysing the mood state of social media users. We conducted a systematic literature review (SLR) of social media analytics studies that examined the effectiveness of code-switching analytics techniques. One primary question and three sub-questions were raised for this purpose. The study investigates the computational models used to detect and measure emotional well-being, focusing primarily on online posting text, including extended text analysis, analysing and predicting using past experiences, and classifying mood upon analysis. We used thirty-two (32) papers for our evidence synthesis and identified four main task classifications that can potentially be used in code-switching analytics: determining analytics algorithms, classification techniques, mood classes, and analytics flow. Results showed that CNN-BiLSTM was the machine learning algorithm that affected code-switching analytics accuracy the most, at 83.21%. In addition, analytics accuracy when using a code-mixing emotion corpus could be enhanced by about 20% compared with performing the analysis in one language. Our meta-analyses showed that the code-mixing emotion corpus was effective in improving mood analytics accuracy. This SLR has pointed to two apparent gaps in the research field: i) a lack of studies focusing on Malay-English code-mixing analytics, and ii) a lack of studies investigating various mood classes via the code-mixing approach.
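
A sketch, with assumed hyperparameters rather than those of the surveyed papers, of a CNN-BiLSTM text classifier of the kind the review found most accurate for code-switching mood analytics:

```python
# The CNN layer extracts local n-gram features, the BiLSTM reads them in both
# directions, and a softmax head assigns one of the mood classes.
from tensorflow.keras import layers, models

vocab_size, max_len, n_moods = 20000, 100, 4   # hypothetical sizes

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 128),
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(n_moods, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```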

Identification of Profane Words in Cyberbullying Incidents within Social Networks

  • Ali, Wan Noor Hamiza Wan;Mohd, Masnizah;Fauzi, Fariza
    • Journal of Information Science Theory and Practice
    • /
    • v.9 no.1
    • /
    • pp.24-34
    • /
    • 2021
  • The popularity of social networking sites (SNS) has facilitated communication between users. The usage of SNS helps users in their daily life in various ways, such as sharing opinions, keeping in touch with old friends, making new friends, and getting information. However, some users misuse SNS to belittle or hurt others using profanities, which is typical in cyberbullying incidents. Thus, in this study, we aim to identify profane words from the ASKfm corpus and, based on a lexicon dictionary, analyze their distribution across four different roles involved in cyberbullying: harasser, victim, bystander who assists the bully, and bystander who defends the victim. Evaluation in this study focused on the occurrences of profane words for each role in the corpus. The top 10 most common words in the corpus were also identified and represented in a graph. Results from the analysis show that the four roles used profane words in their conversations with different weightage and distribution, even though the profane words used are mostly similar. The harasser ranked first in the use of profane words compared with the other roles. These results can be further explored and considered as a potential feature in a cyberbullying detection model using a machine learning approach; they will contribute to formulating a suitable representation and to modeling a cyberbullying detection model based on the identification of profane word distribution across different cyberbullying roles in social networks in future work.
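
A minimal sketch of the lexicon-based counting the study performs, tallying profane-word occurrences per cyberbullying role; the lexicon and posts below are hypothetical placeholders, not the ASKfm corpus:

```python
# Count lexicon hits per role and rank the roles by profanity usage.
from collections import Counter

profanity_lexicon = {"idiot", "loser"}          # stand-in lexicon
posts = [
    ("harasser", "you are such a loser and an idiot"),
    ("victim", "please stop"),
    ("bystander_assistant", "what a loser"),
    ("bystander_defender", "leave them alone"),
]

counts = Counter()
for role, text in posts:
    counts[role] += sum(tok in profanity_lexicon for tok in text.lower().split())

print(counts.most_common())   # the harasser ranks first in this toy example
```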

Application of Social Network Analysis to the Evaluation of National R&D Programs (국가연구개발사업 평가에서 사회연결망 분석 활용 방안)

  • Gi, Ji-Hun
    • Proceedings of the Korea Technology Innovation Society Conference
    • /
    • 2017.11a
    • /
    • pp.129-129
    • /
    • 2017
  • In planning and evaluating government R&D programs, one of the first steps is to understand the government's current R&D investment portfolio - which fields or topics the government is now investing in. Methods for analyzing the government R&D investment portfolio traditionally tend to rely on keyword searches or ad-hoc two-dimensional classifications. The main drawback of these approaches is their limited ability to account for the characteristics of the whole government investment in R&D and for the role of an individual R&D program in it, which tends to depend on its relationships with other programs. This paper suggests a new method for mapping and analyzing government investment in R&D using a combination of natural language processing (NLP) and network analysis. NLP enables us to build a network of government R&D programs whose links are defined by similarity in R&D topics. Methods from network analysis then reveal the characteristics of government investment in R&D, including major investment fields, unexplored topics, and key R&D programs that play a hub- or bridge-like role in the network of R&D programs and are difficult to identify with conventional methods. These insights can be utilized in planning a new R&D program, in reviewing its proposal, or in evaluating the performance of R&D programs. The utilized (filtered) Korean text corpus consists of hundreds of R&D program descriptions in the budget requests for fiscal year 2017 submitted by government departments to the Korean Ministry of Strategy and Finance.
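
A sketch of the pipeline outlined above: embed program descriptions with TF-IDF, link programs whose descriptions are similar, then read off hub- and bridge-like programs with centrality measures. The program names, descriptions, and similarity threshold are illustrative assumptions:

```python
# Build a similarity network of R&D program descriptions and compute
# centralities that flag hub-like and bridge-like programs.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = {
    "ProgramA": "AI semiconductor design research",
    "ProgramB": "next generation semiconductor materials",
    "ProgramC": "marine ecosystem monitoring",
}
names = list(descriptions)
sim = cosine_similarity(TfidfVectorizer().fit_transform(descriptions.values()))

g = nx.Graph()
g.add_nodes_from(names)
threshold = 0.1                       # hypothetical cut-off for drawing a link
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > threshold:
            g.add_edge(names[i], names[j], weight=float(sim[i, j]))

print(nx.degree_centrality(g))        # hub-like programs
print(nx.betweenness_centrality(g))   # bridge-like programs
```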


Affinity and Variety between Words in the Framework of Hypernetwork (하이퍼네트워크에서 본 단어간 긴밀성과 다양성)

  • Kim, Joon-Shik;Park, Chan-Hoon;Lee, Eun-Seok;Zhang, Byoung-Tak
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.35 no.4
    • /
    • pp.166-171
    • /
    • 2008
  • We studied the variety of and affinity between successive words in text documents. A number of groups were defined by the frequency of a following word in the whole text (corpus). In previous studies, Zipf's power law was explained by the Chinese restaurant process, and hub nodes were identified by examining the edge-number profile in a scale-free network. By studying the conditional frequency and degeneracy of a group, we observed both a power law and a hub profile at the same time. A symmetry between the affinity and the variety between words was found during the data analysis, and this phenomenon can be explained from the viewpoint of "exploitation and exploration." We also remark on a small symmetry-breaking phenomenon in the TIPSTER data.
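
A toy sketch of the two quantities the abstract studies: for each word, the variety (number of distinct following words) and the affinity (conditional frequency of each follower), computed from successive word pairs in a placeholder corpus:

```python
# Tally followers of each word, then report variety and conditional frequencies.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()
followers = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    followers[w1][w2] += 1

for word, ctr in followers.items():
    total = sum(ctr.values())
    variety = len(ctr)                                   # distinct followers
    affinity = {w: n / total for w, n in ctr.items()}    # conditional frequency
    print(word, variety, affinity)
```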

A Model of English Part-Of-Speech Determination for English-Korean Machine Translation (영한 기계번역에서의 영어 품사결정 모델)

  • Kim, Sung-Dong;Park, Sung-Hoon
    • Journal of Intelligence and Information Systems
    • /
    • v.15 no.3
    • /
    • pp.53-65
    • /
    • 2009
  • Part-of-speech determination is necessary for resolving part-of-speech ambiguity in English-Korean machine translation. Part-of-speech ambiguity causes high parsing complexity and makes accurate translation difficult. To solve this problem, the resolution of part-of-speech ambiguity must be performed after lexical analysis and before parsing. This paper proposes the CatAmRes model, which resolves part-of-speech ambiguity, and compares its performance with that of other part-of-speech tagging methods. The CatAmRes model determines the part of speech using the probability distribution obtained from Bayesian network training and statistical information based on the Penn Treebank corpus. The proposed model consists of a Calculator and a POSDeterminer: the Calculator computes the degree of appropriateness of each part of speech, and the POSDeterminer determines the part of speech of the word based on the calculated values. In the experiments, we measure the performance using sentences from the WSJ, Brown, and IBM corpora.
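
The CatAmRes internals are not given in the abstract; the snippet below is a stand-in illustration of the Calculator/POSDeterminer split, scoring each candidate part of speech of an ambiguous word by P(tag | previous tag) · P(word | tag) estimated from a tiny tagged corpus:

```python
# Estimate transition and emission counts, score candidate tags, pick the best.
from collections import Counter, defaultdict

tagged = [("the", "DT"), ("plant", "NN"), ("can", "MD"), ("grow", "VB"),
          ("we", "PRP"), ("can", "VB"), ("fish", "NN")]

emit = defaultdict(Counter)    # word counts per tag
trans = defaultdict(Counter)   # tag-bigram counts
for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
    trans[t1][t2] += 1
for w, t in tagged:
    emit[t][w] += 1

def appropriateness(word, tag, prev_tag):      # the "Calculator" role
    p_trans = trans[prev_tag][tag] / max(sum(trans[prev_tag].values()), 1)
    p_emit = emit[tag][word] / max(sum(emit[tag].values()), 1)
    return p_trans * p_emit

def determine(word, candidates, prev_tag):     # the "POSDeterminer" role
    return max(candidates, key=lambda t: appropriateness(word, t, prev_tag))

print(determine("can", ["MD", "VB", "NN"], prev_tag="NN"))   # picks the modal reading here
```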
