• Title, Summary, Keyword: Twitter Corpus

Extracting Core Events Based on Timeline and Retweet Analysis in Twitter Corpus (트위터 문서에서 시간 및 리트윗 분석을 통한 핵심 사건 추출)

  • Tsolmon, Bayar; Lee, Kyung-Soon
    • KIPS Transactions on Software and Data Engineering / v.1 no.1 / pp.69-74 / 2012
  • Many internet users focus on issues posted to social network services within a very short time. When a big social issue or event occurs, it affects the number of comments and retweets on Twitter that day. In this paper, we propose a method for extracting core events from Twitter data based on timeline analysis, sentiment features, and retweet information. To validate the method, we compared four variants: word frequency only, word frequency with sentiment analysis, the chi-square method only, and the chi-square method with sentiment analysis. For justification of the proposed approach, we evaluated the accuracy of correct answers in the top 10 results; the proposed method achieved 94.9%. The experimental results show that the proposed method is effective for extracting core events from a Twitter corpus.
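
The scoring device named in the abstract is the chi-square statistic computed per day over the timeline. As a rough Python sketch only (the paper's exact contingency setup, tokenization, and sentiment/retweet weighting are not given in the abstract, so everything below is an illustrative assumption):

```python
from collections import Counter

def chi_square(a, b, c, d):
    """2x2 chi-square statistic: term present/absent vs. target day/other days."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

def day_keywords(tweets_by_day, target_day, top_k=10):
    """Rank terms that are unusually frequent on target_day relative to
    all other days; tweets are plain strings, tokenized by whitespace."""
    day_tweets = tweets_by_day[target_day]
    day_df = Counter(w for t in day_tweets for w in set(t.split()))
    rest_df, n_rest = Counter(), 0
    for day, tweets in tweets_by_day.items():
        if day == target_day:
            continue
        n_rest += len(tweets)
        rest_df.update(w for t in tweets for w in set(t.split()))
    n_day = len(day_tweets)
    scores = {
        w: chi_square(df, n_day - df, rest_df[w], n_rest - rest_df[w])
        for w, df in day_df.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```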

Developing a Sentiment Analysing and Tagging System (감성 분석 및 감성 정보 부착 시스템 구현)

  • Lee, Hyun Gyu; Lee, Songwook
    • KIPS Transactions on Software and Data Engineering / v.5 no.8 / pp.377-384 / 2016
  • Our goal is to build a system that collects tweets from Twitter, analyzes the sentiment of each tweet, and helps users build a sentiment-tagged corpus semi-automatically. After collecting tweets with the Twitter API, we analyze their sentiments with a sentiment dictionary. With the proposed system, users can verify the system's results and insert new sentiment words or dependency relations where sentiment information exists. Sentiment information is tagged in a JSON structure, which is convenient for building and accessing the corpus. On a test set, the system shows about 76% accuracy in classifying the sentiment of sentences as positive, neutral, or negative.
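
The abstract does not give the JSON schema or the dictionary contents, so the following is a minimal dictionary-based tagging sketch with invented entries and field names:

```python
import json

# Toy polarity dictionary; the paper uses a Korean sentiment dictionary
# whose entries are not listed in the abstract.
SENTIMENT_DICT = {"love": 1, "great": 1, "hate": -1, "terrible": -1}

def tag_tweet(text):
    """Score a tweet against the dictionary and emit a JSON record;
    the field names here are invented for illustration."""
    tokens = text.lower().split()
    score = sum(SENTIMENT_DICT.get(tok, 0) for tok in tokens)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return json.dumps(
        {"text": text, "sentiment": label,
         "matched": [t for t in tokens if t in SENTIMENT_DICT]},
        ensure_ascii=False,
    )

print(tag_tweet("I love this great phone"))
# {"text": "I love this great phone", "sentiment": "positive", "matched": ["love", "great"]}
```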

Company Name Discrimination in Tweets using Topic Signatures Extracted from News Corpus

  • Hong, Beomseok; Kim, Yanggon; Lee, Sang Ho
    • Journal of Computing Science and Engineering / v.10 no.4 / pp.128-136 / 2016
  • It is impossible for any human being to analyze the more than 500 million tweets generated per day. Lexical ambiguities on Twitter make it difficult to retrieve the desired data and relevant topics. Most solutions to the word sense disambiguation problem rely on knowledge-base systems. Unfortunately, manually creating a knowledge base is expensive and time-consuming, resulting in a knowledge-acquisition bottleneck. To overcome this bottleneck, topic signatures are used to disambiguate words. In this paper, we evaluate how various features of newspapers affect topic signature extraction for word sense discrimination in tweets. Our results show that topic signatures obtained from the snippet feature discriminate company names more accurately than those from the article body. We conclude that topic signatures extracted from news articles improve the accuracy of word sense discrimination in the automated analysis of tweets.
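
The abstract does not state the exact weighting used for topic signatures; a common formulation contrasts terms in documents mentioning the target against the rest. A sketch under that assumption (whitespace tokenization, add-one smoothing, and the overlap threshold are all illustrative):

```python
from collections import Counter
import math

def topic_signature(news_docs, target, top_k=20):
    """Contrast terms in documents mentioning `target` against the rest;
    the add-one-smoothed log-ratio weighting is one common choice,
    not necessarily the paper's."""
    with_t, without_t = Counter(), Counter()
    n_with = n_without = 0
    for doc in news_docs:
        tokens = doc.lower().split()
        if target in tokens:
            with_t.update(tokens)
            n_with += len(tokens)
        else:
            without_t.update(tokens)
            n_without += len(tokens)
    def weight(w):
        p_in = (with_t[w] + 1) / (n_with + 1)
        p_out = (without_t[w] + 1) / (n_without + 1)
        return with_t[w] * math.log(p_in / p_out)
    return sorted((w for w in with_t if w != target),
                  key=weight, reverse=True)[:top_k]

def is_company_sense(tweet, signature, min_overlap=2):
    """Treat a tweet as the company sense when enough signature terms
    appear; the overlap threshold is an assumption."""
    return len(set(tweet.lower().split()) & set(signature)) >= min_overlap
```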

Propensity Analysis of Political Attitude of Twitter Users by Extracting Sentiment from Timeline (타임라인의 감정추출을 통한 트위터 사용자의 정치적 성향 분석)

  • Kim, Sukjoong; Hwang, Byung-Yeon
    • Journal of Korea Multimedia Society / v.17 no.1 / pp.43-51 / 2014
  • Social network services have ample potential to be used widely and effectively across many fields of society because of their convenient accessibility and explicit user opinions. Above all, Twitter is characterized by simple, open connections between users and remarkable real-time diffusion. In practice, however, analysis faces many difficulties: semantic analysis of 140-character messages, the limitations of Korean natural language processing, and Twitter's own technical restrictions. This paper focuses on the persistence of people's political attitudes and assumes that incorporating this persistence into the analysis design increases precision, which we demonstrate experimentally. In experiments on a tweet corpus gathered during the Korean National Assembly election of 11 April 2012, the results proved considerably similar to the actual election outcome. Analyzing individual tweets yielded a precision of 75.4% and a recall of 34.8%, while analyzing political attitude per user timeline improved these figures by approximately 8% and 5%, respectively.
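
A minimal sketch of the timeline-level aggregation idea, assuming a majority vote over per-tweet labels (the paper's actual per-tweet classifier and aggregation rule are not given in the abstract):

```python
from collections import Counter

def classify_tweet(text):
    """Placeholder per-tweet classifier returning a party label or None;
    the paper's actual classifier works on Korean text."""
    if "party_a" in text:
        return "A"
    if "party_b" in text:
        return "B"
    return None

def timeline_propensity(tweets_by_user):
    """Decide each user's political propensity by majority vote over the
    whole timeline, smoothing out noisy individual tweets."""
    result = {}
    for user, tweets in tweets_by_user.items():
        votes = Counter(filter(None, map(classify_tweet, tweets)))
        result[user] = votes.most_common(1)[0][0] if votes else None
    return result
```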

Construction and Evaluation of a Sentiment Dictionary Using a Web Corpus Collected from Game Domain (게임 도메인 웹 코퍼스를 이용한 감성사전 구축 및 평가)

  • Jeong, Woo-Young; Bae, Byung-Chull; Cho, Sung Hyun; Kang, Shin-Jin
    • Journal of Korea Game Society / v.18 no.5 / pp.113-122 / 2018
  • This paper describes an approach to building and evaluating a sentiment dictionary using a web corpus in the game domain. To build the sentiment dictionary, we collected vocabulary from game-related web documents on a domestic portal site using the Twitter Korean Processor. From the collected vocabulary, we selected the words POS-tagged as verbs or adjectives and assigned a sentiment score to each selected word. To evaluate the constructed dictionary, we calculated F1 scores from precision and recall against Korean-SWN, which is based on the English SentiWordNet (SWN). The evaluation shows average F1 scores of 0.85 for adjectives and 0.77 for verbs.
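
How entries are matched against Korean-SWN is not detailed in the abstract; a minimal sketch of the precision/recall/F1 computation, treating both lexicons as word-to-polarity maps (an assumption), might look like:

```python
def evaluate_lexicon(predicted, reference):
    """Precision/recall/F1 of a built lexicon against a reference such as
    Korean-SWN; both are treated as word -> polarity maps."""
    hits = sum(1 for w, pol in predicted.items() if reference.get(w) == pol)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = evaluate_lexicon({"재밌다": "pos", "지루하다": "neg"},
                            {"재밌다": "pos", "지루하다": "neg", "멋지다": "pos"})
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")  # 1.00, 0.67, 0.80
```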

Extracting Core Event Feature Based on Timeline Analysis and Sentiment Feature in Twitter Corpus (트위터 자료의 시간별 분석과 감성 자질을 이용한 핵심 사건 추출)

  • Kim, Hui-Hwan; Tsolmon, Bayar; Lee, Kyung-Soon
    • Proceedings of the Korea Information Processing Society Conference / pp.395-398 / 2011
  • Twitter users want fast, concise, and continuous communication with others about issues of interest, and this characteristic means that the number of tweets varies with the events surrounding each issue. When an event occurs around a social issue, the number of tweets at that moment increases explosively. In this paper, we exploit this characteristic to recognize events by analyzing Twitter data over time, and we extract the core events for the corresponding date using sentiment features and chi-square values.
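
A minimal sketch of the volume-spike idea described above, using a z-score rule that is an illustrative stand-in for the paper's timeline analysis:

```python
from statistics import mean, stdev

def detect_event_days(daily_counts, z_threshold=2.0):
    """Flag days whose tweet volume spikes far above the mean.
    daily_counts maps date -> tweet count; the z-score rule and its
    threshold are illustrative, not taken from the paper."""
    counts = list(daily_counts.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [day for day, c in daily_counts.items()
            if (c - mu) / sigma >= z_threshold]
```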

Twitter Corpus Collection and Analysis (트위터 말뭉치 수집과 분석)

  • Yoo, Daehoon; Lee, Cheongjae; Kim, Seokhwan; Lee, Gary Geunbae
    • Annual Conference on Human and Language Technology / pp.136-140 / 2009
  • Recently, Twitter, a kind of microblog distinct from conventional blogs, has become a hot topic on the internet. Twitter is a microblog that strips down the many features of existing blogs and mini-homepages and allows only short text posts. For this reason, Twitter, with its characteristic simplicity and immediacy, is rapidly becoming known to general internet users. Analyzing Twitter can open a window onto the thoughts and opinions of the online public on a wide variety of topics, and comparing Twitter use across language communities can reveal cultural differences between countries. In this paper, we analyzed the Twitter messages of Korean and English-speaking users by topic, purpose, and so on. The results show that in Korea Twitter is used mostly as a diary for personal thoughts, whereas in the English-speaking world it is also used for many other purposes, such as press releases and advertisements.

A Crowdsourcing-based Emotional Words Tagging Game for Building a Polarity Lexicon in Korean (한국어 극성 사전 구축을 위한 크라우드소싱 기반 감성 단어 극성 태깅 게임)

  • Kim, Jun-Gi; Kang, Shin-Jin; Bae, Byung-Chull
    • Journal of Korea Game Society / v.17 no.2 / pp.135-144 / 2017
  • Sentiment analysis refers to analyzing a writer's subjective opinions or feelings through text. For effective sentiment analysis, it is essential to build a polarity lexicon of emotional words. This paper introduces a crowdsourcing-based game that we developed for efficiently building a polarity lexicon in Korean. First, we collected a corpus from related internet communities using a crawler and segmented it into words using the Twitter POS analyzer. The POS-tagged words are presented in a mobile tagging game in which players voluntarily tag the polarity of each word, and the results are collected into a database. So far we have tagged the polarities of about 1,200 words. We expect that our research can contribute to Korean sentiment analysis research, especially in the game domain, as more emotional word data are collected in the future.
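
A minimal sketch of turning the collected player votes into lexicon entries; the vote-count and agreement thresholds are illustrative assumptions, as the abstract does not describe the aggregation rule:

```python
from collections import Counter

def build_lexicon(votes, min_votes=5, min_agreement=0.7):
    """Turn per-word crowd votes into polarity lexicon entries.
    votes maps word -> list of player labels ('pos'/'neg'/'neu');
    both thresholds are illustrative assumptions."""
    lexicon = {}
    for word, labels in votes.items():
        if len(labels) < min_votes:
            continue  # too few players have tagged this word
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            lexicon[word] = label  # keep only words with clear consensus
    return lexicon
```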

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae; Lee, Jungwon; Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.119-138 / 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has evolved along with information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and monetary damage occurs more and more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. As a conceptual framework, we first developed a matrix of the characteristics of cybercriminality on SNS and emergency management, and we suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps. Among these, this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected matching data from SNS such as Twitter. The collected data were given to two researchers to decide whether or not they are related to cybercriminality, particularly financial fraud. We then selected additional keywords from the related nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into training data. The preprocessing consists of a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, complex sentences are decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are retained: nouns refer to things and thus express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. To turn the selected data into training data, each item is labeled 'legal' or 'illegal'. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a training set (70%) and a test set (30%). SVM was used as the discrimination algorithm; since SVM takes gamma and cost as its main parameters, we set gamma to 0.5 and cost to 10 based on an optimal-value search, with the cost set higher than in typical settings. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and a Collective Intelligence method, using overall accuracy as the metric. The proposed method achieved an overall accuracy of 92.41% on illegal loan advertisements and 77.75% on illegal door-to-door sales, clearly superior to Term Frequency, MLE, and the others. The results therefore suggest that the proposed method is valid and practically usable.
In this paper, we propose a framework for managing crises signaled by anomalies in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying SVM-like discrimination algorithms to text analysis, and to practitioners in the fields of brand management and opinion mining.
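
A minimal scikit-learn sketch of the described pipeline, using the reported parameters (gamma = 0.5, C = 10, 70/30 split); the texts, labels, and tokenization are placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder documents standing in for the crawled SNS/news/blog articles.
texts = [
    "cheap loan approved today no credit check",
    "private loan fast cash contact now",
    "instant loan wire money immediately",
    "city council announces new road repairs",
    "library extends weekend opening hours",
    "school festival scheduled for next month",
]
labels = ["illegal", "illegal", "illegal", "legal", "legal", "legal"]

# Document-term matrix (the paper builds it from noun/symbol morphemes;
# plain word counts are used here for simplicity).
X = CountVectorizer().fit_transform(texts)

# 70/30 split and the reported SVM parameters: gamma=0.5, cost(C)=10.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=0)
clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```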

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun; Hyun, Yoonjin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and that calls for the ability to classify relevant information automatically; hence text classification. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. Various techniques are available in this field, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: the performance of a text classification model varies with the vocabulary of the corpus and the features created for classification. Most previous attempts propose a new algorithm or modify an existing one, and such research has arguably reached its limits. In this study, rather than proposing or modifying an algorithm, we focus on changing how the data are used. It is widely known that classifier performance is influenced by the quality of the training data from which the classifier is built. Real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them. In this study, we regard data from different domains, i.e., heterogeneous data, as having noise-like characteristics that can be exploited in the classification process. Machine learning classifiers are normally built on the assumption that the characteristics of the training data and the target data are the same or very similar. For unstructured data such as text, however, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and the target data differ, the features may differ between the two as well. We therefore attempt to improve classification accuracy by artificially injecting noise into the process of constructing the document classifier, thereby strengthening its robustness. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms struggle: they are not designed to recognize different data representations at once and combine them in the same generalization. To utilize heterogeneous data in the learning process of the document classifier, we therefore apply semi-supervised learning. However, unlabeled data can degrade the performance of the document classifier.
We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision. In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
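
RSESLA itself is not specified in the abstract; as a simpler stand-in, the following self-training sketch shows the general semi-supervised idea of admitting only confidently labeled documents into training (the threshold, base model, and features are assumptions):

```python
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(labeled_texts, labels, unlabeled_texts,
               threshold=0.9, rounds=3):
    """Iteratively move confidently predicted unlabeled documents into
    the training set; uncertain documents are left out, echoing the idea
    of selecting only documents that help the classifier."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(labeled_texts + unlabeled_texts)
    X_lab, X_unl = X[:len(labeled_texts)], X[len(labeled_texts):]
    y = list(labels)
    pool = list(range(X_unl.shape[0]))  # indices of still-unlabeled docs
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_lab, y)
        if not pool:
            break
        probs = clf.predict_proba(X_unl[pool])
        confident = [(i, p) for i, p in zip(pool, probs)
                     if p.max() >= threshold]
        if not confident:
            break
        admitted = {i for i, _ in confident}
        X_lab = sp.vstack([X_lab] + [X_unl[i] for i, _ in confident])
        y += [clf.classes_[p.argmax()] for _, p in confident]
        pool = [i for i in pool if i not in admitted]
    return clf
```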