• Title/Summary/Keyword: Twitter data (트위터 데이터)

A Study on Sentiment Analysis of Media and SNS response to National Policy: focusing on policy of Child allowance, Childbirth grant (국가 정책에 대한 언론과 SNS 반응의 감성 분석 연구 -아동 수당, 출산 장려금 정책을 중심으로-)

  • Yun, Hye Min;Choi, Eun Jung
    • Journal of Digital Convergence / v.17 no.2 / pp.195-200 / 2019
  • Nowadays, as the use of mobile devices such as smartphones and tablets, along with personal computers, continues to expand, data is accumulating on the Internet at an exponential rate. In addition, with the development of SNS, users can freely communicate with each other and share information in various fields, so diverse opinions accumulate in the form of big data. Accordingly, big data analysis techniques are being used to find out how the response of the general public differs from the response of the media. In this paper, we analyzed the public response on SNS to the child allowance and childbirth grant policies, and also analyzed the response of the media. To this end, we gathered user posts and comments published on Twitter over a certain period, crawled news articles, and applied sentiment analysis to both. From these data, we compared the opinions of the public posted on SNS with the responses of the media expressed in news articles. As a result, we found that the public and the media respond differently to some national policies.
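
The tweet-versus-news comparison described in this abstract can be prototyped with a simple lexicon-based polarity scorer. Below is a minimal sketch assuming pre-collected tweet and article texts and a toy English lexicon; the paper's actual Korean lexicon, crawler, and classifier are not specified here.

```python
# Toy sentiment lexicon; a real study would use a curated Korean lexicon.
POSITIVE = {"good", "support", "helpful", "welcome"}
NEGATIVE = {"bad", "waste", "oppose", "unfair"}

def polarity(text: str) -> int:
    """Score a text by lexicon hits: >0 leans positive, <0 leans negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mean_polarity(corpus: list[str]) -> float:
    """Average polarity of a corpus, used to compare two sources."""
    return sum(polarity(t) for t in corpus) / len(corpus)

tweets = ["Child allowance is helpful and good", "Such a waste of budget"]
articles = ["Experts support the childbirth grant policy"]
print(mean_polarity(tweets), mean_polarity(articles))  # compare SNS vs. media tone
```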

Social Media Bigdata Analysis Based on Information Security Keyword Using Text Mining (텍스트마이닝을 활용한 정보보호 키워드 기반 소셜미디어 빅데이터 분석)

  • Chung, JinMyeong;Park, YoungHo
    • Journal of Korea Society of Industrial Information Systems / v.27 no.5 / pp.37-48 / 2022
  • With the development of digital technology, social issues are discussed on digital platforms such as SNS, where public opinion takes shape. This study analyzed big data from Twitter, a world-renowned social network service, to identify that public opinion. After collecting Twitter data on 14 keywords over the whole of 2021, we analyzed term frequency and the relationships among keyword documents using the Pearson correlation coefficient and data-mining techniques. Furthermore, the six main topics at the center of the information security field in 2021 were derived through topic modeling with LDA (Latent Dirichlet Allocation). These results are expected to serve as baseline data, especially for identifying key agenda items when related industries establish next-step strategies or when the government formulates policies.
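
The topic-modeling step in this abstract can be sketched with scikit-learn's LDA implementation. The six-topic setting mirrors the paper's reported result, but the documents and preprocessing below are illustrative assumptions; the 14 collection keywords are not reproduced.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder documents standing in for keyword-matched tweets.
docs = ["ransomware attack on hospital network", "phishing email credential theft",
        "data breach exposes user records", "zero day exploit patched quickly",
        "privacy law and personal data", "ddos attack disrupts service"]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Six topics, matching the number the paper derives for 2021.
lda = LatentDirichletAllocation(n_components=6, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")  # inspect the derived themes
```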

Relationship between Result of Sentiment Analysis and User Satisfaction -The case of Korean Meteorological Administration- (감성분석 결과와 사용자 만족도와의 관계 -기상청 사례를 중심으로-)

  • Kim, In-Gyum;Kim, Hye-Min;Lim, Byunghwan;Lee, Ki-Kwang
    • The Journal of the Korea Contents Association / v.16 no.10 / pp.393-402 / 2016
  • To compensate for the limitations of the satisfaction survey currently conducted by the Korea Meteorological Administration (KMA), sentiment analysis via a social networking service (SNS) can be utilized. Tweets mentioning 'KMA' posted from 2011 to 2014 were collected and, using Naïve Bayes classification, classified into three sentiments: positive, negative, and neutral. An additional dictionary was built from morphemes that appeared in only one of the positive, negative, or neutral classes of the basic Naïve Bayes classification, which improved the accuracy of the sentiment analysis. As a result, when sentiments were classified with the basic Naïve Bayes classifier, the training data were reproduced with an accuracy of about 75%, whereas classification with the additional dictionary reached a 97% accuracy rate. With the additional dictionary, the sentiments of the verification data were classified with about 75% accuracy. This classification accuracy could be improved by a better-qualified dictionary built from a larger amount of training data, including diverse weather-related keywords, and by continuous updates of the dictionary. Meanwhile, in contrast to sentiment analysis based on the dictionary definitions of individual vocabulary items, classifying sentiments by the meaning of whole sentences could explain the increased rate of negative sentiment and the change in satisfaction. Therefore, sentiment analysis via SNS can be considered a useful tool for complementing surveys in the future.
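
A minimal sketch of the three-way Naïve Bayes sentiment classification described above, using scikit-learn; the toy training texts stand in for the paper's KMA-related tweets, and the custom morpheme dictionary is not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Illustrative labeled tweets; the paper used Korean tweets mentioning 'KMA'.
train_texts = ["forecast was accurate thanks", "wrong forecast again ruined my day",
               "rain expected tomorrow", "great job predicting the typhoon",
               "terrible warning came too late", "cloudy with a chance of showers"]
train_labels = ["positive", "negative", "neutral",
                "positive", "negative", "neutral"]

# Bag-of-words features feeding a multinomial Naive Bayes classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["the forecast was wrong and terrible"]))  # expect 'negative'
```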

Identification of Visitation Density and Critical Management Area Regarding Marine Spatial Planning: Applying Social Big Data (해양공간계획 수립을 위한 방문밀집도 및 중점관리지역 규명: 소셜 빅데이터를 활용하여)

  • Kim, Yoonjung;Kim, Choongki;Kim, Gangsun
    • Journal of Environmental Impact Assessment / v.29 no.2 / pp.122-131 / 2020
  • Marine Spatial Planning is an emerging strategy for promoting sustainable development in coastal and marine areas based on the concept of ecosystem services. Methodologically, the usage rate of resources and its impact should be considered in the process of spatial planning. In particular, given the rapid increase in coastal tourism, visitation patterns need to be identified across coastal areas. However, efforts to quantify visitation patterns have been limited by the high cost and labor of extensive field studies. In this regard, this study proposed using social big data in Marine Spatial Planning to identify spatial visitation density and critical management zones throughout coastal areas. We suggested using GPS information from Flickr and Twitter, and evaluated the critical management zones by applying spatial statistics and density analysis. The results clearly showed coastal areas with relatively high visitation in the southern sea of South Korea. The Flickr and Twitter information correlated highly with field data when a proxy excluding over-estimation was applied and an appropriate grid scale was identified in the assessment approach. Overall, this study offers insights into using social big data in Marine Spatial Planning to reflect the size and usage rate of coastal tourism, which can be used to designate conservation areas and critical zones for intensive management, promoting a constant supply of cultural services.
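
The visitation-density step lends itself to a simple grid-binning sketch: count geotagged posts per cell and flag the densest cells. The coordinates, study extent, and 0.1-degree grid scale below are assumptions for illustration, not the paper's calibrated proxy or grid.

```python
import numpy as np

# Hypothetical geotagged post locations (e.g., from Flickr/Twitter metadata).
lons = np.array([128.10, 128.20, 128.15, 127.90, 128.80])
lats = np.array([34.80, 34.85, 34.82, 34.70, 35.10])

# Bin posts into a 0.1-degree grid over an illustrative coastal extent;
# points outside the range are silently dropped by histogram2d.
H, lon_edges, lat_edges = np.histogram2d(
    lons, lats, bins=[10, 6], range=[[127.5, 128.5], [34.5, 35.1]])

# Densest cells approximate candidate critical management zones.
for i, j in np.argwhere(H >= H.max()):
    print(f"cell lon[{lon_edges[i]:.2f},{lon_edges[i+1]:.2f}] "
          f"lat[{lat_edges[j]:.2f},{lat_edges[j+1]:.2f}]: {int(H[i, j])} posts")
```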

Location Inference of Twitter Users using Timeline Data (타임라인데이터를 이용한 트위터 사용자의 거주 지역 유추방법)

  • Kang, Ae Tti;Kang, Young Ok
    • Spatial Information Research / v.23 no.2 / pp.69-81 / 2015
  • If the residential areas of SNS users can be inferred by analyzing SNS big data, this can offer an alternative for spatial big data research, which suffers from location sparsity and ecological error. In this study, we developed a method that uses the daily activity patterns found in the timeline data of Twitter users to infer their residential areas. We recognized the daily activity patterns of Twitter users from their movement patterns and from the region-related words they use in tweets. The models based on user movement and text are named the daily movement pattern model and the daily activity field model, respectively. We then selected the variables to be used in each model. We defined the dependent variable as 0 if the area from which a user mainly tweets is their home location (HL), and as 1 otherwise. According to our results, obtained by discriminant analysis, the hit ratios of the two models were 67.5% and 57.5%, respectively. We tested both models using the timeline data of stress-related tweets. As a result, we inferred the residential areas of 5,301 out of 48,235 users and obtained 9,606 stress-related tweets with a residential area, about a 44-fold increase over the count of geo-tagged tweets. The methodology used in this study can be applied not only to secure more location data in SNS big data research, but also to link SNS big data with regional statistics for analyzing regional phenomena.
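
The discriminant analysis the abstract reports can be sketched with scikit-learn's linear discriminant analysis; the two features and toy samples below are hypothetical stand-ins for the paper's movement-pattern and region-word variables.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical features per user: [share of tweets posted at night,
# radius of daily movement in km]; label 0 = main tweeting area is home (HL).
X = np.array([[0.8, 2.1], [0.7, 3.0], [0.9, 1.5],      # mostly-home users
              [0.3, 15.0], [0.2, 22.4], [0.4, 18.3]])  # commuting users
y = np.array([0, 0, 0, 1, 1, 1])

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[0.85, 2.0]]))  # -> [0]: tweeting area likely equals home
print(lda.score(X, y))             # training hit ratio, cf. the paper's 67.5%/57.5%
```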

Storm-Based Dynamic Tag Cloud for Real-Time SNS Data (실시간 SNS 데이터를 위한 Storm 기반 동적 태그 클라우드)

  • Son, Siwoon;Kim, Dasol;Lee, Sujeong;Gil, Myeong-Seon;Moon, Yang-Sae
    • KIPS Transactions on Software and Data Engineering / v.6 no.6 / pp.309-314 / 2017
  • In general, collecting, storing, and analyzing SNS (social network service) data is difficult because such data have big data characteristics: they arrive very fast and as a mixture of structured and unstructured data. In this paper, we propose a new data visualization framework that runs on Apache Storm and is useful for the real-time, dynamic analysis of SNS data. Apache Storm is a representative big data software platform that processes and analyzes real-time streaming data in a distributed environment. Using Storm, we collect and aggregate real-time Twitter data and dynamically visualize the aggregated results through a tag cloud. In addition to the Storm-based collection and aggregation functionality, we design and implement a Web interface through which a user enters keywords of interest and views the resulting tag cloud for those keywords. Finally, we show empirically that the framework lets users intuitively grasp how topics of interest change in SNS data, and that the visualized results can be applied to many other services, such as thematic trend analysis, product recommendation, and customer needs identification.
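
A Storm topology itself is typically written in Java, but the per-window counting logic of an aggregation bolt can be sketched in a few lines of Python; the simulated tweet stream and the tumbling-window policy below are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def tag_cloud_windows(tweet_stream, window_size=3, top_k=5):
    """Aggregate term frequencies per tumbling window and emit tag-cloud data."""
    window, counts = [], Counter()
    for tweet in tweet_stream:
        counts.update(tweet.lower().split())
        window.append(tweet)
        if len(window) == window_size:       # window closes: emit the cloud
            yield counts.most_common(top_k)  # (term, weight) pairs
            window, counts = [], Counter()   # reset for the next window

stream = ["storm processes streams", "storm scales out", "tag cloud of storm data",
          "twitter data arrives fast", "real time twitter analysis", "twitter trends"]
for cloud in tag_cloud_windows(stream):
    print(cloud)  # font size in the rendered tag cloud would scale with weight
```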

Understanding Public Opinion by Analyzing Twitter Posts Related to Real Estate Policy (부동산 정책 관련 트위터 게시물 분석을 통한 대중 여론 이해)

  • Kim, Kyuli;Oh, Chanhee;Zhu, Yongjun
    • Journal of the Korean Society for Library and Information Science / v.56 no.3 / pp.47-72 / 2022
  • This study aims to understand the trends of subjects related to real estate policies and the public's emotional opinion about them. Two keywords related to real estate policies, "real estate policy" and "real estate measure", were used to collect tweets created from February 25, 2008 to August 31, 2021. A total of 91,740 tweets were collected; after preprocessing and categorizing them into supply, real estate tax, interest rate, and population variance, sentiment analysis and dynamic topic modeling were applied to the final set of 18,925 tweets. The keywords of each category are as follows: supply (rental housing, greenbelt, newlyweds, homeless, supply, reconstruction, sale), real estate tax (comprehensive real estate tax, acquisition tax, holding tax, multiple homeowners, speculation), interest rate (interest rate), and population variance (Sejong, new city). The sentiment analysis showed that a person posted one or two positive tweets on average, whereas for negative and neutral tweets a person posted two or three. In addition, we found that some people hold positive as well as negative and neutral opinions toward real estate policies. The dynamic topic modeling identified negative reactions to real estate speculation and unearned income as the major negative topics, while expectations of an increased housing supply and of benefits for homeless people purchasing houses were identified as positive topics. Unlike previous studies, which focused on changes in and evaluations of specific real estate policies, this study has academic significance in that it collected posts from Twitter, one of the major social media platforms, applied sentiment analysis and dynamic topic modeling, and identified latent topics and trends in real estate policy over time. The results can help create new policies that take public opinion on real estate policies into consideration.
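
The dynamic-topic-modeling step can be sketched with gensim's LdaSeqModel, an implementation of the dynamic topic model; the tokenized tweets, two time slices, and two topics below are placeholders for the paper's 18,925 categorized tweets and its actual topic configuration.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

# Placeholder tokenized tweets, grouped by the abstract's policy vocabulary.
docs = [["supply", "rental", "housing"], ["tax", "speculation", "homeowners"],
        ["greenbelt", "reconstruction", "sale"],
        ["interest", "rate", "loan"], ["sejong", "new", "city"],
        ["holding", "tax", "speculation"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# time_slice: the first 3 documents belong to period 1, the last 3 to period 2.
ldaseq = LdaSeqModel(corpus=corpus, id2word=dictionary,
                     time_slice=[3, 3], num_topics=2)
for t in range(2):
    print(f"period {t}:", ldaseq.print_topics(time=t))  # topic drift over time
```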

Design of Splunk Platform based Big Data Analysis System for Objectionable Information Detection (Splunk 플랫폼을 활용한 유해 정보 탐지를 위한 빅데이터 분석 시스템 설계)

  • Lee, Hyeop-Geon;Kim, Young-Woon;Kim, Ki-Young;Choi, Jong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.1 / pp.76-81 / 2018
  • The Internet of Things (IoT), which is emerging as a future economic growth engine, has been actively introduced in areas close to our daily lives. However, IoT security threats remain to be resolved. In particular, with the spread of smart homes and smart cities, an explosive number of closed-circuit televisions (CCTVs) has been installed. The Internet protocol (IP) information, and even the port numbers assigned to CCTVs, are open to the public via the search engines of web portals and on social media platforms such as Facebook and Twitter; with even simple tools, these devices can easily be hacked. For this reason, a big-data analytics system is needed that supports quick responses to data potentially containing security risk factors, and to illegal websites that may cause social problems, by helping analyze data collected from the search engines and social media platforms frequently used by Internet users, as well as data on illegal websites.

An Update-Efficient, Disk-Based Inverted Index Structure for Keyword Search on Data Streams (데이터 스트림에 대한 키워드 검색을 위한, 효율적인 갱신이 가능한 디스크 기반 역색인 구조)

  • Park, Eun Ju;Lee, Ki Yong
    • KIPS Transactions on Software and Data Engineering / v.5 no.4 / pp.171-180 / 2016
  • As social networking services such as Twitter become increasingly popular, data streams are now widely prevalent. To search the data accumulated from data streams efficiently, an index structure is essential. In this paper, we propose an update-efficient, disk-based inverted index structure for efficient keyword search on data streams. When new data arrive on the data stream, the index must be updated to incorporate them. The traditional inverted index is very inefficient to update in terms of disk I/O, because all index data stored on disk must be read and written back each time the index is updated. To solve this problem, we divide the whole inverted index into a sequence of inverted indices of exponentially increasing size. New data are first inserted into the smallest index, and the small indices are later merged with larger ones, which leads to a small amortized update cost per new data item. Furthermore, when indices stored on disk are merged with each other, we minimize the disk I/O cost incurred by the merge operation, resulting in an even smaller update cost. Through various experiments, we compare the update efficiency of the proposed index structure with that of the previous one and show the performance advantage of the proposed structure in terms of update cost.
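
The exponentially sized index sequence described above follows the classic logarithmic-merging pattern. Below is an in-memory sketch of that pattern under assumed simplifications; the paper's structure is disk-based, and its merge I/O optimizations are not reproduced here.

```python
from collections import defaultdict

class LeveledInvertedIndex:
    """A sequence of inverted indices of exponentially increasing size.
    New data enters at level 0; occupied levels cascade-merge upward,
    like a binary counter, so each posting is rewritten O(log n) times."""

    def __init__(self):
        self.levels = []  # levels[i]: None or (doc_count, {term: [doc_ids]})

    def _merge(self, a, b):
        merged = defaultdict(list)
        for idx in (a, b):
            for term, postings in idx.items():
                merged[term].extend(postings)
        return dict(merged)

    def insert(self, doc_id, terms):
        carry, count, level = {t: [doc_id] for t in set(terms)}, 1, 0
        # Merge the new mini-index upward while a level is occupied,
        # mimicking the disk-based merge of small indices into larger ones.
        while level < len(self.levels) and self.levels[level] is not None:
            prev_count, prev_idx = self.levels[level]
            carry, count = self._merge(prev_idx, carry), count + prev_count
            self.levels[level] = None
            level += 1
        if level == len(self.levels):
            self.levels.append(None)
        self.levels[level] = (count, carry)

    def search(self, term):
        # A keyword query probes every live level and unions the postings.
        hits = []
        for slot in self.levels:
            if slot is not None:
                hits.extend(slot[1].get(term, []))
        return hits

idx = LeveledInvertedIndex()
idx.insert(1, ["twitter", "stream"])
idx.insert(2, ["stream", "index"])
print(idx.search("stream"))  # -> both documents, gathered across levels
```

Because each document participates in only O(log n) merges over its lifetime, the per-insert update cost stays small in the amortized sense, which is the property the abstract highlights.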

FinBERT Fine-Tuning for Sentiment Analysis: Exploring the Effectiveness of Datasets and Hyperparameters (감성 분석을 위한 FinBERT 미세 조정: 데이터 세트와 하이퍼파라미터의 효과성 탐구)

  • Jae Heon Kim;Hui Do Jung;Beakcheol Jang
    • Journal of Internet Computing and Services / v.24 no.4 / pp.127-135 / 2023
  • This paper explores the application of FinBERT, a BERT-based model further pre-trained on financial-domain text, to sentiment analysis in the financial domain, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to using the FinBERT model effectively for accurate sentiment analysis by employing various datasets and fine-tuning the hyperparameters. We outline the architecture and workflow of the proposed approach for fine-tuning FinBERT, emphasizing how various datasets and hyperparameters perform on sentiment analysis tasks. Additionally, we verify the reliability of GPT-3 as an annotator by using it for sentiment labeling. Our results show that the fine-tuned FinBERT model excels across a range of datasets and that the optimal combination, a learning rate of 5e-5 and a batch size of 64, performs consistently well across all datasets. Furthermore, based on the significant performance improvement of the FinBERT model on our general-domain Twitter data compared with our general-domain news data, we question whether the model should be further pre-trained only on financial news data. We simplify the complex process of determining the optimal approach to the FinBERT model and provide guidelines for selecting additional training datasets and hyperparameters when fine-tuning financial sentiment analysis models.
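
A hedged sketch of fine-tuning with the reported best hyperparameters (learning rate 5e-5, batch size 64) using Hugging Face Transformers; the checkpoint name "ProsusAI/finbert" and the toy labeled examples are assumptions, since the paper's exact checkpoint and data pipeline are not given here.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "ProsusAI/finbert"  # assumption: a public FinBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

# Toy labeled examples (0=positive, 1=negative); replace with the
# GPT-3-annotated tweet/news datasets described in the abstract.
data = Dataset.from_dict({
    "text": ["Shares rallied after the earnings beat.",
             "The firm missed revenue estimates."],
    "label": [0, 1],
})
data = data.map(lambda b: tokenizer(b["text"], truncation=True,
                                    padding="max_length", max_length=64),
                batched=True)

# The paper's best-performing combination: lr 5e-5, batch size 64.
args = TrainingArguments(output_dir="finbert-ft", learning_rate=5e-5,
                         per_device_train_batch_size=64, num_train_epochs=3)
Trainer(model=model, args=args, train_dataset=data).train()
```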