• Title/Summary/Keyword: 뉴스 데이터 분석 (news data analysis)


An Analysis of Flood Vulnerability by Administrative Region through Big Data Analysis (빅데이터 분석을 통한 행정구역별 홍수 취약성 분석)

  • Yu, Yeong UK;Seong, Yeon Jeong;Park, Tae Gyeong;Jung, Young Hun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.193-193 / 2021
  • As climate change continues worldwide, the intensity and frequency of natural disasters are increasing. Among the types of natural disasters, hydrological disasters caused by localized heavy rainfall and typhoons account for the majority, and the scale and extent of flood damage tend to vary with regional hydrological characteristics. Managing such heterogeneous damage inevitably requires collecting a large amount of flood-damage information. In today's information age, as vast amounts of data are generated, terms such as 'big data', 'machine learning', and 'artificial intelligence' are drawing attention in many fields. For flood-damage information as well, beyond the reports published by the government, a great deal of information is now produced on the Internet through media such as news articles and SNS. Data on this scale will be an important resource that determines future competitive advantage and can serve as valuable input for flood preparedness. This study aimed to identify, through an Internet-based survey of flood-damage phenomena, which phenomena occur at different scales of flood damage. To this end, past flood-damage cases were investigated to collect related information such as rainfall and damage phenomena. The damage phenomena were gathered from media sources such as news articles and reports, and the collected unstructured text data were structured using text mining; the main flood-damage keywords were extracted and the data were expressed numerically.

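A minimal sketch of the kind of text-mining step the entry above describes: extracting and counting noun keywords from unstructured Korean news text about flood damage. The sample sentences and the use of KoNLPy's Okt tagger are assumptions for illustration, not the authors' actual pipeline.

```python
from collections import Counter

from konlpy.tag import Okt  # Korean morphological analyzer (assumed stand-in for the paper's tool)

# Hypothetical snippets of flood-damage news text (illustrative only).
articles = [
    "집중호우로 도로가 침수되고 주택 10여 채가 물에 잠겼다.",
    "태풍의 영향으로 하천이 범람해 농경지 침수 피해가 발생했다.",
    "시간당 80mm의 폭우로 지하차도가 침수돼 차량이 고립됐다.",
]

okt = Okt()
nouns = []
for text in articles:
    # Keep nouns of length >= 2 as candidate damage-phenomenon keywords.
    nouns.extend(n for n in okt.nouns(text) if len(n) >= 2)

# Turn the unstructured text into a structured keyword-frequency table.
keyword_counts = Counter(nouns)
for word, count in keyword_counts.most_common(10):
    print(word, count)
```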

News based Stock Market Sentiment Lexicon Acquisition Using Word2Vec (Word2Vec을 활용한 뉴스 기반 주가지수 방향성 예측용 감성 사전 구축)

  • Kim, Daye;Lee, Youngin
    • The Journal of Bigdata / v.3 no.1 / pp.13-20 / 2018
  • Stock market prediction has long been a dream of researchers as well as the public. Forecasting the ever-changing stock market, however, has proved a Herculean task. This study proposes a novel stock market sentiment lexicon acquisition system that can predict the growth (or decline) of the stock market index based on economic news. For this purpose, we collected three years of economic news, from January 2015 to December 2017, and adopted the Word2Vec model to consider the context of words. To evaluate the result, we performed sentiment analysis on the collected news data with the automatically constructed lexicon and compared the outcome with closings of the KOSPI (Korea Composite Stock Price Index), the South Korean stock market index.
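
A minimal sketch of the general approach the entry above describes: training Word2Vec on tokenized economic-news sentences and expanding a small seed lexicon with the words closest in the embedding space. The toy corpus, seed words, and all parameter values are assumptions, not the authors' configuration.

```python
from gensim.models import Word2Vec

# Toy corpus of tokenized economic-news sentences (illustrative only;
# the paper uses three years of real economic news, 2015-2017).
sentences = [
    ["코스피", "지수", "급등", "마감"],
    ["증시", "상승", "외국인", "매수"],
    ["코스피", "급락", "불안", "확산"],
    ["증시", "하락", "기관", "매도"],
]

# Skip-gram Word2Vec so that words appearing in similar news contexts
# end up close together in the embedding space.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, seed=42)

# Start from a few hand-picked seed words and expand each polarity class
# with its nearest neighbors to build the sentiment lexicon.
seed_positive = ["급등", "상승"]
positive_lexicon = set(seed_positive)
for word in seed_positive:
    for neighbor, _score in model.wv.most_similar(word, topn=3):
        positive_lexicon.add(neighbor)

print(sorted(positive_lexicon))
```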

An Analysis of the 2017 Korean Presidential Election Using Text Mining (텍스트 마이닝을 활용한 2017년 한국 대선 분석)

  • An, Eunhee;An, Jungkook
    • Journal of the Korea Convergence Society / v.11 no.5 / pp.199-207 / 2020
  • Recently, big data analysis has drawn attention in various fields because it can generate value from large amounts of data, and it is also used to run political campaigns or predict their results. However, existing research has been limited to compiling high-level information about candidates from specific SNS data only. Therefore, this study analyzes news trends and performs topic extraction, sentiment analysis, keyword analysis, and comment analysis for the 2017 presidential election of South Korea. The results show that various topics were generated and that online opinions can be extracted for the trending keywords of the respective candidates. This study also shows that portal news and comments can serve as useful tools for predicting the public's opinion on social issues, and it advances the building of strategic courses of action by providing a method for analyzing public opinion across various fields.
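
A minimal sketch of one of the steps the entry above lists, topic extraction from news text, using scikit-learn's LDA on a bag-of-words matrix. The toy documents and parameter choices are assumptions; the study's actual corpus covers 2017 Korean presidential-election news and comments.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy news snippets (illustrative only).
docs = [
    "candidate pledges economic reform and job creation",
    "debate focuses on security policy and defense spending",
    "voters discuss welfare pledges and economic policy",
    "security issues dominate the televised debate",
]

# Bag-of-words representation of the documents.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit a small LDA model and print the top words of each topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {topic_idx}: {top_terms}")
```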

Performance Evaluations of Text Ranking Algorithms

  • Kim, Myung-Hwi;Jang, Beakcheol
    • Journal of the Korea Society of Computer and Information / v.25 no.2 / pp.123-131 / 2020
  • The text ranking algorithm is a representative method for keyword extraction, and its importance is strongly emphasized. In this paper, we compare the performance of the TF-IDF, SMART, INQUERY, and CCA algorithms, which are used as text ranking algorithms in recent research and experiments. After explaining each algorithm, we compare their performance on data collected from news and Twitter. Experimental results show that all four algorithms can extract specific words from news data equally well. In the case of Twitter, however, CCA performs best at extracting specific words and INQUERY performs worst. We also analyze the accuracy of the algorithms through six comparison metrics. The results show that CCA achieves the best accuracy on the news data, while on Twitter TF-IDF and CCA show similarly good performance.
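
A minimal sketch of the TF-IDF ranking baseline the entry above compares against: score terms in each document with scikit-learn's TfidfVectorizer and take the highest-scoring ones as keywords. The toy texts are assumptions; SMART, INQUERY, and CCA are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy news-like texts (illustrative only).
docs = [
    "heavy rain caused flooding in the southern region",
    "the central bank kept interest rates unchanged",
    "flooding damaged roads and houses in the region",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# For each document, rank terms by TF-IDF weight and keep the top three.
for doc_idx in range(tfidf.shape[0]):
    row = tfidf[doc_idx].toarray().ravel()
    top = row.argsort()[::-1][:3]
    print(doc_idx, [(terms[i], round(row[i], 3)) for i in top])
```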

A News Video Mining based on Multi-modal Approach and Text Mining (멀티모달 방법론과 텍스트 마이닝 기반의 뉴스 비디오 마이닝)

  • Lee, Han-Sung;Im, Young-Hee;Yu, Jae-Hak;Oh, Seung-Geun;Park, Dai-Hee
    • Journal of KIISE:Databases / v.37 no.3 / pp.127-136 / 2010
  • With the rapid growth of information and computer communication technologies, the number of digital documents, including multimedia data, has recently exploded. In particular, news video databases and news video mining have become the subject of extensive research aimed at developing effective and efficient tools for manipulating and analyzing news videos, because of their information richness. However, much of this research focuses on browsing, retrieval, and summarization of news videos; discovering and analyzing the plentiful latent semantic knowledge in news videos is still at a relatively early stage. In this paper, we propose a news video mining system based on a multi-modal approach and text mining, which uses the visual and textual information of news video clips and their scripts. The proposed system automatically and systematically constructs a taxonomy of news video stories with a hierarchical clustering algorithm, one of the text mining methods. It then analyzes the topics of news video stories from multiple angles by means of a time-cluster trend graph, a weighted cluster growth index, and network analysis. To demonstrate the validity of our approach, we analyzed the news videos on "The Second Summit of South and North Korea in 2007".
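
A minimal sketch of the text-mining component the entry above mentions: hierarchical (agglomerative) clustering of news-story scripts represented as TF-IDF vectors, which could serve as the automatic taxonomy step. The toy scripts and the Ward-linkage choice are assumptions, not the authors' exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

# Toy news-story scripts (illustrative only).
scripts = [
    "leaders meet at the inter-Korean summit to discuss cooperation",
    "summit agenda includes economic cooperation between the two Koreas",
    "heavy typhoon causes flooding and property damage on the coast",
    "storm damage forces evacuation of coastal residents",
]

# Represent each script as a TF-IDF vector.
X = TfidfVectorizer(stop_words="english").fit_transform(scripts).toarray()

# Build the dendrogram with Ward linkage and cut it into two clusters,
# giving a flat grouping that a story taxonomy could be built from.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 2 2]: summit stories vs. storm-damage stories
```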

A Design and Implementation of Scheme and Summary Generation Mechanism for News Video based on MPEG-7 MDS (MPEG-7을 기반으로 한 뉴스 동영상 스키마와 요약 생성 방법의 설계 및 구현)

  • 심진선;정진국;낭종호;김경수;하명환
    • Proceedings of the Korean Information Science Society Conference / 2002.04b / pp.577-579 / 2002
  • Recently, as the use of digital video has increased, technology for automatically analyzing its structure has become necessary. News video in particular has been used in many studies because its structure is relatively well formalized compared with other kinds of video. A problem in previous studies using news video is the lack of interoperability caused by differing data structures and system architectures. To resolve this lack of interoperability, this paper proposes a news video schema based on MPEG-7, the standard for describing multimedia data. In particular, we present a summarization method that lets a user understand a news video efficiently without watching the whole broadcast, and we describe it using the MPEG-7 MDS. The method proposed in this paper can be usefully applied in areas such as digital video libraries.


Analysis of Review Data of 'Tamna' Franchisees to Promote Sustainable Travel in Jeju City (제주시의 지속가능한 여행 활성화를 위한 지역화폐 '탐나는전' 가맹점의 리뷰 데이터 분석)

  • Sehui Baek;Sehyoung Kim;Miran Bae;Juyoung Kang
    • The Journal of Bigdata / v.7 no.2 / pp.113-128 / 2022
  • After COVID-19, interest in 'sustainable tourism' increased, and so did the number of tourists who wanted to experience it. However, the programs and methods for 'sustainable tourism' are neither specific nor diverse. In addition, since most of the interest in 'sustainable tourism' focuses on the 'environment' and 'carbon neutrality', there are few programs or government policies that contribute to the local community. Therefore, in this study, news data and review data were analyzed to suggest a method for promoting 'sustainable tourism'. First, the major themes of sustainable travel were derived through news big data analysis, from which policy themes and events of 'sustainable tourism' were identified. By analyzing news big data related to 'sustainable tourism', we also examine why sustainable travel has not taken hold in Korea. Finally, to promote sustainable travel on Jeju Island, we analyzed user review data of the '탐나는전' (Tamna) local-currency franchisees and propose an idea for coexisting with the local community.

A Comparative Study of Text analysis and Network embedding Methods for Effective Fake News Detection (효과적인 가짜 뉴스 탐지를 위한 텍스트 분석과 네트워크 임베딩 방법의 비교 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of Digital Convergence / v.17 no.5 / pp.137-143 / 2019
  • Fake news is a form of misinformation that benefits from the rapid spread of information on media platforms that users interact with, such as social media. The recent increase in fake news has caused many social problems. In this paper, we propose a method to detect such fake news. Previous research on fake news detection mainly focused on text analysis. This research instead focuses on the network over which social-media news spreads: it generates features with DeepWalk, a network embedding method, and classifies fake news using logistic regression analysis. We conducted an experiment on fake news detection using 211 news items on the Internet and 1.2 million news-diffusion network records. The results show that the accuracy of fake news detection using network embedding is 10.6% higher than that of text analysis. In addition, fake news detection that combines text analysis and network embedding does not show an increase in accuracy over network embedding alone. The results of this study can be effectively applied to the detection of fake news that organizations spread online.
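
A minimal sketch of the pipeline the entry above outlines: DeepWalk-style node embeddings of a news-diffusion graph (uniform random walks fed to Word2Vec) used as features for a logistic-regression fake-news classifier. The toy graph, labels, and all parameters are assumptions, not the study's 211-news / 1.2-million-record dataset.

```python
import random

import networkx as nx
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

random.seed(0)

# Toy diffusion graph: news nodes "n0", "n1" and the user accounts that spread them.
G = nx.Graph()
G.add_edges_from([("n0", "u1"), ("n0", "u2"), ("u1", "u2"),
                  ("n1", "u3"), ("n1", "u4"), ("u3", "u4"), ("u2", "u3")])

def random_walks(graph, num_walks=20, walk_length=6):
    """Generate DeepWalk-style uniform random walks over the graph."""
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes():
            walk = [node]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                walk.append(random.choice(neighbors))
            walks.append(walk)
    return walks

# Treat each walk as a "sentence" and learn node embeddings with skip-gram Word2Vec.
walks = random_walks(G)
emb = Word2Vec(walks, vector_size=16, window=3, min_count=1, sg=1, seed=0)

# Classify news nodes with logistic regression on their embeddings
# (toy labels: 0 = real, 1 = fake).
news_nodes, labels = ["n0", "n1"], [0, 1]
X = [emb.wv[n] for n in news_nodes]
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```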

Real Time Stock Information Analysis Method Based on Big Data considering Reliability (신뢰성을 고려한 빅데이터 기반 실시간 증권정보 분석 기법)

  • Kim, Yoon-Ki;Cho, Chang-Woo;Jeong, Chang-Sung
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.146-147 / 2013
  • With the spread of social media and smartphones, the amount of information exchanged between users on the Internet has grown sharply, and so has the need to process large-scale data. Such big data arises from various distributed servers such as news outlets, social media, and websites. Analyzing securities information likewise requires fetching data from multiple distributed servers: trading volume and prices generated in real time, as well as disclosure information of listed companies. Existing big data analysis techniques assume that the data fetched from each distributed server is equally reliable, which limits their ability to effectively analyze data containing indiscriminate information. In this paper, we assign reliability weights to the fetched data to enable trustworthy analysis of securities information.
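
A minimal sketch of the core idea the entry above describes: weighting data from different distributed sources by a reliability score before aggregating, instead of treating every source equally. The source names, weights, and sentiment scores are assumptions for illustration.

```python
# Hypothetical per-source signals about one stock: (reliability weight, sentiment score).
# Official disclosures are trusted more than social-media chatter.
sources = {
    "disclosure": (0.9, +0.6),
    "news":       (0.7, +0.2),
    "social":     (0.3, -0.8),
}

# Reliability-weighted average of the per-source signals.
weighted_sum = sum(weight * score for weight, score in sources.values())
total_weight = sum(weight for weight, _ in sources.values())
print(round(weighted_sum / total_weight, 3))  # overall reliability-weighted signal
```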

Analysis of YouTube's role as a new platform between media and consumers

  • Hur, Tai-Sung;Im, Jung-ju;Song, Da-hye
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.53-60 / 2022
  • Because of low entry barriers and ambiguous video-regulation standards, YouTube presents fake news and biased content based on unverified claims as if they were factual. This study therefore aims to analyze the influence of the media and YouTube on individual behavior and the relationship between them. Data from YouTube and Twitter were collected with Selenium, Beautiful Soup, and the Twitter API to identify the 31 most frequently mentioned keywords. Based on these 31 keywords, data were collected from YouTube, Twitter, and Naver News; positive, negative, and neutral sentiment was classified and quantified with the VADER model from the Natural Language Toolkit (NLTK) and used as the analysis data. Correlation analysis confirmed that the more negative the news, the more positive the content on YouTube, and that the positivity index of YouTube content is proportional to the positive and negative values on Twitter. The study finds that YouTube does not track the sentiment index shown in the news because of its secondary-processing, reaction-driven character. In other words, processed YouTube content directly affects the positive and negative figures on Twitter, which serves as a channel of communication. These results suggest that YouTube plays a role in assisting individuals' judgment at a time when accurate assessment of information has become difficult due to the rise of yellow media that appeal to people's interests and instincts.
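
A minimal sketch of the sentiment-quantification step the entry above describes, using the VADER analyzer from NLTK to classify short texts as positive, negative, or neutral by their compound score. The sample texts and the ±0.05 thresholds are assumptions (the thresholds follow common VADER usage, not necessarily the study's settings), and VADER's lexicon is English-only.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Hypothetical comments gathered from YouTube / Twitter / news (illustrative only).
texts = [
    "This video explains the issue really well, great work!",
    "Totally misleading clickbait, do not trust this channel.",
    "The briefing is scheduled for tomorrow morning.",
]

for text in texts:
    scores = sia.polarity_scores(text)  # dict with neg / neu / pos / compound
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(label, round(scores["compound"], 3), text)
```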