• Title/Summary/Keyword: 빅 이슈 (Big Issue)

Search results: 304

An Analysis of News Report Characteristics on Archives & Records Management for the Press in Korea: Based on 1999~2018 News Big Data (뉴스 빅데이터를 이용한 우리나라 언론의 기록관리 분야 보도 특성 분석: 1999~2018 뉴스를 중심으로)

  • Han, Seunghee
• Journal of the Korean Society for Information Management
    • /
    • v.35 no.3
    • /
    • pp.41-75
    • /
    • 2018
  • The purpose of this study is to analyze the characteristics of Korean media coverage on the topic of archives and records management through time-series analysis. From January 1999 to June 2018, 4,680 news articles on archives and records management were extracted from BigKinds. To examine the characteristics of media coverage on this topic, the study analyzed differences in press coverage by period, subject, and type of media, and conducted word-frequency-based content analysis and semantic network analysis to investigate the content characteristics of the coverage. The results show that news in the records management field differed in both volume and content by period, subject, and type of media. The amount of coverage began to increase after the Presidential Records Management Act was enacted in 2007, with the largest volume reported in 2013. Daily newspapers and financial newspapers reported the most news. During the first ten years after 1999, news topics formed around issues arising from the application and diffusion of the concept of archives and records management. Since the enactment of the Presidential Records Management Act, however, archives and records management has become a major factor in political and social issues, and a large amount of political and social news has been reported.
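The period- and media-type comparison described above can be sketched as a simple grouping of article counts. This is a minimal illustration with a few hypothetical article records, not the study's actual BigKinds dataset of 4,680 articles:

```python
from collections import Counter

# Hypothetical article metadata (year, media type); the study covers
# January 1999 to June 2018
articles = [
    (2006, "daily"), (2007, "daily"), (2007, "financial"),
    (2008, "daily"), (2013, "daily"), (2013, "financial"), (2013, "daily"),
]

# Difference in the amount of coverage by period and by type of media
by_year = Counter(year for year, _ in articles)
by_media = Counter(media for _, media in articles)

print(by_year.most_common(1))   # peak coverage year in this toy sample
print(by_media.most_common())
```

On real data the same two counters directly reproduce the study's per-period and per-media coverage comparison.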

Future Residential Forecasting and Recommendations of Housing Using STEEP-V Analysis (STEEP-V 방법론을 활용한 미래주거예측 및 대응방안)

  • An, Se-Yun;Lee, Sangho;Yoon, Jeong Joong;Kim, So-Yeon;Ju, Hannah;Kim, Sungwhan
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.6
    • /
    • pp.230-240
    • /
    • 2020
  • Recently, social debate about the Fourth Industrial Revolution has developed actively, and it is predicted that it will greatly influence our society, cities, and residential and industrial spaces. In particular, its technological developments are expected to cause wide-ranging changes in residential style and culture. It is therefore necessary to grasp the direction of future change in advance and to prepare future tasks and strategies proactively. The purpose of this study is to predict the direction and characteristics of mid- to long-term changes in future housing brought about by the Fourth Industrial Revolution, to define future social, spatial, and technological impacts and issues, and to find policy measures for them. STEEP(V) was used as the forecasting methodology: a process of deriving technical and social issues using big data, which collects various keywords, draws out key issues, and summarizes the patterns of social change related to each core issue. The proposed strategies for future housing prediction and response can serve as basic data for future housing policy directions, and the study suggests a process for deriving reasonable results from multiple data sets rather than an exact prediction.
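The STEEP(V) step of sorting collected keywords into categories (Social, Technological, Economic, Environmental, Political, plus Values) can be sketched as follows. The keyword-to-category mapping here is hypothetical, not the paper's actual dictionary or data sources:

```python
from collections import Counter

# Hypothetical mapping of collected keywords to STEEP-V categories
category_of = {
    "aging": "Social",
    "smart home": "Technological",
    "housing price": "Economic",
    "energy": "Environmental",
    "housing policy": "Political",
    "sharing": "Values",
}

collected_keywords = ["smart home", "aging", "smart home", "housing policy"]

# Summarize which STEEP-V categories the collected issues fall into
summary = Counter(category_of[k] for k in collected_keywords)
print(summary)
```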

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.1-18
    • /
    • 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as image, sound, video, and text are distributed through Web media. Therefore, many attempts have been made in recent years to discover new value through an analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative way for users to express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is being widely studied not only in the academic world but also in industry because it can be used to extract various issues from text such as news articles and social network service (SNS) posts and to analyze the trends of these issues. Conventionally, issue tracking is used to identify major issues sustained over a long period of time through topic modeling and to analyze the detailed distribution of documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that can be created, merged, divided, and deleted between these periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period of time. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking.
Note that detailed keywords are preferable to general keywords because the former can be clues for providing actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period. We generated an issue flow diagram based on the similarity of each issue between two consecutive periods. The issue transition pattern among categories was analyzed by using the category information of each document. In this study, we then applied the proposed methodology to a real case of 53,739 news articles. We derived an issue flow diagram from the articles. We then proposed the following useful application scenarios for the issue flow diagram presented in the experiment section. First, we can identify an issue that actively appears during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can be easily discovered from the issue flow diagram. This implies that our methodology can be used to discover the association between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis. Thus, we discovered that a pair of mutually similar categories induces two-way transitions. In contrast, one-way transitions can be recognized as an indicator that issues in a certain category tend to be influenced by other issues in another category. For practical application of the proposed methodology, high-quality word and stop word dictionaries need to be constructed. In addition, not only the number of documents but also additional meta-information such as the read counts, written time, and comments of documents should be analyzed. A rigorous performance evaluation or validation of the proposed methodology should be performed in future works.
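The core step above — linking issues in consecutive periods by similarity to build an issue flow diagram — can be sketched as follows. The keyword-weight vectors and the threshold are hypothetical illustrations, not the authors' actual model output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse keyword-weight vectors."""
    common = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical issues (keyword -> weight) for two consecutive periods
period1 = {"I1": {"nuclear": 0.7, "test": 0.5}, "I2": {"families": 0.8, "reunion": 0.4}}
period2 = {"J1": {"nuclear": 0.6, "sanctions": 0.6}, "J2": {"election": 0.9}}

# Draw an edge in the issue flow diagram when similarity exceeds a threshold
THRESHOLD = 0.3
edges = [
    (i, j)
    for i, vi in period1.items()
    for j, vj in period2.items()
    if cosine(vi, vj) > THRESHOLD
]
print(edges)  # issues linked across the two periods
```

Issues with no outgoing edge correspond to issues that disappear in the next period; issues with no incoming edge are newly created — the creation/merge/division patterns the abstract describes.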

An Efficient Log Data Management Architecture for Big Data Processing in Cloud Computing Environments (클라우드 환경에서의 효율적인 빅 데이터 처리를 위한 로그 데이터 수집 아키텍처)

  • Kim, Julie;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.2
    • /
    • pp.1-7
    • /
    • 2013
  • Big data management is becoming increasingly important in both industry and academia. One important category of big data generated by software systems is log data. Log data is generally used by service providers to improve their services and can also serve as information for qualification. This paper presents a big data management architecture specialized for log data. Specifically, it aggregates log messages sent from multiple clients and provides intelligent functionality such as log data analysis. The proposed architecture supports asynchronous processing in the client-server architecture to prevent a potential bottleneck in data access; accordingly, client performance is not affected even though a remote data store is used. We implemented the proposed architecture and show that it works well for processing big log data. All components are implemented with open source software, and the developed prototypes are publicly available.
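The asynchronous client-side principle described above — the caller never blocks on the remote store — can be sketched with a queue and a background worker thread. This is a minimal illustration of the general technique, not the paper's actual implementation:

```python
import queue
import threading

class AsyncLogClient:
    """Minimal sketch of asynchronous log shipping: the caller enqueues a
    message and returns immediately; a background thread forwards messages
    to the (remote) log store, so logging never blocks the client."""

    def __init__(self, send):
        self._q = queue.Queue()
        self._send = send  # function delivering one message to the store
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, message):
        self._q.put(message)  # non-blocking from the caller's perspective

    def _drain(self):
        while True:
            msg = self._q.get()
            if msg is None:  # sentinel: stop the worker
                break
            self._send(msg)

    def close(self):
        self._q.put(None)
        self._worker.join()

# Usage: a list stands in for the remote log store
store = []
client = AsyncLogClient(store.append)
for i in range(3):
    client.log(f"event {i}")
client.close()
print(store)
```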

A Study on the Effective Approaches to Big Data Planning (효과적인 빅데이터분석 기획 접근법에 대한 융합적 고찰)

  • Namn, Su Hyeon;Noh, Kyoo-Sung
    • Journal of Digital Convergence
    • /
    • v.13 no.1
    • /
    • pp.227-235
    • /
    • 2015
  • Big data analysis is a means of organizational problem solving. For effective problem solving, the approach should take into account factors such as the characteristics of the problem, the types and availability of data, data analytic capability, and technical capability. In this article we propose three approaches: logical top-down, data-driven bottom-up, and prototyping for overcoming ill-defined problem circumstances. In particular, we examine the relationship between creative problem solving and the bottom-up approach. Based on organizational data governance and data analytic capability, we also derive strategic issues concerning the sourcing of big data analysis.

Frequency and Social Network Analysis of the Bible Data using Big Data Analytics Tools R (빅데이터 분석도구 R을 이용한 성경 데이터의 빈도와 소셜 네트워크 분석)

  • Ban, ChaeHoon;Ha, JongSoo;Kim, Dong Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.2
    • /
    • pp.166-171
    • /
    • 2020
  • Big data processing technology, which can store and analyze data to obtain new knowledge, has grown in importance across many fields of society. Big data has emerged as an important topic in information and communication technology, and interest in the underlying technology continues to rise. R, a tool for analyzing big data, is a language and environment for statistically based information analysis. In this paper, we use it to analyze Bible data, focusing on the four Gospels of the New Testament. We collect the Bible data and filter it for analysis. R is used to investigate the frequency with which words occur and to analyze the Bible through social network analysis, in which the words of each sentence are paired and the relationships between words are examined for accurate data analysis.
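The filtering, word-frequency, and word-pairing steps described above (done in R in the paper) can be sketched in outline as follows. The verses and stop-word list are hypothetical English stand-ins, not the paper's actual Korean Bible corpus:

```python
from collections import Counter
from itertools import combinations

# Hypothetical Gospel verses standing in for the collected Bible text
verses = [
    "jesus spoke to the crowd",
    "the crowd followed jesus",
    "jesus taught in the temple",
]
stopwords = {"the", "to", "in"}

# Filtering, then word-frequency analysis
tokens = [[w for w in v.split() if w not in stopwords] for v in verses]
freq = Counter(w for verse in tokens for w in verse)

# Social network analysis: pair words appearing in the same verse
pairs = Counter()
for verse in tokens:
    for a, b in combinations(sorted(set(verse)), 2):
        pairs[(a, b)] += 1

print(freq.most_common(2))
print(pairs.most_common(1))
```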

Analysis of Performance of Creative Education based on Twitter Big Data Analysis (트위터 빅데이터 분석을 통한 창의적 교육의 성과요인 분석)

  • Joo, Kilhong
    • Journal of Creative Information Culture
    • /
    • v.5 no.3
    • /
    • pp.215-223
    • /
    • 2019
  • The wave of the information age is steadily accelerating, and fusion analysis solutions that can exploit accumulated knowledge data are increasing as various forms of big data, such as large-volume text, sound, and video, accumulate. Falling data storage costs and the development of social network services (SNS) have expanded data both quantitatively and qualitatively. This situation makes it possible to utilize data that previously went unused, and the potential value and influence of such data are increasing. Research is actively underway to apply these fusion analysis systems to improving the educational system and to present future-oriented education systems. In this research, we conducted a big data analysis of Twitter, performing natural language analysis and word-frequency analysis to quantitatively measure the problems and outcomes of creative education in Korea.

Big data text mining analysis to identify non-face-to-face education problems (비대면 교육 문제점 파악을 위한 빅데이터 텍스트 마이닝 분석)

  • Park, Sung Jae;Hwang, Ug-Sun
    • Korean Educational Research Journal
    • /
    • v.43 no.1
    • /
    • pp.1-27
    • /
    • 2022
  • As the COVID-19 virus became prevalent worldwide, non-face-to-face practices were implemented in various ways, and the education system drew much attention due to its rapid shift to non-face-to-face instruction. The purpose of this study is to analyze the direction of non-face-to-face education in line with the continuously changing educational environment. In this study, social network big data containing various opinions was collected and visualized using the Textom and Ucinet6 analysis tools. The results show that keywords related to "COVID-19" were dominant, alongside other high-frequency keywords such as "article" and "news." The analysis identified various issues related to non-face-to-face education, such as network failures and security problems. Based on the analysis, the direction of the non-face-to-face education system was studied in light of the growth of the education market and changes in the educational environment. The findings also indicate a need to strengthen security and feedback on teaching methods in non-face-to-face education.
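The kind of keyword network analysis performed with tools like Ucinet6 can be sketched in simplified form as degree centrality over a keyword co-occurrence network. The keywords and edges below are hypothetical, not the study's actual data:

```python
from collections import defaultdict

# Hypothetical co-occurrence edges between keywords from posts
edges = [
    ("COVID-19", "online class"),
    ("COVID-19", "network failure"),
    ("COVID-19", "security"),
    ("online class", "network failure"),
]

# Degree centrality: number of distinct neighbors, normalized by (n - 1)
neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

n = len(neighbors)
centrality = {k: len(v) / (n - 1) for k, v in neighbors.items()}
print(max(centrality, key=centrality.get))  # most central keyword
```

A dominant keyword such as "COVID-19" in the study corresponds to a node with maximal degree centrality in such a network.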

Development of big data-based water supply and demand analysis technique for digital new deal (디지털 뉴딜을 위한 빅데이터 기반 물수급 분석 기법 개발)

  • Kim, Jang-Gyeong;Moon, Soo-Jin;Nam, Woo-Sung;Kang, Shin-Uk;Kwon, Hyun-Han
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.76-76
    • /
    • 2021
  • The relative scarcity of drought information among water data stems from the difficulty of defining what constitutes a drought. In particular, in a country like Korea, where water is supplied through a network of water resource systems such as dams, reservoirs, and regional water supply systems, existing drought monitoring and forecasting that considers only individual elements is unrealistic and insufficient from a drought risk management perspective. Although a shortage of precipitation is the largest meteorological contributor to drought, the public actually perceives drought when less water is supplied than is needed. To address this, a rational water supply-demand analysis model is required that subdivides the water sources and supply facilities used by each region and accurately judges drought in the target area through analysis based on actual records. That is, the spatial analysis unit should be organized around intake and discharge facilities at or below the standard watershed level, and a water-cycle model linking these facilities' operational data with hydro-meteorological big data should be implemented, so that the available water resources of watersheds with diverse sources such as dams, reservoirs, and rivers can be assessed in near real time. In this study, we developed a big-data-based water supply-demand analysis model using a network algorithm that assigns the locations of supply and demand facilities along rivers as nodes and connects them. The locations of major monitoring points and all water-use facilities were organized through watershed analysis into the topology of geospatial information composed of points, lines, and shapes to determine the computation order of the water supply-demand analysis, and time-series databases were input to derive supply-demand results at each point. Verification of the 1:1 Nash coefficient at major water level-discharge stations in each region showed high reproduction performance of 0.8 or above at low flows. Accordingly, the developed model is expected to be used in various water resource policy evaluations, such as assessing the water supply capacity of areas with water-related issues and supporting the national long-term comprehensive water resources plan.
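The verification step above compares observed and simulated discharge with the Nash (Nash-Sutcliffe) coefficient. A minimal computation sketch, using hypothetical low-flow values rather than the study's data:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect match; the study reports values of 0.8 or above
    at low flows as high reproduction performance."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - sse / var

# Hypothetical observed vs. modeled low-flow discharge (m^3/s)
obs = [2.0, 3.0, 4.0, 5.0]
sim = [2.1, 2.9, 4.2, 4.8]
print(nash_sutcliffe(obs, sim))
```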


Design and Implementation of Bigdata Platform for Vessel Traffic Service (해상교통 관제 빅데이터 체계의 설계 및 구현)

  • Hye-Jin Kim;Jaeyong Oh
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.29 no.7
    • /
    • pp.887-892
    • /
    • 2023
  • Vessel traffic service (VTS) centers are equipped with RADAR, AIS (Automatic Identification System), weather sensors, and VHF (Very High Frequency) radio. VTS operators use this equipment to observe the movement of ships operating in the VTS area and to provide information. The VTS data generated by these various devices is highly valuable for analyzing maritime traffic situations. However, owing to a lack of compatibility between system manufacturers or to policy issues, the data are often not systematically managed. Therefore, we developed a VTS Bigdata Platform that can efficiently collect, store, and manage the control data gathered by a VTS, and this paper describes its design and implementation. A microservice architecture was applied to secure operational stability, one of the important issues in the development of the platform. In addition, platform performance was improved by dualizing the storage for real-time navigation information. The implemented system was tested with real maritime data to check its performance, identify further improvements, and assess its feasibility in a real VTS environment.