• Title/Summary/Keyword: Big Data Log Analysis System


Real time predictive analytic system design and implementation using Bigdata-log (빅데이터 로그를 이용한 실시간 예측분석시스템 설계 및 구현)

  • Lee, Sang-jun;Lee, Dong-hoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.6 / pp.1399-1410 / 2015
  • Gartner insists that companies must understand and prepare for the coming era of data competition, and is urging them to change their survival paradigms considerably. With successful business cases based on statistical predictive analytics being reported, leading enterprises are also converting from after-the-fact responses based on past data analysis to preemptive countermeasures based on predictive analysis. This trend is influencing security analysis and log analysis, and cases applying big data analysis frameworks to large-scale log analysis and to intelligent, long-term security analysis are being reported one after another. However, not all the functions and techniques required of a big data log analysis system can be accommodated in a Hadoop-based big data platform, so independent platform-based big data log analysis products are still being supplied to the market. This paper suggests a framework that equips such independent big data log analysis systems with real-time and non-real-time predictive analysis engines so that they can cope with cyber attacks preemptively.

For Improving Security Log Big Data Analysis Efficiency, A Firewall Log Data Standard Format Proposed (보안로그 빅데이터 분석 효율성 향상을 위한 방화벽 로그 데이터 표준 포맷 제안)

  • Bae, Chun-sock;Goh, Sung-cheol
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.1 / pp.157-167 / 2020
  • Big data and artificial intelligence technology, which provided the foundation for the recent 4th industrial revolution, have become a major driving force of business innovation across industries. In the field of information security, these techniques are being applied to large-scale log data, for which effective utilization methods were previously hard to find, in order to develop and improve intelligent security systems. The quality of security log big data, the basis of AI learning in information security, is an important input factor that determines the performance of an intelligent security system. However, the differences and complexity of log data across products mean that preprocessing big data of poor quality requires excessive time and effort. In this study, we research and analyze cases of log data collection from various firewalls, and by proposing a standard format for firewall log data collection, we hope to contribute to the development of intelligent security systems based on security log big data.
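The normalization step this abstract motivates can be sketched as follows. This is a minimal illustration, not the standard format proposed in the paper: the vendor names, log layouts, and unified field names (`time`, `src_ip`, `dst_ip`, `action`) are all assumptions made up for the example.

```python
import re

# Hypothetical vendor log layouts; the unified field names are illustrative,
# not the standard fields proposed in the paper.
VENDOR_PATTERNS = {
    "vendor_a": re.compile(
        r"(?P<time>\S+ \S+) src=(?P<src_ip>\S+) dst=(?P<dst_ip>\S+) "
        r"action=(?P<action>\w+)"
    ),
    "vendor_b": re.compile(
        r"(?P<time>\S+),(?P<action>\w+),(?P<src_ip>[^,]+),(?P<dst_ip>[^,]+)"
    ),
}

def normalize(vendor: str, line: str) -> dict:
    """Map one raw firewall log line onto a common record layout."""
    m = VENDOR_PATTERNS[vendor].match(line)
    if m is None:
        raise ValueError(f"unparsable {vendor} line: {line!r}")
    rec = m.groupdict()
    rec["vendor"] = vendor  # keep provenance for later auditing
    return rec

print(normalize("vendor_a", "2020-01-01 00:00:01 src=10.0.0.1 dst=8.8.8.8 action=deny"))
print(normalize("vendor_b", "2020-01-01T00:00:02,allow,10.0.0.2,1.1.1.1"))
```

Once every vendor's logs land in one record shape, the downstream preprocessing effort the abstract complains about shrinks to maintaining the per-vendor patterns.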

MapReduce-Based Partitioner Big Data Analysis Scheme for Processing Rate of Log Analysis (로그 분석 처리율 향상을 위한 맵리듀스 기반 분할 빅데이터 분석 기법)

  • Lee, Hyeopgeon;Kim, Young-Woon;Park, Jiyong;Lee, Jin-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.593-600 / 2018
  • Owing to the advancement of the Internet and smart devices, access to various media such as social media has become easy, and a large amount of big data is being produced. In particular, companies that provide various Internet services analyze big data with MapReduce-based techniques to investigate customer preferences and patterns and to strengthen security. However, when a MapReduce analysis is configured with a single reducer object in the reduce stage, the processing rate of big data analysis decreases. Therefore, this paper proposes a MapReduce-based partitioned big data analysis scheme to improve the log analysis processing rate. The proposed scheme separates the reducer partitioning stage from the analysis-result combining stage, and improves the big data processing rate by generating reducer objects dynamically, thereby reducing the bottleneck.
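The idea of hash-partitioning intermediate pairs across several reducers, with a separate combining stage, can be sketched in plain Python. This is a single-process illustration of the partitioning concept under assumed inputs (web-server log lines ending in a status code), not the paper's Hadoop implementation.

```python
from collections import defaultdict

def map_phase(log_lines):
    # Emit (key, 1) pairs; here the key is the HTTP status code.
    for line in log_lines:
        status = line.split()[-1]
        yield status, 1

def partition(pairs, num_reducers):
    """Hash-partition intermediate pairs across several reducer buckets
    instead of funnelling everything through a single reducer object."""
    buckets = [defaultdict(int) for _ in range(num_reducers)]
    for key, value in pairs:
        buckets[hash(key) % num_reducers][key] += value
    return buckets

def combine(buckets):
    """Separate combining stage: merge the per-reducer partial results."""
    result = defaultdict(int)
    for bucket in buckets:
        for key, value in bucket.items():
            result[key] += value
    return dict(result)

logs = ["GET /a 200", "GET /b 404", "GET /c 200", "GET /d 500"]
print(combine(partition(map_phase(logs), num_reducers=3)))
# counts per status code: 200 -> 2, 404 -> 1, 500 -> 1
```

In a real cluster each bucket would be a reducer task running in parallel, which is where the throughput gain over a single reducer comes from.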

A Study on implementation model for security log analysis system using Big Data platform (빅데이터 플랫폼을 이용한 보안로그 분석 시스템 구현 모델 연구)

  • Han, Ki-Hyoung;Jeong, Hyung-Jong;Lee, Doog-Sik;Chae, Myung-Hui;Yoon, Cheol-Hee;Noh, Kyoo-Sung
    • Journal of Digital Convergence / v.12 no.8 / pp.351-359 / 2014
  • Log data generated by security equipment have so far been analyzed comprehensively on an ESM (Enterprise Security Management) basis, but because of ESM's limitations in capacity and processing performance, it is not suited to big data processing, and an alternative technology based on a big data platform is necessary. A big data platform can collect, store, process, retrieve, analyze, and visualize large amounts of data using the Hadoop ecosystem. ESM technology is currently evolving into SIEM (Security Information & Event Management), and implementing security technology in the SIEM manner requires big data platform technology that can handle the large volume of log data generated by current security devices. In this paper, we study an implementation model for a security log analysis system that uses Hadoop ecosystem technologies on a big data platform.

Security Operation Implementation through Big Data Analysis by Using Open Source ELK Stack (오픈소스 ELK Stack 활용 정보보호 빅데이터 분석을 통한 보안관제 구현)

  • Hyun, Jeong-Hoon;Kim, Hyoung-Joong
    • Journal of Digital Contents Society / v.19 no.1 / pp.181-191 / 2018
  • With the development of IT, hacking crimes are becoming intelligent and refined. Big data analysis in information security derives problems such as abnormal behavior by collecting, storing, analyzing, and visualizing the whole log, including the normal logs generated by various information protection systems. By using the full log data, including data previously overlooked, we seek to detect and respond to the signs of a cyber attack from its early stages. We used the open-source ELK Stack to analyze big data such as the unstructured data generated by information protection systems, terminals, and servers. With this technology, an organization can build an information security control system optimized for its business environment using its own staff and technology, without relying on high-cost data analysis solutions, and can accumulate the capability to defend against cyber attacks by implementing the control system with its own manpower.
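In an ELK-based setup, "detecting abnormal signs" typically means aggregating events in Elasticsearch. As a hedged sketch, the function below builds an Elasticsearch query-DSL body that counts failed logins per time bucket; the field names (`@timestamp`, `event.outcome`) are assumptions about the index mapping, not fields described in the paper.

```python
def failed_login_histogram(window="now-1h", interval="1m"):
    """Build an Elasticsearch query body counting failed-login events per
    time bucket. Field names are assumed, not taken from the paper."""
    return {
        "size": 0,  # only the aggregation buckets are needed, not the hits
        "query": {
            "bool": {
                "filter": [
                    {"term": {"event.outcome": "failure"}},
                    {"range": {"@timestamp": {"gte": window}}},
                ]
            }
        },
        "aggs": {
            "per_interval": {
                "date_histogram": {"field": "@timestamp", "fixed_interval": interval}
            }
        },
    }

# The resulting dict would be sent to an index's _search endpoint;
# a sudden spike in one bucket is an abnormal sign worth investigating.
print(failed_login_histogram())
```

Kibana dashboards in the ELK Stack are essentially a visual front end over aggregations like this one.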

An Efficient Design and Implementation of an MdbULPS in a Cloud-Computing Environment

  • Kim, Myoungjin;Cui, Yun;Lee, Hanku
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3182-3202 / 2015
  • Flexibly expanding the storage capacity required to process a large amount of rapidly increasing unstructured log data is difficult in a conventional computing environment. In addition, implementing a log processing system that categorizes and analyzes unstructured log data is extremely difficult. To overcome these limitations, we propose and design a MongoDB-based unstructured log processing system (MdbULPS) for collecting, categorizing, and analyzing log data generated by banks. The proposed system includes a Hadoop-based analysis module for reliable parallel-distributed processing of massive log data. Furthermore, because the Hadoop distributed file system (HDFS) stores data by generating replicas of collected log data in block units, the proposed system offers automatic recovery from system failures and data loss. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. To evaluate the proposed system, we conducted three performance tests on a local test bed of twelve nodes: comparing our system with a MySQL-based approach, comparing it with an HBase-based approach, and varying the chunk-size option. The experiments showed that our system performs better at processing unstructured log data.

Diagnosis Analysis of Patient Process Log Data (환자의 프로세스 로그 정보를 이용한 진단 분석)

  • Bae, Joonsoo
    • Journal of Korean Society of Industrial and Systems Engineering / v.42 no.4 / pp.126-134 / 2019
  • Nowadays, big data are available everywhere, and various analysis methods such as data mining can extract from them information useful for improving design and operation. In particular, if we have event log data recording an organization's execution history, such as case_id, event_time, event (activity), and performer, we can apply process mining to discover the organization's main process model and then use it to improve the current working environment. In this paper, we develop a process-mining-based method for finding the final diagnosis of patients who need several procedures (medical tests and examinations) before their disease can be identified. Some patients can be diagnosed with only one procedure, but others are very difficult to diagnose and require several procedures before the exact disease name is found. We used 2 million procedure log records, in which 397 thousand patients took two or more procedures before a final diagnosis. Such multi-procedure patients are not the frequent case, but handling them well is critical to preventing misdiagnosis. From these multi-procedure patients, a main process model consisting of 4 procedures was discovered in the hospital. Using this main process model, we can understand the sequence of procedures in the hospital and, furthermore, the relationship between diagnoses and the corresponding procedures.
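The discovery step described here, grouping events by case, ordering them in time, and finding the dominant procedure sequence, can be sketched in a few lines. The toy event log below is invented for illustration; the paper's log holds 2 million real hospital procedure records.

```python
from collections import Counter

# Toy event log: (case_id, timestamp, activity). These rows are made up.
event_log = [
    (1, "09:00", "blood test"), (1, "09:30", "X-ray"), (1, "10:00", "CT"),
    (2, "08:00", "blood test"), (2, "08:40", "X-ray"), (2, "09:10", "CT"),
    (3, "11:00", "blood test"), (3, "11:20", "MRI"),
]

def discover_main_process(log):
    """Group events by case, order them by time, and return the most
    frequent procedure sequence (a simple variant-based discovery step)."""
    traces = {}
    for case_id, ts, activity in sorted(log, key=lambda e: (e[0], e[1])):
        traces.setdefault(case_id, []).append(activity)
    variants = Counter(tuple(t) for t in traces.values())
    return variants.most_common(1)[0]

print(discover_main_process(event_log))
# (('blood test', 'X-ray', 'CT'), 2)
```

Full process-mining algorithms (e.g. the alpha miner or heuristic miner) build a process graph rather than just the dominant variant, but counting trace variants is the usual first look at an event log.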

A Study on the Calculation and Provision of Accruals-Quality by Big Data Real-Time Predictive Analysis Program

  • Shin, YeounOuk
    • International Journal of Advanced Smart Convergence / v.8 no.3 / pp.193-200 / 2019
  • Accruals Quality (AQ) is an important proxy for evaluating the quality of accounting information disclosure. High-quality accounting information provides high predictability and precision in earnings disclosure and increases the stock price response. High AQ also mitigates heterogeneity in the interpretation of accounting information, providing information usefulness in capital markets. The purpose of this study is to suggest how AQ, which represents the quality of accounting information disclosure, can be converted into digitized data through IT and provided to financial analysts' information environment in real time, within a framework for predictive analysis based on a big data log analysis system. This real-time AQ information will help financial analysts increase their activity and reduce information asymmetry. In addition, AQ provided in real time through IT can serve as an important basis for decision making by users of capital market information and is expected to give companies incentives to voluntarily improve the quality of accounting information disclosure.

Anomaly Detection Technique of Log Data Using Hadoop Ecosystem (하둡 에코시스템을 활용한 로그 데이터의 이상 탐지 기법)

  • Son, Siwoon;Gil, Myeong-Seon;Moon, Yang-Sae
    • KIISE Transactions on Computing Practices / v.23 no.2 / pp.128-133 / 2017
  • In recent years, the number of systems for analyzing large volumes of data has been increasing. Hadoop, a representative big data system, stores and processes large data in a distributed environment of multiple servers, where system-resource management is very important. The authors attempted to detect anomalies in rapid changes in the log data collected from multiple servers, using simple but efficient anomaly-detection techniques. Accordingly, an Apache Hive storage architecture was designed to store the log data collected from the multiple servers in the Hadoop ecosystem, and three anomaly-detection techniques were designed based on the moving-average and 3-sigma concepts. It was confirmed that all three techniques detected the abnormal intervals correctly, and that the weighted anomaly-detection technique is more precise than the basic ones. These results show that simple techniques can detect log-data anomalies effectively in the Hadoop ecosystem.
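The moving-average / 3-sigma idea can be sketched as follows. This is a minimal single-machine illustration of the concept, not the paper's Hive-based design, and the window size, threshold, and sample data are assumptions.

```python
from statistics import mean, stdev

def detect_anomalies(series, window=5, k=3.0):
    """Flag points lying more than k sigma away from the mean of the
    preceding window of observations (moving-average / 3-sigma rule)."""
    anomalies = []
    for i in range(window, len(series)):
        w = series[i - window:i]
        mu, sigma = mean(w), stdev(w)
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            anomalies.append(i)
    return anomalies

# Steady per-minute log volume with one sudden spike at index 8.
volumes = [100, 102, 99, 101, 100, 98, 101, 100, 500, 101]
print(detect_anomalies(volumes))  # [8]
```

A weighted variant, like the paper's more precise technique, would give recent observations in the window more influence on the mean, e.g. via exponentially decaying weights.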

An Optimization Technique for Smart-Walk Systems Using Big Stream Log Data (Smart-Walk 시스템에서 스트림 빅데이터 분석을 통한 최적화 기법)

  • Cho, Wan-Sup;Yang, Kyung-Eun;Lee, Joong-Yeub
    • Journal of Korea Society of Industrial Information Systems / v.17 no.3 / pp.105-114 / 2012
  • Various RFID-based smart-walk systems have been developed to guide disabled people; such a system sends an appropriate message whenever a disabled person arrives at a specific point. We propose a universal design concept and optimization techniques for smart-walk systems. The universal design concept allows a single system to support various kinds of users, such as a blind person, a hearing-impaired person, or a foreigner, by storing appropriate message sets in a message database table according to the kind of user. System optimization is performed by analyzing the operational log (stream) data accumulated in the system; useful information can be extracted by analyzing or mining these accumulated logs. We show various analysis results obtained from the operational log data.
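The universal-design lookup described above amounts to keying guidance messages by location and user type. The sketch below is a toy illustration; the locations, user-type labels, and messages are invented, and a real system would store the table in a database rather than in memory.

```python
# Hypothetical message table keyed by (RFID tag location, user type):
# one system, different messages per kind of user.
MESSAGES = {
    ("entrance", "blind"): "Audio: you are at the main entrance.",
    ("entrance", "hearing_impaired"): "Screen: main entrance ahead.",
    ("entrance", "foreigner"): "Welcome. This is the main entrance.",
}

def on_rfid_read(location: str, user_type: str) -> str:
    """Return the guidance message for the tag location and user type."""
    return MESSAGES.get((location, user_type), "No guidance available.")

print(on_rfid_read("entrance", "blind"))
```

Logging each `on_rfid_read` call with a timestamp would produce exactly the operational stream data the paper mines for optimization.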