• Title/Summary/Keyword: 하둡 환경 (Hadoop environment)


A Study on the Massive Data Security System of the Hadoop Based (Hadoop 기반의 대용량 데이터 보안 시스템에 관한 연구)

  • Kim, Hyo-Nam
    • Proceedings of the Korean Society of Computer Information Conference / 2016.01a / pp.305-306 / 2016
  • Living in today's smart era, we find ourselves in a big data environment that is highly complex and interconnected like a spider's web. In such an environment, managing and exploiting massive data efficiently is a goal pursued by individuals and enterprises alike. In the big data era, security techniques based on conventional data analysis incur considerable waste of time and resources when applied to the massive data collected and stored from diverse devices. To improve on this, this paper proposes an effective security system based on processing and analyzing massive data with Hadoop.


Integrated Verification of Hadoop Cluster Prototypes and Analysis Software for SMB (중소기업을 위한 하둡 클러스터의 프로토타입과 분석 소프트웨어의 통합된 검증)

  • Cha, Byung-Rae;Kim, Nam-Ho;Lee, Seong-Ho;Ji, Yoo-Kang;Kim, Jong-Won
    • Journal of Advanced Navigation Technology / v.18 no.2 / pp.191-199 / 2014
  • Recently, research aimed at making cloud computing and the big data paradigm, whose adoption is booming across the IT field, accessible to small and medium businesses (SMB) has been increasing. As one such effort, in this paper we design and implement prototypes that tentatively build a Hadoop cluster on private cloud infrastructure. Prototypes are implemented on several hardware types, including single-board computers, PCs, and servers, and their performance is measured. We also present integrated verification results for the data-analysis performance of the analysis software system running on top of the realized prototypes, using the ASA (American Standard Association) dataset. To this end, we implement the analysis software system with several open-source tools, including R, Python, D3, and Java, and test it.

Anomaly Detection of Hadoop Log Data Using Moving Average and 3-Sigma (이동 평균과 3-시그마를 이용한 하둡 로그 데이터의 이상 탐지)

  • Son, Siwoon;Gil, Myeong-Seon;Moon, Yang-Sae;Won, Hee-Sun
    • KIPS Transactions on Software and Data Engineering / v.5 no.6 / pp.283-288 / 2016
  • In recent years, there have been many research efforts on big data, and many companies have developed a variety of related products. Accordingly, we are now able to store and analyze large volumes of log data that were difficult to handle in traditional computing environments. To handle the large volume of log data that rapidly accumulates on multiple servers, in this paper we design a new data storage architecture for efficiently analyzing such big log data through Apache Hive. We then design and implement anomaly detection methods that identify abnormal server status from the log data, based on moving-average and 3-sigma techniques (sketched below). We show the effectiveness of the proposed methods by demonstrating that they identify anomalies correctly, indicating that our approach is well suited to detecting anomalies in Hadoop log data.
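
A minimal sketch of the moving-average/3-sigma rule this abstract describes, run on an in-memory list of per-interval log counts; the window size and the sample data are illustrative assumptions, not values from the paper:

    # Flag values that deviate from the moving average of the preceding
    # window by more than three standard deviations (the 3-sigma rule).
    from statistics import mean, stdev

    def detect_anomalies(values, window=10):
        anomalies = []
        for i in range(window, len(values)):
            hist = values[i - window:i]
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and abs(values[i] - mu) > 3 * sigma:
                anomalies.append(i)
        return anomalies

    # A sudden spike in otherwise steady log counts is flagged.
    counts = [100, 102, 99, 101, 98, 103, 100, 97, 102, 99, 250]
    print(detect_anomalies(counts))  # -> [10]

In the paper's setting, the history would come from the log tables stored through Apache Hive, and the window size trades sensitivity against false alarms.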

A Study on The Grid File Construction Method based on MapReduce for Multidimensional Data Processing (다차원 데이터 처리를 위한 맵리듀스 기반의 그리드 파일 생성기법에 관한 연구)

  • Jung, Joo-Hyuk;Lee, Sang-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.77-80 / 2014
  • Recently, the volume of data to be processed has kept growing with the spread of computers and the Internet, the proliferation of smart devices including smartphones, the expansion of social networking, and the growth of location-based services. Processing massive data has therefore emerged as a major issue, and Hadoop, a framework supporting large-scale distributed computing for massive data processing, was developed and is now used by many companies. However, research on processing multidimensional data, such as image, medical, and sensor data, remains scarce. Various multidimensional indexes have been proposed, but building them for massive multidimensional data on a single machine is inefficient. This paper proposes a technique for constructing the grid file, a multidimensional index, on MapReduce, Hadoop's distributed parallel processing model, together with a query processing method that uses the constructed grid file with MapReduce. By parallelizing grid file construction, previously done on a single machine, construction time is expected to be reduced, and query time is likewise expected to shorten by processing queries in parallel with MapReduce.
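
A toy, single-process simulation of the map and reduce roles behind such a grid file: the mapper keys each point by the grid cell containing it, and the reducer gathers each cell's bucket so range queries can scan only the overlapping cells. The cell width and sample points are made-up values, not the paper's parameters:

    from collections import defaultdict

    CELL = 10.0  # illustrative cell width

    def mapper(point):
        # Key each 2-D point by the grid cell that contains it.
        x, y = point
        return (int(x // CELL), int(y // CELL)), point

    def reducer(pairs):
        cells = defaultdict(list)
        for cell, point in pairs:
            cells[cell].append(point)  # one grid-file bucket per cell
        return cells

    points = [(3.2, 7.9), (14.5, 2.1), (12.0, 8.8), (3.9, 6.0)]
    grid = reducer(map(mapper, points))
    print(grid[(0, 0)])  # -> [(3.2, 7.9), (3.9, 6.0)]

In the paper's setting, the same cell-keying would run as a MapReduce job over distributed input splits rather than in one process.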

Big Data Preprocessing for Predicting Box Office Success (영화 흥행 실적 예측을 위한 빅데이터 전처리)

  • Jun, Hee-Gook;Hyun, Geun-Soo;Lim, Kyung-Bin;Lee, Woo-Hyun;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices / v.20 no.12 / pp.615-622 / 2014
  • The Korean film market has rapidly achieved an international scale, and this has led to a need for decision-making based on analytical methods that are more precise and appropriate. In this modern era, a highly advanced information environment provides an overwhelming amount of data generated in real time, and this data must be properly handled and analyzed in order to extract useful information. In particular, the preprocessing of large data, the most time-consuming step, should be done in a reasonable amount of time. In this paper, we investigate a big data preprocessing method for predicting movie box office success. We analyze the characteristics of the movie data to design specialized preprocessing methods, implemented on the Hadoop MapReduce framework. The experimental results show that preprocessing based on big data techniques is more effective than existing methods.
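
One plausible preprocessing step of this kind, written as a Hadoop Streaming-style mapper/reducer pair in Python: malformed records are dropped and daily ticket sales are aggregated per movie. The tab-separated input layout (title, date, sales) is a hypothetical example, not the paper's actual schema:

    import sys
    from itertools import groupby

    def mapper(stdin):
        for line in stdin:
            fields = line.rstrip("\n").split("\t")
            if len(fields) != 3 or not fields[2].isdigit():
                continue                      # drop malformed records
            title, _date, sales = fields
            print(f"{title.strip().lower()}\t{sales}")

    def reducer(stdin):
        # Hadoop delivers mapper output to the reducer sorted by key.
        rows = (line.rstrip("\n").split("\t") for line in stdin)
        for title, group in groupby(rows, key=lambda r: r[0]):
            print(f"{title}\t{sum(int(s) for _, s in group)}")

    if __name__ == "__main__":
        (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)

The point of the sketch is that cleaning and aggregation parallelize naturally across input splits, which is what makes MapReduce-based preprocessing faster than a single-machine pass.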

RDFS Rule based Parallel Reasoning Scheme for Large-Scale Streaming Sensor Data (대용량 스트리밍 센서데이터 환경에서 RDFS 규칙기반 병렬추론 기법)

  • Kwon, SoonHyun;Park, Youngtack
    • Journal of KIISE / v.41 no.9 / pp.686-698 / 2014
  • Recently, large-scale streaming sensor data has emerged due to the explosive spread of smartphones, the diffusion of IoT and cloud computing technology, and the generalization of IoT devices. Research on combining such data with semantic web technology is also being actively pushed forward, driven by growing requirements for creating new value from data through sharing and mash-up in large-scale environments. However, the scale and streaming nature of the data raise serious issues for the inference needed to create new knowledge. For this reason, we propose an RDFS rule-based parallel reasoning scheme for serving large-scale streaming sensor data with semantic web technology. The proposed scheme runs each job of the Rete network, an existing rule-inference algorithm, in parallel, sharing data through HBase, a Hadoop database, used as common storage. We implement the system and evaluate its performance using the AWS data of the weather center as large-scale streaming sensor data.
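
For concreteness, a tiny in-memory forward-chaining pass for two standard RDFS entailment rules (rdfs9: type propagation through subClassOf, and rdfs11: subClassOf transitivity). The paper distributes this kind of work across parallel Rete-network jobs with HBase as shared storage, which this sketch does not attempt:

    def rdfs_closure(triples):
        triples = set(triples)
        changed = True
        while changed:
            changed = False
            sub = {(s, o) for s, p, o in triples if p == "rdfs:subClassOf"}
            new = set()
            for s, p, o in triples:
                if p == "rdf:type":
                    # rdfs9: x type C1, C1 subClassOf C2  =>  x type C2
                    new |= {(s, "rdf:type", c2) for c1, c2 in sub if c1 == o}
            # rdfs11: subClassOf is transitive
            new |= {(a, "rdfs:subClassOf", d)
                    for a, b in sub for c, d in sub if b == c}
            if not new <= triples:
                triples |= new
                changed = True
        return triples

    facts = {("sensor1", "rdf:type", "ex:TempSensor"),
             ("ex:TempSensor", "rdfs:subClassOf", "ex:Sensor")}
    print(("sensor1", "rdf:type", "ex:Sensor") in rdfs_closure(facts))  # True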

Adaptive Cache Management Scheme in HDFS (HDFS에서 적응형 캐시 관리 기법)

  • Choi, Hyoung-Rak;Yoo, Jae-Soo
    • Proceedings of the Korea Contents Association Conference / 2019.05a / pp.461-462 / 2019
  • Smart factories collect, analyze, and control all process data using information and communication technology (ICT). To process far larger volumes of data than before, companies turn to Hadoop. We propose an adaptive cache management scheme for operating HDFS efficiently in environments where data of widely varying sizes appears. The proposed scheme improves the space utilization of each data node's local disk and analyzes the average data size so that an appropriate block size can be applied when data nodes are added. Performance evaluation shows that the scheme improves local disk efficiency on the data nodes as well as read and write speeds.

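An illustrative helper in the spirit of the scheme above, picking an HDFS block size from the observed average data size; the candidate sizes and the selection rule are assumptions, not the paper's algorithm:

    def pick_block_size(file_sizes_mib):
        candidates = [64, 128, 256]  # typical HDFS block sizes, in MiB
        avg = sum(file_sizes_mib) / len(file_sizes_mib)
        # Smallest block size that keeps the average file in one block.
        for c in candidates:
            if avg <= c:
                return c
        return candidates[-1]

    print(pick_block_size([40, 90, 150]))  # avg ~93 MiB -> 128 MiB blocks

In an actual cluster the chosen value would be applied through the dfs.blocksize setting when the new data nodes are commissioned.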

A Study on Procurement Audit Integration Real Time Monitoring System Using Process Mining Under Big Data Environment (빅 데이터 환경하에서 프로세스 마이닝을 이용한 구매 감사 통합 실시간 모니터링 시스템에 대한 연구)

  • Yoo, Young-Seok;Park, Han-Gyu;Back, Seung-Hoon;Hong, Sung-Chan
    • Journal of Internet Computing and Services / v.18 no.3 / pp.71-83 / 2017
  • In recent years, research activities exploiting the greatest strengths of process mining have progressed actively toward its use in the auditing work of business organizations. On the other hand, there is insufficient research on systematically and efficiently analyzing, with process mining, the massive data generated in big data environments, and on proactive monitoring of risk management, one of the important management activities of corporate organizations, from the audit side. In this study, we realize a Hadoop-based integrated real-time monitoring system for internal audit in order to detect abnormal symptoms and prevent accidents in advance. Through this integrated real-time monitoring system for purchasing audit, we aim to strengthen delivery management of ordered purchasing materials, reduce purchasing costs, manage competing suppliers, prevent fraud, comply with regulations, and adhere to the internal accounting control system. As a result, enhanced integrated real-time purchase-audit monitoring, with data analyzed efficiently using process mining on a Hadoop-based system, can provide information on which action can be taken immediately. From an integrated viewpoint, business status can be managed, and by processing a large volume of work faster than continuous monitoring allows, the quality of the purchase audit improves and the purchasing process is innovated.

Analysis of big data using Rhipe (Rhipe를 활용한 빅데이터 처리 및 분석)

  • Ko, Youngjun;Kim, Jinseog
    • Journal of the Korean Data and Information Science Society / v.24 no.5 / pp.975-987 / 2013
  • The Hadoop system was developed by the Apache foundation based on Google's GFS and MapReduce technologies. Many modern systems for managing and processing big data have been developed on top of Hadoop because it was designed for scalability and distributed computing. The R software is considered a well-suited analytic tool in Hadoop-based systems because R interoperates flexibly with other languages and has many libraries for complex analyses. We introduce Rhipe, an R package that makes MapReduce programming easy under the Hadoop system, and implement a MapReduce program using Rhipe, specifically for multiple regression. In addition, we compare the computing speed of our program with other packages for processing large data (ff and bigmemory). The simulation results show that our program becomes faster than ff and bigmemory as the size of the data increases.
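
The paper's program is written in R with Rhipe, but the underlying MapReduce decomposition of multiple regression is language-agnostic: each mapper accumulates the sufficient statistics X'X and X'y over its input split, and a reducer sums the partial results and solves the normal equations. A NumPy sketch under those assumptions:

    import numpy as np

    def map_block(X, y):
        return X.T @ X, X.T @ y           # partial sufficient statistics

    def reduce_blocks(parts):
        xtx = sum(p[0] for p in parts)
        xty = sum(p[1] for p in parts)
        return np.linalg.solve(xtx, xty)  # beta-hat from normal equations

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 2))])
    y = X @ np.array([1.0, 2.0, -3.0]) + rng.normal(size=1000)
    blocks = np.array_split(np.arange(1000), 4)  # 4 simulated input splits
    print(reduce_blocks([map_block(X[b], y[b]) for b in blocks]))  # ~[1, 2, -3]

Because only the small matrices X'X and X'y cross the network, this pattern scales to data far larger than any single node's memory.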

Big Data Management Scheme using Property Information based on Cluster Group in adopt to Hadoop Environment (하둡 환경에 적합한 클러스터 그룹 기반 속성 정보를 이용한 빅 데이터 관리 기법)

  • Han, Kun-Hee;Jeong, Yoon-Su
    • Journal of Digital Convergence / v.13 no.9 / pp.235-242 / 2015
  • Social network technology has been driving growing interest in big data services and their development. However, data stored on distributed servers, rather than on a central server, is not easy to find and extract. In this paper, we propose a big data management technique that minimizes the time needed to retrieve desired information from the content servers and management servers that provide big data services. The proposed method classifies data into groups according to the type, features, and characteristics of the big data, links the data within each group, and applies the attribute information to a hash chain. Furthermore, multi-attribute information is attached to the data, and the creation time of data stored on the distributed servers is recorded in the index information, improving the processing speed of data classification. Experimental results show that the average data seek time improved by an average of 14.6% across varying numbers of cluster groups, and data processing time was reduced by an average of 13% across varying numbers of keywords.
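
A minimal sketch of attribute information linked in a hash chain, as the abstract describes: each record's digest folds in the previous digest, so a group's membership and ordering can be verified cheaply. The field names and records are illustrative assumptions:

    import hashlib

    def chain(records):
        prev, out = b"", []
        for rec in records:
            attrs = f"{rec['group']}|{rec['type']}|{rec['created']}".encode()
            digest = hashlib.sha256(prev + attrs).hexdigest()
            out.append((rec, digest))
            prev = digest.encode()
        return out

    group = [{"group": "sensor", "type": "temp", "created": "2015-01-01"},
             {"group": "sensor", "type": "humid", "created": "2015-01-02"}]
    for rec, h in chain(group):
        print(rec["type"], h[:12])  # each digest depends on all predecessors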