• Title/Summary/Keyword: Hadoop MapReduce Framework (하둡 맵리듀스 프레임워크)


Distributed Support Vector Machines for Localization on a Sensor Network (센서 네트워크에서 위치 측정을 위한 분산 지지 벡터 머신)

  • Moon, Sangook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.944-946 / 2014
  • Localization of sensor network nodes using machine learning has recently been studied. The support vector machine (SVM) algorithm is easy to implement in a high-level language that supports parallelism. In this paper, we implemented a support vector machine in Python and built a sensor network cluster from five Raspberry Pis. We also set up a Hadoop software framework to employ the MapReduce mechanism, and modified the existing SVM algorithm to fit the distributed Hadoop architecture for sensor node localization. In our experiments, we ran the test sensor network with a variety of parameters and evaluated it in terms of proficiency, resource usage, and processing time.

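
The paper's Python implementation is not included in the abstract. As a rough illustration of the general idea, here is a minimal Java sketch of evaluating a pre-trained linear SVM decision function inside a Hadoop mapper; the class name, input layout, and model parameters are hypothetical, not the authors' code.

    // Hypothetical sketch: scoring RSSI feature vectors with a pre-trained
    // linear SVM inside a Hadoop mapper. Illustrative only; the paper's
    // actual implementation was written in Python.
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class SvmLocalizationMapper
            extends Mapper<LongWritable, Text, Text, Text> {

        // Assumed model parameters, e.g. loaded from a side file in setup().
        private final double[] weights = {0.42, -1.3, 0.77};
        private final double bias = 0.15;

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumed input line: nodeId,rssi1,rssi2,rssi3
            String[] fields = value.toString().split(",");
            double score = bias;
            for (int i = 0; i < weights.length; i++) {
                score += weights[i] * Double.parseDouble(fields[i + 1]);
            }
            // The sign of the decision function gives the predicted region.
            String region = score >= 0 ? "region-A" : "region-B";
            context.write(new Text(fields[0]), new Text(region));
        }
    }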

A Study on Large-scale Data Analysis based on Hadoop for Astroinformatics (하둡 기반 천문 응용 분야 대규모 데이터 분석 기법 연구)

  • Kwak, Jae-Hyuck;Yoon, Jun-Weon;Jung, Yong-Hwan;Hahm, Jae-Gyoon;Park, Dong-In
    • Proceedings of the Korean Information Science Society Conference / 2011.06b / pp.13-16 / 2011
  • Cloud computing has drawn attention as the need to process the large-scale data generated in scientific applications quickly and efficiently has grown. Hadoop, an Apache open-source project that provides a software framework for large-scale data processing and analysis, is widely used as a representative cloud computing technology. In particular, Hadoop offers high scalability and performance along with strong fault detection and automatic recovery, so it is gradually being adopted in science and engineering fields. In this paper, we study a method for analyzing the large-scale data generated in an astronomy application using Hadoop. The astronomy data of interest consists of roughly ten million small observation files; Hadoop, however, is specialized for large-scale data processing and is not well suited to handling a large number of small files. We therefore compressed the application's input/output files into a specialized data structure provided by Hadoop, and wrapped the astronomy application's executable code as a MapReduce job so that it can run on Hadoop.
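
The "specialized data structure" in the abstract is consistent with Hadoop's SequenceFile, a standard container for packing many small files into one splittable file. A minimal packing sketch, assuming a file-name key and raw-bytes value (the paper's actual key/value layout is not published):

    // Hypothetical sketch: packing a directory of small observation files
    // into one SequenceFile (file name as key, raw bytes as value).
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SmallFilePacker {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path inputDir = new Path(args[0]); // directory of small files
            Path packed = new Path(args[1]);   // single output SequenceFile

            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(packed),
                    SequenceFile.Writer.keyClass(Text.class),
                    SequenceFile.Writer.valueClass(BytesWritable.class))) {
                for (FileStatus status : fs.listStatus(inputDir)) {
                    byte[] bytes = new byte[(int) status.getLen()];
                    try (FSDataInputStream in = fs.open(status.getPath())) {
                        in.readFully(bytes); // small file, safe to slurp
                    }
                    // Key: original file name; value: raw file contents.
                    writer.append(new Text(status.getPath().getName()),
                                  new BytesWritable(bytes));
                }
            }
        }
    }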

Design and Implementation of HDFS data encryption scheme using ARIA algorithms on Hadoop (하둡 상에서 ARIA 알고리즘을 이용한 HDFS 데이터 암호화 기법의 설계 및 구현)

  • Song, Youngho;Shin, YoungSung;Yoon, Min;Jang, Miyoung;Chang, Jae-Woo
    • Annual Conference of KIPS / 2015.10a / pp.613-616 / 2015
  • Big data has recently emerged with the spread of smartphones and the growth of social-media services. Hadoop is a representative platform for analyzing big data efficiently. It supports excellent cluster-based scalability, fault recovery, and the MapReduce framework, in which users can define their own processing functions. To protect sensitive information such as personal and location data, Hadoop provides user authentication through Kerberos and supports AES-based data encryption via the HDFS compression codec mechanism. However, Korean institutions and companies that use Hadoop-based software cannot apply the Korean ARIA data encryption standard on it. To address this, this paper proposes an HDFS data encryption scheme that supports ARIA encryption on Hadoop.
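
The paper's codec-level design is not published. As a simplified illustration of ARIA encryption over an HDFS stream, here is a sketch assuming Bouncy Castle's JCE provider, which includes ARIA support in recent versions; key management and codec integration are deliberately omitted.

    // Hypothetical sketch: writing ARIA-encrypted data to an HDFS file via
    // Bouncy Castle. Not the paper's codec-integrated design.
    import java.nio.charset.StandardCharsets;
    import java.security.Security;
    import javax.crypto.Cipher;
    import javax.crypto.CipherOutputStream;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

    public class AriaHdfsWriter {
        public static void main(String[] args) throws Exception {
            Security.addProvider(new BouncyCastleProvider());

            // 128-bit key and IV; in practice these come from a key store.
            byte[] key = new byte[16];
            byte[] iv = new byte[16];
            Cipher cipher = Cipher.getInstance("ARIA/CTR/NoPadding", "BC");
            cipher.init(Cipher.ENCRYPT_MODE,
                    new SecretKeySpec(key, "ARIA"), new IvParameterSpec(iv));

            FileSystem fs = FileSystem.get(new Configuration());
            try (CipherOutputStream out = new CipherOutputStream(
                    fs.create(new Path(args[0])), cipher)) {
                out.write("sensitive record".getBytes(StandardCharsets.UTF_8));
            }
        }
    }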

Dynamic Load Management Method for Spatial Data Stream Processing on MapReduce Online Frameworks (맵리듀스 온라인 프레임워크에서 공간 데이터 스트림 처리를 위한 동적 부하 관리 기법)

  • Jeong, Weonil
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.8 / pp.535-544 / 2018
  • As mobile devices equipped with various sensors and high-quality wireless communication functions spread, the amount of spatio-temporal data generated by them in various service fields is increasing rapidly. Conventional research on processing large volumes of real-time spatio-temporal streams has found it very difficult to apply a Hadoop-based spatial big data system, designed as a batch processing platform, to real-time services over spatio-temporal data streams. This paper extends the MapReduce Online framework to support real-time query processing over continuously arriving spatio-temporal data streams, and proposes a load management method that distributes the overload for efficient query processing. The proposed scheme dynamically balances the load across nodes based on the inflow rate and load factor of the input data over a spatial partitioning. Experiments show that efficient query processing can be supported by distributing the spatial data stream of the affected area across shared resources whenever load management is required in a specific region.
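
The dynamic load-management idea can be caricatured as follows: monitor per-node load, and when the node owning a spatial partition crosses a load threshold, reassign that partition to the least-loaded node. A toy Java sketch of that decision logic; all names and thresholds are invented, not the paper's design.

    // Hypothetical sketch of threshold-based partition migration.
    import java.util.HashMap;
    import java.util.Map;

    public class SpatialLoadManager {
        // Current load factor (0.0-1.0) per worker node.
        private final Map<String, Double> nodeLoad = new HashMap<>();
        // Which node currently handles each spatial partition.
        private final Map<String, String> partitionOwner = new HashMap<>();
        private static final double LOAD_THRESHOLD = 0.8;

        public void assign(String partition, String node) {
            partitionOwner.put(partition, node);
        }

        public void updateNodeLoad(String node, double loadFactor) {
            nodeLoad.put(node, loadFactor);
        }

        /** Reassigns an overloaded partition; returns the new owner or null. */
        public String rebalance(String partition) {
            String owner = partitionOwner.get(partition);
            if (owner == null
                    || nodeLoad.getOrDefault(owner, 0.0) < LOAD_THRESHOLD) {
                return null; // current owner can still absorb the inflow
            }
            // Migrate the hot partition to the least-loaded node.
            String target = nodeLoad.entrySet().stream()
                    .min(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElse(owner);
            partitionOwner.put(partition, target);
            return target;
        }
    }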

Big Data Preprocessing for Predicting Box Office Success (영화 흥행 실적 예측을 위한 빅데이터 전처리)

  • Jun, Hee-Gook;Hyun, Geun-Soo;Lim, Kyung-Bin;Lee, Woo-Hyun;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices / v.20 no.12 / pp.615-622 / 2014
  • The Korean film market has rapidly grown to an international scale, which has created a need for decision-making based on more precise and appropriate analytical methods. Today's highly advanced information environment produces an overwhelming amount of data in real time, and this data must be properly handled and analyzed in order to extract useful information. In particular, the preprocessing of large data sets, which is the most time-consuming step, should be done in a reasonable amount of time. In this paper, we investigate a big data preprocessing method for predicting movie box office success. We analyzed the characteristics of the movie data to design specialized preprocessing methods, and used the Hadoop MapReduce framework to implement them. The experimental results show that preprocessing methods based on big data techniques are more effective than existing methods.
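
As a purely illustrative example of MapReduce-style preprocessing, the sketch below drops malformed records and normalizes one field. The paper's dataset schema and cleaning rules are not given in the abstract, so the record layout here is assumed.

    // Hypothetical sketch: a cleaning pass over movie records.
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MovieCleanMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumed layout: title,releaseDate,screens,audienceCount
            String[] f = value.toString().split(",");
            if (f.length != 4 || f[3].isEmpty()) {
                return; // skip malformed or incomplete records
            }
            String title = f[0].trim().toLowerCase(); // normalize the title
            context.write(NullWritable.get(),
                    new Text(title + "," + f[1] + "," + f[2] + "," + f[3]));
        }
    }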

External Merge Sorting in Tajo with Variable Server Configuration (매개변수 환경설정에 따른 타조의 외부합병정렬 성능 연구)

  • Lee, Jongbaeg;Kang, Woon-hak;Lee, Sang-won
    • Journal of KIISE / v.43 no.7 / pp.820-826 / 2016
  • There is a growing requirement for big data processing that extracts valuable information from large amounts of data. The Hadoop system employs the MapReduce framework to process big data, but MapReduce has limitations such as inflexibility and slow data processing. To overcome these drawbacks, SQL query processing techniques known as SQL-on-Hadoop were developed. Apache Tajo, one such SQL-on-Hadoop system, was developed by a Korean development group. External merge sort is one of the most heavily used algorithms in Tajo's query processing, and its performance is influenced by two parameters: sort buffer size and fanout. In this paper, we analyze the performance of external merge sort in Tajo under various sort buffer sizes and fanouts. We identify two major causes of performance differences: CPU cache misses, which increase as the sort buffer grows, and the number of merge passes, which is determined by the fanout.
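
The relationship between buffer size, fanout, and merge passes can be made concrete with back-of-the-envelope arithmetic: the sort buffer fixes the number of initial runs, and each pass merges up to fanout runs, giving roughly ceil(log_fanout(runs)) passes. A small illustrative helper (not Tajo code):

    // Illustrative arithmetic for external merge sort pass counts.
    public class MergePasses {
        static long passes(long inputBytes, long sortBufferBytes, int fanout) {
            // Number of initial sorted runs produced by the sort buffer.
            long runs = (inputBytes + sortBufferBytes - 1) / sortBufferBytes;
            long p = 0;
            while (runs > 1) { // each pass merges up to `fanout` runs
                runs = (runs + fanout - 1) / fanout;
                p++;
            }
            return p;
        }

        public static void main(String[] args) {
            // 10 GiB input, 256 MiB buffer, fanout 16 -> 40 runs -> 2 passes
            System.out.println(passes(10L << 30, 256L << 20, 16));
        }
    }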

Constructing a Support Vector Machine for Localization on a Low-End Cluster Sensor Network (로우엔드 클러스터 센서 네트워크에서 위치 측정을 위한 지지 벡터 머신)

  • Moon, Sangook
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.12 / pp.2885-2890 / 2014
  • Localization of sensor network nodes using machine learning has recently been studied. The support vector machine (SVM) algorithm is easy to implement in a high-level language that supports parallelism. The Raspberry Pi is a Linux system that can be used as a sensor node, and Pis can be used to construct IP-based Hadoop clusters. In this paper, we implemented a support vector machine in Python and built a sensor network cluster from five Raspberry Pis. We also set up a Hadoop software framework to employ the MapReduce mechanism. In our experiments, we ran the test sensor network with a variety of parameters and evaluated it in terms of proficiency, resource usage, and processing time. The experiments showed that, given sufficient execution power and memory, the Pi is appropriate as a member node of the cluster, achieving precise classification for sensor localization using machine learning.
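
For a memory-constrained node such as a Raspberry Pi, Hadoop's per-task memory settings are the usual knobs to lower. A hypothetical job-side configuration sketch with illustrative values; the paper does not publish its actual settings.

    // Hypothetical sketch: shrinking per-task memory for low-end nodes.
    import org.apache.hadoop.conf.Configuration;

    public class PiClusterConfig {
        public static Configuration lowMemoryConf() {
            Configuration conf = new Configuration();
            conf.setInt("mapreduce.map.memory.mb", 256);     // map container size
            conf.setInt("mapreduce.reduce.memory.mb", 256);  // reduce container size
            conf.setInt("mapreduce.task.io.sort.mb", 32);    // in-memory sort buffer
            conf.set("mapreduce.map.java.opts", "-Xmx200m"); // heap under container limit
            return conf;
        }
    }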

Initial Authentication Protocol of Hadoop Distribution System based on Elliptic Curve (타원곡선기반 하둡 분산 시스템의 초기 인증 프로토콜)

  • Jeong, Yoon-Su;Kim, Yong-Tae;Park, Gil-Cheol
    • Journal of Digital Convergence / v.12 no.10 / pp.253-258 / 2014
  • Recently, cloud computing technology has developed rapidly alongside the increase in smartphone use, and more users want to receive big data services. The Hadoop framework underlying such big data services provides the Hadoop file system (HDFS) and Hadoop MapReduce to support data-intensive distributed applications. However, smartphone services using the Hadoop system are highly vulnerable with respect to data authentication. In this paper, we propose an initial authentication protocol for Hadoop systems that serve smartphone services. The proposed protocol combines symmetric-key cryptography with an ECC algorithm in order to support secure processing of multiple data streams. In particular, when a user accesses the Hadoop system to process data, security is improved by using elliptic-curve public-key cryptography, rather than a symmetric key, for the initial authentication key.
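
As a generic illustration of the ECC-plus-symmetric-key idea (not the paper's actual protocol), the standard Java Cryptography Architecture can derive a shared session key via elliptic-curve Diffie-Hellman:

    // Hypothetical sketch: ECDH key agreement yielding a symmetric key.
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import javax.crypto.KeyAgreement;
    import javax.crypto.spec.SecretKeySpec;

    public class EcdhSessionKey {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            kpg.initialize(256); // 256-bit curve
            KeyPair client = kpg.generateKeyPair(); // e.g. the smartphone user
            KeyPair server = kpg.generateKeyPair(); // e.g. the Hadoop gateway

            // Each side combines its private key with the peer's public key.
            KeyAgreement ka = KeyAgreement.getInstance("ECDH");
            ka.init(client.getPrivate());
            ka.doPhase(server.getPublic(), true);
            byte[] shared = ka.generateSecret();

            // Use part of the shared secret as an AES session key; a real
            // protocol would run it through a key derivation function first.
            SecretKeySpec sessionKey = new SecretKeySpec(shared, 0, 16, "AES");
            System.out.println("Derived " + sessionKey.getAlgorithm() + " key");
        }
    }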

Development of Retargetable Hadoop Simulation Environment Based on DEVS Formalism (DEVS 형식론 기반의 재겨냥성 하둡 시뮬레이션 환경 개발)

  • Kim, Byeong Soo;Kang, Bong Gu;Kim, Tag Gon;Song, Hae Sang
    • Journal of the Korea Society for Simulation / v.26 no.4 / pp.51-61 / 2017
  • The Hadoop platform is a representative platform for storing and managing big data. Hadoop consists of a distributed computing system called MapReduce and a distributed file system called HDFS. It is important to analyze Hadoop's effectiveness as cluster configurations and various parameters change; however, since it is hard to construct clusters of thousands of nodes and analyze the constructed system directly, a simulation method is required. This paper proposes a Hadoop simulator based on the DEVS formalism, which provides hierarchical and modular modeling. The simulator offers a retargetable experimental environment in which various parameters, algorithms, and models can be changed, and input models can be designed to reflect the characteristics of Hadoop applications. To maximize user convenience, a user interface, a real-time model viewer, and an input scenario editor are also provided. We validate the Hadoop simulator by comparing it against actual Hadoop execution results and perform various experiments.
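
For readers unfamiliar with DEVS, an atomic model is defined by internal and external transition functions, an output function, and a time-advance function. A minimal Java skeleton of that structure (it follows the DEVS formalism's conventions, not the authors' simulator code):

    // Skeleton of a DEVS atomic model; method names mirror the formalism.
    public abstract class AtomicModel<S> {
        protected S state;

        /** delta_int: state change when the scheduled internal event fires. */
        public abstract void deltaInternal();

        /** delta_ext: state change when an external input arrives. */
        public abstract void deltaExternal(double elapsed, Object input);

        /** lambda: output emitted just before an internal transition. */
        public abstract Object output();

        /** ta: how long the model remains in the current state. */
        public abstract double timeAdvance();
    }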

Processing Method of Mass Small File Using Hadoop Platform (하둡 플랫폼을 이용한 대량의 스몰파일 처리방법)

  • Kim, Chang-Bok;Chung, Jae-Pil
    • Journal of Advanced Navigation Technology / v.18 no.4 / pp.401-408 / 2014
    • 2014
  • Hadoop is composed with MapReduce programming model for distributed processing and HDFS distributed file system. Hadoop is suitable framework for big data processing, but processing of mass small files have many problems. The processing of mass small file in hadoop have problems to created one mapper per one file, and it have problems to needed many memory for store of meta information of file. This paper have comparison evaluation processing method of mass small file with various method in hadoop platform. The processing of general compression format is inadequate because of processing by one mapper regardless of data size. The processing of sequence and hadoop archive file is removed memory problem of namenode by compress and combine of small file. Hadoop archive file is faster then sequence file about combine time of small file. The processing using CombineFileInputFormat class is needed not combine of small file, and it have similar speed big data processing method.