• Title/Summary/Keyword: Hadoop MapReduce Framework

Big Numeric Data Classification Using Grid-based Bayesian Inference in the MapReduce Framework

  • Kim, Young Joon; Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.4 / pp.313-321 / 2014
  • In the current era of data-intensive services, the handling of big data is a crucial issue that affects almost every discipline and industry. In this study, we propose a classification method for large volumes of numeric data, implemented in a distributed programming framework, i.e., MapReduce. The proposed method partitions the data space into a grid structure and then models the probability distributions of classes for the grid cells by collecting sufficient statistics using distributed MapReduce tasks. The class labeling of new data is achieved by k-nearest neighbor classification based on Bayesian inference.
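
Purely as an illustration of the kind of per-cell statistics such a grid-based approach aggregates, the sketch below assigns numeric records to grid cells and tallies class counts per cell on a single machine; the cell resolution, feature ranges, and toy data are assumptions, and the paper's actual MapReduce jobs and Bayesian k-NN labeling step are not reproduced here.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Single-machine sketch of grid-cell class counts: each record is assigned
 * to a grid cell, and per-cell class counts serve as sufficient statistics.
 * Cell resolution and feature ranges below are illustrative assumptions.
 */
public class GridClassCounts {

    // Cells per dimension; the paper's choice of resolution is not specified here.
    static final int CELLS_PER_DIM = 10;

    /** "Map" step: derive a grid-cell key from a numeric feature vector in [0,1)^d. */
    static String cellKey(double[] features) {
        StringBuilder key = new StringBuilder();
        for (double f : features) {
            int idx = Math.min((int) (f * CELLS_PER_DIM), CELLS_PER_DIM - 1);
            key.append(idx).append(':');
        }
        return key.toString();
    }

    public static void main(String[] args) {
        // Toy labeled records: features in [0,1)^2 plus a class label.
        double[][] x = { {0.12, 0.80}, {0.15, 0.83}, {0.90, 0.10} };
        String[] y = { "A", "A", "B" };

        // "Reduce" step: per-cell class counts (what distributed tasks would aggregate).
        Map<String, Map<String, Integer>> counts = new HashMap<>();
        for (int i = 0; i < x.length; i++) {
            counts.computeIfAbsent(cellKey(x[i]), k -> new HashMap<>())
                  .merge(y[i], 1, Integer::sum);
        }
        counts.forEach((cell, c) -> System.out.println(cell + " -> " + c));
    }
}
```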

The MapReduce framework for Large-scale Data Analysis: Overview and Research Trends (대규모 데이터 분석을 위한 MapReduce 기술의 연구 동향)

  • Lee, K.H.; Park, W.J.; Cho, K.S.; Ryu, W.
    • Electronics and Telecommunications Trends / v.28 no.6 / pp.156-166 / 2013
  • MapReduce is recognized as an effective tool for parallel processing of large volumes of data in various formats. In particular, Hadoop, an open-source implementation of MapReduce, is widely used in many fields and has attracted a great deal of attention as the most representative big data solution. However, along with the advantages that come from its structural characteristics, MapReduce also has several constraints and drawbacks. Accordingly, much research and system refinement aimed at improving MapReduce has been carried out in both academia and industry. This article introduces the characteristics of the MapReduce framework for large-scale data analysis and recent research on improving it, and offers a forecast of what future large-scale data processing may look like.

A JobTracker Fault-tolerant Mechanism for MapReduce Framework (MapReduce 프레임워크를 위한 JobTracker 결함허용 메커니즘)

  • Hwang, Byung-Hyun; Park, Kie-Jin
    • Proceedings of the Korean Information Science Society Conference / 2010.06a / pp.317-318 / 2010
  • To provide cloud computing services, it is essential to build an IT infrastructure capable of the distributed data storage and parallel processing that cloud computing requires. To this end, research on HDFS (Hadoop Distributed File System), one of the distributed file systems, and on the MapReduce framework for parallel data processing has been drawing attention. However, because the JobTracker node in the MapReduce framework is a single point of failure (SPoF), a failure of the JobTracker node during a job causes the entire job to fail. To address this problem, this paper proposes a fault-tolerance mechanism that can cope with JobTracker node failures in the MapReduce framework.

Sequential Pattern Mining with Optimization Calling MapReduce Function on MapReduce Framework (맵리듀스 프레임웍 상에서 맵리듀스 함수 호출을 최적화하는 순차 패턴 마이닝 기법)

  • Kim, Jin-Hyun; Shim, Kyu-Seok
    • The KIPS Transactions: Part D / v.18D no.2 / pp.81-88 / 2011
  • Sequential pattern mining, which finds the frequent patterns appearing in a given set of sequences, is an important data mining problem with broad applications. For example, sequential pattern mining can find web access patterns, customers' purchase patterns, and DNA sequences related to specific diseases. In this paper, we develop sequential pattern mining algorithms using the MapReduce framework. Our algorithms distribute the input data across several machines and find frequent sequential patterns in parallel. With synthetic data sets, we conducted a comprehensive performance study while varying several parameters. Our experimental results show that our algorithms achieve linear speedup as the number of machines increases.
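
As a rough illustration of the support counting that one MapReduce round could parallelize, the sketch below emits the length-2 subsequences of each sequence once and sums their supports; the paper's candidate generation, pruning, and MapReduce-call optimization are not shown, and the toy sequences are assumptions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Sketch of one support-counting round: the "map" side emits each length-2
 * subsequence occurring in a sequence (at most once per sequence), and the
 * "reduce" side sums the partial counts into supports.
 */
public class SupportCount {
    public static void main(String[] args) {
        List<String[]> sequences = new ArrayList<>();
        sequences.add(new String[] {"a", "b", "c"});
        sequences.add(new String[] {"a", "c"});
        sequences.add(new String[] {"b", "c"});

        Map<String, Integer> support = new HashMap<>();
        for (String[] seq : sequences) {
            Set<String> emitted = new HashSet<>();   // count each pattern once per sequence
            for (int i = 0; i < seq.length; i++) {
                for (int j = i + 1; j < seq.length; j++) {
                    emitted.add(seq[i] + "->" + seq[j]);
                }
            }
            for (String p : emitted) {
                support.merge(p, 1, Integer::sum);   // reduce: sum partial counts
            }
        }
        support.forEach((p, s) -> System.out.println(p + " support=" + s));
    }
}
```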

An Algorithms for Tournament-based Big Data Analysis (토너먼트 기반의 빅데이터 분석 알고리즘)

  • Lee, Hyunjin
    • Journal of Digital Contents Society / v.16 no.4 / pp.545-553 / 2015
  • While all data has value in itself, most data collected in the real world is random and unstructured. To extract useful information from such data, data transformation and analysis algorithms are needed; data mining serves this purpose. Today, there is a need not only for a variety of data mining techniques but also for the computational capacity and fast analysis times that huge volumes of data demand. Hadoop is commonly used to store huge volumes of data, and the MapReduce framework is the usual way to analyze data stored in Hadoop. In this paper, we develop a tournament-based MapReduce method that makes it efficient to port an algorithm developed for a single machine to the MapReduce framework. The proposed method can be applied to many analysis algorithms, and we demonstrate its usefulness by applying it to the frequently used data mining algorithms k-means and k-nearest neighbor classification.
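
The abstract does not spell out the tournament construction, so the following is only a hedged sketch of the general idea of a tournament-style reduction: per-partition partial results are merged pairwise over successive rounds. The Candidate type and the "keep the closer candidate" merge rule (as a 1-nearest-neighbor step might use) are illustrative assumptions, not the paper's method.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Tournament-style reduction sketch: partial results from data partitions are
 * merged pairwise, round by round, until a single result remains.
 */
public class TournamentMerge {

    /** A partial result: the best candidate found in one data partition (assumed shape). */
    record Candidate(String label, double distance) {}

    static Candidate merge(Candidate a, Candidate b) {
        return a.distance <= b.distance ? a : b;     // keep the closer candidate
    }

    public static void main(String[] args) {
        List<Candidate> round = new ArrayList<>(List.of(
                new Candidate("p0", 3.2), new Candidate("p1", 1.7),
                new Candidate("p2", 2.9), new Candidate("p3", 4.4)));

        // Each tournament round halves the number of surviving candidates.
        while (round.size() > 1) {
            List<Candidate> next = new ArrayList<>();
            for (int i = 0; i + 1 < round.size(); i += 2) {
                next.add(merge(round.get(i), round.get(i + 1)));
            }
            if (round.size() % 2 == 1) next.add(round.get(round.size() - 1));
            round = next;
        }
        System.out.println("winner: " + round.get(0));
    }
}
```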

Framework for Efficient Web Page Prediction using Deep Learning

  • Kim, Kyung-Chang
    • Journal of the Korea Society of Computer and Information / v.25 no.12 / pp.165-172 / 2020
  • Recently, due to the exponential growth of access information on the web, predicting the next web page a user will visit has become increasingly important. Deep learning is one method that can be used for this prediction: web logs are first analyzed through data preprocessing, and the user's next web page is then predicted from the preprocessed logs using a deep learning algorithm. In this paper, we propose a framework for web page prediction that combines web log preprocessing with deep learning techniques for prediction. To speed up the preprocessing of large web logs, a Hadoop-based MapReduce programming model is used. We also present a web prediction system that applies an efficient deep learning technique to the output of the web log preprocessing for training and prediction. Through experiments, we show the performance improvement of the proposed method over traditional methods, as well as its prediction accuracy.
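
As a hedged sketch of a Hadoop MapReduce preprocessing pass over web logs (not the paper's actual pipeline), the classes below key each request by user in the mapper and concatenate that user's pages into a click sequence in the reducer; the whitespace-separated "user page" log format is an assumption.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * Web log preprocessing sketch: mapper emits (user, page) pairs, reducer
 * builds each user's page sequence for downstream training.
 */
public class WebLogPreprocess {

    public static class SessionMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] fields = line.toString().split("\\s+");
            if (fields.length >= 2) {
                ctx.write(new Text(fields[0]), new Text(fields[1])); // user -> page
            }
        }
    }

    public static class SessionReducer
            extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text user, Iterable<Text> pages, Context ctx)
                throws IOException, InterruptedException {
            StringBuilder sequence = new StringBuilder();
            for (Text page : pages) {
                if (sequence.length() > 0) sequence.append(',');
                sequence.append(page);
            }
            ctx.write(user, new Text(sequence.toString())); // user -> page sequence
        }
    }
}
```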

Implementation of Video Processing Framework using Hadoop-based cloud computing (Hadoop 기반 클라우드 컴퓨팅을 이용한 영상 처리 프레임워크 구현)

  • Ryu, Chungmo; Lee, Daecheol; Jang, Minwook; Kim, Cheolgi
    • Annual Conference of KIPS / 2013.11a / pp.139-142 / 2013
  • Recently, research on clouds for collecting information from large volumes of video data and for video processing has been active. However, most cloud research based on open-source software combines components merely at the program level rather than at the library level, so the performance problems caused by the inefficiency of such simple combinations receive little attention. Focusing on resolving this inefficiency, this paper combines FFmpeg and Hadoop at the library level to build a video cloud environment with better performance than existing approaches. JNI (Java Native Interface) is used to couple FFmpeg, a C-based video processing library, with Hadoop, a Java-based cloud environment. In the detailed implementation, HDFS (Hadoop Distributed File System) is extended so that Hadoop MapReduce can access video files directly through FFmpeg, which prevents the performance degradation caused by unnecessary work arising from the differing file access methods of FFmpeg and Hadoop. In addition, for application scalability, working videos are split into GOP (Group of Pictures) units, the minimum unit of video processing, and distributed to the cloud nodes. As a result, total processing time is shorter than that of existing video processing clouds that combine Hadoop and FFmpeg only at the program level, and GOP-level processing guarantees stability and application scalability for video-based jobs.
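
A minimal sketch of the JNI bridging idea described above: a Java class declares native methods whose implementations would live in a C wrapper around FFmpeg, so a MapReduce task could hand a GOP's bytes to native code. The library name "ffmpegbridge" and both method signatures are hypothetical; the paper's HDFS extension and FFmpeg wrapper are not reproduced.

```java
/**
 * Hypothetical JNI bridge between Java (Hadoop side) and a C wrapper around FFmpeg.
 */
public class FfmpegBridge {

    static {
        // Loads libffmpegbridge.so from java.library.path; the library name is hypothetical.
        System.loadLibrary("ffmpegbridge");
    }

    /** Decodes one GOP worth of encoded video; the native side would call FFmpeg. */
    public native byte[] decodeGop(byte[] encodedGop);

    /** Reports how many frames the native decoder produced for the last call. */
    public native int lastFrameCount();
}
```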

An Efficient Implementation of Mobile Raspberry Pi Hadoop Clusters for Robust and Augmented Computing Performance

  • Srinivasan, Kathiravan; Chang, Chuan-Yu; Huang, Chao-Hsi; Chang, Min-Hao; Sharma, Anant; Ankur, Avinash
    • Journal of Information Processing Systems / v.14 no.4 / pp.989-1009 / 2018
  • Rapid advances in science and technology, together with the exponential development of smart mobile devices, workstations, supercomputers, smart gadgets, and network servers, have been witnessed over the past few years. The sudden increase in the Internet population and the manifold growth in Internet speeds have occasioned the generation of an enormous amount of data, now termed 'big data'. Given this scenario, storing data on local servers or a personal computer is an issue that can be resolved by utilizing cloud computing, and several cloud computing service providers are now available to address big data issues. This paper establishes a framework that builds Hadoop clusters on the new single-board computer (SBC) Mobile Raspberry Pi; these clusters offer facilities for storage as well as computing. Regular data centers require large amounts of energy to operate, need cooling equipment, and occupy prime real estate. Both the energy consumption and the physical space constraints can be addressed by employing Mobile Raspberry Pi Hadoop clusters, which provide a cost-effective, low-power, high-speed solution along with micro data center support for big data. Hadoop provides the required modules for the distributed processing of big data by deploying MapReduce programming approaches. In this work, the performance of SBC clusters and of a single computer was compared. The experimental data show that the SBC clusters outperform a single computer by around 20%, and the cluster processing speed for large volumes of data can be further enhanced by increasing the number of SBC nodes. Data storage is accomplished using the Hadoop Distributed File System (HDFS), which offers more flexibility and greater scalability than a single computer system.

External Merge Sorting in Tajo with Variable Server Configuration (매개변수 환경설정에 따른 타조의 외부합병정렬 성능 연구)

  • Lee, Jongbaeg; Kang, Woon-hak; Lee, Sang-won
    • Journal of KIISE / v.43 no.7 / pp.820-826 / 2016
  • There is a growing need for big data processing that extracts valuable information from large amounts of data. The Hadoop system employs the MapReduce framework to process big data, but MapReduce has limitations such as inflexible and slow data processing. To overcome these drawbacks, SQL query processing techniques known as SQL-on-Hadoop were developed. Apache Tajo, one of the SQL-on-Hadoop systems, was developed by a Korean development group. External merge sort is one of the most heavily used algorithms in Tajo for query processing, and its performance is influenced by two parameters: sort buffer size and fanout. In this paper, we analyze the performance of external merge sort in Tajo with various sort buffer sizes and fanouts, and we identify two major causes of performance differences: CPU cache misses, which increase as the sort buffer size grows, and the number of merge passes, which is determined by the fanout.
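
To make the fanout/merge-pass relationship concrete, the small sketch below estimates the number of initial runs as ceil(inputSize / sortBufferSize) and counts how many passes are needed when up to fanout runs are merged at a time; the sizes used are illustrative assumptions, not Tajo defaults.

```java
/**
 * Back-of-the-envelope estimate for external merge sort:
 * initial runs = ceil(input / sort buffer), then each merge pass
 * reduces the run count by a factor of the fanout.
 */
public class MergePassEstimate {

    static long initialRuns(long inputBytes, long sortBufferBytes) {
        return (inputBytes + sortBufferBytes - 1) / sortBufferBytes; // ceiling division
    }

    static int mergePasses(long runs, int fanout) {
        int passes = 0;
        while (runs > 1) {              // each pass merges up to 'fanout' runs into one
            runs = (runs + fanout - 1) / fanout;
            passes++;
        }
        return passes;
    }

    public static void main(String[] args) {
        long input = 64L * 1024 * 1024 * 1024;   // 64 GiB of input (assumed)
        long buffer = 256L * 1024 * 1024;        // 256 MiB sort buffer (assumed)
        int fanout = 16;                         // merge fanout (assumed)

        long runs = initialRuns(input, buffer);
        System.out.println("initial runs = " + runs
                + ", merge passes = " + mergePasses(runs, fanout));
    }
}
```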

Development of Application to Deal with Large Data Using Hadoop for 3D Printer (하둡을 이용한 3D 프린터용 대용량 데이터 처리 응용 개발)

  • Lee, Kang Eun; Kim, Sungsuk
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.11-16 / 2020
  • 3D printing is an emerging technology that is getting a lot of attention. To 3D-print an object, a 3D model is first generated and then converted to G-code, the instruction set that drives the 3D printer. A facet, a small triangle, represents a small patch of the 3D model's surface. Depending on the height or precision of the 3D model, the number of facets can become very large, so the conversion from the 3D model to G-code takes longer. Apache Hadoop is a software framework that supports distributed processing of large data sets, and its range of applications keeps widening. In this paper, Hadoop is used to perform the conversion in a time-efficient way. A two-phase distributed algorithm is developed first: all facets are sorted by their lowest Z-value, divided into N parts, and converted on several nodes independently. The algorithm is implemented in four steps: preprocessing, Map, Shuffling, and Reduce in Hadoop. Finally, for performance evaluation, Hadoop systems are set up and convert test 3D models while the height and precision are varied.
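
As a minimal sketch of the partitioning idea described above (not the paper's actual four-step implementation), the code below sorts facets by their lowest Z-value and splits them into N contiguous groups so that each group could be converted on a different node; the facet representation and the value of N are assumptions.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Facet partitioning sketch: sort facets by lowest Z, then split into N
 * contiguous bands that independent workers could convert to G-code.
 */
public class FacetPartition {

    /** One STL facet: three vertices, each (x, y, z). */
    record Facet(double[] v1, double[] v2, double[] v3) {
        double lowestZ() {
            return Math.min(v1[2], Math.min(v2[2], v3[2]));
        }
    }

    static List<List<Facet>> partition(List<Facet> facets, int n) {
        facets.sort(Comparator.comparingDouble(Facet::lowestZ)); // sort by lowest Z
        List<List<Facet>> parts = new ArrayList<>();
        int size = (facets.size() + n - 1) / n;                  // facets per part (ceiling)
        for (int start = 0; start < facets.size(); start += size) {
            parts.add(facets.subList(start, Math.min(start + size, facets.size())));
        }
        return parts;
    }

    public static void main(String[] args) {
        List<Facet> facets = new ArrayList<>(List.of(
                new Facet(new double[]{0,0,2}, new double[]{1,0,2}, new double[]{0,1,3}),
                new Facet(new double[]{0,0,0}, new double[]{1,0,0}, new double[]{0,1,1}),
                new Facet(new double[]{0,0,5}, new double[]{1,0,5}, new double[]{0,1,6})));
        partition(facets, 2).forEach(p -> System.out.println(p.size() + " facets in part"));
    }
}
```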