• Title/Summary/Keyword: Apache Hadoop

Implementation and Comparison of Structured Big Data Collection Modules (정형 빅데이터 수집 모듈 구현 및 비교)

  • Jang, Dong-Hwon; Lee, Min-Woo; Kim, Woosaeng
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.635-638 / 2014
  • With the advent of the big data era, data has emerged in forms that conventional relational databases cannot easily handle. Apache Hadoop is widely used as a way to store and utilize data of this kind. When data in an existing RDBMS is to serve as source data for Hadoop-based analysis, or when growth in data size and complexity forces a change of storage format, the data must be transferred to HDFS (Hadoop Distributed File System). In this paper, we compare data transfer performance through the development of Sqoop and Nosqoop4u, structured data collection modules.
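Sqoop-style transfers typically split the source table on a key column and export each key range as an independent task. A minimal pure-Python sketch of that splitting logic, with sqlite3 standing in for the RDBMS and in-memory partitions standing in for HDFS part files (table and column names are hypothetical):

```python
import sqlite3

def split_ranges(lo, hi, num_splits):
    """Divide [lo, hi] into num_splits contiguous key ranges, as Sqoop's
    split-by logic does with the MIN/MAX of the split column."""
    step = (hi - lo + 1) / num_splits
    bounds = [lo + round(i * step) for i in range(num_splits)] + [hi + 1]
    return [(bounds[i], bounds[i + 1]) for i in range(num_splits)]

def export_table(conn, table, key, num_splits):
    """Export each key range as its own row list (stand-in for one
    part file per parallel mapper)."""
    lo, hi = conn.execute(f"SELECT MIN({key}), MAX({key}) FROM {table}").fetchone()
    parts = []
    for start, end in split_ranges(lo, hi, num_splits):
        rows = conn.execute(
            f"SELECT * FROM {table} WHERE {key} >= ? AND {key} < ?", (start, end)
        ).fetchall()
        parts.append(rows)
    return parts

# Hypothetical source table with 100 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(i, i * 1.5) for i in range(1, 101)])
parts = export_table(conn, "orders", "id", 4)
print([len(p) for p in parts])  # → [25, 25, 25, 25]
```

In real Sqoop the same splitting is selected with `--split-by` and the number of mappers with `-m`; here the ranges are simply processed in sequence.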

Distributed Stream Processing System with Apache Hadoop for PTAM on Xeon Phi Cluster (PTAM을 위한 제온파이 기반 하둡 분산 스트림 프로세싱 시스템)

  • Seo, Jae Min; Cho, Kyu Nam; Kim, Do Hyung; Jeong, Chang-Sung
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.184-186 / 2015
  • In this paper, we propose a new distributed stream processing system for PTAM. PTAM was designed to run on a single machine, which is the root of its limitation: when the computational load of bundle adjustment grows, PTAM needs considerable time and resources to build the map. We therefore propose a system that distributes the computational load through Hadoop and runs the PEs (Processing Elements) on a Xeon Phi system.

Design and Implementation of Sensor Cloud System for Security and Surveillance Service (보안 감시 서비스를 위한 센서 클라우드 시스템 설계 및 구현)

  • Shim, Jae-Seok; Choi, Yeong-Ho; Lim, Yujin
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.137-138 / 2012
  • As demand for security surveillance systems using various sensors has recently increased, efficient management of sensor data has also become important. In this paper, we design a sensor cloud system that applies a cloud environment, whose advantage is low cost relative to its high scalability. In this system, sensor networks distributed indoors detect intruders and deliver sensor data to the sensor cloud through a cloud gateway. The delivered sensor data is stored in a distributed manner on data servers based on Apache Hadoop. The system also includes a system interface for monitoring the sensor data in real time.

Design and Implementation of a Search Engine based on Apache Spark (아파치 스파크 기반 검색엔진의 설계 및 구현)

  • Park, Ki-Sung; Choi, Jae-Hyun; Kim, Jong-Bae; Park, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.17-28 / 2017
  • Recently, research on data has been actively conducted as data has become more valuable. The web crawler, a program for data collection, has drawn attention because it can be applied in various fields. A web crawler can be defined as a tool that analyzes web pages and collects URLs by traversing web servers in an automated manner. For processing big data, distributed web crawlers based on Hadoop MapReduce are widely used, but they are difficult to use and constrained in performance. Apache Spark, an in-memory computing platform, is an alternative to MapReduce. A search engine, one of the main applications of a web crawler, displays the information matching a searched keyword from the data the crawler gathers. If search engines implemented a Spark-based web crawler instead of the traditional MapReduce-based one, data collection would be faster.
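The crawl loop itself is independent of the execution engine (MapReduce or Spark): start from seed URLs, fetch each page, extract its links, and enqueue the unseen ones. A minimal sketch in which a hypothetical in-memory link graph stands in for real HTTP fetching and HTML parsing:

```python
from collections import deque

# Hypothetical site graph: page -> links it contains (stands in for
# fetching a page and parsing its anchor tags).
LINKS = {
    "http://a.example/": ["http://a.example/docs", "http://b.example/"],
    "http://a.example/docs": ["http://a.example/"],
    "http://b.example/": ["http://b.example/about"],
    "http://b.example/about": [],
}

def crawl(seeds):
    """Breadth-first crawl: visit each reachable URL exactly once and
    return the URLs in discovery order."""
    seen, frontier, order = set(seeds), deque(seeds), []
    while frontier:
        url = frontier.popleft()
        order.append(url)
        for link in LINKS.get(url, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

print(crawl(["http://a.example/"]))
```

A distributed crawler partitions the frontier across workers; the per-URL work above is what Spark keeps in memory between iterations instead of writing back to disk as MapReduce does.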

FAST Design for Large-Scale Satellite Image Processing (대용량 위성영상 처리를 위한 FAST 시스템 설계)

  • Lee, Youngrim; Park, Wanyong; Park, Hyunchun; Shin, Daesik
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.4 / pp.372-380 / 2022
  • This study proposes a distributed parallel processing system, called the Fast Analysis System for remote sensing daTa (FAST), for large-scale satellite image processing and analysis. FAST organizes jobs into vertices and sequences and distributes and processes them concurrently. FAST manages data based on the Hadoop Distributed File System, controls entire jobs based on Apache Spark, and performs tasks in parallel on multiple slave nodes based on a Docker container design. FAST enables high-performance processing of progressively accumulated large-volume satellite images. Because each unit task runs in Docker, existing source code can be reused when designing and implementing unit tasks. Additionally, the system is robust against software/hardware faults. To prove the capability of the proposed system, we performed an experiment converting original satellite images into ortho-images, a pre-processing step for all image analyses. In the experiment, with FAST configured with eight slave nodes, processing a satellite image took less than 30 seconds. These results demonstrate the suitability and practical applicability of the FAST design.

A Security Log Analysis System using Logstash based on Apache Elasticsearch (아파치 엘라스틱서치 기반 로그스태시를 이용한 보안로그 분석시스템)

  • Lee, Bong-Hwan; Yang, Dong-Min
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.2 / pp.382-389 / 2018
  • Recently, cyber attacks have been causing serious damage to various information systems, and log data analysis can help address this problem. A security log analysis system makes it possible to cope with security risks properly by collecting, storing, and analyzing log data. In this paper, a security log analysis system is designed and implemented to analyze security log data using Logstash with Elasticsearch, a distributed search engine that can collect and process various types of log data. Kibana, an open-source data visualization plugin for Elasticsearch, is used to generate log statistics and search reports and to visualize the results. The performance of the Elasticsearch-based security log analysis system is compared with an existing log analysis system that uses the Flume log collector, the Flume HDFS sink, and HBase. The experimental results show that the proposed system greatly reduces both database query processing time and log analysis time compared with the existing Hadoop-based log analysis system.
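The pipeline described here (Logstash parses raw log lines into structured events; Kibana aggregates and visualizes them) can be illustrated with a pure-Python sketch. The log format and field names below are hypothetical stand-ins for a grok pattern:

```python
import re
from collections import Counter

# Hypothetical auth-log format: "<timestamp> <result> user=<name> ip=<addr>"
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<result>FAIL|OK) user=(?P<user>\S+) ip=(?P<ip>\S+)"
)

def parse(lines):
    """Grok-style parsing: turn each raw line into a structured event dict,
    dropping lines that do not match the pattern."""
    events = []
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m:
            events.append(m.groupdict())
    return events

def failures_by_ip(events):
    """Kibana-style aggregation: count failed logins per source IP."""
    return Counter(e["ip"] for e in events if e["result"] == "FAIL")

logs = [
    "2018-02-01T10:00:01 FAIL user=root ip=10.0.0.5",
    "2018-02-01T10:00:02 FAIL user=root ip=10.0.0.5",
    "2018-02-01T10:00:07 OK user=alice ip=10.0.0.9",
    "2018-02-01T10:00:09 FAIL user=admin ip=10.0.0.5",
]
events = parse(logs)
print(failures_by_ip(events))  # → Counter({'10.0.0.5': 3})
```

In the actual stack, the parse step is a Logstash grok filter and the aggregation is an Elasticsearch terms query rendered by Kibana.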

Efficient K-Anonymization Implementation with Apache Spark

  • Kim, Tae-Su; Kim, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.23 no.11 / pp.17-24 / 2018
  • Today, we are living in the era of data and information. With the advent of the Internet of Things (IoT), the popularity of social networking sites, and the development of mobile devices, a large amount of data is being produced in diverse areas; the collection of such data is called big data. As the importance of big data grows, so does the need to share big data containing information about individual entities. Because big data contains sensitive information about individuals, releasing it directly for public use may violate existing privacy requirements. Thus, privacy-preserving data publishing (PPDP) has been actively studied as a way to share big data containing personal information while preserving individual privacy. K-anonymity, the most popular method in the area of PPDP, transforms each record in a table such that at least k records have the same values for the given quasi-identifier attributes, making each record indistinguishable from the other records in its equivalence class. As the size of big data continues to grow, there is a growing demand for methods that can efficiently anonymize vast amounts of data. In this paper, we therefore develop an efficient k-anonymity method using the Spark distributed framework. Experimental results show that the developed method achieves significant gains in processing time.
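The k-anonymity transformation itself can be shown without Spark: generalize quasi-identifier values until every record shares its quasi-identifier combination with at least k−1 others. A minimal pure-Python sketch with a single quasi-identifier (age, generalized into progressively wider range buckets; the data and the doubling strategy are illustrative, not the paper's algorithm):

```python
from collections import Counter

def generalize(age, width):
    """Replace an exact age with the range bucket it falls into."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def k_anonymize(ages, k):
    """Widen the age buckets until every bucket holds at least k records,
    so no record is distinguishable within its equivalence class."""
    width = 1
    while True:
        buckets = [generalize(a, width) for a in ages]
        if min(Counter(buckets).values()) >= k:
            return buckets
        width *= 2

ages = [23, 24, 25, 31, 33, 38, 52, 55]
print(k_anonymize(ages, k=2))
```

With multiple quasi-identifiers each attribute gets its own generalization hierarchy, and a distributed version partitions the records across workers, which is where Spark comes in.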

Performance Factor of Distributed Processing of Machine Learning using Spark (스파크를 이용한 머신러닝의 분산 처리 성능 요인)

  • Ryu, Woo-Seok
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.16 no.1 / pp.19-24 / 2021
  • In this paper, we study the performance factors of machine learning in a distributed environment using Apache Spark and present an efficient distributed processing method through experiments. We first identify the performance factors of machine learning on a distributed cluster, classified into cluster performance, data size, and Spark engine configuration. In addition, we study the performance of regression analysis using Spark MLlib running on a Hadoop cluster while varying the node configuration and the Spark executor settings. The experiments confirmed that the effective number of executors is affected by the number of data blocks, but that, depending on the cluster size, its maximum and minimum are bounded by the number of cores and the number of worker nodes, respectively.
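The finding above (effective executor count tracks the number of data blocks but is capped by total cores and floored by the worker-node count) can be sketched as a small sizing helper. The formula is an illustrative reading of that result, not the authors' own model:

```python
def effective_executors(data_blocks, cores_per_node, worker_nodes):
    """Illustrative sizing rule: one useful executor slot per data block,
    but no more than the cluster's total cores and no fewer than one
    executor per worker node."""
    total_cores = cores_per_node * worker_nodes
    return max(worker_nodes, min(data_blocks, total_cores))

# With a 128 MB HDFS block size, a 1 GB dataset yields 8 blocks.
print(effective_executors(data_blocks=8, cores_per_node=4, worker_nodes=3))   # → 8
print(effective_executors(data_blocks=40, cores_per_node=4, worker_nodes=3))  # → 12 (capped by cores)
print(effective_executors(data_blocks=1, cores_per_node=4, worker_nodes=3))   # → 3 (floored by workers)
```

In Spark itself these quantities correspond to partition count, `spark.executor.cores`, and the number of workers allocated by the cluster manager.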

Recommendation of Best Empirical Route Based on Classification of Large Trajectory Data (대용량 경로데이터 분류에 기반한 경험적 최선 경로 추천)

  • Lee, Kye Hyung; Jo, Yung Hoon; Lee, Tea Ho; Park, Heemin
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.101-108 / 2015
  • This paper presents the implementation of a system that recommends empirical best routes based on the classification of large trajectory data. As location-based services spread, we expect location and trajectory data to grow into big data, and we believe the best empirical routes can then be extracted from large trajectory repositories. Large trajectory data is clustered into similar route groups using the Hadoop MapReduce framework. The clustered route groups are stored and managed by a DBMS, which supports rapid responses to end users' requests. We aim to find the best routes based on collected real data, not the ideal shortest path on a map. We implemented 1) an Android application that collects trajectories from users, 2) an Apache Hadoop MapReduce program that clusters large trajectory data, and 3) a service application that queries start-destination pairs from a web server and displays the recommended routes on mobile phones. We validated our approach using real data collected over five days and compared the results with commercial navigation systems. Experimental results show that the empirical best routes are better than the routes recommended by commercial navigation systems.
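Grouping similar trajectories fits the MapReduce pattern naturally: a map step reduces each GPS trace to a coarse grid-cell sequence key, and a reduce step groups traces that share a key. A pure-Python sketch of that shape (the grid size, similarity notion, and sample coordinates are hypothetical, not the paper's clustering method):

```python
from collections import defaultdict

def route_key(trajectory, cell=0.01):
    """Map step: collapse a GPS trace to its sequence of grid cells, so
    traces following the same road produce the same key."""
    cells = []
    for lat, lon in trajectory:
        c = (round(lat / cell), round(lon / cell))
        if not cells or cells[-1] != c:  # drop consecutive duplicates
            cells.append(c)
    return tuple(cells)

def cluster(trajectories):
    """Reduce step: group trajectory ids by their route key."""
    groups = defaultdict(list)
    for tid, traj in trajectories.items():
        groups[route_key(traj)].append(tid)
    return groups

trips = {
    "t1": [(37.5665, 126.9780), (37.5700, 126.9820)],
    "t2": [(37.5664, 126.9779), (37.5702, 126.9821)],  # same road as t1
    "t3": [(37.5665, 126.9780), (37.5600, 126.9700)],  # different route
}
groups = cluster(trips)
print(sorted(len(g) for g in groups.values()))  # → [1, 2]
```

In the MapReduce program, the route key is the shuffle key, so all traces of one route land on the same reducer; the best route per group can then be picked by travel time.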

Development of Procurement Announcement Analysis Support System (전자조달공고 분석지원 시스템 개발)

  • Lim, Il-kwon; Park, Dong-Jun; Cho, Han-Jin
    • Journal of the Korea Convergence Society / v.9 no.8 / pp.53-60 / 2018
  • Korea's public e-procurement has been recognized for its excellence at home and abroad. However, it is difficult for procurement companies to check related announcements and to grasp the status of procurement announcements at a glance. In this paper, we propose an e-Procurement Announcement Analysis Support System that uses HDFS, Apache Spark, and collaborative filtering technology to provide a procurement announcement recommendation service and an announcement/contract trend analysis service for an effective e-procurement system. The recommendation service relieves a procurement company of searching for announcements matching its characteristics. The announcement/contract trend analysis service visualizes announcement and contract information and is implemented so that procurement companies and demand organizations can see e-procurement analysis information at a glance.
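The recommendation service rests on collaborative filtering: suggest announcements that companies with a similar bidding history engaged with. A minimal user-based cosine-similarity sketch over binary bid histories (company and announcement names are hypothetical):

```python
from math import sqrt

# Hypothetical bid history: company -> set of announcements it bid on.
HISTORY = {
    "companyA": {"ann1", "ann2", "ann3"},
    "companyB": {"ann1", "ann2", "ann4"},
    "companyC": {"ann5"},
}

def cosine(a, b):
    """Cosine similarity between two binary bid-history sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(target, history, top_n=2):
    """Score announcements the target has not seen by the similarity of
    the companies that did bid on them."""
    seen = history[target]
    scores = {}
    for other, anns in history.items():
        if other == target:
            continue
        sim = cosine(seen, anns)
        if sim == 0.0:
            continue
        for ann in anns - seen:
            scores[ann] = scores.get(ann, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("companyA", HISTORY))  # → ['ann4']
```

At the scale of a national procurement system, the same pairwise-similarity computation is what Spark would distribute across the cluster.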