Title/Summary/Keyword: HDFS (Hadoop Distributed File System)


CERES: A Log-based, Interactive Web Analytics System for Backbone Networks (CERES: 백본망 로그 기반 대화형 웹 분석 시스템)

  • Suh, Ilhyun; Chung, Yon Dohn
    • KIISE Transactions on Computing Practices, v.21 no.10, pp.651-657, 2015
  • The amount of web traffic has increased as a result of the rapid growth of web-based applications. To obtain valuable information from web logs, we need systems that support interactive, flexible, and efficient ways to analyze and handle large amounts of data. In this paper, we present CERES, a log-based, interactive web analytics system for backbone networks. Since CERES focuses on analyzing web log records generated from backbone networks, it can perform web analysis from the perspective of a network. CERES is designed for deployment in a server cluster using the Hadoop Distributed File System (HDFS) as the underlying storage. We transform web log records from backbone networks into relations and store them, then let users analyze the records in a flexible and interactive manner with a SQL-like language. In particular, we use the data cube technique to enable efficient statistical analysis of web logs (see the sketch after this entry). The system provides users with a web-based, multi-modal user interface.
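
The data cube technique mentioned above can be illustrated with a minimal, self-contained Java sketch: for every subset of the grouping dimensions, an aggregate count is precomputed, so later statistics are read from the cube instead of rescanning the raw log. This is not CERES code; the record fields (host, method, status) are assumptions for illustration.

    import java.util.*;

    // Minimal data cube sketch: precompute counts for all 2^dims groupings
    // of the log dimensions, with '*' marking a rolled-up dimension.
    public class LogCubeSketch {
        public static void main(String[] args) {
            // Each log record: [host, httpMethod, statusCode] -- assumed fields.
            String[][] logs = {
                {"10.0.0.1", "GET",  "200"},
                {"10.0.0.1", "POST", "404"},
                {"10.0.0.2", "GET",  "200"},
            };
            int dims = 3;
            Map<String, Long> cube = new HashMap<>();
            for (String[] rec : logs) {
                for (int mask = 0; mask < (1 << dims); mask++) {
                    StringBuilder key = new StringBuilder();
                    for (int d = 0; d < dims; d++) {
                        key.append((mask & (1 << d)) != 0 ? rec[d] : "*").append('|');
                    }
                    cube.merge(key.toString(), 1L, Long::sum);
                }
            }
            // e.g. count of all GET requests regardless of host and status:
            System.out.println(cube.get("*|GET|*|"));  // prints 2
        }
    }

For three dimensions the cube stores 2^3 = 8 groupings per record; this trade of storage for query speed is exactly what makes the cube suitable for interactive statistics.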

A Design of Permission Management System Based on Group Key in Hadoop Distributed File System (하둡 분산 파일 시스템에서 그룹키 기반 Permission Management 시스템 설계)

  • Kim, Hyungjoo; Kang, Jungho; You, Hanna; Jun, Moonseog
    • KIPS Transactions on Computer and Communication Systems, v.4 no.4, pp.141-146, 2015
  • The volume of data has grown enormously with the development of IT technologies such as smart devices, social network services, and streaming services. Technologies that can handle massive data have therefore attracted attention, the representative one being Hadoop. Hadoop is open-source software designed to run on general-purpose, Linux-based computers. Initially, Hadoop had almost no security; as the number of users grew, so did the amount of data requiring protection, and a new version introducing Kerberos and a token system appeared in 2009. However, this scheme had the problem that only a single secret key could be used and access permissions to blocks could not be authenticated per user, leaving it vulnerable to replay and spoofing attacks. To remedy these weaknesses while maintaining efficiency, this paper proposes a group key-based protocol in which users are authenticated as logical groups and the result is reflected in the token (the general idea is sketched after this entry). The results show that the weaknesses are resolved without significant overhead in terms of efficiency.
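
The abstract does not give the protocol's details, so the Java sketch below only illustrates the general idea of a group key: members of an authenticated logical group share a key, and a token carrying per-user block permissions is MACed under it, with an expiry field to blunt replay. All field names are hypothetical.

    import javax.crypto.KeyGenerator;
    import javax.crypto.Mac;
    import javax.crypto.SecretKey;
    import java.util.Base64;

    // Illustrative group-key token: a datanode holding the same group key
    // recomputes the MAC to verify the token per user and per block.
    public class GroupKeyTokenSketch {
        public static void main(String[] args) throws Exception {
            // Group key shared by a logical user group (assumption: key
            // distribution to group members is handled elsewhere).
            SecretKey groupKey = KeyGenerator.getInstance("HmacSHA256").generateKey();

            // Token payload: user, block id, permission, expiry (hypothetical names).
            String payload = "user=alice;block=blk_1073741825;perm=r;exp=1735689600";

            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(groupKey);
            String tag = Base64.getEncoder()
                               .encodeToString(mac.doFinal(payload.getBytes("UTF-8")));

            // The expiry field limits replay of captured tokens.
            System.out.println(payload + ";mac=" + tag);
        }
    }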

Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society, v.16 no.1, pp.726-734, 2015
  • Relational databases, which structure data, are currently the most widely used means of data management. However, their service slows as the amount of data increases because of constraints on the read and write operations used to store or query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configurations of hardware, CPU, memory, and network, for smooth operation. In this paper, to improve web information services that slow down as relational databases grow, we implement a model that extracts large amounts of data quickly and safely for users by sending the data to HDFS (Hadoop Distributed File System), unifying and reconstructing it, and processing the HDFS files (a minimal sketch of the transfer step follows this entry). We applied our model to a web-based civil affairs system that stores image files, an irregular form of data. The proposed system processed data 0.4 sec faster than a relational database system. Thus, a Hadoop-based big data processing technique can support web information services that must process amounts of data as large as those in conventional relational databases. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for web services that require fast information processing in organizations whose conventional relational databases have grown too large.
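
The "sending data to HDFS" step can be sketched with the standard Hadoop FileSystem API. This is a minimal illustration under assumptions, not the paper's implementation; the namenode URI and paths are placeholders.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Copy one record into HDFS through the standard FileSystem API.
    public class HdfsPutSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);
            // Write a small metadata record next to an (assumed) image store.
            try (FSDataOutputStream out = fs.create(new Path("/civil/images/doc-0001.meta"))) {
                out.writeUTF("id=0001;type=image;src=web-civil-affairs");
            }
            fs.close();
        }
    }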

An Efficient Data Distribution Storage Scheme for Hadoop Distributed File System (하둡 분산 파일 시스템을 위한 효율적인 데이터 분산 저장 기법)

  • Choi, Sung-Jin; Jeon, Dae-Seuk; Bae, Dae-Keuk; Choi, Bu-Young
    • Proceedings of the Korean Information Science Society Conference, 2011.06d, pp.163-166, 2011
  • Cloud computing uses Internet technologies to deliver all infrastructure resources (software, servers, storage, networks, etc.) "as a Service," so that they can be used anytime, anywhere, independently of the device, over the network, with payment only for what is used; representative service providers include Google and Amazon. Recently, the Apache Foundation has been running the HDFS open-source project to build a system identical or similar to Google's GFS. To guarantee availability, so that the original data can be recovered despite frequent hardware failures, HDFS divides file data into blocks and stores replicas of them on datanodes. This scheme has the drawback that as the number of replicas grows, availability rises but storage consumption also increases (a sketch of the replication knob involved follows this entry). To address this problem, this paper proposes a new distributed storage scheme that exploits properties of matrices.
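
The availability-versus-storage trade-off described above is governed in stock HDFS by the per-file replica count. The Java sketch below shows that standard knob only, not the proposed matrix-based scheme (which the abstract does not detail); the URI and path are placeholders.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Control HDFS block replication: cluster-wide default for new files,
    // and a per-file override for existing files.
    public class ReplicationSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Default replication for files created with this client.
            conf.setInt("dfs.replication", 3);
            FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);
            // Lower one file's replica count, trading availability for storage.
            fs.setReplication(new Path("/data/archive/part-00000"), (short) 2);
            fs.close();
        }
    }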

Dynamic Replication Management Scheme based on AVL Tree for Hadoop Distributed File System (하둡 분산 파일 시스템 기반의 AVL트리를 이용한 동적 복제 관리 기법)

  • Ryu, Yeon-Joong; Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference, 2014.07a, pp.337-340, 2014
  • As cloud systems have emerged as a major issue, research continues on the distributed file systems underlying them. Most recently proposed distributed file systems are built to be scalable and reliable, and they use data replication for fault tolerance and high availability; the Hadoop Distributed File System sets the default number of replicas per block to three. However, this policy has the drawback that while availability increases with the number of replicas, storage consumption increases as well. To solve this problem, this paper proposes a scheme that keeps the number of block replicas to a minimum and places the replicated blocks efficiently, achieving better performance and load balancing (the placement idea is sketched after this entry).
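
As a stand-in for the paper's AVL tree (whose details the abstract does not give), the Java sketch below uses java.util.TreeMap, a red-black tree and thus a different balanced BST, to keep datanodes ordered by load so the least-loaded node receives the next replica in O(log n). Node names and loads are invented, and a production version would need to handle equal loads, since TreeMap keys must be unique.

    import java.util.Map;
    import java.util.TreeMap;

    // Balanced-tree replica placement sketch: datanodes keyed by current load.
    public class ReplicaPlacementSketch {
        public static void main(String[] args) {
            TreeMap<Integer, String> byLoad = new TreeMap<>(); // load -> datanode
            byLoad.put(12, "datanode-1");
            byLoad.put(4,  "datanode-2");
            byLoad.put(9,  "datanode-3");

            // Pick the least-loaded datanode for the next replica ...
            Map.Entry<Integer, String> target = byLoad.pollFirstEntry();
            System.out.println("place replica on " + target.getValue());

            // ... then reinsert it with its updated load; the tree stays balanced.
            byLoad.put(target.getKey() + 1, target.getValue());
        }
    }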


Implementation and Comparison of Structured Big Data Collection Modules (정형 빅데이터 수집 모듈 구현 및 비교)

  • Jang, Dong-Hwon; Lee, Min-Woo; Kim, Woosaeng
    • Proceedings of the Korea Information Processing Society Conference, 2014.04a, pp.635-638, 2014
  • With the advent of the big data era, data have emerged in forms that are difficult to handle with conventional relational databases. Apache Hadoop is widely used as a way to store and utilize data of this nature. When existing RDBMS data are to be used as source data for Hadoop-based analysis, or when growth in data size and complexity forces a change of storage format, the data must be transferred to HDFS (Hadoop Distributed File System); a sketch of such a transfer follows this entry. In this paper, we compare data transfer performance through the implementation of Sqoop and Nosqoop4u, two structured data collection modules.
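
A minimal sketch of the RDBMS-to-HDFS transfer being compared, using Sqoop's programmatic entry point. This assumes Sqoop 1.4.x, where org.apache.sqoop.Sqoop.runTool accepts the same arguments as the command line; the JDBC URL, credentials, and paths are placeholders.

    import org.apache.sqoop.Sqoop;

    // Import one RDBMS table into HDFS via Sqoop's Java entry point.
    public class SqoopImportSketch {
        public static void main(String[] args) {
            String[] importArgs = {
                "import",
                "--connect", "jdbc:mysql://dbhost:3306/source_db",
                "--username", "etl", "--password", "secret",
                "--table", "web_logs",
                "--target-dir", "/user/etl/web_logs",
                "-m", "4"   // four parallel map tasks
            };
            int exitCode = Sqoop.runTool(importArgs);
            System.exit(exitCode);
        }
    }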

Big data-based piping material analysis framework in offshore structure for contract design

  • Oh, Min-Jae; Roh, Myung-Il; Park, Sung-Woo; Chun, Do-Hyun; Myung, Sehyun
    • Ocean Systems Engineering, v.9 no.1, pp.79-95, 2019
  • The material analysis of an offshore structure is generally conducted in the contract design phase for the price quotation of a new offshore project. This analysis is conducted manually by an engineer, which is time-consuming and can lead to inaccurate results, because the data from previous projects are too large and there are too many materials to consider. In this study, the piping materials in an offshore structure are analyzed for contract design using a big data framework. The big data technologies used include HDFS (Hadoop Distributed File System) for data storage, Hive and HBase as the databases that handle the stored data, Spark and Kylin for data processing, and Zeppelin for the user interface and visualization (a sketch of the processing layer follows this entry). The results show that the proposed big data framework can reduce the effort put toward contract design in the estimation of piping material cost.
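
The processing layer of such a framework can be sketched with Spark's Hive integration. The table and column names below are hypothetical, since the abstract gives no schema.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    // Aggregate piping-material records stored under HDFS via Spark SQL
    // with Hive table support.
    public class PipingMaterialSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("piping-material-analysis")
                    .enableHiveSupport()
                    .getOrCreate();
            Dataset<Row> totals = spark.sql(
                "SELECT material_code, SUM(quantity) AS total_qty " +
                "FROM piping_materials GROUP BY material_code");
            totals.show();
            spark.stop();
        }
    }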

An Efficient Data Transmission to Cloud Storage using USB Hijacking (USB 하이재킹을 이용한 클라우드 스토리지로의 효율적인 데이터 전송 기법)

  • Eom, Hyun-Chul; No, Jae-Chun
    • Journal of the Institute of Electronics Engineers of Korea CI, v.48 no.6, pp.47-55, 2011
  • The performance of data transmission from mobile devices to cloud storage is limited by the amount of data being transferred, the communication speed, and the battery consumption of the mobile devices. In particular, when large-scale data communication takes place on mobile devices such as smartphones, performance fluctuation and power consumption become obstacles to establishing a reliable communication environment. In this paper, we present an efficient data transmission method using USB hijacking. In our approach, the synchronization needed to transfer a large amount of data between mobile devices and the user's PC is executed over USB hijacking, so that data volume and battery consumption are no longer concerns during communication. We present several experimental results to verify the effectiveness and suitability of our approach.

A Customized Tourism System Using Log Data on Hadoop (로그 데이터를 이용한 하둡기반 맞춤형 관광시스템)

  • Ya, Ding; Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences, v.13 no.2, pp.397-404, 2018
  • As internet usage increases, a great deal of user behavior is recorded in log files, and research and industrial applications using these log files have recently become active. This paper uses Hadoop, an open-source distributed computing platform, and proposes a customized tourism system that analyzes user behavior in the log files. The proposed system uses Google Analytics to obtain users' log files from the websites they visit and stores search terms extracted by MapReduce in HDFS (a sketch of this extraction step follows this entry). It also gathers features of the sightseeing places or cities that travelers want to tour from travel guide websites using the Octopus application. It then suggests customized cities by matching the search terms against the city features, using the NBP (next bit permutation) algorithm to rearrange search terms and city features and increase the probability of a match. Customized cities suggested by analyzing the log files of 39 users demonstrate the performance of the proposed system.
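
The MapReduce extraction step can be sketched with the standard Hadoop mapper/reducer API. The assumption that search terms arrive as a "q=" field in each log line is for illustration only; the paper's log format is not given.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Extract search terms from log lines and count them before storing
    // the results in HDFS.
    public class SearchTermSketch {
        public static class TermMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            @Override
            protected void map(LongWritable key, Text line, Context ctx)
                    throws IOException, InterruptedException {
                // Assumed format: ampersand-separated fields, query under "q=".
                for (String field : line.toString().split("&")) {
                    if (field.startsWith("q=")) {
                        ctx.write(new Text(field.substring(2)), ONE);
                    }
                }
            }
        }
        public static class TermReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text term, Iterable<IntWritable> counts, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable c : counts) sum += c.get();
                ctx.write(term, new IntWritable(sum));
            }
        }
    }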

An Extraction Method of Sentiment Information from Unstructured Big Data on SNS (SNS상의 비정형 빅데이터로부터 감성정보 추출 기법)

  • Back, Bong-Hyun; Ha, Ilkyu; Ahn, ByoungChul
    • Journal of Korea Multimedia Society, v.17 no.6, pp.671-680, 2014
  • Recently, with the remarkable growth of social network services, it has become necessary to extract interesting information from the vast amounts of data about individual opinions and preferences on SNS (Social Network Services). Sentiment information can be applied to many areas of society, such as politics, public opinion, economics, personal services, and entertainment. Extracting sentiment information requires processing techniques that store large amounts of SNS data, extract meaningful data from them, and search for the sentiment information. This paper proposes an efficient method for extracting sentiment information from various unstructured big data on social networks using the HDFS (Hadoop Distributed File System) platform and MapReduce functions (a scoring sketch follows this entry). In experiments, the proposed method collects and stores data steadily as the data volume increases. When the proposed functions are applied to sentiment analysis, the system maintains load balancing, and the analysis results closely match those of manual work.
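
A map function of the kind the abstract describes could score each post against a sentiment lexicon. The tiny lexicon and sample posts below are invented, since the paper's dictionary and scoring rules are not given.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Lexicon-based sentiment scoring: sum the polarity of known words.
    public class SentimentScoreSketch {
        private static final Map<String, Integer> LEXICON = new HashMap<>();
        static {
            LEXICON.put("love", 2);  LEXICON.put("great", 1);
            LEXICON.put("slow", -1); LEXICON.put("hate", -2);
        }
        static int score(String post) {
            int s = 0;
            for (String w : post.toLowerCase().split("\\W+")) {
                s += LEXICON.getOrDefault(w, 0);
            }
            return s;
        }
        public static void main(String[] args) {
            List<String> posts = Arrays.asList("I love this phone",
                                               "hate the slow battery");
            posts.forEach(p -> System.out.println(score(p) + "\t" + p));
        }
    }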