• Title/Summary/Keyword: HADOOP

Search Results: 395

"Multi-use Data Platform" 하둡 2.0과 관련 데이터 처리 프레임워크 기술

  • Kim, Jik-Su
    • Broadcasting and Media Magazine / v.22 no.4 / pp.11-17 / 2017
  • This article describes the main features of Hadoop 2.0, which is evolving into a multi-purpose data platform, and the various data processing frameworks related to it. Unlike Hadoop 1.0, which was optimized for MapReduce-based batch processing, the Hadoop 2.0 platform, introduced together with YARN, can support diverse kinds of data processing workflows (batch, interactive, streaming, etc.). In addition, technologies that were mainly used in the high-performance computing field are now also supported on the Hadoop 2.0 platform. Finally, as a YARN application development case, we introduce a new data processing framework for Many-Task Computing (MTC) applications that our research team is developing.

Improving Hadoop security using TPM (TPM을 이용한 하둡 보안의 강화)

  • Park, Seung-Je;Kim, Hee-Youl
    • Proceedings of the Korean Information Science Society Conference / 2012.06c / pp.233-235 / 2012
  • The Hadoop framework is the de facto standard for open-source cloud infrastructure. Hadoop was originally designed without security in mind, but security features have since been added, such as the use of Kerberos, a strong authentication protocol. Although Hadoop security appears fairly robust, security is the most critical factor in the widespread adoption of cloud computing, so cloud providers must guarantee their customers an even stronger level of security. In this paper, we identify the limitations of Hadoop security and propose a solution using the TPM (Trusted Platform Module) hardware security chip.

MRQUTER: A Parallel Qualitative Temporal Reasoner Using MapReduce Framework (MRQUTER: MapReduce 프레임워크를 이용한 병렬 정성 시간 추론기)

  • Kim, Jonghoon;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.5 no.5 / pp.231-242 / 2016
  • In order to keep pace with rapid changes in Web information, the current Web technologies need to be extended to represent both the valid time and location of each fact and item of knowledge, and to reason about their relationships. Until recently, most research on qualitative temporal reasoning was conducted at laboratory scale, dealing with small knowledge bases. In this paper, we propose the design and implementation of MRQUTER, a parallel qualitative temporal reasoner that can reason over Web-scale knowledge bases. The reasoner was built on a Hadoop cluster using the MapReduce parallel programming framework. It decomposes the entire qualitative temporal reasoning process into several MapReduce jobs, such as the encoding and decoding job, the inverse and equal reasoning job, the transitive reasoning job, and the refining job, and applies optimization techniques to each component job, which is implemented as a pair of Map and Reduce functions. In experiments on large benchmark temporal knowledge bases, MRQUTER shows high reasoning performance and scalability.
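
The inverse-and-equal reasoning step described in the abstract can be illustrated with a minimal in-memory Map/Reduce sketch. The relation names and the `INVERSE` table below are illustrative (a small subset of Allen-style interval relations), not the paper's actual encoding:

```python
from collections import defaultdict

# Illustrative inverse table for a few interval relations.
INVERSE = {
    "before": "after", "after": "before",
    "meets": "met-by", "met-by": "meets",
    "equal": "equal",
}

def map_inverse(triple):
    """Map phase: for each (a, rel, b) fact, also emit the inverse fact."""
    a, rel, b = triple
    yield (a, rel, b)
    yield (b, INVERSE[rel], a)

def reduce_dedup(triples):
    """Reduce phase: group derived facts by subject and deduplicate."""
    grouped = defaultdict(set)
    for a, rel, b in triples:
        grouped[a].add((rel, b))
    return grouped

facts = [("i1", "before", "i2"), ("i2", "meets", "i3")]
mapped = [t for f in facts for t in map_inverse(f)]
kb = reduce_dedup(mapped)
print(sorted(kb["i2"]))  # derived facts with subject i2
```

In the actual system each phase would run as a distributed Hadoop job over a partitioned knowledge base; this sketch only shows the per-record logic.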

E-Discovery Process Model and Alternative Technologies for an Effective Litigation Response of the Company (기업의 효과적인 소송 대응을 위한 전자증거개시 절차 모델과 대체 기술)

  • Lee, Tae-Rim;Shin, Sang-Uk
    • Journal of Digital Convergence / v.10 no.8 / pp.287-297 / 2012
  • In order to prepare for the introduction of the E-Discovery system from the United States and to cope with possible changes in legal systems, we propose a general E-Discovery process and the essential tasks of each phase. The proposed process model is based on an analysis of well-known projects such as EDRM and The Sedona Conference, which are leading efforts to standardize E-Discovery task procedures and to supply guidelines to hands-on workers. In addition, machine learning algorithms, open-source information retrieval libraries, and Hadoop-based distributed processing technologies for big data are introduced, and their application to E-Discovery work scenarios is proposed. This information will be useful to vendors and developers of E-Discovery service solutions; it is also helpful to company owners willing to rebuild their business processes, and it enables those facing a major lawsuit to handle the situation effectively.

A Security Log Analysis System using Logstash based on Apache Elasticsearch (아파치 엘라스틱서치 기반 로그스태시를 이용한 보안로그 분석시스템)

  • Lee, Bong-Hwan;Yang, Dong-Min
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.2 / pp.382-389 / 2018
  • Cyber attacks can cause serious damage to various information systems, and analyzing log data can help address this problem. A security log analysis system makes it possible to cope with security risks properly by collecting, storing, and analyzing log data. In this paper, a security log analysis system is designed and implemented that analyzes security log data using Logstash together with Elasticsearch, a distributed search engine that can collect and process various types of log data. Kibana, an open-source data visualization plugin for Elasticsearch, is used to generate log statistics and search reports and to visualize the results. The performance of the Elasticsearch-based security log analysis system is compared to an existing log analysis system that uses the Flume log collector, the Flume HDFS sink, and HBase. The experimental results show that the proposed system greatly reduces both database query processing time and log analysis time compared to the existing Hadoop-based log analysis system.
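
As a rough sketch of the kind of aggregation such a system produces for its statistics dashboards, the snippet below parses security log lines and counts events per type, comparable to an Elasticsearch terms aggregation. The log format and field names are assumptions, not taken from the paper:

```python
from collections import Counter

# Hypothetical security log lines: "timestamp level source_ip event"
logs = [
    "2018-01-01T00:00:01 WARN 10.0.0.5 login_failure",
    "2018-01-01T00:00:02 WARN 10.0.0.5 login_failure",
    "2018-01-01T00:00:03 INFO 10.0.0.7 login_success",
]

def parse(line):
    """Split one whitespace-delimited log line into named fields."""
    ts, level, ip, event = line.split()
    return {"ts": ts, "level": level, "ip": ip, "event": event}

# Count occurrences of each event type across all parsed log records.
events = Counter(doc["event"] for doc in (parse(l) for l in logs))
print(events.most_common())
```

In the real pipeline Logstash would do the parsing and Elasticsearch the aggregation; this is only the shape of the computation.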

An Algorithm for Tournament-based Big Data Analysis (토너먼트 기반의 빅데이터 분석 알고리즘)

  • Lee, Hyunjin
    • Journal of Digital Contents Society / v.16 no.4 / pp.545-553 / 2015
  • While all data has value in itself, most data collected in the real world is random and unstructured. To extract useful information from such data, transformation and analysis algorithms are needed; data mining is used for this purpose. Today, analyzing data requires not only a variety of data mining techniques but also substantial computational resources and fast analysis times for huge volumes of data. Hadoop is commonly used to store such data, and the MapReduce framework is used to analyze it. In this paper, we develop a tournament-based MapReduce method that efficiently ports algorithms developed for a single machine to the MapReduce framework. The proposed method can be applied to many analysis algorithms, and we demonstrate its usefulness by applying it to two frequently used data mining algorithms: k-means and k-nearest-neighbor classification.
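
A tournament-style combination of partial results can be sketched as below: candidates are compared pairwise in rounds, and the winner of each match advances, as in a sports bracket. The nearest-neighbor example and pairing scheme are generic illustrations, not the paper's exact method:

```python
import math

def nearest(a, b, query):
    """One tournament match: the point closer to the query advances."""
    return a if math.dist(a, query) <= math.dist(b, query) else b

def tournament(points, query):
    """Run pairwise rounds until one winner (the nearest point) remains."""
    while len(points) > 1:
        nxt = []
        for i in range(0, len(points) - 1, 2):
            nxt.append(nearest(points[i], points[i + 1], query))
        if len(points) % 2:          # odd one out gets a bye this round
            nxt.append(points[-1])
        points = nxt
    return points[0]

pts = [(0, 0), (5, 5), (1, 1), (9, 9), (2, 3)]
print(tournament(pts, (2, 2)))
```

Each round is independent of the others, which is what makes this structure easy to map onto parallel Map/Reduce stages.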

A Distributed Cache Management Scheme for Efficient Accesses of Small Files in HDFS (HDFS에서 소형 파일의 효율적인 접근을 위한 분산 캐시 관리 기법)

  • Oh, Hyunkyo;Kim, Kiyeon;Hwang, Jae-Min;Park, Junho;Lim, Jongtae;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.28-38 / 2014
  • In this paper, we propose a distributed cache management scheme for efficient access to small files in the Hadoop Distributed File System (HDFS). The proposed scheme reduces the number of metadata entries managed by the name node, since many small files are merged and stored in a single chunk. It also reduces file access costs by keeping the information of requested files in the client cache and the data node caches. The client cache keeps the small files a user requests along with their metadata, while each data node cache keeps the small files that are frequently requested by users. Performance evaluation shows that the proposed scheme significantly reduces processing time compared to the existing scheme.
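
The merge-and-index idea behind such schemes can be sketched in a few lines: many small files are packed into one chunk with an offset index, so the name node tracks one object instead of many, and a client-side cache serves repeat reads. The file names, index layout, and cache policy here are hypothetical:

```python
# Merge small files into one chunk and record each file's offset,
# so a name node would track one chunk instead of many small files.
small_files = {"a.txt": b"alpha", "b.txt": b"beta", "c.txt": b"gamma"}

chunk = bytearray()
index = {}                        # file name -> (offset, length)
for name, data in small_files.items():
    index[name] = (len(chunk), len(data))
    chunk.extend(data)

client_cache = {}                 # client-side cache of requested files

def read(name):
    """Serve from the client cache; on a miss, slice the merged chunk."""
    if name not in client_cache:
        off, length = index[name]
        client_cache[name] = bytes(chunk[off:off + length])
    return client_cache[name]

print(read("b.txt"))
```

A second `read("b.txt")` hits `client_cache` and never touches the chunk, which is the access-cost saving the scheme targets.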

Statistical Approach to Sentiment Classification using MapReduce (맵리듀스를 이용한 통계적 접근의 감성 분류)

  • Kang, Mun-Su;Baek, Seung-Hee;Choi, Young-Sik
    • Science of Emotion and Sensibility / v.15 no.4 / pp.425-440 / 2012
  • As the scale of the internet grows, the amount of subjective data increases, and with it the need to classify subjective data automatically. Sentiment classification is the classification of subjective data by sentiment type. Research on sentiment classification has focused on NLP (Natural Language Processing) and sentiment word dictionaries, and has faced two critical problems. First, the performance of morphological analysis in NLP has fallen short of expectations. Second, it is not easy to choose sentiment words or to determine how much sentiment a word carries. To solve these problems, this paper suggests combining web-scale data with a statistical approach to sentiment classification. The proposed method uses statistics of words drawn from web-scale data rather than analyzing the meaning of each word. Unlike previous research that depended on NLP algorithms, this approach focuses on the data itself. Hadoop and MapReduce are used to handle the web-scale data.
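
The statistical idea (scoring text by word counts rather than word meaning) can be sketched as follows. The counts and the scoring rule are invented for illustration; in the paper such statistics would be computed over web-scale data with Hadoop and MapReduce:

```python
from collections import Counter

# Hypothetical counts of how often each word appears in positive
# vs. negative contexts, as would be gathered from a web corpus.
pos_counts = Counter({"great": 90, "good": 80, "slow": 10})
neg_counts = Counter({"great": 10, "bad": 70, "slow": 60})

def sentiment(text):
    """Classify by comparing summed positive vs. negative word statistics."""
    words = text.split()
    pos = sum(pos_counts[w] for w in words)   # missing words count as 0
    neg = sum(neg_counts[w] for w in words)
    return "positive" if pos >= neg else "negative"

print(sentiment("great good"))
print(sentiment("slow bad"))
```

Note that no morphological analysis or dictionary lookup is involved: the classifier relies purely on corpus statistics, which is the contrast the abstract draws with NLP-based approaches.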


Big Data Preprocessing for Predicting Box Office Success (영화 흥행 실적 예측을 위한 빅데이터 전처리)

  • Jun, Hee-Gook;Hyun, Geun-Soo;Lim, Kyung-Bin;Lee, Woo-Hyun;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices / v.20 no.12 / pp.615-622 / 2014
  • The Korean film market has rapidly achieved an international scale, which has created a need for decision-making based on more precise and appropriate analytical methods. Today's highly advanced information environment provides an overwhelming amount of data generated in real time, and this data must be properly handled and analyzed in order to extract useful information. In particular, the preprocessing of large data, which is the most time-consuming step, should be done in a reasonable amount of time. In this paper, we investigate big data preprocessing methods for predicting movie box office success. We analyze the characteristics of movie data to design specialized preprocessing methods, and we use the Hadoop MapReduce framework. The experimental results show that preprocessing methods using big data techniques are more effective than existing methods.

Spark-based Network Log Analysis System for Detecting Network Attack Patterns Using Snort (Snort를 이용한 비정형 네트워크 공격패턴 탐지를 수행하는 Spark 기반 네트워크 로그 분석 시스템)

  • Baek, Na-Eun;Shin, Jae-Hwan;Chang, Jin-Su;Chang, Jae-Woo
    • The Journal of the Korea Contents Association / v.18 no.4 / pp.48-59 / 2018
  • As network technology has developed, it has come to be used in various fields. However, attacks that exploit this evolving technology and target public institutions and companies have increased. Meanwhile, existing network intrusion detection systems take more and more time to process logs as the amount of network log data increases. Therefore, in this paper, we propose a Spark-based network log analysis system that detects unstructured network attack patterns using Snort. The proposed system extracts and analyzes the elements required for network attack pattern detection from large amounts of network log data. For the analysis, we propose rules to detect attack patterns for port scanning, host scanning, DDoS, and worm activity, and we show that they detect real attack patterns well when applied to real log data. Finally, our performance evaluation shows that the proposed Spark-based log analysis system processes log data more than twice as fast as the Hadoop-based system.
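
One of the rule types mentioned (port scanning: a single source probing many distinct ports on one destination) can be sketched in plain Python. The record format and threshold are assumptions; the real system would evaluate such a rule as a Spark job over parsed Snort logs:

```python
from collections import defaultdict

# Hypothetical parsed log records: (source_ip, dest_ip, dest_port)
records = [
    ("10.0.0.9", "192.168.0.1", p) for p in range(20, 30)   # 10 ports probed
] + [("10.0.0.2", "192.168.0.1", 80)]                       # normal traffic

PORT_SCAN_THRESHOLD = 5   # assumed: distinct ports before flagging a source

def detect_port_scans(recs, threshold=PORT_SCAN_THRESHOLD):
    """Flag (source, dest) pairs where the source probes many distinct ports."""
    ports = defaultdict(set)
    for src, dst, port in recs:
        ports[(src, dst)].add(port)
    return [pair for pair, ps in ports.items() if len(ps) >= threshold]

print(detect_port_scans(records))
```

The grouping-then-counting shape maps directly onto a `groupByKey`/aggregate step in Spark, which is where the distributed speedup reported in the paper would come from.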