• Title/Summary/Keyword: Big Data Processing Technology

MapReduce-Based Partitioner Big Data Analysis Scheme for Processing Rate of Log Analysis

  • 이협건;김영운;박지용;이진우
    • 한국정보전자통신기술학회논문지, Vol. 11, No. 5, pp. 593-600, 2018
  • The spread of the Internet and smart devices has made media such as social media easy to access, and vast amounts of big data are being generated as a result. In particular, companies that provide Internet services analyze big data with MapReduce-based techniques to understand customer preferences and patterns and to strengthen security. However, the number of reducer objects created in the MapReduce reduce stage defaults to one, so the large volume of data processed during big data analysis is concentrated in a single reducer object. The resulting bottleneck in that reducer object lowers the big data analysis processing rate. This paper therefore proposes a MapReduce-based partitioner big data analysis scheme to improve the log analysis processing rate. The proposed scheme consists of a reducer partitioning stage and an analysis-result merging stage; by creating a flexible number of reducer objects, it reduces the bottleneck and improves the big data processing rate.
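As a rough illustration of the two stages the scheme describes (not the authors' code), the following self-contained Python sketch hash-partitions mapped log records across several reducer objects and then merges the per-reducer results; spreading keys over multiple buckets is what relieves the single-reducer bottleneck:

```python
from collections import defaultdict

def partition(key, num_reducers):
    # Hash-partition keys so records spread across reducer objects
    return hash(key) % num_reducers

def shuffle(mapped_records, num_reducers):
    # Group mapped (key, value) pairs into one bucket per reducer
    buckets = [defaultdict(list) for _ in range(num_reducers)]
    for key, value in mapped_records:
        buckets[partition(key, num_reducers)][key].append(value)
    return buckets

def reduce_bucket(bucket):
    # Each reducer aggregates only its own share of the keys
    return {key: sum(values) for key, values in bucket.items()}

def merge(partials):
    # Analysis-result merging stage: combine per-reducer outputs
    merged = {}
    for part in partials:
        merged.update(part)
    return merged

if __name__ == "__main__":
    log_records = [("GET /index", 1), ("GET /login", 1), ("GET /index", 1),
                   ("POST /api", 1), ("GET /login", 1)]
    buckets = shuffle(log_records, num_reducers=4)
    print(merge(reduce_bucket(b) for b in buckets))
```

Because each key is routed to exactly one bucket, the merge step can simply union the per-reducer outputs without conflicts.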

Development of the design methodology for large-scale database based on MongoDB

  • Lee, Jun-Ho;Joo, Kyung-Soo
    • 한국컴퓨터정보학회논문지, Vol. 22, No. 11, pp. 57-63, 2017
  • The recent explosive growth of big data is characterized by continuous data generation, large volume, and unstructured formats. Existing relational database technologies are inadequate for such big data because of their limited processing speed and the significant cost of storage expansion. Big data processing technologies, normally based on distributed file systems, distributed database management, and parallel processing, have therefore arisen as core technologies for implementing big data repositories. In this paper, we propose a design methodology for large-scale databases based on MongoDB, extending the information engineering methodology built on the E-R data model.
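As a hedged illustration of the kind of decision such a methodology yields, the pymongo sketch below (database, collection, and field names are hypothetical, and a local MongoDB instance is assumed) embeds a one-to-many relationship from an E-R model as a nested document instead of a separate joined table:

```python
from pymongo import MongoClient, ASCENDING

# Hypothetical connection; assumes a MongoDB server on localhost
client = MongoClient("mongodb://localhost:27017")
db = client["orders_db"]

# A one-to-many Customer -(places)-> Order relationship from the
# E-R model, embedded as one document to avoid joins at read time
db.customers.insert_one({
    "name": "Hong Gil-dong",
    "orders": [
        {"order_no": 1001, "total": 52000},
        {"order_no": 1002, "total": 18000},
    ],
})

# Secondary index on the embedded field to keep order lookups fast
db.customers.create_index([("orders.order_no", ASCENDING)])
```

Embedding trades update flexibility for read locality; a referencing design with a separate orders collection would stay closer to the relational E-R mapping.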

Cascaded-Hop For DeepFake Videos Detection

  • Zhang, Dengyong;Wu, Pengjie;Li, Feng;Zhu, Wenjie;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 16, No. 5, pp. 1671-1686, 2022
  • Face manipulation tools, typified by Deepfake, threaten the security of people's biometric identity information. In particular, manipulation tools built on deep learning have made Deepfake detection much harder. Many detection solutions exist, based on both traditional machine learning and advanced deep learning, but most perform poorly when evaluated on datasets of differing quality. In this paper, to build high-quality Deepfake datasets, we provide a preprocessing method that uses image pixel-matrix features to eliminate similar images and a residual channel attention network (RCAN) to resize images. We also describe a Deepfake detector named Cascaded-Hop, based on the PixelHop++ system and the successive subspace learning (SSL) model. Fed the preprocessed datasets, Cascaded-Hop achieves good classification results across manipulation types and datasets of multiple quality levels. In experiments on FaceForensics++ and Celeb-DF, the AUC (area under curve) results of the proposed methods are comparable to state-of-the-art models.
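The abstract does not detail the preprocessing; as a rough sketch of eliminating near-duplicate frames by a pixel-matrix difference (the threshold and similarity measure here are assumptions, not the paper's exact method):

```python
import numpy as np

def is_similar(img_a, img_b, threshold=5.0):
    # Mean absolute pixel difference as a crude similarity measure
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    return np.mean(diff) < threshold

def deduplicate(frames, threshold=5.0):
    # Keep a frame only if it is not similar to any frame kept so far
    kept = []
    for frame in frames:
        if not any(is_similar(frame, k, threshold) for k in kept):
            kept.append(frame)
    return kept
```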

A Study on the Enhancement Process of the Telecommunication Network Management using Big Data Analysis

  • 구성환;신민수
    • 한국산학기술학회논문지, Vol. 13, No. 12, pp. 6060-6070, 2012
  • A core requirement of the real-time enterprise is how quickly it can adapt to changes inside and outside the company, including shifts in the market and in consumer demand. Big data processing technology has recently drawn attention as a way to support this speed of change. In particular, with the evolution and advancement of wired and wireless telecommunication networks accelerating, processing large-scale telecommunication traffic in real time to deliver stable services, together with strong security monitoring, has become essential. This paper therefore studies how cloud-computing-based big data processing technology can resolve the management problems telecommunication carriers face and support the effective operation of a network management system.

LDBAS: Location-aware Data Block Allocation Strategy for HDFS-based Applications in the Cloud

  • Xu, Hua;Liu, Weiqing;Shu, Guansheng;Li, Jing
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 1, pp. 204-226, 2018
  • Big data processing applications are gradually migrating to the cloud because of the advantages of cloud computing. The Hadoop Distributed File System (HDFS) is one of the fundamental support systems for big data processing on MapReduce-like frameworks such as Hadoop and Spark. Because HDFS is unaware of the co-location of virtual machines in the cloud, its default block allocation scheme fits cloud environments poorly in two respects: loss of data reliability and performance degradation. In this paper, we present a novel location-aware data block allocation strategy (LDBAS). LDBAS jointly optimizes data reliability and performance for upper-layer applications by allocating data blocks according to the locations and differing processing capacities of virtual nodes in the cloud. We apply LDBAS to two stages of HDFS data allocation in the cloud (initial data allocation and data recovery) and design the corresponding algorithms. Finally, we implement LDBAS in an actual Hadoop cluster and evaluate its performance with the benchmark suite BigDataBench. The experimental results show that LDBAS guarantees the designed data reliability while reducing the job execution time of I/O-intensive Hadoop applications by 8.9% on average, and by up to 11.2%, compared with the original Hadoop in the cloud.
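The abstract omits the allocation algorithm itself; purely to illustrate the joint reliability/performance idea, one could rank candidate virtual nodes by physical co-location and processing capacity, as in this sketch (the weight, fields, and scoring are illustrative assumptions, not LDBAS's actual formulation):

```python
def allocation_score(node, writer_host, alpha=0.7):
    # Reliability term: penalize VMs sharing a physical host with the
    # writer; performance term: prefer nodes with higher capacity
    # (capacity is assumed normalized to [0, 1])
    co_located = 1.0 if node["phys_host"] == writer_host else 0.0
    return alpha * (1.0 - co_located) + (1.0 - alpha) * node["capacity"]

def choose_replica_nodes(nodes, writer_host, replicas=3):
    # Pick the highest-scoring virtual nodes for the block replicas
    ranked = sorted(nodes, key=lambda n: allocation_score(n, writer_host),
                    reverse=True)
    return ranked[:replicas]

nodes = [{"name": "vm1", "phys_host": "h1", "capacity": 0.9},
         {"name": "vm2", "phys_host": "h1", "capacity": 0.6},
         {"name": "vm3", "phys_host": "h2", "capacity": 0.8},
         {"name": "vm4", "phys_host": "h3", "capacity": 0.5}]
print([n["name"] for n in choose_replica_nodes(nodes, writer_host="h1")])
```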

Big Data Smoothing and Outlier Removal for Patent Big Data Analysis

  • Choi, JunHyeog;Jun, Sunghae
    • 한국컴퓨터정보학회논문지, Vol. 21, No. 8, pp. 77-84, 2016
  • General statistical analysis requires a normality assumption; when it is not satisfied, we cannot expect good results from statistical data analysis. Most statistical methods for handling outliers and noise also depend on this assumption, but big data rarely satisfy it because of their large volume and heterogeneity. We therefore propose a methodology based on box plots and data smoothing for controlling outliers and noise in big data analysis that does not depend on the normality assumption. We select patent documents as the target domain because patent big data analysis is an important issue in the management of technology. We analyze patent documents using big data learning methods for technology analysis: patent data collected from patent databases worldwide are preprocessed and analyzed with text mining and statistics. Most prior research on patent big data analysis, however, has ignored the outlier and noise problem, which decreases prediction accuracy and increases the variance of parameter estimates. In this paper, we check for the existence of outliers and noise in patent big data using box-plot and smoothing visualization. We use patent documents related to three-dimensional printing technology to illustrate how the proposed methodology can reveal noise in retrieved patent big data.
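A minimal numpy sketch of the two distribution-free building blocks the paper combines, Tukey box-plot fences for outlier detection and moving-average smoothing for noise (the data values below are invented for illustration):

```python
import numpy as np

def boxplot_outlier_mask(x, k=1.5):
    # Tukey box-plot fences: flag values beyond Q1 - k*IQR or
    # Q3 + k*IQR; no normality assumption is needed
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def moving_average(x, window=5):
    # Simple smoothing to damp the remaining noise
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

x = np.array([3, 4, 5, 4, 50, 5, 6, 4, 5, 3], dtype=float)  # toy counts
clean = x[~boxplot_outlier_mask(x)]   # drops the spike at 50
print(moving_average(clean))
```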

Big Data Platform Based on Hadoop and Application to Weight Estimation of FPSO Topside

  • Kim, Seong-Hoon;Roh, Myung-Il;Kim, Ki-Su;Oh, Min-Jae
    • Journal of Advanced Research in Ocean Engineering, Vol. 3, No. 1, pp. 32-40, 2017
  • Recently, the amount and complexity of data to be processed have been increasing with the development of information and communication technology, and industry interest in such big data grows day by day. In the shipbuilding and offshore industry as well, interest in the effective utilization of data is growing, since large and varied volumes of data are generated during design, production, and operation. Effective use of big data in this industry requires storing and processing large amounts of data, so this study applies Hadoop and R, the tools most widely used in big-data-related research. Hadoop is a framework for storing and processing big data: it provides the Hadoop Distributed File System (HDFS) for storage and the MapReduce function for processing. R provides a language and environment for statistical computation and graphics with a wide range of data analysis techniques. While Hadoop makes it easy to handle big data, fine-grained processing is difficult; conversely, R offers advanced analysis but struggles with large data. This study proposes a Hadoop-based big data platform for the shipbuilding and offshore industry. The proposed platform incorporates a shipyard's existing data and makes it possible to manage and process those data. To check its applicability, the platform is applied to estimating the weights of offshore structure topsides. We store data from existing FPSOs in the Hadoop-based Hortonworks Data Platform (HDP) and perform regression analysis using RHadoop. We evaluate the effectiveness of large-scale data processing with RHadoop by comparing the regression results and processing time against those of a conventional weight estimation program.
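The study runs the regression through RHadoop; purely to make the regression step concrete, here is an ordinary-least-squares sketch in Python with invented toy values standing in for FPSO topside attributes (the real analysis uses the shipyard's data):

```python
import numpy as np

# Invented illustrative records: (deck area in m^2, equipment count)
# paired with topside weight in tonnes -- not real FPSO data
X_raw = np.array([[1200, 85], [1500, 102], [1750, 130], [2100, 151]], float)
y = np.array([5400, 6800, 8100, 9700], float)

# Ordinary least squares with an intercept column
X = np.hstack([np.ones((len(X_raw), 1)), X_raw])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Estimate the weight of a new (hypothetical) topside design
new_topside = np.array([1.0, 1600, 110])
print("estimated weight (t):", new_topside @ beta)
```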

Big Data Processing and Utilization

  • 이성훈;이동우
    • 디지털융복합연구, Vol. 11, No. 4, pp. 267-271, 2013
  • Convergence is accelerating across society and spreading into ever wider areas, and information and communication technology (ICT) naturally sits at the center of this trend. The smart healthcare industry, for example, emerged from the convergence of ICT and the medical industry, and efforts to graft ICT onto every field continue. As a result, enormous volumes of digital data are being created around us, while increasingly popular smartphones, tablet PCs, cameras, and game devices generate diverse data of their own. This study surveys how this widely generated big data is being utilized and compares and analyzes processing pipelines, one axis of the big data platform.

IoT-Based Health Big-Data Process Technologies: A Survey

  • Yoo, Hyun;Park, Roy C.;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 3, pp. 974-992, 2021
  • Recently, the healthcare field has undergone rapid changes owing to the accumulation of health big data and the development of machine learning. Data mining research in the field of healthcare has different characteristics from those of other data analyses, such as the structural complexity of the medical data, requirement for medical expertise, and security of personal medical information. Various methods have been implemented to address these issues, including the machine learning model and cloud platform. However, the machine learning model presents the problem of opaque result interpretation, and the cloud platform requires more in-depth research on security and efficiency. To address these issues, this paper presents a recent technology for Internet-of-Things-based (IoT-based) health big data processing. We present a cloud-based IoT health platform and health big data processing technology that reduces the medical data management costs and enhances safety. We also present a data mining technology for health-risk prediction, which is the core of healthcare. Finally, we propose a study using explainable artificial intelligence that enhances the reliability and transparency of the decision-making system, which is called the black box model owing to its lack of transparency.

An Analysis of Utilization on Virtualized Computing Resource for Hadoop and HBase based Big Data Processing Applications

  • 조나연;구민오;김바울;;민덕기
    • 정보화연구, Vol. 11, No. 4, pp. 449-462, 2014
  • In the big data era, a system that acquires and stores data and analyzes both streaming and stored data must take many factors into account. Unlike conventional data processing systems, big data processing systems must consider the format, arrival rate, and volume of the data they will handle. Against this background, the virtualized computing platform, which can manage computing resources dynamically and elastically through virtualization technology, is emerging as one of the platforms for processing big data efficiently. In this paper, we analyze virtual computing resource utilization to find a deployment model suitable for running Apache Hadoop and HBase-based big data processing middleware on a virtualized computing platform. The results show that the TaskTracker service exhibits high CPU utilization during processing and comparatively high disk I/O when intermediate results are stored. The HRegion service shows high network resource utilization for exchanging data with the DataNode, and the DataNode service shows an I/O-intensive processing pattern.
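To reproduce this kind of per-service utilization profiling on one's own nodes, a simple sampling loop with the psutil library (an assumption here; it is not the authors' measurement tool) could look like:

```python
import psutil  # third-party: pip install psutil

def sample_utilization(interval=1.0, samples=5):
    # Periodically sample CPU, disk I/O, and network counters, in the
    # spirit of the paper's per-service utilization measurements
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval`
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        print(f"cpu={cpu:5.1f}%  disk_read={disk.read_bytes}  "
              f"disk_write={disk.write_bytes}  net_sent={net.bytes_sent}")

if __name__ == "__main__":
    sample_utilization()
```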