• Title/Summary/Keyword: Distributed data

Distributed Database Design using Evolutionary Algorithms

  • Tosun, Umut
    • Journal of Communications and Networks / v.16 no.4 / pp.430-435 / 2014
  • The performance of a distributed database system depends particularly on the site-allocation of the fragments. Queries access different fragments among the sites, and an originating site exists for each query. A data allocation algorithm should distribute the fragments to minimize the transfer and settlement costs of executing the query plans. The primary cost for a data allocation algorithm is the cost of the data transmission across the network. The data allocation problem in a distributed database is NP-complete, and scalable evolutionary algorithms were developed to minimize the execution costs of the query plans. In this paper, quadratic assignment problem heuristics were designed and implemented for the data allocation problem. The proposed algorithms find near-optimal solutions for the data allocation problem. In addition to the fast ant colony, robust tabu search, and genetic algorithm solutions to this problem, we propose a fast and scalable hybrid genetic multi-start tabu search algorithm that outperforms the other well-known heuristics in terms of execution time and solution quality.
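
As a concrete illustration of the allocation objective described in the abstract above, the following is a minimal Python sketch of the transmission-cost function plus a brute-force baseline. The names (`freq`, `unit_cost`, `origin`) and the tiny instance are illustrative assumptions, not the paper's notation; the paper's ant-colony, tabu-search, and hybrid genetic heuristics would replace the exhaustive search on realistic problem sizes.

```python
# Minimal sketch of the data-allocation objective (assumed notation).
import itertools

def allocation_cost(assign, freq, unit_cost, origin):
    """Total transmission cost of an allocation.

    assign[f]       -> site holding fragment f
    freq[q][f]      -> access frequency of query q on fragment f
    unit_cost[i][j] -> cost of shipping one unit of data from site i to site j
    origin[q]       -> site where query q originates
    """
    cost = 0.0
    for q, row in enumerate(freq):
        for f, accesses in enumerate(row):
            cost += accesses * unit_cost[assign[f]][origin[q]]
    return cost

def exhaustive_best(num_sites, freq, unit_cost, origin):
    """Brute-force baseline, only feasible for tiny instances;
    the paper uses evolutionary heuristics instead."""
    num_frags = len(freq[0])
    best = None
    for assign in itertools.product(range(num_sites), repeat=num_frags):
        c = allocation_cost(assign, freq, unit_cost, origin)
        if best is None or c < best[1]:
            best = (assign, c)
    return best

if __name__ == "__main__":
    freq = [[5, 0, 2], [1, 3, 0]]   # 2 queries x 3 fragments (toy data)
    unit_cost = [[0, 2], [2, 0]]    # 2 sites
    origin = [0, 1]
    print(exhaustive_best(2, freq, unit_cost, origin))
```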

Intelligent Distributed Platform using Mobile Agent based on Dynamic Group Binding (동적 그룹 바인딩 기반의 모바일 에이전트를 이용한 인텔리전트 분산 플랫폼)

  • Mateo, Romeo Mark A.;Lee, Jae-Wan
    • Journal of Internet Computing and Services / v.8 no.3 / pp.131-143 / 2007
  • Current trends in information technology and intelligent systems use data mining techniques to discover patterns and extract rules from distributed databases. In a distributed environment, the rules extracted by data mining techniques can be used for dynamic replication, adaptive load balancing, and other schemes. However, transmitting large volumes of data through the system can cause errors and unreliable results. This paper proposes an intelligent distributed platform based on dynamic group binding using mobile agents, which addresses the use of intelligence in a distributed environment. The proposed grouping service implements a classification scheme for objects. The data miner agent and the data compressor agent extract rules and compress data, respectively, from the service node databases. The proposed algorithm performs preprocessing, merging the less frequent datasets using a neuro-fuzzy classifier before sending the data. Object group classification, data mining of the service node database, data compression, and rule extraction were simulated. Experimental results on data compression efficiency and rule extraction reliability show that the proposed algorithm performs better than other methods.
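
A rough sketch of the preprocessing idea mentioned above, merging less frequent data before transmission, with a plain frequency threshold standing in for the paper's neuro-fuzzy classifier; the record schema, threshold, and aggregation rule are assumptions.

```python
# Records belonging to infrequent classes are merged into one aggregate row
# before an agent transmits the data. A frequency threshold is used here as
# a stand-in for the paper's neuro-fuzzy classifier.
from collections import Counter

def merge_infrequent(records, min_support=0.05):
    """records: list of (class_label, value) pairs (illustrative schema)."""
    counts = Counter(label for label, _ in records)
    total = len(records)
    frequent, rare_values = [], []
    for label, value in records:
        if counts[label] / total >= min_support:
            frequent.append((label, value))
        else:
            rare_values.append(value)
    if rare_values:
        # collapse all rare classes into a single summary record
        frequent.append(("OTHER", sum(rare_values) / len(rare_values)))
    return frequent

if __name__ == "__main__":
    data = [("A", 1.0)] * 50 + [("B", 2.0)] * 40 + [("C", 9.0), ("D", 7.0)]
    print(len(data), "->", len(merge_infrequent(data)))
```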

An Efficient Data Replacement Algorithm for Performance Optimization of MapReduce in Non-dedicated Distributed Computing Environments (비-전용 분산 컴퓨팅 환경에서 맵-리듀스 처리 성능 최적화를 위한 효율적인 데이터 재배치 알고리즘)

  • Ryu, Eunkyung;Son, Ingook;Park, Junho;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.13 no.9 / pp.20-27 / 2013
  • In recent years, with the growth of social media and the development of mobile devices, the amount of data has increased significantly. MapReduce is an emerging programming model for processing large amounts of data. However, since MapReduce distributes data evenly, assuming a dedicated distributed computing environment, it is not well suited to non-dedicated distributed computing environments. Data replacement algorithms have been proposed to optimize the performance of MapReduce in non-dedicated distributed computing environments. However, they spend considerable time on data replacement and impose network load through unnecessary data transmission. In this paper, we propose an efficient data replacement algorithm for the performance optimization of MapReduce in non-dedicated distributed computing environments. The proposed scheme computes the ratio of data blocks assigned to each node based on a node availability model and reduces network load by transmitting data blocks with the existing data placement taken into account. Our experimental results show that the proposed scheme outperforms the existing scheme.
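
A minimal sketch of availability-proportional block placement in the spirit of the scheme above: each node's target share of blocks is proportional to its availability score, and the plan lists how many blocks each node should send or receive. The availability scores, node names, and rounding rule are assumptions rather than the paper's actual model.

```python
# Availability-proportional block placement (illustrative assumptions).
def target_blocks(availability, total_blocks):
    """availability: node -> score in [0, 1]; returns node -> target block count."""
    total_avail = sum(availability.values())
    raw = {n: total_blocks * a / total_avail for n, a in availability.items()}
    targets = {n: int(r) for n, r in raw.items()}
    # distribute rounding leftovers to the largest fractional parts
    leftover = total_blocks - sum(targets.values())
    for n, _ in sorted(raw.items(), key=lambda kv: kv[1] - int(kv[1]), reverse=True):
        if leftover == 0:
            break
        targets[n] += 1
        leftover -= 1
    return targets

def replacement_plan(current, availability):
    """current: node -> number of blocks it currently holds."""
    targets = target_blocks(availability, sum(current.values()))
    moves = {n: current[n] - targets[n] for n in current}  # + = send, - = receive
    return targets, moves

if __name__ == "__main__":
    current = {"n1": 40, "n2": 40, "n3": 40}
    availability = {"n1": 0.9, "n2": 0.6, "n3": 0.3}
    print(replacement_plan(current, availability))
```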

Comparative Analysis of Centralized Vs. Distributed Locality-based Repository over IoT-Enabled Big Data in Smart Grid Environment

  • Siddiqui, Isma Farah;Abbas, Asad;Lee, Scott Uk-Jin
    • Proceedings of the Korean Society of Computer Information Conference / 2017.01a / pp.75-78 / 2017
  • This paper compares operational and network analyses of centralized and distributed repositories for big data solutions in an IoT-enabled Smart Grid environment. The comparative analysis shows that the centralized repository consumes less memory, while the distributed locality-based repository reduces network complexity compared to the centralized repository in state-of-the-art big data solutions.

Randomized Block Size (RBS) Model for Secure Data Storage in Distributed Server

  • Sinha, Keshav;Paul, Partha;Amritanjali, Amritanjali
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4508-4530 / 2021
  • Today, distributed data storage services are widely used. However, the lack of proper security measures leaves user data vulnerable. In this work, we propose a Randomized Block Size (RBS) model for secure data storage in distributed environments. The model works with multiple block sizes encrypted with the Chinese Remainder Theorem-based RSA (C-RSA) technique for end-to-end security of multimedia data. The proposed RBS model has a key generation phase (KGP) for constructing asymmetric keys and a rand generation phase (RGP) for applying optimal asymmetric encryption padding (OAEP) to the original message. The experimental results obtained with text and image files show that the post-encryption file size is not much affected and that the data are efficiently encrypted while stored at the distributed storage server (DSS). Parameters such as ciphertext size, encryption time, and throughput were considered for performance evaluation, whereas statistical analyses such as similarity measurement, correlation coefficient, histogram, and entropy analysis were used to check image pixel deviation. The number of pixels change rate (NPCR) and unified averaged changed intensity (UACI) were used to check the strength of the proposed encryption technique. The proposed model is robust, with high resilience against eavesdropping, insider attacks, and chosen-plaintext attacks.
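
A minimal sketch of the randomized-block-size idea: a message is split into blocks of random length and each block is encrypted with RSA-OAEP via the `cryptography` package. Plain RSA stands in for the paper's CRT-based C-RSA variant, and the block-size bounds are assumptions tied to a 2048-bit key with SHA-256 OAEP (at most 190 plaintext bytes per block).

```python
# Randomized block sizes + RSA-OAEP (standard RSA as a stand-in for C-RSA).
import os, random
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
MAX_BLOCK = 190  # 2048-bit key with SHA-256 OAEP fits at most 190 plaintext bytes

def split_random_blocks(data, min_block=64, max_block=MAX_BLOCK):
    """Split data into blocks of randomly chosen length."""
    blocks, i = [], 0
    while i < len(data):
        size = random.randint(min_block, max_block)
        blocks.append(data[i:i + size])
        i += size
    return blocks

def encrypt_blocks(blocks, public_key):
    return [public_key.encrypt(b, OAEP) for b in blocks]

def decrypt_blocks(cipher_blocks, private_key):
    return b"".join(private_key.decrypt(c, OAEP) for c in cipher_blocks)

if __name__ == "__main__":
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = os.urandom(1000)
    cipher = encrypt_blocks(split_random_blocks(message), key.public_key())
    assert decrypt_blocks(cipher, key) == message
```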

A Quality Evaluation Model for Distributed Processing Systems of Big Data (빅데이터 분산처리시스템의 품질평가모델)

  • Choi, Seung-Jun;Park, Jea-Won;Kim, Jong-Bae;Choi, Jae-Hyun
    • Journal of Digital Contents Society / v.15 no.4 / pp.533-545 / 2014
  • As IT technologies evolve, the amount of data we face is increasing exponentially. Distributed processing systems for big data have thus emerged as the technique for managing and analyzing these vast amounts of data. Quality evaluation of existing distributed processing systems has been carried out in structured data environments. Consequently, if such evaluation is applied to distributed processing systems for big data, which must focus on the analysis of unstructured data, a precise quality assessment cannot be made. Therefore, a quality evaluation model for distributed processing systems that considers the big data analysis environment is needed. In this paper, we propose a new quality evaluation model by deriving quality evaluation elements based on ISO/IEC 9126, the international standard on software quality, and defining metrics for validating those elements.

Privacy Enhanced Data Security Mechanism in a Large-Scale Distributed Computing System for HTC and MTC

  • Rho, Seungwoo;Park, Sangbae;Hwang, Soonwook
    • International Journal of Contents / v.12 no.2 / pp.6-11 / 2016
  • We developed a pilot-job-based large-scale distributed computing system to support HTC and MTC, called HTCaaS (High-Throughput Computing as a Service), which helps scientists solve large-scale scientific problems in areas such as pharmaceutical domains, high-energy physics, nuclear physics, and bioscience. Since most of these problems involve critical data that affect the national economy and stimulate basic industries, data privacy is a very important issue. In this paper, we implement a privacy-enhanced data security mechanism to support HTC and MTC in a large-scale distributed computing system and show how this technique affects performance in our system. With this mechanism, users can securely store data in our system.

Distributed Data Processing for Bigdata Analysis in War Game Simulation Environment (워게임 시뮬레이션 환경에 맞는 빅데이터 분석을 위한 분산처리기술)

  • Bae, Minsu
    • The Journal of Bigdata / v.4 no.2 / pp.73-83 / 2019
  • Since the emergence of the fourth industrial revolution, data analysis has been conducted in various fields. Distributed data processing has already become essential for the fast processing of large amounts of data. However, the simulations used in the defense sector cannot fully utilize the unstructured data that prevail in real environments. In this study, we propose a distributed data processing platform that can be applied to battalion-level simulation models to provide visualized data for command decisions during training. 500,000 data points from a strategy game were analyzed. Considering the winning factors in the data, distributed processing was conducted to analyze the data for the top 10% of teams. As the number of nodes increases, the model scales.
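
A rough sketch of the distributed analysis step described above, using Python's `multiprocessing` as a local stand-in for a cluster: match records are partitioned across workers, each partition is reduced to per-team win counts, the partial results are merged, and only the top 10% of teams by win rate are kept. The record schema and the win-rate criterion are assumptions.

```python
# Partition -> local reduce -> merge -> keep the top 10% of teams (assumed schema).
from collections import Counter
from multiprocessing import Pool

def count_wins(partition):
    """partition: list of (winning_team, losing_team) tuples."""
    wins, games = Counter(), Counter()
    for winner, loser in partition:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    return wins, games

def top_teams(records, num_workers=4, top_fraction=0.10):
    chunks = [records[i::num_workers] for i in range(num_workers)]
    with Pool(num_workers) as pool:
        partials = pool.map(count_wins, chunks)
    wins, games = Counter(), Counter()
    for w, g in partials:
        wins.update(w)
        games.update(g)
    rates = {t: wins[t] / games[t] for t in games}
    k = max(1, int(len(rates) * top_fraction))
    return sorted(rates, key=rates.get, reverse=True)[:k]

if __name__ == "__main__":
    records = [("red", "blue"), ("red", "green"), ("blue", "green")] * 100
    print(top_teams(records))
```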

Discovery Temporal Association Rules in Distributed Database (분산데이터베이스 환경하의 시간연관규칙 적용)

  • Yan Zhao;Kim, Long;Sungbo Seo;Ryu, Keun-Ho
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.115-117 / 2004
  • Recently, mining for association rules in distributed database environments has become a central problem in the knowledge discovery area. The data are located on different shared-nothing machines, and each data site grows over time. Mining global frequent itemsets is difficult and inefficient with a large number of distributed servers. In many distributed databases, the time component (usually attached to transactions in the database) contains meaningful time-related rules. In this paper, we design a new DTA (distributed temporal association) algorithm that combines temporal concepts with distributed association rules. The algorithm determines the time interval for applying association rules in distributed databases. The experimental results show that DTA can generate interesting correlated frequent itemsets related to time periods.
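
A minimal sketch of restricting association-rule mining to a time interval, as in the DTA idea above: transactions outside the interval are dropped before frequent itemsets are counted. Only item pairs are counted and the distributed merge across sites is omitted for brevity; the transaction schema and support threshold are assumptions.

```python
# Count frequent item pairs within a time window (illustrative, single-site).
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, interval, min_support=0.3):
    """transactions: list of (timestamp, set_of_items); interval: (start, end)."""
    start, end = interval
    in_window = [items for ts, items in transactions if start <= ts < end]
    counts = Counter()
    for items in in_window:
        for pair in combinations(sorted(items), 2):
            counts[pair] += 1
    n = max(len(in_window), 1)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

if __name__ == "__main__":
    txns = [(1, {"a", "b"}), (2, {"a", "b", "c"}), (8, {"c", "d"}), (9, {"a", "d"})]
    print(frequent_pairs(txns, interval=(0, 5)))
```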

Improving the Distributed Data Fusion Ability of the JDL Data Fusion Model (JDL 자료융합 모델의 분산 자료융합 능력 개선)

  • Park, Gyu-Dong;Byun, Young-Tae
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.2 / pp.147-154 / 2012
  • In this paper, we revise the JDL data fusion model to give it distributed data fusion (DDF) capability. Data fusion is a function that produces valuable information using data from multiple sources. After the network-centric warfare concept was introduced, data fusion was required to be extended to DDF. We identify data transfer and control between nodes as the core function of DDF. Previous data fusion models cannot be used for DDF because they do not include that function. Therefore, we revise the previous JDL data fusion model by adding the core function of DDF and propose the result as a model for DDF. We show that our model is adequate and useful for DDF through several examples.