• Title/Summary/Keyword: distributed data storage

Software Defined Storage Method for Data Sharing and Maintenance on Distributed Storage Environment (분산 저장환경의 데이터공유 및 관리를 위한 소프트웨어 정의 저장 방법)

  • Cha, ByungRae;Park, Sun;Kim, JongWon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.05a
    • /
    • pp.644-645
    • /
    • 2014
  • This paper proposes a software-defined storage method that converges the network virtualization technique with the RAID of a distributed storage environment. The proposed method designs software-based storage that enables flexible control and maintenance of storages. In addition, the method overcomes the restriction of physical storage capacity and cuts the cost of data recovery.

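The converged RAID-over-network-virtualization design is only summarized in the abstract, but its core storage idea can be sketched. The following Python fragment is a minimal, hypothetical RAID-5-style layout: data is split into fixed-size units and each stripe carries an XOR parity unit, so any one node's unit is recoverable. The unit size, node count, and helper names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of RAID-5-style striping over distributed storage
# nodes; unit size and layout are illustrative assumptions.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings into one parity block."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def stripe_with_parity(data: bytes, n_nodes: int, unit: int = 4):
    """Split data into fixed-size units; every (n_nodes - 1) data units
    form a stripe extended with one XOR parity unit, so each stripe
    spreads over n_nodes logical storage nodes."""
    units = [data[i:i + unit].ljust(unit, b"\x00")
             for i in range(0, len(data), unit)]
    stripes = []
    for s in range(0, len(units), n_nodes - 1):
        stripe = units[s:s + n_nodes - 1]
        while len(stripe) < n_nodes - 1:       # pad a short final stripe
            stripe.append(b"\x00" * unit)
        stripe.append(xor_blocks(stripe))      # parity unit
        stripes.append(stripe)
    return stripes                             # stripes[k][j] -> node j

if __name__ == "__main__":
    for stripe in stripe_with_parity(b"distributed data storage", n_nodes=4):
        print([u.hex() for u in stripe])
```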

A Time-Parameterized Data-Centric Storage Method for Storage Utilization and Energy Efficiency in Sensor Networks (센서 네트워크에서 저장 공간의 활용성과 에너지 효율성을 위한 시간 매개변수 기반의 데이타 중심 저장 기법)

  • Park, Yong-Hun;Yoon, Jong-Hyun;Seo, Bong-Min;Kim, June;Yoo, Jae-Soo
    • Journal of KIISE:Databases
    • /
    • v.36 no.2
    • /
    • pp.99-111
    • /
    • 2009
  • In wireless sensor networks, various schemes have been proposed to store and process sensed data efficiently. A Data-Centric Storage (DCS) scheme assigns distributed data regions to sensors and stores sensed data at the sensor responsible for the data region that covers it. DCS schemes were proposed to reduce the communication cost of transmitting data and to process exact queries and range queries efficiently. Recently, KDDCS, which dynamically readjusts the data regions assigned to sensors based on a K-D tree, was proposed to overcome storage hot-spots. However, the existing DCS schemes, including KDDCS, suffer from query hot-spots, which form when the query regions are not uniformly distributed and which shorten the lifetime of the sensor network. In this paper, we propose a new DCS scheme, called TPDCS (Time-Parameterized DCS), that avoids both storage hot-spots and query hot-spots. To disperse skewed data and queries, our scheme assigns data regions along a time dimension as well as the data dimensions. Therefore, TPDCS extends the lifetime of sensor networks. It is shown through various experiments that our scheme outperforms the existing schemes.
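
To make the time-parameterized idea concrete, here is a minimal sketch of how a reading could be routed to a storage node using both its value region and a time slot, so that skewed data and queries land on different nodes as time advances. The hashing, region count, and epoch length are illustrative assumptions, not the paper's K-D-tree-based assignment.

```python
import hashlib

def tpdcs_node(value: float, timestamp: int, nodes: list,
               value_range=(0.0, 100.0), n_regions: int = 8,
               epoch: int = 60) -> str:
    """Pick the node storing a reading from (value region, time slot):
    the time dimension rotates responsibility for popular regions
    across nodes, mitigating storage and query hot-spots."""
    lo, hi = value_range
    region = min(int((value - lo) / (hi - lo) * n_regions), n_regions - 1)
    slot = timestamp // epoch                  # the time dimension
    digest = hashlib.sha1(f"{region}:{slot}".encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

nodes = [f"sensor-{i}" for i in range(16)]
# The same hot value region maps to different nodes in different epochs.
print(tpdcs_node(42.0, timestamp=30, nodes=nodes),
      tpdcs_node(42.0, timestamp=90, nodes=nodes))
```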

Verification Test of Failover Recovery Technique based on Software-Defined RAID (Software-Defined RAID 기반 장애복구 기법과 실증 테스트)

  • Cha, ByungRae;Choi, MyeongSoo;Park, Sun;Kim, JongWon
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.69-77
    • /
    • 2016
  • This paper proposes a software-defined storage method that converges the network virtualization technique with the RAID of a distributed storage environment. The proposed method designs software-based storage that enables flexible control and maintenance of storages. In addition, the method overcomes the restriction of physical storage capacity and cuts the cost of data recovery. The proposed failover recovery technique based on Software-Defined RAID has undergone substantial verification and performance testing on public AWS and Google Storage.
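
The failover recovery being verified rests on the single-failure reconstruction property of parity-based layouts: a lost stripe unit equals the XOR of the surviving data and parity units. A minimal sketch, with an assumed three-data-unit stripe (not the tested AWS/Google Storage setup):

```python
def recover_unit(surviving_units):
    """Rebuild the unit lost with a failed node by XOR-ing all the
    surviving units (data and parity) of the same stripe."""
    out = bytearray(len(surviving_units[0]))
    for u in surviving_units:
        for i, b in enumerate(u):
            out[i] ^= b
    return bytes(out)

# Example: a stripe of three data units plus XOR parity; node 1 fails.
data = [b"dist", b"ribu", b"ted "]
parity = recover_unit(data)                    # parity = XOR of all data
rebuilt = recover_unit([data[0], data[2], parity])
assert rebuilt == data[1]                      # the lost unit is restored
```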

Distributed data deduplication technique using similarity-based clustering and multi-layer bloom filter (SDS 환경의 유사도 기반 클러스터링 및 다중 계층 블룸필터를 활용한 분산 중복제거 기법)

  • Yoon, Dabin;Kim, Deok-Hwan
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.14 no.5
    • /
    • pp.60-70
    • /
    • 2018
  • Software-defined storage (SDS) is being deployed in cloud environments to allow multiple users to virtualize physical servers, but a solution for optimizing space efficiency with limited physical resources is needed. In conventional data deduplication systems, it is difficult to deduplicate redundant data uploaded to distributed storages. In this paper, we propose a distributed deduplication method using similarity-based clustering and a multi-layer bloom filter. A Rabin hash is applied to determine the degree of similarity between virtual machine servers and to cluster similar virtual machines, which improves performance over deduplicating each storage node individually. In addition, a multi-layer bloom filter is incorporated into the deduplication process to shorten processing time by reducing the number of false positives. Experimental results show that the proposed method improves the deduplication ratio by 9% compared to a deduplication method using IP-address-based clusters, with no difference in processing time.
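
A minimal sketch of the bloom-filter stage: layered membership checks screen chunk fingerprints before the exact (and more expensive) index lookup, and only a hit in every layer triggers that lookup. Filter sizes, the number of layers, and the SHA-1 fingerprint are illustrative assumptions, not the paper's parameters.

```python
import hashlib

class BloomFilter:
    """A plain bloom filter over an m-bit array with k hash probes."""
    def __init__(self, m: int = 1 << 16, k: int = 4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(item + bytes([i])).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def is_duplicate(chunk: bytes, layers, index: set) -> bool:
    """Screen the chunk fingerprint through every filter layer; pay for
    the exact index lookup only when all layers answer 'maybe'."""
    fp = hashlib.sha1(chunk).digest()
    if not all(fp in bf for bf in layers):
        return False                           # definitely a new chunk
    return fp in index                         # resolve false positives

layers, index = [BloomFilter(), BloomFilter()], set()
chunk = b"an example chunk"
if not is_duplicate(chunk, layers, index):     # store only new chunks
    fp = hashlib.sha1(chunk).digest()
    index.add(fp)
    for bf in layers:
        bf.add(fp)
assert is_duplicate(chunk, layers, index)
```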

MTTDL for Distributed Storage Systems with Dual Node Repair Capability (이중 노드 복구가 가능한 분산 저장 시스템의 MTTDL)

  • Kil, Yong Sung;Kim, Sang-Hyo;Park, Hosung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.42 no.2
    • /
    • pp.345-348
    • /
    • 2017
  • MTTDL, a measure of the reliability of distributed storage systems, is analyzed for the case in which dual-node repair is possible, and is compared with the single-node repair case.
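
Analyses of this kind are often carried out on a birth-death Markov chain over the number of failed nodes. The sketch below computes MTTDL as the expected time to absorption (data loss at three concurrent failures) and contrasts single-node repair with dual (parallel) repair; the node count, failure and repair rates, and the chain itself are illustrative assumptions, not necessarily the paper's exact model.

```python
import numpy as np

def mttdl(n: int, lam: float, repair_rates) -> float:
    """MTTDL of an n-node system tolerating two concurrent failures,
    modeled as a birth-death chain on the number of failed nodes
    (states 0..2 transient; a third failure is absorbing data loss).
    repair_rates[i] is the aggregate repair rate in state i."""
    Q = np.zeros((3, 3))                  # generator over transient states
    for i in range(3):
        fail = (n - i) * lam              # rate of the next node failure
        if i > 0:
            Q[i, i - 1] = repair_rates[i]
        if i < 2:
            Q[i, i + 1] = fail
        Q[i, i] = -(fail + repair_rates[i])
    # First-step analysis: expected absorption times t solve Q t = -1.
    return float(np.linalg.solve(Q, -np.ones(3))[0])

n, lam, mu = 12, 1 / 10_000, 1 / 24       # assumed per-hour rates
single = mttdl(n, lam, [0.0, mu, mu])     # one node repaired at a time
dual = mttdl(n, lam, [0.0, mu, 2 * mu])   # two nodes repaired in parallel
print(f"MTTDL single repair: {single:.3e} h, dual repair: {dual:.3e} h")
```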

A Data-Consistency Scheme for the Distributed-Cache Storage of the Memcached System

  • Liao, Jianwei;Peng, Xiaoning
    • Journal of Computing Science and Engineering
    • /
    • v.11 no.3
    • /
    • pp.92-99
    • /
    • 2017
  • Memcached, commonly used to speed up data access in big-data and Internet-web applications, is system software implementing a distributed-cache mechanism. However, it faces the severe challenge of losing recently uncommitted updates when a Memcached server crashes. Although the replica scheme and the disk-log-based replay mechanism have been proposed to overcome this problem, they generate either replica-synchronization overhead or the persistent-storage overhead caused by flushing the related logs. This paper proposes a scheme of backing up the write requests (i.e., set and add) on the Memcached client side, to reduce the overhead of making disk-log records or enforcing replica consistency. If the Memcached server fails, a timestamp-based recovery mechanism replays the write requests buffered by the relevant clients to regain the lost updates on the rebooted Memcached server, thereby meeting the data-consistency requirement. More importantly, compared with the mechanism of logging the write requests to the persistent storage of the master server and with the server-replication scheme, the newly proposed approach of backing up the logs on the client side can greatly decrease the time overhead, by up to 116.8%, when processing write workloads.
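
A minimal sketch of the client-side backup idea: each write is recorded locally with a timestamp and forwarded to the cache, and after a server reboot the client replays its buffered writes in timestamp order so later writes win. A plain dict stands in for the Memcached connection; the class and method names are illustrative assumptions, not the paper's implementation.

```python
import time

class BackupClient:
    """Client wrapper that backs up set/add requests with timestamps
    so they can be replayed onto a rebooted cache server."""

    def __init__(self, server: dict):
        self.server = server                 # stand-in for a Memcached conn
        self.backup = {}                     # key -> (timestamp, value)

    def set(self, key, value):
        self.backup[key] = (time.time(), value)   # log on the client side
        self.server[key] = value                  # forward to the cache

    def replay(self, rebooted: dict, since: float = 0.0):
        """Replay buffered writes newer than `since` in timestamp order,
        restoring updates lost in the crash."""
        for key, (ts, value) in sorted(self.backup.items(),
                                       key=lambda kv: kv[1][0]):
            if ts >= since:
                rebooted[key] = value

cache = {}
client = BackupClient(cache)
client.set("a", 1)
client.set("b", 2)
cache.clear()                                # simulated server crash
client.replay(cache)                         # timestamp-based recovery
assert cache == {"a": 1, "b": 2}
```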

Standard Status on ITU-T Distributed Ledger Technology (ITU-T에서 분산원장기술 표준화 동향)

  • Kwon, D.S.;Park, J.D.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.2
    • /
    • pp.50-68
    • /
    • 2020
  • Distributed Ledger Technology (DLT) refers to the processes and related technologies that enable participants to safely propose, verify, and record state changes (usually updates) to synchronize ledgers distributed across network nodes. DLTs are becoming increasingly important as data management requirements evolve, so the current state of standards (such as distributed storage and access technologies) needs to be understood to address future requirements. This paper surveys ITU-T FG-DLT standardization activities, including standardization trends, use cases, reference architectures, platform evaluation criteria, and future prospects.

DNA Based Cloud Storage Security Framework Using Fuzzy Decision Making Technique

  • Majumdar, Abhishek;Biswas, Arpita;Baishnab, Krishna Lal;Sood, Sandeep K.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.7
    • /
    • pp.3794-3820
    • /
    • 2019
  • In recent years, a cloud environment with the ability to detect illegal behaviours along with secured data storage capability has become much needed. This study presents a cloud storage framework wherein a 128-bit encryption key is generated by combining deoxyribonucleic acid (DNA) cryptography and the Hill Cipher algorithm to make the framework unbreakable and to ensure a better-secured distributed cloud storage environment. Moreover, the study proposes a DNA-based encryption technique, followed by a 256-bit secure socket layer (SSL) to secure data storage; the 256-bit SSL provides secured connections during data transmission. The data herein are classified based on different qualitative security parameters obtained using a specialized fuzzy-based classification technique. The model also has the additional advantage of being able to select suitable storage servers from an existing pool of storage servers. A fuzzy technique for order of preference by similarity to ideal solution (TOPSIS) multi-criteria decision-making (MCDM) model is employed for this; it decides the set of suitable storage servers on which the data must be stored, and it reduces execution time while keeping the level of security at an improved grade.
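
The server-selection step can be illustrated with a crisp TOPSIS ranking (the paper's fuzzy variant would replace crisp scores with fuzzy numbers before this procedure). The criteria, weights, and sample server values below are assumptions for illustration only.

```python
import numpy as np

def topsis_rank(matrix, weights, benefit):
    """Score alternatives by TOPSIS: vector-normalize the decision
    matrix, weight it, and rate each row by its relative closeness
    to the ideal solution versus the anti-ideal one."""
    m = np.asarray(matrix, dtype=float)
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)       # closeness in [0, 1]

# Assumed criteria: throughput (benefit), latency (cost), security (benefit).
servers = [[250, 12, 0.90],
           [310, 20, 0.70],
           [180,  8, 0.95]]
scores = topsis_rank(servers, weights=[0.3, 0.3, 0.4],
                     benefit=np.array([True, False, True]))
print("chosen server:", int(np.argmax(scores)), scores.round(3))
```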

A Study on the Design and Implementation of the Lightweight Object Model Supporting Distributed Trader (분산 트레이더를 지원하는 경량 (lightweight) 객체 모델 설계 및 구현 방안 연구)

  • Jin, Myeong-Suk;Song, Byeong-Gwon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.4
    • /
    • pp.1050-1061
    • /
    • 2000
  • This paper presents a new object model, LOM (Lightweight Object Model), and an implementation method for the distributed trader in heterogeneous distributed computing environments, including mobile networks. A trader is a third-party object that enables clients to find suitable servers that provide the most appropriate services in a distributed environment, including dynamic reconfiguration of services and servers. The trading service requires a simpler and more specific object model than generic object models, which provide rich multimedia data types and semantic characteristics with complex data structures. LOM supports a new reference attribute type instead of the relationship, inheritance, and composite attribute types of general object-oriented models, and therefore has simple data structures. In LOM, the modelling step also includes specifying information about users and access rights to objects, for security in the mobile environment, and developing the distributed storage for the trading service. We also propose an implementation method for the distributed trader that integrates the LOM information object model with the OMG (Object Management Group) computational object model.

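A minimal sketch of the reference-attribute idea that distinguishes LOM from heavier object models: objects carry flat primitive attributes plus named references to other objects by identifier, and the trader resolves a reference at lookup time. All class and field names are illustrative assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field

@dataclass
class LOMObject:
    """A flat, lightweight object: primitive attributes plus reference
    attributes that point to other objects by identifier, in place of
    inheritance, relationship, and composite attribute types."""
    oid: str
    attributes: dict = field(default_factory=dict)   # primitive values
    references: dict = field(default_factory=dict)   # name -> target oid

class Trader:
    """A toy trader registry that resolves reference attributes."""
    def __init__(self):
        self.objects = {}

    def register(self, obj: LOMObject):
        self.objects[obj.oid] = obj

    def resolve(self, obj: LOMObject, ref_name: str) -> LOMObject:
        return self.objects[obj.references[ref_name]]

trader = Trader()
server = LOMObject("srv-1", {"service": "storage", "latency_ms": 8})
offer = LOMObject("offer-1", {"owner": "clientA"}, {"provider": "srv-1"})
trader.register(server)
trader.register(offer)
print(trader.resolve(offer, "provider").attributes["service"])  # storage
```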

A COMPARATIVE STUDY ON BLOCKCHAIN DATA MANAGEMENT SYSTEMS: BIGCHAINDB VS FALCONDB

  • Abrar Alotaibi;Sarah Alissa;Salahadin Mohammed
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.5
    • /
    • pp.128-134
    • /
    • 2023
  • The widespread usage of blockchain technology in cryptocurrencies has led to the adoption of the blockchain concept in data storage management systems for secure and effective data storage and management. Several innovative studies have proposed solutions that integrate blockchain with distributed databases. In this article, we review current blockchain databases, then focus on two well-known blockchain databases, BigchainDB and FalconDB, to illustrate their architecture and design aspects in more detail. BigchainDB is a distributed database that integrates blockchain properties to enhance immutability and decentralization, as well as to offer a high transaction rate, low latency, and accurate queries. Its architecture consists of three layers: the transaction layer, consensus layer, and data model layer. FalconDB, on the other hand, is a shared database that allows multiple clients to collaborate on the database securely and efficiently, even if they have limited resources. It has two layers, the authentication layer and the consensus layer, which handle client requests and results. Finally, a comparison is made between the two blockchain databases, revealing that they share characteristics such as immutability, low latency, permissioned access, horizontal scalability, decentralization, and the same consensus protocol. However, they vary in terms of database type, concurrency mechanism, replication model, cost, and usage of smart contracts.