• Title/Abstract/Keyword: data storage algorithm

홀로그래픽 정보 저장 장치에서 클러스터링을 이용한 에러 감소 기법 제안 및 비교 (Design and Comparison of Error Reduction Methods Using Clustering in Holographic Data Storage System)

  • 김상훈;김장현;양현석;박영필
    • 정보저장시스템학회:학술대회논문집 / 정보저장시스템학회 2005년도 추계학술대회 논문집 / pp.83-87 / 2005
  • Data storage, in both writing and retrieving, requires high storage capacity, a fast transfer rate, and low access time. No current data storage system satisfies all of these conditions, but a holographic data storage system can achieve a faster transfer rate because it is a page-oriented memory system that uses volume holograms to write and retrieve data. A system architecture without mechanically actuated parts is possible, so a fast data transfer rate and a high storage capacity of about 1 Tb/cm3 can be realized. In this paper, to correct errors in binary data stored in a holographic digital data storage system, we find cluster centers using a clustering algorithm and reduce the intensities of the pixels around those centers. We carry out the procedure with two algorithms, C-means and subtractive clustering, and compare their results. With a proper clustering algorithm, the intensity profile of the data page becomes uniform and a better data storage system can be realized.
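
The core step shared by this entry and the ICCAS entry below is easy to picture in code: locate the bright clusters that inter-pixel interference leaves on a retrieved data page, then damp the pixels around each center. The sketch below is a rough illustration under invented assumptions (random page data, threshold, window radius, and attenuation factor; a plain k-means stands in for the C-means variant), not the authors' implementation:

```python
# Hedged sketch: find bright clusters on a data page, attenuate around them.
import numpy as np

def find_centers(page, threshold=0.8, k=4, iters=20, seed=0):
    """Cluster the coordinates of over-bright pixels with a plain k-means."""
    pts = np.argwhere(page > threshold).astype(float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # assign each bright pixel to its nearest center
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return centers

def attenuate(page, centers, radius=2, factor=0.7):
    """Scale down intensities in a small window around each cluster center."""
    out = page.copy()
    for cy, cx in centers.astype(int):
        y0, x0 = max(cy - radius, 0), max(cx - radius, 0)
        out[y0:cy + radius + 1, x0:cx + radius + 1] *= factor
    return out

# illustrative stand-in for a retrieved holographic data page
page = np.clip(np.random.default_rng(1).normal(0.5, 0.2, (64, 64)), 0, 1)
flattened = attenuate(page, find_centers(page))
```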

Design and Comparison of Error Correctors Using Clustering in Holographic Data Storage System

  • Kim, Sang-Hoon;Kim, Jang-Hyun;Yang, Hyun-Seok;Park, Young-Pil
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.1076-1079 / 2005
  • Data storage, in both writing and retrieving, requires high storage capacity, a fast transfer rate, and low access time. No current data storage system satisfies all of these conditions, but a holographic data storage system can achieve a faster transfer rate because it is a page-oriented memory system that uses volume holograms to write and retrieve data. A system architecture without mechanically actuated parts is possible, so a fast data transfer rate and a high storage capacity of about 1 Tb/cm3 can be realized. In this paper, to correct errors in binary data stored in a holographic digital data storage system, we find cluster centers using a clustering algorithm and reduce the intensities of the pixels around those centers. We carry out the procedure with two algorithms, C-means and subtractive clustering, and compare their results. With a proper clustering algorithm, the intensity profile of the data page becomes uniform and a better data storage system can be realized.

분산 스토리지 시스템에서 데이터 중복제거를 위한 정보분산 알고리즘 및 소유권 증명 기법 (Information Dispersal Algorithm and Proof of Ownership for Data Deduplication in Dispersed Storage Systems)

  • 신영주
    • 정보보호학회논문지 / Vol. 25, No. 1 / pp.155-164 / 2015
  • An information dispersal algorithm that guarantees high availability and confidentiality for stored data is a useful method in failure-prone, untrusted distributed storage systems such as cloud storage. As the volume of stored data grows, data deduplication for the efficient use of IT resources has drawn considerable attention, and research on information dispersal schemes that allow deduplication is accordingly needed. This paper proposes an information dispersal algorithm and a proof-of-ownership scheme for client-side deduplication in dispersed storage systems. The proposed method saves network bandwidth as well as storage space, achieving high efficiency, and guarantees security against untrusted storage servers and malicious clients.
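
The client-side deduplication flow the abstract describes can be pictured roughly as follows: the client sends a digest first, and if the server already holds the file, it demands a proof of ownership instead of a re-upload. The block-hash challenge below is an invented stand-in for the paper's proof-of-ownership construction, with block size and challenge count chosen for illustration:

```python
# Hedged sketch of client-side dedup gated by a toy proof of ownership (PoW).
import hashlib, os, secrets

BLOCK = 4096

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

class Server:
    def __init__(self):
        self.store = {}                     # file digest -> list of blocks

    def has(self, digest):
        return digest in self.store

    def upload(self, digest, data):
        self.store[digest] = blocks(data)

    def challenge(self, digest, n=3):
        # ask for n random block indices of the claimed file
        count = len(self.store[digest])
        return [secrets.randbelow(count) for _ in range(n)]

    def check(self, digest, indices, answers):
        # compare the client's block hashes against the stored blocks
        return all(hashlib.sha256(self.store[digest][i]).digest() == a
                   for i, a in zip(indices, answers))

def client_store(server, data):
    digest = hashlib.sha256(data).hexdigest()
    if server.has(digest):
        idx = server.challenge(digest)
        proof = [hashlib.sha256(blocks(data)[i]).digest() for i in idx]
        return server.check(digest, idx, proof)   # dedup: nothing re-uploaded
    server.upload(digest, data)                    # first copy: full upload
    return True

srv = Server()
data = os.urandom(3 * BLOCK)
assert client_store(srv, data)   # first upload
assert client_store(srv, data)   # deduplicated via PoW
```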

Holographic Data Storage System using prearranged plan table by fuzzy rule and Genetic algorithm

  • Kim, Jang-Hyun;Kim, Sang-Hoon;Yang, Hyun-Seok;Park, Jin-Bae;Park, Young-Pil
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.1260-1263 / 2005
  • Data storage, in both writing and retrieving, requires high storage capacity, a fast transfer rate, and low access time. No current data storage system satisfies all of these conditions, but a holographic data storage system can achieve a faster transfer rate because it is a page-oriented memory system that uses volume holograms to write and retrieve data. The system can be constructed without mechanically actuated parts, so a fast data transfer rate and a high storage capacity of about 1 Tb/cm3 can be realized. In this research, a new method is suggested for reducing bit errors in binary data stored in a holographic data storage system. First, fuzzy rules are derived from an experimental holographic digital data storage setup. Second, a fuzzy rule table is built using a genetic algorithm. Third, the previously identified error elements are reduced and the digital data are recorded. The resulting recording and reconstruction ratios show very good performance.
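
Step two, building a rule table with a genetic algorithm, could look roughly like the skeleton below. The fitness function, table size, and GA parameters are placeholders; the paper's actual encoding of fuzzy rules is not reproduced here:

```python
# Minimal GA skeleton for evolving a binary rule table (all values assumed).
import random

TABLE_LEN = 32            # one output bit per input pattern (assumption)

def fitness(table, samples):
    # samples: (pattern_index, desired_bit) pairs from experiments
    return sum(table[p] == b for p, b in samples)

def evolve(samples, pop=30, gens=100, pm=0.02):
    popn = [[random.randint(0, 1) for _ in range(TABLE_LEN)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda t: fitness(t, samples), reverse=True)
        elite = popn[:pop // 2]                        # keep the best half
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, TABLE_LEN)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < pm) for bit in child]  # mutation
            children.append(child)
        popn = elite + children
    return max(popn, key=lambda t: fitness(t, samples))

# illustrative training pairs standing in for experimental measurements
samples = [(random.randrange(TABLE_LEN), random.randint(0, 1))
           for _ in range(200)]
best_table = evolve(samples)
```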

Performance Improvement Using Iterative Two-Dimensional Soft Output Viterbi Algorithm Associated with Noise Filter for Holographic Data Storage Systems

  • 누엔딘지;이재진
    • 한국통신학회논문지 / Vol. 39A, No. 3 / pp.121-126 / 2014
  • Demand for data storage keeps growing, which requires the next generation of storage devices to provide dominant storage capacity together with very fast read/write rates. Holographic data storage (HDS) has been investigated for a long time and is considered a candidate for the future storage system. However, it suffers from two-dimensional intersymbol interference, which conventional one-dimensional detection has not handled rigorously because of system complexity and cost. We propose a new scheme that combines an iterative soft output Viterbi algorithm with a noise filter to improve the bit error rate performance of HDS.
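
Only the front end of such a system is sketched below: a 2D intersymbol-interference blur plus additive noise, followed by a simple averaging noise filter ahead of a (here deliberately naive) detector. The point-spread function and noise level are illustrative assumptions; the iterative two-dimensional SOVA itself is not reproduced:

```python
# Hedged sketch of a 2D ISI channel and a noise filter before detection.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
page = rng.integers(0, 2, (64, 64)).astype(float)    # binary data page

# 3x3 point-spread function modelling inter-symbol interference (assumed)
psf = np.array([[0.05, 0.10, 0.05],
                [0.10, 0.40, 0.10],
                [0.05, 0.10, 0.05]])
received = convolve2d(page, psf, mode="same") + rng.normal(0, 0.1, page.shape)

# mild 3x3 averaging filter to suppress noise ahead of the detector
avg = np.full((3, 3), 1 / 9)
filtered = convolve2d(received, avg, mode="same")

hard = (filtered > filtered.mean()).astype(int)       # naive threshold detector
ber = np.mean(hard != page)                           # bit error rate estimate
```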

파운틴 코드 기반의 하이브리드 P2P 스토리지 클라우드 (Fountain Code-based Hybrid P2P Storage Cloud)

  • 박기석;송황준
    • 정보과학회 컴퓨팅의 실제 논문지 / Vol. 21, No. 1 / pp.58-63 / 2015
  • This paper proposes a fountain code-based hybrid P2P cloud storage system that combines cloud storage with P2P storage, guaranteeing a high data retrieval rate and the privacy of stored data while minimizing data transfer time. To use storage space efficiently and keep their data private, users encode the data they wish to store with a fountain code and then split the encoded data for transmission. The proposed algorithm also guarantees data retrieval by placing data according to each peer's survival probability. Experimental results show that the proposed algorithm reduces the user's transfer time under various levels of system stability.
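
The fountain-coding step can be illustrated with LT-style encoding: each stored packet is the XOR of a random subset of source blocks, so the data can be recovered from any sufficiently large set of surviving packets. The degree distribution below is a crude stand-in for the paper's design:

```python
# Hedged sketch of LT-style fountain encoding for distribution over peers.
import os, random

def lt_encode(blocks, n_packets, seed=0):
    rng = random.Random(seed)
    size = len(blocks[0])
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, min(3, len(blocks)))   # toy degree distribution
        idx = rng.sample(range(len(blocks)), degree)
        payload = bytes(size)                          # all-zero block
        for i in idx:
            payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
        packets.append((idx, payload))                 # indices travel with payload
    return packets

source = [os.urandom(256) for _ in range(8)]           # 8 source blocks
packets = lt_encode(source, n_packets=16)              # to spread over peers
```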

A Novel Redundant Data Storage Algorithm Based on Minimum Spanning Tree and Quasi-randomized Matrix

  • Wang, Jun;Yi, Qiong;Chen, Yunfei;Wang, Yue
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 1 / pp.227-247 / 2018
  • In intermittently connected wireless sensor networks deployed in harsh environments, sensor nodes may fail at any time for internal or external reasons. During data collection and recovery, the process must be as fast as possible so that all the sensory data can be restored by accessing as few survivors as possible. In this paper, a novel redundant data storage algorithm based on a minimum spanning tree and a quasi-randomized matrix, QRNCDS, is proposed. QRNCDS disseminates k source data packets to n sensor nodes in the network (n > k) following a minimum spanning tree traversal mechanism. Each node stores only one encoded data packet, the XOR of the source data packets it receives, in accordance with quasi-randomized matrix theory. The algorithm adopts the minimum spanning tree traversal rule to reduce the message complexity of disseminating the source packets. To address the problem that some source packets cannot be restored when the random matrix lacks full column rank, QRNCDS uses a semi-randomized network coding method: each source node only needs to store its own source data packet, and the storage nodes choose whether to receive it. In the decoding phase, Gaussian elimination and belief propagation are combined to improve the probability and efficiency of data decoding. As a result, part of the source data can be recovered even when the semi-random matrix does not have full column rank. Simulation results show that QRNCDS has lower energy consumption, higher data collection efficiency, higher decoding efficiency, smaller data storage redundancy, and larger network fault tolerance.
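
The decoding phase rests on a GF(2) linear system: each storage node holds the XOR of some source packets, and Gaussian elimination recovers the sources when the collected rows have full column rank. A toy instance follows, with invented one-byte packets and an invented coding matrix:

```python
# Hedged sketch of GF(2) Gaussian elimination over XOR-coded packets.
import numpy as np

def gf2_solve(A, y):
    """Solve A x = y over GF(2); A is an (m, k) 0/1 matrix, y holds payloads."""
    A, y, k = A.copy() % 2, list(y), A.shape[1]
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, len(y)) if A[r, col]), None)
        if piv is None:
            return None                  # rank deficient: not all recoverable
        A[[row, piv]] = A[[piv, row]]    # swap pivot row into place
        y[row], y[piv] = y[piv], y[row]
        for r in range(len(y)):
            if r != row and A[r, col]:
                A[r] ^= A[row]           # eliminate column from other rows,
                y[r] ^= y[row]           # XORing payloads in parallel
        row += 1
    return y[:k]

src = [0xA1, 0x5C, 0x33, 0x0F]           # toy one-byte source packets
M = np.array([[1, 0, 1, 0],              # which sources each node XOR-stores
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 0, 0, 0]])
stored = [int(np.bitwise_xor.reduce([s for s, bit in zip(src, r) if bit]))
          for r in M]
assert gf2_solve(M, stored) == src
```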

Verification Control Algorithm of Data Integrity Verification in Remote Data sharing

  • Xu, Guangwei;Li, Shan;Lai, Miaolin;Gan, Yanglan;Feng, Xiangyang;Huang, Qiubo;Li, Li;Li, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 2 / pp.565-586 / 2022
  • Cloud storage's elastic expansibility not only provides flexible services for data owners to store their data remotely, but also reduces the storage operation and management costs of sharing that data. Data outsourced to the storage space of a cloud service provider, however, raises security concerns about data integrity. Data integrity verification has become an important technology for detecting the integrity of remotely shared data. However, users without data access rights who verify the data integrity cause unnecessary overhead for the data owner and the cloud service provider; in particular, malicious users who constantly launch integrity verification greatly waste service resources. Since the data owner is a consumer purchasing cloud services, he bears both the cost of data storage and that of data verification. This paper proposes a verification control algorithm for the integrity verification of remotely outsourced data. It designs an attribute-based encryption verification control algorithm for multiple verifiers. Moreover, the data owner and the cloud service provider jointly construct a common access structure and generate a verification sentinel that checks verifiers' authority against the access structure. Finally, since the cloud service provider knows neither the access structure nor the sentinel generation operation, it can only authorize verifiers who satisfy the access policy to verify the integrity of the corresponding outsourced data. Theoretical analysis and experimental results show that the proposed algorithm achieves fine-grained access control over multiple verifiers for data integrity verification.
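
A much-simplified stand-in for the verification-control idea is sketched below: the owner publishes a "sentinel" of salted hashes of admissible attribute sets, letting the provider gate verification requests without seeing the access structure in the clear. Real attribute-based encryption is far stronger than this toy; every name and structure here is an assumption:

```python
# Toy sketch: gate integrity-verification requests by hashed attribute sets.
import hashlib, os

def attr_digest(attrs, salt):
    material = salt + ",".join(sorted(attrs)).encode()
    return hashlib.sha256(material).hexdigest()

# owner side: admissible attribute combinations, not shown to the CSP as-is
salt = os.urandom(16)
policy = [{"auditor", "dept-A"}, {"owner"}]
sentinel = {attr_digest(p, salt) for p in policy}

def csp_may_verify(verifier_attrs, sentinel, salt):
    # the CSP matches hashes; it never sees the policy sets themselves
    return attr_digest(verifier_attrs, salt) in sentinel

assert csp_may_verify({"dept-A", "auditor"}, sentinel, salt)
assert not csp_may_verify({"guest"}, sentinel, salt)
```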

서브클러스터링을 이용한 홀로그래픽 정보저장 시스템의 비트 에러 보정 기법 (Bit Error Reduction for Holographic Data Storage System Using Subclustering)

  • 김상훈;양현석;박영필
    • 정보저장시스템학회논문집 / Vol. 6, No. 1 / pp.31-36 / 2010
  • Data storage, in both writing and retrieving, requires high storage capacity, a fast transfer rate, and low access time. No current data storage system satisfies all of these conditions, but a holographic data storage system can achieve a faster transfer rate because it is a page-oriented memory system that uses volume holograms to write and retrieve data. The system can be constructed without mechanically actuated parts, so a fast data transfer rate and a high storage capacity of about 1 Tb/cm3 can be realized. In this research, a new method is suggested for correcting errors in binary data stored in a holographic data storage system: cluster centers are found with the subtractive clustering algorithm, and the intensities of the pixels around those centers are reduced. With this error reduction method, the effect of inter-pixel interference noise in the holographic data storage system is decreased and the intensity profile of the data page becomes uniform, so a better data storage system can be constructed.
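
The subtractive-clustering step itself is compact: score every bright pixel by a density measure, take the densest point as a center, subtract its influence, and repeat. The radii and counts below are illustrative, not the paper's tuning:

```python
# Hedged sketch of subtractive clustering over bright-pixel coordinates.
import numpy as np

def subtractive_centers(points, n_centers=4, ra=4.0):
    rb = 1.5 * ra                       # common choice: rb slightly above ra
    pts = points.astype(float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    # density of each point: sum of Gaussian influences of all others
    density = np.exp(-d2 / (ra / 2) ** 2).sum(axis=1)
    centers = []
    for _ in range(n_centers):
        i = int(density.argmax())
        centers.append(pts[i])
        # subtract the chosen center's influence so nearby points drop out
        density -= density[i] * np.exp(-d2[i] / (rb / 2) ** 2)
    return np.array(centers)

rng = np.random.default_rng(2)
page = rng.random((64, 64))             # stand-in for a retrieved data page
bright = np.argwhere(page > 0.9)
centers = subtractive_centers(bright)
```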

Verification Algorithm for the Duplicate Verification Data with Multiple Verifiers and Multiple Verification Challenges

  • Xu, Guangwei;Lai, Miaolin;Feng, Xiangyang;Huang, Qiubo;Luo, Xin;Li, Li;Li, Shan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 2 / pp.558-579 / 2021
  • Cloud storage provides flexible data storage services for data owners to outsource their data remotely, and reduces the owners' data storage operation and management costs. Such outsourced data raise security concerns for the data owner because the cloud service provider may maliciously delete or corrupt them. Data integrity verification is an important way to check the integrity of outsourced data. However, existing verification schemes only consider the case of a single verifier launching multiple verification challenges, and neglect the verification overhead when multiple verifiers launch multiple challenges at around the same time. In that case, the duplicate data in the challenges are verified repeatedly, so verification resources are consumed in vain. We propose a duplicate data verification algorithm based on multiple verifiers and multiple challenges to reduce the verification overhead. The algorithm dynamically schedules the verifiers' challenges based on verification time and on the frequent itemsets of duplicate verification data in the challenge sets, found by applying the FP-Growth algorithm, and computes batch proofs for the frequent itemsets. The challenges are then split into two parts, duplicate data and unique data, according to the extraction results. Finally, the proofs of the duplicate data and the unique data are computed and combined to generate a complete proof for every original challenge. Theoretical analysis and experimental evaluation show that the algorithm reduces the verification cost and ensures the correctness of data integrity verification through flexible batch verification.
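
Stripped of the cryptography and the FP-Growth mining, the scheduling idea reduces to proving each shared block once and assembling every verifier's complete proof from shared and unique parts. A toy sketch with hash "proofs" standing in for the paper's cryptographic proofs:

```python
# Hedged sketch: prove blocks shared across concurrent challenges only once.
import hashlib
from collections import Counter

def proof(block_id, data):
    # toy stand-in for a cryptographic integrity proof of one block
    return hashlib.sha256(f"{block_id}:{data[block_id]}".encode()).hexdigest()

def batch_verify(challenges, data):
    counts = Counter(b for ch in challenges.values() for b in set(ch))
    shared = {b for b, c in counts.items() if c > 1}
    cache = {b: proof(b, data) for b in shared}       # each shared block: once
    results = {}
    for verifier, ch in challenges.items():
        # combine cached shared-block proofs with fresh unique-block proofs
        results[verifier] = {b: cache.get(b) or proof(b, data) for b in ch}
    return results

data = {i: f"block-{i}" for i in range(10)}
challenges = {"v1": {1, 2, 3}, "v2": {2, 3, 4}, "v3": {3, 9}}
proofs = batch_verify(challenges, data)
```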