• Title/Summary/Keyword: Server-side Deduplication

Search Results: 8

Secure and Efficient Client-side Deduplication for Cloud Storage (안전하고 효율적인 클라이언트 사이드 중복 제거 기술)

  • Park, Kyungsu;Eom, Ji Eun;Park, Jeongsu;Lee, Dong Hoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.1 / pp.83-94 / 2015
  • Deduplication, a technique that eliminates redundant data by storing only a single copy of each piece of data, allows clients and a cloud server to manage stored data efficiently. However, because the data is kept on an untrusted public cloud server, both invasion of data privacy and data loss can occur. Although many secure deduplication schemes have been proposed in recent years, serious security problems and inefficiencies remain. In this paper, we propose a secure and efficient client-side deduplication scheme with a key server, based on Bellare et al.'s scheme and a challenge-response method. Furthermore, we point out potential risks of client-side deduplication and show that our scheme is secure against various attacks and uploads large data efficiently.
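The upload flow described in this abstract, a hash-first existence check followed by a challenge-response, can be illustrated with a minimal Python sketch. This is a generic toy model, not the authors' key-server construction; the names (StorageServer, client_upload, BLOCK) and the block-sampling challenge are assumptions made only for illustration.

```python
import hashlib
import os

BLOCK = 4096  # illustrative chunk size


class StorageServer:
    """Toy server: keeps one copy per content hash (assumed model, not the paper's protocol)."""

    def __init__(self):
        self.store = {}  # file hash -> file bytes

    def has(self, file_hash):
        return file_hash in self.store

    def challenge(self, file_hash):
        # Ask the uploader to prove possession of a randomly chosen block.
        data = self.store[file_hash]
        return int.from_bytes(os.urandom(4), "big") % (max(len(data), 1) // BLOCK + 1)

    def verify(self, file_hash, idx, block_digest):
        data = self.store[file_hash]
        expected = hashlib.sha256(data[idx * BLOCK:(idx + 1) * BLOCK]).hexdigest()
        return expected == block_digest

    def put(self, file_hash, data):
        self.store[file_hash] = data


def client_upload(server, data):
    file_hash = hashlib.sha256(data).hexdigest()
    if server.has(file_hash):
        # Duplicate: answer the server's challenge instead of re-sending the file.
        idx = server.challenge(file_hash)
        proof = hashlib.sha256(data[idx * BLOCK:(idx + 1) * BLOCK]).hexdigest()
        return "dedup" if server.verify(file_hash, idx, proof) else "rejected"
    server.put(file_hash, data)
    return "uploaded"
```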

Side-Channel Attack against Secure Data Deduplication over Encrypted Data in Cloud Storage (암호화된 클라우드 데이터의 중복제거 기법에 대한 부채널 공격)

  • Shin, Hyungjune;Koo, Dongyoung;Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.4 / pp.971-980 / 2017
  • Data deduplication can be used to reduce storage space in cloud storage services by storing only a single copy of data rather than every duplicated copy. Users concerned about the confidentiality of their outsourced data can use secure encryption algorithms, but this renders data deduplication ineffective. To reconcile data deduplication with encryption, Liu et al. proposed a new server-side cross-user deduplication scheme in 2015 that exploits a password-authenticated key exchange (PAKE) protocol. In this paper, we demonstrate that this scheme has a side channel that makes it insecure against the confirmation-of-file (CoF), or duplicate-identification, attack.
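The confirmation-of-file side channel comes from the observable difference between a deduplicated and a full upload. The hedged sketch below illustrates the general idea against the toy client_upload protocol from the previous sketch, not against Liu et al.'s PAKE-based scheme: an attacker who can guess a candidate file confirms whether someone else has already stored it simply by observing the upload's outcome (or its traffic volume).

```python
def confirmation_of_file_attack(server, candidate_file):
    """Toy CoF illustration: the upload status (or the amount of data sent)
    reveals whether candidate_file already exists on the server."""
    status = client_upload(server, candidate_file)
    # "dedup" means another user has already stored this exact content.
    return status == "dedup"
```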

Client-Side Deduplication to Enhance Security and Reduce Communication Costs

  • Kim, Keonwoo;Youn, Taek-Young;Jho, Nam-Su;Chang, Ku-Young
    • ETRI Journal / v.39 no.1 / pp.116-123 / 2017
  • Message-locked encryption (MLE) is a widespread cryptographic primitive that enables deduplication of encrypted data stored in the cloud. However, practical client-side MLE schemes are vulnerable to a poison attack, and server-side MLE schemes consume large amounts of bandwidth. In this paper, we propose a new client-side secure deduplication method that prevents the poison attack, reduces the amount of traffic transmitted over the network, and requires fewer cryptographic operations to execute the protocol. The proposed primitive is analyzed in terms of security, communication costs, and computational requirements, and is compared with existing MLE schemes.
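A poison attack targets the consistency between the short deduplication tag a client announces and the ciphertext the server actually stores: if they can diverge, later owners of the real file may be deduplicated against garbage. As a hedged illustration only (assuming the common MLE convention of tagging by the ciphertext hash, not this paper's specific construction), a server can reject an inconsistent upload as follows:

```python
import hashlib


def server_accept_upload(tag, ciphertext):
    """Reject poisoned uploads: the announced deduplication tag must match
    the ciphertext the server actually receives (tag = H(C))."""
    return hashlib.sha256(ciphertext).hexdigest() == tag
```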

Deduplication Technologies over Encrypted Data (암호데이터 중복처리 기술)

  • Kim, Keonwoo;Chang, Ku-Young;Kim, Ik-Kyun
    • Electronics and Telecommunications Trends / v.33 no.1 / pp.68-77 / 2018
  • Data deduplication is a commonly used technology in backup systems and cloud storage that reduces storage costs and network traffic. To protect data privacy from servers or malicious attackers, there has been growing demand in recent years for individuals and companies to encrypt their data before storing it on a server. In this study, we introduce two cryptographic primitives, convergent encryption and message-locked encryption, which enable deduplication of encrypted data between clients and a storage server. We analyze the security of these schemes against dictionary and poison attacks. In addition, we introduce deduplication systems that can be deployed in real cloud storage, a practical application environment, and describe proof of ownership in client-side deduplication.
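Convergent encryption, mentioned above, derives the key deterministically from the content itself (K = H(m)), so identical plaintexts always produce identical ciphertexts and can be deduplicated without the server learning the plaintext. The sketch below uses a SHA-256 counter keystream purely as a stand-in for a real deterministic cipher; it is a minimal illustration of the primitive, not a vetted construction.

```python
import hashlib


def _keystream(key, length):
    # Illustrative counter-mode keystream from SHA-256 (stand-in for a real cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def convergent_encrypt(message):
    key = hashlib.sha256(message).digest()        # K = H(m): key derived from the content
    ciphertext = bytes(a ^ b for a, b in zip(message, _keystream(key, len(message))))
    tag = hashlib.sha256(ciphertext).hexdigest()  # deduplication tag T = H(C)
    return key, ciphertext, tag


def convergent_decrypt(key, ciphertext):
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))
```

Because identical messages yield identical (ciphertext, tag) pairs, the server can deduplicate by tag alone; this same determinism is the root of the dictionary attack discussed in the abstract, since anyone able to guess m can recompute H(m) and test the guess.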

Privacy Preserving Source Based Deduplication In Cloud Storage (클라우드 스토리지 상에서의 프라이버시 보존형 소스기반 중복데이터 제거기술)

  • Park, Cheolhee;Hong, Dowon;Seo, Changho;Chang, Ku-Young
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.1 / pp.123-132 / 2015
  • In cloud storage, processing duplicated data, namely deduplication, is a necessary technology for saving storage space. Users who store sensitive data in remote storage want their data to be encrypted; however, a cloud storage server cannot detect duplicates of conventionally encrypted data. Convergent encryption was proposed to solve this problem, but it is inherently vulnerable to brute-force attacks. Meanwhile, client-side deduplication has been applied to save bandwidth as well as storage space, and various client-side deduplication technologies have recently been proposed; however, these proposals still do not solve the security problems. In this paper, we propose a secure source-based deduplication technology that encrypts data to ensure the confidentiality of sensitive data and applies a proof-of-ownership protocol to control access to the data, protecting it from a curious cloud server and malicious users.
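Proof of ownership lets a server that already holds a file check that a client claiming a duplicate really possesses the whole file, not just its short hash. The following is a minimal, hypothetical sketch of a Merkle-tree-based challenge-response in the spirit of Halevi et al.'s proofs of ownership; it is not necessarily the protocol used in this paper, and the leaf size and helper names are assumptions.

```python
import hashlib

BLOCK = 1024  # illustrative leaf size


def _h(data):
    return hashlib.sha256(data).digest()


def _leaves(data):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)] or [b""]
    return [_h(b) for b in blocks]


def merkle_root(data):
    level = _leaves(data)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def merkle_proof(data, index):
    """Client side: authentication path for leaf `index`, built from the full file."""
    level = _leaves(data)
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling hash, leaf-is-right-child flag)
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path


def verify_proof(root, leaf_hash, path):
    """Server side: recompute the root from one leaf and its authentication path."""
    node = leaf_hash
    for sibling, is_right in path:
        node = _h(sibling + node) if is_right else _h(node + sibling)
    return node == root
```

At first upload the server stores merkle_root(data); when another client later claims the same file, the server picks a random leaf index and accepts the deduplication request only if verify_proof succeeds on the leaf hash and path that the client returns.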

Privacy Preserving Source Based Deduplication Method (프라이버시 보존형 소스기반 중복제거 기술 방법 제안)

  • Nam, Seung-Soo;Seo, Chang-Ho;Lee, Joo-Young;Kim, Jong-Hyun;Kim, Ik-Kyun
    • Smart Media Journal / v.4 no.4 / pp.33-38 / 2015
  • A cloud storage server cannot detect duplicates of conventionally encrypted data. Convergent encryption was proposed to solve this problem, and various client-side deduplication technologies have recently been proposed; however, these proposals still do not solve the security problems. In this paper, we propose a secure source-based deduplication technology that encrypts data to ensure the confidentiality of sensitive data and applies a proof-of-ownership protocol to control access to the data, protecting it from a curious cloud server and malicious users.

Privacy Preserving Source Based Deduplication Method (프라이버시 보존형 소스기반 중복제거 방법)

  • Nam, Seung-Soo;Seo, Chang-Ho
    • Journal of Digital Convergence / v.14 no.2 / pp.175-181 / 2016
  • Cloud storage servers cannot detect duplicates of conventionally encrypted data. Convergent encryption was proposed to solve this problem, and various client-side deduplication technologies have recently been proposed; however, these proposals still do not solve the security problems. In this paper, we propose a secure source-based deduplication technology that encrypts data to ensure the confidentiality of sensitive data and applies a proof-of-ownership protocol to control access to the data, protecting it from a curious cloud server and malicious users.

A Scheme on High-Performance Caching and High-Capacity File Transmission for Cloud Storage Optimization (클라우드 스토리지 최적화를 위한 고속 캐싱 및 대용량 파일 전송 기법)

  • Kim, Tae-Hun;Kim, Jung-Han;Eom, Young-Ik
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8C / pp.670-679 / 2012
  • The recent spread of cloud computing has increased the amount of stored data and caused storage costs to grow rapidly. Accordingly, data and service requests from users also increase the load on cloud storage. Many studies have tried to provide low-cost, high-performance distributed file systems, but most of them have weaknesses in handling parallel and random data accesses as well as frequent small workloads. Recently, improving the performance of distributed file systems through caching has attracted much attention. In this paper, we propose CHPC (Cloud storage High-Performance Caching), a framework that provides parallel caching, distributed caching, and proxy caching in distributed file systems. We compare the proposed framework with existing cloud systems with respect to reducing the server's disk I/O, preventing server-side bottlenecks, deduplicating the page caches of each client, and improving overall IOPS. Based on these evaluations and comparisons with conventional methods, we show some optimization possibilities for cloud storage systems.
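The page-cache deduplication idea mentioned above can be illustrated with a content-addressed cache: blocks are keyed by their hash, so identical blocks that appear in many files occupy a single cache entry. This is only a schematic sketch of the general technique, not the CHPC framework itself; the class name, capacity, and indexing scheme are assumptions.

```python
import hashlib
from collections import OrderedDict


class DedupBlockCache:
    """LRU block cache keyed by content hash, so identical blocks are cached once."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.blocks = OrderedDict()  # content hash -> block bytes
        self.index = {}              # (file_id, block_no) -> content hash

    def put(self, file_id, block_no, data):
        digest = hashlib.sha256(data).hexdigest()
        self.index[(file_id, block_no)] = digest
        if digest in self.blocks:
            self.blocks.move_to_end(digest)   # already cached elsewhere: refresh recency only
            return
        self.blocks[digest] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict the least recently used block

    def get(self, file_id, block_no):
        digest = self.index.get((file_id, block_no))
        if digest is None or digest not in self.blocks:
            return None                       # cache miss: caller fetches from storage
        self.blocks.move_to_end(digest)
        return self.blocks[digest]
```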