• Title/Abstract/Keywords: Data Integrity

Search results: 1,390 items, processing time 0.023 seconds

Efficient Public Verification on the Integrity of Multi-Owner Data in the Cloud

  • Wang, Boyang; Li, Hui; Liu, Xuefeng; Li, Fenghua; Li, Xiaoqing
    • Journal of Communications and Networks / Vol. 16, No. 6 / pp. 592-599 / 2014
  • Cloud computing enables users to easily store their data and share it with others. Because of the security threats in an untrusted cloud, users are advised to compute verification metadata, such as signatures, on their data to protect its integrity. Many mechanisms have been proposed to allow a public verifier to efficiently audit cloud data integrity without retrieving the entire data from the cloud. However, to the best of our knowledge, none of them has considered the efficiency of public verification on multi-owner data, where each block is signed by multiple owners. In this paper, we propose a novel public verification mechanism that audits the integrity of multi-owner data in an untrusted cloud by taking advantage of multisignatures. With our mechanism, the verification time and the storage overhead of signatures on multi-owner data in the cloud are independent of the number of owners. In addition, we demonstrate the security of our scheme with rigorous proofs. Compared to the straightforward extension of previous mechanisms, our mechanism shows better performance in experiments.

A Security-Enhanced Identity-Based Batch Provable Data Possession Scheme for Big Data Storage

  • Zhao, Jining; Xu, Chunxiang; Chen, Kefei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 9 / pp. 4576-4598 / 2018
  • In the big data age, flexible and affordable cloud storage services greatly enhance productivity for enterprises and individuals, but they also leave outsourced data susceptible to integrity breaches. Provable Data Possession (PDP) is a critical technology that enables data owners to efficiently verify the integrity of cloud data without downloading an entire copy. To address the challenging integrity problem of multiple clouds with multiple owners, an identity-based batch PDP scheme was presented at ProvSec 2016, which attempted to eliminate the public-key certificate management issue and reduce computation overheads in a secure, batch-oriented manner. In this paper, we first demonstrate that this scheme is insecure: any cloud that has deleted or modified the outsourced data can still pass integrity verification, simply by utilizing two arbitrary block-tag pairs of one data owner. Specifically, malicious clouds are able to fabricate integrity proofs by 1) universally forging valid tags and 2) recovering data owners' private keys. Second, to enhance security, we propose an improved scheme that withstands these attacks, and we prove its security under the CDH assumption in the random oracle model. Finally, based on simulations and overhead analysis, our batch scheme demonstrates better efficiency than an identity-based multi-cloud PDP scheme with single-owner effort.

ECU Data Integrity Verification System Using Blockchain

  • 변상필; 김호윤; 신승수
    • 산업융합연구 / Vol. 20, No. 11 / pp. 57-63 / 2022
  • ECU data, which collects and processes a vehicle's sensor and signal data, can harm the driver if manipulated by an attack. This paper proposes a system that verifies the integrity of automotive ECU data using a blockchain. Because the vehicle and the server encrypt data with a session key before transmission, reliability is guaranteed during communication. The server verifies the integrity of the received data with a hash function and, if the data shows no anomaly, stores it on the blockchain and in off-chain distributed storage. The hash of the ECU data is stored on the blockchain so that it cannot be tampered with, while the original ECU data is kept in distributed storage. With this verification system, users can detect attacks on and tampering with ECU data, and integrity verification can be performed when a malicious user accesses and modifies the data. The system can be used as needed in situations such as insurance, vehicle repair, and trading or sales. Future work should build an efficient system for real-time data integrity verification.
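
The core pattern described here (hash on-chain, raw data off-chain) can be sketched as follows. This is a minimal illustration of the idea, not the paper's actual system: the toy `HashChain` class, the record layout, and the sample ECU payload are all invented for the demo.

```python
import hashlib
import json

def sha256(b):
    return hashlib.sha256(b).hexdigest()

class HashChain:
    """Toy append-only chain: each record stores the hash of the previous
    record, so altering any stored ECU hash breaks the chain of links."""
    def __init__(self):
        self.records = []

    def append(self, ecu_data_hash):
        prev = self.records[-1]["record_hash"] if self.records else "0" * 64
        body = {"prev": prev, "ecu_hash": ecu_data_hash}
        # Hash covers prev-link and payload hash, fixing the record in place.
        body["record_hash"] = sha256(json.dumps(body, sort_keys=True).encode())
        self.records.append(body)

    def verify(self):
        prev = "0" * 64
        for r in self.records:
            body = {"prev": r["prev"], "ecu_hash": r["ecu_hash"]}
            if r["prev"] != prev or \
               r["record_hash"] != sha256(json.dumps(body, sort_keys=True).encode()):
                return False
            prev = r["record_hash"]
        return True

# Off-chain store keeps the raw ECU data; the chain keeps only its hash.
off_chain = {}
chain = HashChain()
raw = b"speed=72;rpm=2400"          # hypothetical ECU payload
h = sha256(raw)
off_chain[h] = raw
chain.append(h)

# Later: verify stored data against the on-chain hash.
assert sha256(off_chain[h]) == h and chain.verify()
```

A real deployment would replace the toy chain with an actual blockchain and the dictionary with distributed storage; the verification logic is the same comparison of recomputed and recorded hashes.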

A Rapid Locating Protocol of Corrupted Data for Cloud Data Storage

  • Xu, Guangwei; Yang, Yanbin; Yan, Cairong; Gan, Yanglan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 10 / pp. 4703-4723 / 2016
  • The verification of data integrity is an urgent topic in remote data storage environments, given the wide deployment of cloud data storage services. Many traditional verification algorithms focus on block-oriented verification to resolve disputes over dynamic data integrity between data owners and storage service providers. However, these algorithms pay scant attention to the verification charge and the users' verification experience; users care more about the availability of the files they access than about individual data blocks. Moreover, the verification charge limits the amount of data that can be checked in each verification. Therefore, we propose a mixed verification protocol that rapidly locates corrupted files by file-oriented verification and then identifies the corrupted blocks within those files by block-oriented verification. Theoretical analysis and simulation results demonstrate that the protocol reduces the cost of metadata computation and transmission relative to traditional block-oriented verification, at the expense of a small additional cost of file-oriented metadata computation and storage at the data owner. Both the chance of extracting corrupted data and the scope of suspicious data are optimized, improving verification efficiency under the same verification cost.
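
The two-pass idea (file-oriented first, block-oriented only inside flagged files) can be sketched with plain hashes. This is a simplified stand-in for the paper's protocol, which uses cryptographic verification metadata rather than raw hashes; the function names and data layout are invented for the demo.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).hexdigest()

def make_metadata(files):
    """files: {filename: [block bytes, ...]} -> per-file and per-block hashes."""
    return {name: {"file": h(b"".join(blocks)),
                   "blocks": [h(b) for b in blocks]}
            for name, blocks in files.items()}

def locate_corruption(files, meta):
    # Pass 1 (file-oriented): one hash per file quickly flags corrupted files.
    # Pass 2 (block-oriented): block hashes are checked only in flagged files.
    bad = {}
    for name, blocks in files.items():
        if h(b"".join(blocks)) == meta[name]["file"]:
            continue  # whole file intact; skip all its blocks
        bad[name] = [i for i, b in enumerate(blocks)
                     if h(b) != meta[name]["blocks"][i]]
    return bad

files = {"a.txt": [b"block0", b"block1"], "b.txt": [b"block0"]}
meta = make_metadata(files)
files["a.txt"][1] = b"tampered"
assert locate_corruption(files, meta) == {"a.txt": [1]}
```

The saving comes from pass 2 running only on the (usually small) set of files that fail pass 1, rather than hashing every block of every file.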

A study for system design that guarantees the integrity of computer files based on blockchain and checksum

  • Kim, Minyoung
    • International Journal of Advanced Culture Technology / Vol. 9, No. 4 / pp. 392-401 / 2021
  • When a data file is shared on the Internet, it may be damaged in various ways. To guard against this, some websites publish the checksum value of a downloadable file as text. The checksum of the downloaded file is then compared with this published value; if they match, the file is considered identical to the original. However, a checksum published in text form is easily tampered with by an attacker, and if the correct checksum cannot be verified, the reliability and integrity of the data file cannot be ensured. In this paper, a checksum value is generated to ensure the integrity and reliability of a data file, and this value and related file information are stored in the blockchain. We then present the design and implementation of a system that shares the checksum value stored in the blockchain and compares it with other users' files.
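
The checksum comparison itself is straightforward; the paper's contribution is where the trusted value lives. A minimal sketch of the verification step, assuming the trusted checksum has already been fetched from the blockchain (the function names here are illustrative, not from the paper):

```python
import hashlib
import hmac
import os
import tempfile

def file_checksum(path, algo="sha256", chunk=1 << 16):
    """Stream the file so large downloads need not fit in memory."""
    hasher = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            hasher.update(block)
    return hasher.hexdigest()

def verify_download(path, trusted_checksum):
    # compare_digest avoids timing side channels; good hygiene even here.
    return hmac.compare_digest(file_checksum(path), trusted_checksum)

# Usage: write a small file and verify it against its recorded checksum.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"release-1.0.tar.gz contents")
    path = f.name
published = file_checksum(path)       # in the paper, read from the blockchain
assert verify_download(path, published)
os.unlink(path)
```
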

A Multi-Hash Chain Scheme to Ensure Data Integrity in Wireless Sensor Networks

  • 박길철; 정윤수; 김용태; 이상호
    • 한국정보통신학회논문지 / Vol. 14, No. 10 / pp. 2358-2364 / 2010
  • Recent research on wireless sensor networks addresses not only the energy consumption of sensor nodes but also the integrity of the data those nodes collect. Existing work, however, neither reduces the overhead of the cluster head that aggregates data from sensor nodes nor guarantees the integrity of the aggregated data. This paper proposes a multi-path hash-chain scheme that reduces the cluster head's overhead and guarantees the integrity of the aggregated data when the cluster head merges data delivered from sensor nodes. The proposed scheme forms a multi-hash chain split into a primary path and a secondary path in order to guarantee data integrity at the cluster head during aggregation. The secondary path, used alongside the primary path, supports the integrity of the sensor nodes while minimizing the cluster head's overhead during node authentication.
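
The basic hash-chain building block underlying such schemes can be sketched briefly. This shows only single-path chaining over aggregated readings; the paper's primary/secondary path split and authentication protocol are not reproduced, and the sample readings are invented.

```python
import hashlib

def link(prev_digest, payload):
    # Each link binds the running digest to the next reading.
    return hashlib.sha256(prev_digest + payload).digest()

def chain_digest(readings, seed=b"\x00" * 32):
    """Fold sensor readings into one running digest. The cluster head
    forwards only the final link; the sink recomputes it to check that
    no reading was dropped, reordered, or altered in transit."""
    d = seed
    for r in readings:
        d = link(d, r)
    return d

readings = [b"t=21.5", b"t=21.7", b"t=21.6"]
sink_view = chain_digest(readings)
assert sink_view == chain_digest(list(readings))               # intact data matches
assert sink_view != chain_digest(readings[:2] + [b"t=99.9"])   # tampering detected
```

Because each link depends on all previous ones, verifying the single final digest covers the whole sequence, which is what keeps the per-aggregation overhead at the cluster head low.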

Maintaining Integrity Constraints in Spatiotemporal Databases

  • Moon Kyung Do; Woo SungKu; Kim ByungCheol; Ryu KeunHo
    • 대한원격탐사학회 학술대회논문집 / Proceedings of ISRS 2004 / pp. 726-729 / 2004
  • Spatiotemporal phenomena are ubiquitous aspects of the real world. In spatial and temporal databases, integrity constraints maintain the semantics of a specific application domain and the relationships between domains when the database is updated. Efficient maintenance of data integrity has become a critical problem, since testing the validity of a large number of constraints in a large database after each transaction is an expensive task. In the spatiotemporal domain in particular, data is more complex than in traditional domains and highly dynamic. In addition, unified frameworks that handle integrity constraints over both spatial and temporal properties have not yet been considered. Therefore, a model is needed that maintains integrity constraints in a unified framework, together with enforcement and management techniques to preserve consistency.


Factors Influencing Ego-Integrity in Community Dwelling Elders

  • 장혜경; 오원옥
    • 기본간호학회지 / Vol. 18, No. 4 / pp. 529-537 / 2011
  • Purpose: This study was done to identify the relationships of perceived health status, depression, meaning of life, and family function to ego-integrity, and to investigate the main factors influencing ego-integrity in community dwelling elders. Method: The research design was a descriptive survey using convenience sampling. Data were collected with self-report questionnaires from 157 community dwelling elders in three cities: Seoul, Seosan, and Gyungju. Data analysis was done using the SPSS 15.0 PC+ program for descriptive statistics, Pearson correlation coefficients, and stepwise multiple regression. Results: There were significant differences in ego-integrity according to gender, religion, economic level, and amount of spending money. Ego-integrity had significant positive correlations with perceived health status, meaning of life, and family function, and a negative correlation with depression. The major factors affecting ego-integrity in community dwelling elders were self-awareness and acceptance, contentedness with past and present, gender, and family function, which together explained 62.7% of the variance in ego-integrity. Conclusion: Findings from this study provide a comprehensive understanding of ego-integrity and related factors in community dwelling elders.

Secure Data Sharing in The Cloud Through Enhanced RSA

  • Islam Abdalla Mohamed; Loay F. Hussein; Anis Ben Aissa; Tarak Kallel
    • International Journal of Computer Science & Network Security / Vol. 23, No. 2 / pp. 89-95 / 2023
  • Cloud computing today provides huge computational resources, storage capacity, and many kinds of data services. Data sharing in the cloud is the practice of exchanging files between various users via cloud technology. The main difficulty with file sharing in the public cloud is maintaining privacy and integrity through data encryption. To address this issue, this paper proposes an Enhanced RSA encryption scheme (ERSA) for data sharing in the public cloud that protects privacy and strengthens data integrity. Data owners store their files in the cloud after encrypting them with ERSA, which combines the RSA algorithm, the XOR operation, and SHA-512. This approach can preserve the confidentiality and integrity of a file in any cloud system, while data owners are authorized with unique identities for data access. Furthermore, analysis and experimental results are presented to verify the efficiency and security of the proposed scheme.
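
The abstract names RSA, XOR, and SHA-512 as ERSA's ingredients but not how they are combined. The sketch below is only a guess at the general shape of such a combination, using deliberately insecure textbook-RSA parameters for illustration; the keystream derivation, the byte-wise encryption, and the `session-key` are all invented for the demo and are not the paper's construction.

```python
import hashlib

# Toy textbook-RSA parameters (far too small to be secure; illustration only).
p, q = 61, 53
n, e_pub = p * q, 17
d_priv = pow(e_pub, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

def xor_mask(data, key):
    # Derive a SHA-512-based keystream from the key and XOR it with the data.
    stream = hashlib.sha512(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha512(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt(data, key=b"session-key"):
    masked = xor_mask(data, key)
    # Byte-wise RSA only because n is tiny; real RSA works on large blocks.
    cipher = [pow(b, e_pub, n) for b in masked]
    digest = hashlib.sha512(data).hexdigest()   # SHA-512 integrity tag
    return cipher, digest

def decrypt(cipher, digest, key=b"session-key"):
    masked = bytes(pow(c, d_priv, n) for c in cipher)
    data = xor_mask(masked, key)
    if hashlib.sha512(data).hexdigest() != digest:
        raise ValueError("integrity check failed")
    return data

c, tag = encrypt(b"shared file contents")
assert decrypt(c, tag) == b"shared file contents"
```

The point of the combination is that the XOR masking and RSA protect confidentiality while the SHA-512 digest lets the recipient detect any modification of the shared file.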

A Scalable Data Integrity Mechanism Based on Provable Data Possession and JARs

  • Zafar, Faheem; Khan, Abid; Ahmed, Mansoor; Khan, Majid Iqbal; Jabeen, Farhana; Hamid, Zara; Ahmed, Naveed; Bashir, Faisal
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 6 / pp. 2851-2873 / 2016
  • Cloud storage as a service provides high scalability and availability on demand, without a large investment in infrastructure. However, data security risks, such as the confidentiality, privacy, and integrity of the outsourced data, are associated with the cloud-computing model. Over the years, techniques such as remote data checking (RDC), data integrity protection (DIP), provable data possession (PDP), proof of storage (POS), and proof of retrievability (POR) have been devised to frequently and securely check the integrity of outsourced data. In this paper, we improve the efficiency of a PDP scheme in terms of computation, storage, and communication cost for large data archives. By utilizing the capabilities of JAR and ZIP technology, the cost of searching the metadata in the proof generation process is reduced from O(n) to O(1). Moreover, direct access to the metadata reduces disk I/O cost, resulting in 50 to 60 times faster proof generation for large datasets. Furthermore, our proposed scheme achieves a 50% reduction in the storage size of the data and the respective metadata, providing storage and communication efficiency.
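
The O(n)-to-O(1) metadata lookup the abstract describes relies on an archive's central directory acting as an index. A rough stand-in using Python's zipfile (the paper uses JAR, which shares the ZIP format; the member names and metadata strings here are invented):

```python
import io
import zipfile

# Build an archive whose members are per-block metadata entries.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    for i in range(1000):
        z.writestr(f"tag_{i}", f"metadata-for-block-{i}")

# The ZIP central directory acts as an index: a named member is fetched
# directly rather than by scanning all n entries sequentially.
with zipfile.ZipFile(buf) as z:
    tag = z.read("tag_42").decode()
assert tag == "metadata-for-block-42"
```

CPython's ZipFile keeps a name-to-entry mapping in memory, so looking up the tag for one challenged block costs the same regardless of how many blocks the archive holds, which is the property the scheme exploits during proof generation.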