• Title/Summary/Keyword: Data Storage

Determination of the Storage Constant for the Clark Model Based on Observed Rainfall-Runoff Data (강우-유출 자료에 의한 Clark 모형의 저류상수 결정)

  • Ahn, Tae-Jin;Choi, Kwang-Hoon
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2007.05a
    • /
    • pp.1454-1458
    • /
    • 2007
  • Determining a feasible design flood is essential for controlling flood damage in river management. Model parameters should be calibrated against observed discharge, but because observed data are scarce, parameter values have often been chosen by engineering judgment. The storage constant in the Clark unit hydrograph method mainly governs the magnitude of the peak flood. This study estimates the storage constant from observed rainfall-runoff data at three stage stations in the Imjin River basin and three stage stations in the Ansung River basin. Four methods are proposed for estimating the storage constant from the observed rainfall-runoff data, and the HEC-HMS model is used to perform a sensitivity analysis of the storage constant. A criterion is then proposed for determining the storage constant from the observed hydrographs and the HEC-HMS results. (See the sketch following this entry.)

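As background for the entry above: in the Clark method the basin outflow is routed through a linear reservoir, S = K·O, and one textbook way to estimate the storage constant K from an observed hydrograph is from the recession limb, where K ≈ -Q / (dQ/dt). The sketch below illustrates only that generic relation, not the four methods proposed in the paper; the recession data and function names are hypothetical.

```python
import numpy as np

def storage_constant_from_recession(t_hr, q_cms):
    """Estimate the Clark storage constant K (hours) from the recession
    limb of an observed hydrograph, using K ~ -Q / (dQ/dt).

    t_hr  : times in hours
    q_cms : discharges in m^3/s on the recession limb (after the peak)
    """
    t = np.asarray(t_hr, dtype=float)
    q = np.asarray(q_cms, dtype=float)
    dq_dt = np.gradient(q, t)              # dQ/dt along the recession
    mask = dq_dt < 0                       # keep only falling-limb points
    k_est = -q[mask] / dq_dt[mask]         # pointwise K estimates
    return float(np.median(k_est))         # robust summary value

# Hypothetical recession-limb data (hours, m^3/s)
t = [0, 1, 2, 3, 4, 5, 6]
q = [420, 350, 292, 243, 203, 169, 141]
print(f"Estimated K ~ {storage_constant_from_recession(t, q):.1f} h")
```

In practice such an estimate would be cross-checked against the other proposed methods and HEC-HMS sensitivity runs, as the abstract describes.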

Light-weight Preservation of Access Pattern Privacy in Un-trusted Storage

  • Yang, Ka;Zhang, Jinsheng;Zhang, Wensheng;Qiao, Daji
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.2 no.5
    • /
    • pp.282-296
    • /
    • 2013
  • With the emergence of cloud computing, more and more sensitive user data are outsourced to remote storage servers. The privacy of users' access patterns to the data should be protected to prevent un-trusted storage servers from inferring users' private information or launching stealthy attacks. Meanwhile, privacy protection schemes should be efficient, as cloud users often access the data from thin client devices. In this paper, we propose a lightweight scheme to protect the privacy of the data access pattern. Compared with existing state-of-the-art solutions, our scheme incurs less communication and computational overhead and requires significantly less storage space at the user side, while consuming similar storage space at the server. Rigorous proofs and extensive evaluations show that the proposed scheme effectively hides the data access pattern in the long run, after a reasonable number of accesses have been made. (See the sketch following this entry.)

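For orientation only: one generic way to obscure which block a client reads is to fetch the real block together with dummy blocks and write them all back to freshly permuted positions. The toy sketch below is a minimal illustration of that general idea, far weaker than real ORAM-style constructions and not the scheme proposed in the paper; the class and its parameters are hypothetical.

```python
import random

class ToyObliviousStore:
    """Toy illustration of access-pattern obfuscation: every read touches the
    real block plus (k-1) random dummies and writes them back to freshly
    permuted slots, so the server cannot tell which slot was wanted."""

    def __init__(self, blocks, k=4):
        self.slots = list(blocks)                        # simulated server storage
        self.k = k
        self.pos = {i: i for i in range(len(blocks))}    # client-side position map

    def read(self, logical_id):
        real_slot = self.pos[logical_id]
        dummies = random.sample(
            [s for s in range(len(self.slots)) if s != real_slot], self.k - 1)
        touched = dummies + [real_slot]
        value = self.slots[real_slot]

        # Re-place the touched blocks in a fresh random order and update the
        # position map so later reads still find every block.
        new_order = touched[:]
        random.shuffle(new_order)
        contents = [self.slots[s] for s in touched]
        slot_to_logical = {slot: lid for lid, slot in self.pos.items()}
        for old_slot, new_slot, content in zip(touched, new_order, contents):
            self.slots[new_slot] = content
            self.pos[slot_to_logical[old_slot]] = new_slot
        return value

store = ToyObliviousStore([f"block-{i}" for i in range(10)])
assert store.read(3) == "block-3"
assert store.read(3) == "block-3"        # still found after reshuffling
```

The per-access cost here is k block transfers plus a small client-side position map, which hints at why the trade-off between overhead and hiding strength matters for thin clients.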

A Method for Data Access Control and Key Management in Mobile Cloud Storage Services (모바일 클라우드 스토리지 서비스에서의 데이터 보안을 위한 데이터 접근 제어 및 보안 키 관리 기법)

  • Shin, Jaebok;Kim, Yungu;Park, Wooram;Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.8 no.6
    • /
    • pp.303-309
    • /
    • 2013
  • Cloud storage services are used to share or synchronize a user's data efficiently across multiple mobile devices. Although cloud storage provides flexibility and scalability for storing data, security issues must still be handled. Typical cloud storage services currently offer data encryption for security, but we argue that this is not secure enough, because managing encryption keys purely in software and identifying users with a simple ID and password are the main weaknesses of current services. We propose a secure data access method for cloud storage in mobile environments. Our framework supports hardware-based key management, attestation of client software integrity, and secure key sharing across multiple devices. We implemented our prototype using ARM TrustZone and a TPM emulator running in the secure world of the TrustZone environment. (See the sketch following this entry.)
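
As a generic illustration of client-side key handling of the kind the abstract argues for (not the paper's TrustZone/TPM design): each file can be encrypted under its own data key, and that data key wrapped by a device-bound key which, in a real system, would be held by hardware and never exposed to normal software. The helper names below are hypothetical, and the `cryptography` package is assumed to be available.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical stand-in for a key that, in the paper's setting, would be
# held by hardware (TrustZone secure world / TPM) and never exposed.
DEVICE_WRAPPING_KEY = AESGCM.generate_key(bit_length=256)

def encrypt_file(plaintext: bytes):
    """Envelope-encryption sketch: encrypt data with a fresh data key,
    then wrap the data key with the device-bound key."""
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(DEVICE_WRAPPING_KEY).encrypt(wrap_nonce, data_key, None)
    # Only the wrapped key material and ciphertext would be uploaded to the cloud.
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_file(blob):
    data_key = AESGCM(DEVICE_WRAPPING_KEY).decrypt(
        blob["wrap_nonce"], blob["wrapped_key"], None)
    return AESGCM(data_key).decrypt(blob["nonce"], blob["ciphertext"], None)

blob = encrypt_file(b"synchronized document")
assert decrypt_file(blob) == b"synchronized document"
```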

Inter Pixel Interference Reduction using Interference Ratio Mask for Holographic Data Storage (홀로그래픽 정보 저장장치에서의 간섭 비율 마스크를 이용한 인접 픽셀 간섭의 개선을 위한 연구)

  • Lee, Jae-Seong;Lim, Sung-Yong;Kim, Nak-Yeong;Kim, Do-Hyung;Park, Kyoung-Su;Park, No-Cheol;Yang, Hyun-Seok;Park, Young-Pil
    • Transactions of the Society of Information Storage Systems
    • /
    • v.7 no.1
    • /
    • pp.42-46
    • /
    • 2011
  • The Holographic Data Storage System (HDSS), one of the next-generation data storage devices, is a two-dimensional, page-oriented memory system that uses volume holograms. HDSS suffers from many noise sources, such as crosstalk, scattering, and inter-pixel interference, which alter the intensity of the light carrying the data signal. Inter-pixel interference decreases the signal-to-noise ratio and increases the bit error rate. To mitigate these problems, this paper proposes compensating for the inter-pixel interference with a simple interference-ratio mask. (See the sketch following this entry.)
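
A minimal sketch of the underlying idea, assuming inter-pixel interference can be modeled as a small blur kernel applied to the data page: the detected page is divided by the interference each pixel would receive from a uniformly lit neighbourhood. The kernel weights and the compensation rule below are illustrative only, not the interference-ratio mask of the paper.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy inter-pixel interference (IPI) model: each ON pixel leaks a little
# intensity to its neighbours. The kernel weights are made up for illustration.
ipi_kernel = np.array([[0.02, 0.05, 0.02],
                       [0.05, 0.72, 0.05],
                       [0.02, 0.05, 0.02]])

rng = np.random.default_rng(0)
page = rng.integers(0, 2, size=(8, 8)).astype(float)    # random ON/OFF data page
detected = convolve2d(page, ipi_kernel, mode="same")     # page seen at the detector

# Naive ratio-style compensation: divide by the gain each pixel would get
# from a uniformly lit neighbourhood (corrects mainly the page edges here).
expected_gain = convolve2d(np.ones_like(page), ipi_kernel, mode="same")
compensated = detected / expected_gain

bits = (compensated > 0.5).astype(int)                   # simple threshold detection
print("bit errors:", int(np.sum(bits != page)))
```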

Characteristics of 5/8 Modulation Code of Misalignments for Holographic Data Storage (홀로그래픽 저장장치용 5/8변조 부호의 어긋남 특성)

  • Kim, Jin-Young;Lee, Jae-Jin
    • Transactions of the Society of Information Storage Systems
    • /
    • v.6 no.2
    • /
    • pp.47-51
    • /
    • 2010
  • We investigate the misalignment characteristics of a 5/8 modulation code for holographic data storage. The 5/8 modulation code contains no isolated pixel patterns, which are the most problematic patterns for holographic data storage. The results show that the 5/8 modulation code is robust to misalignment, and it performs best among the uncoded, 5/9, and 6/8 modulation codes when misalignments are large. (See the sketch following this entry.)
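
For reference, an "isolated pattern" is an ON pixel with no ON neighbour, which is hard to detect reliably in a holographic page. The check below is a simple illustration of that notion, not part of the 5/8 code construction.

```python
import numpy as np

def has_isolated_on_pixel(page: np.ndarray) -> bool:
    """Return True if any ON pixel has no ON pixel among its 8 neighbours.
    Such isolated pixels are what page-oriented modulation codes typically
    try to avoid (illustrative check only)."""
    padded = np.pad(page, 1)                     # zero border for edge pixels
    for i in range(page.shape[0]):
        for j in range(page.shape[1]):
            if page[i, j]:
                neighbourhood = padded[i:i + 3, j:j + 3]
                if neighbourhood.sum() == 1:     # only the pixel itself is ON
                    return True
    return False

print(has_isolated_on_pixel(np.array([[0, 0, 0],
                                      [0, 1, 0],
                                      [0, 0, 0]])))   # True: isolated pixel
print(has_isolated_on_pixel(np.array([[0, 1, 0],
                                      [0, 1, 0],
                                      [0, 0, 0]])))   # False
```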

Development of scalable big data storage system using network computing technology (네트워크 컴퓨팅 기술을 활용한 확장 가능형 빅데이터 스토리지 시스템 개발)

  • Park, Jung Kyu;Park, Eun Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.11
    • /
    • pp.1330-1336
    • /
    • 2019
  • As the Fourth Industrial Revolution era begins, a wide variety of devices run in the cloud and continuously generate many types of data, including large volumes of multimedia. Handling this situation requires large amounts of storage, and big data technology is needed to process the stored data and extract accurate information. NAS (Network Attached Storage) or SAN (Storage Area Network) technology is typically used to build high-speed, high-capacity storage in network-based environments. In this paper, we propose a method for constructing mass storage using Network-DAS, an extension of DAS (Direct Attached Storage). Benchmark experiments were performed to verify the scalability of a storage system with 76 HDDs. The experimental results show that the proposed high-performance mass storage system is scalable and reliable.

Ferroelectric ultra high-density data storage based on scanning nonlinear dielectric microscopy

  • Cho, Ya-Suo;Odagawa, Nozomi;Tanaka, Kenkou;Hiranaga, Yoshiomi
    • Transactions of the Society of Information Storage Systems
    • /
    • v.3 no.2
    • /
    • pp.94-112
    • /
    • 2007
  • Nano-sized inverted domain dots in ferroelectric materials have potential application in ultrahigh-density rewritable data storage systems. Here, a data storage system is presented based on scanning nonlinear dielectric microscopy and a thin film of single-crystal ferroelectric lithium tantalate. Through domain engineering, we succeeded in forming the smallest artificial single nano-domain dot, 5.1 nm in diameter, and an artificial nano-domain dot array with a memory density of 10.1 Tbit/inch² and a bit spacing of 8.0 nm, representing the highest memory density reported to date for rewritable data storage. Sub-nanosecond (500 ps) domain switching has also been achieved. Next, the long-term retention of data stored as inverted domain dots was investigated by heat-treatment testing; the lifetime of an inverted dot with a radius of 50 nm was 16.9 years at 80 °C. Finally, actual information storage with low bit error and high memory density was performed: a bit error ratio of less than 1×10⁻⁴ was achieved at an areal density of 258 Gbit/inch², and actual information storage was demonstrated at a density of 1 Tbit/inch². (See the arithmetic check following this entry.)

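A quick unit-arithmetic check (not from the paper) of how the quoted 8.0 nm bit spacing corresponds to the quoted areal density:

```python
# With a bit spacing of 8.0 nm, each bit occupies an 8.0 nm x 8.0 nm cell.
nm_per_inch = 2.54e7                       # 1 inch = 2.54 cm = 2.54e7 nm
bits_per_inch = nm_per_inch / 8.0
density_bits_per_inch2 = bits_per_inch ** 2
print(f"{density_bits_per_inch2 / 1e12:.1f} Tbit/inch^2")   # ~10.1, matching the abstract
```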

Simulation of Storage Capacity Analysis with Queuing Network Models (큐잉 네트워크 모델을 적용한 저장용량 분석 시뮬레이션)

  • Kim, Yong-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.221-228
    • /
    • 2005
  • Data storage used to sit inside or next to the server case, but advances in networking technology allow storage systems to be located far from the main computer. In the Internet era, with data volumes growing explosively, balanced development of storage and transmission systems is required; SAN (Storage Area Network) and NAS (Network Attached Storage) reflect these requirements. It is important to know the capacity and limits of a complex storage network system in order to get optimal performance from it, since capacity data are used for performance tuning and storage purchasing decisions. This paper proposes an analytic model of a storage network system as a queuing network and validates the model through simulation. (See the sketch following this entry.)

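To illustrate the kind of queuing-network reasoning the abstract refers to, here is a generic single-queue (M/M/1) example, not the paper's model: mean response time grows sharply as utilization approaches 1, which is what bounds the usable capacity of a storage path. The service rate and request rates below are hypothetical.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: T = 1 / (mu - lambda).
    Valid only while utilization rho = lambda/mu < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable (rho >= 1)")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical numbers: a storage port serving 2000 IO/s,
# probed at increasing request rates.
mu = 2000.0
for lam in (500, 1000, 1500, 1900, 1990):
    t_ms = mm1_response_time(lam, mu) * 1000
    print(f"lambda={lam:5d} IO/s  rho={lam / mu:.3f}  mean response={t_ms:.2f} ms")
```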

A Rapid Locating Protocol of Corrupted Data for Cloud Data Storage

  • Xu, Guangwei;Yang, Yanbin;Yan, Cairong;Gan, Yanglan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.10
    • /
    • pp.4703-4723
    • /
    • 2016
  • Verifying data integrity is an urgent topic in remote data storage environments, given the wide deployment of cloud data storage services. Many traditional verification algorithms focus on block-oriented verification to resolve disputes about dynamic data integrity between data owners and storage service providers. However, these algorithms pay little attention to the cost of verification or to the users' verification experience: users care more about the availability of the files they access than about individual data blocks, and the verification cost limits how much data can be checked in each round. We therefore propose a mixed verification protocol that first rapidly locates corrupted files through file-oriented verification and then identifies the corrupted blocks within those files through block-oriented verification. Theoretical analysis and simulation results demonstrate that, relative to traditional block-oriented verification, the protocol reduces the cost of metadata computation and transmission at the expense of a small amount of additional file-oriented metadata computation and storage at the data owner. Both the chance of sampling corrupted data and the scope of suspicious data are optimized, improving verification efficiency under the same verification cost. (See the sketch following this entry.)
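
A minimal sketch of the two-level idea described in the abstract, with plain hashes standing in for the paper's verification tags (the actual protocol would use challenge-based, sampled verification rather than full hashing): corrupted files are located first, and only those files are examined block by block. File names, sizes, and helper names are hypothetical.

```python
import hashlib

def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def locate_corruption(files, file_tags, block_tags, block_size=4096):
    """Two-level check: skip files whose file-level tag still matches,
    and list the mismatching blocks only for files that fail."""
    corrupted = {}
    for name, data in files.items():
        if _digest(data) == file_tags[name]:         # cheap file-level check
            continue
        bad_blocks = [
            i for i, tag in enumerate(block_tags[name])
            if _digest(data[i * block_size:(i + 1) * block_size]) != tag
        ]
        corrupted[name] = bad_blocks                  # expensive block-level check
    return corrupted

data = b"A" * 8192
files = {"report.doc": data}
file_tags = {"report.doc": _digest(data)}
block_tags = {"report.doc": [_digest(data[0:4096]), _digest(data[4096:8192])]}
print(locate_corruption(files, file_tags, block_tags))   # {} -> nothing corrupted
```

Only files failing the cheap file-level check pay the per-block cost, which is the source of the efficiency gain the abstract claims.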

A Scalable Data Integrity Mechanism Based on Provable Data Possession and JARs

  • Zafar, Faheem;Khan, Abid;Ahmed, Mansoor;Khan, Majid Iqbal;Jabeen, Farhana;Hamid, Zara;Ahmed, Naveed;Bashir, Faisal
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.6
    • /
    • pp.2851-2873
    • /
    • 2016
  • Cloud storage as a service provides high scalability and availability according to user needs, without large investments in infrastructure. However, data security risks such as the confidentiality, privacy, and integrity of the outsourced data are associated with the cloud computing model. Over the years, techniques such as remote data checking (RDC), data integrity protection (DIP), provable data possession (PDP), proof of storage (POS), and proof of retrievability (POR) have been devised to check the integrity of outsourced data frequently and securely. In this paper, we improve the efficiency of a PDP scheme in terms of computation, storage, and communication cost for large data archives. By utilizing the capabilities of JAR and ZIP technology, the cost of searching the metadata during proof generation is reduced from O(n) to O(1). Moreover, direct access to the metadata reduces disk I/O cost, resulting in proof generation that is 50 to 60 times faster for large datasets. Furthermore, the proposed scheme achieves a 50% reduction in the storage size of the data and its metadata, improving storage and communication efficiency. (See the sketch following this entry.)
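
One way to picture the O(n) → O(1) metadata lookup enabled by ZIP/JAR packaging, as mentioned in the abstract: store each block's tag as a separately named archive entry, so proof generation can read exactly the tags it needs via the archive's central directory instead of scanning all metadata. The entry names and tag contents below are hypothetical, not the paper's format.

```python
import io
import zipfile

# Write each block's tag as its own named entry in a ZIP archive.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as archive:
    for block_id in range(1000):
        archive.writestr(f"tags/block_{block_id:06d}", f"tag-for-{block_id}")

# The ZIP central directory lets us jump straight to one entry by name,
# rather than reading the metadata of all n blocks.
with zipfile.ZipFile(buffer, "r") as archive:
    tag = archive.read("tags/block_000742").decode()
    print(tag)   # tag-for-742
```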