• Title/Summary/Keyword: cache storage


Flash-Aware Transaction Management Scheme for Flash Memory Database (플래시 메모리 데이터베이스를 위한 플래시인지 트랜잭션 관리 기법)

  • Byun Si Woo
    • Journal of Internet Computing and Services
    • /
    • v.6 no.1
    • /
    • pp.65-72
    • /
    • 2005
  • Flash memories are among the best media for supporting portable computers in mobile computing environments. Their non-volatility, low power consumption, and fast read access times are sufficient grounds to adopt flash memory as a major database storage component in portable computers. However, the traditional transaction management scheme needs to be improved because flash operations are relatively slow compared to RAM. To achieve this goal, we devise a new scheme called flash-aware transaction management (FATM). FATM improves transaction performance by exploiting SRAM and a write cache (W-Cache). We also propose a simulation model to evaluate the performance of FATM. Based on the results of the performance evaluation, we conclude that FATM outperforms the traditional scheme.
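
A minimal sketch of the write-caching idea the abstract describes: transaction writes are absorbed in fast SRAM and flushed to slow flash only at commit, so aborts never touch flash. All names, costs, and the coalescing behavior here are illustrative assumptions, not FATM's actual design.

```python
# Hypothetical W-Cache sketch: buffer transaction writes in SRAM,
# flush to flash only on commit. Costs are assumed, for illustration.
FLASH_WRITE_COST = 200  # microseconds (assumed)
SRAM_WRITE_COST = 1     # microseconds (assumed)

class WCache:
    def __init__(self):
        self.pending = {}          # page id -> data, held in SRAM
        self.cost = 0

    def write(self, page, data):
        self.pending[page] = data  # repeated writes to a page coalesce
        self.cost += SRAM_WRITE_COST

    def commit(self, flash):
        for page, data in self.pending.items():
            flash[page] = data     # one flash write per distinct page
            self.cost += FLASH_WRITE_COST
        self.pending.clear()

    def abort(self):
        self.pending.clear()       # nothing was ever written to flash

flash = {}
tx = WCache()
tx.write(7, "a"); tx.write(7, "b")  # two cheap SRAM writes, coalesced
tx.commit(flash)                    # a single expensive flash write
print(flash, tx.cost)
```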


AIOPro: A Fully-Integrated Storage I/O Profiler for Android Smartphones (AIOPro: 안드로이드 스마트폰을 위한 통합된 스토리지 I/O 분석도구)

  • Hahn, Sangwook Shane;Yee, Inhyuk;Ryu, Donguk;Kim, Jihong
    • Journal of KIISE
    • /
    • v.44 no.3
    • /
    • pp.232-238
    • /
    • 2017
  • Application response time is critical to the end-user experience on Android smartphones. Because recent smartphones have plentiful computing resources, storage I/O response time has become a major factor in application response time. However, existing storage I/O trace tools for Android and Linux expose information only for a specific I/O layer, which makes it difficult to combine I/O information across layers and limits their usefulness to application developers and researchers. In this paper, we propose a novel storage I/O trace tool for Android, called AIOPro (Android I/O profiler). It traces storage I/O through the application, Android platform, system call, virtual file system, native file system, page cache, block layer, SCSI layer, and device driver. It then combines the storage I/O information from these layers by linking records through file information and physical addresses. Our evaluations with real smartphone usage scenarios and benchmarks show that AIOPro can track storage I/O information across all layers without any data loss, at under 0.1% system overhead.
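
A hedged sketch of the core linking idea: events recorded independently at each I/O layer are joined into one record by a shared key, here file name plus physical block address. The event fields and layer names are illustrative assumptions, not AIOPro's actual trace format.

```python
# Join per-layer I/O trace events on (file, physical block address).
from collections import defaultdict

events = [  # toy trace records from different layers (assumed schema)
    {"layer": "vfs",       "file": "db.sqlite", "pba": 4096, "t": 0.10},
    {"layer": "pagecache", "file": "db.sqlite", "pba": 4096, "t": 0.12},
    {"layer": "block",     "file": "db.sqlite", "pba": 4096, "t": 0.15},
    {"layer": "driver",    "file": "db.sqlite", "pba": 4096, "t": 0.21},
]

merged = defaultdict(dict)
for ev in events:
    merged[(ev["file"], ev["pba"])][ev["layer"]] = ev["t"]

for key, layers in merged.items():
    # Ordering by timestamp exposes the per-layer latency breakdown.
    ordered = sorted(layers.items(), key=lambda kv: kv[1])
    print(key, ordered)
```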

Performance Evaluation of Disk I/O for Web Proxy Servers (웹 프락시 서버의 디스크 I/O 성능 평가)

  • Shim Jong-Ik
    • The KIPS Transactions:PartC
    • /
    • v.12C no.4 s.100
    • /
    • pp.603-608
    • /
    • 2005
  • Disk I/O is a major performance bottleneck of web proxy servers. Most of today's web proxy servers are designed to run on top of a general-purpose file system. But a general-purpose file system cannot efficiently handle the web cache workload of many small files, degrading the performance of the entire web proxy server. In this paper we evaluate the potential of raw disk to reduce the disk I/O overhead of web proxy servers. To show this potential, we design a storage management system called the Block-structured Storage Management System (BSMS), and we implement a web proxy server that incorporates BSMS into Squid. Comprehensive experimental evaluations show that raw disk can significantly improve disk I/O performance for web proxy servers.
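
A toy sketch of block-structured storage on a raw device: cached web objects are packed into fixed-size blocks addressed by block number, with no per-file metadata or directory lookups. The layout and the in-RAM index are assumptions made for illustration, not BSMS's actual on-disk format.

```python
# Simulate a raw partition as one flat byte array of fixed-size blocks.
BLOCK_SIZE = 4096

class RawDiskStore:
    def __init__(self, num_blocks):
        self.disk = bytearray(num_blocks * BLOCK_SIZE)  # stands in for a raw disk
        self.free = list(range(num_blocks))
        self.index = {}                                 # URL -> (block, length), in RAM

    def put(self, url, body: bytes):
        assert len(body) <= BLOCK_SIZE, "larger objects would span blocks"
        blk = self.free.pop()
        off = blk * BLOCK_SIZE
        self.disk[off:off + len(body)] = body
        self.index[url] = (blk, len(body))

    def get(self, url):
        blk, length = self.index[url]
        off = blk * BLOCK_SIZE
        return bytes(self.disk[off:off + length])

store = RawDiskStore(1024)
store.put("http://example.com/", b"<html>hello</html>")
print(store.get("http://example.com/"))
```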

Adaptive Deadline-aware Scheme (ADAS) for Data Migration between Cloud and Fog Layers

  • Khalid, Adnan;Shahbaz, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1002-1015
    • /
    • 2018
  • The advent of the Internet of Things (IoT) and the evident inadequacy of Cloud networks for managing numerous end nodes have brought about a shift of paradigm, giving birth to Fog computing. Fog computing is an extension of Cloud computing that places Cloud resources at the edge of the network, closer to the user. Cloud computing has become one of the essential needs of people on the Internet, but with the emerging concept of IoT, traditional Clouds seem inadequate. IoT entails extremely low latency, and distant Cloud servers unknown to the user are unsuitable for it. With Fog computing, Fog devices are installed closer to the user and provide immediate storage for frequently needed data. This paper discusses data migration between different storage types, especially between Cloud devices, and then presents a mechanism to migrate data between the Cloud and Fog layers. We call this mechanism the Adaptive Deadline-Aware Scheme (ADAS) for data migration between Cloud and Fog. We demonstrate that latency-sensitive "hot" data can be accessed and processed through the proposed ADAS more efficiently than with a traditional Cloud setup.
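
A minimal sketch of a deadline-aware placement rule in the spirit of the abstract: data whose access deadline is tighter than the Cloud round-trip must live in the Fog layer. The latency figures and the decision thresholds are assumptions for illustration, not ADAS's actual policy.

```python
# Place an object in the cheapest tier that can still meet its deadline.
CLOUD_RTT_MS = 120   # assumed round-trip time to a distant Cloud server
FOG_RTT_MS = 10      # assumed round-trip time to a nearby Fog node

def place(deadline_ms):
    """Return the lowest tier that satisfies the access deadline."""
    if deadline_ms < FOG_RTT_MS:
        return "reject"   # not servable even from the Fog
    if deadline_ms < CLOUD_RTT_MS:
        return "fog"      # latency-sensitive "hot" data stays at the edge
    return "cloud"        # deadline loose enough for the distant Cloud

for d in (5, 50, 500):
    print(f"deadline {d} ms -> {place(d)}")
```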

Neighbor Caching for P2P Applications in Multi-hop Wireless Ad Hoc Networks (멀티 홉 무선 애드혹 네트워크에서 P2P 응용을 위한 이웃 캐싱)

  • 조준호;오승택;김재명;이형호;이준원
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.5
    • /
    • pp.631-640
    • /
    • 2003
  • Because of multi-hop wireless communication, P2P applications in ad hoc networks suffer poor performance. We propose a neighbor caching strategy to overcome this shortcoming and show that it is more efficient than self-caching, in which each node stores data only in its own cache. With neighbor caching, a node can instantly extend its cache by borrowing storage from idle neighbors, avoiding multi-hop communication with a data source far away from itself. We also present a ranking-based prediction that selects the most appropriate neighbor in which to store data. A node using this prediction can choose a neighbor that is likely to keep the data for a long time and can avoid caching low-ranked data, so the prediction improves the throughput of neighbor caching. In the simulation results, we observe that neighbor caching performs better as the network grows larger, idle times grow longer, and cache sizes grow smaller. We also show that the ranking-based prediction is an adaptive algorithm that adjusts how often data moves to neighbors, making neighbor caching flexible with respect to the idleness of nodes.
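
An illustrative sketch of ranking-based neighbor selection: rank each neighbor by how long it is expected to stay idle and push a data item to the best-ranked one with free cache space. The scoring function and the tuple fields are assumptions for the sketch, not the paper's actual ranking model.

```python
# Pick the neighbor most likely to keep borrowed data alive the longest.
def best_neighbor(neighbors):
    """neighbors: list of (node_id, expected_idle_time_s, free_cache_slots)."""
    candidates = [n for n in neighbors if n[2] > 0]   # must have free space
    if not candidates:
        return None
    # Prefer the neighbor expected to remain idle the longest.
    return max(candidates, key=lambda n: n[1])[0]

neighbors = [("n1", 30.0, 0), ("n2", 120.0, 4), ("n3", 45.0, 2)]
print(best_neighbor(neighbors))  # -> "n2"
```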

Improving Flash Translation Layer for Hybrid Flash-Disk Storage through Sequential Pattern Mining based 2-Level Prefetching Technique (하이브리드 플래시-디스크 저장장치용 Flash Translation Layer의 성능 개선을 위한 순차패턴 마이닝 기반 2단계 프리패칭 기법)

  • Chang, Jae-Young;Yoon, Un-Keum;Kim, Han-Joon
    • The Journal of Society for e-Business Studies
    • /
    • v.15 no.4
    • /
    • pp.101-121
    • /
    • 2010
  • This paper presents an intelligent prefetching technique that significantly improves the performance of hybrid flash-disk storage, a combination of flash memory and hard disk. Since the flash memory embedded in a hybrid device is much faster than the hard disk for I/O operations, it can be utilized as a 'cache' space to improve system performance. The basic strategy is to prefetch using sequential pattern mining, which extracts object access patterns from historical access sequences. We use two techniques to enhance the performance of hybrid storage with prefetching. One is to modify the FAST algorithm for mapping the flash memory. The other is to extend the unit of prefetching to the block level as well as the file level, to utilize flash memory space effectively. To evaluate the proposed technique, we perform experiments using synthetic data and real UCC data, and we demonstrate its usability.
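
A simplified sketch of pattern-driven prefetching: count how often one object follows another in the access history, and on each access prefetch the most frequent successor into the flash cache. Real sequential pattern mining handles longer, gapped patterns; this first-order successor model is an assumption made to keep the sketch short.

```python
# Mine first-order successor frequencies, then prefetch the likeliest next object.
from collections import Counter, defaultdict

history = ["a", "b", "c", "a", "b", "d", "a", "b", "c"]

successors = defaultdict(Counter)
for cur, nxt in zip(history, history[1:]):
    successors[cur][nxt] += 1          # count observed transitions

def prefetch_after(obj):
    """Return the most frequent successor of obj, or None if unseen."""
    if successors[obj]:
        return successors[obj].most_common(1)[0][0]
    return None

print(prefetch_after("a"))  # -> "b"
print(prefetch_after("b"))  # -> "c"
```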

The Development of a Highly Effective and Non-stop File System for High Performance Computing (High Performance Computing 환경을 위한 고성능, 무정지 파일시스템 구현)

  • Park, Yeong-Bae;Choe, Seung-Hwan;Lee, Sang-Ho;Kim, Gyeong-Su;Gong, Yong-Jun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2004.11a
    • /
    • pp.395-401
    • /
    • 2004
  • In today's network-centric computing and enterprise environments, it is essential to transmit data reliably at very high rates. Until now, client/server file systems such as NFS (Network File System) and AFS (Andrew File System) have met many demands, but they cannot satisfy the needs of today's scalable high-performance computing environments. Not only performance but also redundancy of the data-sharing service has become a serious problem. With NFS, locking and cache issues force file-system restarts and cause problems when it is used simply with IP takeover for high-availability service. AFS provides file-sharing redundancy, but only once redundant storage and supporting equipment are in place. Lustre is an open-source cluster file system developed to meet both demands. Lustre consists of three types of subsystems: MDS (Meta-Data Servers), which offer metadata services; OSTs (Object Storage Targets), which provide file I/O; and Lustre clients, which interact with the OSTs and MDS. These subsystems exchange messages to provide a scalable, high-performance file-system service. In this paper, we compare the transfer speed of gigabyte-sized files between Lustre and NFS as the number of concurrent users varies, and we demonstrate the high availability of the file system by removing one or more OSTs during operation.
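
A toy model of the architecture just described: a file is striped across several OSTs, and the MDS records only which OSTs hold which stripes. This is a schematic illustration under assumed stripe sizes and round-robin placement, not Lustre's actual protocol or on-disk layout.

```python
# Stripe a file across OSTs; the MDS keeps only the layout metadata.
STRIPE_SIZE = 4  # bytes, deliberately tiny so the example stays readable

class OST:
    def __init__(self):
        self.objects = {}   # (file name, stripe index) -> stripe data

def write_file(mds, osts, name, data: bytes):
    stripes = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    layout = []
    for i, stripe in enumerate(stripes):
        ost_id = i % len(osts)              # round-robin placement (assumed)
        osts[ost_id].objects[(name, i)] = stripe
        layout.append(ost_id)
    mds[name] = layout                      # MDS stores metadata, never file data

def read_file(mds, osts, name):
    return b"".join(osts[o].objects[(name, i)] for i, o in enumerate(mds[name]))

mds, osts = {}, [OST(), OST(), OST()]
write_file(mds, osts, "data.bin", b"hello lustre striping")
print(read_file(mds, osts, "data.bin"))
```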


Social-Aware Collaborative Caching Based on User Preferences for D2D Content Sharing

  • Zhang, Can;Wu, Dan;Ao, Liang;Wang, Meng;Cai, Yueming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.1065-1085
    • /
    • 2020
  • With the rapid growth of content demands, device-to-device (D2D) content sharing is exploited to effectively improve the quality of service for users. Given users' limited storage space and varied content demands, caching schemes are significant. However, most of them ignore the influence of asynchronous content reuse and the selfishness of users. In this work, user preferences are defined by exploiting user-oriented content popularity and the current caching situation; further, we propose the social-aware rate, which comprehensively reflects the achievable content download rate as affected by social ties, caching indicators, and user preferences. Guided by this, we model the collaborative caching problem as a trade-off between the redundancy of cached contents and the cache hit ratio, with the goal of maximizing the sum of the social-aware rate under the constraint of limited storage space. Due to its intractability, the problem is computationally reduced to the maximization of a monotone submodular function subject to a matroid constraint. Subsequently, two social-aware collaborative caching algorithms are designed by leveraging the standard and continuous greedy algorithms, respectively, which are proven to achieve different approximation ratios in polynomial time. We present simulation results to illustrate the performance of our schemes.
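
The reduction to monotone submodular maximization is what makes greedy algorithms applicable here. Below is a generic sketch of the standard greedy under a cardinality constraint (the simplest matroid): repeatedly add the item with the largest marginal gain. The coverage-style objective and all names are placeholders, not the paper's social-aware rate; the standard greedy is known to achieve a (1 - 1/e) approximation for cardinality constraints and 1/2 for general matroids, which is why the continuous greedy is also used.

```python
# Standard greedy for monotone submodular maximization, |S| <= k.
def greedy(candidates, k, gain):
    """gain(item, chosen) must return the marginal value of adding item."""
    chosen = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(c, chosen), default=None)
        if best is None or gain(best, chosen) <= 0:
            break
        chosen.add(best)
    return chosen

# Placeholder objective: each content covers a set of users; cache value is
# the number of distinct users covered (monotone and submodular).
covers = {"c1": {1, 2, 3}, "c2": {3, 4}, "c3": {5}, "c4": {1, 2}}

def gain(item, chosen):
    covered = set().union(*(covers[c] for c in chosen)) if chosen else set()
    return len(covers[item] - covered)

print(greedy(covers, k=2, gain=gain))  # -> {'c1', 'c2'}
```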

Analysis on the Effectiveness of the Filter Buffer for Low Power NAND Flash Memory (저전력 NAND 플래시 메모리를 위한 필터 버퍼의 효율성 분석)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.7 no.4
    • /
    • pp.201-207
    • /
    • 2012
  • NAND flash memory is currently widely used in consumer storage devices due to its non-volatility, stability, economic feasibility, low power usage, durability, and high density. However, high-capacity NAND flash memory suffers from high power consumption and low performance. In conventional memory research, a hierarchical filter mechanism can achieve an effective performance improvement in terms of power consumption. To find the best filter structure for NAND flash memory, we compared a direct-mapped filter, a victim filter, a fully associative filter, and a 4-way set-associative filter in our performance analysis. According to the simulation results, the fully associative filter buffer with a 128-byte fetch size obtains the best performance among the compared filter structures, reducing the energy-delay product (EDP) by about 93% compared to conventional NAND flash memory.
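
As a rough illustration of the winning configuration, here is a minimal simulation of a fully associative filter buffer with 128-byte lines in front of NAND flash. The capacity and the LRU replacement policy are assumptions made for the sketch, not parameters taken from the paper.

```python
# Fully associative filter buffer: hits are served from the filter,
# misses would cost a NAND access and fetch one 128-byte line.
from collections import OrderedDict

LINE = 128      # fetch size in bytes
ENTRIES = 8     # filter capacity in lines (assumed)

class FilterBuffer:
    def __init__(self):
        self.lines = OrderedDict()   # tag -> line; insertion order tracks LRU
        self.hits = self.misses = 0

    def read(self, addr):
        tag = addr // LINE
        if tag in self.lines:
            self.hits += 1
            self.lines.move_to_end(tag)         # refresh LRU position
        else:
            self.misses += 1                    # would trigger a NAND read
            if len(self.lines) >= ENTRIES:
                self.lines.popitem(last=False)  # evict least recently used
            self.lines[tag] = None

buf = FilterBuffer()
for a in [0, 4, 130, 0, 260, 4]:
    buf.read(a)
print(buf.hits, buf.misses)  # -> 3 3
```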

Performance Relationship Analysis in Map Block Number of NAND Flash Storage Device Using Map Cache Techniques (맵 캐시 기법을 사용하는 낸드 플래시 저장장치의 맵 블록 개수에 따른 성능 관계 분석)

  • Lee, Daeyong;Song, Yong Ho
    • Annual Conference of KIPS
    • /
    • 2016.10a
    • /
    • pp.22-25
    • /
    • 2016
  • A NAND flash storage device that uses a map cache scheme requires space to store its map data. This space is called the map block area, and it occupies some of the NAND blocks otherwise used for system maintenance and performance improvement. If there are too many map blocks, too few blocks remain for system maintenance, and overall performance declines. If there are too few map blocks, however, the operations required to maintain the full map data are performed excessively, and performance also drops sharply. This paper analyzes how performance varies with the number of map blocks and proposes an optimal number of map blocks.
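
A schematic sketch of the trade-off the paper analyzes: an LRU map cache holds some logical-to-physical mapping pages in RAM, and every miss costs a read from a map block in NAND, so too little map space inflates `map_reads`. Capacities, page sizes, and costs below are illustrative assumptions, not values from the paper.

```python
# DFTL-style map cache sketch: cache mapping pages in RAM with LRU.
from collections import OrderedDict

MAP_CACHE_PAGES = 4      # mapping pages that fit in RAM (assumed)
ENTRIES_PER_PAGE = 256   # logical pages mapped per mapping page (assumed)

cache = OrderedDict()    # mapping-page id -> loaded flag; order tracks LRU
map_reads = 0

def lookup(lpn):
    """Translate a logical page number, loading its mapping page on a miss."""
    global map_reads
    mpage = lpn // ENTRIES_PER_PAGE
    if mpage in cache:
        cache.move_to_end(mpage)          # refresh LRU position
    else:
        map_reads += 1                    # a NAND read from a map block
        if len(cache) >= MAP_CACHE_PAGES:
            cache.popitem(last=False)     # evict the coldest mapping page
        cache[mpage] = True

for lpn in [0, 10, 300, 600, 0, 5000, 10]:
    lookup(lpn)
print(map_reads)  # -> 4
```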