• Title/Summary/Keyword: file prefetching


Fips : Dynamic File Prefetching Scheme based on File Access Patterns (Fips : 파일 접근 유형을 고려한 동적 파일 선반입 기법)

  • Lee, Yoon-Young;Kim, Chei-Yol;Seo, Dae-Wha
    • Journal of KIISE:Computer Systems and Theory / v.29 no.7 / pp.384-393 / 2002
  • A parallel file system is normally used to handle the heavy file requests of parallel applications in a cluster system, and prefetching is useful for improving file system performance. This paper proposes a new prefetching method, Fips (dynamic File Prefetching Scheme based on file access patterns), that is particularly suitable for parallel scientific applications and multimedia web services in a parallel file system. The proposed method introduces a dynamic prefetching scheme that predicts data blocks precisely at run time even when the file access patterns are irregular. In addition, it includes an algorithm that uses the currently available I/O bandwidth to determine whether and when prefetching should be performed. Experimental results confirm that using the proposed prefetching policy in a parallel file system yields higher file system performance.
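
A minimal sketch of the bandwidth-gated prefetch decision described in this abstract, in Python. The function names, thresholds, and the naive stride predictor are illustrative assumptions, not the Fips algorithm itself:

```python
# Illustrative sketch: prefetching is allowed only while enough spare I/O
# bandwidth remains (not the paper's code).

def should_prefetch(measured_io_mbps: float, peak_io_mbps: float,
                    reserve_fraction: float = 0.2) -> bool:
    """Return True if the spare bandwidth exceeds the reserved fraction of peak."""
    spare = peak_io_mbps - measured_io_mbps
    return spare > reserve_fraction * peak_io_mbps

def predict_next_blocks(recent_blocks: list[int], depth: int = 4) -> list[int]:
    """Naive run-time predictor: extend the stride seen in the last two accesses."""
    if len(recent_blocks) < 2:
        return []
    stride = recent_blocks[-1] - recent_blocks[-2]
    return [recent_blocks[-1] + stride * i for i in range(1, depth + 1)]

# Example: demand bandwidth is low, so the predicted blocks may be prefetched.
if should_prefetch(measured_io_mbps=60.0, peak_io_mbps=120.0):
    print(predict_next_blocks([100, 104, 108]))   # -> [112, 116, 120, 124]
```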

Prefetching System based on File Access Pattern Applicable to Multimedia Prefetching Scheme (멀티미디어 선반입에 적용 가능한 파일 액세스 패턴 기반의 선반입 시스템)

  • Hwang-Bo, Jun-Hyoung;Seo, Dae-Hwa
    • Journal of Korea Multimedia Society / v.4 no.6 / pp.489-499 / 2001
  • This paper presents the SIC (Size-Interval-Count) prefetching system, which can record the file access patterns of applications within a relatively small amount of memory by exploiting the repetitiveness of those patterns. The SICPS (SIC Prefetching System) belongs to the knowledge-based prefetching methods, which achieve high accuracy in predicting the future accesses of applications. The proposed system uses the recorded file access patterns, referred to as "SIC access pattern information", to correctly predict the future accesses of the applications. The proposed prefetching system improved the response time by about 40% compared to a general file system and showed remarkable memory efficiency compared to previous knowledge-based prefetching methods.
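
A minimal sketch of how a repetitive access trace can be compressed into (Size, Interval, Count) records, as this abstract describes. The field names and the merging rule are assumptions for illustration, not the SICPS implementation:

```python
# Illustrative sketch: compress a file-access trace into (Size, Interval, Count)
# records by exploiting repetitiveness.
from dataclasses import dataclass

@dataclass
class SICRecord:
    size: int       # bytes read per request
    interval: int   # gap between the start offsets of consecutive requests
    count: int      # how many times the (size, interval) pair repeated

def build_sic_pattern(offsets: list[int], sizes: list[int]) -> list[SICRecord]:
    records: list[SICRecord] = []
    for i in range(1, len(offsets)):
        size = sizes[i]
        interval = offsets[i] - offsets[i - 1]
        last = records[-1] if records else None
        if last and last.size == size and last.interval == interval:
            last.count += 1          # same repeating pattern: just bump the counter
        else:
            records.append(SICRecord(size, interval, 1))
    return records

# A strided read pattern collapses into a single compact record.
print(build_sic_pattern([0, 4096, 8192, 12288], [4096] * 4))
# -> [SICRecord(size=4096, interval=4096, count=3)]
```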


Prefetching Policy based on File Access Pattern and Cache Area (파일 접근 패턴과 캐쉬 영역을 고려한 선반입 기법)

  • Lim, Jae-Deok;Hwang-Bo, Jun-Hyeong;Koh, Kwang-Sik;Seo, Dae-Hwa
    • The KIPS Transactions:PartA / v.8A no.4 / pp.447-454 / 2001
  • Various caching and prefetching algorithms have been investigated to identify an effective method for improving the performance of I/O devices. A prefetching algorithm decreases the processing time of a system by reducing the number of disk accesses when an I/O is needed. This paper proposes the AMBA prefetching method, an extended version of the OBA prefetching method. The AMBA prefetching method prefetches blocks continuously as long as the disk bandwidth is sufficient, so efficient prefetching can be expected even under an excessive data request rate. To prevent cache pollution, it limits the number of prefetched data blocks to within the cache area. The method can be implemented in a user-level file system on the Linux operating system. In particular, the proposed prefetching policy improves system performance by about 30∼40% for large files that are accessed sequentially.
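
A minimal sketch of the idea of prefetching continuously while disk bandwidth remains available, bounded by a cache-area limit to avoid pollution. The callables and the limit are assumptions, not the paper's AMBA code:

```python
# Illustrative sketch: prefetch consecutive blocks until the bandwidth runs out
# or the per-file cache-area limit is reached.

def amba_style_prefetch(next_block: int,
                        bandwidth_available,          # callable: () -> bool
                        read_block,                   # callable: (block_no) -> None
                        max_prefetch_blocks: int) -> int:
    """Returns how many blocks were actually prefetched."""
    prefetched = 0
    while prefetched < max_prefetch_blocks and bandwidth_available():
        read_block(next_block + prefetched)
        prefetched += 1
    return prefetched

# Example with stub callables: plenty of bandwidth, but only 8 blocks of cache area.
count = amba_style_prefetch(next_block=100,
                            bandwidth_available=lambda: True,
                            read_block=lambda b: None,
                            max_prefetch_blocks=8)
print(count)   # -> 8
```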


File Access Pattern Collection Scheme based on Repetitiveness (반복성을 고려한 파일 액세스 패턴 수집 기법)

  • Hwang-Bo, Jun-Hyoung;Seok, Seong-U;Seo, Dae-Hwa
    • Journal of KIISE:Computer Systems and Theory / v.28 no.12 / pp.674-684 / 2001
  • This paper presents the SIC (Size-Interval-Count) prefetching scheme, which can record the file access patterns of applications within a relatively small amount of memory by exploiting the repetitiveness of those patterns. Several knowledge-based prefetching methods were recently introduced that achieve high accuracy in predicting the future accesses of applications: they record the access patterns of applications and use the recorded access pattern information to predict which blocks will be requested next. Yet, these methods require too much memory space. Accordingly, the proposed method uses a compact representation of the recorded file access patterns, referred to as "SIC access pattern information", to correctly predict the future accesses of the applications. The proposed prefetching method improved the response time by about 40% compared to a general file system and showed remarkable memory efficiency compared to previous knowledge-based prefetching methods.


Caching and Prefetching Policies Using Program Page Reference Patterns on a File System Layer for NAND Flash Memory (NAND 플래시 메모리용 파일 시스템 계층에서 프로그램의 페이지 참조 패턴을 고려한 캐싱 및 선반입 정책)

  • Park, Sang-Oh;Kim, Kyung-San;Kim, Sung-Jo
    • The KIPS Transactions:PartA / v.14A no.4 / pp.235-244 / 2007
  • Caching and prefetching policies have been used in most computer systems to compensate for the speed difference between primary memory and secondary storage devices. In this paper, we design and implement a Flash Cache Core Module (FCCM) on YAFFS, which operates at the file system layer for NAND flash memory. The FCCM is independent of the underlying kernel in order to support stability and compatibility. We also implement the Dirty-Last memory replacement technique, which considers the characteristics of flash memory, and a waiting queue for pages to be prefetched according to page hits. The FCCM reduced the number of I/Os and the amount of prefetched pages by a maximum of 55% (20% on average) and a maximum of 55% (24% on average), respectively, compared with the caching and prefetching policies of Linux.
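
A minimal sketch of a Dirty-Last style victim selection, where clean pages are evicted before dirty ones because flushing a dirty page to NAND flash is expensive. The LRU bookkeeping and class layout are assumptions, not the FCCM implementation:

```python
# Illustrative sketch: evict least-recently-used clean pages first; fall back to a
# dirty page only when every cached page is dirty.
from collections import OrderedDict

class DirtyLastCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages: "OrderedDict[int, bool]" = OrderedDict()  # page_no -> dirty flag

    def access(self, page_no: int, dirty: bool = False) -> None:
        if page_no in self.pages:
            dirty = dirty or self.pages.pop(page_no)
        elif len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page_no] = dirty          # most recently used sits at the end

    def _evict(self) -> None:
        for page_no, is_dirty in self.pages.items():
            if not is_dirty:                 # least recently used clean page
                del self.pages[page_no]
                return
        self.pages.popitem(last=False)       # all dirty: evict the LRU page

cache = DirtyLastCache(capacity=2)
cache.access(1, dirty=True)
cache.access(2, dirty=False)
cache.access(3)                              # evicts clean page 2, keeps dirty page 1
print(list(cache.pages))                     # -> [1, 3]
```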

Caching and Prefetching Policies Using Program Page Reference Patterns on a File System Layer for NAND Flash Memory (NAND 플래시 메모리용 파일 시스템 계층에서 프로그램의 페이지 참조 패턴을 고려한 캐싱 및 선반입 정책)

  • Kim, Gyeong-San;Kim, Seong-Jo
    • Proceedings of the IEEK Conference / 2006.06a / pp.777-778 / 2006
  • In this paper, we design and implement a Flash Cache Core Module (FCCM) which operates on the YAFFS NAND flash file system. The FCCM applies a memory replacement policy and a prefetching policy based on the page reference patterns of applications. We also implement the Clean-First memory replacement technique, which considers the characteristics of flash memory, and decide according to page hits whether to apply the prefetched waiting area. The FCCM decreased I/O frequency by up to 37% compared with the Linux caching and prefetching policy. It also used less memory for prefetching (maximum 24%, average 16%) compared with the Linux kernel.


Prefetching Mechanism using the User's File Access Pattern Profile in Mobile Computing Environment (이동 컴퓨팅 환경에서 사용자의 FAP 프로파일을 이용한 선인출 메커니즘)

  • Choi, Chang-Ho;Kim, Myung-Il;Kim, Sung-Jo
    • Journal of KIISE:Information Networking / v.27 no.2 / pp.138-148 / 2000
  • In the mobile computing environment, in order to make copies of important files available when disconnected, the mobile host (client) must store them in its local cache while the connection is maintained. In this paper, we propose a prefetching mechanism that allows the client to save files that may be accessed in the near future. Our mechanism utilizes an analyzer, a prefetch-list producer, and a prefetch manager. The analyzer records the file access patterns of the user in a FAP (File Access Patterns) profile. Using the profile, the prefetch-list producer creates the prefetch list, and the prefetch manager requests the files on this list from the file server. We set the parameter TRP (Threshold of Reference Probability) to ensure that only reasonably related files are prefetched: the prefetch-list producer adds a file to the prefetch list only if its reference probability is greater than the TRP. We also use the parameter TACP (Threshold of Access Counter Probability) to reduce the hoarding size required to store the prefetch list. Finally, we measure metrics such as the cache hit ratio, the number of files referenced by the client after disconnection, and the hoarding size. The simulation results show that the performance of our mechanism is superior to that of the LRU caching mechanism. Our results also show that prefetching with the TACP can reduce the hoarding size while maintaining performance similar to prefetching without the TACP.
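
A minimal sketch of building a prefetch (hoard) list from a FAP-style profile using a reference-probability threshold, as the abstract describes for the TRP. The profile format and the example threshold value are assumptions:

```python
# Illustrative sketch: keep only files whose reference probability exceeds the TRP.

def build_prefetch_list(fap_profile: dict[str, int], trp: float = 0.3) -> list[str]:
    """fap_profile maps file name -> access count from the user's history."""
    total = sum(fap_profile.values())
    if total == 0:
        return []
    return [name for name, count in fap_profile.items()
            if count / total > trp]          # reference probability > TRP

profile = {"/home/u/report.doc": 12, "/home/u/notes.txt": 6, "/tmp/scratch": 2}
print(build_prefetch_list(profile, trp=0.25))
# -> ['/home/u/report.doc', '/home/u/notes.txt']
```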


Efficient Prefetching Scheme for Hybrid Hard Disk (하이브리드 하드디스크를 위한 효율적인 선반입 기법)

  • Kim, Jeong-Won
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.6 no.5 / pp.665-671 / 2011
  • The competitiveness of the hybrid hard disk drive (H-HDD) against the solid state disk (SSD) comes from both lower power consumption and higher read speed. This paper suggests a prefetching scheme that can improve the performance of the Non-Volatile Cache (NVCache) memory installed on the H-HDD by prefetching disk blocks as well as files into the NVCache. The proposed scheme copies highly used data, such as booting files, to the NVCache at file granularity and copies frequently accessed blocks to the NVCache at block granularity. This prefetching is performed during the idle time of the disk queue, and the priorities of the prefetch target blocks are based on both temporal and spatial locality. Experimental results show that the suggested method can improve the response time of the H-HDD and also lower its power consumption.
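
A minimal sketch of ranking NVCache prefetch candidates by combining temporal locality (recency) and spatial locality (distance from the current head position), to be run while the disk queue is idle. The scoring formula and weights are assumptions, not the paper's policy:

```python
# Illustrative sketch: score prefetch candidates by recency and proximity, then
# pick the top few within the available NVCache budget.
import time

def priority(block: dict, now: float, head_pos: int,
             w_time: float = 0.5, w_space: float = 0.5) -> float:
    recency = 1.0 / (1.0 + now - block["last_access"])        # higher = more recent
    proximity = 1.0 / (1.0 + abs(block["lba"] - head_pos))    # higher = closer
    return w_time * recency + w_space * proximity

def pick_prefetch_targets(candidates: list[dict], head_pos: int, budget: int) -> list[dict]:
    """Called only when the disk queue is idle; returns the top-priority blocks."""
    now = time.time()
    ranked = sorted(candidates, key=lambda b: priority(b, now, head_pos), reverse=True)
    return ranked[:budget]

blocks = [{"lba": 1000, "last_access": time.time() - 1},
          {"lba": 90000, "last_access": time.time() - 500}]
print(pick_prefetch_targets(blocks, head_pos=1200, budget=1))
# -> the recent, nearby block (lba 1000) wins
```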

Article Data Prefetching Policy using User Access Patterns in News-On-demand System (주문형 전자신문 시스템에서 사용자 접근패턴을 이용한 기사 프리패칭 기법)

  • Kim, Yeong-Ju;Choe, Tae-Uk
    • The Transactions of the Korea Information Processing Society / v.6 no.5 / pp.1189-1202 / 1999
  • Compared with VOD data, NOD article data has the following characteristics: it is created at any time, has a short life cycle, is selected by a user as several articles rather than a single one, and has high temporal locality of access. Because of these intrinsic features, the user access patterns of NOD article data are different from those of VOD, and building an NOD system with existing VOD techniques leads to poor performance. In this paper, we analyze the log file of a currently running electronic newspaper, show that the popularity distribution of NOD articles differs from the Zipf distribution of VOD data, and suggest a new popularity model of NOD article data, the MS-Zipf (Multi-Selection Zipf) distribution, together with its approximate solution. We also present a life-cycle model of NOD article data, which captures the change of popularity over time. Using this life-cycle model, we develop the LLBF (Largest Life-cycle Based Frequency) prefetching algorithm and analyze its performance by simulation. The LLBF algorithm achieves a hit ratio similar to that of other prefetching algorithms such as LRU (Least Recently Used), while decreasing the number of data replacements in article prefetching and reducing the prefetching overhead on system performance. Using accurate user access patterns of NOD article data, we could correctly analyze the performance of an NOD server system and develop efficient policies for its implementation.
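
A minimal sketch of an LLBF-style ranking in which an article's access frequency is weighted by the remaining portion of its life cycle, so that fresh, popular articles are prefetched first. The linear decay model and field names are assumptions; the paper's MS-Zipf and life-cycle models are not reproduced here:

```python
# Illustrative sketch: weight observed access frequency by remaining life cycle.

def llbf_score(access_count: int, age_hours: float, life_cycle_hours: float) -> float:
    remaining = max(0.0, 1.0 - age_hours / life_cycle_hours)   # fraction of life left
    return access_count * remaining

articles = [
    {"id": "a1", "access_count": 300, "age_hours": 20.0, "life_cycle_hours": 24.0},
    {"id": "a2", "access_count": 120, "age_hours": 2.0,  "life_cycle_hours": 24.0},
]
ranked = sorted(articles,
                key=lambda a: llbf_score(a["access_count"], a["age_hours"],
                                         a["life_cycle_hours"]),
                reverse=True)
print([a["id"] for a in ranked])   # -> ['a2', 'a1']: the fresh article ranks first
```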


Improving Flash Translation Layer for Hybrid Flash-Disk Storage through Sequential Pattern Mining based 2-Level Prefetching Technique (하이브리드 플래시-디스크 저장장치용 Flash Translation Layer의 성능 개선을 위한 순차패턴 마이닝 기반 2단계 프리패칭 기법)

  • Chang, Jae-Young;Yoon, Un-Keum;Kim, Han-Joon
    • The Journal of Society for e-Business Studies / v.15 no.4 / pp.101-121 / 2010
  • This paper presents an intelligent prefetching technique that significantly improves the performance of hybrid flash-disk storage, a combination of flash memory and a hard disk. Since the flash memory embedded in a hybrid device is much faster than the hard disk in terms of I/O operations, it can be utilized as a 'cache' space to improve system performance. The basic strategy for prefetching is to utilize sequential pattern mining, with which we can extract the access patterns of objects from historical access sequences. We use two techniques for enhancing the performance of hybrid storage with prefetching: one is to modify the FAST algorithm for mapping the flash memory, and the other is to extend the unit of prefetching to the block level as well as the file level so as to utilize the flash memory space effectively. To evaluate the proposed technique, we perform experiments using synthetic data and real UCC data, and demonstrate its usability.
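
A minimal sketch of the two-level idea: mine frequently repeated successor relations from an access history, then emit prefetch candidates at both the file level and the block level. The simple pair counting below stands in for a full sequential-pattern-mining algorithm; all names are assumptions:

```python
# Illustrative sketch: count frequent successor pairs, then propose a file-level
# candidate plus a few block-level candidates from that file.
from collections import Counter

def mine_successors(history: list[str], min_support: int = 2) -> dict[str, str]:
    """Return, for each file, the file most often accessed right after it."""
    pair_counts = Counter(zip(history, history[1:]))
    best: dict[str, str] = {}
    for (a, b), n in pair_counts.most_common():
        if n >= min_support and a not in best:
            best[a] = b
    return best

def prefetch_candidates(current_file: str, rules: dict[str, str],
                        blocks_per_file: int = 4):
    """File-level candidate plus its first few blocks as block-level candidates."""
    nxt = rules.get(current_file)
    blocks = [(nxt, i) for i in range(blocks_per_file)] if nxt else []
    return nxt, blocks

history = ["a.mp4", "b.mp4", "a.mp4", "b.mp4", "c.mp4"]
rules = mine_successors(history)
print(prefetch_candidates("a.mp4", rules))
# -> ('b.mp4', [('b.mp4', 0), ('b.mp4', 1), ('b.mp4', 2), ('b.mp4', 3)])
```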