• Title/Summary/Keyword: page cache

A Selective Usage of Page Cache towards Low-Power Systems (저전력 시스템을 위한 선택적 페이지 캐쉬 사용 기법)

  • 송형근;차호정
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04a
    • /
    • pp.208-210
    • /
    • 2003
  • This paper proposes a selective page cache usage scheme for low power consumption in embedded systems. Because flash memory, widely used as the storage medium of embedded systems, stores data in compressed form, the page cache used in Linux works effectively. However, since flash memory consumes less power than RAM, the frequent RAM accesses caused by using the page cache increase power consumption. We therefore propose using the page cache selectively for low-power system operation. Based on an implementation on the Linux operating system, we show that execution speed improves and power consumption decreases (an illustrative sketch follows below).

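The bypass decision can be shown with a small model. The Python below is a minimal sketch, assuming hypothetical relative energy costs and a simple "cache only on re-reference" heuristic; the abstract does not specify the paper's actual decision rule.

```python
# Minimal model of a selective page cache: a page is copied into the RAM
# cache only when caching is expected to pay off. The energy constants and
# the "seen at least twice" heuristic are hypothetical placeholders.

FLASH_READ_ENERGY = 1.0   # relative energy per flash page read (assumed)
RAM_CACHE_ENERGY = 0.6    # relative energy to cache and later serve a hit (assumed)

class SelectivePageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}          # page_no -> data held in RAM
        self.seen = {}           # page_no -> number of accesses observed

    def read(self, page_no, flash_read):
        self.seen[page_no] = self.seen.get(page_no, 0) + 1
        if page_no in self.cache:
            return self.cache[page_no]            # RAM hit, flash untouched
        data = flash_read(page_no)                # bypass: read flash directly
        # Cache only pages that look hot enough for caching to pay off.
        if self.seen[page_no] >= 2 and RAM_CACHE_ENERGY < FLASH_READ_ENERGY:
            if len(self.cache) >= self.capacity:
                self.cache.pop(next(iter(self.cache)))   # naive FIFO eviction
            self.cache[page_no] = data
        return data
```

With this policy, the first access to a page is served straight from flash; a page begins to occupy RAM only once it has been referenced again, keeping cold pages out of the cache.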

Reducing the Overhead of Virtual Address Translation Process (가상주소 변환 과정에 대한 부담의 줄임)

  • U, Jong-Jeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.1
    • /
    • pp.118-126
    • /
    • 1996
  • Memory hierarchy is a useful mechanism for improving memory access speed and enlarging the program space by layering memories and separating program spaces from memory spaces. However, it needs at least two memory accesses for each data reference: a TLB (Translation Lookaside Buffer) access for the address translation and a data cache access for the desired data. If the cache size grows beyond the product of the page size and the cache associativity, it is difficult to access the TLB and the cache in parallel, which lengthens the critical timing path in the processor. To achieve such parallel accesses, we present a hybrid-mapped TLB that combines a direct-mapped TLB with a very small fully-associative TLB. The former reduces the TLB access time, while the latter removes the conflict misses of the former. Trace-driven simulation shows that under the given workloads the proposed TLB is effective even when the fully-associative TLB has only four entries, because the effect of its increased misses is offset by its speed benefits (a small model follows below).

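The parallel-probe structure is easy to model. The sketch below assumes one plausible arrangement, in which entries evicted from the direct-mapped TLB by a conflict move into the small fully-associative TLB (a victim-buffer style); the sizes and the FIFO policy are illustrative, not taken from the paper.

```python
# Model of a hybrid-mapped TLB: a direct-mapped main TLB plus a tiny
# fully-associative TLB, probed together. Entries evicted by a conflict
# move to the fully-associative side (an assumed victim-buffer arrangement).

class HybridTLB:
    def __init__(self, direct_entries=64, fa_entries=4):
        self.direct = [None] * direct_entries   # index -> (vpn, pfn)
        self.fa = []                            # small FIFO of (vpn, pfn)
        self.fa_entries = fa_entries

    def lookup(self, vpn):
        # Hardware probes both structures in parallel; we model that as
        # two checks within one lookup step.
        slot = self.direct[vpn % len(self.direct)]
        if slot is not None and slot[0] == vpn:
            return slot[1]
        for v, p in self.fa:
            if v == vpn:
                return p
        return None                             # miss: fall back to page-table walk

    def insert(self, vpn, pfn):
        idx = vpn % len(self.direct)
        victim = self.direct[idx]
        self.direct[idx] = (vpn, pfn)
        if victim is not None:                  # conflict: keep the loser nearby
            self.fa.append(victim)
            if len(self.fa) > self.fa_entries:
                self.fa.pop(0)
```

The direct-mapped side keeps the common-case lookup fast, while even a four-entry fully-associative side can absorb the conflict misses that two hot pages mapping to the same index would otherwise cause.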

WWCLOCK: Page Replacement Algorithm Considering Asymmetric I/O Cost of Flash Memory (WWCLOCK: 플래시 메모리의 비대칭적 입출력 비용을 고려한 페이지 교체 알고리즘)

  • Park, Jun-Seok;Lee, Eun-Ji;Seo, Hyun-Min;Koh, Kern
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.15 no.12
    • /
    • pp.913-917
    • /
    • 2009
  • Flash memories have asymmetric I/O costs for reads and writes in terms of latency and energy consumption, and the ratio of these costs depends on the type of storage. Moreover, it is becoming common to use two flash memories in one system, as internal memory and as an external memory card. For these reasons, buffer cache replacement algorithms should consider the I/O costs of each device as well as the likelihood of reference. This paper presents the WWCLOCK (Write-Weighted CLOCK) algorithm, which uses the devices' I/O costs directly, along with the recency and frequency of cache blocks, to select a victim to evict from the buffer cache. WWCLOCK can be used for a wide range of storage devices with different I/O costs and for systems that use two or more memory devices at the same time. In addition, its time and space complexity is low, comparable to the CLOCK algorithm. Trace-driven simulations show that the proposed algorithm reduces total I/O time by 36.2% on average compared with LRU (a sketch of the idea follows below).
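
A minimal sketch of a write-weighted CLOCK follows, assuming a hypothetical rule in which a dirty page earns extra passes of the clock hand in proportion to the device's write/read cost ratio; the paper's exact weighting formula is not given in the abstract.

```python
# Illustrative CLOCK variant in the spirit of WWCLOCK: the hand skips pages
# whose reference bit is set (recency), and gives dirty pages extra chances
# in proportion to the write/read cost ratio, since evicting them forces an
# expensive flash write. The weighting rule is an assumption.

class WriteWeightedClock:
    def __init__(self, capacity, write_read_cost_ratio=3):
        self.capacity = capacity
        self.ratio = write_read_cost_ratio
        self.frames = []        # list of dicts: {page, ref, dirty, credits}
        self.hand = 0

    def _evict(self):
        while True:
            f = self.frames[self.hand]
            if f["ref"]:
                f["ref"] = False                 # second chance (recency)
            elif f["dirty"] and f["credits"] > 0:
                f["credits"] -= 1                # extra chance: write-back is costly
            else:
                victim = self.frames.pop(self.hand)
                self.hand %= max(len(self.frames), 1)
                return victim["page"]
            self.hand = (self.hand + 1) % len(self.frames)

    def access(self, page, is_write=False):
        for f in self.frames:
            if f["page"] == page:
                f["ref"] = True
                f["dirty"] |= is_write
                return
        if len(self.frames) >= self.capacity:
            self._evict()
        self.frames.append({"page": page, "ref": True,
                            "dirty": is_write, "credits": self.ratio - 1})
```

With a cost ratio of 3, an unreferenced dirty page survives two extra sweeps of the hand before eviction, so clean pages, which are cheap to drop, tend to leave the cache first.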

Cache Coherency Schemes for Database Sharing Systems with Primary Copy Authority (주사본 권한을 지원하는 공유 데이터베이스 시스템을 위한 캐쉬 일관성 기법)

  • Kim, Shin-Hee;Cho, Haeng-Rae;Kim, Byeong-Uk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.6
    • /
    • pp.1390-1403
    • /
    • 1998
  • A database sharing system (DSS) is a system for high-performance transaction processing. In a DSS, the processing nodes are locally coupled via a high-speed network and share a common database at the disk level. Each node has a local memory, a separate copy of the operating system, and a DBMS. To reduce the number of disk accesses, each node caches database pages in its local memory buffer. However, since a page may be cached simultaneously in multiple nodes, cache consistency must be ensured so that every node can always access the latest version of pages. In this paper, we propose efficient cache consistency schemes for a DSS in which the database is logically partitioned using primary copy authority to reduce locking overhead. The proposed schemes improve performance by reducing the disk access overhead and the message overhead of maintaining cache consistency, and they perform well when database workloads vary dynamically (a simplified model follows below).

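The role of primary copy authority can be illustrated with a simplified model: each partition's primary node is the authority for page versions in that partition, so validating a cached page costs one message to the primary rather than a global lock. The classes and the version-check protocol below are illustrative assumptions, not the paper's exact schemes.

```python
# Simplified coherency check under primary copy authority: the database is
# partitioned, each partition's primary node tracks the latest committed
# version of its pages, and a caching node validates its copy against that
# version before using it. Details are illustrative.

class PrimaryNode:
    """Holds version (and, in the real system, lock) authority for one partition."""
    def __init__(self):
        self.versions = {}      # page_no -> latest committed version

    def latest_version(self, page_no):
        return self.versions.get(page_no, 0)

    def publish_update(self, page_no):
        self.versions[page_no] = self.latest_version(page_no) + 1

class CachingNode:
    def __init__(self, primary_of):
        self.primary_of = primary_of   # callback: page_no -> its partition's PrimaryNode
        self.cache = {}                # page_no -> (version, data)

    def read(self, page_no, disk_read):
        primary = self.primary_of(page_no)
        latest = primary.latest_version(page_no)   # one coherency message
        cached = self.cache.get(page_no)
        if cached is not None and cached[0] == latest:
            return cached[1]                       # cached copy is current
        data = disk_read(page_no)                  # stale or absent: reread
        self.cache[page_no] = (latest, data)
        return data
```

Partitioning the authority this way keeps coherency traffic local to one primary per page, which is where the reduction in locking and message overhead comes from.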

Caching and Prefetching Policies Using Program Page Reference Patterns on a File System Layer for NAND Flash Memory (NAND 플래시 메모리용 파일 시스템 계층에서 프로그램의 페이지 참조 패턴을 고려한 캐싱 및 선반입 정책)

  • Park, Sang-Oh;Kim, Kyung-San;Kim, Sung-Jo
    • The KIPS Transactions: Part A
    • /
    • v.14A no.4
    • /
    • pp.235-244
    • /
    • 2007
  • Caching and prefetching policies are used in most computer systems to compensate for the speed difference between primary memory and secondary storage. In this paper, we design and implement a Flash Cache Core Module (FCCM) on YAFFS, which operates at the file system layer for NAND flash memory. The FCCM is independent of the underlying kernel in order to preserve its stability and compatibility. We also implement the Dirty-Last replacement technique, which accounts for the characteristics of flash memory, and a waiting queue that holds pages to be prefetched until a page hit occurs. Compared with the caching and prefetching policies of Linux, the FCCM reduced the number of I/Os by up to 55% (20% on average) and the amount of prefetched pages by up to 55% (24% on average) (see the sketch below).
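
A compact sketch of the two mechanisms named above, Dirty-Last replacement and the prefetch waiting queue; the data structures and the promotion rule are assumptions for illustration.

```python
# Illustrative sketch of the FCCM ideas: "Dirty-Last" replacement evicts
# clean pages before dirty ones (a dirty eviction forces a flash write),
# and a waiting queue holds prefetched pages until they are actually hit.

from collections import OrderedDict

class FlashCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_no -> dirty flag, in LRU order
        self.waiting = OrderedDict() # prefetched pages not yet referenced

    def _evict(self):
        # Dirty-Last: evict the least recently used *clean* page if any.
        for page_no, dirty in self.pages.items():
            if not dirty:
                del self.pages[page_no]
                return
        self.pages.popitem(last=False)  # all dirty: evict LRU, pay the write-back

    def access(self, page_no, is_write=False):
        if page_no in self.waiting:          # first hit promotes a prefetched page
            del self.waiting[page_no]
        if page_no in self.pages:
            self.pages.move_to_end(page_no)  # refresh recency
            self.pages[page_no] |= is_write
        else:
            if len(self.pages) >= self.capacity:
                self._evict()
            self.pages[page_no] = is_write

    def prefetch(self, page_no):
        if page_no not in self.pages:
            self.waiting[page_no] = False    # parked until a hit occurs
```

Keeping prefetched pages in a separate queue means a bad prefetch never displaces a useful resident page, which is one way the amount of wasted prefetching can drop.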

Buffer Cache Management for Low Power Consumption (저전력을 위한 버퍼 캐쉬 관리 기법)

  • Lee, Min;Seo, Eui-Seong;Lee, Joon-Won
    • Journal of KIISE: Computer Systems and Theory
    • /
    • v.35 no.6
    • /
    • pp.293-303
    • /
    • 2008
  • As the computing environment moves to wireless and handheld systems, power efficiency is becoming more important. This is especially true in embedded handheld systems, where the power consumed by the memory system is the second largest portion of the total. To save energy in the memory system, we can utilize the low-power modes of SDRAM; in the case of RDRAM, nap mode consumes less than 5% of the power consumed in active or standby mode. However, the hardware controller by itself cannot use this facility efficiently unless the operating system cooperates. In this paper we focus on minimizing the number of active SDRAM units: the operating system allocates physical pages so that only a few units need to be active, and the remaining units can be put into nap mode. This work can be considered a generalized, system-wide version of PAVM (Power-Aware Virtual Memory). We take all physical memory into account, especially the buffer cache, which occupies about half of total memory usage on average; given its size and importance, a PAVM approach cannot be robust without considering it. We analyze RAM usage and propose a power-aware page allocation policy that covers both the pages mapped into process address spaces and the buffer cache pages, analyzing and exploiting the relationship and interactions between these two kinds of pages for energy saving (an illustrative allocator sketch follows below).
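
The allocation idea reduces to packing page frames into as few memory units as possible so that untouched units can stay in nap mode. The allocator below is a minimal sketch, assuming a fixed number of equally sized units; the kernel bookkeeping in the paper is necessarily more involved.

```python
# Minimal power-aware frame allocator: allocations are concentrated in the
# most-occupied units so that entirely free units hold no data and can be
# put into nap mode. Unit count and size are illustrative parameters.

class PowerAwareAllocator:
    def __init__(self, num_units, frames_per_unit):
        self.frames_per_unit = frames_per_unit
        self.free = {u: set(range(frames_per_unit)) for u in range(num_units)}

    def alloc_frame(self):
        candidates = [u for u, f in self.free.items() if f]
        if not candidates:
            raise MemoryError("out of frames")
        # Pick the unit with the fewest free frames, i.e. the most occupied
        # one, so allocations stay concentrated in few active units.
        unit = min(candidates, key=lambda u: len(self.free[u]))
        return unit, self.free[unit].pop()

    def free_frame(self, unit, frame):
        self.free[unit].add(frame)

    def nappable_units(self):
        # A unit with every frame free holds no data and can enter nap mode.
        return [u for u, f in self.free.items()
                if len(f) == self.frames_per_unit]
```

A plain first-fit or random allocator would scatter pages across all units and leave none nappable; the concentration heuristic is what creates idle units for the memory controller to power down.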

Anticipatory I/O Management for Clustered Flash Translation Layer in NAND Flash Memory

  • Park, Kwang-Hee;Yang, Jun-Sik;Chang, Joon-Hyuk;Kim, Deok-Hwan
    • ETRI Journal
    • /
    • v.30 no.6
    • /
    • pp.790-798
    • /
    • 2008
  • Recently, NAND flash memory has emerged as a next-generation storage device because it has several advantages, such as low power consumption and shock resistance. However, a flash translation layer (FTL) is needed to mediate between NAND flash memory and conventional file systems because of the unique hardware characteristics of flash memory. This paper proposes a new clustered FTL (CFTL) that uses clustered hash tables and a two-level software cache technique. The CFTL can anticipate consecutive addresses from the host because the clustered hash table exploits the locality of reference in a large address space. It also adaptively translates logical addresses to physical addresses in the flash memory by using block mapping, page mapping, and the two-level software cache technique. Furthermore, anticipatory I/O management using continuity counters and a prefetch scheme enables fast address translation. Experimental results show that the proposed address translation mechanism for the CFTL provides better performance in address translation and memory space usage than the well-known NAND FTL (NFTL) and adaptive FTL (AFTL) (a sketch of the continuity-counter idea follows below).

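The continuity-counter idea can be sketched briefly: when logically consecutive pages are also physically consecutive, one table entry with a run length translates the whole run. The dict-based table below stands in for the clustered hash table and is an illustrative assumption, not the paper's data layout.

```python
# Illustrative run-based address translation: each entry maps a run of
# consecutive logical pages to consecutive physical pages, so one entry
# (with a continuity count) covers many pages and saves mapping memory.

class ClusteredFTL:
    def __init__(self):
        self.table = {}   # start_lpn -> (start_ppn, continuity_count)

    def map_run(self, start_lpn, start_ppn, count):
        self.table[start_lpn] = (start_ppn, count)

    def translate(self, lpn):
        # Real hardware would hash into a clustered bucket; a dict scan
        # stands in for that lookup here.
        for start_lpn, (start_ppn, count) in self.table.items():
            if start_lpn <= lpn < start_lpn + count:
                return start_ppn + (lpn - start_lpn)   # inside a consecutive run
        return None                                    # unmapped address
```

For instance, after `ftl.map_run(100, 5000, 8)`, `ftl.translate(103)` returns 5003 without a separate table entry for each of the eight pages, which is also what makes prefetching the rest of a run cheap.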

Performance Evaluation of Buffer Replacement Algorithms in a Shared Disk Cluster (공유 디스크 클러스터에서 버퍼 교체 알고리즘의 성능 평가)

  • Cho, Haeng-Rae
    • Journal of KIISE: Databases
    • /
    • v.35 no.6
    • /
    • pp.469-480
    • /
    • 2008
  • A shared disk (SD) cluster couples multiple nodes for high-performance transaction processing, and all the coupled nodes share a common database at the disk level. To reduce the number of disk accesses, each node caches database pages in its memory buffer. Since a particular page may be cached simultaneously in different nodes, cache consistency should be maintained to ensure that nodes always access the most recent version of database pages. Most cache consistency schemes proposed for the SD cluster adopt LRU as the buffer replacement algorithm. In this paper, we first present four buffer replacement algorithms that consider the characteristics of the SD cluster, and then compare their performance. We run experiments on a variety of cluster configurations and database workloads. The results show that the proposed algorithms achieve up to 5 times the performance of the LRU algorithm (a hypothetical sketch follows below).
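
The abstract does not spell out the four algorithms, so the sketch below is only a hypothetical example of cluster-aware replacement: among eviction candidates, prefer pages that a peer node also caches, since such a page can be refetched from the peer's buffer faster than from disk. None of this is claimed to match the paper's algorithms.

```python
# Hypothetical SD-cluster-aware replacement: plain LRU ordering, but the
# victim scan prefers pages some other node still holds, because a lost
# page can then be refetched from a peer's buffer instead of disk.

class ClusterAwareBuffer:
    def __init__(self, capacity, cached_elsewhere):
        self.capacity = capacity
        self.lru = []                              # page numbers, oldest first
        self.cached_elsewhere = cached_elsewhere   # callback: page -> bool

    def access(self, page):
        if page in self.lru:
            self.lru.remove(page)                  # refresh recency
        elif len(self.lru) >= self.capacity:
            self._evict()
        self.lru.append(page)

    def _evict(self):
        # Scan from oldest: prefer a victim that a peer node still caches.
        for page in self.lru:
            if self.cached_elsewhere(page):
                self.lru.remove(page)
                return
        self.lru.pop(0)                            # fall back to plain LRU
```

The `cached_elsewhere` callback stands in for whatever directory or lock-manager state a real SD cluster would consult; the point is only that replacement can use cluster-wide information that single-node LRU ignores.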

A Study on Tile Map Service of High Spatial Resolution Image Using Open Source GIS (Open Source GIS를 이용한 고해상도 영상의 Tile Map Service 시스템 구축에 관한 연구)

  • Jeong, Myeong-Hun;Suh, Yong-Cheol
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.17 no.1
    • /
    • pp.167-174
    • /
    • 2009
  • A Tile Map Service is a regular map service enhanced to serve maps very quickly using a cache of static images. The map cache is a directory that contains image tiles of a map extent at specific scale levels. Returning a tile from the cache takes the server much less time than drawing the map image on demand, so a Tile Map Service can dramatically reduce the time clients take to display complex base maps and eliminates the need to trade quality for performance. This study provides a way to construct a Tile Map Service system using Open Source GIS. We used GDAL (Geospatial Data Abstraction Library), one of the Open Source GIS packages, to create the tile map images, and OpenLayers to publish the web page. Moreover, we conducted a performance test comparing the Tile Map System with a Dynamic Map System and evaluated the results. The proposed method makes it easy to construct a high-performance Tile Map Service with Open Source GIS and without commercial products (a minimal cache-lookup sketch follows below).

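The cache-directory mechanism the abstract describes can be sketched in a few lines: tiles live in a `z/x/y.png` directory tree, and a request is served from disk on a hit, with rendering only on a miss. The path layout and the `render_tile` callback are assumptions; in practice the tile pyramid can also be pre-generated in bulk, for example with GDAL's gdal2tiles.py, in line with the paper's use of GDAL and OpenLayers.

```python
# Minimal tile cache: serve a pre-rendered tile from the cache directory if
# it exists, otherwise render it once, store it, and serve it. The z/x/y.png
# layout and the render callback are illustrative assumptions.

import os

def get_tile(cache_dir, z, x, y, render_tile):
    """Return PNG bytes for tile (z, x, y), caching rendered tiles on disk."""
    path = os.path.join(cache_dir, str(z), str(x), f"{y}.png")
    if os.path.exists(path):                 # cache hit: no rendering needed
        with open(path, "rb") as f:
            return f.read()
    data = render_tile(z, x, y)              # cache miss: draw the map image
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return data
```

Because every tile is a static file after its first request, the web server can hand it out (or a browser can cache it) with no map-drawing work at all, which is where the dramatic speedup over dynamic map services comes from.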