• Title/Summary/Keyword: page cache


Analysis and Advice on Cache Algorithms of SSD FTL (SSD FTL 캐시 알고리즘 분석 및 제언)

  • Lee, Hyung Bong; Chung, Tae Yun
    • KIPS Transactions on Computer and Communication Systems, v.12 no.1, pp.1-8, 2023
  • In SSDs, an already allocated page cannot be overwritten, so every write operation requires replacing the target page with a clean one. To handle this, SSDs contain an internal flash translation layer (FTL) that maps the logical pages managed by the operating system's file system to the currently allocated physical pages. Pages discarded by write operations must be recycled through erasure, but since the number of erase cycles is limited, the FTL provides a caching function to reduce the number of page writes in addition to its core page-mapping function. In this study, we focus on FTL cache methodologies that reduce the number of page writes, analyze the related algorithms, and propose a write-only cache strategy. In simulator experiments, the write-only cache showed an improvement of up to 29%.
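
As a rough illustration of the write-only idea, the sketch below (in C, with hypothetical structure and function names such as flash_program) caches only write requests: a cache hit absorbs the write in RAM, and reads check the cache for consistency but never populate it. This is a minimal sketch of the general strategy, not the paper's implementation.

```c
/* Minimal sketch of a write-only FTL cache (hypothetical structures). */
#include <stdint.h>
#include <string.h>

#define CACHE_SLOTS 64
#define PAGE_SIZE   4096

struct wcache_slot {
    uint32_t lpn;                 /* logical page number */
    int      valid;
    uint8_t  data[PAGE_SIZE];
};

static struct wcache_slot cache[CACHE_SLOTS];
static unsigned victim = 0;       /* simple round-robin eviction */

extern void flash_program(uint32_t lpn, const uint8_t *buf); /* programs a clean page */
extern void flash_read(uint32_t lpn, uint8_t *buf);

/* Writes are absorbed by the cache; a hit overwrites in RAM and saves
 * one flash program (and, eventually, one erase). */
void ftl_write(uint32_t lpn, const uint8_t *buf)
{
    for (unsigned i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].valid && cache[i].lpn == lpn) {
            memcpy(cache[i].data, buf, PAGE_SIZE);  /* hit: no flash write */
            return;
        }
    }
    struct wcache_slot *s = &cache[victim];
    victim = (victim + 1) % CACHE_SLOTS;
    if (s->valid)
        flash_program(s->lpn, s->data);             /* evict: flush to flash */
    s->lpn = lpn;
    s->valid = 1;
    memcpy(s->data, buf, PAGE_SIZE);
}

/* Reads never allocate a slot: caching them could not save a flash
 * program, which is the only cost this cache is built to reduce. */
void ftl_read(uint32_t lpn, uint8_t *buf)
{
    for (unsigned i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].valid && cache[i].lpn == lpn) {
            memcpy(buf, cache[i].data, PAGE_SIZE);
            return;
        }
    }
    flash_read(lpn, buf);
}
```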

An Efficient Flash Translation Layer Considering Temporal and Spacial Localities for NAND Flash Memory Storage Systems

  • Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information, v.22 no.12, pp.9-15, 2017
  • This paper presents an efficient FTL for NAND flash based SSDs. In page-mapping FTLs, address translation information is stored on flash memory pages, and an address translation cache keeps the frequently accessed entries. The FTL proposed in this paper reduces response time by considering both the temporal and spatial localities of page access patterns in managing the translation cache. The localities of several well-known traces are evaluated to determine a cache structure that achieves a high hit ratio. Simulations with these traces show that the presented FTL reduces response time compared to previous FTLs and performs well even with relatively small caches.
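
A minimal sketch of a translation cache combining both localities, assuming an LRU policy for temporal locality and a fixed group of consecutive mappings per entry for spatial locality; the group size, table size, and the load_mapping_group helper are illustrative, not the paper's parameters.

```c
#include <stdint.h>

#define GROUP_SHIFT    3                      /* 8 mappings per cache entry */
#define GROUP_SIZE     (1u << GROUP_SHIFT)
#define TCACHE_ENTRIES 128

struct tcache_entry {
    uint32_t group;                           /* lpn >> GROUP_SHIFT */
    uint32_t ppn[GROUP_SIZE];                 /* cached translations */
    uint64_t last_use;                        /* LRU stamp */
    int      valid;
};

static struct tcache_entry tcache[TCACHE_ENTRIES];
static uint64_t clock_tick;

/* Reads one group of consecutive mappings from a flash translation page. */
extern void load_mapping_group(uint32_t group, uint32_t ppn[GROUP_SIZE]);

uint32_t translate(uint32_t lpn)
{
    uint32_t group = lpn >> GROUP_SHIFT;
    uint32_t off   = lpn & (GROUP_SIZE - 1);
    struct tcache_entry *lru = &tcache[0];

    for (int i = 0; i < TCACHE_ENTRIES; i++) {
        if (tcache[i].valid && tcache[i].group == group) {
            tcache[i].last_use = ++clock_tick;   /* refresh: temporal locality */
            return tcache[i].ppn[off];
        }
        if (!tcache[i].valid || tcache[i].last_use < lru->last_use)
            lru = &tcache[i];
    }
    /* Miss: one flash read fills the whole group, so neighbouring lpns
     * hit afterwards without further flash reads (spatial locality). */
    load_mapping_group(group, lru->ppn);
    lru->group    = group;
    lru->valid    = 1;
    lru->last_use = ++clock_tick;
    return lru->ppn[off];
}
```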

A Policy of Page Management Using Double Cache for NAND Flash Memory File System (NAND 플래시 메모리 파일 시스템을 위한 더블 캐시를 활용한 페이지 관리 정책)

  • Park, Myung-Kyu; Kim, Sung-Jo
    • Journal of KIISE: Computer Systems and Theory, v.36 no.5, pp.412-421, 2009
  • Due to the physical characteristics of NAND flash memory, overwrite operations are not permitted at the same location, so erase operations are required before rewriting. These extra operations degrade the performance of a NAND flash memory file system. Since each location also has an upper limit on the number of erase operations, frequent erases shorten the lifetime of NAND flash memory. These problems can be mitigated by delaying write operations to improve I/O performance; however, doing so lowers the cache hit ratio. This paper proposes a page management policy using a double cache for NAND flash memory file systems. The double cache consists of a Real cache and a Ghost cache and is used to analyze page reference patterns. The policy delays write operations in the Ghost cache in order to maintain the hit ratio of the Real cache. It also improves write performance by reducing the search time for dirty pages, since the Ghost cache is organized into Dirty and Clean lists. We find that the hit ratio and I/O performance of our policy improve by 20.57% and 20.59% on average, respectively, compared with existing policies, and the number of write operations is reduced by 30.75% on average.
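
The structure described above might look roughly like the following sketch: writes are delayed on the Ghost cache's Dirty list, flushing scans only that list, and pages are promoted into the Real cache. All structures are hypothetical simplifications of the paper's policy.

```c
#include <stdint.h>

struct page {
    uint32_t pno;                    /* page number */
    int dirty;
    struct page *next;
};

struct list { struct page *head; };

static struct list real_cache;       /* hot pages, kept for the hit ratio */
static struct list ghost_dirty;      /* write-delayed pages */
static struct list ghost_clean;      /* clean eviction candidates */

static void push(struct list *l, struct page *p) { p->next = l->head; l->head = p; }

static struct page *pop(struct list *l)
{
    struct page *p = l->head;
    if (p)
        l->head = p->next;
    return p;
}

extern void flash_write(const struct page *p);

/* A write lands on the Ghost dirty list first, delaying the flash program. */
void cache_write(struct page *p)
{
    p->dirty = 1;
    push(&ghost_dirty, p);
}

/* Flushing scans only ghost_dirty, so no time is spent searching clean
 * pages for dirty ones -- the "reduced search time" claimed above. */
void flush_one(void)
{
    struct page *p = pop(&ghost_dirty);
    if (!p)
        return;
    flash_write(p);
    p->dirty = 0;
    push(&ghost_clean, p);
}

/* A re-referenced ghost page would be promoted into the Real cache;
 * shown here for the clean list only, to keep the sketch short. */
void promote_one_clean(void)
{
    struct page *p = pop(&ghost_clean);
    if (p)
        push(&real_cache, p);
}
```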

The Need of Cache Partitioning on Shared Cache of Integrated Graphics Processor between CPU and GPU (내장형 GPU 환경에서 CPU-GPU 간의 공유 캐시에서의 캐시 분할 방식의 필요성)

  • Sung, Hanul; Eom, Hyeonsang; Yeom, HeonYoung
    • KIISE Transactions on Computing Practices, v.20 no.9, pp.507-512, 2014
  • Recently, systems have begun to distribute computation across both the CPU (central processing unit) and the GPU (graphics processing unit) to improve performance and to cope with the dark-silicon problem, in which power limits prevent all transistors from being used at once. In integrated graphics processors, the CPU and GPU share memory and the last-level cache (LLC). Because there are no rules governing LLC access between the CPU and GPU, when CPU and GPU processes run at the same time, both suffer performance loss from contention on the LLC. This paper gives evidence for the need for cache partitioning and discusses a cache partitioning design that uses page coloring to allocate part of the L3 cache exclusively to the GPU process, guaranteeing its performance.
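
Page coloring, the mechanism the paper builds on, can be sketched as follows: a physical frame's color identifies the group of LLC sets it maps to, so handing GPU processes only frames from a reserved color range partitions the shared cache. The cache geometry and the 32-color reservation below are assumptions for illustration, not the paper's figures.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define LLC_SIZE    (8u << 20)        /* 8 MiB shared LLC, assumed */
#define LLC_WAYS    16

/* One color = one group of LLC sets that an entire page maps into. */
#define NUM_COLORS  (LLC_SIZE / (LLC_WAYS * PAGE_SIZE))   /* 128 here */
#define GPU_COLORS  32                /* colors reserved for the GPU */

static inline unsigned page_color(uint64_t pfn)
{
    return (unsigned)(pfn % NUM_COLORS);
}

/* The allocator hands GPU processes only frames whose color falls in
 * the reserved range, so CPU traffic cannot evict the GPU's lines. */
static inline bool frame_ok_for_gpu(uint64_t pfn)
{
    return page_color(pfn) < GPU_COLORS;
}

static inline bool frame_ok_for_cpu(uint64_t pfn)
{
    return page_color(pfn) >= GPU_COLORS;
}
```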

Mapping Cache for High-Performance Memory Mapped File I/O in Memory File Systems (메모리 파일 시스템 기반 고성능 메모리 맵 파일 입출력을 위한 매핑 캐시)

  • Kim, Jiwon; Choi, Jungsik; Han, Hwansoo
    • Journal of KIISE, v.43 no.5, pp.524-530, 2016
  • The desire to access data faster and the growth of next-generation memories such as non-volatile memories drive research on memory file systems. Memory mapped file I/O, which has less overhead than read-write I/O, is recommended for high-performance memory file systems. Memory mapped file I/O, however, introduces page table overhead, which becomes one of the major costs in overall file I/O performance. We find that this overhead recurs unnecessarily, because a file's page table is discarded whenever the file is closed and must be rebuilt when it is opened again. To remove the duplicated overhead, we propose the mapping cache, a technique that does not delete a file's page table when its mapping is released but saves it for reuse. We demonstrate that the mapping cache improves traditional file I/O performance by 2.8x and web server performance by 12%.
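
A minimal sketch of the mapping-cache idea, assuming a small table keyed by inode number; mcache_put and mcache_get are hypothetical names standing in for the paper's hooks on unmap and mmap inside a memory file system.

```c
#include <stdint.h>
#include <stddef.h>

#define MCACHE_SLOTS 32

struct cached_mapping {
    uint64_t inode;           /* which file this page table belongs to */
    void    *page_table;      /* retained translation structure */
    int      valid;
};

static struct cached_mapping mcache[MCACHE_SLOTS];

/* Called when a file mapping is released: keep the table, don't free it. */
void mcache_put(uint64_t inode, void *page_table)
{
    for (int i = 0; i < MCACHE_SLOTS; i++) {
        if (!mcache[i].valid) {
            mcache[i] = (struct cached_mapping){ inode, page_table, 1 };
            return;
        }
    }
    /* Cache full: a real implementation would evict and free one entry. */
}

/* Called on mmap: a hit skips rebuilding the page table entirely. */
void *mcache_get(uint64_t inode)
{
    for (int i = 0; i < MCACHE_SLOTS; i++) {
        if (mcache[i].valid && mcache[i].inode == inode) {
            mcache[i].valid = 0;
            return mcache[i].page_table;
        }
    }
    return NULL;              /* miss: build the mapping as usual */
}
```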

Program Cache Busy Time Control Method for Reducing Peak Current Consumption of NAND Flash Memory in SSD Applications

  • Park, Se-Chun; Kim, You-Sung; Cho, Ho-Youb; Choi, Sung-Dae; Yoon, Mi-Sun; Kim, Tae-Yun; Park, Kun-Woo; Park, Jongsun; Kim, Soo-Won
    • ETRI Journal, v.36 no.5, pp.876-879, 2014
  • In current NAND flash design, one of the most challenging issues is reducing peak current consumption (peak ICC), as it leads to peak power drop, which can cause malfunctions in NAND flash memory. This paper presents an efficient approach for reducing the peak ICC of the cache program in NAND flash memory - namely, a program Cache Busy Time (tPCBSY) control method. The proposed tPCBSY control method is based on the interesting observation that the array program current (ICC2) is mainly decided by the bit-line bias condition. In the proposed approach, when peak ICC2 becomes larger than a threshold value, which is determined by a cache loop number, cache data cannot be loaded to the cache buffer (CB). On the other hand, when peak ICC2 is smaller than the threshold level, cache data can be loaded to the CB. As a result, the peak ICC of the cache program is reduced by 32% at the least significant bit page and by 15% at the most significant bit page. In addition, the program throughput reaches 20 MB/s in multiplane cache program operation, without restrictions caused by a drop in peak power due to cache program operations in a solid-state drive.
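
The gating decision reads roughly as follows in C-style pseudocode; predicted_peak_icc2 and icc_threshold are placeholder names for on-chip state, not actual silicon interfaces.

```c
#include <stdbool.h>
#include <stdint.h>

extern uint32_t predicted_peak_icc2(void);        /* from bit-line bias state */
extern uint32_t icc_threshold(unsigned cache_loop);

/* Cache data may be loaded into the cache buffer only while the
 * predicted array-program current stays under a loop-dependent
 * threshold; holding off the load stretches tPCBSY slightly but keeps
 * the combined program + load current below the peak-power budget. */
bool may_load_cache_buffer(unsigned cache_loop)
{
    return predicted_peak_icc2() <= icc_threshold(cache_loop);
}
```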

STP-FTL: An Efficient Caching Structure for Demand-based Flash Translation Layer

  • Choi, Hwan-Pil; Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information, v.22 no.7, pp.1-7, 2017
  • As the capacity of NAND flash modules increases, so does the amount of RAM needed to cache and maintain FTL mapping information. To reduce the amount of mapping information kept in RAM, demand-based address mapping stores the entire mapping information in flash and caches only some valid entries in RAM, so that RAM is used efficiently. However, when a cache miss occurs, the mapping information recorded in flash must be read, adding address translation overhead. If RAM space is insufficient, the cache hit ratio drops, resulting in even greater overhead. In this paper, we propose a method using two tables, the TPMT (Translation Page Mapping Table) and the SMT (Segmented Translation Page Mapping Table), to exploit both temporal locality and spatial locality more efficiently. A performance evaluation shows that this method improves the cache hit ratio by up to 30% and reduces extra translation operations by up to 72%, compared to the TPM scheme.
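
One plausible reading of the two-table lookup, sketched below: the TPMT serves single hot entries (temporal locality) and the SMT serves segments of consecutive entries (spatial locality), with one flash read filling a whole segment on a double miss. The role assignment, segment size, and helper names are assumptions, not the paper's exact design.

```c
#include <stdint.h>
#include <stdbool.h>

#define SEG_SHIFT 4
#define SEG_SIZE  (1u << SEG_SHIFT)

/* Per-entry cache: single hot translations (temporal locality). */
extern bool tpmt_lookup(uint32_t lpn, uint32_t *ppn);
extern void tpmt_insert(uint32_t lpn, uint32_t ppn);

/* Segment cache: SEG_SIZE consecutive translations (spatial locality). */
extern bool smt_lookup(uint32_t lpn, uint32_t *ppn);
extern void smt_fill(uint32_t seg, const uint32_t ppn[SEG_SIZE]);

/* Reads the flash-resident translation page covering lpn's segment. */
extern void read_translation_page(uint32_t seg, uint32_t ppn[SEG_SIZE]);

uint32_t demand_translate(uint32_t lpn)
{
    uint32_t ppn;
    if (tpmt_lookup(lpn, &ppn))               /* hot single entry */
        return ppn;
    if (smt_lookup(lpn, &ppn)) {              /* neighbour fetched earlier */
        tpmt_insert(lpn, ppn);
        return ppn;
    }
    /* Double miss: one flash read fills a whole segment, so the
     * surrounding lpns will hit in the SMT afterwards. */
    uint32_t seg = lpn >> SEG_SHIFT;
    uint32_t buf[SEG_SIZE];
    read_translation_page(seg, buf);
    smt_fill(seg, buf);
    ppn = buf[lpn & (SEG_SIZE - 1)];
    tpmt_insert(lpn, ppn);
    return ppn;
}
```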

Improved Cache-hot Page Allocation Technique for Reducing Page Initialization Latency of Linux Based Systems (리눅스 기반 시스템의 페이지 초기화 지연 단축을 위한 향상된 캐시-핫 페이지 할당 기법)

  • Yang, Seokwoo; Noh, Sunhyeon; Hong, Seongsoo
    • Proceedings of the Korean Society of Computer Information Conference, 2019.01a, pp.415-418, 2019
  • Recent user-interactive applications frequently request large amounts of memory from the OS. When an application issues a memory allocation request, the OS must initialize the pages to be allocated, and this frequent page initialization degrades application performance. To shorten page initialization latency, existing Linux-based systems preferentially allocate cache-hot pages, i.e., pages mapped into a CPU cache whose initial values can therefore be written quickly. However, existing Linux recognizes and manages cache-hot pages per core, and one core cannot access cache-hot pages managed by another core. Under this policy, even when another core holds a cache-hot page mapped into the shared cache, the requesting core may fail to obtain it and receive a cache-cold page instead. This paper proposes an improved cache-hot page allocation technique that separately identifies pages presumed to be mapped into the shared cache and makes them available to all cores, raising the probability that an application is allocated a cache-hot page compared with the existing technique. When a page allocation request arrives, the proposed technique first tries to allocate a page presumed to be mapped into the requesting core's private cache; if that fails, it allocates a page presumed to be mapped into the shared cache. This raises the probability of allocating a cache-hot page and, as a result, shortens the average page initialization latency. We implemented the proposed technique on Linux kernel 4.18.10, and our experiments show that average page initialization latency is reduced by about 7% compared with the existing Linux system.
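
The allocation order described above can be sketched as follows, with illustrative free lists standing in for the Linux per-CPU page allocator the paper actually modifies.

```c
#include <stddef.h>

struct page_node { struct page_node *next; };
struct page_list { struct page_node *head; };

struct core_state {
    struct page_list private_hot;   /* presumed in this core's private cache */
};

static struct page_list shared_hot; /* presumed in the shared cache, any core */
static struct page_list cold;       /* everything else */

static struct page_node *pop_page(struct page_list *l)
{
    struct page_node *p = l->head;
    if (p)
        l->head = p->next;
    return p;
}

/* Try private-cache-hot, then shared-cache-hot, then cache-cold pages,
 * so initialization writes land in a cache as often as possible. */
struct page_node *alloc_page_hot_first(struct core_state *me)
{
    struct page_node *p;
    if ((p = pop_page(&me->private_hot))) return p;  /* fastest zeroing */
    if ((p = pop_page(&shared_hot)))      return p;  /* hot for any core */
    return pop_page(&cold);                          /* full-latency init */
}
```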


Caching and Prefetching Policies Using Program Page Reference Patterns on a File System Layer for NAND Flash Memory (NAND 플래시 메모리용 파일 시스템 계층에서 프로그램의 페이지 참조 패턴을 고려한 캐싱 및 선반입 정책)

  • Kim, Gyeong-San; Kim, Seong-Jo
    • Proceedings of the IEEK Conference, 2006.06a, pp.777-778, 2006
  • In this thesis, we design and implement a Flash Cache Core Module (FCCM) that operates on the YAFFS NAND flash file system. The FCCM applies a memory replacement policy and a prefetching policy based on the page reference patterns of applications. We also implement a Clean-First memory replacement technique that considers the characteristics of flash memory, in which the decision whether to use the prefetch waiting area is made according to page hits. Compared with the Linux caching and prefetching policies, the FCCM reduces I/O frequency by up to 37%, and it uses less memory for prefetching (up to 24% less, 16% less on average) than the Linux kernel.
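
The Clean-First choice can be sketched as a victim search that prefers clean pages, since evicting a dirty page costs a NAND program operation; the structures are hypothetical, and the FCCM's prefetching is not shown.

```c
#include <stddef.h>

struct cpage {
    int dirty;
    int valid;
};

#define NPAGES 128
static struct cpage pages[NPAGES];

extern void flash_write_back(struct cpage *p);

/* Prefer a clean victim: dropping it costs no flash I/O. Only when no
 * clean page exists is a dirty page written back and reclaimed. */
struct cpage *pick_victim(void)
{
    struct cpage *dirty_victim = NULL;
    for (int i = 0; i < NPAGES; i++) {
        if (!pages[i].valid)
            continue;
        if (!pages[i].dirty)
            return &pages[i];           /* clean: free to drop */
        if (!dirty_victim)
            dirty_victim = &pages[i];
    }
    if (dirty_victim)
        flash_write_back(dirty_victim); /* dirty: must program flash first */
    return dirty_victim;
}
```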


Regular File Access of Embedded System Using Flash Memory as a Storage (플래시 메모리를 저장매체로 사용하는 임베디드 시스템에서의 정규파일 접근)

  • 이은주; 박현주
    • Journal of Information Technology Applications and Management, v.11 no.1, pp.189-200, 2004
  • Recently, flash memory, which is small and consumes little power, has been widely used as storage in embedded systems, which demand portability and fast response. To bridge the access-time gap between storage and RAM, Linux uses disk caching, which copies parts of files on disk into RAM; embedded systems are no exception. The read access time of flash memory, however, is similar to that of RAM, so when a process on an embedded system reads data, accessing the data directly on flash takes about as long as accessing cached data in RAM. On embedded systems with limited memory, a disk cache therefore wastes time and memory space on its own management and does not reflect the characteristics of flash memory. This paper proposes a regular-file access scheme that limits use of the page cache in a flash-based file system, reflecting the characteristics of flash memory. The proposed algorithm reduces power consumption because the number of RAM accesses is reduced, and it does not waste memory because it accesses flash memory directly; a corresponding performance improvement is expected in systems that apply it.
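
The access scheme implied above might look like this sketch, where reads are served directly from flash and never populate a page cache; flash_read_page is a placeholder for the file system's low-level read routine.

```c
#include <stdint.h>
#include <stddef.h>

extern int flash_read_page(uint32_t lpn, uint8_t *buf);

/* Reads bypass the page cache entirely; only writes, which must be
 * buffered before a program operation anyway, would touch RAM. */
long file_read(uint32_t start_lpn, uint8_t *dst, size_t npages, size_t page_size)
{
    for (size_t i = 0; i < npages; i++) {
        if (flash_read_page(start_lpn + (uint32_t)i, dst + i * page_size) < 0)
            return -1;
        /* No cache insertion here: copying into a cache page and the
         * later lookup bookkeeping would cost more than the flash read. */
    }
    return (long)(npages * page_size);
}
```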
