• Title/Summary/Keyword: Buffer cache policy (버퍼 캐시 정책)


Hybrid Buffer Replacement Scheme Considering Reference Pattern in Multimedia Storage Systems (멀티미디어 저장 시스템에서 참조 유형을 고려한 혼성 버퍼 교체 기법)

  • 류연승
    • Journal of Korea Multimedia Society / v.5 no.1 / pp.47-56 / 2002
  • Previous buffer cache schemes for multimedia storage systems exploited only the sequential references of multimedia files and did not consider looping references. However, in some video applications such as foreign language learning, users mark a scene as a loop area and the application then automatically plays back the scene several times. In this paper, we propose a new buffer replacement scheme, called HBM (Hybrid Buffer Management), for multimedia storage systems that have both sequential and looping references. The proposed scheme assumes that the application layer informs the file system of each file's reference pattern; HBM then applies an appropriate replacement policy to each file. Our simulation experiments show that HBM outperforms previous buffer cache schemes such as DISTANCE and LRU.
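The core mechanism the abstract describes is per-file policy dispatch based on a hint from the application layer. Below is a minimal, hypothetical sketch of that idea; the concrete per-pattern rules used here (drop-after-use for sequential files, MRU-style eviction for looping files) and all identifiers are illustrative assumptions, not the paper's exact HBM algorithm.

```python
from collections import OrderedDict

SEQUENTIAL, LOOPING = "sequential", "looping"

class HybridBufferCache:
    """Toy per-file hybrid replacement: the policy is chosen from an application hint."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.patterns = {}           # file_id -> hint supplied by the application layer
        self.blocks = OrderedDict()  # (file_id, block_no) -> data, oldest first

    def set_pattern(self, file_id, pattern):
        self.patterns[file_id] = pattern

    def access(self, file_id, block_no, read_block):
        key = (file_id, block_no)
        if key in self.blocks:                 # hit: refresh recency
            self.blocks.move_to_end(key)
            return self.blocks[key], True
        if len(self.blocks) >= self.capacity:  # miss: make room, then fetch
            self._evict()
        self.blocks[key] = read_block(file_id, block_no)
        return self.blocks[key], False

    def _evict(self):
        # Blocks of sequential files will not be re-read, so drop one of them first;
        # otherwise pick an MRU-style victim for looping files (in a loop, the block
        # touched most recently is the one whose next reuse is farthest away).
        victim = next((k for k in self.blocks
                       if self.patterns.get(k[0]) == SEQUENTIAL), None)
        if victim is None:
            self.blocks.popitem(last=True)
        else:
            del self.blocks[victim]
```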

A Buffer Replacement Policy using Hot Page Management Scheme for Improving Performance of Flash Memory (플래시 메모리 성능향상을 위한 핫 페이지 관리 기법을 이용한 버퍼교체 정책)

  • Daeyoung Kim;Junghan Kim;Hyun-jin Cho;Young Ik Eom
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.860-863 / 2008
  • Flash memory is one of the portable storage devices widely used in daily life. It offers fast I/O speed, low power consumption, no noise, and small size, but it has the disadvantages that data cannot be overwritten in place and that erase operations are very slow compared to reads and writes. To compensate, a buffer cache is placed between the host and the flash memory, and the performance of a flash memory device is strongly affected by the replacement policy used in that buffer cache. This paper proposes HPLRU, a scheme that improves on the shortcomings of block-level LRU. HPLRU maintains a separate list of hot pages, i.e., pages that have been referenced frequently in the recent past, which improves the page hit ratio and alleviates the problem of hot pages being evicted by other pages. The algorithm shows good performance for random data patterns and greatly reduces the number of write operations.
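As a hedged illustration of the hot-page idea in this abstract, the sketch below keeps an LRU region plus a protected hot-page list; the promotion threshold and list sizing are invented for demonstration and are not the paper's parameters.

```python
from collections import OrderedDict

class HotPageLRU:
    """Toy LRU buffer with a protected list of recently hot pages."""

    def __init__(self, capacity, hot_capacity, hot_threshold=2):
        self.capacity = capacity
        self.hot_capacity = hot_capacity
        self.hot_threshold = hot_threshold
        self.lru = OrderedDict()   # page -> reference count, normal LRU region
        self.hot = OrderedDict()   # page -> reference count, protected region

    def access(self, page):
        if page in self.hot:                        # hot hit
            self.hot.move_to_end(page)
            self.hot[page] += 1
            return True
        if page in self.lru:                        # LRU hit, maybe promote
            self.lru.move_to_end(page)
            self.lru[page] += 1
            if self.lru[page] >= self.hot_threshold:
                self._promote(page)
            return True
        self._insert(page)                          # miss
        return False

    def _promote(self, page):
        count = self.lru.pop(page)
        if len(self.hot) >= self.hot_capacity:      # demote the coldest hot page
            old, old_count = self.hot.popitem(last=False)
            self.lru[old] = old_count
        self.hot[page] = count

    def _insert(self, page):
        if len(self.lru) + len(self.hot) >= self.capacity:
            if self.lru:
                self.lru.popitem(last=False)        # evict an LRU page, never a hot page
            else:
                self.hot.popitem(last=False)
        self.lru[page] = 1
```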

Buffer Cache Management for Low Power Consumption (저전력을 위한 버퍼 캐쉬 관리 기법)

  • Lee, Min;Seo, Eui-Seong;Lee, Joon-Won
    • Journal of KIISE: Computer Systems and Theory / v.35 no.6 / pp.293-303 / 2008
  • As the computing environment moves to wireless and handheld systems, power efficiency is becoming more important. This is especially the case in embedded handheld systems, where the memory system consumes the second largest portion of overall power. To save the energy consumed in the memory system, we can utilize the low power modes of SDRAM. In the case of RDRAM, nap mode consumes less than 5% of the power consumed in active or standby mode. However, the hardware controller itself cannot use this facility efficiently unless the operating system cooperates. In this paper we focus on how to minimize the number of active SDRAM units. The operating system allocates its physical pages so that only a few units of SDRAM need to be activated and the unnecessary units can be put into nap mode. This work can be considered a generalized, system-wide version of the PAVM (Power-Aware Virtual Memory) research. We take all of physical memory into account, especially the buffer cache, which takes about half of total memory usage on average. Because of the portion and importance of the buffer cache, the PAVM approach cannot be robust without taking it into account. In this paper, we analyze RAM usage and propose a power-aware page allocation policy. In particular, the pages mapped into a process' address space and the buffer cache pages are considered, and the relationship and interactions between these two kinds of pages are analyzed and exploited for energy saving.
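The allocation idea described above — packing physical pages into as few memory units as possible so idle units can stay in nap mode — can be sketched as a toy allocator. Unit granularity and the active/nap bookkeeping here are assumptions for illustration, not the kernel mechanism used in the paper.

```python
class PowerAwareAllocator:
    """Toy page allocator that concentrates allocations on already-active units."""

    def __init__(self, num_units, pages_per_unit):
        self.pages_per_unit = pages_per_unit
        self.free = {u: set(range(pages_per_unit)) for u in range(num_units)}
        self.active = set()                  # units that must stay powered up

    def alloc_page(self):
        # Prefer a unit that is already active and still has free pages.
        for unit in sorted(self.active):
            if self.free[unit]:
                return unit, self.free[unit].pop()
        # Otherwise activate one more unit; all remaining units stay in nap mode.
        for unit, pages in self.free.items():
            if pages:
                self.active.add(unit)
                return unit, pages.pop()
        raise MemoryError("out of physical pages")

    def free_page(self, unit, page):
        self.free[unit].add(page)
        if len(self.free[unit]) == self.pages_per_unit:
            self.active.discard(unit)        # a fully empty unit can return to nap mode
```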

Policy for Selective Flushing of Smartphone Buffer Cache using Persistent Memory (영속 메모리를 이용한 스마트폰 버퍼 캐시의 선별적 플러시 정책)

  • Lim, Soojung;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.1 / pp.71-76 / 2022
  • The buffer cache bridges the performance gap between memory and storage, but its effectiveness is limited by the periodic flushes performed to prevent data loss on smartphones. This paper shows that a selective flushing technique with a small persistent memory can significantly reduce the flushing overhead of the smartphone buffer cache. This is based on our I/O analysis of smartphone applications, which shows that certain hot data accounts for most file writes, while a large proportion of file data incurs only a single write. The proposed selective flushing policy flushes frequently updated data to persistent memory, and storage flushing is performed only for single-write data. This eliminates storage write traffic for hot data and also improves the space efficiency of persistent memory. Simulations with I/O traces of popular smartphone applications show that the proposed policy reduces write traffic to storage by 24.8% on average and by up to 37.8%.
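As a rough sketch of the selective-flush rule described above (hot dirty data absorbed by a small persistent memory, single-write data flushed to storage), the snippet below shows the dispatch step. The hotness threshold and the pm_write/storage_write callbacks are hypothetical names, not the paper's interfaces.

```python
def selective_flush(dirty_blocks, write_counts, pm_write, storage_write, hot_threshold=2):
    """dirty_blocks: dict block_id -> data; write_counts: dict block_id -> times written."""
    pm_traffic = storage_traffic = 0
    for block_id, data in dirty_blocks.items():
        if write_counts.get(block_id, 0) >= hot_threshold:
            pm_write(block_id, data)        # hot data: absorb the flush in persistent memory
            pm_traffic += 1
        else:
            storage_write(block_id, data)   # single-write data: flush to storage as usual
            storage_traffic += 1
    dirty_blocks.clear()                    # everything is now durable; cached copies are clean
    return pm_traffic, storage_traffic
```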

Enhancing LRU Buffer Replacement Policy with Delayed Write of Not-cold-dirty-pages for Flash Memory (플래시 메모리를 위한 Not-cold-Page 쓰기지연을 통한 LRU 버퍼교체 정책 개선)

  • Jung Ho-Young;Park Sung-Min;Cha Jae-Hyuk;Kang Soo-Yong
    • Journal of KIISE: Computer Systems and Theory / v.33 no.9 / pp.634-641 / 2006
  • Flash memory has many advantages such as non-volatility and fast I/O speed, but it also has disadvantages such as the inability to update data in place and asymmetric read/write/erase speeds. For the performance of flash memory storage, it is essential for the buffer replacement algorithm to reduce the number of write operations, which in turn affects the number of erase operations. This paper proposes a new buffer replacement algorithm that delays the writes of not-cold dirty pages in the buffer cache of flash storage. We show that this algorithm effectively decreases the number of write and erase operations without much degradation of the hit ratio. As a result, the overall I/O performance of flash memory is improved.
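A hedged sketch of the eviction preference implied by the abstract follows: clean pages are evicted first, then cold dirty pages, while not-cold dirty pages stay cached so their flash writes are delayed. The definition of "cold" used here (the oldest fraction of the LRU list) is an illustrative assumption, not the paper's exact criterion.

```python
from collections import OrderedDict

def write_back(page):
    """Placeholder for the flash write performed when a dirty page is evicted."""
    pass

class DelayedWriteLRU:
    def __init__(self, capacity, cold_fraction=0.25):
        self.capacity = capacity
        self.cold_fraction = cold_fraction
        self.pages = OrderedDict()           # page -> dirty flag, oldest first

    def access(self, page, is_write=False):
        if page in self.pages:
            dirty = self.pages.pop(page) or is_write
        else:
            dirty = is_write
            if len(self.pages) >= self.capacity:
                self._evict()
        self.pages[page] = dirty             # most recently used at the end

    def _evict(self):
        lru_order = list(self.pages)                                 # oldest first
        cold_count = max(1, int(len(lru_order) * self.cold_fraction))
        clean = [p for p in lru_order if not self.pages[p]]
        cold_dirty = [p for p in lru_order[:cold_count] if self.pages[p]]
        # Prefer a clean page, then a cold dirty page; not-cold dirty pages
        # stay cached so their expensive flash writes are delayed.
        if clean:
            victim, needs_write = clean[0], False
        elif cold_dirty:
            victim, needs_write = cold_dirty[0], True
        else:
            victim, needs_write = lru_order[0], True                 # plain LRU fallback
        if needs_write:
            write_back(victim)
        del self.pages[victim]
```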

A Policy of Page Management Using Double Cache for NAND Flash Memory File System (NAND 플래시 메모리 파일 시스템을 위한 더블 캐시를 활용한 페이지 관리 정책)

  • Park, Myung-Kyu;Kim, Sung-Jo
    • Journal of KIISE: Computer Systems and Theory / v.36 no.5 / pp.412-421 / 2009
  • Due to the physical characteristics of NAND flash memory, overwrite operations are not permitted at the same location, and therefore erase operations are required before rewriting. These extra operations degrade the performance of a NAND flash memory file system. Since NAND flash memory also has an upper limit on the number of erase operations for a given location, frequent erases shorten its lifetime. These problems can be mitigated by delaying write operations in order to improve I/O performance; however, doing so lowers the cache hit ratio. This paper proposes a page management policy using a double cache for NAND flash memory file systems. The double cache consists of a Real cache and a Ghost cache and is used to analyze page reference patterns. The policy delays write operations in the Ghost cache in order to maintain the hit ratio of the Real cache. It can also improve write performance by reducing the time needed to search for dirty pages, since the Ghost cache is organized into Dirty and Clean lists. We find that the hit ratio and I/O performance of our policy are improved by 20.57% and 20.59% on average, respectively, compared with existing policies, and that the number of write operations is reduced by 30.75% on average.
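A rough structural sketch of the double-cache arrangement described above: pages demoted from the Real cache move to a Ghost cache that keeps separate Dirty and Clean lists, so dirty pages can have their writes delayed and can be found without scanning. The sizes and movement rules below are illustrative assumptions, not the paper's exact policy.

```python
from collections import OrderedDict

def write_back(page):
    """Placeholder for the actual NAND write of a delayed dirty page."""
    pass

class DoubleCache:
    def __init__(self, real_capacity, ghost_capacity):
        self.real_capacity = real_capacity
        self.ghost_capacity = ghost_capacity
        self.real = OrderedDict()         # page -> dirty flag, LRU order (oldest first)
        self.ghost_dirty = OrderedDict()  # demoted dirty pages: write-back is delayed
        self.ghost_clean = OrderedDict()  # demoted clean pages: can be dropped freely

    def access(self, page, is_write=False):
        if page in self.real:
            self.real.move_to_end(page)
            self.real[page] = self.real[page] or is_write
            return
        # A ghost hit returns to the Real cache, keeping its dirty state.
        dirty = self.ghost_dirty.pop(page, None) is not None
        self.ghost_clean.pop(page, None)
        if len(self.real) >= self.real_capacity:
            self._demote()
        self.real[page] = dirty or is_write

    def _demote(self):
        page, dirty = self.real.popitem(last=False)
        (self.ghost_dirty if dirty else self.ghost_clean)[page] = True
        if len(self.ghost_dirty) + len(self.ghost_clean) > self.ghost_capacity:
            if self.ghost_clean:
                self.ghost_clean.popitem(last=False)   # clean ghost page: just drop it
            else:
                victim, _ = self.ghost_dirty.popitem(last=False)
                write_back(victim)                     # the delayed write finally happens
```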

Design of NCQ Scheduler Considering SSD's Characteristics (SSD의 특성을 고려한 NCQ 스케줄러 설계)

  • Cho, Yong-Woon;Kim, Tae-Seok
    • Proceedings of the Korean Information Science Society Conference / 2012.06a / pp.288-289 / 2012
  • This paper proposes a Native Command Queueing (NCQ) scheduling technique that exploits the structural characteristics of Solid State Drives (SSDs). Unlike Hard Disk Drives (HDDs), SSDs have very short access times, and their read and write speeds differ from each other. In addition, like HDDs, SSDs contain an internal buffer cache. Using these characteristics, the time required to process each command can be modeled, and response time can be improved by scheduling commands in increasing order of their modeled processing times.
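A small sketch of the shortest-modeled-time-first scheduling idea in this abstract is shown below. The cost-model constants (per-sector read/write times and the buffer-hit cost) are made-up illustrative numbers, not measurements from the paper.

```python
READ_US_PER_SECTOR = 25      # assumed: reads are faster than writes on SSDs
WRITE_US_PER_SECTOR = 80
BUFFER_HIT_US = 5            # assumed cost when the SSD's internal buffer cache hits

def modeled_time(cmd, in_buffer):
    """cmd: dict with 'op' ('read'/'write') and 'sectors'."""
    if in_buffer(cmd):
        return BUFFER_HIT_US
    per_sector = READ_US_PER_SECTOR if cmd["op"] == "read" else WRITE_US_PER_SECTOR
    return per_sector * cmd["sectors"]

def schedule(queue, in_buffer=lambda cmd: False):
    """Return queued commands ordered by increasing modeled processing time."""
    return sorted(queue, key=lambda cmd: modeled_time(cmd, in_buffer))

# Example: short reads are dispatched before large writes.
cmds = [{"op": "write", "sectors": 64}, {"op": "read", "sectors": 8},
        {"op": "write", "sectors": 8}]
print([c["op"] for c in schedule(cmds)])   # ['read', 'write', 'write']
```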

Instructions and Data Prefetch Mechanism using Displacement History Buffer (변위 히스토리 버퍼를 이용한 명령어 및 데이터 프리페치 기법)

  • Jeong, Yong Su;Kim, JinHyuk;Cho, Tae Hwan;Choi, SangBang
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.10 / pp.82-94 / 2015
  • In this paper, we propose a hardware prefetch mechanism with an efficient cache replacement policy that gives priority to the trigger block of a spatial region and generates the spatial region using a displacement field. Because the history is recorded per trigger block, the access sequence of the program can be taken into account, and because the history is stored as displacement values, instruction and data addresses can be prefetched quickly by adding the stored displacements to the trigger address. We also propose a replacement method that, when the cache is full, gives priority to trigger blocks and replaces one of the lower-priority blocks at random. We evaluated the hardware prefetcher using the gem5 memory simulator and the PARSEC benchmarks. Compared to an existing hardware prefetcher that generates spatial regions using a bit vector, the L1 data cache miss rate was reduced by about 44.5% on average and the L1 instruction cache miss rate by about 26.1%, while IPC (Instructions Per Cycle) improved by about 23.7% on average.
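The following is a simplified, hedged model of displacement-history prefetching as described above: per trigger block, the displacements of subsequent accesses within the spatial region are recorded, and on the next access to that trigger the recorded displacements are added to the trigger address to issue prefetches. Table sizes and the region-start signal are illustrative assumptions.

```python
class DisplacementHistoryPrefetcher:
    """Toy software model of a per-trigger displacement history table."""

    def __init__(self, max_displacements=8):
        self.max_displacements = max_displacements
        self.history = {}            # trigger block address -> list of displacements
        self.current_trigger = None  # trigger of the spatial region being recorded

    def on_access(self, block_addr, is_region_start):
        prefetches = []
        if is_region_start:
            # Close the previous region and start recording a new one.
            self.current_trigger = block_addr
            self.history.setdefault(block_addr, [])
            # Replay stored displacements relative to the new trigger address.
            prefetches = [block_addr + d for d in self.history[block_addr]]
        elif self.current_trigger is not None:
            d = block_addr - self.current_trigger
            hist = self.history[self.current_trigger]
            if d not in hist and len(hist) < self.max_displacements:
                hist.append(d)       # remember this displacement for next time
        return prefetches
```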

Hierarchically Encoded Multimedia-data Management System for Over The Top Service (OTT 서비스를 위한 계층적 부호화 기반 멀티미디어 데이터 관리 시스템)

  • Lee, Taehoon;Jung, Kidong
    • Journal of KIISE / v.42 no.6 / pp.723-733 / 2015
  • OTT services that deliver multimedia video over the Internet have spread to terminals with a wide variety of resolutions. These terminals communicate over networks such as 3G, LTE, VDSL, and ADSL. As network service for such diverse terminals has grown, the need for a new way of encoding multimedia has increased. SVC is an encoding technique well suited to OTT services. We propose an efficient multimedia management system for SVC-encoded multimedia data. I/O traces were generated using a Zipf distribution, and performance was comparatively evaluated against the existing system.
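For the evaluation setup mentioned above, a block-level I/O trace whose access popularity follows a Zipf distribution can be generated as follows. The block count, trace length, and skew parameter below are arbitrary assumptions, not values from the paper.

```python
import random

def zipf_trace(num_blocks=1000, length=10000, skew=1.0, seed=42):
    """Return a list of block numbers whose popularity follows Zipf(skew)."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** skew) for rank in range(1, num_blocks + 1)]
    return rng.choices(range(num_blocks), weights=weights, k=length)

trace = zipf_trace()
# A few blocks dominate the trace, mimicking the popularity skew of video requests.
print(len(set(trace[:100])))   # noticeably fewer than 100 distinct blocks
```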

Performance Enhancement through Prefetching Based On Looping Reference Characteristics (순환 참조 특성을 기반한 선반입 성능의 개선)

  • Lee, Hyo-Jeong;Doh, In-Hwan;Noh, Sam-H.
    • Proceedings of the Korean Information Science Society Conference / 2007.06b / pp.327-332 / 2007
  • In the buffer cache, prefetching is, along with the replacement policy, one of the important performance-improvement techniques. However, depending on the characteristics of the reference pattern, prefetching has been reported to actually increase total execution time in some cases. This paper proposes looping-reference prefetching, a scheme that detects the reference pattern and responds appropriately to it, retaining the benefit of prefetching while avoiding adverse effects on performance. For evaluation, we compared the execution time of the read-ahead prefetching currently used in Linux with that of looping-reference prefetching. Simulation results on traces with various reference patterns show that, for traces containing many sequential references, looping-reference prefetching achieves performance improvements of about 3-5%, similar to Linux read-ahead. Moreover, for a particular trace in which applying read-ahead causes roughly a 40% performance degradation, looping-reference prefetching incurs only a negligible 0.07% degradation. The experiments show that the proposed scheme improves system performance by prefetching aggressively only when it is beneficial, and prevents performance degradation by stopping prefetching when it would be harmful.
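A toy sketch of pattern-adaptive prefetching in the spirit of this abstract follows: prefetch aggressively only when a sequential or looping pattern is detected, and stop prefetching otherwise. The detection heuristics, history window, and prefetch depth are illustrative assumptions, not the paper's algorithm.

```python
class LoopAwarePrefetcher:
    def __init__(self, depth=4):
        self.depth = depth
        self.history = []                  # recent block numbers, most recent last

    def next_prefetches(self, block):
        self.history.append(block)
        self.history = self.history[-64:]  # keep a bounded window of history
        if self._is_sequential():
            return [block + i for i in range(1, self.depth + 1)]
        loop = self._detect_loop()
        if loop is not None:
            start, length = loop
            offset = (block - start) % length
            # Prefetch the next blocks along the loop, wrapping at the loop end.
            return [start + (offset + i) % length for i in range(1, self.depth + 1)]
        return []                          # unknown pattern: do not prefetch at all

    def _is_sequential(self):
        tail = self.history[-4:]
        return len(tail) == 4 and all(b - a == 1 for a, b in zip(tail, tail[1:]))

    def _detect_loop(self):
        # A looping reference revisits a block; the span since its previous visit
        # is treated as one full iteration of the loop body.
        last = self.history[-1]
        prev_positions = [i for i, b in enumerate(self.history[:-1]) if b == last]
        if not prev_positions:
            return None
        body = self.history[prev_positions[-1]:-1]
        start, length = min(body), max(body) - min(body) + 1
        return (start, length) if length > 1 else None
```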
