Title/Summary/Keyword: Buffer cache


An Area Efficient Low Power Data Cache for Multimedia Embedded Systems (멀티미디어 내장형 시스템을 위한 저전력 데이터 캐쉬 설계)

  • Kim Cheong-Ghil; Kim Shin-Dug
    • The KIPS Transactions: Part A / v.13A no.2 s.99 / pp.101-110 / 2006
  • One of the most effective ways to improve cache performance is to exploit both the temporal and spatial locality exhibited by a program's execution characteristics. This paper proposes a data cache that occupies little space and consumes little power yet performs well on multimedia applications. The basic architecture is a split cache consisting of a direct-mapped cache with a small block size and a fully-associative buffer with a large block size. To overcome the disadvantage of the small cache space, two mechanisms tailored to the behavior of multimedia applications are added: adaptive multi-block prefetching, which initiates fetches of various sizes, and efficient block filtering, which removes rarely reused data. Simulations on MediaBench show that the proposed 5KB cache provides equivalent performance while reducing energy consumption by up to 40% compared with a 16KB 4-way set-associative cache.
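
The split organization is easy to model: a direct-mapped array of small blocks checked alongside a fully-associative LRU buffer of large blocks. The Python sketch below is only a toy model under assumed sizes and an assumed LRU policy; the paper's prefetching and filtering heuristics are not reproduced.

```python
from collections import OrderedDict

class SplitCache:
    """Toy model of the split data cache: a direct-mapped cache with
    small blocks plus a fully-associative buffer with large blocks.
    Sizes and the LRU policy are illustrative assumptions, not the
    paper's exact design."""

    def __init__(self, dm_sets=128, dm_block=8, fa_entries=8, fa_block=64):
        self.dm_sets, self.dm_block = dm_sets, dm_block
        self.fa_entries, self.fa_block = fa_entries, fa_block
        self.dm = [None] * dm_sets        # direct-mapped: one tag per set
        self.fa = OrderedDict()           # fully-associative, LRU order

    def access(self, addr):
        blk = addr // self.dm_block       # small-block lookup
        idx, tag = blk % self.dm_sets, blk // self.dm_sets
        if self.dm[idx] == tag:
            return "dm-hit"
        fa_tag = addr // self.fa_block    # large-block lookup
        if fa_tag in self.fa:
            self.fa.move_to_end(fa_tag)   # refresh LRU position
            return "fa-hit"
        if len(self.fa) >= self.fa_entries:
            self.fa.popitem(last=False)   # evict least recently used
        self.fa[fa_tag] = True            # fetch the large block
        self.dm[idx] = tag                # and place the small block
        return "miss"
```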

The Efficient Buffer Size in A Dual Flash Memory Structure with Buffer System (이중 NAND 플래시 구조의 버퍼시스템에서 효율적 버퍼 크기)

  • Jung, Bo-Sung; Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.6 no.6 / pp.383-391 / 2011
  • As research on cache memory has shown, instructions and data can be cached separately for higher performance, as in Harvard-architecture CPUs. This paper shows the efficiency of a buffer system placed in front of separate instruction and data flash storage media. We analyze the access characteristics of the instruction and data flash and evaluate their performance. Finally, we propose the buffer structure with the optimal block size and buffer size for the instruction and data flash.
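
Finding the "optimal" block and buffer size comes down to sweeping candidate configurations over an access trace and comparing hit ratios. A hypothetical sweep harness in Python; the trace, the 8 KB capacity, and the candidate block sizes are placeholders, not the paper's values.

```python
from collections import OrderedDict

def hit_ratio(trace, block_size, num_entries):
    """Hit ratio of a fully-associative LRU buffer for one
    (block size, entry count) configuration."""
    lru, hits = OrderedDict(), 0
    for addr in trace:
        blk = addr // block_size
        if blk in lru:
            hits += 1
            lru.move_to_end(blk)
        else:
            if len(lru) >= num_entries:
                lru.popitem(last=False)
            lru[blk] = True
    return hits / len(trace)

# Sweep (block size, entry count) pairs at a fixed total capacity and
# keep the best-performing configuration for this trace.
trace = [i * 4 for i in range(1000)] * 2          # placeholder trace
configs = [(bs, 8192 // bs) for bs in (512, 1024, 2048, 4096)]
best = max(configs, key=lambda cfg: hit_ratio(trace, *cfg))
print("best (block size, entries):", best)
```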

A New Flash Memory Package Structure with Intelligent Buffer System and Performance Evaluation (버퍼 시스템을 내장한 새로운 플래쉬 메모리 패키지 구조 및 성능 평가)

  • Lee Jung-Hoon; Kim Shin-Dug
    • Journal of KIISE: Computer Systems and Theory / v.32 no.2 / pp.75-84 / 2005
  • This research designs a high-performance NAND-type flash memory package with a smart buffer cache that enhances the exploitation of spatial and temporal locality. The proposed buffer structure in the NAND flash memory package, called a smart buffer cache, consists of three parts: a fully-associative victim buffer with a small block size, a fully-associative spatial buffer with a large block size, and a dynamic fetching unit. The new package achieves dramatically higher performance and lower power consumption than any conventional NAND-type flash memory. Our results show that the package with a smart buffer cache reduces the miss ratio by around 70% and the average memory access time by around 67% compared with a conventional NAND flash memory configuration. Moreover, for a given buffer space (e.g., 3KB), the smart-buffer module achieves a better average miss ratio and average memory access time than a conventional direct-mapped buffer with eight times as much space (e.g., 32KB) and a fully-associative configuration with twice as much space (e.g., 8KB).
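
The two quoted figures are linked by the usual average-memory-access-time relation, AMAT = hit time + miss ratio x miss penalty. A small illustration with made-up numbers (not taken from the paper):

```python
def amat(hit_time_ns, miss_ratio, miss_penalty_ns):
    """Average memory access time = hit time + miss ratio * miss penalty."""
    return hit_time_ns + miss_ratio * miss_penalty_ns

# Hypothetical numbers: cutting the miss ratio from 0.20 to 0.06 (a 70%
# reduction) against a 25 us NAND page-read penalty shrinks AMAT by
# nearly the same fraction, because the miss term dominates.
base  = amat(10, 0.20, 25_000)
smart = amat(10, 0.06, 25_000)
print(f"AMAT reduced by {1 - smart / base:.0%}")   # -> about 70%
```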

Buffer Cache Management for Low Power Consumption (저전력을 위한 버퍼 캐쉬 관리 기법)

  • Lee, Min; Seo, Eui-Seong; Lee, Joon-Won
    • Journal of KIISE: Computer Systems and Theory / v.35 no.6 / pp.293-303 / 2008
  • As computing moves to wireless and handheld systems, power efficiency becomes more important. This is especially true for embedded handheld systems, where the memory system consumes the second-largest share of overall power. To save this energy, the low-power modes of SDRAM can be exploited; in RDRAM, for example, nap mode consumes less than 5% of the power drawn in active or standby mode. However, the hardware controller cannot use this facility efficiently unless the operating system cooperates. This paper focuses on minimizing the number of active SDRAM units: the operating system allocates physical pages so that only a few units need to be active while the unneeded ones are put into nap mode. The work can be considered a generalized, system-wide version of PAVM (Power-Aware Virtual Memory). All physical memory is taken into account, especially the buffer cache, which on average accounts for half of total memory usage; because of its size and importance, a PAVM approach cannot be robust without considering it. We analyze RAM usage and propose a power-aware page allocation policy covering both the pages mapped into process address spaces and the buffer cache pages, analyzing and exploiting the relationship and interactions between these two kinds of pages for energy saving.
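
The allocation idea can be sketched as a placement heuristic: prefer a rank already active for the same owner, then any active rank, and wake a napping rank only as a last resort. A schematic Python model with hypothetical rank bookkeeping, not the paper's kernel implementation:

```python
def allocate_page(ranks, owner):
    """Place a page so that as few memory ranks as possible stay active:
    prefer a rank already holding this owner's pages (a process or the
    buffer cache), then any active rank, then wake a napping one."""
    for rank in ranks:                    # rank already used by this owner
        if owner in rank["owners"] and rank["free"] > 0:
            break
    else:
        active = [r for r in ranks if r["owners"] and r["free"] > 0]
        rank = active[0] if active else next(r for r in ranks if r["free"] > 0)
    rank["owners"].add(owner)
    rank["free"] -= 1
    return rank

ranks = [{"owners": set(), "free": 1024} for _ in range(4)]
allocate_page(ranks, "pid-42")            # first allocation wakes one rank
allocate_page(ranks, "buffer-cache")      # packs into the same active rank
print(sum(1 for r in ranks if r["owners"]), "rank(s) active; rest may nap")
```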

Study of Zero-copy Mechanism in TCP/IP (TCP/IP 에서의 Zero-copy 매커니즘 연구)

  • Chae, Byoung soo; Tcha, Seung Ju
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.1 no.2 / pp.131-136 / 2008
  • In research on communication over interconnected Internet networks, the efficiency of the whole system is improved by reducing message transmission delays. We compare the User Datagram Protocol (UDP) with a zero-copy scheme that does not use the buffer cache, in order to find the more effective method. In this thesis, the message copying performed through the buffer cache in the TCP/IP stack on a Unix OS is replaced with a zero-copy process on a Linux OS. The aim of this conversion is to show that zero copy, which avoids the buffer cache in the transport layer, improves communication efficiency.
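
On a modern system the same contrast can be reproduced with the sendfile(2) path, which Python exposes as socket.sendfile(). The sketch below shows the zero-copy path next to the conventional buffered loop; it illustrates the general mechanism, not the thesis's kernel-level modification.

```python
import socket

def serve_file_zero_copy(conn: socket.socket, path: str) -> int:
    """Zero-copy path: socket.sendfile() uses the kernel's sendfile(2)
    on Linux, moving data page cache -> socket inside the kernel with
    no copy through a user-space buffer."""
    with open(path, "rb") as f:
        return conn.sendfile(f)           # returns total bytes sent

def serve_file_buffered(conn: socket.socket, path: str) -> int:
    """Conventional path, for contrast: every iteration copies data
    kernel -> user buffer on read() and user -> kernel on send()."""
    sent = 0
    with open(path, "rb") as f:
        while chunk := f.read(65536):
            conn.sendall(chunk)
            sent += len(chunk)
    return sent
```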


Low-Power Cache Design by using Locality Buffer and Address Compression (지역 버퍼와 주소 압축을 통한 저전력 캐시 설계)

  • Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information / v.18 no.9 / pp.11-19 / 2013
  • Most modern computer systems employ caches to bridge the access-time gap between the processor and the memory system. The power dissipated by the cache has become a significant part of the total power of the microprocessor chip, so reducing it is an important issue. A partial tag cache lowers power consumption by matching only a small portion of the tag bits instead of the full tag. In this paper, we first analyze previous partial tag cache systems and then propose a new address-matching mechanism that uses a locality buffer and address compression. In simulations, the proposed model shows an 18% power reduction on average while providing the same performance level as a regular cache.
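
The core of partial tag matching is a two-step compare: a cheap match on a few low-order tag bits filters out most ways, and the full tag is read only on a partial match. A schematic Python sketch with an illustrative bit width; the paper's locality buffer and compression scheme are not modeled here.

```python
PARTIAL_BITS = 6    # low-order tag bits compared in the first step (assumed)

def lookup(stored_tags, addr_tag):
    """Two-step tag match: a narrow compare on PARTIAL_BITS bits rejects
    almost every non-matching way, and the full-width compare runs only
    for the rare partial matches, saving comparator energy."""
    mask = (1 << PARTIAL_BITS) - 1
    for way, full_tag in enumerate(stored_tags):
        if (full_tag & mask) != (addr_tag & mask):
            continue                      # early-out: partial bits differ
        if full_tag == addr_tag:          # rare full-width compare
            return way
    return None                           # cache miss
```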

General Web Cache Implementation Using NIO (NIO를 이용한 범용 웹 캐시 구현)

  • Lee, Chul-Hui; Shin, Yong-Hyeon
    • Journal of Advanced Navigation Technology / v.20 no.1 / pp.79-85 / 2016
  • Network traffic has increased rapidly in the recent web environment, driven by mobile devices and social networks such as smartphones and Facebook. In this paper, we improve the web response time of an existing system by using NIO direct buffers and DMA, which avoids the drawbacks of Java I/O such as CPU overhead from blocking I/O and garbage collection of buffers. Keys whose priority changes as data circulate are kept in a hash map so they can be manipulated easily, and a priority-modification algorithm is applied. Large response data are split and stored in fast direct buffers, improving performance. Tests covering many cache-hit and cache-miss situations show that the proposed NIO-based method performs substantially better.
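
The cache bookkeeping described above (a hash map with priority adjustment) can be modeled independently of Java NIO. A minimal Python sketch with an assumed hit-count priority and lowest-priority eviction; direct buffers and DMA are outside its scope.

```python
class WebCache:
    """Hash-map web cache with adjustable priorities: hits raise an
    entry's priority, and the lowest-priority entry is evicted when
    space runs out. Bodies are held as memoryviews as a stand-in for
    the direct buffers used in the Java original."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}                 # url -> [priority, body]

    def get(self, url):
        entry = self.entries.get(url)
        if entry is None:
            return None                   # miss: caller fetches upstream
        entry[0] += 1                     # hit: bump priority
        return entry[1]

    def put(self, url, body: bytes):
        if len(self.entries) >= self.capacity:
            victim = min(self.entries, key=lambda u: self.entries[u][0])
            del self.entries[victim]      # evict lowest-priority entry
        self.entries[url] = [1, memoryview(body)]
```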

Policy for Selective Flushing of Smartphone Buffer Cache using Persistent Memory (영속 메모리를 이용한 스마트폰 버퍼 캐시의 선별적 플러시 정책)

  • Lim, Soojung; Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.1 / pp.71-76 / 2022
  • The buffer cache bridges the performance gap between memory and storage, but its effectiveness is limited in smartphones by the periodic flushes performed to prevent data loss. This paper shows that a selective flushing technique using a small persistent memory can significantly reduce the flushing overhead of the smartphone buffer cache. Our I/O analysis of smartphone applications shows that a small set of hot data accounts for most file writes, while a large proportion of file data is written only once. The proposed selective flushing policy flushes frequently updated data to persistent memory and flushes only single-write data to storage, which reduces storage write traffic and also improves the space efficiency of the persistent memory. Simulations with I/O traces of popular smartphone applications show that the proposed policy reduces write traffic to storage by 24.8% on average and by up to 37.8%.
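
The policy itself is a small decision rule: flush hot pages to persistent memory, single-write pages to storage. A minimal Python sketch with an assumed hotness threshold of two writes; the paper's actual classification rule may differ.

```python
from collections import Counter

HOT_THRESHOLD = 2                  # assumed hotness cut-off, not the paper's
write_counts = Counter()           # page id -> writes since last flush
pm, storage = {}, {}               # persistent memory vs. storage

def on_write(page_id, data, cache):
    cache[page_id] = data
    write_counts[page_id] += 1

def flush(cache):
    """Selective flush: repeatedly updated (hot) pages are absorbed by
    persistent memory, while pages written only once go straight to
    storage, so storage sees each single-write page exactly once."""
    for page_id, data in cache.items():
        if write_counts[page_id] >= HOT_THRESHOLD:
            pm[page_id] = data
        else:
            storage[page_id] = data
    write_counts.clear()
```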

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon; Park, Jang-Woo; Kim, Sung-Tan; Lee, Sang-Won; Moon, Bong-Ki
    • Journal of KIISE: Databases / v.37 no.6 / pp.308-314 / 2010
  • As the price of flash memory continues to drop and flash SSD controller technology advances, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, flash SSDs are unlikely to replace hard disks completely as database storage. A more practical approach is to use a flash SSD as a cache for hard disks, and several hybrid storage architectures combining flash memory and hard disks have indeed been suggested in the literature. In this paper, we propose using a flash SSD as an extension of the main buffer in a database system: it stores the pages replaced out of the main buffer and returns them when they are re-referenced in the upper buffer layer, improving system performance drastically. In contrast to existing approaches that place the flash SSD as a cache in the lower storage layer, our approach, which uses it as an extended buffer on the host side, provides fast random reads for the warm pages being replaced out of the limited main buffer. In fact, for the pages missing from the main buffer in a real TPC-C trace, the hit ratio in the extended buffer exceeds 60%, supporting our conjecture that this simple extended-buffer approach can be very effective as a cache. In terms of performance per price, the extended-buffer architecture outperforms two alternatives of the same cost: 1) a larger main buffer and 2) more hard disks.
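
The read path can be summarized as: main buffer, then SSD staging area, then disk, with main-buffer victims demoted to the SSD. A toy Python model under assumed capacities and plain LRU; the paper's admission and replacement details may differ.

```python
from collections import OrderedDict

class ExtendedBuffer:
    """Toy read path for the SSD-extended buffer: main buffer first,
    then the SSD staging area, then disk; pages evicted from the main
    buffer are demoted to the SSD for fast re-reference."""

    def __init__(self, main_cap=3, ssd_cap=6):
        self.main, self.ssd = OrderedDict(), OrderedDict()
        self.main_cap, self.ssd_cap = main_cap, ssd_cap

    def read(self, page, disk):
        if page in self.main:             # main-buffer hit
            self.main.move_to_end(page)
            return self.main[page]
        if page in self.ssd:              # extended-buffer hit:
            data = self.ssd.pop(page)     # fast random read on flash
        else:
            data = disk[page]             # slow hard-disk read
        self._admit(page, data)
        return data

    def _admit(self, page, data):
        if len(self.main) >= self.main_cap:
            victim, vdata = self.main.popitem(last=False)
            if len(self.ssd) >= self.ssd_cap:
                self.ssd.popitem(last=False)
            self.ssd[victim] = vdata      # stage the replaced warm page
        self.main[page] = data
```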

Buffer Invalidation Schemes for High Performance Transaction Processing in Shared Database Environment (공유 데이터베이스 환경에서 고성능 트랜잭션 처리를 위한 버퍼 무효화 기법)

  • 김신희; 배정미; 강병욱
    • The Journal of Information Systems / v.6 no.1 / pp.159-180 / 1997
  • A database sharing system (DBSS) is a system for high-performance transaction processing. In a DBSS, the processing nodes are locally coupled via a high-speed network and share a common database at the disk level; each node has local memory, a separate copy of the operating system, and a DBMS. To reduce the number of disk accesses, each node caches database pages in its local memory buffer. However, since multiple nodes may cache a page simultaneously, cache consistency must be maintained so that every node always accesses the latest version of a page. In this paper, we propose efficient buffer invalidation schemes for a DBSS in which the database is logically partitioned using primary copy authority to reduce locking overhead. The proposed schemes improve performance by reducing both the disk access overhead and the message overhead of maintaining cache consistency, and they perform well even when database workloads vary dynamically.
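
A primary-copy invalidation scheme reduces to one rule: the node holding primary copy authority applies the update and tells the sharing nodes to drop their stale copies. A schematic Python sketch of that message flow; the paper's specific schemes and locking protocol are not modeled.

```python
class Node:
    """One DBSS node's page buffer with primary-copy invalidation: the
    primary node applies an update and notifies the sharing nodes,
    which drop their stale copies and re-read on demand rather than
    eagerly fetching the new version."""

    def __init__(self, name):
        self.name, self.cache = name, {}

    def update(self, page, value, peers):
        self.cache[page] = value          # primary copy gets the new version
        for peer in peers:
            peer.invalidate(page)         # one message per sharing node

    def invalidate(self, page):
        self.cache.pop(page, None)        # drop stale copy if present
```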
