• Title/Summary/Keyword: Cache Miss

Warp-Based Load/Store Reordering to Improve GPU Time Predictability

  • Huangfu, Yijie;Zhang, Wei
    • Journal of Computing Science and Engineering / v.11 no.2 / pp.58-68 / 2017
  • While graphics processing units (GPUs) can be used to improve the performance of real-time embedded applications that require high throughput, it is challenging to estimate the worst-case execution time (WCET) of GPU programs, because modern GPUs are designed to improve average-case performance rather than time predictability. In this paper, a reordering framework is proposed to regulate accesses to the GPU data cache, which improves the accuracy of GPU L1 data cache miss-rate estimation with low performance overhead. With the improved miss-rate estimation, tighter WCET estimates can be achieved for GPU programs.
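
The abstract does not spell out the mechanism, so the following is only a rough sketch of the general idea under assumed parameters (warp count, L1 geometry, and the round-robin order are all hypothetical): if memory requests from different warps are issued to the L1 data cache in a fixed warp order, the cache access sequence, and therefore the miss count, is identical on every run, which is what makes a tight miss-rate bound for WCET analysis feasible.

```cpp
// Hypothetical sketch: requests from each warp are issued to a model of the
// L1 data cache in a fixed round-robin warp order, so the access sequence
// and the resulting miss count are the same on every run.
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

struct Warp { std::deque<uint64_t> pending; };        // outstanding memory requests

int main() {
    const uint64_t kLineSize = 128;                   // assumed L1 line size
    const uint64_t kNumSets  = 32;                    // assumed direct-mapped L1D
    std::vector<uint64_t> tags(kNumSets, UINT64_MAX);
    std::vector<Warp> warps(4);
    for (int w = 0; w < 4; ++w)                       // hypothetical request streams;
        for (int i = 0; i < 8; ++i)                   // on real hardware these arrive
            warps[w].pending.push_back(0x1000 * w + 128 * i);   // in varying orders

    int accesses = 0, misses = 0;
    bool busy = true;
    while (busy) {                                    // fixed round-robin over warps
        busy = false;
        for (auto &warp : warps) {
            if (warp.pending.empty()) continue;
            busy = true;
            uint64_t addr = warp.pending.front();
            warp.pending.pop_front();
            uint64_t line = addr / kLineSize, set = line % kNumSets;
            ++accesses;
            if (tags[set] != line) { ++misses; tags[set] = line; }
        }
    }
    std::cout << "accesses=" << accesses << " misses=" << misses << "\n";
}
```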

High Performance Data Cache Memory Architecture (고성능 데이터 캐시 메모리 구조)

  • Kim, Hong-Sik;Kim, Cheong-Ghil
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.4 / pp.945-951 / 2008
  • In this paper, a new high-performance data cache scheme is proposed that improves the exploitation of both spatial and temporal locality. The proposed data cache consists of a hardware prefetch unit and two sub-caches: a direct-mapped (DM) cache with a large block size and a fully associative buffer with a small block size. Spatial locality is exploited by fetching and storing large blocks in the direct-mapped cache, and is enhanced by prefetching a neighboring block when a DM cache hit occurs. Temporal locality is exploited by storing small blocks from the DM cache in the fully associative buffer, according to their activity in the DM cache, when they are replaced. Experimental results on SPEC2000 programs show that the proposed scheme can reduce the average miss ratio by 12.53%-23.62% and the average memory access time (AMAT) by 14.67%-18.60% compared to previous schemes such as the direct-mapped cache, the 4-way set-associative cache, and the SMI (selective mode intelligent) cache [8].
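
A simplified, self-contained sketch of this two-sub-cache organization follows; the geometry (block size, set count, buffer depth, FIFO replacement) is assumed rather than taken from the paper, and the activity-based placement of small blocks is reduced to plain eviction into the buffer.

```cpp
// Sketch of a direct-mapped (DM) cache with large blocks, a small fully
// associative buffer holding blocks evicted from the DM cache, and a
// prefetch of the neighboring block on a DM hit.  Sizes are assumptions.
#include <algorithm>
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

int main() {
    const uint64_t kBlock  = 64;                     // large DM block size (bytes)
    const size_t   kSets   = 256;
    const size_t   kVictim = 16;                     // fully associative buffer entries
    std::vector<uint64_t> dm(kSets, UINT64_MAX);     // DM cache tags
    std::deque<uint64_t> buffer;                     // FIFO fully associative buffer

    auto access = [&](uint64_t addr) -> bool {
        uint64_t blk = addr / kBlock, set = blk % kSets;
        bool hit = dm[set] == blk ||
                   std::find(buffer.begin(), buffer.end(), blk) != buffer.end();
        if (dm[set] != blk) {                        // install block in the DM cache
            if (dm[set] != UINT64_MAX) {             // replaced block moves to buffer
                buffer.push_back(dm[set]);
                if (buffer.size() > kVictim) buffer.pop_front();
            }
            dm[set] = blk;
        }
        return hit;
    };

    int refs = 0, hits = 0;
    for (uint64_t addr = 0; addr < 64 * 1024; addr += 8) {   // hypothetical stream
        bool hit = access(addr);
        ++refs; hits += hit;
        if (hit) access(addr + kBlock);              // prefetch neighboring block
    }
    std::cout << "hit ratio = " << double(hits) / refs << "\n";
}
```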

Performance Analyses of Instruction Fetch Models Considering Cache Miss and Branch Misprediction (캐쉬 미스와 분기예측 실패를 고려한 명령어 페치 모델의 성능분석)

  • Kim, Seon-Mo;Jeong, Jin-Ha;Choe, Sang-Bang
    • Journal of KIISE:Computer Systems and Theory / v.28 no.12 / pp.685-697 / 2001
  • Cache memories are small, fast memories used to temporarily hold the contents of main memory that are likely to be referenced by the processor, so as to reduce instruction and data access time. In this paper, we present analytical models of the instruction fetch process for four types of instruction cache structures that can be used in superscalar processors. The models define various architectural parameters and take cache misses and branch mispredictions into consideration. To validate the proposed models, we performed extensive simulations and compared the results with the analytical models. Simulation results show that the proposed model can estimate the instruction fetch rate within 10% error in most cases. Both the analytical model and the simulations show that an increase in cache misses reduces the instruction fetch rate more severely than an increase in branch mispredictions does. However, the analytical model can explain causes of performance degradation that cannot be uncovered by simulation alone, and it also provides the exact relationship between cache misses and branch mispredictions for instruction fetch analysis.
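
The paper's analytical models are not reproduced in the abstract, so the following back-of-the-envelope calculation is only illustrative: with an assumed fetch width and assumed miss and misprediction penalties, it shows why a higher miss penalty makes cache misses hurt the fetch rate more than mispredictions do.

```cpp
// Illustrative fetch-rate estimate (not the paper's model): a W-wide fetch
// unit stalled by Cm cycles per I-cache miss and Cb cycles per branch
// misprediction; m and b are events per fetched instruction.
#include <cstdio>

int main() {
    const double W  = 4.0;        // assumed fetch width (instructions/cycle)
    const double Cm = 20.0;       // assumed I-cache miss penalty (cycles)
    const double Cb = 8.0;        // assumed misprediction penalty (cycles)
    const double rates[] = {0.01, 0.05, 0.10};
    for (double m : rates)                       // cache misses per instruction
        for (double b : rates) {                 // mispredictions per instruction
            double fetch = W / (1.0 + W * (m * Cm + b * Cb));
            std::printf("m=%.2f b=%.2f -> %.2f instructions/cycle\n", m, b, fetch);
        }
}
```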


The Early Write Back Scheme For Write-Back Cache (라이트 백 캐쉬를 위한 빠른 라이트 백 기법)

  • Chung, Young-Jin;Lee, Kil-Whan;Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.11 / pp.101-109 / 2009
  • The depth cache and pixel cache of a 3D graphics processor are generally designed as write-back caches to use memory bandwidth efficiently. In 3D graphics caches, write-after-read accesses to the same address and write-only accesses also occur frequently. On a cache miss, an external memory access for the write-back of the evicted block and another access to service the miss are issued at the same time. When misses are frequent and memory bandwidth is limited, external memory access time therefore increases due to the memory bottleneck. As a result, the overall performance of the processor or IP decreases, and peak power consumption increases. In this paper, we propose a novel early write-back cache architecture to solve these problems. The proposed architecture controls when valid dirty blocks are copied back to external memory, and it improves cache performance with the same hit ratio and the same cache capacity. As a result, the proposed architecture solves the memory bottleneck problem by preventing bursts of intensive memory accesses. We evaluated the proposed architecture on the z cache and pixel cache of a 3D graphics accelerator in an SoC environment in which an ARM11 core, the 3D graphics accelerator, and various IPs are embedded. Simulation results with various test vectors show a performance increase of up to 75%.
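
A conceptual sketch of the early write-back idea follows (the cache geometry, the write-back queue, and the idle-cycle hook are assumptions, not the authors' hardware design): dirty blocks are written back during idle bus cycles so that a later miss needs only a line fill instead of a write-back and a fill competing for bandwidth at the same moment.

```cpp
// Conceptual sketch: dirty blocks are written back during idle bus cycles,
// so a later miss only needs a line fill instead of a write-back plus a fill
// at the same time.  Geometry and the idle-cycle model are assumptions.
#include <cstdint>
#include <iostream>
#include <queue>
#include <vector>

struct Line { uint64_t tag = UINT64_MAX; bool dirty = false; };

int main() {
    std::vector<Line> cache(64);               // assumed direct-mapped z/pixel cache
    std::queue<uint64_t> writeback_q;          // blocks scheduled for early write-back
    int demand_wb = 0, early_wb = 0;

    auto access = [&](uint64_t addr, bool is_write) {
        uint64_t blk = addr / 32, set = blk % cache.size();
        Line &l = cache[set];
        if (l.tag != blk) {                    // miss
            if (l.dirty) ++demand_wb;          // write-back forced onto the miss path
            l.tag = blk;
            l.dirty = false;
        }
        if (is_write) { l.dirty = true; writeback_q.push(blk); }
    };
    auto idle_cycle = [&]() {                  // bus idle: drain one early write-back
        if (writeback_q.empty()) return;
        uint64_t blk = writeback_q.front();
        writeback_q.pop();
        Line &l = cache[blk % cache.size()];
        if (l.tag == blk && l.dirty) { l.dirty = false; ++early_wb; }
    };

    for (uint64_t a = 0; a < 8192; a += 4) {   // hypothetical write-heavy stream
        access(a, /*is_write=*/true);
        if (a % 64 == 0) idle_cycle();         // pretend the bus is sometimes idle
    }
    std::cout << "early write-backs: " << early_wb
              << ", write-backs on the miss path: " << demand_wb << "\n";
}
```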

Image Cache for FPGA-based Real-time Image Warping (FPGA 기반 실시간 영상 워핑을 위한 영상 캐시)

  • Choi, Yong Joon;Ryoo, Jung Rae
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.6 / pp.91-100 / 2016
  • In FPGA-based real-time image warping systems, image caches are used for fast readout of pixel data and to reduce the memory access rate. However, a cache algorithm designed for a general computer system is not suitable for real-time operation because of the time delays caused by cache misses and its on-line computational complexity. In this paper, a simple image cache algorithm is presented for an FPGA-based real-time image warping system. Because the pixel access sequence is determined by the 2D coordinate transformation and repeats identically in every image frame, the cache load sequence is programmed off-line to guarantee that no cache misses occur, and the reduced on-line computation results in a simple cache controller. An overall system structure using an FPGA is presented, and experimental results are provided to show the accuracy and validity of the proposed cache algorithm.
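
The following sketch illustrates the off-line scheduling idea with a hypothetical warp (a small rotation) and a row-granularity cache; the actual transformation, cache granularity, and controller interface in the paper may differ. Because the mapping is fixed per frame, the set of source rows needed for each output row can be computed once and replayed by the cache controller, so no miss can occur at run time.

```cpp
// Off-line schedule sketch: for each output row of a fixed inverse warp,
// record which source image rows it touches; the controller replays this
// load schedule every frame so needed pixels are always resident.
#include <cmath>
#include <cstdio>
#include <set>
#include <vector>

int main() {
    const int W = 640, H = 480;
    // Hypothetical fixed inverse mapping: a small rotation about the image center.
    const double c = std::cos(0.05), s = std::sin(0.05);
    auto src_row = [&](int x, int y) {
        double xs = x - W / 2.0, ys = y - H / 2.0;
        return (int)std::lround(-s * xs + c * ys + H / 2.0);   // source y for (x, y)
    };

    // Off-line pass: collect the source rows needed for every output row.
    std::vector<std::set<int>> schedule(H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int sy = src_row(x, y);
            if (sy >= 0 && sy < H) schedule[y].insert(sy);
        }

    // At run time the cache controller would load the listed source rows
    // into the on-chip cache before producing output row y.
    std::printf("output row 0 needs %zu source rows, row %d needs %zu\n",
                schedule[0].size(), H - 1, schedule[H - 1].size());
}
```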

A Hybrid Prefix Caching Scheme for Efficient IP Address Lookup

  • Kim, Jinsoo;Kim, Junghwan
    • Journal of the Korea Society of Computer and Information / v.20 no.12 / pp.45-52 / 2015
  • We propose a hybrid prefix caching scheme to enable high-speed IP address lookup. For correct IP lookup, the prefixes loaded into a prefix cache must not overlap in address range, so every non-leaf prefix needs to be expanded so that it no longer overlaps. A shorter expanded prefix is preferable because a single entry in the prefix cache then covers a wider address range. We exploit the advantages of two dynamic prefix expansion techniques: bounded prefix expansion and bitmap-based prefix expansion. The proposed scheme uses dual bound values, whereas only one bound value is used in bounded prefix expansion. Using bitmap information, our technique associates the dual bound values flexibly with several subtries rather than with fixed subtries. We evaluate the performance of the proposed scheme in terms of the average length of the expanded prefixes and the cache miss ratio. The experimental results show that the proposed scheme has a lower cache miss ratio than previous schemes, including both bounded prefix expansion and bitmap-based expansion, irrespective of the cache size.
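
As a toy illustration of why expansion is needed before prefixes can be cached, the sketch below uses bounded expansion with a single, fixed bound value (the paper's dual bounds and bitmap-guided subtrie association are omitted): a non-leaf prefix such as 10.0.0.0/8 is never cached directly, because it would shadow the more specific 10.1.0.0/16; instead each match is expanded to the bound length around the looked-up address.

```cpp
// Toy bounded prefix expansion before caching.  The routing table, bound
// value, and cache structure are illustrative assumptions.
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct Route { uint32_t addr; int len; int next_hop; };

// Routing table: 10.0.0.0/8 (non-leaf) and 10.1.0.0/16 (leaf below it).
static const Route kTable[] = { {0x0A000000, 8, 1}, {0x0A010000, 16, 2} };

int longest_match(uint32_t ip) {                 // slow-path full lookup
    int best_len = -1, hop = -1;
    for (const Route &r : kTable) {
        uint32_t mask = r.len ? ~0u << (32 - r.len) : 0;
        if ((ip & mask) == r.addr && r.len > best_len) { best_len = r.len; hop = r.next_hop; }
    }
    return hop;
}

int main() {
    const int kBound = 16;                                   // assumed expansion bound
    std::unordered_map<uint32_t, int> cache;                 // expanded /16 prefix -> next hop

    auto lookup = [&](uint32_t ip) {
        uint32_t key = ip & (~0u << (32 - kBound));          // expanded prefix of ip
        auto it = cache.find(key);
        if (it != cache.end()) return it->second;            // cache hit, still correct
        int hop = longest_match(ip);
        cache[key] = hop;                                    // cache the expanded prefix
        return hop;
    };

    std::printf("10.2.3.4 -> hop %d\n", lookup(0x0A020304)); // matches 10.0.0.0/8
    std::printf("10.1.9.9 -> hop %d\n", lookup(0x0A010909)); // matches 10.1.0.0/16
    std::printf("10.1.9.9 -> hop %d (from cache)\n", lookup(0x0A010909));
}
```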

A Cache Management Technique Based on Eviction Cost Estimation for Heterogeneous Storage Devices (이기종 저장장치를 위한 제거 비용 평가 기반 캐시 관리 기법)

  • Park, SeJin;Park, ChanIk
    • IEMEK Journal of Embedded Systems and Applications / v.7 no.3 / pp.129-134 / 2012
  • The objective of a cache is to reduce I/O accesses to the physical storage device so that users can access their data faster. Traditionally, the most important metric for measuring cache performance is the hit ratio, so a replacement policy that keeps the hit ratio high is regarded as a good one. However, cache miss latency differs when the underlying storage devices are heterogeneous: even if the hit ratio is high, the user experiences low performance if misses frequently go to a low-performance disk. To address this problem, we propose cache management based on eviction cost estimation. In our results, eviction-cost-based cache management improves throughput by 10%-30% compared with LRU cache management.
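
A minimal sketch of cost-aware eviction is given below; the cost values and the policy of scanning a few LRU-end candidates are assumptions rather than the authors' estimator, but they show the basic idea of preferring to evict blocks that are cheap to re-fetch from their backing device.

```cpp
// Cost-aware eviction sketch: each block remembers an estimated re-fetch cost
// (roughly the miss latency of its backing device), and on eviction the
// cheapest of the least recently used candidates is chosen instead of pure LRU.
#include <cstdint>
#include <cstdio>
#include <iterator>
#include <list>
#include <unordered_map>

struct Block { uint64_t id; double refetch_cost; };

class CostAwareCache {
    size_t capacity_;
    std::list<Block> lru_;                            // front = most recently used
    std::unordered_map<uint64_t, std::list<Block>::iterator> index_;
public:
    explicit CostAwareCache(size_t cap) : capacity_(cap) {}
    void access(uint64_t id, double cost) {
        auto it = index_.find(id);
        if (it != index_.end()) {                     // hit: move to front
            lru_.splice(lru_.begin(), lru_, it->second);
            return;
        }
        if (lru_.size() >= capacity_) {               // miss: pick a victim
            // Consider the 4 least recently used blocks; evict the cheapest one.
            auto victim = std::prev(lru_.end());
            auto cand = victim;
            for (int i = 0; i < 4 && cand != lru_.begin(); ++i, --cand)
                if (cand->refetch_cost < victim->refetch_cost) victim = cand;
            index_.erase(victim->id);
            lru_.erase(victim);
        }
        lru_.push_front({id, cost});
        index_[id] = lru_.begin();
    }
};

int main() {
    CostAwareCache cache(128);
    for (uint64_t i = 0; i < 1000; ++i)               // hypothetical workload: blocks
        cache.access(i % 200, (i % 200 < 100) ? 0.1 /*SSD*/ : 5.0 /*HDD*/);
    std::puts("done");                                // illustration only
}
```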

A Cache-Conscious Compression Index Based on the Level of Compression Locality (압축 지역성 수준에 기반한 캐쉬 인식 압축 색인)

  • Kim, Won-Sik;Yoo, Jae-Jun;Lee, Jin-Soo;Han, Wook-Shin
    • Journal of Korea Multimedia Society / v.13 no.7 / pp.1023-1043 / 2010
  • As main memory gets cheaper, it becomes increasingly affordable to keep the entire index of a DBMS in main memory and access it there. Since the speed gap between the CPU and main memory keeps growing, much research is in progress on reducing the cost of main memory accesses. Cache-conscious trees are one such approach: they reduce the number of cache misses by compressing the data in each node, and thereby reduce the cost of main memory accesses. Existing cache-conscious trees use a single fixed compression technique without considering the properties of the data in a node. First, this paper proposes the DC-tree, which uses various compression techniques and changes the data layout in a node according to the properties of its data in order to reduce cache misses. Second, this paper proposes the level of compression locality, which describes the properties of the data in a node by a formula. Third, this paper proposes Forced Partial Decomposition (FPD), which reduces the number of cache misses. DC-trees outperform the B+-tree by 1.7X, the simple prefix B+-tree by 1.5X, and the pkB-tree by 1.3X in terms of the number of cache misses. Since the proposed DC-trees can be adopted in commercial main-memory database systems, we believe that DC-trees are a practical result.
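
The DC-tree node layout is not reproduced here; the sketch below only illustrates the underlying idea of choosing a node's compression scheme from the data it holds, using prefix compression and an ad-hoc threshold as a stand-in for the paper's level of compression locality.

```cpp
// Illustrative node-level compression choice (not the DC-tree layout): keys
// sharing a long common prefix are stored as one prefix plus short suffixes,
// so more keys fit in a cache-line-sized node; otherwise keys stay as-is.
#include <cstdio>
#include <string>
#include <vector>

struct Node {
    bool prefix_compressed = false;
    std::string common_prefix;
    std::vector<std::string> keys;    // full keys, or suffixes when compressed
};

Node build_node(const std::vector<std::string> &keys) {
    Node n;
    std::string p = keys.front();
    for (const auto &k : keys) {                       // longest common prefix
        size_t i = 0;
        while (i < p.size() && i < k.size() && p[i] == k[i]) ++i;
        p.resize(i);
    }
    // Ad-hoc stand-in for a compression-locality test: compress only when the
    // shared prefix is long enough to be worth factoring out.
    if (p.size() >= 4) {
        n.prefix_compressed = true;
        n.common_prefix = p;
        for (const auto &k : keys) n.keys.push_back(k.substr(p.size()));
    } else {
        n.keys = keys;
    }
    return n;
}

int main() {
    Node n = build_node({"cache-aware", "cache-line", "cache-miss", "cache-set"});
    std::printf("compressed=%d prefix='%s' first suffix='%s'\n",
                n.prefix_compressed, n.common_prefix.c_str(), n.keys[0].c_str());
}
```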

A Hardware Cache Prefetching Scheme for Multimedia Data with Intermittently Irregular Strides (단속적(斷續的) 불규칙 주소간격을 갖는 멀티미디어 데이타를 위한 하드웨어 캐시 선인출 방법)

  • Chon Young-Suk;Moon Hyun-Ju;Jeon Joongnam;Kim Sukil
    • Journal of KIISE:Computer Systems and Theory / v.31 no.11 / pp.658-672 / 2004
  • Multimedia applications must process huge amounts of data at high speed in real time. Memory reference instructions such as loads and stores are a main factor limiting the high-speed execution of a processor. To speed up memory references, cache prefetching schemes reduce the cache miss ratio and the total execution time by fetching data into the cache in advance, before they are expected to be referenced. In this study, we present an advanced data cache prefetching scheme that improves the conventional RPT (reference prediction table) based scheme. It considers the cache line size when calculating the address stride of references made by the same instruction, and enhances the prefetching algorithm so that the benefit of prefetching is maintained even when an irregular address stride is interspersed in a series of uniform strides. Experimental results on multimedia benchmark programs show that the cache miss ratio improves by 29% on average compared to the conventional RPT scheme, while bus usage increases by a relatively small amount (0.03%).
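
The sketch below is a simplified reference prediction table; the state machine, confidence thresholds, and line size are illustrative rather than the exact algorithm from the paper. The stride is measured in cache lines, so sub-line address changes do not break the pattern, and a single irregular stride lowers confidence instead of resetting the entry, which is the gist of the described enhancement.

```cpp
// Simplified RPT prefetcher: stride tracked per load PC in units of cache
// lines, with a confidence counter that tolerates one irregular stride.
#include <cstdint>
#include <cstdio>
#include <unordered_map>

constexpr uint64_t kLine = 64;            // assumed cache line size (bytes)

struct RptEntry {
    uint64_t last_line = 0;               // cache line of the previous access
    int64_t  stride = 0;                  // stride in cache lines
    int      confidence = 0;              // prefetch when confidence >= 2
};

std::unordered_map<uint64_t, RptEntry> rpt;   // keyed by load instruction PC

// Returns the address to prefetch, or 0 if no prefetch is issued.
uint64_t observe(uint64_t pc, uint64_t addr) {
    uint64_t line = addr / kLine;
    RptEntry &e = rpt[pc];
    int64_t stride = (int64_t)line - (int64_t)e.last_line;
    if (stride == e.stride) {
        if (e.confidence < 3) ++e.confidence;      // pattern confirmed
    } else if (e.confidence >= 2) {
        --e.confidence;                            // tolerate one irregular stride
    } else {
        e.stride = stride;                         // learn a new stride
        e.confidence = 1;
    }
    e.last_line = line;
    return (e.confidence >= 2) ? (uint64_t)(line + e.stride) * kLine : 0;
}

int main() {
    // Uniform strides with one irregular access in the middle.
    uint64_t addrs[] = {0, 64, 128, 192, 4096, 256, 320, 384};
    for (uint64_t a : addrs) {
        uint64_t p = observe(/*pc=*/0x400100, a);
        std::printf("access %5llu -> prefetch %llu\n",
                    (unsigned long long)a, (unsigned long long)p);
    }
}
```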

Compact Field Remapping for Dynamically Allocated Structures (동적으로 할당된 구조체를 위한 압축된 필드 재배치)

  • Kim, Jeong-Eun;Han, Hwan-Soo
    • Journal of KIISE:Software and Applications / v.32 no.10 / pp.1003-1012 / 2005
  • The most significant difference between embedded systems and general-purpose systems is that embedded systems may use only limited resources, including battery and memory. In particular, the number of applications that deal with multimedia data is increasing. In such data-intensive systems, memory access delay is one of the major bottlenecks hurting system performance, and many researchers have therefore investigated techniques to reduce the memory access cost. Most programs exhibit locality in their memory references: temporal locality means that a resource accessed at one point will be used again in the near future, and spatial locality means that a resource is more likely to be used if resources near it have just been accessed. The latest embedded processors usually adopt cache memory to exploit these two types of locality, since accessing the cache is faster than accessing off-chip memory and thus reduces latency. In this paper, we propose an enhanced dynamic allocation technique for structure-type data that eliminates unused memory space and reduces both the cache miss rate and application execution time. The proposed approach aggregates fields from multiple dynamically allocated records and remaps them onto consecutive memory locations. Experiments on the Olden benchmarks show a 13.9% drop in the L1 cache miss rate and a 15.9% drop in the L2 cache miss rate on average, compared to previously proposed techniques. We also find execution time reduced by 10.9% on average, compared to previous work.
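
A small sketch of the field-remapping idea follows; the record layout and pool structure are assumptions, not the paper's exact transformation. Hot fields from many dynamically allocated records are placed consecutively in per-field pools, so a traversal that touches only those fields loads far fewer cache lines than one that walks whole scattered structs.

```cpp
// Field-remapping sketch: instead of allocating each record as one struct,
// the hot fields of all records live in their own contiguous pools and a
// record is identified by an index (handle) into those pools.
#include <cstdio>
#include <vector>

// Original layout (for comparison): struct Node { double weight; int next; char payload[48]; };
struct NodePool {
    std::vector<double> weight;              // hot field pool
    std::vector<int>    next;                // hot field pool
    std::vector<std::vector<char>> payload;  // cold field, kept off the hot path

    int allocate() {                         // "dynamic allocation" of one record
        weight.push_back(0.0);
        next.push_back(-1);
        payload.emplace_back(48);
        return (int)weight.size() - 1;       // handle instead of a pointer
    }
};

int main() {
    NodePool pool;
    int head = pool.allocate();
    for (int i = 1; i < 1000; ++i) {         // build a linked list of 1000 records
        int n = pool.allocate();
        pool.next[n - 1] = n;
    }
    double sum = 0.0;
    for (int cur = head; cur != -1; cur = pool.next[cur])  // traversal touches only
        sum += pool.weight[cur];                            // the two hot-field pools
    std::printf("sum=%f\n", sum);
}
```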