• Title/Summary/Keyword: Victim Cache

An Energy-Delay Efficient System with Adaptive Victim Caches (선택적 희생 캐쉬를 이용한 저전력 고성능 시스템 설계 방안)

  • Kim Cheol Hong;Shim Sunghoon;Jhon Chu Shik;Jhang Seong Tae
    • Journal of KIISE: Computer Systems and Theory, v.32 no.11_12, pp.663-674, 2005
  • We propose a system aimed at achieving high energy-delay efficiency by using adaptive victim caches. In particular, we investigate methods to improve the hit rate in the first level of the memory hierarchy, which reduces the number of accesses to more power-consuming memory structures such as the L2 cache. A victim cache is a memory element for reducing conflict misses in a direct-mapped L1 cache. We present two techniques to fill the victim cache with the blocks that have a higher probability of being re-requested by the processor. The hit-based victim cache is filled with blocks that were referenced frequently by the processor. The replacement-based victim cache is filled with blocks evicted from sets where block replacements happened frequently. According to our simulations, the replacement-based victim cache scheme outperforms the conventional victim cache scheme by about 2% on average and reduces power consumption by up to 8%.
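
A rough C sketch (not code from the paper) of the replacement-based fill policy: an evicted L1 block enters the victim cache only if its set has seen frequent replacements. The set count, victim-cache size, FIFO policy, and threshold are illustrative assumptions.

```c
#include <stdint.h>

#define L1_SETS       256   /* direct-mapped L1 (assumed size) */
#define VC_ENTRIES      8   /* small fully associative victim cache */
#define HOT_THRESHOLD   4   /* assumed: a set is "hot" after 4 replacements */

typedef struct { uint32_t tag; int valid; } Line;

static uint32_t repl_count[L1_SETS];   /* per-set replacement counters */
static Line     vc[VC_ENTRIES];
static int      vc_next;               /* FIFO fill pointer */

/* Replacement-based fill: forward an evicted block to the victim cache
 * only if it came from a set where replacements happen frequently, so
 * the victim cache tends to hold blocks likely to be re-requested.    */
void on_l1_eviction(uint32_t set, Line evicted)
{
    repl_count[set]++;
    if (evicted.valid && repl_count[set] >= HOT_THRESHOLD) {
        vc[vc_next] = evicted;
        vc_next = (vc_next + 1) % VC_ENTRIES;
    }
}
```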

Data Cache System based on the Selective Bank Algorithm for Embedded System (내장형 시스템을 위한 선택적 뱅크 알고리즘을 이용한 데이터 캐쉬 시스템)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • The KIPS Transactions: Part A, v.16A no.2, pp.69-78, 2009
  • One of the most effective ways to improve cache performance is to exploit both the temporal and spatial locality exhibited by a program's execution characteristics. In this paper we present a high-performance, low-power cache structure with a bank selection mechanism that enhances the exploitation of spatial and temporal locality. The proposed cache system consists of two parts: a main direct-mapped cache with a small block size, and a fully associative buffer with a large block size that is a multiple of the small block size. In particular, the main direct-mapped cache is constructed as two banks for low power consumption and stores small blocks selected from the fully associative buffer by the proposed bank selection algorithm. Using the bank selection algorithm and three state bits, we selectively extend the lifetime of small blocks with high temporal locality by storing them in the main direct-mapped cache. This approach effectively reduces conflict misses and cache pollution at the same time. According to the simulation results, the average miss ratio, compared with victim and STAS caches of the same size, is improved by about 23% and 32% for MiBench applications, respectively. The average memory access time is reduced by about 14% and 18% compared with the victim and STAS caches, respectively. Energy consumption of the proposed cache is also around 10% lower than that of the other cache systems we examine.
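
A hedged sketch of what the bank selection idea could look like; the block ratio, the reference-count threshold standing in for the paper's three state bits, and the address-bit bank choice are all assumptions for illustration.

```c
#include <stdint.h>

#define SMALL_PER_LARGE 4   /* assumed: one large buffer block = 4 small blocks */

/* Per-small-block reference state kept in the fully associative buffer;
 * the paper uses three state bits, a small counter stands in here.     */
typedef struct {
    uint32_t tag;
    uint8_t  ref[SMALL_PER_LARGE];
} BufEntry;

/* On buffer eviction, promote only re-referenced small blocks into the
 * two-bank direct-mapped cache. The bank is chosen from a low address
 * bit so that only one bank needs to be activated per access, which is
 * the low-power intent of the banked design.                           */
void on_buffer_eviction(const BufEntry *e,
                        void (*install)(int bank, uint32_t small_addr))
{
    for (int i = 0; i < SMALL_PER_LARGE; i++) {
        if (e->ref[i] >= 2) {   /* assumed threshold: high temporal locality */
            uint32_t small_addr = e->tag * SMALL_PER_LARGE + (uint32_t)i;
            install((int)(small_addr & 1u), small_addr);
        }
    }
}
```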

Advanced Victim Cache with Processor Reuse Information (프로세서의 재사용 정보를 이용하는 개선된 고성능 희생 캐쉬)

  • Kwak Jong Wook;Lee Hyunbae;Jhang Seong Tae;Jhon Chu Shik
    • Journal of KIISE: Computer Systems and Theory, v.31 no.12, pp.704-715, 2004
  • Recently, single- and multi-processor systems have used a hierarchical memory structure to bridge the gap between processor clock rate and memory access time; a cache memory system typically includes two or three levels of caches for this purpose. One of the most important factors in the hierarchical memory system is the hit rate in the level-1 cache, because the level-1 cache interfaces directly with the processor; a high level-1 hit rate is therefore critical for system performance. A victim cache is also important, assisting the level-1 cache by reducing its conflict misses. In this paper, we propose an advanced high-level cache management scheme based on processor reuse information. The technique is a cache replacement policy that uses the frequency of the processor's memory accesses and keeps more frequently accessed addresses resident in the cache longer than less frequently accessed ones. We simulate our policy using Augmint, an event-driven simulator, and analyze the results. The simulation results show that the modified processor reuse information scheme (LIVMR) outperforms a level-1 cache with a simple victim cache (LIV) by 6.7% at maximum and 0.5% on average, and the performance benefit grows as the number of processors increases.
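
A minimal sketch of a reuse-frequency replacement policy in the spirit of the abstract; the victim-cache size and counter handling are illustrative assumptions, not the paper's LIVMR details.

```c
#include <stdint.h>

#define VC_ENTRIES 8   /* assumed victim-cache size */

typedef struct {
    uint32_t tag;
    uint32_t reuse;    /* how often the processor re-referenced this block */
    int      valid;
} VCEntry;

static VCEntry vc[VC_ENTRIES];

/* Frequency-aware replacement: victimize the entry with the lowest reuse
 * count, so addresses the processor touches often stay resident longer. */
int pick_victim(void)
{
    int v = 0;
    for (int i = 0; i < VC_ENTRIES; i++) {
        if (!vc[i].valid)
            return i;                      /* free slot wins outright */
        if (vc[i].reuse < vc[v].reuse)
            v = i;
    }
    return v;
}

/* A hit bumps the reuse count, lengthening the block's residency. */
void on_vc_hit(int i) { vc[i].reuse++; }
```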

Impact Evaluation of DDoS Attacks on DNS Cache Server Using Queuing Model

  • Wang, Zheng;Tseng, Shian-Shyong
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.4, pp.895-909, 2013
  • Distributed Denial-of-Service (DDoS) attacks against name servers of the Domain Name System (DNS) threaten to disrupt this critical service. This paper studies the vulnerability of the cache server to flooding DNS query traffic. In the resolution service provided by the cache server, incoming DNS requests, including massive attack traffic, are held in a waiting queue; each request's sojourn lasts until the corresponding response is returned from the authoritative server or the request times out. The victim cache server is thus overloaded by the pounding traffic and eventually goes down. The impact of such attacks is analyzed via a queuing model of both the cache server and the authoritative server. Some specific limits hold for this practical dual queuing process, such as the limited sojourn time in the cache server's queue and the independence of the two queuing processes. Analytical results are presented to evaluate the impact of DDoS attacks on the cache server, and numerical results are provided for further analysis.
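
As a hedged illustration of the queuing effect (the paper's dual-queue model with bounded sojourn times is more detailed), the classical M/M/1 sojourn-time formula already shows why flooding works:

$$ W = \frac{1}{\mu - \lambda}, \qquad \rho = \frac{\lambda}{\mu} < 1, $$

where $\lambda$ is the query arrival rate and $\mu$ the resolution service rate. As attack traffic drives $\lambda$ toward $\mu$, the mean sojourn time $W$ grows without bound, requests begin to exceed their timeout, and the victim cache server's waiting queue saturates.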

Improving Hit Ratio and Hybrid Branch Prediction Performance with Victim BTB (Victim BTB를 활용한 히트율 개선과 효율적인 통합 분기 예측)

  • Joo, Young-Sang;Cho, Kyung-San
    • The Transactions of the Korea Information Processing Society, v.5 no.10, pp.2676-2685, 1998
  • In order to improve branch prediction accuracy and reduce the BTB miss rate, this paper proposes a two-level BTB structure that adds a small victim BTB to the conventional BTB. At small cost, the two-level BTB reduces the BTB miss rate and also improves the prediction accuracy of a hybrid branch prediction strategy that combines dynamic and static prediction. Through trace-driven simulation of four benchmark programs, the performance improvement of the proposed two-level BTB structure is analyzed and validated. The proposed BTB structure improves the BTB miss rate by 26.5% and the misprediction rate by 26.75%.
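
A hedged sketch of a two-level BTB lookup with swap-on-victim-hit; the table sizes and the swap policy are illustrative assumptions rather than details taken from the paper.

```c
#include <stdint.h>

#define BTB_SETS     512   /* assumed main BTB size */
#define VBTB_ENTRIES  16   /* assumed small victim BTB */

typedef struct { uint32_t tag, target; int valid; } BTBEntry;

static BTBEntry btb[BTB_SETS];
static BTBEntry vbtb[VBTB_ENTRIES];

/* Two-level lookup: on a main-BTB miss, probe the victim BTB; on a
 * victim hit, swap the two entries so the just-used branch moves back
 * to the main BTB and the displaced entry drops into the victim BTB.  */
int predict_target(uint32_t pc, uint32_t *target)
{
    uint32_t set = pc % BTB_SETS;
    if (btb[set].valid && btb[set].tag == pc) {
        *target = btb[set].target;
        return 1;                          /* main BTB hit */
    }
    for (int i = 0; i < VBTB_ENTRIES; i++) {
        if (vbtb[i].valid && vbtb[i].tag == pc) {
            BTBEntry tmp = btb[set];       /* swap on victim hit */
            btb[set] = vbtb[i];
            vbtb[i]  = tmp;
            *target  = btb[set].target;
            return 1;
        }
    }
    return 0;                              /* overall BTB miss */
}
```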

Low-power Filter Cache Design Technique for Multicore Processors (멀티 코어 프로세서를 위한 저전력 필터 캐쉬 설계 기법)

  • Park, Young-Jin;Kim, Jong-Myon;Kim, Cheol-Hong
    • Journal of the Korea Society of Computer and Information, v.14 no.12, pp.9-16, 2009
  • Energy consumption as well as performance should be considered when designing modern multicore processors. In this paper, we propose a new design technique that reduces energy consumption in the instruction cache of multicore processors by using a modified filter cache. The filter cache has been recognized as one of the most energy-efficient design techniques for single-core processors. Because the energy consumed in the instruction cache accounts for a significant portion of total processor energy consumption, energy-aware instruction cache design is essential for multicore processors. The proposed technique reduces instruction-cache energy by reducing the number of accesses to the level-1 instruction cache. We evaluate the proposed design using a simulation infrastructure based on SimpleScalar and CACTI. Simulation results show that the proposed architecture reduces the energy consumed in the instruction cache by up to 3.4% compared to the conventional filter cache architecture, while also delivering better performance.
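
A minimal sketch of the baseline filter-cache principle the paper builds on; the sizes and the fetch interface are assumptions, and the paper's multicore-specific modification is not reproduced here.

```c
#include <stdint.h>

#define FC_LINES 16   /* assumed tiny direct-mapped filter (L0) cache */

typedef struct { uint32_t tag; uint32_t insn; int valid; } FCLine;
static FCLine fc[FC_LINES];

extern uint32_t l1_icache_fetch(uint32_t pc);  /* assumed costly L1 access */

/* Filter-cache principle: satisfy most fetches from a tiny low-energy
 * structure so the larger L1 instruction cache is activated, and burns
 * energy, only on filter misses.                                       */
uint32_t fetch(uint32_t pc)
{
    uint32_t idx = (pc / 4) % FC_LINES;
    if (fc[idx].valid && fc[idx].tag == pc)
        return fc[idx].insn;               /* filter hit: L1 stays idle */
    uint32_t insn = l1_icache_fetch(pc);   /* filter miss: access L1 */
    fc[idx] = (FCLine){ .tag = pc, .insn = insn, .valid = 1 };
    return insn;
}
```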

Efficient Cache Architecture for Transactional Memory (트랜잭셔널 메모리를 위한 효율적인 캐시 구조)

  • Choi, Dong-Min;Kim, Seung-Hun;Ro, Won-Woo
    • Journal of the Institute of Electronics Engineers of Korea CI, v.48 no.4, pp.1-8, 2011
  • Traditional transactional memory systems can no longer guarantee the performance of diverse applications with overflowed transactions, since tracking the data for logging is difficult. In particular, this mechanism suffers increased communication delay in sustaining the state required to detect conflicts on overflowed transactions from the first-level cache. To address this, we focus on the cache architecture to reduce the overhead caused by overflows and cache misses. In this paper, we present the Supportive Cache, which reduces the additional overhead during transactions. The Supportive Cache performs a parallel look-up with the L1 private cache and uses the same replacement policy as the L1 private cache. We evaluate the performance of the proposed design by comparing LogTM-SE with and without the Supportive Cache. The simulation results show that our system improves performance by 37% on average compared to the original LogTM-SE using the same hardware resources.
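
A hedged sketch of a parallel look-up against an L1-mirroring structure, which is how the abstract describes the Supportive Cache; the geometry and single-way simplification are assumptions.

```c
#include <stdint.h>

#define SETS 128   /* assumed: Supportive Cache mirrors L1 geometry */

typedef struct { uint32_t tag; int valid; } Line;

static Line l1[SETS];   /* L1 private cache (one way shown) */
static Line sc[SETS];   /* Supportive Cache, same indexing and policy */

/* Parallel look-up: both structures are probed with the same index in
 * the same cycle, so transactional state displaced from L1 can still
 * be found on-core instead of triggering the slow overflow path.      */
int lookup(uint32_t addr)
{
    uint32_t set = addr % SETS;
    int hit_l1 = l1[set].valid && l1[set].tag == addr;
    int hit_sc = sc[set].valid && sc[set].tag == addr;
    return hit_l1 || hit_sc;
}
```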

Cache memory system for high performance CPU with 4GHz (4Ghz 고성능 CPU 위한 캐시 메모리 시스템)

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • Journal of the Korea Society of Computer and Information, v.18 no.2, pp.1-8, 2013
  • In this paper, we propose a high-performance L1 cache structure for high-clock (4GHz) CPUs. The proposed cache memory consists of three parts: a direct-mapped cache to support fast access time, a two-way set-associative buffer to exploit temporal locality, and a buffer-select table. The most recently accessed data is stored in the direct-mapped cache. If data has a high probability of repeated reference, then when it is replaced from the direct-mapped cache it is selectively stored in the two-way set-associative buffer. For high performance and low power consumption, only one way of the two-way set-associative buffer is selectively accessed, based on the buffer-select table (BST). According to simulation results, the energy-delay product improves by about 45%, 70%, and 75% compared with a direct-mapped cache, a four-way set-associative cache, and a victim cache with twice the space, respectively.
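
A hedged sketch of the BST-guided one-way access described above; the buffer size and the retraining rule are illustrative assumptions.

```c
#include <stdint.h>

#define BUF_SETS 64   /* assumed buffer size */

typedef struct { uint32_t tag; int valid; } Way;

static Way     buf[BUF_SETS][2];   /* two-way set-associative buffer */
static uint8_t bst[BUF_SETS];      /* buffer-select table: predicted way */

/* BST-guided access: activate only the predicted way, saving the energy
 * of reading the second way; fall back to the other way on a miss and
 * retrain the table.                                                    */
int buffer_lookup(uint32_t addr)
{
    uint32_t set = addr % BUF_SETS;
    int w = bst[set];
    if (buf[set][w].valid && buf[set][w].tag == addr)
        return 1;                          /* one-way (low-power) hit */
    w ^= 1;
    if (buf[set][w].valid && buf[set][w].tag == addr) {
        bst[set] = (uint8_t)w;             /* remember the correct way */
        return 1;
    }
    return 0;
}
```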

A Cache Replacement Strategy based on the Analysis of Request Patterns in Mobile Computing Environments (이동 컴퓨팅 환경에서 요구 패턴 분석을 기반으로 하는 캐쉬 대체 전략)

  • 이윤장;신동천
    • Journal of KIISE: Software and Applications, v.30 no.7_8, pp.780-791, 2003
  • Caching is a useful technique for improving response time by reducing request contention in mobile computing environments with narrow bandwidth. In traditional cache-based systems, improving the hit ratio has usually been one of the main concerns. In mobile computing environments, however, it is necessary to consider the cost of a cache miss as well as the hit ratio. In this paper, we propose a new cache replacement strategy for pull-based data dissemination systems and evaluate its performance through simulation. The proposed strategy considers both popularity and waiting time: the page with the smallest product of popularity and waiting time is selected as the victim.
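
The victim selection rule is stated directly in the abstract, so it can be sketched almost verbatim; only the data layout (the hypothetical `Page` struct) is assumed.

```c
#include <stddef.h>

typedef struct {
    double popularity;   /* observed request frequency of the page        */
    double wait_time;    /* expected wait to re-fetch it from the channel */
} Page;

/* The proposed policy: evict the page whose popularity x waiting-time
 * product is smallest, i.e. the page that is both rarely requested and
 * cheap to re-acquire, so costly misses on popular pages are avoided.  */
size_t pick_victim(const Page *cache, size_t n)
{
    size_t victim = 0;
    for (size_t i = 1; i < n; i++) {
        double ci = cache[i].popularity * cache[i].wait_time;
        double cv = cache[victim].popularity * cache[victim].wait_time;
        if (ci < cv)
            victim = i;
    }
    return victim;
}
```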

A Low-Power Texture Mapping Technique for Mobile 3D Graphics (모바일 3D 그래픽스를 위한 저전력 텍스쳐 맵핑 기법)

  • Kim, Hyun-Hee;Kim, Ji-Hong
    • Journal of the Korea Society of Computer and Information, v.14 no.2, pp.45-57, 2009
  • Texture mapping is a technique used to add realism to an image in 3D graphics. However, this technique becomes a bottleneck in the 3D graphics pipeline because it requires large processing power and high memory bandwidth. To reduce memory latency in texture mapping, a texture cache is used. As portable devices become smaller and more power constrained, it is important to reduce the area and power consumption of the texture cache. In this paper we propose using a small texture cache to reduce its area and power consumption, together with prefetch techniques and a victim cache that keep performance comparable to larger texture caches. Simulation results show that the proposed small texture cache can reduce area and power consumption by up to 70% and 60%, respectively, using a 1~2KB texture cache compared to a conventional 16KB cache, while maintaining performance.
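
A hedged sketch combining the two compensating techniques named in the abstract, a victim cache and prefetching, around a small direct-mapped texture cache; the sizes and the `prefetch_texels` hook are assumptions.

```c
#include <stdint.h>

#define TC_LINES 32   /* assumed small direct-mapped texture cache */
#define VC_LINES  4   /* assumed tiny victim cache behind it */

typedef struct { uint32_t tag; int valid; } Line;

static Line tc[TC_LINES];
static Line vc[VC_LINES];
static int  vc_next;

extern void prefetch_texels(uint32_t addr);   /* assumed memory-side hook */

/* A small texture cache kept competitive by two helpers: evicted lines
 * fall into a tiny victim cache (catching conflict misses), and nearby
 * texels are prefetched to hide the latency a larger cache would hide. */
int texel_lookup(uint32_t addr)
{
    uint32_t idx = addr % TC_LINES;
    if (tc[idx].valid && tc[idx].tag == addr)
        return 1;                          /* texture-cache hit */
    for (int i = 0; i < VC_LINES; i++)
        if (vc[i].valid && vc[i].tag == addr)
            return 1;                      /* conflict miss caught by VC */
    if (tc[idx].valid) {                   /* evict into the victim cache */
        vc[vc_next] = tc[idx];
        vc_next = (vc_next + 1) % VC_LINES;
    }
    tc[idx] = (Line){ .tag = addr, .valid = 1 };
    prefetch_texels(addr + 1);             /* exploit spatial locality */
    return 0;
}
```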