• Title/Summary/Keyword: Policy Cache

Advanced Victim Cache with Processor Reuse Information (프로세서의 재사용 정보를 이용하는 개선된 고성능 희생 캐쉬)

  • Kwak Jong Wook;Lee Hyunbae;Jhang Seong Tae;Jhon Chu Shik
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.12
    • /
    • pp.704-715
    • /
    • 2004
  • Modern single- and multi-processor systems use a hierarchical memory structure to bridge the gap between the processor clock rate and memory access time; in particular, a cache memory system includes two or three levels of caches. Within this hierarchy, the hit rate of the level-1 cache is critical for system performance because the level-1 cache interfaces directly with the processor. A victim cache, another high-level cache, assists the level-1 cache by reducing conflict misses. In this paper, we propose an advanced high-level cache management scheme based on processor reuse information: a cache replacement policy that uses the frequency of the processor's memory accesses, letting addresses accessed more frequently reside in the cache longer than those accessed less frequently. We simulate our policy using Augmint, an event-driven simulator, and analyze the results. The simulation results show that the modified processor reuse information scheme (LIVMR) outperforms a level-1 cache with a simple victim cache (LIV) by up to 6.7%, and by 0.5% on average, and the performance benefit grows as the number of processors increases.
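
The policy described above is essentially frequency-weighted replacement in the victim cache. Below is a minimal Python sketch of that idea, assuming one reuse counter per victim-cache entry and least-reused eviction; the names and rules are illustrative, not the paper's exact LIVMR algorithm.

```python
# Hypothetical sketch: a victim cache whose replacement is driven by
# per-entry reuse counts, so frequently re-referenced blocks stay longer.

class VictimCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}               # tag -> reuse count

    def access(self, tag):
        """Return True on a victim-cache hit and reward the entry's reuse."""
        if tag in self.entries:
            self.entries[tag] += 1
            return True
        return False

    def insert(self, tag):
        """Insert a block evicted from L1, victimizing the least-reused entry."""
        if len(self.entries) >= self.capacity:
            coldest = min(self.entries, key=self.entries.get)
            del self.entries[coldest]
        self.entries[tag] = 1           # one observed use so far

vc = VictimCache(capacity=4)
for tag in [1, 2, 3, 4, 1, 1, 5]:       # toy trace of block tags
    if not vc.access(tag):
        vc.insert(tag)
print(vc.entries)                        # the frequently reused tag 1 survives
```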

Multi-path Routing Policy for Content Distribution in Content Network

  • Yang, Lei;Tang, Chaowei;Wang, Heng;Tang, Hui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.5
    • /
    • pp.2379-2397
    • /
    • 2017
  • Content distribution technology, which routes content to cache servers, is an effective way to reduce the response time of user requests. However, as content traffic grows exponentially, traditional content routing methods suffer from high delay and inefficient delivery. In this paper, a content selection policy is proposed that combines the history of cache hits with the cache hit rate to determine content popularity. Specifically, the CGM policy raises the selection probability of promising paths by taking into account the storage cost and transmission cost of the content network; the content routing table is then updated with the proportion of traffic distributed across the paths. Extensive simulation results show that the proposed scheme improves content routing and outperforms existing routing schemes in terms of Internet traffic and access latency.
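
The abstract names two signals (hit history and hit rate) for popularity, and two costs (storage and transmission) for path selection. The sketch below is a hedged reading of how such scores might be combined; the weights and formulas are made up, since the CGM policy's exact equations are not given here.

```python
# Illustrative only: blend hit history with hit rate into a popularity score,
# and turn path costs into a proportional routing-table update.

def popularity(hit_count, hit_rate, alpha=0.5):
    # alpha trades off long-term history against the recent hit rate.
    return alpha * hit_count + (1 - alpha) * 100 * hit_rate

def path_score(transmission_cost, storage_cost, beta=0.7):
    # Cheaper paths get higher scores (and thus higher selection probability).
    return 1.0 / (beta * transmission_cost + (1 - beta) * storage_cost)

print(popularity(hit_count=40, hit_rate=0.6))          # one content's score

paths = {"path_a": (3.0, 1.0), "path_b": (5.0, 0.5)}   # (transmission, storage)
scores = {p: path_score(t, s) for p, (t, s) in paths.items()}
total = sum(scores.values())
routing_table = {p: round(v / total, 2) for p, v in scores.items()}
print(routing_table)   # traffic split in proportion to the path scores
```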

Performance Analysis of Adaptive Partition Cache Replacement using Various Monitoring Ratios for Non-volatile Memory Systems

  • Hwang, Sang-Ho;Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.4
    • /
    • pp.1-8
    • /
    • 2018
  • In this paper, we propose an adaptive partition cache replacement policy and evaluate it under various monitoring ratios, aiming to extend the lifetime of non-volatile main memory systems without degrading performance. The proposal combines the conventional LRU (Least Recently Used) replacement policy with an Early Eviction Zone (E2Z), which considers a dirty bit as well as LRU bits when selecting a candidate block. In particular, this paper measures the performance of non-volatile memory under various monitoring ratios and determines the monitoring ratio and E2Z partition size that minimize the number of writebacks, using cache hit counter logic and a hit predictor. In the experimental evaluation, the 1:128 combination provided the best balance of writebacks and runtime in the performance-complexity trade-off, and our proposal reduced writebacks by up to 42% compared with other schemes.
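
As a concrete reading of the E2Z idea, the sketch below partitions a set's LRU stack so that the least-recently-used tail forms the early-eviction zone, within which a clean block (dirty bit unset) is preferred as victim to avoid a writeback to NVM. The partition size and fallback rule are assumptions; the paper's monitoring logic is not reproduced.

```python
# Hedged sketch of LRU + Early Eviction Zone (E2Z) victim selection.

def choose_victim(ways, e2z_size=2):
    """ways: (tag, dirty) tuples ordered MRU -> LRU; returns the victim."""
    e2z = ways[-e2z_size:]                  # the LRU tail forms the E2Z
    clean = [w for w in e2z if not w[1]]    # clean blocks need no writeback
    return clean[0] if clean else ways[-1]  # otherwise fall back to plain LRU

ways = [("a", True), ("b", True), ("c", False), ("d", True)]
print(choose_victim(ways))   # ('c', False): a clean E2Z block, no NVM write
```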

Core-aware Cache Replacement Policy for Reconfigurable Last Level Cache (재구성 가능한 라스트 레벨 캐쉬 구조를 위한 코어 인지 캐쉬 교체 기법)

  • Son, Dong-Oh;Choi, Hong-Jun;Kim, Jong-Myon;Kim, Cheol-Hong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.1-12
    • /
    • 2013
  • In multi-core processors, the Last Level Cache (LLC) reduces the speed gap between memory and the cores, so it has a large impact on processor performance. An LLC can be organized as a shared cache or as private caches. The computer architecture community has mainly focused on management techniques for shared caches, while management techniques for private caches have received less attention. In a conventional private LLC, capacity is statically assigned to each core, causing serious performance degradation when workloads are unevenly distributed. To overcome this problem, this paper proposes a replacement policy that manages the private LLC efficiently. Because the proposed core-aware cache replacement policy can reconfigure the LLC dynamically, the LLC hit rate increases substantially. Moreover, the proposed policy uses 2-bit saturating counters to improve performance further. According to our simulation results, the proposed method improves the hit rate by 9.23% and reduces the access time by 12.85% compared to the conventional method.
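
The abstract mentions 2-bit saturating counters but not their exact update rules, so the sketch below shows one plausible use: counters saturate upward on reuse, age downward when no victim is available, and a line whose counter reaches zero becomes replaceable. This is an illustrative stand-in, not the paper's core-aware policy.

```python
# Hypothetical replacement guided by 2-bit saturating counters (values 0..3).

CMAX = 3

class SetWithCounters:
    def __init__(self, num_ways):
        self.lines = [None] * num_ways      # cached tags
        self.ctr = [0] * num_ways           # per-line 2-bit counters

    def access(self, tag):
        if tag in self.lines:
            i = self.lines.index(tag)
            self.ctr[i] = min(CMAX, self.ctr[i] + 1)    # reward reuse
            return True
        self._insert(tag)
        return False

    def _insert(self, tag):
        while True:
            for i, c in enumerate(self.ctr):
                if self.lines[i] is None or c == 0:     # usable victim
                    self.lines[i], self.ctr[i] = tag, 1
                    return
            self.ctr = [c - 1 for c in self.ctr]        # age all lines

s = SetWithCounters(num_ways=4)
for tag in ["a", "b", "a", "c", "d", "e"]:
    s.access(tag)
print(list(zip(s.lines, s.ctr)))   # the reused "a" survives; "b" was evicted
```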

A Cache Management Technique Based on Eviction Cost Estimation for Heterogeneous Storage Devices (이기종 저장장치를 위한 제거 비용 평가 기반 캐시 관리 기법)

  • Park, SeJin;Park, ChanIk
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.7 no.3
    • /
    • pp.129-134
    • /
    • 2012
  • The objective of a cache is to reduce I/O accesses to the physical storage device so that users can access their data faster. Traditionally, the most important metric for cache performance is the hit ratio, so a replacement policy that keeps the hit ratio high is regarded as a good one. However, when the underlying storage devices are heterogeneous, the cache miss latency differs across devices: even if the hit ratio is high, frequent misses that fall on a low-performance disk still leave the user with poor performance. To address this problem, we propose cache management based on eviction cost estimation. In our results, it achieves 10~30% higher throughput than LRU cache management.
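
A minimal sketch of the eviction-cost idea under stated assumptions: each cached block carries the miss penalty of its backing device, and eviction prefers the block that would be cheapest to refetch, breaking ties by recency. The scoring rule is hypothetical and stands in for the paper's estimator.

```python
# Cost-aware eviction for heterogeneous storage (illustrative sketch).

class CostAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0
        self.blocks = {}              # key -> (last_use_tick, refetch_cost)

    def access(self, key, refetch_cost):
        self.clock += 1
        hit = key in self.blocks
        if not hit and len(self.blocks) >= self.capacity:
            # Evict the block that would be cheapest to bring back on a miss.
            victim = min(self.blocks,
                         key=lambda k: (self.blocks[k][1], self.blocks[k][0]))
            del self.blocks[victim]
        self.blocks[key] = (self.clock, refetch_cost)
        return hit

cache = CostAwareCache(capacity=2)
cache.access("ssd_block", refetch_cost=0.1)   # cheap to re-read from SSD
cache.access("hdd_block", refetch_cost=8.0)   # expensive to re-read from HDD
cache.access("new_block", refetch_cost=1.0)   # evicts the SSD-backed block
print(sorted(cache.blocks))                   # ['hdd_block', 'new_block']
```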

A New Cache Replacement Policy for Improving Last Level Cache Performance (라스트 레벨 캐쉬 성능 향상을 위한 캐쉬 교체 기법 연구)

  • Do, Cong Thuan;Son, Dong Oh;Kim, Jong Myon;Kim, Cheol Hong
    • Journal of KIISE
    • /
    • v.41 no.11
    • /
    • pp.871-877
    • /
    • 2014
  • Cache replacement algorithms have been developed to reduce miss counts. In modern processors, the widening performance gap between the processor and main memory makes the cache replacement policy ever more important. The Least Recently Used (LRU) policy is one of the most common in modern processors. However, recent research has shown that the gap between LRU and the theoretical optimal replacement algorithm (OPT) is large, and although LRU has repeatedly proven adequate, the OPT/LRU gap continues to widen as cache associativity grows. In this study, we observe that there is still room to improve cache performance within the existing LRU mechanism. We propose a method that enhances the LRU replacement algorithm based on the access proportions among the lines in a cache set during the interval between two successive replacements, which then inform the final replacement decision. Our experimental results reveal that the proposed method reduces the average miss rate of a baseline 512KB L2 cache by 15 percent compared to conventional LRU, and improves processor performance by 4.7 percent over LRU on average.
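
One way to read the proposed mechanism: accesses to each line are counted during the window between two successive replacements, and at the next replacement a still-hot LRU line is spared. The threshold and fallback in the sketch below are illustrative assumptions, not the paper's rule.

```python
# Hedged sketch: LRU refined by per-line access proportions in the window
# between two successive replacements.

def pick_victim(lru_order, window_counts, threshold=0.5):
    """lru_order: tags MRU -> LRU; window_counts: per-tag accesses since the
    previous replacement."""
    total = sum(window_counts.values()) or 1
    lru_tag = lru_order[-1]
    if window_counts.get(lru_tag, 0) / total > threshold:
        return lru_order[-2]    # the LRU line stayed hot: evict the runner-up
    return lru_tag              # conventional LRU decision

order = ["a", "b", "c", "d"]                          # "d" is the LRU line
print(pick_victim(order, {"d": 6, "a": 3, "b": 1}))   # c: d got most accesses
print(pick_victim(order, {"a": 5, "b": 4}))           # d: plain LRU applies
```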

Exploiting Memory Sequence Analysis to Defense Wear-out Attack for Non-Volatile Memory (동작 분석을 통한 비휘발성 메모리에 대한 Wear-out 공격 방지 기법)

  • Choi, Juhee
    • Journal of the Semiconductor & Display Technology
    • /
    • v.21 no.4
    • /
    • pp.86-91
    • /
    • 2022
  • Cache bypassing is a scheme that prevents unnecessary cache blocks from occupying cache capacity, thereby avoiding cache contamination; it has been introduced to alleviate the problems of non-volatile memory (NVM)-based memory systems. However, prior works have not considered wear-out attacks: malicious writes concentrated on a small area of an NVM can cause system failure because of the NVM's limited write endurance. This paper proposes a novel scheme that prolongs lifetime and strengthens resistance to wear-out attacks. First, the memory reference pattern of each cache block is identified by a modified reuse distance calculation. If a cache block is determined to be a target of an attack, it is forwarded to a higher-level cache or to main memory without updating the NVM-based cache. The experimental results show that write endurance improves by 14% on average and by up to 36%.
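
The detection itself is only named in the abstract (a modified reuse-distance calculation), so the following sketch substitutes a simple stand-in: writes that keep returning to the same address within a short distance are flagged as attack candidates and bypassed past the NVM cache. Thresholds and the regularity test are assumptions.

```python
# Illustrative bypass decision against wear-out attacks on an NVM cache.

from collections import defaultdict

class WearOutGuard:
    def __init__(self, max_distance=4, min_repeats=3):
        self.last_seen = {}                    # addr -> last write index
        self.short_hits = defaultdict(int)     # addr -> consecutive short gaps
        self.t = 0
        self.max_distance = max_distance
        self.min_repeats = min_repeats

    def should_bypass(self, addr):
        """Return True if this write looks like a wear-out attack."""
        self.t += 1
        gap = self.t - self.last_seen.get(addr, -10**9)
        self.last_seen[addr] = self.t
        if gap <= self.max_distance:
            self.short_hits[addr] += 1         # suspiciously close repeat
        else:
            self.short_hits[addr] = 0          # normal reuse distance
        return self.short_hits[addr] >= self.min_repeats

guard = WearOutGuard()
trace = ["x", "x", "x", "x", "y", "x"]          # "x" is hammered repeatedly
print([guard.should_bypass(a) for a in trace])  # "x" bypasses once flagged
```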

A Cache Policy Based on Producer Popularity-Distance in CCN (CCN에서 생성자 인기도 및 거리 기반의 캐시정책)

  • Min, Ji-Hwan;Kwon, Tae-Wook
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.5
    • /
    • pp.791-800
    • /
    • 2022
  • CCN, which emerged to overcome the limitations of existing network architectures, enables efficient networking by replacing IP address-based forwarding with content-based forwarding. Content is stored on each node (router) rather than on an origin server, and given the limited storage capacity, the cache policy that decides which content to store and which to evict is very important; several such policies have been studied. In this paper, we combine two existing cache policies, one based on producer popularity and one based on distance, experiment with different mixing ratios, and show which ratio is the most efficient.
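
A minimal sketch of mixing the two criteria with a tunable ratio w; the normalization and the caching threshold are assumptions, since the abstract does not give the exact formula.

```python
# Hypothetical producer popularity-distance cache decision for a CCN router.

def cache_score(popularity, hops_from_producer, max_hops, w=0.6):
    # Popular producers' content and content far from its producer (hence
    # closer to consumers) both score higher; w is the mixing ratio.
    return w * popularity + (1 - w) * (hops_from_producer / max_hops)

def should_cache(popularity, hops, max_hops, w=0.6, threshold=0.5):
    return cache_score(popularity, hops, max_hops, w) >= threshold

print(should_cache(popularity=0.9, hops=4, max_hops=10))   # True
print(should_cache(popularity=0.2, hops=2, max_hops=10))   # False
```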

Performance Evaluation of SSD Cache Based on DM-Cache (DM-Cache를 이용해 구현한 SSD 캐시의 성능 평가)

  • Lee, Jaemyoun;Kang, Kyungtae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.3 no.11
    • /
    • pp.409-418
    • /
    • 2014
  • The amount of data held by storage servers has increased dramatically with the growth of cloud and social networking services. Storage systems with very large capacities may suffer from poor reliability and long latency, problems that can be addressed by a hybrid disk combining mechanical and flash-memory storage. On Linux, an SSD (solid-state disk) can serve as a cache through the DM-cache facility. We assess the limitations of DM-cache by evaluating its performance in diverse environments and identify problems with its caching policy under various commands. The policy is effective in reducing latency when Linux runs natively; but when Linux is installed as a guest operating system on a virtual machine, the overhead incurred by caching actually reduces performance.
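
The native-versus-guest result has a simple arithmetic reading: caching pays off only while its per-request management overhead stays below the latency that the SSD hits save. A back-of-the-envelope model with illustrative numbers (not measurements from the paper):

```python
# Toy latency model: when does an SSD cache layer stop paying for itself?

def avg_latency_us(hit_ratio, ssd_us, hdd_us, overhead_us):
    # Expected access time: hits go to the SSD, misses to the HDD, and the
    # cache layer adds a fixed management overhead on every access.
    return hit_ratio * ssd_us + (1 - hit_ratio) * hdd_us + overhead_us

HDD_US, SSD_US, HIT = 8000.0, 100.0, 0.8       # assumed device latencies
for label, overhead in [("native", 50.0), ("guest VM", 7000.0)]:
    cached = avg_latency_us(HIT, SSD_US, HDD_US, overhead)
    verdict = "helps" if cached < HDD_US else "hurts"
    print(f"{label:8s}: cached {cached:6.0f}us vs "
          f"uncached {HDD_US:6.0f}us -> {verdict}")
```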

Energy-Performance Efficient 2-Level Data Cache Architecture for Embedded System (내장형 시스템을 위한 에너지-성능 측면에서 효율적인 2-레벨 데이터 캐쉬 구조의 설계)

  • Lee, Jong-Min;Kim, Soon-Tae
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.5
    • /
    • pp.292-303
    • /
    • 2010
  • On-chip cache memories play an important role in both the performance and the energy consumption of resource-constrained embedded systems by filtering out many off-chip memory accesses. We propose a 2-level data cache architecture with a low energy-delay product tailored to embedded systems. The L1 data cache is small and direct-mapped and employs a write-through policy, whereas the L2 data cache is set-associative and adopts a write-back policy. Consequently, the L1 data cache is accessed in one cycle and provides high cache bandwidth, while the L2 data cache is effective at reducing the global miss rate. To reduce both the penalty of the high miss rate caused by the small L1 cache and the power consumed by address generation, we propose an ECP (Early Cache hit Predictor) scheme, which predicts whether the L1 cache holds the requested data using fast address generation together with L1 hit prediction. To reduce the high energy cost of accessing the L2 data cache caused by heavy write-through traffic from the write buffer between the two cache levels, we propose a one-way write scheme. In our simulation-based experiments using a cycle-accurate simulator and embedded benchmarks, the proposed 2-level data cache architecture improves overall system performance by 3.6% and reduces data cache energy consumption by 50% on average.
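
The abstract defines ECP only at a high level (fast address generation plus L1 hit prediction), so the sketch below shows one plausible predictor shape: a small table of saturating counters indexed by address bits, consulted before the L1 lookup so a predicted miss can start the L2 access early. Table size, indexing, and update rules are assumptions, not the paper's design.

```python
# Hypothetical early L1-hit predictor in the spirit of the ECP idea.

class EarlyHitPredictor:
    def __init__(self, entries=64, cmax=3):
        self.table = [cmax // 2] * entries   # weakly-biased initial state
        self.entries, self.cmax = entries, cmax

    def _index(self, addr):
        return (addr >> 6) % self.entries    # skip the 64-byte line offset

    def predict_hit(self, addr):
        # Counter in the upper half of its range -> predict an L1 hit.
        return self.table[self._index(addr)] >= (self.cmax + 1) // 2

    def update(self, addr, was_hit):
        i = self._index(addr)
        if was_hit:
            self.table[i] = min(self.cmax, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

p = EarlyHitPredictor()
for addr, hit in [(0x1000, True), (0x1000, True), (0x2000, False)]:
    print(hex(addr), "predict:", p.predict_hit(addr), "actual:", hit)
    p.update(addr, hit)
```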