• Title/Summary/Keyword: cache replacement

Cache Optimization on Hot-Point Proxy Caching Using Weighted-Rank Cache Replacement Policy

  • Ponnusamy, S.P.; Karthikeyan, E.
    • ETRI Journal / v.35 no.4 / pp.687-696 / 2013
  • The development of proxy caching is essential in the area of video-on-demand (VoD) to meet users' expectations. VoD requires high bandwidth and generates heavy traffic due to the nature of media, so many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object so that users' viewing expectations are met without delay and interactive playback is supported. If caching continues unchecked, the entire cache space is eventually exhausted; hence, the proxy server must apply cache replacement policies to evict existing objects and free cache space for incoming ones. Researchers have developed many cache replacement policies based on parameters such as recency, access frequency, cost of retrieval, and object size. In this paper, the Weighted-Rank Cache replacement Policy (WRCP) is proposed. This policy uses parameters such as access frequency, aging, and mean access gap ratio, together with functions of object size and retrieval cost. The WRCP is applied to our previously developed proxy caching model, Hot-Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
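
As a concrete illustration of the weighted-rank idea, the sketch below combines access frequency, aging, and mean access gap into a single score scaled by retrieval cost and object size. The weights, the score's shape, and all identifiers are illustrative assumptions, not the formulation published in the paper.

```python
import time

class WeightedRankCache:
    """Illustrative weighted-rank eviction: the cached object with the
    lowest combined score is replaced first when space is needed."""

    def __init__(self, capacity, w_freq=0.5, w_age=0.3, w_gap=0.2):
        self.capacity = capacity   # total cache space, e.g. in bytes
        self.used = 0
        # key -> [size, retrieval_cost, frequency, last_access, mean_gap]
        self.objects = {}
        self.w_freq, self.w_age, self.w_gap = w_freq, w_age, w_gap

    def _rank(self, key):
        size, cost, freq, last, gap = self.objects[key]
        age = time.time() - last   # seconds since the last access
        # Frequent, recent objects with small access gaps rank high;
        # retrieval cost and size enter as scaling functions.
        return (self.w_freq * freq - self.w_age * age - self.w_gap * gap) * cost / size

    def access(self, key):
        if key not in self.objects:
            return False           # miss
        entry = self.objects[key]
        now = time.time()
        # Fold the new inter-access gap into the running mean gap.
        entry[4] = (entry[4] * entry[2] + (now - entry[3])) / (entry[2] + 1)
        entry[2] += 1
        entry[3] = now
        return True                # hit

    def admit(self, key, size, cost):
        # Evict lowest-ranked objects until the new object fits.
        while self.objects and self.used + size > self.capacity:
            victim = min(self.objects, key=self._rank)
            self.used -= self.objects.pop(victim)[0]
        self.objects[key] = [size, cost, 1, time.time(), 0.0]
        self.used += size
```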

Document Replacement Policy by Site Popularity in Web Cache (웹 캐시에서 사이트의 인기도에 의한 도큐먼트 교체정책)

  • Yoo, Hang-Suk; Jang, Tea-Mu
    • Journal of Korea Game Society / v.3 no.1 / pp.67-73 / 2003
  • Most web caches temporarily store documents on a per-document basis. When a requested document exists in the cache, the web cache sends that document to the user; when it does not, the web cache requests the document from the origin server, copies it into the cache, and returns it to the user. When the cache capacity is exceeded, the web cache uses a replacement policy to replace an existing document with a new one. Typical replacement policies include document-based LRU and LFU, and various other policies are used to replace documents in the cache effectively. However, these policies consider only the time and frequency of document requests, not the popularity of each web site. Building on replacement policies based on frequently requested documents, this paper presents document replacement policies that take the popularity of each web site into account. These policies suit current network environments, enhance the cache hit ratio, and manage cache contents efficiently by replacing intermittently requested documents with new ones.
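
One way to picture a popularity-aware policy is to bias LRU eviction by per-site request counts, as in the minimal sketch below; the candidate-window mechanism and all identifiers are assumptions made for illustration, not the paper's actual policy.

```python
from collections import OrderedDict, defaultdict
from urllib.parse import urlsplit

class SitePopularityCache:
    """Sketch of LRU eviction biased by site popularity: among the least
    recently used documents, those from unpopular sites are evicted first."""

    def __init__(self, max_docs, candidates=4):
        self.max_docs = max_docs
        self.candidates = candidates        # size of the LRU tail window
        self.docs = OrderedDict()           # url -> document, LRU order
        self.site_hits = defaultdict(int)   # host -> total request count

    def get(self, url):
        self.site_hits[urlsplit(url).netloc] += 1   # count every request
        if url in self.docs:
            self.docs.move_to_end(url)      # refresh recency on a hit
            return self.docs[url]
        return None                         # miss: caller fetches and puts

    def put(self, url, document):
        if len(self.docs) >= self.max_docs:
            # Inspect the k least-recent documents and evict the one
            # whose site has drawn the fewest requests overall.
            tail = list(self.docs)[:self.candidates]
            victim = min(tail, key=lambda u: self.site_hits[urlsplit(u).netloc])
            del self.docs[victim]
        self.docs[url] = document
```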

A cache placement algorithm based on comprehensive utility in big data multi-access edge computing

  • Liu, Yanpei; Huang, Wei; Han, Li; Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.3892-3912 / 2021
  • The recent rapid growth of mobile network traffic places multi-access edge computing in an important position to reduce network load and improve network capacity and service quality. In contrast to traditional mobile cloud computing, multi-access edge computing includes a base-station cooperative cache layer and a user cooperative cache layer. Selecting the most appropriate cache content according to actual needs and determining the most appropriate location to optimize cache performance have emerged as serious issues in multi-access edge computing that must be solved urgently. For this reason, a cache placement algorithm based on comprehensive utility in big data multi-access edge computing (CPBCU) is proposed in this work. First, the cache value generated by cache placement is calculated from the cache capacity, data popularity, and node replacement rate. Second, the cache placement problem is modeled according to the cache value, data object acquisition, and replacement cost. The cache placement model is then transformed into a combinatorial optimization problem, and the cache objects are placed on appropriate data nodes using a tabu search algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, a multi-access edge computing experimental environment is built. Experimental results show that CPBCU provides a significant improvement in cache service rate, data response time, and number of replacements compared with other cache placement algorithms.
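
The combinatorial-optimization step can be pictured as a toy tabu search over placement vectors, as sketched below. The plain per-object value used here is a stand-in for CPBCU's comprehensive utility (cache value against acquisition and replacement cost), and every identifier is an assumption.

```python
def tabu_place(values, sizes, capacity, iters=200, tenure=7):
    """Toy tabu search for cache placement: pick which objects to cache
    so total value is maximized under a capacity limit."""
    n = len(values)
    current = [False] * n                   # placement vector for one node
    best, best_val = current[:], 0.0
    tabu = {}                               # object index -> expiry iteration

    def evaluate(sol):
        size = sum(s for s, used in zip(sizes, sol) if used)
        if size > capacity:
            return float("-inf")            # infeasible placements rejected
        return sum(v for v, used in zip(values, sol) if used)

    for it in range(iters):
        moves = [i for i in range(n) if tabu.get(i, -1) < it]
        if not moves:
            continue                        # every move is tabu this round

        def gain(i):                        # value after flipping object i
            neighbor = current[:]
            neighbor[i] = not neighbor[i]
            return evaluate(neighbor)

        i = max(moves, key=gain)            # best non-tabu single flip
        current[i] = not current[i]
        tabu[i] = it + tenure               # forbid undoing it for a while
        val = evaluate(current)
        if val > best_val:
            best, best_val = current[:], val
    return best, best_val

# Example: six objects, node capacity 10.
placement, value = tabu_place([8, 5, 3, 7, 2, 6], [5, 3, 2, 4, 1, 4], 10)
```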

Document Replacement Policy by Web Site Popularity (웹 사이트의 인기도에 의한 도큐먼트 교체정책)

  • Yoo, Hang-Suk; Chang, Tae-Mu
    • Journal of the Korea Society of Computer and Information / v.13 no.1 / pp.227-232 / 2008
  • General web caches temporarily store documents on a per-document basis. When a requested document exists in the cache, the web cache sends it to the user; when it does not, the web cache requests the document from the origin server, copies it into the cache, and returns it to the user. When the cache capacity is exceeded, the web cache uses a replacement policy to replace an existing document with a new one. Typical replacement policies include document-based LRU and LFU, and various other policies are used to replace documents in the cache effectively. However, these policies consider only the time and frequency of document requests, not the popularity of each web site. Building on replacement policies based on frequently requested documents, this paper presents document replacement policies that take the popularity of each web site into account, suited to current network environments, to enhance the cache hit ratio and manage cache contents efficiently by replacing intermittently requested documents with new ones.

Remote Cache Replacement Policy using Processor Locality in Multi-Processor System (다중 프로세서 시스템에서 프로세서 지역성을 이용한 원격 캐쉬 교체 정책)

  • Han, Sang Yoon; Kwak, Jong Wook; Jhang, Seong Tae; Jhon, Chu Shik
    • Journal of KIISE: Computer Systems and Theory / v.32 no.11_12 / pp.541-556 / 2005
  • Memory access latency has been a primary factor of performance degradation in both single-processor and multi-processor systems. Remote memory accesses incur far more overhead than local memory accesses, especially in distributed shared-memory systems. To address this problem, multi-level cache architectures that add a remote cache to the multi-processor system have been proposed. In this paper, we propose a new cache replacement policy that improves the performance of a multi-processor system with a remote cache. If the multi-level cache maintains the multi-level inclusion (MLI) property and uses the LRU (Least Recently Used) replacement policy, the LRU information of the higher-level cache (a processor cache) can differ from that of the lower-level cache (a remote cache). In this situation, replacing a remote cache line can force the eviction of a processor cache line that is still in use by the processor, which is a major source of performance degradation in the whole system. To alleviate this disadvantage of LRU replacement, the new policy analyzes each node's remote memory access pattern and uses this information to reduce the number of invalidations of useful cache lines in the higher-level cache. The new remote cache replacement policy improves performance by up to 3.5% and by 2.5% on average on the SPLASH-2 benchmarks, compared to the conventional LRU replacement policy.
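
The core observation, that a remote-cache victim should not be a line the processor cache still uses, can be sketched as an LRU variant that skips lines marked as resident above. The residency flag here is a simplification (a real design would track inclusion state or the node's remote access pattern), and all identifiers are assumptions.

```python
class RemoteCache:
    """Sketch of one set of a remote cache whose victim selection skips
    lines still resident in the processor cache above, so multi-level
    inclusion does not force an invalidation of a line in active use."""

    def __init__(self, ways):
        self.ways = ways
        self.lines = []                     # LRU order: index 0 is LRU

    def touch(self, tag, resident_above):
        for i, (t, _) in enumerate(self.lines):
            if t == tag:                    # hit: refresh recency
                self.lines.append((self.lines.pop(i)[0], resident_above))
                return
        if len(self.lines) >= self.ways:
            # Prefer the least-recent line NOT resident in the processor
            # cache; fall back to plain LRU if every line is resident.
            candidates = [i for i, (_, res) in enumerate(self.lines) if not res]
            self.lines.pop(candidates[0] if candidates else 0)
        self.lines.append((tag, resident_above))
```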

Keeping-ownership Cache Replacement Policies for Remote Access Caches of NUMA System (NUMA 시스템에서 소유권에 근거한 원격 캐시 교체 정책)

  • 신숭현; 곽종욱; 장성태; 전주식
    • Journal of KIISE: Computer Systems and Theory / v.31 no.8 / pp.473-486 / 2004
  • NUMA systems place a remote access cache (RAC) in each local node to reduce the overhead of repeated remote memory accesses. The RAC reduces memory latency and network traffic and thereby improves the performance of the multiprocessor system. Several cache replacement policies have been proposed in recent years, including policies for multiprocessor systems. In this paper, we propose a cache replacement policy based on cache line coherence information: cache lines that do not hold ownership are replaced in preference to lines that do. In this way, the overhead of transferring ownership is avoided and memory latency is reduced. We also propose the "Keeping-Ownership replacement policy with MRU (KOM)" and the "Keeping-Ownership replacement policy with Reference Bit (KORB)" to reduce the frequent replacement penalty of ownership-lacking cache lines. We compare and analyze these against LRU and Pseudo LRU (PLRU). Simulation shows that KOM outperforms PLRU by 25% and KORB outperforms PLRU by 13%. Although the hardware cost of KOM is very small, its performance is nearly equal to that of LRU.
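
A victim-selection sketch of the keeping-ownership idea follows: non-owned lines are evicted in preference to owned ones, avoiding ownership-transfer traffic. The MRU tie-break among non-owned lines is only one plausible reading of the abstract's KOM variant, not a confirmed detail, and all identifiers are assumptions.

```python
def kom_victim(set_lines):
    """Sketch of keeping-ownership victim selection. Each line in the
    set is a (tag, owned, last_used) tuple; the chosen line is evicted."""
    non_owned = [line for line in set_lines if not line[1]]
    if non_owned:
        # MRU among non-owned lines, so one cold ownership-lacking line
        # is not replaced again and again right after being fetched.
        return max(non_owned, key=lambda line: line[2])
    return min(set_lines, key=lambda line: line[2])   # LRU among owned
```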

Shelf-Life Time Based Cache Replacement Policy Suitable for Web Environment (웹 환경에 적합한 보관수명 기반 캐시 교체정책)

  • Han, Sungmin; Park, Heungsoon; Kwon, Taewook
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.6 / pp.1091-1101 / 2015
  • The cache mechanism, a long-standing research subject in computer science, is realized in networking practice in the form of web caching. Web caching offers advantages such as saving network resources and reducing response time, but its performance depends on the cache replacement policy; analyzing the environment in which a web cache operates is therefore essential for designing better replacement policies. In today's web environment, which has changed rapidly relative to the past, a new cache replacement policy is needed to reflect those changes. In this paper we identify some characteristics of the present-day web, propose a new cache replacement policy, and evaluate it.
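
A shelf-life policy can be sketched as expiry-first eviction, as below; treating shelf life as a per-object expiry timestamp (e.g. derived from HTTP freshness metadata) is an assumption, since the abstract does not give the paper's exact estimate.

```python
import time

def shelf_life_victim(entries, now=None):
    """Sketch of shelf-life-based eviction: every cached object carries
    an expiry timestamp. Objects already past their shelf life are
    evicted first; otherwise the object closest to expiring is chosen.
    `entries` maps a cache key to its expiry timestamp."""
    now = time.time() if now is None else now
    expired = [key for key, expiry in entries.items() if expiry <= now]
    if expired:
        return expired[0]                   # any stale object may go first
    return min(entries, key=entries.get)    # soonest-to-expire otherwise
```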

Dynamic Cache Partitioning Strategy for Efficient Buffer Cache Management (효율적인 버퍼 캐시 관리를 위한 동적 캐시 분할 블록교체 기법)

  • 진재선; 허의남; 추현승
    • Journal of the Korea Society for Simulation / v.12 no.2 / pp.35-44 / 2003
  • The effectiveness of buffer cache replacement algorithms is critical to the performance of I/O systems. In this paper, we propose a block replacement scheme based on the degree of inter-reference gap (DIG) that retains the merits of Least Recently Used (LRU) replacement, such as simple implementation and a good cache hit ratio (CHR) for general reference patterns, while improving the CHR further. In the proposed scheme, cache blocks with low DIGs are distinguished from blocks with high DIGs, and the replacement victim is selected among the high-DIG blocks, as in the Low Inter-reference Recency Set (LIRS) scheme. By in effect partitioning the cache memory dynamically based on DIGs, the CHR is improved. Trace-driven simulation is employed to verify the superiority of the DIG-based scheme and shows that performance improves by up to about 175% over the LRU scheme and 3% over the LIRS scheme on the same traces.
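
The DIG scheme's effect of dynamically partitioning the cache can be sketched as follows: blocks are tagged with their latest inter-reference gap, and the victim is taken LRU-first from the high-gap (cold) side, echoing LIRS. The threshold and all identifiers are illustrative assumptions.

```python
class DIGCache:
    """Sketch of DIG-based replacement: blocks with small inter-reference
    gaps are treated as hot and protected; the victim is chosen LRU-first
    among high-gap (cold) blocks."""

    def __init__(self, capacity, dig_threshold=64):
        self.capacity = capacity
        self.threshold = dig_threshold      # hot/cold boundary (assumed)
        self.clock = 0                      # virtual time in accesses
        self.blocks = {}                    # key -> (last_access, gap)

    def access(self, key):
        self.clock += 1
        if key in self.blocks:
            last, _ = self.blocks[key]
            self.blocks[key] = (self.clock, self.clock - last)
            return True                     # hit: gap refreshed
        if len(self.blocks) >= self.capacity:
            cold = {k: v for k, v in self.blocks.items()
                    if v[1] >= self.threshold} or self.blocks
            victim = min(cold, key=lambda k: cold[k][0])   # LRU among cold
            del self.blocks[victim]
        # A new block's gap is unknown, so it starts on the cold side.
        self.blocks[key] = (self.clock, self.threshold)
        return False                        # miss
```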

Efficient Document Replacement Policy by Web Site Popularity

  • Han, Jun-Tak
    • International Journal of Contents / v.3 no.1 / pp.14-18 / 2007
  • Typical replacement policies include document-based LRU and LFU, and various other policies are used to replace documents in the cache effectively. However, these replacement policies consider only the time and frequency of document requests, not the popularity of each web site. In this paper, we present document replacement policies that take the popularity of each web site into account, suited to modern network environments, to enhance the hit ratio and manage cache contents efficiently by replacing intermittently requested documents with new ones.

A New Cache Replacement Policy for Improving Last Level Cache Performance (라스트 레벨 캐쉬 성능 향상을 위한 캐쉬 교체 기법 연구)

  • Do, Cong Thuan; Son, Dong Oh; Kim, Jong Myon; Kim, Cheol Hong
    • Journal of KIISE / v.41 no.11 / pp.871-877 / 2014
  • Cache replacement algorithms have been developed to reduce miss counts. In modern processors, the performance gap between the processor and main memory keeps increasing, giving cache replacement policies an ever more important role. The Least Recently Used (LRU) policy is one of the most common policies used in modern processors. However, recent research has shown that the performance gap between LRU and the theoretical optimal replacement algorithm (OPT) is large, and although LRU replacement has repeatedly proven adequate, the OPT/LRU gap continues to widen as cache associativity grows. In this study, we observed a potential opportunity to improve cache performance within the existing LRU mechanism. We propose a method that enhances the LRU replacement algorithm based on the proportion of accesses among the lines in a cache set during the period between two successive replacements, which determines the final replacement decision. Our experimental results reveal that the proposed method reduces the average miss rate of the baseline 512KB L2 cache by 15 percent compared to conventional LRU. In addition, the performance of a processor using the proposed cache replacement policy improves by 4.7 percent over LRU, on average.
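
The proposed refinement can be pictured as a guarded LRU: before evicting the least recently used line, check what share of the set's accesses it drew since the last replacement, and skip it if that share is still significant. The 5% threshold and all identifiers below are illustrative assumptions.

```python
def choose_victim(set_lines, min_share=0.05):
    """Sketch of access-proportion-guarded LRU. `set_lines` is ordered
    LRU-first as (tag, accesses_since_last_replacement) pairs; the
    returned tag is the line to evict."""
    total = sum(count for _, count in set_lines) or 1
    for tag, count in set_lines:            # scan from LRU toward MRU
        if count / total < min_share:
            return tag                      # cold enough to evict
    return set_lines[0][0]                  # all lines warm: plain LRU
```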