• Cache Replacement Scheme


Cache Table Management for Effective Label Switching (효율적인 레이블 스위칭을 위한 캐쉬 테이블 관리)

  • Kim, Nam-Gi; Yoon, Hyun-Soo
    • Journal of KIISE: Information Networking, v.28 no.2, pp.251-261, 2001
  • The traffic on the Internet has been growing exponentially for some time, and this growth is beginning to stress current-day routers. Switching technology, however, offers much higher performance, so the label switching network, which combines IP routing with switching technology, has emerged. In data-driven label switching in particular, flow classification and cache table management are needed. Flow classification divides packets into switching and non-switching packets, while cache table management maintains the cache table that holds the information used for flow classification and label switching. Cache table management affects the performance of a label switching network as much as flow classification does: a bigger cache table lets more packets be switched and keeps setup cost low, but the cache is constrained by local router resources. There is therefore a need to study cache replacement schemes for efficient cache table management under user-characterized Internet traffic. In this paper, we propose several cache replacement schemes for label switching networks. First, without a limit on the router's switching capacity, we examine the FIFO (First In First Out), LFC (Least Flow Count), and LRU (Least Recently Used) schemes and propose the priority LRU and weighted priority LRU schemes. Second, with a limit on switching capacity, we examine the LFC-LFC, LFC-LRU, LRU-LFC, and LRU-LRU schemes and propose the LRU-weighted LRU scheme. Without the limitation, the weighted priority LRU scheme showed the best performance; with the limitation, the LRU-weighted LRU scheme did.

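A minimal sketch of the LRU baseline that the entry above extends with its priority and weighted variants; the flow-entry format (a flow id mapped to a label) and the capacity handling are illustrative assumptions, not the paper's exact design.

```python
from collections import OrderedDict

class LRUCacheTable:
    """LRU-managed cache table for label-switching flow entries."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()  # flow_id -> label, least recently used first

    def lookup(self, flow_id):
        if flow_id not in self.table:
            return None                     # non-switching packet: falls back to IP routing
        self.table.move_to_end(flow_id)     # refresh recency on a hit
        return self.table[flow_id]          # switching packet: reuse the cached label

    def insert(self, flow_id, label):
        if flow_id in self.table:
            self.table.move_to_end(flow_id)
        elif len(self.table) >= self.capacity:
            self.table.popitem(last=False)  # evict the least recently used flow
        self.table[flow_id] = label
```

The priority and weighted priority variants would additionally rank entries by a per-flow priority or weight before falling back to recency.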

Design and evaluation of a fuzzy cooperative caching scheme for MANETs

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society, v.21 no.3, pp.605-619, 2010
  • Caching frequently accessed data in a multi-hop ad hoc environment is a technique that can improve data access performance and availability. Cooperative caching, which allows sharing and coordination of cached data among several clients, can further enhance the potential of caching techniques. In this paper, we propose a fuzzy cooperative caching scheme for mobile ad hoc networks. The cache management of the proposed scheme not only adaptively uses CacheData or CachePath based on data similarity and data utility, but also uses a replacement manager based on data profit. The proposed scheme additionally employs a prefetch manager: when the TTL of cached data expires, the prefetch manager evaluates the popularity index of the data, and if the popularity index exceeds a threshold the data is prefetched; otherwise, its space is released. The performance of the proposed scheme is evaluated analytically and compared to that of other cooperative caching schemes.
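
The prefetch-manager rule in the entry above fits in a few lines. The popularity-index formula and the threshold below are assumptions, since the abstract only states that the index is compared against a threshold when the TTL expires.

```python
from dataclasses import dataclass

@dataclass
class CachedItem:
    key: str
    access_count: int  # accesses observed while the item was cached
    lifetime: float    # seconds the item spent in the cache

def popularity_index(item: CachedItem) -> float:
    # One plausible definition: access rate over the item's cached lifetime.
    return item.access_count / max(item.lifetime, 1.0)

def on_ttl_expired(item: CachedItem, threshold: float = 0.1) -> str:
    # Popular data is refreshed ahead of demand; unpopular data frees its slot.
    return "prefetch" if popularity_index(item) > threshold else "release"
```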

Improving Performance of Internet by Using Hierarchical Proxy Cache (계층적 프록시 캐쉬를 이용한 인터넷 성능 향상 기법)

  • 이효일; 김종현
    • Journal of the Korea Society for Simulation, v.9 no.2, pp.1-14, 2000
  • Recently, as the construction of information infrastructure, including high-speed communication networks, has expanded remarkably, more varied information services have become available. The number of Internet users has thus increased rapidly, resulting in heavy load on Web servers and higher traffic on networks. These phenomena lengthen response times, which means worse quality of service. To solve such problems, much effort has gone into easing the bottleneck at the Web server, reducing network traffic, and shortening response times by caching frequently accessed information at proxy servers located near clients. Internet performance can be improved further by allowing clients to share information stored in proxy caches. In this paper, we simulate hierarchical proxy caches with a 3-level 4-ary tree structure using real web traces, and analyze cache hit ratios for various cache replacement policies and cache sizes when the delayed-store scheme is applied. According to the simulation results, the delayed-store scheme increases the remote cache hit ratio, which improves quality of service by shortening service response times.

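A sketch of the hierarchical lookup that the proxy-cache simulation above models: a miss at one level is retried at the parent, and only a miss at the root goes to the origin server. The immediate store-on-miss below is the baseline; the paper's delayed-store scheme defers when copies propagate to upper levels.

```python
def fetch_from_origin(url):
    return f"<content of {url}>"     # placeholder for a real web-server fetch

class ProxyNode:
    """One node of a hierarchical proxy cache (the paper uses a 3-level 4-ary tree)."""

    def __init__(self, parent=None):
        self.parent = parent
        self.cache = {}              # url -> cached object

    def get(self, url):
        node = self
        while node is not None:      # walk leaf -> root looking for a hit
            if url in node.cache:
                return node.cache[url]
            node = node.parent
        obj = fetch_from_origin(url) # miss at every level
        self.cache[url] = obj        # baseline: store at the leaf immediately
        return obj
```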

Advanced Victim Cache with Processor Reuse Information (프로세서의 재사용 정보를 이용하는 개선된 고성능 희생 캐쉬)

  • Kwak Jong Wook; Lee Hyunbae; Jhang Seong Tae; Jhon Chu Shik
    • Journal of KIISE: Computer Systems and Theory, v.31 no.12, pp.704-715, 2004
  • Recently, single and multiprocessor systems have used a hierarchical memory structure to reduce the time gap between processor clock rate and memory access time; in particular, a cache memory system includes two or three levels of caches to reduce this gap. One of the most important factors in the hierarchical memory system is the hit rate of the level 1 cache, because the level 1 cache interfaces directly with the processor, so a high level 1 hit rate is critical for system performance. A victim cache, another high-level cache, is also important for assisting the level 1 cache by reducing its conflict misses. In this paper, we propose an advanced high-level cache management scheme based on processor reuse information. The technique is a cache replacement policy that uses the frequency of the processor's memory accesses, letting the more frequently accessed addresses reside in the cache longer than the less frequently accessed ones. We simulate our policy using Augmint, an event-driven simulator, and analyze the simulation results. The results show that the modified processor reuse information scheme (LIVMR) outperforms level 1 with a simple victim cache (LIV) by up to 6.7%, and by 0.5% on average, and the performance benefit grows as the number of processors increases.
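
The replacement idea in the victim-cache entry above, frequently accessed addresses staying resident longer, can be sketched as frequency-ordered eviction. This illustrates the principle only; it is not the paper's exact LIVMR scheme.

```python
class VictimCache:
    """Victim cache that evicts the block with the lowest reuse frequency."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                        # block address -> access frequency

    def insert(self, addr, freq):
        """Insert a block evicted from L1 along with its observed access frequency."""
        if addr not in self.entries and len(self.entries) >= self.capacity:
            coldest = min(self.entries, key=self.entries.get)
            del self.entries[coldest]            # low-reuse blocks leave first
        self.entries[addr] = freq

    def probe(self, addr):
        """On an L1 miss, check the victim cache; a hit returns (and removes) the block."""
        return self.entries.pop(addr, None)
```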

A Neighbor Prefetching Scheme for a Hybrid Storage System (SSD 캐시를 위한 이웃 프리페칭 기법)

  • Baek, Sung Hoon
    • The Journal of Korean Institute of Next Generation Computing, v.14 no.5, pp.40-52, 2018
  • Solid state drive (SSD) cache technologies used as a second-tier cache between main memory and the hard disk drive (HDD) have been widely studied. An SSD cache requires a new prefetching scheme as well as cache replacement algorithms. This paper presents a prefetching scheme for a storage-class cache using an SSD. The scheme is designed for the storage-class cache and is based on long-term scheduling, in contrast to short-term prefetching into main memory. Traditional prefetching algorithms consider only reads, whereas the presented scheme considers both reads and writes. An experimental evaluation using a 14-day I/O trace shows hit rates of 2.3% to 17.8% with a 64 GB SSD and a 4 GiB prefetching size. The proposed prefetching scheme significantly improves the cache hit rate and can be easily implemented in storage-class cache systems.
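
Since the abstract above does not spell out the algorithm, here is one plausible reading of neighbor prefetching: when a block misses in the SSD cache, its neighboring blocks on the HDD are staged into the SSD as well. The device interfaces are hypothetical parameters.

```python
def neighbor_prefetch(block, span, read_hdd, write_ssd):
    """Stage the blocks surrounding a missed block into the SSD cache.

    block: logical block number that missed in the SSD tier
    span:  total number of neighboring blocks to prefetch
    read_hdd / write_ssd: callables standing in for the device layers
    """
    half = span // 2
    for b in range(block - half, block + half + 1):
        if b >= 0:
            write_ssd(b, read_hdd(b))  # copy the neighbor into the SSD tier
```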

Neighbor Cooperation Based In-Network Caching for Content-Centric Networking

  • Luo, Xi; An, Ying
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.5, pp.2398-2415, 2017
  • Content-Centric Networking (CCN) is a new Internet architecture with routing and caching centered on contents. Through its receiver-driven, connectionless communication model, CCN natively supports seamless node mobility and scalable content acquisition. In-network caching is one of the core technologies of CCN, and research on efficient caching schemes has become increasingly attractive. To address the unbalanced cache load distribution in some existing caching strategies, this paper presents a neighbor-cooperation-based in-network caching scheme. In this scheme, the node with the highest betweenness centrality on the content delivery path is selected as the central caching node, and its ego network is selected as the caching area. When the caching node lacks sufficient resources, part of its cached content is picked out and transferred to an appropriate neighbor, chosen by comprehensively considering factors such as available node cache, cache replacement rate, and link stability between nodes. Simulation results show that our scheme can effectively enhance the utilization of cache resources and improve the cache hit rate and average access cost.
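
The caching-node selection described above reduces to picking the path node with the highest betweenness centrality, with the ego network of that node as the caching area. A sketch using networkx, an assumed dependency not named by the paper:

```python
import networkx as nx

def select_caching_node(topology: nx.Graph, delivery_path: list):
    """Return the node on the content delivery path with the highest
    betweenness centrality in the topology."""
    centrality = nx.betweenness_centrality(topology)
    return max(delivery_path, key=lambda node: centrality[node])

def caching_area(topology: nx.Graph, caching_node):
    """The caching area is the ego network of the central caching node."""
    return nx.ego_graph(topology, caching_node)
```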

Dynamic Query Processing Using Description-Based Semantic Prefetching Scheme in Location-Based Services (위치 기반 서비스에서 서술 기반의 시멘틱 프리페칭 기법을 이용한 동적 질의 처리)

  • Kang, Sang-Won; Song, Ui-Sung
    • Journal of KIISE: Databases, v.34 no.5, pp.448-464, 2007
  • Location-Based Services (LBSs) provide query results according to the location of the client issuing the query. In LBS, techniques such as caching and prefetching are effective approaches to reducing data transmission from the server and query response time. However, they can lead to cache inefficiency and network overload due to the client's mobility and query patterns. To overcome these drawbacks, we propose a semantic prefetching (SP) scheme using the prefetching segment concept and improved cache replacement policies. When a mobile client enters a new service area, called a semantic prefetching area, the proposed scheme fetches the necessary semantic information from the server in advance. The mobile client maintains this information in its own cache for query processing of location-dependent data (LDD) in a mobile computing environment. The performance of the proposed scheme is investigated under various environmental variables, such as user mobility and query patterns, the distribution of LDD, and the cache replacement strategies applied. Simulation results show that the proposed scheme is more efficient than a well-known existing scheme for range queries and nearest-neighbor queries. In addition, applying the two query types dynamically to query processing further improves the performance of the proposed scheme.
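
A minimal sketch of the prefetch trigger described above, with all names (areas, segments, the server interface) invented for illustration rather than taken from the paper:

```python
class MobileClient:
    def __init__(self, server):
        self.server = server
        self.current_area = None
        self.cache = {}  # area id -> semantic segment used for LDD query processing

    def on_move(self, new_area):
        """Entering a new semantic prefetching area triggers the prefetch."""
        if new_area != self.current_area:
            if new_area not in self.cache:
                self.cache[new_area] = self.server.fetch_segment(new_area)
            self.current_area = new_area
```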

Low-power Buffer Cache Management for Mixed HDD and SSD Storage Systems (HDD와 SSD의 혼합형 저장 시스템을 위한 절전형 버퍼 캐쉬 관리)

  • Kang, Hyo-Jung; Park, Jun-Seok; Koh, Kern; Bahn, Hyo-Kyung
    • Journal of KIISE: Computing Practices and Letters, v.16 no.4, pp.462-466, 2010
  • A new buffer cache management scheme aimed at reducing power consumption in mixed HDD and NAND flash memory storage systems is presented. The proposed scheme reduces power consumption by considering the different energy consumption rates of the storage devices, the I/O operation type (read or write), and the reference potential of cached blocks in terms of both recency and frequency. Simulation shows that the proposed scheme reduces power consumption by 18.0% on average and by up to 58.9%.
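
The three factors the low-power scheme above weighs can be folded into a single eviction score; the weights and the formula below are assumptions for illustration, since the abstract does not give the actual cost model.

```python
from dataclasses import dataclass

@dataclass
class CachedBlock:
    device: str      # "SSD" or "HDD" backing device
    dirty: bool      # True if eviction requires a write-back
    frequency: int   # how often the block has been referenced
    age: float       # time since the last reference

def eviction_score(b: CachedBlock) -> float:
    """Lower score = better eviction candidate."""
    energy = 1.0 if b.device == "SSD" else 3.0  # re-fetching from HDD costs more power
    op = 2.0 if b.dirty else 1.0                # a write-back adds an extra device operation
    reuse = b.frequency / (1.0 + b.age)         # recency and frequency combined
    return energy * op * reuse
```

Replacement would then evict `min(blocks, key=eviction_score)`.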

A Popularity-driven Cache Management and its Performance Evaluation in Meta-search Engines (메타 검색 엔진을 위한 인기도 기반 캐쉬 관리 및 성능 평가)

  • Hong, Jin-Seon; Lee, Sang-Ho
    • Journal of KIISE: Databases, v.29 no.2, pp.148-157, 2002
  • Caching in meta-search engines can improve the response time of users' requests. We describe the cache scheme in our meta-search engine in terms of its architecture and operational flow. In particular, we propose a popularity-driven cache algorithm that uses the popularity of queries to determine which cached data to purge. The popularity is a value representing the normalized occurrence frequency of user queries. This paper presents how to collect popular queries and how to calculate query popularity. An empirical performance evaluation of popularity-driven caching against the traditional schemes (least recently used (LRU) and least frequently used (LFU)) has been carried out on a collection of real data. In almost all cases, the proposed replacement policy outperforms LRU and LFU.
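
The popularity definition in the entry above, a normalized occurrence frequency, and the purge rule it drives are simple enough to sketch directly; the helper names are illustrative.

```python
from collections import Counter

def query_popularity(query_log):
    """Popularity of each query: its occurrence count normalized by the
    total number of queries observed."""
    counts = Counter(query_log)
    total = sum(counts.values())
    return {q: c / total for q, c in counts.items()}

def choose_victim(cached_queries, popularity):
    """Purge the cached result whose query is least popular."""
    return min(cached_queries, key=lambda q: popularity.get(q, 0.0))
```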

Considering Data Reference Pattern in Buffer Cache for Continuous Media File System (연속미디어 파일 시스템의 버퍼 캐시에서 데이터 참조 유형의 고려)

  • Cho, Kyung-Woon; Ryu, Yeon-Seung; Koh, Kern
    • The KIPS Transactions: Part A, v.9A no.2, pp.163-170, 2002
  • Previous buffer cache schemes for continuous media file systems exploited only the sequentiality of continuous media accesses and did not consider looping references. However, in some video applications, such as foreign-language learning, a user marks a scene as a loop area and the application then automatically plays the scene back several times. In this paper, we propose a novel buffer cache scheme for continuous media file systems in which sequential and looping references coexist. The proposed scheme increases the cache hit ratio by detecting the reference pattern of each file and applying an appropriate replacement policy to it.
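
A sketch of the pattern detection the last entry relies on: repeated offsets in a file's recent access history indicate a looping reference, otherwise the stream is treated as sequential. The window size and policy names are illustrative assumptions, not the paper's.

```python
def classify_pattern(offsets, window=64):
    """Classify a file's recent block offsets as 'loop' or 'sequential'."""
    recent = offsets[-window:]
    if len(recent) != len(set(recent)):  # an offset seen twice implies a loop
        return "loop"
    return "sequential"

def pick_policy(pattern):
    # Sequential streams can discard blocks behind the play point, while
    # blocks inside a loop area are worth retaining until the loop ends.
    return "discard-behind" if pattern == "sequential" else "retain-loop"
```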