Title/Summary/Keyword: Cache Cost


A Cache Management Technique Based on Eviction Cost Estimation for Heterogeneous Storage Devices (이기종 저장장치를 위한 제거 비용 평가 기반 캐시 관리 기법)

  • Park, SeJin;Park, ChanIk
    • IEMEK Journal of Embedded Systems and Applications / v.7 no.3 / pp.129-134 / 2012
  • The objective of a cache is to reduce I/O accesses to the physical storage device so that users can access their data faster. Traditionally, the most important metric for cache performance is the hit ratio, so a replacement policy that keeps the hit ratio high is regarded as a good one. However, when the underlying storage devices are heterogeneous, the miss latency differs from device to device: even with a high hit ratio, if misses frequently fall on a low-performance disk, the user still experiences low performance. To address this problem, we propose cache management based on eviction cost estimation. In our results, eviction-cost-estimation-based cache management achieves a 10~30% throughput improvement over LRU cache management.
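A hedged sketch of the idea, not the authors' implementation: an LRU-style cache whose victim selection weighs staleness against an estimated eviction cost, here taken to be the miss latency of the block's backing device. The class name, the latency table, and the scoring formula are illustrative assumptions.

```python
import time

class CostAwareCache:
    """LRU-like cache that penalizes evicting blocks backed by slow devices.

    Illustrative sketch: eviction cost = miss latency of the block's
    backing device; the victim is the block with the lowest keep-value.
    """

    def __init__(self, capacity, device_miss_latency):
        self.capacity = capacity
        self.latency = device_miss_latency  # e.g. {"ssd": 0.1, "hdd": 8.0} (ms, assumed)
        self.blocks = {}                    # key -> (device, last_access_time)

    def access(self, key, device):
        now = time.monotonic()
        if key not in self.blocks and len(self.blocks) >= self.capacity:
            self._evict(now)
        self.blocks[key] = (device, now)

    def _evict(self, now):
        # Keep-value: stale blocks are cheap to drop, but a stale block on
        # a slow device may still be worth keeping over a fresher SSD block.
        def keep_value(item):
            key, (device, last) = item
            staleness = now - last
            return self.latency[device] / (staleness + 1e-9)
        victim = min(self.blocks.items(), key=keep_value)[0]
        del self.blocks[victim]
```

With the assumed latencies above, an HDD-backed block must be roughly 80 times staler than an SSD-backed block before it becomes the victim, which is the bias toward cheap evictions the abstract describes.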

Cache Management Method for Query Forwarding Optimization in the Grid Database (그리드 데이터베이스에서 질의 전달 최적화를 위한 캐쉬 관리 기법)

  • Shin, Soong-Sun;Jang, Yong-Il;Lee, Soon-Jo;Bae, Hae-Young
    • Journal of Korea Multimedia Society / v.10 no.1 / pp.13-25 / 2007
  • A cache is used to optimize query forwarding in a Grid database: frequently used data from the meta database is cached to decrease network transmission cost. The existing cache management method has an unbalanced-resource problem because it does not manage the data replicated at each node, and it increases network cost through cache misses. Moreover, when data is modified but the cache is not updated, queries can be forwarded to the wrong nodes, and the same stale cache can exist at other nodes. It is therefore necessary to solve the existing method's problems of unbalanced replica usage and the network cost of cache misses. In this paper, a cache management method for query forwarding optimization is proposed. The proposed method manages caches through a cache manager which, to optimize query forwarding, caches data from the less-loaded replicated nodes. Query processing cost and network cost decrease because wrong query forwarding is reduced. The performance evaluation shows that the proposed method performs better than the existing one.
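A minimal sketch of the forwarding idea under stated assumptions: the cache manager resolves a miss by consulting replica metadata, picking the least-loaded replica node, and caching that mapping; a data modification invalidates the mapping so queries are not forwarded to stale nodes. All names and the load metric are hypothetical.

```python
class GridCacheManager:
    """Hypothetical sketch of a query-forwarding cache manager."""

    def __init__(self, replica_map, load_of):
        self.replica_map = replica_map  # data_id -> set of replica nodes
        self.load_of = load_of          # callable: node -> current load
        self.cache = {}                 # data_id -> node to forward queries to

    def forward_target(self, data_id):
        node = self.cache.get(data_id)
        if node is None:                # miss: consult meta information
            replicas = self.replica_map[data_id]
            node = min(replicas, key=self.load_of)  # less-loaded replica
            self.cache[data_id] = node
        return node

    def invalidate(self, data_id):
        # On modification, drop the stale mapping so queries are not
        # forwarded to a node whose copy is out of date.
        self.cache.pop(data_id, None)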


A cache placement algorithm based on comprehensive utility in big data multi-access edge computing

  • Liu, Yanpei;Huang, Wei;Han, Li;Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.3892-3912 / 2021
  • The recent rapid growth of mobile network traffic places multi-access edge computing in an important position to reduce network load and improve network capacity and service quality. In contrast to traditional mobile cloud computing, multi-access edge computing includes a base-station cooperative cache layer and a user cooperative cache layer. Selecting the most appropriate cache content according to actual needs, and determining the most appropriate location for it to optimize cache performance, have emerged as serious issues in multi-access edge computing that must be solved urgently. For this reason, a cache placement algorithm based on comprehensive utility in big data multi-access edge computing (CPBCU) is proposed in this work. Firstly, the cache value generated by cache placement is calculated using the cache capacity, data popularity, and node replacement rate. Secondly, the cache placement problem is modeled according to the cache value, data object acquisition, and replacement cost; the model is transformed into a combinatorial optimization problem, and the cache objects are placed on appropriate data nodes using a tabu search algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, a multi-access edge computing experimental environment is built. Experimental results show that CPBCU provides a significant improvement in cache service rate, data response time, and replacement number compared with other cache placement algorithms.
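The placement step lends itself to a compact illustration. Below is a hedged sketch, not the paper's exact model: a cache value built from data popularity and node replacement rate, maximized under per-node capacity with a simplified tabu search (moves that undo a recent move are forbidden). The value formula, move set, and parameters are assumptions, and the sketch assumes capacities admit the initial random placement.

```python
import random

# Hypothetical cache-value model: popular objects are worth more, and a
# node that replaces its contents often shortens an object's useful life.
def cache_value(obj, node, popularity, repl_rate):
    return popularity[obj] * (1.0 - repl_rate[node])

def total_value(placement, popularity, repl_rate):
    return sum(cache_value(o, n, popularity, repl_rate)
               for o, n in placement.items())

def tabu_place(objects, nodes, capacity, popularity, repl_rate,
               iters=500, tabu_len=10, seed=0):
    rng = random.Random(seed)
    placement = {o: rng.choice(nodes) for o in objects}
    best, best_val = dict(placement), total_value(placement, popularity, repl_rate)
    tabu = []                                    # recently undone moves
    for _ in range(iters):
        o = rng.choice(objects)
        load = {}
        for p in placement.values():             # current objects per node
            load[p] = load.get(p, 0) + 1
        moves = [n for n in nodes
                 if n != placement[o] and (o, n) not in tabu
                 and load.get(n, 0) < capacity[n]]
        if not moves:
            continue
        n = max(moves, key=lambda c: cache_value(o, c, popularity, repl_rate))
        tabu.append((o, placement[o]))           # forbid moving straight back
        if len(tabu) > tabu_len:
            tabu.pop(0)
        placement[o] = n
        val = total_value(placement, popularity, repl_rate)
        if val > best_val:                       # remember the best placement seen
            best, best_val = dict(placement), val
    return best, best_val
```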

Efficient Cooperative Caching Algorithm for Distributed File Systems (분산 파일시스템을 위한 효율적인 협력캐쉬 알고리즘)

  • 박새미;이석재;유재수
    • Proceedings of the Korea Contents Association Conference / 2003.11a / pp.234-244 / 2003
  • In distributed file systems, a cooperative caching algorithm, which shares the data cached at each node, is used to reduce the expense of disk accesses. Cooperative caching increases the cache hit ratio and decreases disk accesses by sharing cache information across the distributed system, effectively making the cache larger. Recently, several cooperative caching algorithms have decreased message costs by using approximate cache information and increased the hit ratio by managing local and global cache fields dynamically. They also increase the overall hit ratio of the cache field by sending a replaced block to an idle node on cache replacement, so that the block remains in the cache field. However, wrong approximate information degrades performance, consistency maintenance requires expensive message exchanges, and the cost of managing each node's age information to choose the idle node increases. In this thesis, we propose a cooperative caching algorithm that maintains correct cache information while minimizing the maintenance cost for consistency and the management cost for cache age information. We show the superiority of our algorithm through a performance evaluation.
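As a rough illustration of the scheme (the data structures and the age metric are assumptions, not the thesis's design): each node tracks its peers, a node's "age" is the staleness of its cache contents, and a locally evicted block is forwarded to the most idle peer so it stays in the global cache field.

```python
class CoopNode:
    """Sketch of a cooperative-caching node with age-based forwarding."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.cache = {}        # block_id -> (data, last_access_time)
        self.peers = []        # other CoopNode instances in the field

    def age(self):
        # A node's age is the last-access time of its oldest cached block;
        # a small value means the cache has gone stale, i.e. the node is
        # idle. An empty cache (default 0.0) counts as the most idle.
        return min((t for _, t in self.cache.values()), default=0.0)

    def evict(self, block_id, clock):
        data, _ = self.cache.pop(block_id)
        if not self.peers:
            return                                  # no cooperative field
        idle = min(self.peers, key=CoopNode.age)    # peer with the stalest cache
        if len(idle.cache) < idle.capacity:
            idle.cache[block_id] = (data, clock)    # block stays in the cache field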


An Efficient Global Buffer Management Algorithm for SAN Environments (SAN 환경을 위한 효율적인 전역버퍼 관리 알고리즘)

  • 이석재;박새미;송석일;유재수;이장선
    • The Journal of the Korea Contents Association / v.4 no.3 / pp.71-80 / 2004
  • In distributed file systems, a cooperative caching algorithm, which shares the data cached at each node, is used to reduce the expense of disk accesses. Cooperative caching increases the cache hit ratio and decreases disk accesses by sharing cache information across the distributed system, effectively making the cache larger. Recently, several cooperative caching algorithms have decreased message costs by using approximate cache information and increased the hit ratio by managing local and global cache fields dynamically. They also increase the overall hit ratio of the cache field by sending a replaced buffer to an idle node on buffer replacement, so that the replaced data remains in the cache field. However, wrong approximate information degrades performance, consistency maintenance requires expensive message exchanges, and the cost of managing each node's age information to choose the idle node increases. In this paper, we propose a cooperative caching algorithm that maintains correct cache information while minimizing the maintenance cost for consistency and the management cost for buffer age information. We show the superiority of our algorithm through a performance evaluation.


Cache Optimization on Hot-Point Proxy Caching Using Weighted-Rank Cache Replacement Policy

  • Ponnusamy, S.P.;Karthikeyan, E.
    • ETRI Journal / v.35 no.4 / pp.687-696 / 2013
  • The development of proxy caching is essential in the area of video-on-demand (VoD) to meet users' expectations. VoD requires high bandwidth and creates heavy traffic due to the nature of media. Many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object to meet the viewing expectations of users without delay and provides interactive playback. If caching continues unchecked, the entire cache space is eventually exhausted, so the proxy server must apply cache replacement policies to replace existing objects and allocate cache space for incoming objects. Researchers have developed many cache replacement policies that consider parameters such as recency, access frequency, cost of retrieval, and object size. In this paper, the Weighted-Rank Cache replacement Policy (WRCP) is proposed. This policy uses access frequency, aging, and mean access gap ratio as parameters, together with functions of object size and retrieval cost. The WRCP applies our previously developed proxy caching model, Hot-Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
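A hedged sketch of what a weighted-rank score could look like; the weights and the exact combination below are illustrative assumptions, not the WRCP formula. Locality terms (frequency, aging, mean access gap) are scaled by retrieval cost per byte, and the object with the lowest rank is evicted.

```python
# Illustrative weighted-rank score: frequently accessed, recently useful
# objects that are expensive to re-fetch per byte rank high and survive.
def weighted_rank(freq, age, mean_gap, size, retrieval_cost,
                  w_freq=1.0, w_age=0.5, w_gap=0.5):
    locality = (w_freq * freq) - (w_age * age) - (w_gap * mean_gap)
    return locality * (retrieval_cost / size)

def choose_victim(objects):
    # objects: dict name -> (freq, age, mean_gap, size, retrieval_cost)
    return min(objects, key=lambda k: weighted_rank(*objects[k]))
```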

Increasing a Mobile Client's Cache Reusability in Wireless Client-Server Environments (무선 클라이언트-서버 환경에서 이동 클라이언트의 캐시 데이타 재사용율 향상기법)

  • Yi Song-Yi
    • Journal of KIISE:Computer Systems and Theory / v.33 no.5 / pp.282-296 / 2006
  • In a wireless client-server environment, data broadcasting is an efficient data dissemination method: a server broadcasts data, and some of the broadcast data are cached in a mobile client's cache to save the narrow communication bandwidth, the limited resources, and data access time. The server also broadcasts invalidation reports (IRs) to maintain consistency between server data and a client's cached data. Most existing work on the cache consistency problem simply purges the entire cache when the disconnection lasts long enough to miss a certain number (the window size) of IRs. This paper presents a cache invalidation method that increases mobile clients' cache reusability after a long disconnection. Instead of simply dropping the entire cache regardless of its consistency, a client compares the cost of purging all the data with the cost of a selective purge. If dropping the entire cache is more expensive, the client keeps the cache and selectively purges inconsistent data, using uplink bandwidth for validation requests. The simulation results show that this scheme increases cache reusability because it effectively considers the update rates and broadcast frequencies of cached data when estimating the cost of cache maintenance.
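The cost comparison at the heart of the scheme can be sketched as follows (all cost terms and the stale-probability estimate are illustrative assumptions): keep the cache only if validating every entry over the uplink, plus re-fetching the expected stale fraction, costs less than re-fetching everything.

```python
# Hedged sketch of the keep-or-purge decision after a long disconnection.
def should_keep_cache(cached_items, uplink_cost_per_item,
                      refetch_cost_per_item, stale_prob):
    n = len(cached_items)
    drop_all_cost = n * refetch_cost_per_item
    # Selective purge: validate every item over the uplink, then re-fetch
    # only the fraction expected to have been updated while disconnected.
    selective_cost = (n * uplink_cost_per_item
                      + stale_prob * n * refetch_cost_per_item)
    return selective_cost < drop_all_cost
```

With rarely updated, frequently broadcast data (small stale_prob, cheap validation), keeping the cache wins, which matches the abstract's observation about update rates and broadcast frequencies.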

A Locality-Aware Write Filter Cache for Energy Reduction of STTRAM-Based L1 Data Cache

  • Kong, Joonho
    • JSTS:Journal of Semiconductor Technology and Science / v.16 no.1 / pp.80-90 / 2016
  • Thanks to superior leakage energy efficiency compared to SRAM cells, STTRAM cells are considered a promising alternative for the memory elements in on-chip caches. However, the main disadvantage of STTRAM cells is their high write energy and latency. In this paper, we propose a low-cost write filter (WF) cache which resides between the load/store queue and an STTRAM-based L1 data cache. To maximize the efficiency of the WF cache, the line allocation and access policies are optimized for reducing the energy consumption of the STTRAM-based L1 data cache. By efficiently filtering the write operations to the STTRAM-based L1 data cache, our proposed WF cache reduces its energy consumption by up to 43.0% compared to the case without the WF cache. In addition, thanks to the fast hit latency of the WF cache, it slightly improves performance, by 0.2%.
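A behavioral sketch of a write filter in this spirit, with sizes and policy as assumptions rather than the paper's design: a tiny LRU buffer absorbs and coalesces stores, so only evictions pay the expensive STTRAM write.

```python
from collections import OrderedDict

class WriteFilterCache:
    """Sketch of a small write filter in front of an STTRAM L1 data cache."""

    def __init__(self, lines, l1_write):
        self.lines = lines
        self.buf = OrderedDict()   # line_addr -> data, kept in LRU order
        self.l1_write = l1_write   # callable performing the STTRAM write
        self.filtered = 0          # stores absorbed without touching STTRAM

    def store(self, line_addr, data):
        if line_addr in self.buf:
            self.filtered += 1     # coalesced write: STTRAM untouched
        elif len(self.buf) >= self.lines:
            victim, vdata = self.buf.popitem(last=False)  # LRU eviction
            self.l1_write(victim, vdata)                  # one STTRAM write
        self.buf[line_addr] = data
        self.buf.move_to_end(line_addr)
```

Every store that hits an address already in the buffer counts as filtered: without the WF cache it would have been a full STTRAM write.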

Design and analytical evaluation of a fuzzy proxy caching for wireless internet

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society / v.20 no.6 / pp.1177-1190 / 2009
  • In this paper, we propose a fuzzy proxy cache scheme for caching web documents in mobile base stations. In this scheme, a mobile cache model is used to facilitate data caching and data replication. Using the proposed cache scheme, each individual proxy in a base station makes cache decisions based solely on its local knowledge of the global cache state, so the entire wireless proxy cache system can be effectively managed without centralized control. To improve the performance of proxy caching, the proposed scheme predicts the direction of movement of mobile hosts and applies different cache methods to neighboring proxy servers according to fuzzy-logic control rules driven by the membership degree of the mobile host. The performance of our cache scheme is evaluated analytically in terms of average response delay and average energy cost, and is compared with that of other mobile cache schemes.
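A minimal, assumption-laden sketch of one such fuzzy rule: the membership degree that a host is moving toward a neighboring cell selects how aggressively that neighbor's proxy prepares the host's documents. The membership shape and thresholds are illustrative, not the paper's rule base.

```python
def toward_membership(heading_deg, cell_bearing_deg):
    """Degree in [0, 1] that the host is heading toward the neighbor cell."""
    diff = abs((heading_deg - cell_bearing_deg + 180) % 360 - 180)
    return max(0.0, 1.0 - diff / 90.0)  # 1 when aligned, 0 beyond 90 degrees

def cache_action(mu):
    # Hypothetical defuzzified actions keyed on the membership degree.
    if mu > 0.7:
        return "replicate"   # likely handoff target: copy hot documents now
    if mu > 0.3:
        return "hint"        # send metadata only; fetch on demand
    return "none"            # unlikely handoff target: do nothing
```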


Enhancing GPU Performance by Efficient Hardware-Based and Hybrid L1 Data Cache Bypassing

  • Huangfu, Yijie;Zhang, Wei
    • Journal of Computing Science and Engineering / v.11 no.2 / pp.69-77 / 2017
  • Recent GPUs have adopted cache memory to benefit general-purpose GPU (GPGPU) programs. However, unlike CPU programs, GPGPU programs typically have considerably less temporal/spatial locality. Moreover, the L1 data cache is used by many threads that access a data size typically considerably larger than the L1 cache, making it critical to bypass L1 data cache intelligently to enhance GPU cache performance. In this paper, we examine GPU cache access behavior and propose a simple hardware-based GPU cache bypassing method that can be applied to GPU applications without recompiling programs. Moreover, we introduce a hybrid method that integrates static profiling information and hardware-based bypassing to further enhance performance. Our experimental results reveal that hardware-based cache bypassing can boost performance for most benchmarks, and the hybrid method can achieve performance comparable to state-of-the-art compiler-based bypassing with considerably less profiling cost.
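One way to picture a hardware-based bypass mechanism of this kind, as a hedged sketch (table size, indexing, and thresholds are assumptions): a small table of saturating counters, indexed by load PC, learns whether lines fetched by that instruction get reused before eviction; loads whose counter stays low bypass the L1 data cache.

```python
class BypassPredictor:
    """Sketch of a PC-indexed reuse predictor for L1D bypassing."""

    def __init__(self, entries=256, threshold=0):
        self.entries = entries
        self.threshold = threshold
        # Start optimistic: cache everything at first, learn to bypass.
        self.counter = [1] * entries

    def _idx(self, pc):
        return (pc >> 2) % self.entries

    def should_bypass(self, pc):
        return self.counter[self._idx(pc)] <= self.threshold

    def on_reuse(self, pc):
        # A line fetched by this PC was hit again before eviction.
        i = self._idx(pc)
        self.counter[i] = min(self.counter[i] + 1, 3)

    def on_dead_eviction(self, pc):
        # A line fetched by this PC was evicted without any reuse.
        i = self._idx(pc)
        self.counter[i] = max(self.counter[i] - 1, -3)
```

Because the decision depends only on the load PC observed at run time, such a mechanism needs no recompilation, matching the hardware-based approach the abstract describes.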