• Title/Summary/Keyword: cache scheme

Cooperative Content Caching and Distribution in Dense Networks

  • Kabir, Asif
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5323-5343 / 2018
  • Mobile applications and social networks tend to enhance the need for high-quality content access. To address the rapidly growing demand for data services in mobile networks, it is necessary to develop efficient content caching and distribution techniques, aiming to significantly reduce redundant content transmission and thus improve content delivery efficiency. In this article, we develop an optimal cooperative content caching and distribution policy, in which a geographical cluster model is designed for content retrieval across collaborative small cell base stations (SBSs) and for replacement within the caching framework. Furthermore, we divide the SBS storage space into two equal parts: the first is a local content cache, the other a global content cache. We propose an algorithm to minimize the content caching delay, transmission cost, and backhaul bottleneck at the edge of the network. Simulation results indicate that the proposed neighbor-SBS cooperative caching scheme brings a substantial improvement in content availability and cache storage capacity at the network edge, in comparison with conventional cache placement approaches.
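A minimal C sketch of the two-part lookup the abstract describes: a request is served from the SBS's local partition, then its global partition, then a neighbor SBS in the cluster, and only then over the backhaul. The abstract gives no data structures, so all names, the partition size, and the cluster size are illustrative assumptions.

```c
#include <stdbool.h>
#include <stddef.h>

#define PART_SLOTS 64            /* illustrative: each half of the SBS store */

typedef struct { int id; } slot_t;   /* cached content id, -1 if empty */

typedef struct sbs {
    slot_t local[PART_SLOTS];    /* first half: locally popular content */
    slot_t global[PART_SLOTS];   /* second half: cluster-wide content   */
    struct sbs *neighbors[8];    /* collaborating SBSs in the cluster   */
    size_t n_neighbors;
} sbs_t;

static bool in_part(const slot_t *part, int id) {
    for (size_t i = 0; i < PART_SLOTS; i++)
        if (part[i].id == id) return true;
    return false;
}

/* Resolve a request: local hit, global hit, neighbor hit, else backhaul. */
int resolve(const sbs_t *sbs, int id) {
    if (in_part(sbs->local, id))  return 0;   /* cheapest: own local part */
    if (in_part(sbs->global, id)) return 1;   /* own global part          */
    for (size_t i = 0; i < sbs->n_neighbors; i++)
        if (in_part(sbs->neighbors[i]->local, id) ||
            in_part(sbs->neighbors[i]->global, id))
            return 2;                         /* one-hop neighbor transfer */
    return 3;                                 /* fetch over the backhaul   */
}
```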

A Non-Cacheable Address Designating Scheme in MMU-less Embedded Microprocessor Systems

  • Lim, Yong-Seok;Suh, Woon-Sik;Kim, Suki
    • Proceedings of the IEEK Conference / 2002.06e / pp.235-238 / 2002
  • This paper proposes a novel scheme for designating non-cacheable memory addresses in embedded systems with multi-master architectures and no Memory Management Unit (MMU). As a solution to the data coherency problem between external memories and the cache memory, we propose a cache masking scheme that allocates the most significant address bit, unused in the 32-bit address system, as an indicator bit designating a non-cacheable address. Since this scheme enables non-cacheable designation for every individual address, it is simpler in hardware terms and allows a more flexible non-cacheable area size.
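A minimal sketch of the indicator-bit idea, assuming bit 31 is the unused most significant address bit; the macro and helper names are hypothetical, not from the paper.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mask for the unused most significant address bit (bit 31). */
#define NON_CACHEABLE_BIT 0x80000000u

/* Mark an address as non-cacheable by setting the indicator bit. */
static inline uint32_t mark_non_cacheable(uint32_t addr) {
    return addr | NON_CACHEABLE_BIT;
}

/* The cache controller bypasses the cache when the indicator bit is set. */
static inline bool is_cacheable(uint32_t addr) {
    return (addr & NON_CACHEABLE_BIT) == 0;
}

/* Strip the indicator bit before driving the physical address bus. */
static inline uint32_t physical_addr(uint32_t addr) {
    return addr & ~NON_CACHEABLE_BIT;
}
```

Because the bit is stripped before the address reaches the bus, the indicator consumes no physical address space, which is presumably why the unused MSB was chosen.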

Analysis of a Cache Management Protocol Using a Back-shifting Approach (백쉬프팅 기법을 이용한 캐쉬 유지 규약의 분석)

  • Cho, Sung-Ho
    • The Journal of the Korea Contents Association / v.5 no.6 / pp.49-56 / 2005
  • To reduce server bottlenecks in client-server computing, each client may have its own cache for later reuse. The pessimistic approach to cache management protocols leads to unnecessary waits, because a transaction cannot commit until it has obtained all requested locks. The optimistic approach, in turn, tends to cause needless aborts. This paper suggests an efficient optimistic protocol that overcomes such shortcomings, and presents a simulation-based analysis of its performance against other well-known protocols. The analysis was executed under a Zipf workload, which represents the popularity distribution on the Web. The simulation experiments show that our scheme performs as well as or better than the other schemes, with low overhead.
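The abstract does not detail the back-shifting mechanism itself, so the following is only a generic C sketch of the optimistic pattern it builds on: transactions read cached copies without locking and are validated at commit time against the versions they read. All types and names are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>

#define N_OBJECTS 128

/* Toy server-side version catalog; a real system consults its own tables. */
static long committed_version[N_OBJECTS];

typedef struct { int obj_id; long version_read; } read_entry_t;

/* Commit-time validation: the transaction aborts if any cached object it
 * read was updated by another committed transaction in the meantime.     */
bool validate_and_commit(const read_entry_t *read_set, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (committed_version[read_set[i].obj_id] != read_set[i].version_read)
            return false;          /* stale read: abort, refresh the cache */
    /* ...install the transaction's writes and bump their versions here... */
    return true;
}
```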

Performance Impact of Large File Transfer on Web Proxy Caching: A Case Study in a High Bandwidth Campus Network Environment

  • Kim, Hyun-Chul;Lee, Dong-Man;Chon, Kil-Nam;Jang, Beak-Cheol;Kwon, Tae-Kyoung;Choi, Yang-Hee
    • Journal of Communications and Networks / v.12 no.1 / pp.52-66 / 2010
  • Since large objects consume substantial resources, web proxy caching incurs a fundamental trade-off between performance (i.e., hit ratio and latency) and overhead (i.e., resource usage) in caching and relaying large objects to users. This paper investigates how, and to what extent, the current dedicated-server-based web proxy caching scheme is affected by large file transfers in a high bandwidth campus network environment, using a series of trace-based performance analyses and profiling of the resource components of our experimental Squid proxy cache server. Large file transfers often overwhelm the cache server, causing a bottleneck in the web network by saturating the cache server's network bandwidth. Due to requests for large objects, the response times for concurrently requested small objects increase by a factor as high as a few million in the worst cases. We argue that this cache bandwidth bottleneck problem stems from fundamental limitations of the current centralized web proxy caching model, which scales poorly with a limited amount of dedicated resources. This is a serious threat to the viability of the current web proxy caching model, particularly in a high bandwidth access network, since it leads to sporadic disconnections of the downstream access network from the global web network. We propose a peer-to-peer cooperative web caching scheme to address the cache bandwidth bottleneck problem, and show that it caches and delivers large objects efficiently and cost-effectively, without generating significant overhead for participating peers.
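The abstract leaves the peer-selection mechanism unspecified; below is a purely illustrative C sketch in which objects above a size threshold are delegated to peer caches chosen by hashing the object URL, while small objects stay on the dedicated proxy. The threshold, peer count, and hash choice are assumptions, not the paper's design.

```c
#include <stddef.h>
#include <stdint.h>

#define LARGE_OBJECT_BYTES (10u * 1024 * 1024)  /* illustrative threshold */
#define N_PEERS 16                              /* participating peers    */

/* FNV-1a hash: maps an object URL to a stable peer index. */
static uint32_t fnv1a(const char *s) {
    uint32_t h = 2166136261u;
    while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
    return h;
}

/* Returns -1 to serve from the dedicated proxy, else a peer index, so
 * large transfers cannot saturate the central cache server's bandwidth. */
int pick_cache_node(const char *url, size_t object_bytes) {
    if (object_bytes < LARGE_OBJECT_BYTES)
        return -1;                      /* small: dedicated proxy handles it */
    return (int)(fnv1a(url) % N_PEERS); /* large: delegate to a peer cache  */
}
```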

An Efficient Buffer Cache Management Scheme for Heterogeneous Storage Environments (이기종 저장 장치 환경을 위한 버퍼 캐시 관리 기법)

  • Lee, Se-Hwan;Koh, Kern;Bahn, Hyo-Kyung
    • Journal of KIISE:Computer Systems and Theory / v.37 no.5 / pp.285-291 / 2010
  • Flash memory has many good features, such as small size, shock resistance, and low power consumption, but its cost is still too high for it to substitute for the hard disk entirely. Recently, some mobile devices, such as laptops, attempt to use flash memory and a hard disk together to take advantage of the merits of both. However, existing operating systems (OSs) are not optimized for such heterogeneous storage media. This paper presents a new buffer cache management scheme. First, we allocate buffer cache space according to the access patterns of block references and the characteristics of the storage media. Second, we prefetch data blocks selectively according to their location and access patterns. Third, we move destaged data from the buffer cache to the hard disk or flash memory, considering the access patterns of block references. Trace-driven simulation shows that the proposed schemes enhance the buffer cache hit ratio by up to 29.9% and reduce the total I/O elapsed time by up to 49.5%.
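A minimal sketch of the destaging step under assumed heuristics (the paper's actual criteria are not given in the abstract): read-mostly random blocks go to flash, whose random reads are fast, while sequential or write-heavy blocks go to the hard disk to avoid flash's slower, wear-limited writes. The types and decision rules are hypothetical.

```c
#include <stdbool.h>

typedef enum { DEV_HDD, DEV_FLASH } device_t;

typedef struct {
    unsigned reads, writes;   /* reference counts observed in the cache */
    bool     sequential;      /* part of a detected sequential run?     */
} block_stats_t;

/* Choose where a destaged block should live, based on its access pattern. */
device_t destage_target(const block_stats_t *b) {
    if (b->sequential)        return DEV_HDD;   /* HDD streams well         */
    if (b->writes > b->reads) return DEV_HDD;   /* avoid flash write cost   */
    return DEV_FLASH;       /* random, read-mostly: flash reads are fast  */
}
```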

2Q-CFP: A Client Cache Management Scheme for Broadcast-based Information Systems (2Q-CFP: 방송에 기초한 정보 시스템을 위한 클라이언트 캐쉬 관리 기법)

  • Kwon, Hyeok-Min
    • Journal of KIISE:Databases / v.30 no.6 / pp.561-572 / 2003
  • Broadcast-based data delivery has attracted a lot of attention as an efficient way of disseminating data to very large client populations. The main motivation for broadcast-based information systems (BBISs) is that the number of clients they serve can grow arbitrarily large without any effect on performance. The performance of BBISs depends mainly on client caching strategies and on data broadcast scheduling mechanisms. This paper addresses the former issue and proposes a new client cache management scheme, named 2Q-CFP, that is suited to BBISs. It also evaluates the performance of 2Q-CFP on the basis of a simulation model. The performance results indicate that the 2Q-CFP scheme shows superior performance to GRAY, LRU, and CF in average response time.

Client Cache Management Scheme For Data Broadcasting Environments (LRU-CFP: 데이터 방송 환경을 위한 클라이언트 캐쉬 관리 기법)

  • Kwon, Hyeok-Min
    • The KIPS Transactions:PartD / v.10D no.6 / pp.961-970 / 2003
  • In data broadcasting environments, the server periodically broadcasts data items on the broadcast channel. When a client wants to access a data item, it must monitor the broadcast channel and wait for the desired item to arrive. Client data caching is a very effective technique for reducing the time spent waiting for the desired item to be broadcast. This paper proposes a new client cache management scheme, named LRU-CFP, to reduce this waiting time, and evaluates its performance on the basis of a simulation model. The performance results indicate that the LRU-CFP scheme shows superior performance to LRU, GRAY, and CF in average response time.
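To see why the cache hit ratio dominates response time in such systems, assume a flat broadcast schedule (an assumption, not stated in the abstract): with N equally sized items, a cache miss waits half a broadcast cycle on average, so the expected wait is (1 - h) * N/2 slots for hit ratio h. A small arithmetic sketch:

```c
#include <stdio.h>

/* Average wait (in slots) for a flat broadcast of n_items, given a client
 * cache hit ratio h: hits cost ~0, misses wait half a broadcast cycle.   */
static double avg_wait_slots(unsigned n_items, double h) {
    return (1.0 - h) * (double)n_items / 2.0;
}

int main(void) {
    /* Illustrative: 1000 items; raising the hit ratio from 0.2 to 0.4
     * cuts the average wait from 400 to 300 slots.                     */
    printf("h=0.2: %.0f slots\n", avg_wait_slots(1000, 0.2));
    printf("h=0.4: %.0f slots\n", avg_wait_slots(1000, 0.4));
    return 0;
}
```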

Filter Cache Predictor Using Mode Selection Bit (모드 선택 비트를 사용한 필터 캐시 예측기)

  • Kwak, Jong-Wook
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.5 / pp.1-13 / 2009
  • The filter cache has been introduced as a solution for reducing cache power consumption: it cuts power by more than 50%, but compromises more than 20% of performance. To minimize this performance degradation, predictive filter caches have been proposed. In this paper, we review previous filter cache predictors and analyze their problems. Having identified the main causes of prediction misses in previous filter cache schemes, we propose a new prediction policy to resolve them. In our scheme, reference bit entries called MSBs (mode selection bits) are inserted into the filter cache and the BTB to adaptively control filter cache accesses. In simulations, we use a modified SimpleScalar simulator with the MiBench benchmark programs to verify the proposed filter cache. The results show an average 5% performance improvement over previous schemes.
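A minimal, hypothetical sketch of the mode selection bit (MSB) idea as the abstract describes it: each BTB entry carries a bit recording whether fetches after this branch tended to hit the filter cache, and that bit steers the next fetch to the filter cache or directly to the L1 cache. The field names, table size, and one-bit update rule are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define BTB_ENTRIES 512

typedef struct {
    uint32_t tag;        /* branch PC                                    */
    uint32_t target;     /* predicted target address                     */
    bool     msb;        /* mode selection bit: true -> use filter cache */
} btb_entry_t;

static btb_entry_t btb[BTB_ENTRIES];

/* On a predicted-taken branch, the MSB decides the fetch source. */
bool fetch_from_filter_cache(uint32_t branch_pc) {
    btb_entry_t *e = &btb[(branch_pc >> 2) % BTB_ENTRIES];
    return (e->tag == branch_pc) ? e->msb : true; /* default: try filter */
}

/* Train the bit with the observed outcome of the fetch. */
void update_msb(uint32_t branch_pc, bool filter_cache_hit) {
    btb_entry_t *e = &btb[(branch_pc >> 2) % BTB_ENTRIES];
    if (e->tag == branch_pc)
        e->msb = filter_cache_hit; /* miss -> go straight to L1 next time */
}
```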

Improving Reliability of the Last Level Cache with Low Energy and Low Area Overhead (낮은 에너지 소모와 공간 오버헤드의 Last Level Cache 신뢰성 향상 기법)

  • Kim, Young-Ung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.2 / pp.35-41 / 2012
  • Due to technology scaling, more transistors can be placed in the cache memories of a processor. However, the highly integrated transistors also make processors more vulnerable to soft errors, so the reliability of the cache memory must be considered seriously at the design stage. In this paper, we propose a reliability-improving technique that can be realized with low energy and low area overheads. Simulation experiments show that the proposed scheme achieves over a 95.4% protection rate against soft errors with only 0.26% performance degradation, and requires only 2.96% extra energy consumption.

An efficient caching scheme for replacing a dirty block in software RAID file systems (소프트웨어 RAID 파일 시스템에서 오손 블록 교체시에 효율적인 캐슁 기법)

  • Kim, Jong-Hoon;Noh, Sam-Hyuk;Won, Yoo-Hun
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.7 / pp.1599-1606 / 1997
  • The software RAID file system is defined as a system that distributes data redundantly across an array of disks attached to workstations connected by a high-speed network, providing high throughput as well as higher availability. In this paper, we present an efficient caching scheme for the software RAID file system. The performance of this scheme is compared to two other schemes previously proposed for conventional file systems and adapted to the software RAID file system. As in hardware RAID systems, small writes turn out to be the performance bottleneck in software RAID file systems. To tackle this problem, we logically divide the cache into two levels. By keeping old data and parity values in the second-level cache, we are able to eliminate much of the extra disk reads and writes necessary for the write-back of dirty blocks. Using trace-driven simulations, we show that the proposed scheme improves performance in both average response time and average system busy time.
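The small-write problem the abstract targets comes from the RAID parity update rule: overwriting one data block normally requires first reading the old data and the old parity. A minimal sketch assuming single-parity (RAID-5 style) striping; keeping the old data and parity values in the second-level cache turns the two extra disk reads into cache hits. Names are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_BYTES 4096

/* RAID-5 small-write parity update:
 *   new_parity = old_parity XOR old_data XOR new_data
 * Without a cache, old_data and old_parity each cost a disk read; keeping
 * them in the second-level cache turns the update into pure computation. */
void update_parity(uint8_t       *new_parity,
                   const uint8_t *old_parity,
                   const uint8_t *old_data,
                   const uint8_t *new_data) {
    for (size_t i = 0; i < BLOCK_BYTES; i++)
        new_parity[i] = old_parity[i] ^ old_data[i] ^ new_data[i];
}
```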
