• Title/Summary/Keyword: dual-cache


Design of Push Agent Model Using Dual Cache for Increasing Hit-Ratio of Data Search

  • Kim Kwang-jong;Ko Hyun;Kim Young-ja;Lee Yon-sik
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.153-166 / 2005
  • Existing single-cache structures show differing hit ratios depending on the replacement strategy used, and an improved cache structure is needed to reduce network traffic and provide a higher hit ratio. This paper therefore designs a push agent model that uses a dual cache to increase the hit ratio, reducing the server overload and network traffic caused by repeated requests for the same persistent information. The model adopts a dual cache structure that performs gradual cache replacement across two cache storage spaces. We also present new cache replacement techniques and algorithms that update and delete data based on the Log(Size)+LRU, LFU, and PLC replacement strategies for effective data search in the cache, and we evaluate the performance of the dual-cache push agent model through experiments.

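A minimal sketch of the Log(Size)+LRU replacement idea named in this abstract, reconstructed from the abstract alone: evict from the largest log2(size) class first and break ties in least-recently-used order. The class name, capacity model, and tie-break details are assumptions, not the paper's code.

```python
import math
from collections import OrderedDict

class LogSizeLRUCache:
    """Illustrative Log(Size)+LRU replacement (not the paper's code)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # key -> size, kept in LRU -> MRU order

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # a hit refreshes recency
            return True
        return False

    def put(self, key, size):
        if key in self.entries:
            self.used -= self.entries.pop(key)
        while self.used + size > self.capacity and self.entries:
            # Evict from the largest log2(size) class first; OrderedDict
            # iterates LRU-first, so max() falls on the least recently
            # used entry within that class.
            victim = max(self.entries,
                         key=lambda k: int(math.log2(self.entries[k])))
            self.used -= self.entries.pop(victim)
        self.entries[key] = size
        self.used += size

cache = LogSizeLRUCache(capacity_bytes=100)
cache.put('small-doc', 16)
cache.put('big-doc', 64)
cache.get('big-doc')        # recently touched, but in the largest size class
cache.put('new-doc', 40)    # evicts 'big-doc': size class outweighs recency
```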

Dual Cache Architecture for Low Cost and High Performance

  • Lee, Jung-Hoon;Park, Gi-Ho;Kim, Shin-Dug
    • ETRI Journal / v.25 no.5 / pp.275-287 / 2003
  • We present a high performance cache structure with a hardware prefetching mechanism that enhances exploitation of spatial and temporal locality. Temporal locality is exploited by selectively moving small blocks into the direct-mapped cache after monitoring their activity in the spatial buffer. Spatial locality is enhanced by intelligently prefetching a neighboring block when a spatial buffer hit occurs. We show that the prefetch operation is highly accurate: over 90% of all prefetches generated are for blocks that are subsequently accessed. Our results show that the system enables the cache size to be reduced by a factor of four to eight relative to a conventional direct-mapped cache while maintaining similar performance.

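A rough trace-driven sketch of the structure described above: a direct-mapped cache of small blocks beside a fully associative spatial buffer of large blocks, with neighbor-block prefetch on a buffer hit and promotion of re-touched small blocks. All sizes and the promotion threshold are invented for illustration; the paper's monitoring hardware is more elaborate.

```python
from collections import OrderedDict

SMALL, LARGE = 8, 32          # small/large block sizes in words (assumed)
DM_SETS, SB_ENTRIES = 64, 8   # capacities (assumed)

dm_cache = {}                 # set index -> tag (direct-mapped, small blocks)
spatial = OrderedDict()       # large-block number -> touched small blocks

def fill(large_blk):
    spatial[large_blk] = set()
    if len(spatial) > SB_ENTRIES:
        spatial.popitem(last=False)          # evict the LRU large block

def prefetch(large_blk):
    if large_blk not in spatial:
        fill(large_blk)

def access(addr):
    """Classify one word access as a 'dm' hit, 'sb' hit, or 'miss'."""
    small_blk, large_blk = addr // SMALL, addr // LARGE
    idx, tag = small_blk % DM_SETS, small_blk // DM_SETS
    if dm_cache.get(idx) == tag:
        return 'dm'
    if large_blk in spatial:
        spatial.move_to_end(large_blk)       # refresh recency
        spatial[large_blk].add(small_blk)
        prefetch(large_blk + 1)              # neighbor prefetch on a buffer hit
        if len(spatial[large_blk]) > 1:      # re-touched: promote to DM cache
            dm_cache[idx] = tag
        return 'sb'
    fill(large_blk)
    return 'miss'

print([access(a) for a in (0, 8, 0, 0)])    # ['miss', 'sb', 'sb', 'dm']
```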

Design of A Media Processor Equipped with Dual Cache

  • Moon, Hyun-Ju;Jeon, Joong-Nam;Kim, Suk-Il
    • Journal of KIISE:Computer Systems and Theory / v.29 no.10 / pp.573-581 / 2002
  • In this paper, we propose a media processor with a dual-cache architecture, composed of a multimedia data cache and a general-purpose data cache, to prevent the performance degradation caused by memory delay. In the proposed architecture, multimedia data referenced by subword instructions are loaded into the multimedia data cache, and all remaining data are loaded into the general-purpose data cache. We also use a multi-block prefetching scheme that fetches two consecutive data blocks into the cache at a time to exploit the locality of multimedia data. Experimental results on MPEG and JPEG benchmark programs show that the proposed architecture yields better performance than a processor equipped with a single data cache.
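A hedged sketch of the two mechanisms this abstract describes, routing loads by instruction type and fetching two consecutive blocks at a time for multimedia data; the block size, cache organization, and routing test are placeholders, not the processor's design.

```python
BLOCK = 16                    # words per cache block (assumed)
mm_cache, gp_cache = {}, {}   # multimedia / general-purpose data caches

def load(addr, is_subword_insn):
    """Route a load to the cache matching the issuing instruction type."""
    cache = mm_cache if is_subword_insn else gp_cache
    blk = addr // BLOCK
    hit = blk in cache
    cache[blk] = True
    if is_subword_insn:        # two consecutive blocks are fetched at a time
        cache[blk + 1] = True  # to exploit the streaming locality of media data
    return hit

print(load(0, True), load(16, True))   # False True: block 1 was prefetched
```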

A Hybrid Prefix Caching Scheme for Efficient IP Address Lookup

  • Kim, Jinsoo;Kim, Junghwan
    • Journal of the Korea Society of Computer and Information / v.20 no.12 / pp.45-52 / 2015
  • We propose a hybrid prefix caching scheme to enable high-speed IP address lookup. For correct lookup, the address ranges of the prefixes loaded in a prefix cache must not overlap, so every non-leaf prefix needs to be expanded to remove the overlap. A shorter expanded prefix is preferable because it covers a wider address range with a single cache entry. Our scheme exploits the advantages of two dynamic prefix expansion techniques: bounded prefix expansion and bitmap-based prefix expansion. Whereas bounded prefix expansion uses a single bound value, the proposed scheme uses dual bound values and, guided by bitmap information, associates them flexibly with several subtries rather than with fixed subtries. We evaluate the proposed scheme in terms of the average length of the expanded prefixes and the cache miss ratio. The experimental results show that it achieves a lower cache miss ratio than previous schemes, including both bounded prefix expansion and bitmap-based expansion, irrespective of cache size.
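The overlap problem motivating expansion can be made concrete with a small sketch: a cached short prefix that covers an uncached longer prefix could answer with the wrong next hop, so non-leaf prefixes are expanded until their children no longer cover more specific table entries. The level-by-level expansion below is illustrative only; the paper instead bounds expansion with dual bound values guided by bitmap information.

```python
def covers(p, q):
    """True if prefix p (e.g. '1*') covers prefix q's address range."""
    return q.rstrip('*').startswith(p.rstrip('*'))

def expand(prefix, table):
    """One level of expansion: children of a non-leaf prefix that do not
    cover any more-specific entry remaining in the table."""
    base = prefix.rstrip('*')
    children = [base + '0*', base + '1*']
    return [c for c in children
            if not any(covers(c, q) and q != c for q in table)]

table = ['1*', '10*', '101*']   # '1*' covers '10*', so '1*' is non-leaf
print(expand('1*', table))      # ['11*']: cacheable without any overlap
```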

A Timestamp Tree-based Cache Invalidation Report Scheme in Mobile Environments

  • Jung, Sung-Won;Lee, Hak-Joo
    • Journal of KIISE:Databases / v.34 no.3 / pp.217-231 / 2007
  • In mobile computing environments, frequent disconnection leads directly to the client cache consistency problem, and invalidation reports have been studied to solve it. However, existing invalidation report structures grow in size and lose cache efficiency as the amount of data, or of updated data, increases. They also force a client to check its whole cache, since they do not support selective listening. This paper proposes TTCI (Timestamp Tree-based Cache Invalidation scheme), an invalidation report structure that solves these problems and improves efficiency. TTCI is built from the timestamps of updated data, composing a timestamp tree and listing the data IDs in update order. Using it, each client can identify exactly the information invalidated since its own disconnection point and so increase its cache utilization ratio. Moreover, the tree structure enables selective listening, reducing the client's resource consumption. To verify the efficiency of the TTCI scheme, we compared it experimentally with DRCI (Dual-Report Cache Invalidation), an existing technique.
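The core timestamp idea can be sketched briefly (with the tree simplified to a sorted list for illustration): because updates are recorded in timestamp order, a client that disconnected at time T can locate exactly the suffix of items updated after T and invalidate only those, instead of dropping its whole cache.

```python
import bisect

updates = [(3, 'a'), (7, 'c'), (9, 'b'), (12, 'a')]   # (timestamp, data id)

def invalidated_since(t_disconnect):
    """IDs of data updated after the client's disconnection time."""
    timestamps = [t for t, _ in updates]
    i = bisect.bisect_right(timestamps, t_disconnect)  # first update after T
    return {data_id for _, data_id in updates[i:]}

print(invalidated_since(7))   # {'a', 'b'}: only items updated after time 7
```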

Impact Evaluation of DDoS Attacks on DNS Cache Server Using Queuing Model

  • Wang, Zheng;Tseng, Shian-Shyong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.4 / pp.895-909 / 2013
  • Distributed Denial-of-Service (DDoS) attacks on name servers of the Domain Name System (DNS) threaten to disrupt this critical service. This paper studies the vulnerability of the cache server to flooding DNS query traffic. Because the cache server provides the resolution service, incoming DNS requests, including massive attack traffic, are held in its waiting queue; each request stays there until the corresponding response returns from the authoritative server or times out. The victim cache server is thus overloaded by the pounding traffic and eventually goes down. The impact of such attacks is analyzed via a model of the queuing processes in both the cache server and the authoritative server. Some specific limits hold for this practical dual queuing process, such as the bounded sojourn time in the cache server's queue and the independence of the two queuing processes. Analytical results are presented to evaluate the impact of DDoS attacks on the cache server, and numerical results are provided for further analysis.
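A back-of-the-envelope illustration of why the bounded sojourn matters: with requests held until a response or timeout, Little's law N = λW gives the expected number of resident requests, which a flooding rate can push past the waiting-queue capacity. All numbers below are invented for illustration.

```python
normal_rate = 200.0    # legitimate queries per second (assumed)
attack_rate = 5000.0   # flooding queries per second (assumed)
mean_wait   = 0.5      # mean sojourn time in seconds, capped by the timeout
queue_limit = 2048     # cache server's waiting-queue capacity (assumed)

resident = (normal_rate + attack_rate) * mean_wait   # Little's law: N = lambda * W
print(f"expected resident requests: {resident:.0f}")
print("queue overflows" if resident > queue_limit else "queue holds")
```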

Cache Optimization on Hot-Point Proxy Caching Using Weighted-Rank Cache Replacement Policy

  • Ponnusamy, S.P.;Karthikeyan, E.
    • ETRI Journal / v.35 no.4 / pp.687-696 / 2013
  • The development of proxy caching is essential in the area of video-on-demand (VoD) to meet users' expectations. VoD requires high bandwidth and creates heavy traffic due to the nature of media, so many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object to meet users' viewing expectations without delay and to provide interactive playback. If caching continues unchecked, the entire cache space is eventually exhausted; hence, the proxy server must apply cache replacement policies to replace existing objects and allocate space for incoming ones. Many cache replacement policies have been developed around parameters such as recency, access frequency, cost of retrieval, and object size. In this paper, the Weighted-Rank Cache replacement Policy (WRCP) is proposed. This policy uses parameters such as access frequency, aging, and mean access gap ratio, together with functions of object size and retrieval cost. The WRCP applies our previously developed proxy caching model, Hot-Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
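A hedged sketch of what a weighted-rank score might look like: each cached object is ranked from weighted parameters (access frequency, aging, mean access gap) scaled by retrieval cost and size, and the lowest-ranked object is evicted first. The weights and formula below are placeholders, not the published WRCP.

```python
def rank(freq, age, mean_gap, cost, size,
         w_freq=0.5, w_age=0.3, w_gap=0.2):
    """Placeholder weighted-rank score: frequently used objects rank high;
    old, rarely revisited, cheap-to-refetch, large objects rank low."""
    score = w_freq * freq - w_age * age - w_gap * mean_gap
    return score * cost / size

# object -> (frequency, age, mean access gap, retrieval cost, size)
objs = {'a': (10, 2.0, 1.5, 4.0, 8.0), 'b': (3, 9.0, 6.0, 1.0, 2.0)}
victim = min(objs, key=lambda k: rank(*objs[k]))
print("evict:", victim)   # 'b': old, rarely used, cheap to re-fetch
```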

An Interference Matrix Based Approach to Bounding Worst-Case Inter-Thread Cache Interferences and WCET for Multi-Core Processors

  • Yan, Jun;Zhang, Wei
    • Journal of Computing Science and Engineering / v.5 no.2 / pp.131-140 / 2011
  • Different cores typically share the last-level cache in a multi-core processor, so threads running on different cores may interfere with each other. A multi-core worst-case execution time (WCET) analyzer must therefore be able to safely and accurately estimate the worst-case inter-thread cache interference. Current WCET analysis techniques, which mainly focus on single-thread analysis, do not support this. This paper presents a novel approach to analyzing the worst-case cache interference and bounding the WCET of threads running on multi-core processors with shared L2 instruction caches. We propose an interference matrix to model inter-thread interference, from which the worst-case inter-thread cache interference can be calculated. Our experiments indicate that the proposed approach gives a worst-case bound with less than 1% overestimation for the fib-call benchmark, and 16.4% overestimation on average, for threads running on a dual-core processor with a shared L2 cache. On average, our approach reduces WCET overestimation by 20.0% compared to prior work.
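A toy sketch of the interference-matrix idea: entry M[i][j] counts the shared-L2 sets that threads i and j both map lines into, which upper-bounds how many of thread i's lines thread j can evict. The set counts below are invented; a real analysis derives them from each thread's address trace.

```python
# thread id -> shared-L2 sets its lines map into (invented set numbers)
sets_used = {0: {1, 4, 5, 9}, 1: {4, 9, 12}}

def interference_matrix(sets_used):
    ids = sorted(sets_used)
    return [[len(sets_used[i] & sets_used[j]) if i != j else 0
             for j in ids] for i in ids]

M = interference_matrix(sets_used)
print(M)            # [[0, 2], [2, 0]]: the threads contend on sets 4 and 9
print(sum(M[0]))    # pessimistic bound on thread 0's interference misses
```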

The Efficient Buffer Size in A Dual Flash Memory Structure with Buffer System

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.6 no.6 / pp.383-391 / 2011
  • As cache memory research has shown, separating the instruction and data caches yields higher performance, as in Harvard-architecture CPUs. In this paper, we show the efficiency of a buffer system for instruction and data flash storage media. We analyze the characteristics of the data and instruction flash and evaluate their performance. Finally, we propose the best buffer structure, with an optimal block size and buffer size, for the instruction and data flash.
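The buffer-sizing trade-off being measured can be illustrated with a simple sweep: compute the average access time implied by each candidate size's hit ratio and look for diminishing returns. Hit ratios, latencies, and the 30% selection rule below are invented for illustration.

```python
T_BUF, T_FLASH = 0.05, 25.0   # access latencies in microseconds (assumed)
candidates = {16: 0.72, 32: 0.85, 64: 0.91, 128: 0.93}  # KB -> hit ratio

def amat(hit_ratio):
    """Average access time: hits served by the buffer, misses by flash."""
    return hit_ratio * T_BUF + (1 - hit_ratio) * T_FLASH

for kb, h in candidates.items():
    print(f"{kb:4d} KB -> AMAT {amat(h):5.2f} us")

# pick the smallest buffer within 30% of the best AMAT (arbitrary rule)
best = min(amat(h) for h in candidates.values())
efficient = min(kb for kb, h in candidates.items() if amat(h) <= 1.3 * best)
print("efficient size under this model:", efficient, "KB")   # 64 KB
```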

Data Cache System based on the Selective Bank Algorithm for Embedded System

  • Jung, Bo-Sung;Lee, Jung-Hoon
    • The KIPS Transactions:PartA / v.16A no.2 / pp.69-78 / 2009
  • One of the most effective ways to improve cache performance is to exploit both the temporal and spatial locality exhibited by a program's execution characteristics. In this paper, we present a high-performance, low-power cache structure with a bank selection mechanism that enhances the exploitation of spatial and temporal locality. The proposed cache system consists of two parts: a main direct-mapped cache with a small block size and a fully associative buffer with a large block size that is a multiple of the small block size. The main direct-mapped cache is constructed as two banks for low power consumption and stores small blocks selected from the fully associative buffer by the proposed bank selection algorithm. Using the bank selection algorithm and three state bits, we selectively extend the lifetime of small blocks with high temporal locality by storing them in the main direct-mapped cache. This approach effectively reduces conflict misses and cache pollution at the same time. According to the simulation results, the average miss ratio, compared with the victim and STAS caches of the same size, is improved by about 23% and 32%, respectively, for MiBench applications. The average memory access time is reduced by about 14% and 18% compared with the victim and STAS caches, respectively. Energy consumption of the proposed cache is also around 10% lower than that of the other cache systems we examine.
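A minimal sketch of the promotion path this abstract describes: a sub-block leaving the fully associative buffer is promoted into one of the two direct-mapped banks only if its activity (standing in for the three state bits) indicates temporal locality, and the bank is chosen so that a recently promoted block is not evicted. The selection rule here is a simplified stand-in for the paper's algorithm.

```python
NUM_SETS = 128
banks = [dict(), dict()]   # two direct-mapped banks: set index -> (tag, hot)

def promote(small_blk, touched_count):
    """Promote a buffer victim into a bank if its activity warrants it."""
    if touched_count < 2:          # low temporal locality: do not pollute
        return
    idx, tag = small_blk % NUM_SETS, small_blk // NUM_SETS
    # choose the bank whose resident entry is absent or marked cold
    bank = min(banks, key=lambda b: b.get(idx, (None, False))[1])
    bank[idx] = (tag, True)        # install the hot block
    for b in banks:                # age the sibling entry in the other bank
        if b is not bank and idx in b:
            b[idx] = (b[idx][0], False)

promote(5, touched_count=3)                # lands in bank 0
promote(5 + NUM_SETS, touched_count=3)     # same set: bank 1, bank 0 aged
print(banks[0][5], banks[1][5])            # (0, False) (1, True)
```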