• Title/Summary/Keyword: Cache Allocation

A Proposal for Hit Ratio Improvement of a Microprocessor's Cache Memory (마이크로프로세서 캐쉬메모리의 적중률 개선을 위한 제안)

  • 조용훈;김정선
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.4B
    • /
    • pp.783-787
    • /
    • 2000
  • A microprocessor used as the CPU in state-of-the-art personal computers adopts a 256KB or 512KB L2 (Level 2) cache memory. This cache employs direct mapping, a 32B line size, and no write allocation. In this cache architecture, we can expect about a 2.5% hit-ratio improvement by using 8-way set-associative mapping instead of direct mapping, a 128B line size instead of 32B, and write allocation.
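
To make the comparison concrete, below is a minimal trace-driven sketch of the two configurations the abstract contrasts. The synthetic trace, LRU replacement within a set, and all sizes are illustrative assumptions, not the paper's measurement setup.

```python
# Minimal trace-driven hit-ratio sketch (illustrative: synthetic trace,
# LRU within a set; not the paper's simulator or workload).
import random
from collections import OrderedDict

def hit_ratio(trace, size_bytes, line_size, ways):
    num_sets = size_bytes // (line_size * ways)
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for addr in trace:
        line = addr // line_size
        s = sets[line % num_sets]
        if line in s:
            hits += 1
            s.move_to_end(line)          # refresh LRU position
        else:
            if len(s) >= ways:
                s.popitem(last=False)    # evict least recently used line
            s[line] = True
    return hits / len(trace)

# Synthetic mix of random and sequential accesses (illustrative only).
random.seed(0)
trace = [random.randrange(1 << 21) for _ in range(100000)]
trace += [i * 4 for i in range(100000)]

print("direct-mapped, 32B lines:", hit_ratio(trace, 512 * 1024, 32, 1))
print("8-way SA, 128B lines    :", hit_ratio(trace, 512 * 1024, 128, 8))
```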

User Centric Cache Allocation Schemes in Infrastructure Wireless Mesh Networks (인프라스트럭처 무선 메쉬 네트워크에서 사용자 중심 캐싱 할당 기법)

  • Jeon, Seung Hyun
    • Journal of Industrial Convergence
    • /
    • v.17 no.4
    • /
    • pp.131-137
    • /
    • 2019
  • In infrastructure wireless mesh networks (WMNs), to improve mobile users' satisfaction for a given cache hit ratio, we investigate a User-centric Cache Allocation (UCA) scheme that reduces both the cache cost at a mesh router (MR) and the expected transmission time (ETT) for content search in the cache. To minimize the ETT values of mobile users, a genetic algorithm based UCA (GA-UCA) scheme is provided. The goal is to maximize mobile users' satisfaction via a well-defined utility that considers content popularity and the number of mobile users. Finally, by solving the optimization problem, we show that the optimal cache can be allocated for UCA and GA-UCA. In addition, a WMN provider can find the optimal number of mobile users for user-centric cache allocation in infrastructure WMNs.
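
A toy genetic algorithm conveys the GA-UCA idea of evolving a cache split across mesh routers toward higher utility. The utility function (popularity- and user-weighted, with diminishing returns), the router counts, and the budget are illustrative stand-ins for the paper's model.

```python
# Toy genetic algorithm for cache allocation across mesh routers
# (illustrative utility; the paper's utility/ETT model differs).
import random
random.seed(1)

ROUTERS, BUDGET = 4, 100          # cache units to split across routers
users = [30, 10, 50, 20]          # mobile users per router (assumed)
popularity = [0.5, 0.2, 0.9, 0.4] # content popularity per router (assumed)

def utility(alloc):               # diminishing returns per router
    return sum(u * p * (a ** 0.5) for u, p, a in zip(users, popularity, alloc))

def random_alloc():               # random non-negative split summing to BUDGET
    cuts = sorted(random.randint(0, BUDGET) for _ in range(ROUTERS - 1))
    return [b - a for a, b in zip([0] + cuts, cuts + [BUDGET])]

pop = [random_alloc() for _ in range(50)]
for _ in range(200):              # evolve: keep the fittest, mutate them
    pop.sort(key=utility, reverse=True)
    pop = pop[:10]
    while len(pop) < 50:
        child = pop[random.randrange(10)][:]
        i, j = random.sample(range(ROUTERS), 2)
        moved = min(child[i], random.randint(0, 5))
        child[i] -= moved
        child[j] += moved         # mutation: shift cache units between routers
        pop.append(child)

best = max(pop, key=utility)
print("allocation:", best, "utility:", round(utility(best), 1))
```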

Dynamic Caching Routing Strategy for LEO Satellite Nodes Based on Gradient Boosting Regression Tree

  • Yang Yang;Shengbo Hu;Guiju Lu
    • Journal of Information Processing Systems
    • /
    • v.20 no.1
    • /
    • pp.131-147
    • /
    • 2024
  • A routing strategy based on traffic prediction and dynamic cache allocation for satellite nodes is proposed to address the issues of high propagation delay and overall delay of inter-satellite and satellite-to-ground links in low Earth orbit (LEO) satellite systems. The spatial and temporal correlations of satellite network traffic were analyzed, and the relevant traffic through the target satellite was extracted as raw input for traffic prediction. An improved gradient boosting regression tree algorithm was used for traffic prediction. Based on the traffic prediction results, a dynamic cache allocation routing strategy is proposed. The satellite nodes periodically monitor the traffic load on inter-satellite links (ISLs) and dynamically allocate cache resources for each ISL with neighboring nodes. Simulation results demonstrate that the proposed routing strategy effectively reduces packet loss rate and average end-to-end delay and improves the distribution of services across the entire network.
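
The two-step strategy can be sketched as follows: predict near-future ISL traffic with a gradient boosting regressor, then divide the node's cache budget across ISLs in proportion to the predictions. The paper uses an improved GBRT variant; scikit-learn's stock GradientBoostingRegressor stands in here, and the traffic series, ISL names, and budget are assumptions.

```python
# Sketch: (1) traffic prediction with a stock gradient boosting
# regressor, (2) proportional cache allocation per ISL. Synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.cumsum(rng.normal(0, 1, 500)) + 50.0     # synthetic traffic series
X = np.array([t[i:i + 3] for i in range(len(t) - 3)])  # 3-sample windows
y = t[3:]                                        # next-sample target
model = GradientBoostingRegressor().fit(X, y)

# Recent windows per ISL (scaled copies of one series, for brevity).
recent = {"ISL-N": t[-3:], "ISL-S": t[-3:] * 0.6, "ISL-E": t[-3:] * 1.4}
predicted = {isl: max(float(model.predict(w.reshape(1, -1))[0]), 1e-9)
             for isl, w in recent.items()}

CACHE_BUDGET = 1024                              # cache units at the node
total = sum(predicted.values())
alloc = {isl: round(CACHE_BUDGET * v / total) for isl, v in predicted.items()}
print(alloc)                                     # cache units per ISL
```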

Efficient Buffer Allocation Policy for the Adaptive Block Replacement Scheme (적응력있는 블록 교체 기법을 위한 효율적인 버퍼 할당 정책)

  • Choi, Jong-Moo;Cho, Seong-Je;Noh, Sam-Hyuk;Min, Sang-Lyul;Cho, Yoo-Kun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.3
    • /
    • pp.324-336
    • /
    • 2000
  • The paper proposes an efficient buffer management scheme to enhance the performance of the disk I/O system. Without any user-level information, the proposed scheme automatically detects the block reference patterns of applications by associating block attributes with the forward distance of a block. Based on the detected patterns, the scheme applies an appropriate replacement policy to each application. We also present a new block allocation scheme that improves buffer-cache performance when the kernel needs to allocate a cache block on a cache miss. The allocation scheme analyzes the cache hit ratio of each application based on its block reference pattern and allocates cache blocks to maximize the system-wide cache hit ratio. All of these procedures are performed online and automatically at the system level. We evaluate the scheme by trace-driven simulation. Experimental results show that our scheme leads to significant improvements in cache-block hit ratios compared to traditional schemes, while requiring low overhead.
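
The pattern-detection step can be illustrated with forward distances, i.e. the number of references between successive accesses to the same block: blocks that are never re-referenced suggest a sequential pattern, while regular re-reference gaps suggest a looping one. The thresholds below are illustrative, and the paper's online scheme is considerably richer.

```python
# Simplified reference-pattern detection via forward distance
# (illustrative thresholds; not the paper's on-line algorithm).
from collections import defaultdict

def detect_pattern(block_refs):
    last_seen, gaps = {}, defaultdict(list)
    for pos, blk in enumerate(block_refs):
        if blk in last_seen:
            gaps[blk].append(pos - last_seen[blk])  # forward distance
        last_seen[blk] = pos
    all_gaps = [g for gs in gaps.values() for g in gs]
    if not all_gaps:
        return "sequential"      # blocks never re-referenced
    mean = sum(all_gaps) / len(all_gaps)
    var = sum((g - mean) ** 2 for g in all_gaps) / len(all_gaps)
    return "looping" if var < mean else "other"  # regular gaps => loop

print(detect_pattern(list(range(100))))      # sequential scan -> "sequential"
print(detect_pattern(list(range(10)) * 10))  # repeated loop   -> "looping"
```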

A Locality-Aware Write Filter Cache for Energy Reduction of STTRAM-Based L1 Data Cache

  • Kong, Joonho
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.16 no.1
    • /
    • pp.80-90
    • /
    • 2016
  • Thanks to superior leakage energy efficiency compared to SRAM cells, STTRAM cells are considered a promising alternative memory element for on-chip caches. However, the main disadvantages of STTRAM cells are high write energy and latency. In this paper, we propose a low-cost write filter (WF) cache that resides between the load/store queue and the STTRAM-based L1 data cache. To maximize the efficiency of the WF cache, the line allocation and access policies are optimized to reduce the energy consumption of the STTRAM-based L1 data cache. By efficiently filtering write operations to the STTRAM-based L1 data cache, our proposed WF cache reduces its energy consumption by up to 43.0% compared to the case without the WF cache. In addition, thanks to the fast hit latency of the WF cache, it slightly improves performance, by 0.2%.
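
A toy model of the write-filter idea: stores coalesce in a small buffer in front of the L1, so a hot line costs at most one STTRAM write per eviction rather than one per store. The capacity and LRU policy here are assumptions, not the paper's tuned line allocation and access policies.

```python
# Toy write-filter (WF) cache in front of an STTRAM-based L1 data cache.
from collections import OrderedDict

class WriteFilterCache:
    def __init__(self, lines=8):
        self.capacity = lines
        self.buf = OrderedDict()     # line address -> latest data
        self.l1_writes = 0           # STTRAM writes actually incurred

    def store(self, line_addr, data):
        if line_addr in self.buf:
            self.buf.move_to_end(line_addr)   # coalesced inside the WF
        elif len(self.buf) >= self.capacity:
            self.buf.popitem(last=False)      # evict -> one L1 (STTRAM) write
            self.l1_writes += 1
        self.buf[line_addr] = data

wf = WriteFilterCache()
stores = [0, 64, 0, 0, 64, 128] * 100         # 600 stores to 3 hot lines
for addr in stores:
    wf.store(addr // 64, b"x")
print(f"STTRAM L1 writes: {wf.l1_writes} for {len(stores)} stores")
```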

Bus Splitting Techniques for MPSoC to Reduce Bus Energy (MPSoC 플랫폼의 버스 에너지 절감을 위한 버스 분할 기법)

  • Chung Chun-Mok;Kim Jin-Hyo;Kim Ji-Hong
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.9
    • /
    • pp.699-708
    • /
    • 2006
  • A bus splitting technique reduces bus energy by placing modules that communicate frequently close together and using only the necessary bus segments for each communication. However, previous bus splitting techniques cannot be used on an MPSoC platform, because it uses a cache coherency protocol and all processors must be able to see the bus transactions. In this paper, we propose a bus splitting technique for the MPSoC platform that reduces bus energy. The proposed technique divides a bus into several segments, some for private memory and others for shared memory, thereby minimizing the bus energy consumed by private memory accesses without breaking cache coherency. We also propose a task allocation technique that considers the cache coherency protocol: it allocates tasks to processors according to their numbers of bus transactions and reduces the bus energy consumed during shared memory references. The simulation results show that the bus splitting technique reduces the bus energy consumed by private memory accesses by up to 83%, and that the task allocation technique reduces the bus energy consumed by shared memory references by up to 30%. We expect that both techniques can be used on multiprocessor platforms to reduce bus energy without interfering with the cache coherency protocol.
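
A back-of-envelope model shows where the savings come from: if energy per transaction scales with the number of bus segments driven, private accesses on a split bus pay for one segment while coherent shared accesses still drive them all. Every constant below is an illustrative assumption, not a figure from the paper.

```python
# Rough bus-splitting energy model (all numbers assumed).
SEGMENTS = 4                  # bus split into 4 segments
E_SEG = 1.0                   # energy units to drive one segment

def bus_energy(n_private, n_shared, split=True):
    if not split:             # monolithic bus: every access drives it all
        return (n_private + n_shared) * SEGMENTS * E_SEG
    # Split bus: private accesses stay on one segment; shared accesses
    # must remain visible to all processors (coherency), driving them all.
    return n_private * 1 * E_SEG + n_shared * SEGMENTS * E_SEG

priv, shared = 8000, 2000     # illustrative access mix
print("monolithic bus energy:", bus_energy(priv, shared, split=False))
print("split bus energy     :", bus_energy(priv, shared))
```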

A Local Buffer Allocation Scheme for Multimedia Data on Linux (리눅스 상에서 멀티미디어 데이타를 고려한 지역 버퍼 할당 기법)

  • 신동재;박성용;양지훈
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.4
    • /
    • pp.410-419
    • /
    • 2003
  • The buffer cache of general-purpose operating systems such as Linux manages file data using a global block replacement policy and read-ahead. As a result, multimedia data, which exhibit low locality of reference and varying consumption rates, achieve a low cache hit ratio and consume additional buffers because of read-ahead. In this paper, we design and implement a new buffer allocation algorithm for multimedia data on Linux. Our approach keeps one read-ahead cache per opened multimedia file and dynamically changes the read-ahead group size based on the buffer consumption rate of the file. This distributes resources fairly and optimizes buffer consumption. The paper compares the system's performance with that of Linux 2.4.17 in terms of buffer consumption and buffer hit ratio.
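
The core mechanism, one read-ahead group per open file whose size tracks that file's consumption rate, can be sketched as below. The bounds and the look-ahead constant are assumptions, not the values used in the paper's Linux implementation.

```python
# Sketch of per-file adaptive read-ahead sizing (constants assumed).
import time

class ReadAheadState:
    MIN_GROUP, MAX_GROUP = 4, 64          # blocks (assumed bounds)

    def __init__(self):
        self.group = self.MIN_GROUP
        self.last_read = time.monotonic()

    def on_read(self, blocks_consumed):
        now = time.monotonic()
        rate = blocks_consumed / max(now - self.last_read, 1e-6)
        self.last_read = now
        # Grow the group for fast consumers, shrink it for slow ones.
        target = int(rate * 0.5)          # 0.5 s of look-ahead (assumed)
        self.group = max(self.MIN_GROUP, min(self.MAX_GROUP, target))
        return self.group                 # blocks to prefetch next

ra = ReadAheadState()
time.sleep(0.1)                           # simulate playback consuming data
print("next read-ahead group:", ra.on_read(blocks_consumed=32), "blocks")
```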

The Effects of Cache Memory on the System Bus Traffic (캐쉬 메모리가 버스 트래픽에 끼치는 영향)

  • 조용훈;김정선
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.1
    • /
    • pp.224-240
    • /
    • 1996
  • It is common for at least one level of cache memory to be used in today's computer systems. In this paper, the impact of the internal cache memory organization on computer performance is investigated using a simulator program, written by the authors and run on a SUN SPARC workstation, with several real execution trace files. 280 cache organizations were simulated using n-way set-associative mapping and the LRU (Least Recently Used) replacement algorithm with a write allocation policy. The results show that a 16-way set-associative cache is the best configuration: with a 256KB cache memory and a 64-byte line size, the bus traffic ratio decreased, compared with a non-cache system, to the point that a single bus could support almost 7 processors without delay or performance degradation (the hit ratio was 99.21%). The smaller the chosen line size, the slightly lower the hit ratio, but the more processors a single bus can support (a maximum of 18 processors). Therefore, a proper cache memory organization can enable a single-bus structure to support multiple processors without performance degradation.
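
A rough analytical model of the bus-traffic metric: each miss moves one cache line across the bus, and a single bus supports N processors while N times the per-processor bus utilization stays at or below one. The bus width and reference rate below are chosen only so the example lands near the cited figure; they are assumptions, not the simulator's parameters.

```python
# Back-of-envelope bus-capacity model (assumed formulas and constants).
def processors_supported(hit_ratio, line_bytes, bus_bytes_per_cycle=4,
                         refs_per_cycle=1.0):
    miss_traffic = refs_per_cycle * (1 - hit_ratio) * line_bytes
    utilization = miss_traffic / bus_bytes_per_cycle   # per processor
    return int(1 / utilization)

# e.g. a 99.21% hit ratio with 64B lines (the configuration cited above)
print(processors_supported(0.9921, 64))   # -> 7 under these assumptions
```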

Game Theoretic Cache Allocation Scheme in Wireless Networks (게임이론 기반 무선 통신에서의 캐시 할당 기법)

  • Le, Tra Huong Thi;Kim, Do Hyeon;Hong, Choong Seon
    • Journal of KIISE
    • /
    • v.44 no.8
    • /
    • pp.854-859
    • /
    • 2017
  • Caching popular videos in the storage of base stations is an efficient method to reduce transmission latency. This paper proposes an incentive-based proactive cache mechanism in the wireless network to motivate content providers (CPs) to participate in the caching procedure. The system consists of one or many Infrastructure Providers (InPs) and many CPs. The InP aims to set the price it charges the CPs so as to maximize its revenue, while the CPs compete to determine the number of files they cache at the InP's base stations (BSs). We model this system within the framework of a Stackelberg game, where the InP is the leader and the CPs are the followers. Using backward induction, we derive a closed form for the amount of cache space each CP rents at each base station, and then solve the optimization problem to calculate the price the InP charges each CP. This differs from existing work in that we consider a non-uniform pricing scheme. The numerical results show that the InP's profit under the proposed scheme is higher than under uniform pricing.
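
The leader-follower structure can be reproduced numerically: for any posted price, each CP's best response follows from its own payoff maximization, and the InP then searches prices for maximum revenue. The quadratic CP payoff below is an assumed stand-in for the paper's model; it nonetheless reproduces the qualitative finding that per-CP (non-uniform) pricing beats a uniform price.

```python
# Numerical Stackelberg sketch: InP (leader) posts a cache price,
# CPs (followers) best-respond. Payoff forms and weights are assumed.
cps = [1.5, 1.0, 2.5]                 # CP "popularity weight" (assumed)

def cp_best_response(weight, price):
    # CP payoff: weight*x - x**2/2 - price*x  =>  x* = max(0, weight - price)
    return max(0.0, weight - price)

best_price, best_revenue = None, -1.0
for p in [i / 100 for i in range(1, 300)]:     # grid search over prices
    revenue = p * sum(cp_best_response(w, p) for w in cps)
    if revenue > best_revenue:
        best_price, best_revenue = p, revenue
print(f"uniform price*={best_price:.2f}, revenue={best_revenue:.2f}")

# Non-uniform pricing: optimize each CP's price separately (p_i* = w_i/2
# for this payoff), which yields higher total revenue than uniform pricing.
nonuniform = sum((w / 2) * cp_best_response(w, w / 2) for w in cps)
print(f"non-uniform revenue={nonuniform:.2f}")
```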

A Comparative Analysis on Page Caching Strategies Affecting Energy Consumption in the NAND Flash Translation Layer (NAND 플래시 변환 계층에서 전력 소모에 영향을 미치는 페이지 캐싱 전략의 비교·분석)

  • Lee, Hyung-Bong;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.13 no.3
    • /
    • pp.109-116
    • /
    • 2018
  • SSDs do not allow in-place updates within an allocated page, so whenever data is modified a new page must be allocated to replace the previous one. This intrinsic characteristic of SSDs requires many changes to existing HDD-based I/O theory. In this paper, we compare the performance of FTL caching strategies through simulation, from the perspectives of cache hashing (global vs. grouped) and caching algorithm (LRU vs. NUR). The experimental results show that, in terms of energy consumed by flash operations, grouped management of the cache is not suitable, and the NUR algorithm is superior to the LRU algorithm. In particular, we found that the cache hit ratio of the LRU algorithm is about 10 percentage points higher than that of the NUR algorithm, while the energy consumption of the LRU algorithm is about 32% higher.
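
The two caching algorithms compared can be sketched directly; NUR here evicts a page whose periodically cleared reference bit is still zero. This measures hit ratio only; the paper's simulation additionally weighs per-operation flash energy, which this sketch does not model.

```python
# LRU vs. NUR page-cache comparison on a synthetic reference string.
import random
from collections import OrderedDict
random.seed(2)

def lru_hits(trace, frames):
    cache, hits = OrderedDict(), 0
    for p in trace:
        if p in cache:
            hits += 1
            cache.move_to_end(p)              # refresh recency
        else:
            if len(cache) >= frames:
                cache.popitem(last=False)     # evict least recently used
            cache[p] = True
    return hits / len(trace)

def nur_hits(trace, frames):
    cache, hits = {}, 0                       # page -> reference bit
    for i, p in enumerate(trace):
        if p in cache:
            hits += 1
        elif len(cache) >= frames:            # evict a not-used-recently page
            unref = [q for q, bit in cache.items() if bit == 0]
            del cache[random.choice(unref or list(cache))]
        cache[p] = 1                          # set reference bit on access
        if i % (4 * frames) == 0:             # periodic reference-bit reset
            cache = {q: 0 for q in cache}
    return hits / len(trace)

trace = [random.randrange(100) for _ in range(20000)] + list(range(100)) * 50
print("LRU:", lru_hits(trace, 32), "NUR:", nur_hits(trace, 32))
```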