• Title/Summary/Keyword: data cache


A Study of File Replacement Policy in Data Grid Environments (데이터 그리드 환경에서 파일 교체 정책 연구)

  • Park, Hong-Jin
    • The KIPS Transactions:PartA / v.13A no.6 s.103 / pp.511-516 / 2006
  • Data grid computing provides geographically distributed storage resources to solve computational problems over large-scale data. Unlike cache replacement in virtual memory or web caching, finding an optimal file replacement policy for data grids is an important problem in its own right because the files involved are very large. Traditional file replacement policies such as LRU (Least Recently Used), LCB-K (Least Cost Beneficial based on K), EBR (Economic-based cache replacement), and LVCT (Least Value-based on Caching Time) must either predict future requests or consume additional resources to perform replacement. To avoid these problems, this paper proposes SBR-k (Size-based Replacement-k), which replaces files based on their size; a sketch of the idea follows below. Simulation results show that the proposed policy outperforms the traditional policies.
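
The abstract does not give the exact SBR-k algorithm, so the following is only a minimal sketch of size-based replacement, in which the largest cached files are evicted first until an incoming file fits; the class and parameter names are illustrative, not the paper's.

```python
class SizeBasedCache:
    """Toy size-based file cache: evict the largest files first.

    Illustrative only -- the actual SBR-k policy in the paper may
    differ (e.g., in how the parameter k selects eviction candidates).
    """

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.files = {}  # file_id -> size in bytes

    def used(self):
        return sum(self.files.values())

    def admit(self, file_id, size):
        if size > self.capacity:
            return False  # file can never fit
        # Evict largest files until the new file fits.
        while self.used() + size > self.capacity:
            victim = max(self.files, key=self.files.get)
            del self.files[victim]
        self.files[file_id] = size
        return True

cache = SizeBasedCache(capacity_bytes=10_000)
cache.admit("a", 6_000)
cache.admit("b", 3_000)
cache.admit("c", 4_000)     # evicts "a", the largest cached file
print(sorted(cache.files))  # ['b', 'c']
```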

A Review of Data Management Techniques for Scratchpad Memory (스크래치패드 메모리를 위한 데이터 관리 기법 리뷰)

  • DOOSAN CHO
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.771-776 / 2023
  • Scratchpad memory is a software-controlled on-chip memory designed to mitigate the disadvantages of conventional cache memories. Conventional caches carry tag-related hardware control logic, so users cannot directly control cache misses, and this extra hardware makes them larger and relatively energy-hungry. Scratchpad memory eliminates that hardware overhead and therefore has advantages in size and energy consumption, but it places the burden of data management on software. This study classifies and examines data management techniques for scratchpad memory and discusses ways to maximize its advantages; a toy staging example follows below.
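
The review itself spans many techniques; as a concrete flavor of software-managed data movement, here is a toy Python sketch of explicit staging through a small scratchpad buffer. The buffer size and names are made up for illustration and do not come from the paper.

```python
SPM_WORDS = 4  # pretend on-chip scratchpad capacity

def process(data):
    """Stage data through a small buffer, tile by tile.

    The program (not hardware) decides what lives in fast memory:
    copy a tile in, compute on it, copy the result out.
    """
    out = []
    for base in range(0, len(data), SPM_WORDS):
        spm = data[base:base + SPM_WORDS]      # explicit "DMA in"
        spm = [x * 2 for x in spm]             # compute on fast memory
        out.extend(spm)                        # explicit "DMA out"
    return out

print(process(list(range(10))))
```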

A Local Buffer Allocation Scheme for Multimedia Data on Linux (리눅스 상에서 멀티미디어 데이타를 고려한 지역 버퍼 할당 기법)

  • 신동재;박성용;양지훈
    • Journal of KIISE:Computing Practices and Letters / v.9 no.4 / pp.410-419 / 2003
  • The buffer cache of a general-purpose operating system such as Linux manages file data using a global block replacement policy and read-ahead. As a result, multimedia data, which have low locality of reference and varying consumption rates, achieve a low cache hit ratio and consume additional buffers because of read-ahead. In this paper we design and implement a new buffer allocation algorithm for multimedia data on Linux. Our approach keeps one read-ahead group per opened multimedia file and dynamically resizes the group based on the file's buffer consumption rate, distributing resources fairly and optimizing buffer consumption; a toy sketch of such adaptive sizing follows below. The paper compares the system's performance with that of Linux 2.4.17 in terms of buffer consumption and buffer hit ratio.
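
The abstract does not state the exact adaptation rule, so the following Python sketch only illustrates the general idea of resizing a per-file read-ahead group from observed consumption; all constants and names are assumptions.

```python
class ReadAheadWindow:
    """Toy per-file read-ahead sizing driven by consumption rate.

    Grows the read-ahead group while prefetched blocks are consumed
    in time, and shrinks it when prefetched blocks go unused.
    """

    def __init__(self, min_blocks=1, max_blocks=32):
        self.min_blocks = min_blocks
        self.max_blocks = max_blocks
        self.window = min_blocks

    def on_request(self, prefetched_consumed):
        if prefetched_consumed:
            self.window = min(self.window * 2, self.max_blocks)
        else:
            self.window = max(self.window // 2, self.min_blocks)
        return self.window

ra = ReadAheadWindow()
for hit in [True, True, True, False]:
    print(ra.on_request(hit))   # prints 2, 4, 8, then 4
```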

A Concurrency Control Method using Data Group Information in Mobile Computing Environments (이동 컴퓨팅 환경에서 데이타 그룹 정보를 이용한 동시성 제어 방법)

  • Kim, Dae-In;Hwang, Bu-Hyun
    • Journal of KIISE:Databases / v.32 no.3 / pp.315-325 / 2005
  • In mobile computing environments, a mobile host caches data items to use the bandwidth efficiently and to improve the response time of transactions. If data items cached at a mobile host are updated on the server, the server broadcasts an invalidation report to maintain the cache consistency of mobile hosts. However, this method has the problem that the response time of mobile transactions can be long, since the commit decision is delayed until the invalidation report is received from the server. In this paper, we propose the UGR-MT method for improving the response time of mobile transactions. Because UGR-MT can make a commit decision using data group information before the invalidation report arrives, the response time of mobile transactions is improved; a toy sketch of this idea follows below. Our method also improves cache efficiency, since it prevents the entire contents of a cache from being invalidated when the disconnection of a mobile host lasts longer than the broadcast period of the invalidation report.
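
The paper's UGR-MT protocol is not specified in the abstract; the sketch below only illustrates the general idea of committing early from per-group update information, with made-up field names and a made-up commit rule.

```python
class Client:
    """Toy mobile client deciding commits from group information."""

    def __init__(self):
        self.group_ts = {}   # group id -> last known update timestamp
        self.cache = {}      # item -> (value, group id, read timestamp)

    def read(self, item, value, group, now):
        self.cache[item] = (value, group, now)

    def apply_group_info(self, group, server_ts):
        self.group_ts[group] = server_ts

    def can_commit_early(self, items):
        # Commit without waiting for the invalidation report if no
        # group containing a read item was updated after we read it.
        return all(self.group_ts.get(g, 0) <= ts
                   for _, g, ts in (self.cache[i] for i in items))

c = Client()
c.read("x", 42, group="G1", now=10)
c.apply_group_info("G1", server_ts=5)   # last update before our read
print(c.can_commit_early(["x"]))        # True -> commit immediately
```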

Policy for Selective Flushing of Smartphone Buffer Cache using Persistent Memory (영속 메모리를 이용한 스마트폰 버퍼 캐시의 선별적 플러시 정책)

  • Lim, Soojung;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.1 / pp.71-76 / 2022
  • The buffer cache bridges the performance gap between memory and storage, but its effectiveness is limited by the periodic flushes performed to prevent data loss on smartphones. This paper shows that a selective flushing technique using a small persistent memory can significantly reduce the flushing overhead of the smartphone buffer cache. The approach is motivated by our I/O analysis of smartphone applications, which shows that a small set of hot data accounts for most file writes, while a large proportion of file data is written only once. The proposed selective flushing policy flushes frequently updated data to persistent memory and flushes only single-write data to storage (a minimal sketch follows below), which reduces storage write traffic and also improves the space efficiency of persistent memory. Simulations with I/O traces of popular smartphone applications show that the proposed policy reduces write traffic to storage by 24.8% on average and up to 37.8%.
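
A minimal sketch of the selective-flushing rule as described, with an assumed hot-write threshold and toy dictionaries standing in for the real buffer cache, persistent memory area, and storage:

```python
HOT_THRESHOLD = 2   # writes before a block counts as hot (assumption)

write_count = {}
pm_area, storage = {}, {}

def flush(block, data):
    """Route a dirty block: hot data to PM, single writes to storage."""
    write_count[block] = write_count.get(block, 0) + 1
    if write_count[block] >= HOT_THRESHOLD:
        pm_area[block] = data      # absorb repeated writes in PM
    else:
        storage[block] = data      # single-write data goes to storage

for blk, val in [("a", 1), ("b", 1), ("a", 2), ("a", 3)]:
    flush(blk, val)
print(sorted(pm_area), sorted(storage))   # ['a'] ['a', 'b']
```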

A Study on the Performance Analysis of Cache Coherence Protocols in a Multiprocessor System Using HiPi Bus (HiPi 버스를 사용한 멀티프로세서 시스템에서 캐쉬 코히어런스 프로토콜의 성능 평가에 관한 연구)

  • 김영천;강인곤;황승욱;최진규
    • The Journal of Korean Institute of Communications and Information Sciences / v.18 no.1 / pp.57-68 / 1993
  • In this paper, we describe a multiprocessor system that uses the HiPi bus with a pended protocol and multiple cache memories, and we evaluate its performance in terms of processor utilization for various cache coherence protocols. The HiPi bus was developed as the shared bus of TICOM II, the main computer system for building a nation-wide computing network at ETRI. The HiPi bus has a high data transfer rate but does not allow cache-to-cache transfer. To evaluate the effect of cache-to-cache transfer on system performance and to choose the best-performing protocol for the HiPi bus, we simulate as follows. First, we analyze the performance of the multiprocessor system with the HiPi bus in terms of processor utilization through simulation: each cache coherence protocol is described by a state transition diagram, the probability of each state is calculated from the Markov steady state (a sketch of this step follows below), and the calculated probabilities are used as input parameters of the simulation, which is modeled and run using SLAM II graphic symbols and language. Second, we propose a HiPi bus that supports cache-to-cache transfer and analyze the performance of a multiprocessor system with the proposed bus in the same way. The cache coherence protocols considered in the simulation are Write-through, Write-once, Berkeley, Synapse, Illinois, Firefly, and Dragon.
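
A minimal sketch of the steady-state step, using power iteration on an illustrative, made-up transition matrix for a three-state protocol (the paper's actual diagrams have protocol-specific states and probabilities):

```python
def steady_state(P, iters=1000):
    """Approximate the stationary distribution of a Markov chain
    by repeatedly applying the transition matrix P to a uniform
    initial distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Rows: current state; columns: next state (each row sums to 1).
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]
print([round(p, 3) for p in steady_state(P)])
```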


2Q-CFP: A Client Cache Management Scheme for Broadcast-based Information Systems (2Q-CFP: 방송에 기초한 정보 시스템을 위한 클라이언트 캐쉬 관리 기법)

  • 권혁민
    • Journal of KIISE:Databases / v.30 no.6 / pp.561-572 / 2003
  • Broadcast-based data delivery has attracted much attention as an efficient way of disseminating data to very large client populations. The main motivation for broadcast-based information systems (BBISs) is that the number of clients they serve can grow arbitrarily large without affecting their performance. The performance of BBISs depends mainly on client caching strategies and on data broadcast scheduling mechanisms. This paper addresses the former issue and proposes a new client cache management scheme, named 2Q-CFP, suited to BBISs; for context, a toy version of the classic 2Q scheme follows below. The paper also evaluates the performance of 2Q-CFP using a simulation model; the results indicate that 2Q-CFP outperforms GRAY, LRU, and CF in average response time.
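
The abstract gives no internals of 2Q-CFP itself; the name suggests it builds on the classic 2Q cache, so here is a toy version of that base scheme: first-touch pages sit in a FIFO probation queue (A1), and only re-referenced pages are promoted to the main LRU queue (Am).

```python
from collections import OrderedDict, deque

class TwoQ:
    """Toy classic 2Q cache (not the paper's 2Q-CFP variant)."""

    def __init__(self, a1_size=2, am_size=2):
        self.a1 = deque()                 # probation FIFO of page ids
        self.am = OrderedDict()           # main LRU: page id -> None
        self.a1_size, self.am_size = a1_size, am_size

    def access(self, page):
        if page in self.am:               # hit in main queue
            self.am.move_to_end(page)
        elif page in self.a1:             # re-reference: promote
            self.a1.remove(page)
            self.am[page] = None
            if len(self.am) > self.am_size:
                self.am.popitem(last=False)
        else:                             # first access: probation
            self.a1.append(page)
            if len(self.a1) > self.a1_size:
                self.a1.popleft()

c = TwoQ()
for p in ["a", "b", "a", "c"]:
    c.access(p)
print(list(c.a1), list(c.am))   # ['b', 'c'] ['a']
```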

Gated Recurrent Unit based Prefetching for Graph Processing (그래프 프로세싱을 위한 GRU 기반 프리페칭)

  • Shivani Jadhav;Farman Ullah;Jeong Eun Nah;Su-Kyung Yoon
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.6-10 / 2023
  • Data likely to be accessed soon can be predicted and stored in the cache in advance to prevent cache misses, reducing the processor's request and wait times; the processor can then work without stalling, hiding memory latency. Exploiting the temporal and spatial locality of memory accesses, a prefetcher predicts which memory address will be accessed next. We propose a prefetcher based on the GRU model, which is well suited to time-series data. Accessed addresses are encoded in binary, and the model is trained on the differences (deltas) between consecutive memory accesses. Using the learned access patterns, the proposed prefetcher then predicts the next memory address to be accessed; a sketch follows below. Compared against a multi-layer perceptron, our prefetcher showed better results.
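
A minimal sketch of the described delta-based GRU prefetcher, assuming PyTorch and made-up hyperparameters (delta vocabulary size, embedding and hidden widths); the paper's actual model, address encoding, and training loop may differ.

```python
import torch
import torch.nn as nn

class DeltaPrefetcher(nn.Module):
    """GRU over a history of address deltas; predicts the next delta."""

    def __init__(self, num_deltas=64, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(num_deltas, 16)   # delta vocabulary
        self.gru = nn.GRU(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_deltas)

    def forward(self, deltas):          # deltas: (batch, seq_len)
        h, _ = self.gru(self.embed(deltas))
        return self.head(h[:, -1])      # logits for the next delta

model = DeltaPrefetcher()               # untrained, for shape checking
history = torch.tensor([[1, 1, 2, 1]])        # toy delta sequence
next_delta = model(history).argmax(dim=-1)    # predicted next delta id
print(next_delta.item())
```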


Compact Field Remapping for Dynamically Allocated Structures (동적으로 할당된 구조체를 위한 압축된 필드 재배치)

  • Kim, Jeong-Eun;Han, Hwan-Soo
    • Journal of KIISE:Software and Applications / v.32 no.10 / pp.1003-1012 / 2005
  • The most significant difference between embedded systems and general-purpose systems is that embedded systems may use only limited resources, including battery and memory. In particular, the number of applications dealing with multimedia data is increasing. In such computation-heavy systems, memory access delay is one of the major bottlenecks hurting system performance, so many researchers have investigated techniques to reduce the memory access cost. Most programs exhibit locality in their memory references: temporal locality means that a resource accessed at one point will be used again in the near future, and spatial locality means that a resource is more likely to be used if resources near it have just been accessed. The latest embedded processors usually adopt cache memory to exploit these two types of locality, accessing the faster cache instead of off-chip memory and thereby reducing latency. In this paper we propose an enhanced dynamic allocation technique for structure-type data that eliminates unused memory space and reduces both the cache miss rate and application execution time. The proposed approach aggregates fields from multiple dynamically allocated records and remaps them consecutively onto the memory space; a toy illustration follows below. Experiments on the Olden benchmarks show a 13.9% L1 cache miss rate drop and a 15.9% L2 cache miss drop on average, compared to previously proposed techniques. We also find execution time reduced by 10.9% on average, compared to the previous work.
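
A toy Python analogy of the field-remapping idea: the hot field of many dynamically allocated nodes is stored contiguously in one pool, so traversals that touch only that field enjoy better spatial locality. This only mimics, at a high level, what the paper does for C structures; the pool layout and names are illustrative.

```python
class NodePool:
    """Field-wise (remapped) layout for dynamically allocated nodes."""

    def __init__(self):
        self.key = []      # hot field: contiguous across all nodes
        self.payload = []  # cold field: kept apart from the hot field

    def alloc(self, key, payload):
        self.key.append(key)
        self.payload.append(payload)
        return len(self.key) - 1       # node "pointer" is an index

pool = NodePool()
ids = [pool.alloc(k, "blob%d" % k) for k in (3, 1, 2)]
# A search over keys now scans one contiguous array (cache friendly)
# rather than chasing pointers to scattered whole records.
print([pool.key[i] for i in ids])   # [3, 1, 2]
```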

QEMU/KVM Based In-Memory Block Cache Module for Virtualization Environment (가상화 환경을 위한 QEMU/KVM 기반의 인메모리 블록 캐시 모듈 구현)

  • Kim, TaeHoon;Song, KwangHyeok;No, JaeChun;Park, SungSoon
    • Journal of KIISE / v.44 no.10 / pp.1005-1018 / 2017
  • Recently, virtualization has become an essential component of cloud computing due to its various strengths, including maximized server resource utilization, easy-to-maintain software, and enhanced data protection. However, since virtualization shares physical resources among VMs, system performance can deteriorate due to device contention. In this paper, we first investigate the I/O overhead as a function of the number of VMs on the same server platform and analyze the block I/O process of the KVM hypervisor. We then propose an in-memory block cache mechanism, called QBic, to overcome I/O virtualization latency. QBic monitors the hypervisor's block I/O process and stores data with high access frequency in the cache (a sketch of this rule follows below), providing fast responses to VMs and reducing I/O contention on physical devices. Finally, we present performance measurements of QBic to verify its effectiveness.
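
A minimal sketch of the caching rule the abstract describes, with assumed frequency threshold, capacity, and eviction policy; the paper does not spell out these details, so everything below is illustrative.

```python
from collections import Counter, OrderedDict

class FrequencyBlockCache:
    """Toy in-memory block cache keeping only frequently read blocks."""

    def __init__(self, capacity=2, min_freq=2):
        self.capacity, self.min_freq = capacity, min_freq
        self.freq = Counter()
        self.cache = OrderedDict()      # block id -> data, LRU order

    def read(self, block, backing):
        self.freq[block] += 1
        if block in self.cache:         # served from memory
            self.cache.move_to_end(block)
            return self.cache[block]
        data = backing[block]           # slow path: hit the device
        if self.freq[block] >= self.min_freq:   # hot enough to cache
            self.cache[block] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)
        return data

disk = {b: "data-%d" % b for b in range(5)}
c = FrequencyBlockCache()
for b in [0, 0, 1, 0, 2]:
    c.read(b, disk)
print(list(c.cache))   # [0] -- only the frequently read block cached
```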