• Title/Summary/Keyword: cache-hit

Search results: 172

Reconsidering Performance Measurement when Non-Volatile RAM is used in the Buffer Cache (차세대 비휘발성 메모리가 추가된 버퍼캐쉬에서 성능 측정 방법의 재조명)

  • Lee Kyuhyung; Choi Jongmoo; Lee Donghee; Noh Sam H.
    • Proceedings of the Korean Information Science Society Conference / 2005.07a / pp.793-795 / 2005
  • Using next-generation non-volatile memory, which can store data persistently, together with volatile memory as a buffer cache yields both reliability and performance benefits. In this study, we run the cache management policies proposed in previous work on a simulator and, by analyzing the results, uncover new characteristics of a cache that includes non-volatile memory. When non-volatile memory is part of the cache, the number of disk accesses needed to serve a request varies with the type of request (read or write), with whether the block to be cached on a miss is dirty, and, when a read request hits, with the type of memory holding the hit block. Because of this characteristic, measuring the number of disk accesses, rather than the hit rate, gives a more accurate performance measurement for a buffer cache that includes non-volatile memory.

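As a rough illustration of the abstract's point (not the paper's actual simulator), the sketch below counts disk accesses per request under a few assumed rules; two requests can both be hits yet cost different amounts of disk I/O:

```python
# Illustrative only: disk I/O cost of one buffer-cache request in a
# hybrid volatile/NVRAM cache, under assumed (write-through-for-volatile)
# rules. The point: requests with the same hit/miss outcome can differ
# in disk traffic, so hit rate alone is a poor performance metric.

def disk_accesses(op, hit, hit_in_nvram, evicts_dirty_volatile):
    accesses = 0
    if not hit:
        accesses += 1                 # disk read to fetch the missed block
        if evicts_dirty_volatile:
            accesses += 1             # flush the dirty volatile victim
    elif op == "write" and not hit_in_nvram:
        accesses += 1                 # assumed write-through for volatile hits
    return accesses                   # a write hit in NVRAM costs nothing

# Two write hits, same hit rate, different disk traffic:
print(disk_accesses("write", True, True,  False))   # 0 (hit in NVRAM)
print(disk_accesses("write", True, False, False))   # 1 (hit in volatile DRAM)
```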

Performance Analysis of Futurebus+ based Multiprocessor Systems with MESI Cache Coherence Protocol (MESI 캐쉬 코히어런스 프로토콜을 사용하는 Futurebus+ 기반 멀티프로세서 시스템의 성능 평가)

  • 고석범;강인곤;박성우;김영천
    • The Journal of Korean Institute of Communications and Information Sciences / v.18 no.12 / pp.1815-1827 / 1993
  • In this paper, we evaluate the performance of a Futurebus+ based multiprocessor system with the MESI cache coherence protocol for four bus transaction types. Graphical symbols and the compiler of SLAM II are used for modeling and simulation. The steady-state probability of each state of the MESI protocol is computed with a Markov chain, and these probabilities are used as input values for a correct simulation. Processor utilization, memory utilization, bus utilization, and the waiting time for bus arbitration are measured in terms of the number of processors, the cache hit ratio, the probability of internal operation, and the bus bandwidth.

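For readers unfamiliar with the Markov-chain step, the following sketch computes a steady-state distribution over the MESI states; the transition matrix here is purely illustrative, not the one derived in the paper:

```python
import numpy as np

# Illustrative only: a made-up transition matrix over the MESI states
# (Modified, Exclusive, Shared, Invalid). The paper derives its own
# probabilities from the four bus transaction types.
P = np.array([
    [0.6, 0.0, 0.2, 0.2],   # M ->
    [0.3, 0.4, 0.2, 0.1],   # E ->
    [0.1, 0.0, 0.6, 0.3],   # S ->
    [0.2, 0.2, 0.3, 0.3],   # I ->
])

# The stationary distribution pi solves pi P = pi with sum(pi) = 1,
# i.e. it is the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
print(dict(zip("MESI", pi.round(4))))
```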

A Study on the Performance of Prefetching Web Cache Proxy (Prefetch하는 웹 캐쉬 프록시의 성능에 대한 연구)

  • 백윤철
    • Journal of the Korea Computer Industry Society / v.2 no.11 / pp.1453-1464 / 2001
  • The explosive growth of the Internet population has degraded the performance of web services. Popular web sites cannot serve their many requests at a proper level of service, and such poor service leaves users unsatisfied. Web caching, a remedy for this problem, reduces the amount of network traffic and gives users faster responses. In this paper, we analyze the characteristics of web cache traffic using traces of the NLANR (National Laboratory for Applied Network Research) root caches and the education network cache at Seoul National University. Based on this analysis, we suggest a prefetching method and discuss its gains. As a result, we find that the proposed prefetching improves the hit rate by up to 3% and the response time by up to 5%.

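The paper's exact prefetching heuristic comes from its trace analysis; the sketch below shows one common proxy-prefetching idea under that general theme, with all names and structures assumed:

```python
from collections import defaultdict, Counter

successors = defaultdict(Counter)   # url -> how often each next url follows
cache = set()                       # cached objects (unbounded, for brevity)

def on_request(prev_url, url):
    """Serve a request and prefetch the most likely next object."""
    if prev_url is not None:
        successors[prev_url][url] += 1
    hit = url in cache
    cache.add(url)                  # fetched on miss, then kept
    likely_next = successors[url].most_common(1)
    if likely_next:
        cache.add(likely_next[0][0])   # prefetch before it is requested
    return hit
```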

An Efficient Spatial Data Cache Algorithm for a Map Service in Mobile Environment (모바일 환경에서 지도 서비스를 위한 효율적인 공간 데이터 캐시 알고리즘)

  • Moon, Jin-Yong
    • Journal of Digital Contents Society / v.16 no.2 / pp.257-262 / 2015
  • Recently, interest in mobile GIS technology has been increasing with the spread of wireless networks, the improvement of mobile devices' performance, and the growing demand for mobile services. Providing services in a wireless environment with existing wired-based GIS solutions has many limitations, such as slow communication, low processing rates, and small screen sizes. In this paper, we propose a client-side cache algorithm to solve these problems. The proposed algorithm improves on known studies by utilizing unit time and spatial proximity. In addition, this paper conducts a performance evaluation to measure the improvement in algorithm efficiency and analyzes the results. According to our evaluation, the hit rate for spatial data queries improves over the existing algorithms.
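
The abstract does not give the algorithm's formula; the following is a hypothetical scoring rule combining recency ("unit time") and spatial proximity, with assumed weights:

```python
import math, time

def tile_score(last_access, tile_xy, current_xy, w_time=0.5, w_space=0.5):
    """Higher score = more worth keeping (weights are assumptions)."""
    age = time.time() - last_access            # seconds since last use
    dist = math.dist(tile_xy, current_xy)      # distance in map units
    return w_time / (1.0 + age) + w_space / (1.0 + dist)

def evict_victim(tiles, current_xy):
    """tiles: {tile_id: (last_access, (x, y))}; evict the lowest score."""
    return min(tiles, key=lambda t: tile_score(*tiles[t], current_xy))
```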

Forecasted Popularity Based Lazy Caching Strategy (예측된 선호도 기반 게으른 캐싱 전략)

  • Park, Chul;Yoo, Hae-Young
    • The KIPS Transactions: Part A / v.10A no.3 / pp.261-268 / 2003
  • In this paper, we propose a new caching strategy for web servers. The proposed strategy collects only the statistics of each requested file, such as its popularity, when a request arrives. At a given point in time, only the files with the highest forecasted popularity are cached, all at once. The forecasted-popularity-based lazy caching (FPLC) strategy uses an exponential smoothing method to forecast the popularity of web files. FPLC yields a better cache hit ratio and cache transfer ratio than other caching strategies. Furthermore, experiments performed with real log files from web servers show that our popularity-forecasting method improves cache efficiency.
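
A minimal sketch of the exponential smoothing step the abstract names; the smoothing factor alpha and the bookkeeping around it are assumptions:

```python
forecast = {}   # file -> forecasted popularity

def smooth(prev, observed, alpha=0.3):
    # Exponential smoothing: S_t = alpha * x_t + (1 - alpha) * S_{t-1}
    return alpha * observed + (1 - alpha) * prev

def refresh_cache(counts, k):
    """counts: requests per file in the latest interval.
    Returns the k files to cache, all at once ('lazy' refresh)."""
    for f, x in counts.items():
        forecast[f] = smooth(forecast.get(f, x), x)
    return sorted(forecast, key=forecast.get, reverse=True)[:k]
```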

Implementation of Memory Efficient Flash Translation Layer for Open-channel SSDs

  • Oh, Gijun;Ahn, Sungyong
    • International Journal of Advanced Smart Convergence / v.10 no.1 / pp.142-150 / 2021
  • Open-channel SSD is a new type of solid-state disk (SSD) that reduces the garbage collection overhead and write amplification caused by the physical constraints of NAND flash memory by exposing the internal structure of the SSD to the host. However, the host-level Flash Translation Layer (FTL) provided for open-channel SSDs in the current Linux kernel consumes host memory excessively, because it uses a page-level mapping table to translate logical addresses to physical addresses. Therefore, in this paper, we implement a selective mapping-table loading scheme that loads only the currently required part of the mapping table from the SSD into a mapping-table cache, instead of the entire table. In addition, to increase the hit ratio of the mapping-table cache, filesystem information and the mapping-table access history are utilized in the cache replacement policy. The proposed scheme is implemented in the host-level FTL of the Linux kernel and evaluated using an open-channel SSD emulator. According to the evaluation results, it achieves 80% of the I/O performance of the previous host-level FTL while using only 32% of the memory.
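
A simplified sketch of demand-loading a page-level mapping table into a bounded cache (chunk size, capacity, and the read_chunk_from_ssd callback are illustrative; the paper's replacement policy additionally uses filesystem information and access history):

```python
from collections import OrderedDict

CHUNK = 1024                              # mapping entries per chunk

class MappingTableCache:
    def __init__(self, capacity_chunks, read_chunk_from_ssd):
        self.cache = OrderedDict()        # chunk_id -> list of physical addrs
        self.capacity = capacity_chunks
        self.read_chunk = read_chunk_from_ssd
        self.hits = self.misses = 0

    def lookup(self, lpn):
        """Translate a logical page number, loading its chunk on demand."""
        cid, off = divmod(lpn, CHUNK)
        if cid in self.cache:
            self.hits += 1
            self.cache.move_to_end(cid)   # refresh LRU position
        else:
            self.misses += 1
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict the LRU chunk
            self.cache[cid] = self.read_chunk(cid)
        return self.cache[cid][off]       # logical -> physical address
```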

SBR-k (Sized-based Replacement-k): File Replacement in Data Grid Environments (SBR-k: 데이터 그리드 환경에서 파일 교체)

  • Park, Hong-Jin
    • The Journal of the Korea Contents Association / v.8 no.11 / pp.57-64 / 2008
  • Data grid computing provides geographically distributed storage resources for solving computational problems with large-scale data. Unlike cache replacement in virtual memory or web caching, finding an optimal file replacement policy for data grids is an important problem in its own right because the files are very large. Traditional file replacement policies such as LRU (Least Recently Used), LCB-K (Least Cost Beneficial based on K), EBR (Economic-based cache replacement), and LVCT (Least Value-based on Caching Time) must either predict future requests or expend additional resources on file replacement. To solve these problems, this paper proposes SBR-k (Sized-based Replacement-k), which replaces files based on file size. The proposed policy considers file size in order to reduce the number of files that must be replaced to accommodate a requested file, rather than forecasting an uncertain future. Simulation results show that the hit ratios are similar when the cache is small, but that the proposed policy is superior to the traditional policies when the cache is large.
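
A minimal sketch of size-based victim selection in the spirit of SBR-k: to admit a large file while evicting as few residents as possible, pick the largest residents first. Tie-breaking and bookkeeping are assumptions:

```python
def sbr_victims(resident, need, free):
    """resident: {filename: size}. Pick files to evict so that at least
    'need' bytes are available, evicting as few files as possible by
    taking the largest residents first."""
    victims = []
    for name, size in sorted(resident.items(), key=lambda kv: -kv[1]):
        if free >= need:
            break
        victims.append(name)
        free += size
    return victims

# Example: a 700 MB file arrives; evicting one 800 MB file suffices.
print(sbr_victims({"a": 800, "b": 300, "c": 200}, need=700, free=0))  # ['a']
```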

Cache Replacement and Coherence Policies Depending on Data Significance in Mobile Computing Environments (모바일 컴퓨팅 환경에서 데이터의 중요도에 기반한 캐시 교체와 일관성 유지)

  • Kim, Sam-Geun;Kim, Hyung-Ho;Ahn, Jae-Geun
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.2A / pp.149-159 / 2011
  • Recently, mobile computing environments have rapidly become common. This trend emphasizes the necessity of accessing database systems on fixed networks from mobile platforms via wireless networks. However, applying the database access methods of traditional computing environments to mobile computing environments is inappropriate because of their inherent restrictions. This paper suggests a new agent-based mobile database access model, together with two functions that calculate data significance scores in order to choose suitable data items for the cache replacement and coherence policies. These functions synthetically reflect the access term, access frequency and tendency, update frequency and tendency, and the distribution of data item sizes. Simulation experiments show that our policies outperform the LRU, LIX, and SAIU policies in terms of reduced access latency, improved cache byte hit ratio, and reduced cache byte pollution ratio.
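
The paper's two scoring functions are not reproduced in the abstract; the sketch below is a purely illustrative significance score over the factors the abstract lists:

```python
def significance(recency, access_freq, update_freq, size,
                 w=(0.4, 0.3, 0.2, 0.1)):
    """Illustrative only. recency: seconds since last access; size: bytes.
    Higher score = more worth keeping in the cache."""
    score = w[0] / (1.0 + recency)       # recently accessed items score high
    score += w[1] * access_freq          # frequently read items score high
    score -= w[2] * update_freq          # frequently updated items go stale
    score -= w[3] * size / 1_000_000     # large items consume more cache bytes
    return score
```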

Disk Cache Manager based on Minix3 Microkernel: Design and Implementation (Minix3 마이크로커널 기반 디스크 캐쉬 관리자의 설계 및 구현)

  • Choi, Wookjin;Kang, Yongho;Kim, Seonjong;Kwon, Hyeogsoong;Kim, Jooman
    • Journal of Digital Convergence / v.11 no.11 / pp.421-427 / 2013
  • In this work, we design and implement the Disk Cache Manager (DCM), a microkernel-based functional server that improves the I/O performance of shared disks. DCM runs as a multi-threaded system actor on the Minix3 microkernel and interfaces with other servers by passing messages through ports. The proposed DCM logically divides the shared disk into a Seven Disk and a Sodd Disk to enable parallel I/O. DCM also enables efficient placement of disk data because it raises the disk cache hit ratio by increasing the cache size when the utilization of a particular disk is high. Experimental results show that DCM is quite efficient for shared disks with high utilization.

Design and Implementation of an In-Memory File System Cache with Selective Compression (대용량 파일시스템을 위한 선택적 압축을 지원하는 인-메모리 캐시의 설계와 구현)

  • Choe, Hyeongwon;Seo, Euiseong
    • Journal of KIISE / v.44 no.7 / pp.658-667 / 2017
  • The demand for large-scale storage systems has continued to grow due to the emergence of multimedia, social-network, and big-data services. In order to improve the response time and reduce the load of such large-scale storage systems, DRAM-based in-memory cache systems are becoming popular. However, the high cost of DRAM severely restricts their capacity. While compressing cache entries has been proposed to deal with this capacity limitation, compression and decompression, which are technically difficult to parallelize, induce significant processing overhead and in turn slow the response time. This paper proposes a selective compression scheme for in-memory file system caches that rapidly estimates the compression ratio of incoming cache entries from their Shannon entropies and compresses only the entries with low compression ratios. The paper also describes the design and implementation of an in-kernel, in-memory file system cache with the proposed selective compression scheme. In the evaluation, the proposed scheme reduced the execution time of benchmarks by approximately 18% compared to a conventional non-compressing in-memory cache. It provided a cache hit ratio similar to that of an all-compressing counterpart while reducing the execution time by 7.5% through lower compression overhead, and it reduced the CPU time used for compression by 28% compared to the all-compressing scheme.
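
A small sketch of the selective-compression decision: estimate the Shannon entropy of an entry's bytes and compress only when the entry looks compressible. The 6.0 bits-per-byte threshold is an assumption, not the paper's value:

```python
import math, zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte, from 0.0 (a uniform run) to 8.0 (random bytes)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def store(entry: bytes):
    # Low entropy suggests a good compression ratio, so compress;
    # high-entropy entries skip the (hard-to-parallelize) overhead.
    if shannon_entropy(entry) < 6.0:
        return ("z", zlib.compress(entry))
    return ("raw", entry)

def load(tagged):
    tag, payload = tagged
    return zlib.decompress(payload) if tag == "z" else payload
```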