• Title/Summary/Keyword: Disk cache


A Study of Weighted Disk Cache Method for World Wide Web (WWW를 위한 가중화 디스크 캐시 기법에 대한 연구)

  • 박해우;강병욱
    • Proceedings of the IEEK Conference / 2002.06c / pp.153-156 / 2002
  • As the use of the World Wide Web increases, so does the number of connections to servers. These interactions increase the load on networks and servers; therefore, efficient caching strategies for web documents are needed to reduce server load and network traffic by migrating copies of server files closer to the clients that use them. As one such caching policy, we propose a Weighted Disk Cache Replacement Policy (WDCRP), which analyzes users' interactions with the WWW and assigns a weight value to each web document. In particular, the WDCRP takes account of the history data in the cache log, the characteristics of Web requests, and the importance of users' interactive actions.
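
The abstract does not give the WDCRP weighting formula, so the following is only a minimal sketch of the general idea it describes: each cached document carries a weight derived from its access history and a user-action score, and the document with the lowest weight is evicted first. The field names and the weight formula here are illustrative assumptions, not the paper's definitions.

```python
import time

class WeightedDiskCache:
    """Minimal sketch of weight-based replacement (not the paper's exact WDCRP)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = {}  # url -> {"size", "hits", "last_access", "action_score"}

    def _weight(self, entry):
        # Assumed weighting: keep documents that are accessed often, accessed
        # recently, and reached through "important" user actions.
        age = time.time() - entry["last_access"]
        return entry["hits"] * entry["action_score"] / (1.0 + age)

    def access(self, url, size, action_score=1.0):
        entry = self.entries.get(url)
        if entry:                                   # hit: update the history data
            entry["hits"] += 1
            entry["last_access"] = time.time()
            return True
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda u: self._weight(self.entries[u]))
            self.used -= self.entries.pop(victim)["size"]
        self.entries[url] = {"size": size, "hits": 1,
                             "last_access": time.time(),
                             "action_score": action_score}
        self.used += size
        return False                                # miss: document now cached
```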


A Study on Write Cache Policy using a Flash Memory (플래시 메모리를 사용한 쓰기 캐시 정책 연구)

  • Kim, Young-Jin;Anggorosesar, Aldhino;Lee, Jeong-Bae;Rim, Kee-Wook
    • Proceedings of the Korea Information Processing Society Conference / 2009.11a / pp.77-78 / 2009
  • In this paper, we study a pattern-aware write cache policy that uses a NAND flash memory in disk-based mobile storage systems. Our work targets the mix of many sequential accesses and fewer non-sequential ones found in mobile storage systems by redirecting the latter to the NAND flash memory and the former to the disk. Experimental results show that our policy improves overall I/O performance by significantly reducing overhead compared with a traditional non-volatile cache scheme.
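
The paper's detection logic is not given in the abstract; the sketch below only illustrates the stated idea of routing sequential writes to the disk and non-sequential ones to a NAND flash write cache, using a simple "is this request contiguous with the previous one?" heuristic as an assumed stand-in for the real pattern detector.

```python
class PatternAwareWriteCache:
    """Illustrative dispatcher: sequential writes -> disk, random writes -> flash."""

    def __init__(self, disk, flash):
        self.disk = disk              # object exposing write(lba, data) (assumed)
        self.flash = flash
        self.next_expected_lba = None

    def write(self, lba, data):
        sequential = (self.next_expected_lba is not None and
                      lba == self.next_expected_lba)
        self.next_expected_lba = lba + len(data)
        if sequential:
            self.disk.write(lba, data)    # long sequential streams go straight to disk
        else:
            self.flash.write(lba, data)   # scattered writes are absorbed by NAND flash
```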

WWW Cache Replacement Algorithm Based on the Network-distance

  • Kamizato, Masaru;Nagata, Tomokazu;Taniguchi, Yuji;Tamaki, Shiro
    • Proceedings of the IEEK Conference / 2002.07a / pp.238-241 / 2002
  • With the growing popularity of the Internet, the amount of data on the network has increased rapidly. As a result, the degradation of response time from WWW servers, caused by network traffic and the burden on the WWW server, has become more of an issue. The problem is aggravated by the redundancy of many people requesting the same pages, even though they are browsing the same ones. To reduce this redundancy, a WWW cache server is commonly used to store WWW page data and reuse it. However, unlike CPU and disk caches, a WWW cache is known to be difficult to improve in terms of cache hit rate; consequently, it is difficult to choose which of the data flowing through the WWW cache server is worth storing. On the other hand, there is room for improvement in the cache replacement algorithms commonly used by WWW cache servers. In our study, we try to realize a WWW cache server that emphasizes the improvement of response time. To this end, we propose a new cache replacement algorithm that exploits the network distance from the WWW cache server to the WWW server that holds the page data requested by the user.
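
The algorithm's exact cost function is not spelled out in the abstract; as a rough illustration, the sketch below evicts the cached object whose re-fetch looks cheapest, using an estimate of network distance to the origin server (e.g. hop count or RTT) as the cost factor. The distance source and the cost formula are assumptions.

```python
def choose_victim(cache_entries, distance_to_origin):
    """Pick the cached URL whose loss would be cheapest to recover.

    cache_entries:      dict url -> size in bytes
    distance_to_origin: dict url -> estimated network distance (hops or RTT)
    """
    # Assumed cost model: re-fetching a page costs roughly its size times how
    # "far away" its origin WWW server is from the cache server.
    def refetch_cost(url):
        return cache_entries[url] * distance_to_origin.get(url, 1.0)

    return min(cache_entries, key=refetch_cost)
```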


A Study on Improving SQUID Proxy Server Performance by Arbitral Thread and Delayed Caching (중재 쓰레드와 지연 캐싱에 의한 스퀴드 프록시 서버 성능 향상에 관한 연구)

  • Lee, Dae-Sung;Kim, Yoo-Sung;Kim, Ki-Chang
    • The KIPS Transactions:PartC / v.10C no.1 / pp.87-94 / 2003
  • As the number of Internet users increases explosively, web caching has emerged as a solution to the resulting load, and many techniques for improving cache server performance have been suggested. In this paper, we analyze the cause of the bottleneck in cache servers and propose an arbitral thread and a delayed caching mechanism as a solution. We use an arbitral thread to serve user requests quickly by eliminating the ready multi-thread search problem during disk write operations. We also use delayed caching to provide stable system operation by avoiding overloaded disk operations and exceeding the queue threshold. The proposed cache server is implemented by modifying the SQUID cache server, and we compare its performance with that of the original SQUID cache server.
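
Squid internals aside, the abstract's two ideas can be sketched generically: a single arbitral thread drains disk writes so request threads never block, and caching is delayed (here simply skipped) whenever the pending queue exceeds a threshold. The queue limit and the drop policy below are assumptions for illustration only.

```python
import queue
import threading

WRITE_QUEUE_LIMIT = 1024           # assumed threshold for "delayed caching"
write_queue = queue.Queue()

def arbitral_thread(disk_write):
    """Single writer thread: drains queued cache objects to disk in order."""
    while True:
        key, body = write_queue.get()
        disk_write(key, body)
        write_queue.task_done()

def cache_object(key, body):
    """Called from request-handling threads; never blocks on the disk."""
    if write_queue.qsize() >= WRITE_QUEUE_LIMIT:
        return False               # delay/skip caching while the disk is overloaded
    write_queue.put((key, body))
    return True

# Usage (hypothetical disk_write callable):
# threading.Thread(target=arbitral_thread, args=(my_disk_write,), daemon=True).start()
```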

Buffer Invalidation Schemes for High Performance Transaction Processing in Shared Database Environment (공유 데이터베이스 환경에서 고성능 트랜잭션 처리를 위한 버퍼 무효화 기법)

  • 김신희;배정미;강병욱
    • The Journal of Information Systems / v.6 no.1 / pp.159-180 / 1997
  • A database sharing system (DBSS) is a system for high-performance transaction processing. In a DBSS, the processing nodes are locally coupled via a high-speed network and share a common database at the disk level. Each node has local memory, a separate copy of the operating system, and a DBMS. To reduce the number of disk accesses, each node caches database pages in its local memory buffer. However, since multiple nodes may cache a page simultaneously, cache consistency must be ensured so that every node can always access the latest version of pages. In this paper, we propose efficient buffer invalidation schemes for a DBSS in which the database is logically partitioned using primary copy authority to reduce locking overhead. The proposed schemes can improve performance by reducing the disk access overhead and the message overhead incurred in maintaining cache consistency. Furthermore, they show good performance when database workloads vary dynamically.
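
The abstract does not detail the invalidation protocol; the sketch below only illustrates the general pattern it refers to: when the node holding primary copy authority commits an update to a page, other nodes that cached that page drop their copies so a stale version is never read. The message layer and data structures are assumptions.

```python
class NodeBuffer:
    """One node's local page buffer in a database sharing system (illustrative)."""

    def __init__(self):
        self.pages = {}                      # page_id -> page data

    def get(self, page_id, read_from_disk):
        if page_id in self.pages:            # buffer hit: copy is known to be valid
            return self.pages[page_id]
        data = read_from_disk(page_id)       # miss or invalidated: reread latest version
        self.pages[page_id] = data
        return data

    def on_invalidate(self, page_id):
        # Invoked when an invalidation message arrives because another node,
        # holding primary copy authority for this partition, committed an update.
        self.pages.pop(page_id, None)
```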


Performance Evaluation of Buffer Replacement Algorithms in a Shared Disk Cluster (공유 디스크 클러스터에서 버퍼 교체 알고리즘의 성능 평가)

  • Cho, Haeng-Rae
    • Journal of KIISE:Databases / v.35 no.6 / pp.469-480 / 2008
  • A shared disk (SD) cluster couples multiple nodes for high-performance transaction processing, and all the coupled nodes share a common database at the disk level. To reduce the number of disk accesses, each node caches database pages in its memory buffer. Since a particular page may be cached simultaneously in different nodes, cache consistency should be maintained so that nodes can always access the most recent version of database pages. Most cache consistency schemes proposed for the SD cluster adopt LRU as the buffer replacement algorithm. In this paper, we first present four buffer replacement algorithms that consider the characteristics of the SD cluster, and then compare their performance. We perform experiments on a variety of cluster configurations and database workloads. The results show that the proposed algorithms achieve performance improvements of up to five times over the LRU algorithm.
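
The four proposed algorithms are not described in the abstract; for reference, the LRU baseline they are compared against can be expressed in a few lines.

```python
from collections import OrderedDict

class LRUBuffer:
    """Plain LRU page buffer, the baseline the proposed algorithms are compared with."""

    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()          # page_id -> page data, oldest first

    def access(self, page_id, fetch):
        if page_id in self.frames:
            self.frames.move_to_end(page_id) # most recently used moves to the back
            return self.frames[page_id]
        if len(self.frames) >= self.num_frames:
            self.frames.popitem(last=False)  # evict the least recently used page
        self.frames[page_id] = fetch(page_id)
        return self.frames[page_id]
```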

Analysis and Improvement of the DPW-LRU Cache Replacement Algorithm for Flash Translation Layer (플래시 변환 계층을 위한 DPW-LRU 캐시 교체 알고리즘 분석 및 개선)

  • Lee, Hyung-Bong;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.6 / pp.289-297 / 2020
  • Although flash disks are widely used in place of hard disks, it is difficult to optimize their utilization because in-place overwrite is impossible and the power consumption and time required for read, write, and erase operations all differ. One of these optimization issues is a cache management strategy that minimizes write operations. The cache operates at two levels: in the operating system equipped with flash disks and in the translation layer within the flash disk. Most studies deal with the operating-system-level cache strategy. In this study, we implement and analyze the DPW-LRU algorithm, one of the recently proposed operating system cache replacement algorithms, apply it to the FTL, and explore some improvements. Our experiments show that the DPW-LRU algorithm maintains its superiority even in the FTL environment, and performs better still with a slight modification.
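
The DPW-LRU details are not reproduced in the abstract, so the sketch below shows only the general flash-friendly idea the abstract motivates, minimizing writes by preferring to evict clean pages before dirty ones within an LRU order; it should not be read as the DPW-LRU algorithm itself.

```python
from collections import OrderedDict

class CleanFirstLRU:
    """Illustrative write-minimizing cache: evict clean pages before dirty ones."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()               # page_id -> dirty flag, in LRU order

    def access(self, page_id, is_write, flush):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # most recently used moves to the back
            self.pages[page_id] = self.pages[page_id] or is_write
            return
        if len(self.pages) >= self.capacity:
            # Prefer the least recently used *clean* page; fall back to a dirty one.
            victim = next((p for p, dirty in self.pages.items() if not dirty),
                          next(iter(self.pages)))
            if self.pages.pop(victim):
                flush(victim)                    # dirty victim costs an extra flash write
        self.pages[page_id] = is_write
```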

Cache Coherency Schemes for Database Sharing Systems with Primary Copy Authority (주사본 권한을 지원하는 공유 데이터베이스 시스템을 위한 캐쉬 일관성 기법)

  • Kim, Shin-Hee;Cho, Haeng-Rae;Kim, Byeong-Uk
    • The Transactions of the Korea Information Processing Society / v.5 no.6 / pp.1390-1403 / 1998
  • A database sharing system (DSS) is a system for high-performance transaction processing. In a DSS, the processing nodes are locally coupled via a high-speed network and share a common database at the disk level. Each node has local memory, a separate copy of the operating system, and a DBMS. To reduce the number of disk accesses, each node caches database pages in its local memory buffer. However, since multiple nodes may cache a page simultaneously, cache consistency must be ensured so that every node can always access the latest version of pages. In this paper, we propose efficient cache consistency schemes for a DSS in which the database is logically partitioned using primary copy authority to reduce locking overhead. The proposed schemes can improve performance by reducing the disk access overhead and the message overhead incurred in maintaining cache consistency. Furthermore, they show good performance when database workloads vary dynamically.


SSD Cache for RAID: Integrating Data Caching and Parity Update Delay (RAID를 위한 SSD 캐시: 데이터 캐싱과 패리티 갱신 지연 기법의 결합)

  • Minh, Sophal;Lee, Donghee
    • KIISE Transactions on Computing Practices / v.23 no.6 / pp.379-385 / 2017
  • In enterprise environments, hybrid storage typically places SSDs over disk-based RAID, with the SSDs used as a data cache. Recently, the LeavO caching scheme was introduced to reduce the parity update overhead of the underlying RAID. In this paper, we combine the data caching and LeavO caching schemes and derive cost models for the combined cache to determine the optimal data and LeavO cache sizes. We also propose the Adaptive Combined Cache, which dynamically adjusts the data cache and LeavO cache sizes for evolving workloads. Experimental results show that the performance of the Adaptive Combined Cache is significantly superior to that of the conventional data caching scheme and comparable with that of the off-line optimal scheme.
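
The cost models themselves are in the paper; as a rough illustration of the "adaptive" part only, the sketch below periodically shifts SSD space between the data cache and the parity-delay (LeavO-style) region toward whichever side has recently saved more disk I/O. The benefit counters and the adjustment step are assumptions.

```python
def rebalance(data_blocks, leavo_blocks, data_hits, parity_updates_saved,
              step=64, min_blocks=256):
    """Shift SSD blocks toward the region that saved more disk I/O recently.

    data_hits:            disk reads avoided by the data cache in the last interval
    parity_updates_saved: parity writes deferred by the LeavO-style region
    """
    if data_hits > parity_updates_saved and leavo_blocks - step >= min_blocks:
        return data_blocks + step, leavo_blocks - step
    if parity_updates_saved > data_hits and data_blocks - step >= min_blocks:
        return data_blocks - step, leavo_blocks + step
    return data_blocks, leavo_blocks
```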

A Performance Analysis of SSD Caching Mechanisms in Linux (리눅스 SSD caching mechanism 의 성능 비교 및 분석)

  • Heo, Sang-Bok;Park, Jinhee;Jo, Heeseung
    • Smart Media Journal / v.4 no.2 / pp.62-67 / 2015
  • For several decades, the hard disk drive (HDD) has been used as secondary storage in most computer systems; however, its performance is limited by its mechanical properties. Although the flash-memory-based solid state drive (SSD) has advantages over the HDD such as high performance and low noise, the SSD is still too expensive for common usage and is expected to take several years to replace the HDD completely. Therefore, SSD caching mechanisms, which use an SSD as a cache for a high-capacity HDD, have received much attention lately. Representative SSD caching mechanisms include bcache, dm-cache, Flashcache, and EnhanceIO. Each has its own internal mechanism and implementation, which gives each its own pros and cons. In this paper, we analyze the characteristics of these SSD caching mechanisms and compare their performance under various workloads. We expect our contribution to be useful for enhancing the performance of SSD caching mechanisms.
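
The paper's benchmark setup is not described here; as a generic example of how such caching layers are often compared, the sketch below times random 4 KiB reads against a block device or large file. The path is hypothetical, and without O_DIRECT the kernel page cache may serve some reads, so treat the numbers as a rough comparison only.

```python
import os
import random
import time

def random_read_benchmark(path, io_size=4096, num_ios=1000):
    """Average latency of random reads against a device or large file (rough measure)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        start = time.perf_counter()
        for _ in range(num_ios):
            offset = random.randrange(0, max(size - io_size, 1))
            os.pread(fd, io_size, offset)     # note: may hit the OS page cache
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return elapsed / num_ios

# Example (hypothetical test device): random_read_benchmark("/dev/bcache0")
```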