• Title/Summary/Keyword: Cache management

A Web Caching Algorithm with Divided Cache Scope (분할 영역 기반의 웹 캐싱 알고리즘)

  • Ko, Il-Suk; Na, Yun-Ji; Leem, Chun-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2003.11b / pp.1007-1010 / 2003
  • Efficient use of the web cache is becoming an important factor in the management efficiency of web-based systems. Cache performance depends heavily on the replacement algorithm, which dynamically selects a suitable subset of objects to cache in a finite space. However, replacement on a web cache differs in many ways from traditional replacement. In this paper, a web-caching algorithm is proposed for the efficient operation of web-based systems. The algorithm is designed around a divided cache scope that accounts for the size-reference characteristics and heterogeneity of web objects. Its performance is analyzed experimentally: compared with previous replacement algorithms, the proposed algorithm shows an improvement in response speed.
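
A rough illustration of the divided-scope idea: the toy cache below partitions its space by object size, so a large object can only evict within its own scope. The boundaries, byte budgets, and per-scope LRU policy are illustrative assumptions, not the paper's exact design.

    from collections import OrderedDict

    class DividedScopeCache:
        """Toy web cache split into size-based scopes, each with its own LRU list."""

        def __init__(self, boundaries=(10_000, 100_000),
                     capacities=(50_000, 200_000, 750_000)):
            # boundaries[i] is the size limit separating scope i from scope i+1;
            # capacities[i] is the byte budget reserved for scope i (assumed values).
            self.boundaries = boundaries
            self.capacities = capacities
            self.scopes = [OrderedDict() for _ in capacities]   # url -> object size
            self.used = [0] * len(capacities)

        def _scope(self, size):
            for i, limit in enumerate(self.boundaries):
                if size < limit:
                    return i
            return len(self.boundaries)

        def get(self, url):
            for scope in self.scopes:
                if url in scope:
                    scope.move_to_end(url)        # hit: refresh recency in its scope
                    return True
            return False

        def put(self, url, size):
            i = self._scope(size)
            scope, cap = self.scopes[i], self.capacities[i]
            if size > cap:
                return                            # larger than the whole scope: bypass
            if url in scope:
                self.used[i] -= scope.pop(url)    # re-insert with the new size
            while self.used[i] + size > cap:
                _, evicted = scope.popitem(last=False)  # evict LRU from this scope only
                self.used[i] -= evicted
            scope[url] = size
            self.used[i] += size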

Efficient Management of Proxy Server Cache for Video (비디오를 위한 효율적인 프록시 서버 캐쉬의 관리)

  • 조경산; 홍병천
    • Journal of the Korea Society for Simulation / v.12 no.2 / pp.25-34 / 2003
  • Because of the explosive growth in demand for web-based multimedia applications, proper proxy caching for large multimedia objects (especially video) has become necessary. A video object is much larger and has different access characteristics than traditional web objects such as images and text, so caching a whole video file as a single web object is not efficient for the proxy cache. In this paper, we propose a proxy caching strategy that stores video files as constant-sized segments, together with an improved proxy cache replacement policy. Through event-driven simulation under various conditions, we show that our proposal is more efficient than the variable-sized segment strategy, which had previously been shown to achieve a higher hit ratio than other traditional proxy cache strategies.
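
The constant-sized segment strategy can be sketched as follows: the proxy caches fixed-size pieces of a video and evicts individual segments rather than whole files. The segment size and the plain LRU eviction are assumptions for illustration; the paper's improved replacement policy is not reproduced here.

    from collections import OrderedDict

    SEGMENT_BYTES = 1_000_000          # assumed constant segment size

    class SegmentProxyCache:
        """Caches videos as fixed-size segments and evicts LRU segments,
        so a long video never has to be cached or dropped as a whole."""

        def __init__(self, capacity_segments=4):
            self.capacity = capacity_segments
            self.cache = OrderedDict()             # (video_id, seg_no) -> bytes

        def read(self, video_id, offset, fetch_from_origin):
            key = (video_id, offset // SEGMENT_BYTES)
            if key in self.cache:
                self.cache.move_to_end(key)        # hit: refresh recency
            else:
                if len(self.cache) >= self.capacity:
                    self.cache.popitem(last=False) # evict one LRU segment only
                self.cache[key] = fetch_from_origin(key[0], key[1])
            return self.cache[key]

    # Usage: segments are fetched lazily and evicted independently.
    proxy = SegmentProxyCache()
    segment = proxy.read("movie42", 2_500_000, lambda vid, seg: b"\0" * SEGMENT_BYTES)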

A Comparative Analysis on Page Caching Strategies Affecting Energy Consumption in the NAND Flash Translation Layer (NAND 플래시 변환 계층에서 전력 소모에 영향을 미치는 페이지 캐싱 전략의 비교·분석)

  • Lee, Hyung-Bong; Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.3 / pp.109-116 / 2018
  • SSDs do not allow in-place updates within an allocated page, so every data modification triggers the allocation of a new page to replace the previous one. This intrinsic characteristic of SSDs requires many changes to existing HDD-based I/O theory. In this paper, we compare the performance of FTL caching strategies through simulation, from the perspectives of cache hashing (global vs. grouped) and caching algorithm (LRU vs. NUR). Experimental results show that, in terms of energy consumed by flash operations, grouped management of the cache is not suitable and the NUR algorithm is superior to the LRU algorithm. In particular, we found that the cache hit ratio of the LRU algorithm is about 10 percentage points higher than that of the NUR algorithm, while its energy consumption is about 32% higher.
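
The LRU-vs-NUR trade-off the paper measures comes down to bookkeeping cost: LRU maintains an exact recency order on every access, while NUR keeps only one reference bit per page, cleared periodically. A minimal sketch of an NUR-style cache (the table layout and victim choice are assumptions):

    class NURCache:
        """Not-Used-Recently: one reference bit per cached page, cleared
        periodically; any page with a clear bit is an eviction candidate."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.ref = {}                          # page -> reference bit

        def access(self, page):
            if page not in self.ref and len(self.ref) >= self.capacity:
                victims = [p for p, bit in self.ref.items() if bit == 0]
                if not victims:                    # all bits set: clear and retry
                    for p in self.ref:
                        self.ref[p] = 0
                    victims = list(self.ref)
                del self.ref[victims[0]]           # evict one not-recently-used page
            self.ref[page] = 1                     # mark as recently used

        def clear_bits(self):
            for p in self.ref:                     # in a real FTL this runs on a timer
                self.ref[p] = 0

An LRU cache would instead reorder an exact recency list on every access (for example, an OrderedDict moved to the end on each hit), which matches the paper's finding: more hits, but noticeably more energy spent per access.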

A Design and Implementation on Large Data File Management Using Buffer Cache and Virtual Memory File (버퍼 캐쉬와 가상메모리 파일을 이용한 대형 데이터화일의 처리방법 설계 및 구현)

  • 김병철; 신병석; 조동섭; 황희영
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.7 / pp.784-792 / 1992
  • In this paper we design and implement a method that allows application programs to handle large data files in the DOS environment. The method uses extended memory and the hard disk as a data buffer, and a part of conventional DOS memory as a buffer cache, which lets the application program use extended memory and the hard disk transparently. Using the buffer cache also yields some speed improvement for the application program.
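
In modern terms, the scheme amounts to reading a large file through a small page cache so the application sees one flat byte space. A minimal sketch, with the page size, cache size, and class name as assumptions (the original targets DOS extended memory rather than a host file API):

    from collections import OrderedDict

    PAGE = 4096                                    # assumed cache page size

    class BufferedFile:
        """Reads a large file through a small fixed-size page cache, so the
        application sees one flat byte space regardless of the file size."""

        def __init__(self, path, cache_pages=64):
            self.f = open(path, "rb")
            self.cache_pages = cache_pages
            self.pages = OrderedDict()             # page number -> page bytes

        def _page(self, pno):
            if pno in self.pages:
                self.pages.move_to_end(pno)        # cache hit
            else:
                if len(self.pages) >= self.cache_pages:
                    self.pages.popitem(last=False) # evict the LRU page
                self.f.seek(pno * PAGE)
                self.pages[pno] = self.f.read(PAGE)
            return self.pages[pno]

        def read(self, offset, length):
            out = bytearray()
            while length > 0:
                pno, poff = divmod(offset, PAGE)
                chunk = self._page(pno)[poff:poff + length]
                if not chunk:                      # past end of file
                    break
                out += chunk
                offset += len(chunk)
                length -= len(chunk)
            return bytes(out)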

Web Caching using File Type (파일 타입을 이용한 웹 캐싱)

  • Lim, Jae-Hyun; Lee, Jun-Yeon
    • The KIPS Transactions:PartC / v.9C no.6 / pp.961-968 / 2002
  • This paper proposes a new access method that takes the high variability of the World Wide Web into account when managing web cache space. Instead of using a single cache, we divide the cache and store documents according to their file types. The proposed method is compared with existing cache management policies based on the LFU, LRU, and SIZE algorithms. Using two different workloads, we show through simulation that file-type caching improves both the hit ratio and the byte hit ratio.
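
A minimal sketch of type-based partitioning, assuming a per-type LRU and an illustrative three-way type split; the paper's exact partition sizes and policies are not reproduced.

    from collections import OrderedDict

    def file_type(url):
        ext = url.rsplit(".", 1)[-1].lower()
        if ext in ("gif", "jpg", "jpeg", "png"):
            return "image"
        return "html" if ext in ("html", "htm") else "other"

    class TypePartitionedCache:
        """One LRU partition per file type, so one type cannot flush out another."""

        def __init__(self, per_type_capacity=100):
            self.parts = {t: OrderedDict() for t in ("image", "html", "other")}
            self.capacity = per_type_capacity

        def access(self, url, body):
            part = self.parts[file_type(url)]
            if url in part:
                part.move_to_end(url)             # hit within the type's own partition
                return part[url]
            if len(part) >= self.capacity:
                part.popitem(last=False)          # evictions never cross file types
            part[url] = body
            return body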

Cache Management using an Adaptive Parity Group Configuration in RAID 5 Controller (적응형 패리티 그룹 구성을 이용한 RAID 5 제어기에서의 캐시 운영)

  • Huh, Jung-Ho; Song, Ja-Young; Chang, Tae-Mu
    • The KIPS Transactions:PartA / v.10A no.2 / pp.83-92 / 2003
  • RAID 5 is a widely used technique for constructing disk systems with high reliability and performance. This paper proposes APGOC (Adaptive Parity Group On Cache), a cache organization that addresses the "small write" problem of RAID 5, especially in OLTP (On-Line Transaction Processing) environments. In our approach, when a user process requests a file from the kernel, information on the file's read/write characteristics is added to the file data structure of the file system. With this information, the data and parity caches can be managed interchangeably through parity fetching, which enhances cache utilization and improves disk request response time. Our method is analyzed and evaluated through simulation; compared with previous work, we observed a performance improvement of about 6~13%.
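
The "small write" problem arises because updating one block normally needs four disk I/Os: read old data, read old parity, write new data, write new parity, where new_parity = old_data XOR new_data XOR old_parity. The sketch below shows how keeping parity blocks in the cache removes the preparatory reads; APGOC's adaptive grouping and kernel-supplied read/write hints are not modeled, and all names are illustrative.

    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    class Raid5Cache:
        """Toy RAID-5 small-write path with a combined data/parity cache."""

        def __init__(self, disk):
            self.disk = disk                       # (stripe, role) -> block bytes
            self.cache = {}                        # holds data and parity blocks alike

        def _read(self, key):
            if key not in self.cache:
                self.cache[key] = self.disk[key]   # miss: one disk read
            return self.cache[key]

        def small_write(self, stripe, new_data):
            old_data = self._read((stripe, "data"))
            old_parity = self._read((stripe, "parity"))  # free if parity was fetched earlier
            new_parity = xor_blocks(xor_blocks(old_data, new_data), old_parity)
            for role, block in (("data", new_data), ("parity", new_parity)):
                self.cache[(stripe, role)] = block
                self.disk[(stripe, role)] = block  # write-through for simplicity

    # With the stripe's blocks already cached, a small write needs no preparatory reads.
    disk = {(0, "data"): bytes(4), (0, "parity"): bytes(4)}
    Raid5Cache(disk).small_write(0, b"\x01\x02\x03\x04")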

An Energy Efficient and High Performance Data Cache Structure Utilizing Tag History of Cache Addresses (캐시 주소의 태그 이력을 활용한 에너지 효율적 고성능 데이터 캐시 구조)

  • Moon, Hyun-Ju; Jee, Sung-Hyun
    • The KIPS Transactions:PartA / v.14A no.1 s.105 / pp.55-62 / 2007
  • The uptime of embedded processors for mobile devices depends on battery consumption, and a large portion of their power consumption is known to come from cache management. This paper proposes an energy-efficient data cache structure for high-performance embedded processors. A high-performance prefetching data cache issues prefetch instructions ahead of demand-fetch instructions, based on reference predictions. These prefetch instructions reduce memory delay by improving the cache hit ratio, but they also increase energy consumption in proportion to the number of prefetches issued. In this paper, we add a tag history table to the prefetching data cache to reduce energy consumption by minimizing parallel tag comparisons. Experimental results show that the proposed data cache improves both energy consumption and memory delay.
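
The abstract does not detail the tag history table, but the underlying idea resembles way prediction: remember which way hit last in each set and compare that tag first, falling back to a full parallel comparison only on a misprediction. A hedged sketch of that general mechanism (sizes and the replacement rule are assumptions, not the paper's structure):

    class WayPredictedCache:
        """Set-associative lookup that probes a predicted way first, so most
        hits compare (and power up) a single tag instead of all ways."""

        def __init__(self, sets=64, ways=4):
            self.sets, self.ways = sets, ways
            self.tags = [[None] * ways for _ in range(sets)]
            self.predicted = [0] * sets       # history: last way that hit in each set
            self.tag_compares = 0             # proxy for tag-array energy

        def lookup(self, addr, block=64):
            index = (addr // block) % self.sets
            tag = addr // (block * self.sets)
            way = self.predicted[index]
            self.tag_compares += 1
            if self.tags[index][way] == tag:
                return True                   # predicted way hit: one comparison only
            for w in range(self.ways):        # misprediction: full tag comparison
                if w != way:
                    self.tag_compares += 1
                    if self.tags[index][w] == tag:
                        self.predicted[index] = w
                        return True
            victim = (way + 1) % self.ways    # simplistic replacement for the sketch
            self.tags[index][victim] = tag
            self.predicted[index] = victim
            return False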

An Efficient Cache Management Scheme for Load Balancing in Distributed Environments with Different Memory Sizes (상이한 메모리 크기를 가지는 분산 환경에서 부하 분산을 위한 캐시 관리 기법)

  • Choi, Kitae; Yoon, Sangwon; Park, Jaeyeol; Lim, Jongtae; Lee, Seokhee; Bok, Kyoungsoo; Yoo, Jaesoo
    • KIISE Transactions on Computing Practices / v.21 no.8 / pp.543-548 / 2015
  • Recently, the volume of data has been growing dramatically along with the growth of social media and digital devices. However, existing disk-based distributed file systems are limited in data processing and data access performance by I/O costs and bottlenecks. To solve this problem, caching techniques are used to manage data in memory. In this paper, we propose a cache management scheme that handles load balancing in a distributed memory environment. The proposed scheme distributes data according to each node's memory size in distributed environments with different memory sizes, and when overloaded nodes occur, it redistributes cached data according to their access times. To show the superiority of the proposed scheme, we compare it with an existing distributed cache management scheme through performance evaluation.
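
One standard way to place data in proportion to node memory size is weighted rendezvous hashing, sketched below. The node names and weights are assumptions, and the paper's redistribution of cached data by access time on overload is not modeled here.

    import hashlib
    import math

    NODES = {"node-a": 4, "node-b": 8, "node-c": 16}    # assumed cache memory in GB

    def _unit(key, node):
        # Hash (node, key) to a uniform value in the open interval (0, 1).
        h = int.from_bytes(hashlib.sha256(f"{node}:{key}".encode()).digest()[:8], "big")
        return (h + 0.5) / 2.0**64

    def owner(key):
        # Weighted rendezvous hashing: node n wins with probability
        # weight_n / total_weight, so placement follows memory size.
        return max(NODES, key=lambda n: NODES[n] / -math.log(_unit(key, n)))

On overload, hot keys owned by a saturated node could then be remapped elsewhere, which is the role the paper assigns to access-time-based redistribution.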

Development of A Web-cache System with Compression Capability (압축 기능을 가진 웹캐시 시스템 개발)

  • Park, Zin-Won; Kim, Myung-Kyun; Hong, Yoon-Hwan
    • The KIPS Transactions:PartA / v.11A no.1 / pp.29-36 / 2004
  • As the number of Internet users and the amount of web content have grown rapidly, reducing the load on web servers and speeding up web service delivery have become major issues. A web-cache system, located between the user and the web server, has been used by many web service providers as an effective way to reduce web server load and web service response time. In this paper, we develop a web-cache system, based on the Squid cache, that adds a compression capability. The compression capability reduces network traffic and web service response time by transferring web contents in compressed form between the web-cache system and the user. The performance improvement is greater for a reverse cache than for a forward cache, because a reverse cache reduces traffic on the Internet, which is the bottleneck in the network path between the user and the web server. Experimental results show that data traffic was reduced by a factor of 2 to 8, depending on the size of the web contents. The web server response time was reduced by 37% on average, and by 87% on average when the web content was larger than 10 Kbytes.
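
The core mechanism can be sketched as a cache that stores bodies compressed once and serves them compressed whenever the client accepts gzip; Squid integration and HTTP details are omitted, and the names here are illustrative.

    import gzip

    class CompressingCache:
        """Stores response bodies gzip-compressed and serves them compressed
        when the client supports it, trading CPU for network traffic."""

        def __init__(self):
            self.store = {}                        # url -> compressed body

        def respond(self, url, accept_encoding, origin_get):
            if url not in self.store:
                self.store[url] = gzip.compress(origin_get(url))
            body = self.store[url]
            if "gzip" in accept_encoding:
                return {"Content-Encoding": "gzip"}, body   # no recompression per request
            return {}, gzip.decompress(body)       # fall back for old clients

    cache = CompressingCache()
    headers, body = cache.respond("/index.html", "gzip, deflate",
                                  lambda url: b"<p>hello</p>" * 1000)

Text-like content compresses best, which is consistent with the paper's observation that the savings grow with content size.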

STP-FTL: An Efficient Caching Structure for Demand-based Flash Translation Layer

  • Choi, Hwan-Pil; Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.22 no.7 / pp.1-7 / 2017
  • As the capacity of NAND flash modules increases, so does the amount of RAM needed to cache and maintain the FTL mapping information. To reduce the amount of mapping information managed in RAM, demand-based address mapping stores the entire mapping in flash and keeps only some valid mapping entries cached in RAM, so that RAM is used efficiently. However, when a cache miss occurs, the mapping information recorded in flash must be read, incurring address translation overhead. If RAM space is insufficient, the cache hit ratio decreases and this overhead grows. In this paper, we propose a method using two tables, the TPMT (Translation Page Mapping Table) and the SMT (Segmented Translation Page Mapping Table), to exploit both temporal and spatial locality more efficiently. A performance evaluation shows that the method improves the cache hit ratio by up to 30% and reduces extra translation operations by up to 72%, compared to the TPM scheme.
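
The baseline that STP-FTL improves on can be sketched as a demand-based mapping cache: the full logical-to-physical map lives in flash as translation pages, and RAM caches recently used translation pages, paying one flash read per miss. The TPMT/SMT split itself is not reproduced; the sizes and names below are assumptions.

    from collections import OrderedDict

    ENTRIES_PER_TPAGE = 512    # mapping entries per flash translation page (assumed)

    class DemandFTL:
        """Demand-based mapping: the full logical-to-physical map lives in
        flash; RAM caches recently used translation pages, loaded on a miss."""

        def __init__(self, flash_map, cached_tpages=8):
            self.flash_map = flash_map             # stands in for the flash-resident map
            self.capacity = cached_tpages
            self.tcache = OrderedDict()            # tpage number -> {lpn: ppn}
            self.flash_reads = 0                   # extra reads caused by mapping misses

        def translate(self, lpn):
            tpage = lpn // ENTRIES_PER_TPAGE
            if tpage in self.tcache:
                self.tcache.move_to_end(tpage)     # hit: no flash access for the mapping
            else:
                if len(self.tcache) >= self.capacity:
                    self.tcache.popitem(last=False)  # evict LRU translation page
                base = tpage * ENTRIES_PER_TPAGE
                self.flash_reads += 1              # one flash read to load the page
                self.tcache[tpage] = {l: self.flash_map.get(l)
                                      for l in range(base, base + ENTRIES_PER_TPAGE)}
            return self.tcache[tpage][lpn]

    ftl = DemandFTL({lpn: lpn + 4096 for lpn in range(8192)})
    assert ftl.translate(1300) == 5396 and ftl.flash_reads == 1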