• Title/Summary/Keyword: hit ratio

Enhancing LRU Buffer Replacement Policy with Delayed Write of Not-cold-dirty-pages for Flash Memory (플래시 메모리를 위한 Not-cold-Page 쓰기지연을 통한 LRU 버퍼교체 정책 개선)

  • Jung Ho-Young; Park Sung-Min; Cha Jae-Hyuk; Kang Soo-Yong
    • Journal of KIISE: Computer Systems and Theory / v.33 no.9 / pp.634-641 / 2006
  • Flash memory has many advantages, such as non-volatility and fast I/O speed, but it also has disadvantages, such as the inability to update data in place and asymmetric read/write/erase speeds. For good performance of flash memory storage, it is essential that buffer replacement algorithms reduce the number of write operations, which also affects the number of erase operations. This paper proposes a new buffer replacement algorithm that delays the writes of not-cold dirty pages in the buffer cache of flash storage. We show that this algorithm effectively decreases the number of write and erase operations without much degradation of the hit ratio. As a result, the overall I/O performance of flash storage is improved.
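
The abstract does not spell out the authors' hot/cold test, but the core idea — let dirty pages that are still warm skip eviction so their flash writes are deferred — can be sketched in Python. The skip-count threshold below is a hypothetical stand-in for the paper's cold-page classification:

```python
from collections import OrderedDict

class DelayedWriteLRU:
    """LRU buffer cache that delays flushing dirty pages which are not
    yet cold: a dirty page may skip eviction a few times before its
    write is finally issued to flash."""

    def __init__(self, capacity, cold_threshold=2):
        self.capacity = capacity
        self.cold_threshold = cold_threshold  # hypothetical cold-page test
        self.pages = OrderedDict()            # page_id -> [dirty, skips], LRU first
        self.flash_writes = 0

    def access(self, page_id, write=False):
        if page_id in self.pages:
            dirty, skips = self.pages.pop(page_id)
            self.pages[page_id] = [dirty or write, skips]  # move to MRU end
            return True                                    # buffer hit
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page_id] = [write, 0]
        return False                                       # buffer miss

    def _evict(self):
        victim = None
        for page_id, (dirty, skips) in self.pages.items():  # scan from LRU end
            if dirty and skips < self.cold_threshold:
                self.pages[page_id][1] += 1   # delay the write: skip this page
                continue
            victim = page_id
            break
        if victim is None:                    # every candidate was delayed: fall back to plain LRU
            victim = next(iter(self.pages))
        if self.pages[victim][0]:
            self.flash_writes += 1            # flush the dirty page only on final eviction
        del self.pages[victim]
```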

SBR-k (Sized-based replacement-k): File Replacement in Data Grid Environments (SBR-k(Sized-based replacement-k) : 데이터 그리드 환경에서 파일 교체)

  • Park, Hong-Jin
    • The Journal of the Korea Contents Association / v.8 no.11 / pp.57-64 / 2008
  • Data grid computing provides geographically distributed storage resources for solving computational problems with large-scale data. Unlike cache replacement in virtual memory or web caching, finding an optimal file replacement policy for data grids is an important problem because file sizes are very large. Traditional file replacement policies such as LRU (Least Recently Used), LCB-K (Least Cost Beneficial based on K), EBR (Economic-based cache replacement), and LVCT (Least Value-based on Caching Time) must either predict future requests or consume additional resources to perform file replacement. To solve these problems, this paper proposes SBR-k (Sized-based replacement-k), which replaces files based on file size. The proposed policy considers file size so as to reduce the number of files that must be replaced to accommodate a requested file, rather than forecasting an uncertain future. Simulation results show that the hit ratios of the policies were similar when the cache size was small, but the proposed policy was superior to the traditional policies when the cache size was large.
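
A minimal sketch of size-based replacement in this spirit follows; the role of the parameter k is not described in the abstract, so it is omitted here, and largest-first eviction is assumed as one way to minimize the number of files replaced:

```python
def sbr_evict(cache, incoming_size, capacity):
    """Size-based eviction sketch: free space for an incoming file by
    removing as few cached files as possible (largest-first).

    cache: dict mapping file_id -> size in bytes.
    Returns the list of evicted file ids."""
    used = sum(cache.values())
    victims = []
    # Evicting the largest files first frees space with the fewest removals.
    for file_id, size in sorted(cache.items(), key=lambda kv: -kv[1]):
        if used + incoming_size <= capacity:
            break
        victims.append(file_id)
        used -= size
    for file_id in victims:
        del cache[file_id]
    return victims
```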

Performance Impact of Large File Transfer on Web Proxy Caching: A Case Study in a High Bandwidth Campus Network Environment

  • Kim, Hyun-Chul; Lee, Dong-Man; Chon, Kil-Nam; Jang, Beak-Cheol; Kwon, Tae-Kyoung; Choi, Yang-Hee
    • Journal of Communications and Networks / v.12 no.1 / pp.52-66 / 2010
  • Since large objects consume substantial resources, web proxy caching incurs a fundamental trade-off between performance (i.e., hit ratio and latency) and overhead (i.e., resource usage) when caching and relaying large objects to users. This paper investigates how, and to what extent, the current dedicated-server-based web proxy caching scheme is affected by large file transfers in a high-bandwidth campus network environment, using a series of trace-based performance analyses and profiling of various resource components in our experimental Squid proxy cache server. Large file transfers often overwhelm the cache server, causing a bottleneck in the web network by saturating the cache server's network bandwidth. Due to requests for large objects, the response times for concurrently requested small objects increase by a factor as high as a few million in the worst cases. We argue that this cache bandwidth bottleneck is due to a fundamental limitation of the current centralized web proxy caching model, which scales poorly with a limited amount of dedicated resources. This is a serious threat to the viability of the current web proxy caching model, particularly in high-bandwidth access networks, since it leads to sporadic disconnections of the downstream access network from the global web. We propose a peer-to-peer cooperative web caching scheme to address the cache bandwidth bottleneck problem, and we show that it caches and delivers large objects efficiently and cost-effectively, without generating significant overhead for participating peers.
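
The dispatch rule at the heart of the proposal — keep small objects on the dedicated proxy, offload large ones to cooperating peers — might look roughly like the sketch below; the size threshold and the peer lookup table are assumptions, since the paper's protocol details are not in the abstract:

```python
LARGE_OBJECT_BYTES = 10 * 2**20   # hypothetical 10 MB threshold

def route_request(obj_id, obj_size, central_cache, peer_index):
    """Route a web request: small objects come from the dedicated proxy
    cache, large objects from a cooperating peer replica, so large
    transfers do not saturate the proxy's bandwidth.

    peer_index maps obj_id -> list of peers (an assumed lookup service)."""
    if obj_size < LARGE_OBJECT_BYTES:
        return ("proxy", central_cache.get(obj_id))
    peers = peer_index.get(obj_id, [])
    if peers:
        return ("peer", peers[0])   # serve the large object from a peer
    return ("origin", None)         # fall back to the origin server
```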

Proxy Caching Grouping by Partition and Mapping for Distributed Multimedia Streaming Service (분산 멀티미디어 스트리밍 서비스를 위한 분할과 사상에 의한 프록시 캐싱 그룹화)

  • Lee, Chong-Deuk
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.40-47 / 2009
  • Recently, dynamic proxy caching has been proposed for distributed environments so that media objects requested by users can be served directly from the proxy without contacting the server. However, the large size, low-latency requirements, and continuous streaming demands of media objects make such caching challenging. To address these problems, this paper proposes a grouping scheme with fuzzy filtering based on partition and mapping. For partition and mapping, media block segments are divided into fixed partition reference blocks (RfP) and variable partition reference blocks (RvP). For semantic relationships, a fuzzy relationship is constructed and evaluated according to fixed partition temporal synchronization (Tf) and variable partition temporal synchronization (Tv). Simulation results show that the proposed scheme provides efficient streaming service, with a higher average request response rate and cache hit rate and a lower delayed-startup ratio than other schemes.
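
The abstract leaves the membership functions unspecified; as a loose illustration, a triangular fuzzy membership over a temporal synchronization offset could drive the filtering step, as in this sketch (the tolerance and threshold values are purely illustrative, not the paper's):

```python
def fuzzy_membership(sync_offset, tolerance):
    """Triangular membership: 1.0 at perfect synchronization, falling
    linearly to 0.0 at the tolerance bound (an assumed function form)."""
    return max(0.0, 1.0 - abs(sync_offset) / tolerance)

def group_segments(segments, tolerance=1.0, threshold=0.5):
    """Keep only block segments whose fuzzy synchronization score passes
    the filter, mimicking grouping-by-fuzzy-filtering at a high level.

    segments: list of dicts with a "sync_offset" field."""
    return [s for s in segments
            if fuzzy_membership(s["sync_offset"], tolerance) >= threshold]
```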

Distributed File Placement and Coverage Expansion Techniques for Network Throughput Enhancement in Small-cell Network (소형셀 네트워크 전송용량 향상을 위한 분산 파일저장 및 커버리지 확장 기법)

  • Hong, Jun-Pyo
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.1 / pp.183-189 / 2018
  • This paper proposes distributed file placement and coverage expansion techniques for mitigating the backhaul traffic bottleneck in small-cell networks. To minimize the backhaul load with limited memory space, the proposed scheme controls the coverage and file placement of each base station according to the file popularity distribution and the base stations' memory space. In other words, since the cache hit ratio is low when memory capacity is small or file popularity is widespread, the base stations expand their coverage and cache different sets of files, so that users located in the overlapped area can exploit the cached file sets of multiple base stations. Our simulation results show that the proposed scheme outperforms the conventional caching strategy in terms of network throughput when memory capacity is small or the file popularity distribution is widespread.
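
A rough sketch of the decision rule follows, assuming Zipf-distributed file popularity and a hypothetical target hit ratio as the trigger for coverage expansion (neither value comes from the paper):

```python
def zipf_popularity(num_files, alpha):
    """Request probability of each file under a Zipf law with exponent alpha."""
    weights = [r ** -alpha for r in range(1, num_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def place_files(num_files, cache_size, alpha, hit_target=0.5):
    """If caching the top files already reaches the target hit ratio, both
    base stations cache the same popular set.  Otherwise (small memory or
    flat popularity), they expand coverage and cache disjoint sets, so a
    user in the overlap effectively sees a doubled cache."""
    pop = zipf_popularity(num_files, alpha)
    solo_hit = sum(pop[:cache_size])
    if solo_hit >= hit_target:
        top = list(range(cache_size))          # same top-popularity files
        return {"expand": False, "bs1": top, "bs2": top}
    ranked = list(range(2 * cache_size))       # pool twice the memory via overlap
    return {"expand": True, "bs1": ranked[0::2], "bs2": ranked[1::2]}
```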

Cache Management using an Adaptive Parity Group Configuration in a RAID 5 Controller (적응형 패리티 그룹 구성을 이용한 RAID 5 제어기에서의 캐시 운영)

  • Huh, Jung-Ho; Song, Ja-Young; Chang, Tae-Mu
    • The KIPS Transactions: Part A / v.10A no.2 / pp.83-92 / 2003
  • RAID 5 is a widely used technique for constructing disk systems with high reliability and performance. This paper proposes APGOC (Adaptive Parity Group On Cache), a cache organization that addresses the "small write" problem of RAID 5, especially in OLTP (On-Line Transaction Processing) environments. In our approach, when a user process requests a file from the kernel, information on the file's read/write characteristics is added to the file data structure of the file system. With this information, the data and parity caches can be managed interchangeably through parity fetching, which enhances cache utilization and improves disk request response time. Our method is analyzed and evaluated through simulation. Compared with previous work, we observed a performance enhancement of about 6~13%.
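
The benefit of keeping parity in the cache is easiest to see from the RAID 5 small-write parity update, P_new = P_old XOR D_old XOR D_new: if the old parity block is already cached, the pre-read of the parity disk disappears. A minimal sketch of that update (the APGOC bookkeeping driven by file-system read/write hints is not reproduced):

```python
def small_write(stripe_cache, block_id, new_data, old_data, old_parity):
    """RAID 5 small-write parity update: P_new = P_old ^ D_old ^ D_new.
    Caching the parity block alongside the data block (as parity fetching
    does) saves the extra parity read on the next small write.

    All data arguments are bytes objects of equal length."""
    new_parity = bytes(p ^ do ^ dn
                       for p, do, dn in zip(old_parity, old_data, new_data))
    stripe_cache[block_id] = new_data            # data block stays cached
    stripe_cache[("P", block_id)] = new_parity   # parity cached alongside data
    return new_parity
```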

An Energy Efficient and High Performance Data Cache Structure Utilizing Tag History of Cache Addresses (캐시 주소의 태그 이력을 활용한 에너지 효율적 고성능 데이터 캐시 구조)

  • Moon, Hyun-Ju; Jee, Sung-Hyun
    • The KIPS Transactions: Part A / v.14A no.1 s.105 / pp.55-62 / 2007
  • The uptime of embedded processors for mobile devices depends on battery consumption, and a large portion of their power consumption is known to be due to cache management. This paper proposes an energy-efficient data cache structure for high-performance embedded processors. A high-performance prefetching data cache issues prefetch instructions ahead of demand-fetch instructions based on reference predictions. These prefetch instructions reduce memory delay by improving the cache hit ratio, but they increase energy consumption in proportion to the number of prefetches issued. In this paper, we add a tag history table to the prefetching data cache to reduce energy consumption by minimizing parallel tag comparisons. Experimental results show that the proposed data cache improves both energy consumption and memory delay.
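
The energy argument can be illustrated with a way-prediction-style lookup: consult the tag history first and compare a single tag, falling back to the remaining ways only on a mispredict. This is a software sketch of a hardware structure, with a simple last-hit-way history assumed in place of the paper's exact table:

```python
class TagHistoryCache:
    """Set-associative lookup sketch: a per-set history predicts the
    matching way so only one tag is compared; the other ways are checked
    only when the prediction misses.  The comparison counter stands in
    for the energy spent on parallel tag comparisons."""

    def __init__(self, num_sets, ways):
        self.tags = [[None] * ways for _ in range(num_sets)]
        self.history = [0] * num_sets   # last way that hit in each set
        self.comparisons = 0

    def lookup(self, set_idx, tag):
        predicted = self.history[set_idx]
        self.comparisons += 1
        if self.tags[set_idx][predicted] == tag:
            return True                 # one comparison instead of `ways`
        for way, stored in enumerate(self.tags[set_idx]):
            if way == predicted:
                continue                # already compared above
            self.comparisons += 1
            if stored == tag:
                self.history[set_idx] = way
                return True
        return False
```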

Efficient Traffic Management Scheme for Fast Authenticated Handover in IEEE 802.16e Network (휴대인터넷에서 낮은 지연 특성을 가지는 인증유지 핸드오버를 위한 효과적인 트래픽 관리기법)

  • Choi Jae Woo; Kang Jeon il; Nyang Dae Hun
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.6C / pp.457-464 / 2005
  • Portable Internet, currently being standardized, provides fast mobility over a wider service range than wireless LAN. Once Portable Internet service launches, many people will use it, and wireless traffic will increase accordingly. In Portable Internet, reducing handover latency is important for providing users with satisfactory service when handover occurs. In IEEE 802.16e, an MSS sends its security context information to the one base station it will move to, in order to reduce handover latency; however, this is unsuitable when that BS does not know the security context. A proactive caching method, which sends the security context information to adjacent base stations in advance, was proposed in [4] to reduce handover latency. In this paper, we propose an effective traffic management algorithm that reduces the signaling traffic caused by the proactive caching method.
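
One plausible reading of the traffic reduction — push the security context only to the neighbors the MSS is likely to enter, instead of flooding every adjacent BS — is sketched below; the handover-probability estimates and threshold are assumptions, not the paper's actual selection rule:

```python
def proactive_push(mss_context, neighbors, handover_prob, threshold=0.3):
    """Proactive-caching sketch: rather than sending the MSS's security
    context to all adjacent base stations (the method of [4]), push it
    only to neighbors whose estimated handover probability exceeds a
    threshold, cutting the signaling traffic.

    handover_prob: dict mapping bs_id -> estimated probability (assumed input).
    Returns one (bs_id, context) signaling message per selected target."""
    targets = [bs for bs in neighbors
               if handover_prob.get(bs, 0.0) >= threshold]
    return [(bs, mss_context) for bs in targets]
```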

Disk Cache Manager based on Minix3 Microkernel: Design and Implementation (Minix3 마이크로커널 기반 디스크 캐쉬 관리자의 설계 및 구현)

  • Choi, Wookjin; Kang, Yongho; Kim, Seonjong; Kwon, Hyeogsoong; Kim, Jooman
    • Journal of Digital Convergence / v.11 no.11 / pp.421-427 / 2013
  • This work designs and implements Disk Cache Manager (DCM), a microkernel-based functional server that improves the I/O performance of shared disks. DCM interfaces with other servers by message passing through ports, running as a multi-threaded system actor on the Minix3 microkernel. The proposed DCM logically treats the shared disk as a Seven Disk and a Sodd Disk to enable parallel I/O. DCM also enables efficient placement of disk data, because it raises the disk cache hit ratio by increasing the cache size when the utilization of a particular disk is high. Experimental results show that DCM is quite efficient for shared disks with high utilization.
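
Assuming "Seven" and "Sodd" denote the even- and odd-numbered halves of the shared disk (the abstract does not define the names), the parallel I/O path might be sketched as follows:

```python
import threading

def parallel_read(blocks, read_even, read_odd):
    """DCM-style parallel I/O sketch: even- and odd-numbered blocks are
    assumed to live on separate logical disks and are fetched by two
    concurrent workers.  read_even/read_odd are caller-supplied reader
    functions taking a block number and returning its data."""
    even = [b for b in blocks if b % 2 == 0]
    odd = [b for b in blocks if b % 2 == 1]
    results = {}

    def run(reader, ids):
        for b in ids:
            results[b] = reader(b)   # distinct keys, safe under the GIL

    t1 = threading.Thread(target=run, args=(read_even, even))
    t2 = threading.Thread(target=run, args=(read_odd, odd))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```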

Performance Evaluation of Caching in PON-based 5G Fronthaul (PON기반 5G 프론트홀의 캐싱 성능 평가)

  • Jung, Bokrae
    • Journal of Convergence for Information Technology / v.10 no.1 / pp.22-27 / 2020
  • With the deployment of 5G infrastructure, content delivery networks (CDNs) will play a key role in providing explosively growing services, such as independent media and YouTube, that carry high-speed mobile content. Without a local cache, the mobile backhaul and fronthaul must endure a huge bandwidth burden as the number of direct accesses to content providers increases. To deal with this issue, this paper first presents two fronthaul solutions for CDNs: one using dark fiber and one using a passive optical network (PON). On top of that, we propose an aggregated content request scheme specialized for PON caching, and we evaluate and compare its performance with legacy schemes through simulation. Compared to no caching, the proposed PON caching scheme reduces average access time by up to 0.5 seconds, cuts the received request packets to 1/n, and saves 60% of the backhaul bandwidth. This work can serve as a useful reference for service providers and will be extended in future work to further improve the cache hit ratio.
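
The aggregated content request can be sketched as request merging at the OLT: while a fetch for an item is in flight, further requests for the same item from other ONUs are queued and served from the single response, which is what cuts received request packets to 1/n. The PON signaling is abstracted into plain method calls in this sketch:

```python
class RequestAggregator:
    """Sketch of aggregated content requests at the OLT: n concurrent
    requests for the same content produce one upstream fetch, and the
    single response is fanned out to every merged requester."""

    def __init__(self):
        self.pending = {}   # content_id -> list of waiting users

    def request(self, content_id, user):
        """Returns True if an upstream fetch must be issued, or False if
        the request was merged into one already in flight."""
        if content_id in self.pending:
            self.pending[content_id].append(user)   # aggregated: no new fetch
            return False
        self.pending[content_id] = [user]
        return True

    def complete(self, content_id, data):
        """Deliver the single upstream response to all merged requesters."""
        return [(user, data) for user in self.pending.pop(content_id, [])]
```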