• Title/Summary/Keyword: buffer cache scheme

Dynamic Buffer Allocation Scheme for Caching in Realtime Multimedia Systems (실시간 멀티미디어 시스템에서의 캐슁을 위한 동적 버퍼 할당 기법)

  • Kwon, Jin-Baek; Yeom, Heon-Young; Lee, Kyung-Oh
    • Journal of KIISE: Computer Systems and Theory / v.27 no.4 / pp.420-430 / 2000
  • Several caching schemes for real-time multimedia systems have been proposed, but they focus only on increasing the hit ratio without providing any means to utilize the disk bandwidth saved by cache hits. One of the most important metrics in multimedia systems is the number of clients that a system can service simultaneously while guaranteeing Quality of Service (QoS). Preemptive but Safe Interval Caching (PSIC) was proposed as a caching scheme that provides deterministic QoS. However, it cannot adapt to changes in the system environment because it has no mechanism for changing the cache size. In this paper, we present a new caching scheme, Dynamic Interval Caching (DIC), which maximizes performance regardless of changes in the system environment and provides hiccup-free service by managing memory buffers dynamically. We demonstrate that DIC allocates the buffer cache optimally by comparing it with PSIC through trace-driven simulations.
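
The interval-caching family of schemes, to which PSIC and DIC belong, ranks pairs of concurrent streams on the same video by the gap between them and gives buffers to the shortest gaps first. The Python sketch below illustrates that core allocation rule only; it is not the paper's DIC algorithm (which additionally resizes the cache dynamically), and all names are illustrative.

```python
from collections import defaultdict

def allocate_intervals(streams, total_buffers):
    """streams: list of (video_id, playback_position_in_blocks).
    Returns (follower, leader) index pairs whose intervals fit in cache."""
    by_video = defaultdict(list)
    for idx, (vid, pos) in enumerate(streams):
        by_video[vid].append((pos, idx))

    intervals = []
    for readers in by_video.values():
        readers.sort()
        # Consecutive readers of the same video form an interval: the one
        # behind (the follower) can be fed from blocks the one ahead (the
        # leader) already read, at a buffer cost equal to their gap.
        for (f_pos, f_idx), (l_pos, l_idx) in zip(readers, readers[1:]):
            intervals.append((l_pos - f_pos, f_idx, l_idx))

    intervals.sort()  # shortest interval first: cheapest cache hit per buffer
    cached, used = [], 0
    for size, f_idx, l_idx in intervals:
        if used + size > total_buffers:
            break
        cached.append((f_idx, l_idx))
        used += size
    return cached

# Three clients watching video "A", one watching "B", 100-block cache.
print(allocate_intervals([("A", 0), ("A", 30), ("A", 90), ("B", 10)], 100))
```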

Energy-Performance Efficient 2-Level Data Cache Architecture for Embedded System (내장형 시스템을 위한 에너지-성능 측면에서 효율적인 2-레벨 데이터 캐쉬 구조의 설계)

  • Lee, Jong-Min; Kim, Soon-Tae
    • Journal of KIISE: Computer Systems and Theory / v.37 no.5 / pp.292-303 / 2010
  • On-chip cache memories play an important role in both the performance and the energy consumption of resource-constrained embedded systems by filtering out many off-chip memory accesses. We propose a 2-level data cache architecture with a low energy-delay product tailored to embedded systems. The L1 data cache is small and direct-mapped and employs a write-through policy; in contrast, the L2 data cache is set-associative and adopts a write-back policy. Consequently, the L1 data cache is accessed in one cycle and provides high cache bandwidth, while the L2 data cache is effective in reducing the global miss rate. To reduce both the penalty of the high miss rate caused by the small L1 cache and the power consumed by address generation, we propose an ECP (Early Cache hit Predictor) scheme. The ECP predicts whether the L1 cache holds the requested data, using fast address generation together with L1 cache hit prediction. To reduce the high energy cost of accessing the L2 data cache caused by heavy write-through traffic from the write buffer placed between the two cache levels, we propose a one-way write scheme. In our simulation-based experiments using a cycle-accurate simulator and embedded benchmarks, the proposed 2-level data cache architecture shows average improvements of 3.6% in overall system performance and 50% in data cache energy consumption.
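
A minimal sketch of the access path described above: a small direct-mapped, write-through L1 backed by a set-associative, write-back L2, where only L1 misses reach the L2. The ECP predictor and one-way write scheme are omitted, and the class names are mine, not the paper's.

```python
class DirectMappedL1:
    def __init__(self, n_lines):
        self.n = n_lines
        self.tags = [None] * n_lines
    def access(self, block):
        idx = block % self.n
        hit = self.tags[idx] == block
        self.tags[idx] = block          # fill on miss
        return hit                      # write-through: no dirty state kept

class SetAssociativeL2:
    def __init__(self, n_sets, ways):
        self.n_sets, self.ways = n_sets, ways
        self.sets = [[] for _ in range(n_sets)]   # LRU order, most recent last
    def access(self, block):
        s = self.sets[block % self.n_sets]
        hit = block in s
        if hit:
            s.remove(block)
        elif len(s) >= self.ways:
            s.pop(0)                    # evict LRU way (write-back would happen here)
        s.append(block)
        return hit

l1, l2 = DirectMappedL1(64), SetAssociativeL2(256, 4)
for b in [1, 2, 1, 65, 1]:              # 1 and 65 conflict in the 64-line L1
    if not l1.access(b):                # an L1 miss filters down to the L2
        print("L1 miss, L2", "hit" if l2.access(b) else "miss", "for block", b)
```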

AN ADVANCED DISK BLOCK CACHING ALGORITHM FOR DISK I/O SUB-SYSTEM

  • Jung, Soo-Mok; Rho, Kyung-Taeg
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.11 no.3 / pp.43-52 / 2007
  • A hard disk, which is classified as external storage, is usually capacious and economical. Despite these attractive characteristics and continued efforts to improve its performance, however, a hard disk is far slower than a processor, and its improvement has been gradual because it relies on mechanical operation, whereas the processor has advanced rapidly along with semiconductor technology. As a result, the disk I/O subsystem has become a bottleneck for computer system performance, and research on the disk I/O subsystem continues in order to relieve it. In this paper, we propose a multi-level LRU scheme and apply it to computer systems with both a buffer cache and a disk cache. Applying the proposed scheme decreases the average access time to disk blocks. The efficiency of the proposed algorithm was verified by simulation results.
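
The abstract does not detail the multi-level LRU mechanism, so the sketch below is a generic reading of the idea: blocks enter the lowest LRU level and are promoted one level per hit, so repeatedly used blocks survive longer than blocks touched once. All names and the demotion rule are assumptions.

```python
from collections import OrderedDict

class MultiLevelLRU:
    def __init__(self, levels, capacity_per_level):
        self.levels = [OrderedDict() for _ in range(levels)]
        self.cap = capacity_per_level

    def access(self, block):
        for lvl, d in enumerate(self.levels):
            if block in d:
                del d[block]
                # promote on hit, up to the highest level
                self._insert(min(lvl + 1, len(self.levels) - 1), block)
                return True
        self._insert(0, block)          # miss: enter at the lowest level
        return False

    def _insert(self, lvl, block):
        d = self.levels[lvl]
        d[block] = True
        if len(d) > self.cap:
            victim, _ = d.popitem(last=False)   # evict LRU of this level
            if lvl > 0:
                self._insert(lvl - 1, victim)   # demote instead of discarding

cache = MultiLevelLRU(levels=3, capacity_per_level=2)
for b in [1, 2, 1, 3, 1, 4]:
    print(b, "hit" if cache.access(b) else "miss")
```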

Characteristics and Automatic Detection of Block Reference Patterns (블록 참조 패턴의 특성 분석과 자동 발견)

  • Choe, Jong-Mu; Lee, Dong-Hui; No, Sam-Hyeok; Min, Sang-Ryeol; Jo, Yu-Geun
    • Journal of KIISE: Computer Systems and Theory / v.26 no.9 / pp.1083-1095 / 1999
  • As the speed gap between processors and disks continues to increase, the role of the buffer cache located in main memory is becoming increasingly important. The buffer cache is managed by block replacement and prefetching policies, and each policy must estimate the value of a block, that is, which block will be accessed in the near future. The value of a block depends on the characteristics of the application's block reference patterns, so accurate characterization of these patterns can improve buffer cache performance. In this paper, we study the block reference behavior of applications and propose a scheme that automatically detects block reference patterns. Detection is performed by associating the attributes of a block with the block's forward reference distance. With periodic detection using a two-stage pipeline technique, the scheme can detect block reference patterns online and recognize changes in those patterns. We measured the detection capability of the proposed scheme using eight real workload traces and found that it accurately detects the block reference patterns of applications. We also applied the detected patterns to the block replacement policy; replacement policies matched to the detected patterns reduce the number of disk I/Os by up to 57% compared with the traditional LRU policy.
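
A minimal sketch of classifying a block reference stream by reuse distance. The paper's two-stage pipelined online detector is not reproduced; this offline version only illustrates how the classic patterns (sequential, looping, temporally clustered) separate, and the labels and thresholds are mine.

```python
def classify(trace):
    """trace: list of block numbers. Returns a coarse pattern label."""
    last_seen = {}
    loop_gaps = []
    for i, b in enumerate(trace):
        if b in last_seen:
            loop_gaps.append(i - last_seen[b])   # distance since last reference
        last_seen[b] = i
    if not loop_gaps:
        return "sequential"             # every block referenced exactly once
    # a constant re-reference period suggests a loop over the same blocks
    if len(set(loop_gaps)) == 1:
        return "looping"
    return "temporally-clustered"

print(classify([1, 2, 3, 4, 5]))              # sequential
print(classify([1, 2, 3, 1, 2, 3, 1, 2, 3]))  # looping (constant period 3)
print(classify([1, 1, 2, 1, 3, 2, 9]))        # temporally-clustered
```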

The Early Write Back Scheme For Write-Back Cache (라이트 백 캐쉬를 위한 빠른 라이트 백 기법)

  • Chung, Young-Jin; Lee, Kil-Whan; Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea SD / v.46 no.11 / pp.101-109 / 2009
  • Generally, the depth cache and pixel cache of 3D graphics hardware are designed with a write-back scheme for efficient use of memory bandwidth. In 3D graphics caches, write-after-read operations to the same address, or write-only accesses, also occur frequently. When a cache miss is detected, an external memory access for the write-back operation and another access for handling the miss are issued simultaneously. Under frequent cache misses, because memory access bandwidth is limited, external memory access time increases due to the memory bottleneck; as a result, the overall performance of the processor or IP decreases and peak power consumption rises. In this paper, we propose a novel early-write-back cache architecture to solve these problems. The proposed architecture controls when the external memory is accessed to copy back a valid data block, and it improves cache performance with the same hit ratio and the same cache capacity. As a result, it relieves the memory bottleneck by preventing bursts of intensive memory accesses. We evaluated the proposed architecture on the 3D graphics z-cache and pixel cache in an SoC environment in which an ARM11, a 3D graphics accelerator, and various IPs are embedded. The simulation results show performance increases of up to 75% across various simulation vectors.
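
A minimal sketch of the early-write-back idea: instead of paying a write-back and a line fill on the same miss, dirty lines are flushed ahead of time whenever the memory bus is free, so a later miss needs only the fill. The class name and the idle-bus hook are illustrative, not from the paper.

```python
class EarlyWriteBackCache:
    def __init__(self, n_lines):
        self.lines = {}                 # index -> (tag, dirty)
        self.n = n_lines

    def write(self, block):
        self.lines[block % self.n] = (block, True)

    def on_idle_bus(self, memory_write):
        # Early write back: flush one dirty line while the bus is free.
        for idx, (tag, dirty) in self.lines.items():
            if dirty:
                memory_write(tag)
                self.lines[idx] = (tag, False)
                return

    def miss_cost(self, block):
        victim = self.lines.get(block % self.n)
        # Without early write back, a dirty victim doubles the bus traffic.
        return 2 if victim and victim[1] else 1

cache = EarlyWriteBackCache(4)
cache.write(5)
print(cache.miss_cost(1))               # 2: dirty victim occupies index 1
cache.on_idle_bus(lambda tag: None)     # flush while the bus is idle
print(cache.miss_cost(1))               # 1: the victim is now clean
```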

Access Frequency Based Selective Buffer Cache Management Strategy For Multimedia News Data (접근 요청 빈도에 기반한 멀티미디어 뉴스 데이터의 선별적 버퍼 캐쉬 관리 전략)

  • Park, Yong-Un; Seo, Won-Il; Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society / v.6 no.9 / pp.2524-2532 / 1999
  • In this paper, we present a new buffer pool management scheme designed for video-type news objects, in order to build a cost-effective News On Demand storage server that can serve user requests beyond the limits of disk bandwidth. In a News On Demand server, where many user requests for video-type news objects must be serviced within their playback deadlines, the maximum number of concurrent users is limited by the disk bandwidth the server provides. With the proposed buffer cache management scheme, each requested object is checked to see whether it is worth caching, based on its average arrival interval and the current disk traffic density. Only admitted news objects are allowed into the buffer pool, where buffers are allocated per object rather than per block. We evaluated the performance of the proposed caching algorithm through simulation. The results show that, by using this caching scheme to support user requests for real-time news data, 30% more requests are served than when the requests are served by disks alone, with no additional cost.
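
A minimal sketch of the admission test described above: an object is worth caching only if requests for it arrive often enough and the disks are busy enough that a cached copy actually saves bandwidth. The smoothing factor and threshold rule are assumptions; the abstract does not give the exact criterion.

```python
import time

class SelectiveObjectCache:
    def __init__(self, interval_threshold, traffic_threshold):
        self.last_arrival = {}          # object id -> last request time
        self.avg_interval = {}          # object id -> smoothed arrival gap
        self.t_int = interval_threshold
        self.t_traf = traffic_threshold

    def should_cache(self, obj, disk_traffic, now=None):
        now = time.monotonic() if now is None else now
        prev = self.last_arrival.get(obj)
        self.last_arrival[obj] = now
        if prev is not None:
            gap = now - prev
            old = self.avg_interval.get(obj, gap)
            self.avg_interval[obj] = 0.8 * old + 0.2 * gap   # exponential smoothing
        avg = self.avg_interval.get(obj)
        # Admit whole objects only when popular AND disks are the bottleneck.
        return avg is not None and avg < self.t_int and disk_traffic > self.t_traf

cache = SelectiveObjectCache(interval_threshold=10.0, traffic_threshold=0.8)
print(cache.should_cache("news-42", disk_traffic=0.9, now=0.0))  # False: first request
print(cache.should_cache("news-42", disk_traffic=0.9, now=3.0))  # True: hot object, busy disks
```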

A Design of Efficient Cache Management Scheme Using Meta Information in the Web (메타정보를 이용한 웹에서의 효율적인 캐쉬 관리 기법의 설계)

  • 한지영; 윤성대
    • Proceedings of the Korea Multimedia Society Conference / 2003.11b / pp.1039-1042 / 2003
  • The rapid growth of information on the Web causes network bottlenecks, longer user-perceived latency, and overloaded web servers. Web caching is used to alleviate these problems, but unlike traditional caching it must handle documents of varying type and size and serve the requests of many users. This paper therefore proposes CRBM (Client Request Buffer Manager), which maintains the meta information of each served file as a list in the Main Server's cache, in order to use the limited web cache space of a dynamic web environment more efficiently and thereby increase the cache hit ratio.
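
The abstract gives only the outline of CRBM, so the following is a speculative sketch: the Main Server keeps per-file meta information (size and request count here; the actual fields are not specified) and uses it to rank which variable-sized web objects deserve the limited cache space.

```python
meta = {}   # url -> {"size": bytes, "requests": count}

def record_request(url, size):
    entry = meta.setdefault(url, {"size": size, "requests": 0})
    entry["requests"] += 1

def cache_worthiness(url):
    """Popularity per byte: small, frequently requested files rank highest."""
    e = meta[url]
    return e["requests"] / e["size"]

record_request("/index.html", 4_096)
record_request("/index.html", 4_096)
record_request("/video.mp4", 50_000_000)
print(cache_worthiness("/index.html") > cache_worthiness("/video.mp4"))  # True
```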

A Buffer Cache Scheme Considering both DRAM/MRAM Hybrid Main Memory and Flash Memory Storages (DRAM/MRAM 하이브리드 메인 메모리와 플래시메모리 저장 장치를 고려한 버퍼 캐시 기법)

  • Yang, Soo-Hyun; Ryu, Yeon-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.93-96 / 2013
  • As power consumption becomes one of the most important issues in mobile environments, non-volatile memories such as MRAM and flash memory are expected to be widely used in next-generation mobile computers. In this paper, we study an efficient buffer cache scheme for a DRAM/MRAM hybrid main memory. The proposed scheme accounts for MRAM's limited write performance and minimizes the number of erase operations on flash memory storage.
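
One plausible reading of such a scheme, sketched minimally below: since MRAM writes are slow, write-hot buffers go to DRAM and read-mostly buffers to MRAM, and delaying write-back of dirty buffers batches flash writes and so reduces erase operations. The placement rule and threshold are assumptions, not the paper's algorithm.

```python
def place_buffer(write_count, read_count, write_hot_ratio=0.5):
    """Decide which memory region a cached block should live in."""
    total = write_count + read_count
    if total and write_count / total > write_hot_ratio:
        return "DRAM"    # frequent writes: avoid MRAM's slow write path
    return "MRAM"        # read-mostly: MRAM reads are competitive with DRAM

print(place_buffer(write_count=8, read_count=2))   # DRAM
print(place_buffer(write_count=1, read_count=9))   # MRAM
```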

PSS Movement Prediction Algorithm for Seamless Handover (휴대인터넷에서 seamless handover를 위한 단말 이동 예측 알고리즘)

  • Lee, Ho-Jeong; Yun, Chan-Young; Oh, Young-Hwan
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.12 s.354 / pp.53-60 / 2006
  • WiBro handover is based on the IEEE 802.16e hard handover scheme. When a PSS performs a handover, it checks the condition of neighboring cells and the RAS IDs in the neighbor advertisement message, and the serving RAS transmits HO-notification messages to the neighboring RASs. Transmitting HO-notification messages to all neighboring RASs generates heavy signaling traffic, and WiBro handover also causes significant packet loss, so users suffer service degradation. LPM handover supports seamless handover because it buffers data packets during the handover; schemes have therefore been proposed that predict an LPM handover and reserve the target RAS with pre-authentication, but these also generate heavy signaling traffic. In this paper, we propose PSS Movement Prediction to reduce this signaling traffic. The target RAS is chosen from past handover data kept in a history cache. When the serving RAS receives the HO-notification-RSP message from the target RAS, the target RAS informs the crossover node, the crossover node bicasts the data packets, and once the handover completes, the target RAS forwards the buffered packets. The scheme thus reduces signaling traffic while increasing the handover success rate: when the history cache predicts correctly, total traffic decreases by about 48%; when it fails, total traffic increases by about 6%.
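
A minimal sketch of the history-cache idea above: the serving RAS records the targets of past handovers per terminal and predicts the next target RAS from that history, so resources can be prepared at one RAS instead of notifying all neighbors. The data layout and the most-frequent-target rule are assumptions.

```python
from collections import defaultdict, Counter

history = defaultdict(Counter)   # terminal id -> Counter of past target RAS ids

def record_handover(pss, target_ras):
    history[pss][target_ras] += 1

def predict_target(pss):
    """Most frequent past target; None when there is no history (prediction fails)."""
    return history[pss].most_common(1)[0][0] if history[pss] else None

record_handover("pss-1", "ras-B")
record_handover("pss-1", "ras-B")
record_handover("pss-1", "ras-C")
print(predict_target("pss-1"))   # ras-B
```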

Improving the Availability of Scalable on-demand Streams by Dynamic Buffering on P2P Networks

  • Lin, Chow-Sing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.4 / pp.491-508 / 2010
  • In peer-to-peer (P2P) on-demand streaming networks, the alleviation of server load depends on reciprocal stream sharing among peers. In general, on-demand video services enable clients to watch videos from beginning to end. As long as clients are able to buffer the initial part of the video they are watching, the on-demand service can provide that video to the next clients who request it. The key challenge, therefore, is how to keep the initial part of a video in a peer's buffer for as long as possible and thus maximize the availability of the video for stream relay. In addition, to address the issues of delivering data over lossy networks and providing scalable quality of service to clients, the adoption of multiple description coding (MDC) has been shown to be a feasible solution by a large body of research. In this paper, we propose a novel caching scheme for P2P on-demand streaming, called Dynamic Buffering. The proposed scheme relies on the features of MDC to gradually reduce the number of cached descriptions held in a client's buffer once the buffer is full. Preserving as many initial parts of descriptions in the buffer as possible, instead of losing them all at once, effectively extends a peer's service time. This study also proposes a description distribution balancing scheme to further improve the use of resources. Simulation experiments show that Dynamic Buffering makes efficient use of cache space, reduces server bandwidth consumption, and increases the number of peers that can be served.
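
A minimal sketch of Dynamic Buffering's core rule as described above: when a peer's buffer fills, it frees space by dropping whole MDC descriptions one at a time rather than evicting the entire cached initial part, so the remaining descriptions still allow lower-quality playback. The data layout and the drop order are illustrative assumptions.

```python
def shed_descriptions(cached_descriptions, blocks_needed):
    """cached_descriptions: {description_id: list of buffered blocks}.
    Drop whole descriptions (highest id first, an arbitrary order here)
    until enough space is freed, always keeping at least one description
    so the peer remains able to relay the stream under MDC."""
    freed = 0
    for desc in sorted(cached_descriptions, reverse=True):
        if freed >= blocks_needed or len(cached_descriptions) == 1:
            break
        freed += len(cached_descriptions[desc])
        del cached_descriptions[desc]
    return freed

buf = {0: list(range(100)), 1: list(range(100)), 2: list(range(100))}
print(shed_descriptions(buf, blocks_needed=150))   # frees descriptions 2 and 1
print(sorted(buf))                                 # description 0 survives
```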