• Title/Summary/Keyword: Cache Policy

The Cache Replacement Policy for Collaborative Proxy Servers in Mobile Environments (모바일 환경에서의 협력작업을 하는 프록시 서버를 위한 캐쉬 교체정책)

  • 장해권;한종현;정흥기;박승규
    • Proceedings of the Korean Information Science Society Conference / 2003.04a / pp.85-87 / 2003
  • With the development of wireless environments and mobile devices, and the growing ubiquity of mobile terminals, multimedia services once provided over wired networks are moving to wireless environments. Existing web cache policies have already proven their performance, but because they consider only popularity and network conditions, they are not well suited to wireless environments. This paper proposes M-LRU, a cache replacement policy for wireless environments that takes into account the mobility of mobile hosts and the characteristics of media streams. Simulations comparing the proposed M-LRU with the conventional LRU policy show a performance improvement of 8-9%.

A Media Cache Replacement Policy based on Weighted Window (가중치 윈도우 기반의 미디어 캐쉬 교체 정책)

  • 오재학;차호정
    • Proceedings of the Korean Information Science Society Conference / 2002.10c / pp.409-411 / 2002
  • This paper proposes a weight-based cache replacement policy for an efficient caching structure in a streaming-media caching server. By applying quantitative factors such as reference count, reference volume, and reference time together with the periodicity of user requests, and by assigning higher weights to recent reference trends, the policy adapts quickly to changing content preferences. Performance was analyzed through simulation, and the proposed policy showed improved results compared with the LRU, LFU, and SEG cache policies.
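
The abstract does not give the exact weighting formula, so the following is only a minimal sketch of one common way to realize "higher weight on recent references": an exponentially decayed per-reference score that blends frequency and recency. The class name, decay constant, and window length are illustrative assumptions, not the paper's design.

```python
class WeightedWindowCache:
    """Sketch: eviction score that favors recently referenced media objects.

    score(obj) = sum over past references of decay**(now - t_ref), so recent
    references contribute more weight than old ones. The decay constant and
    window length are illustrative assumptions.
    """

    def __init__(self, capacity, decay=0.9, window=100):
        self.capacity = capacity
        self.decay = decay        # per-tick decay of a reference's weight
        self.window = window      # how many past references to remember
        self.refs = {}            # object id -> list of reference ticks
        self.clock = 0

    def _score(self, key):
        # Recent references (small age) dominate the score.
        return sum(self.decay ** (self.clock - t) for t in self.refs[key])

    def access(self, key):
        self.clock += 1
        if key not in self.refs and len(self.refs) >= self.capacity:
            victim = min(self.refs, key=self._score)  # lowest weighted score
            del self.refs[victim]
        self.refs.setdefault(key, []).append(self.clock)
        self.refs[key] = self.refs[key][-self.window:]  # bounded history
```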

A Study on Improvement of Buffer Cache Performance for File I/O in Deep Learning (딥러닝의 파일 입출력을 위한 버퍼캐시 성능 개선 연구)

  • Jeongha Lee;Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.2 / pp.93-98 / 2024
  • With rapid advances in AI (artificial intelligence) and high-performance computing technologies, deep learning is being used in various fields. Deep learning trains by reading a large amount of data in random order and repeating this process over many epochs, so a large number of files are referenced randomly and repeatedly. This access pattern differs from traditional workloads, which exhibit temporal locality. To cope with the caching difficulties this causes, we propose a new sampling method that reduces the randomness of dataset reading and operates adaptively on top of existing buffer cache algorithms. We show that the proposed policy reduces the buffer cache miss rate by 16% on average, and by up to 33%, compared with the existing method, and improves execution time by up to 24%.
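
The abstract does not specify the sampling method. One common way to "reduce the randomness of dataset reading" while keeping training stochastic is to shuffle at a coarser granularity, e.g. shuffling fixed-size chunks of consecutive samples instead of individual samples. The sketch below illustrates only that general idea; the chunk size and function name are assumptions, not the paper's design.

```python
import random

def chunked_shuffle(indices, chunk_size=64, seed=None):
    """Shuffle chunks of consecutive sample indices instead of single samples.

    Within a chunk the original (cache-friendly) order is kept, so files that
    sit near each other are read together; across chunks the order is random,
    preserving most of the stochasticity that SGD-style training relies on.
    """
    rng = random.Random(seed)
    chunks = [indices[i:i + chunk_size] for i in range(0, len(indices), chunk_size)]
    rng.shuffle(chunks)
    return [i for chunk in chunks for i in chunk]

# Example: an epoch over 10 samples with chunk_size=3 keeps runs like
# [3, 4, 5] together while still randomizing the chunk order.
print(chunked_shuffle(list(range(10)), chunk_size=3, seed=0))
```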

A Prefetch Algorithm for a Mobile Host using Association Rules (연관 규칙을 이용한 이동 호스트의 선반입 알고리즘)

  • 김호숙;용환승
    • Journal of KIISE:Databases / v.31 no.2 / pp.163-173 / 2004
  • Recently, location-based services have become very popular in mobile environments. In this paper, we propose a new association-based prefetch algorithm (called STAP) that efficiently supports information services over large spatial databases in mobile environments. We apply spatial-temporal relations that are meaningful for location-based queries in mobile environments. Moreover, STAP considers the user's mobility and the weight of spatial data. The relationship between services is an aspect not considered by previous cache policies, making STAP the first prefetch algorithm to exploit spatial-temporal relations and giving cache policy a new dimension. We evaluate the performance of STAP and demonstrate its efficiency.
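
The abstract gives only the idea of prefetching driven by association rules. A minimal sketch of that general pattern follows, assuming pre-mined rules of the form "A → B with confidence c"; the rule structure, threshold, and names are illustrative, not STAP's actual design.

```python
from collections import defaultdict

class AssociationPrefetcher:
    """Sketch: prefetch items associated with the one just requested."""

    def __init__(self, min_confidence=0.5):
        self.min_confidence = min_confidence
        self.rules = defaultdict(dict)  # antecedent -> {consequent: confidence}

    def add_rule(self, antecedent, consequent, confidence):
        # Rules would come from offline mining of query logs.
        self.rules[antecedent][consequent] = confidence

    def prefetch_candidates(self, requested_item):
        # Return consequents whose rule confidence clears the threshold,
        # strongest association first.
        cands = self.rules.get(requested_item, {})
        return sorted((i for i, c in cands.items() if c >= self.min_confidence),
                      key=lambda i: cands[i], reverse=True)

pf = AssociationPrefetcher()
pf.add_rule("restaurant_list", "map_tile_42", 0.8)
pf.add_rule("restaurant_list", "reviews", 0.4)
print(pf.prefetch_candidates("restaurant_list"))  # ['map_tile_42']
```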

Enhancing Location Privacy through P2P Network and Caching in Anonymizer

  • Liu, Peiqian;Xie, Shangchen;Shen, Zihao;Wang, Hui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1653-1670 / 2022
  • The fear that location privacy may be compromised greatly hinders the development of location-based services. Accordingly, several schemes based on distributed peer-to-peer architectures have been proposed for location privacy protection. Most of them assume that mobile terminals trust one another, which does not match realistic scenarios, and they cannot specify the required level of location privacy protection. This paper therefore proposes a scheme combining location attribute-based security authentication with a private data-sharing group, so that terminals in the peer-to-peer network trust each other while a trusted-but-curious mobile terminal still cannot access the initiator's query request. A new identifier is designed to let mobile terminals customize the protection strength. In addition, a caching mechanism that accounts for cache capacity is introduced, and a cache replacement policy based on deep reinforcement learning is proposed to reduce communication with the location-based service server and thereby protect location privacy. Experiments show the effectiveness and efficiency of the proposed scheme.
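
The abstract does not describe the reinforcement-learning formulation. As a rough illustration of learning-based eviction, the sketch below uses a tabular value estimate over coarse (recency, frequency) feature buckets as a stand-in for the paper's deep-RL policy; the features, reward, and hyperparameters are all assumptions.

```python
import random
from collections import defaultdict

class RLCacheSketch:
    """Sketch: learn which kinds of cached items are safe to evict.

    q[bucket] estimates the (negative) cost of evicting an item whose
    (recency, frequency) features fall in that bucket; it is decreased
    whenever an evicted item is requested again, since that eviction caused
    a miss. A real system would expire stale entries in `evicted`.
    """

    def __init__(self, capacity, alpha=0.1, epsilon=0.1):
        self.capacity, self.alpha, self.epsilon = capacity, alpha, epsilon
        self.q = defaultdict(float)   # bucket -> learned eviction value
        self.cache = {}               # key -> (last_access_tick, frequency)
        self.evicted = {}             # key -> bucket it was evicted from
        self.clock = 0

    def _bucket(self, key):
        last, freq = self.cache[key]
        return (min((self.clock - last) // 10, 9), min(freq, 9))

    def access(self, key):
        self.clock += 1
        if key in self.evicted:       # evicting this item proved costly
            self.q[self.evicted.pop(key)] -= self.alpha
        if key in self.cache:
            last, freq = self.cache[key]
            self.cache[key] = (self.clock, freq + 1)
            return True               # hit
        if len(self.cache) >= self.capacity:
            if random.random() < self.epsilon:        # explore
                victim = random.choice(list(self.cache))
            else:                                     # exploit: evict from the
                victim = max(self.cache,              # least costly bucket
                             key=lambda k: self.q[self._bucket(k)])
            self.evicted[victim] = self._bucket(victim)
            del self.cache[victim]
        self.cache[key] = (self.clock, 1)
        return False                  # miss
```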

Segment-based Cache Replacement Policy in Transcoding Proxy (트랜스코딩 프록시에서 세그먼트 기반 캐쉬 교체 정책)

  • Park, Yoo-Hyun;Kim, Hag-Young;Kim, Kyong-Sok
    • The KIPS Transactions:PartA / v.15A no.1 / pp.53-60 / 2008
  • Streaming media accounts for a significant amount of today's Internet traffic. Like traditional web objects, rich media objects can benefit from proxy caching, but caching streaming media is more challenging than caching simple web objects because of its huge size and high bandwidth. Moreover, to support the varied bandwidth requirements of heterogeneous ubiquitous devices, a transcoding proxy is usually needed, not only to adapt multimedia streams to each client by transcoding but also to cache them for later use. A traditional proxy considers only a single version of an object when deciding whether to cache it, whereas a transcoding proxy must evaluate the aggregate effect of caching multiple versions of the same object to determine an optimal set of cached objects. Recent research on multimedia caching often stores the initial parts of videos on the proxy to reduce playback latency and achieve better performance, and much of it manages content in segments for efficient storage management. In this paper, we define the 9 events of a transcoding proxy using 4 atomic events; according to these events, the transcoding proxy can decide its next actions. We then propose a segment-based caching policy for the transcoding proxy system. Performance results show that the proposed policy achieves a low delayed start time, a high byte-hit ratio, and less transcoded data.
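
The abstract only outlines the policy. A minimal sketch of one of its stated ingredients, giving a video's initial segments higher caching priority to cut playback start latency, follows; the segment granularity, the priority rule, and all names are illustrative assumptions, not the paper's event model.

```python
class SegmentCacheSketch:
    """Sketch: segment-based cache that protects each video's initial segments.

    Eviction prefers later segments of colder videos; the first `protected`
    segments of a video are evicted last, which keeps startup latency low
    for cached titles.
    """

    def __init__(self, capacity_segments, protected=2):
        self.capacity = capacity_segments
        self.protected = protected
        self.store = {}   # (video_id, seg_no) -> popularity count

    def _evict_one(self):
        def priority(item):
            (video, seg_no), pops = item
            # Later segments and colder videos sort first (evicted first);
            # initial segments sort last because of the large bonus.
            bonus = 1_000_000 if seg_no < self.protected else 0
            return (bonus + pops, -seg_no)
        victim = min(self.store.items(), key=priority)[0]
        del self.store[victim]

    def access(self, video_id, seg_no):
        key = (video_id, seg_no)
        hit = key in self.store
        if not hit and len(self.store) >= self.capacity:
            self._evict_one()
        self.store[key] = self.store.get(key, 0) + 1
        return hit
```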

A Processor Allocation Policy using Program Characteristics on Shared Bus (공유 버스상에서 프로그램 특성을 사용한 프로세서 할당 정책)

  • Jeong, In-Beom;Lee, Jun-Won
    • Journal of KIISE:Computer Systems and Theory / v.26 no.9 / pp.1073-1082 / 1999
  • In this paper, an adaptive processor allocation policy is suggested to make effective use of the processors in a system. To enhance parallelism, the number of processors used in parallel computing may be increased. However, increasing the number of processors changes the grain size of the parallel program and therefore affects cache performance. In particular, when a shared bus is employed, increasing the number of processors can cause significant contention for the shared bus, so the added computing power is offset by the time processors spend waiting for the bus. The adaptive processor allocation policy gathers information about the distribution of processors waiting on the shared bus over sampling periods during a program's run, and changes the number of working processors accordingly. Our simulation results show that the adaptive processor allocation policy finds a near-optimal number of processors based on the bus traffic characteristics of a program. Thus it contributes to effective system utilization, even though it performs slightly below the best fixed processor count.
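
The abstract states the feedback loop (measure bus-waiting processors over a period, then adjust the allocation) without giving the decision rule. A minimal sketch of one plausible rule follows; the watermark thresholds and names are assumptions, not the paper's policy.

```python
def adapt_processor_count(current_procs, waiting_samples, max_procs,
                          high_water=0.5, low_water=0.1):
    """Sketch: adjust the processor count from bus-waiting statistics.

    `waiting_samples` holds, for each sample tick in the last period, how many
    processors were stalled waiting for the shared bus. If processors are
    mostly stalled, shrink the allocation; if the bus has headroom, grow it.
    """
    avg_waiting_frac = sum(waiting_samples) / (len(waiting_samples) * current_procs)
    if avg_waiting_frac > high_water and current_procs > 1:
        return current_procs - 1   # bus saturated: back off
    if avg_waiting_frac < low_water and current_procs < max_procs:
        return current_procs + 1   # bus mostly idle: add a processor
    return current_procs

# e.g. 8 processors, with on average 5 of them waiting on the bus each tick:
print(adapt_processor_count(8, [5, 6, 5, 4], max_procs=16))  # -> 7
```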

Scalar First Replacement Strategy for Reference Prediction Table Used in Prefetching Streaming Data (스트리밍 데이터의 선인출에 사용되는 참조예측표의 스칼라 우선 교체 전략)

  • Lim, Chul-hoo;Chon, Young-Suk;Kim, Suk-il;Jeon, Joong-nam
    • The KIPS Transactions:PartA / v.11A no.3 / pp.163-172 / 2004
  • Multimedia applications tend to access their data in a streaming pattern at regular intervals. This characteristic can be exploited to prefetch multimedia data into cache memory and thereby shorten execution time. The reference-prediction prefetch algorithm predicts the memory address likely to be used next based on the history of memory references stored in the reference prediction table. This paper proposes a strategy for managing the reference prediction table, which holds all data-reference instructions for both scalar and streaming data. We observed that scalar reference instructions do not contribute to the prefetching algorithm, so when an entry in the reference prediction table must be replaced, the proposed algorithm preferentially selects a scalar reference instruction over a stream reference instruction. Stream reference instructions therefore stay in the table longer than under the FIFO replacement policy, which ultimately improves prefetching performance.
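
The replacement rule itself is stated plainly in the abstract: evict scalar entries before stream entries, otherwise fall back to FIFO order. A minimal sketch of that rule, with the table layout and field names as illustrative assumptions:

```python
from collections import OrderedDict

class ScalarFirstRPT:
    """Sketch: reference prediction table that evicts scalar entries first.

    Entries are kept in insertion (FIFO) order; on replacement the oldest
    *scalar* entry is evicted if one exists, otherwise the oldest entry
    overall, so stream entries survive longer than under plain FIFO.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()   # instruction pc -> {'is_stream': bool}

    def insert(self, pc, is_stream):
        if pc in self.table:
            self.table[pc]['is_stream'] = is_stream
            return
        if len(self.table) >= self.capacity:
            # Oldest scalar entry first; fall back to plain FIFO.
            victim = next((p for p, e in self.table.items()
                           if not e['is_stream']),
                          next(iter(self.table)))
            del self.table[victim]
        self.table[pc] = {'is_stream': is_stream}
```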

An Efficient Caching Strategy in Data Broadcasting (데이터 방송 환경에서의 효율적인 캐슁 정책)

  • Kim, Su-Yeon;Choe, Yang-Hui
    • Journal of KIISE:Computer Systems and Theory / v.26 no.12 / pp.1476-1484 / 1999
  • Recently, many television broadcasters have tried to disseminate digital multimedia data in addition to the traditional content (the audio-visual stream). Most previous work on data broadcasting assumes the dissemination of a fixed set of items, so its results are not suitable when broadcast content changes frequently. Broadcast data must be cached by the client system to provide a reasonable response time for user requests. For situations where data accesses are unlikely to repeat and user access probabilities are hard to predict, we propose a cache management scheme for the client system: received data are brought into the cache unconditionally, and when replacement is needed, the page whose next broadcast instance is nearest in time is evicted. The proposed policy significantly reduces expected response time by minimizing the expected cache-miss penalty, and it can be applied without difficulty in environments where broadcast providers use different scheduling algorithms.
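
The victim-selection rule is fully stated in the abstract: evict the page whose next broadcast is nearest, since losing it costs the least waiting time. A minimal sketch, assuming the next-broadcast times are known from the broadcast schedule:

```python
def choose_victim(cache, next_broadcast, now):
    """Sketch: evict the cached page that will be rebroadcast soonest.

    A page about to reappear on the broadcast channel is cheap to lose: if a
    user requests it after eviction, the wait until its next broadcast is
    short. `next_broadcast` maps page -> absolute time of its next broadcast.
    """
    return min(cache, key=lambda page: next_broadcast[page] - now)

cache = {"news", "weather", "sports"}
next_broadcast = {"news": 105, "weather": 180, "sports": 120}
print(choose_victim(cache, next_broadcast, now=100))  # -> 'news' (5s away)
```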

A novel page replacement policy associated with ACT-R inspired by human memory retrieval process (인간 기억 인출 과정을 응용하여 설계된 ACT-R 기반 페이지 교체 정책)

  • Roh, Hong-Chan;Park, Sang-Hyun
    • The KIPS Transactions:PartD / v.18D no.1 / pp.1-8 / 2011
  • The cache structure, designed to assure fast access to frequently accessed data, resides at various levels of the computer system hierarchy. Many studies of this structure have been conducted, and many page-replacement algorithms have been proposed, most of them heuristics built on criteria such as how recently and how often pages are accessed. This data-retrieval process in computer systems is analogous to human memory retrieval, which likewise depends on the frequency and recency of past retrieval events. A recent study of human memory cognition revealed that the probability of retrieval success and the retrieval latency correlate strongly with the frequency and recency of previous retrieval events. In this paper, we propose a novel page-replacement algorithm that utilizes this knowledge from recent research on human memory cognition. Through a set of experiments, we demonstrate that our method achieves a better hit ratio than the LRFU algorithm, which has been regarded as the best-performing page-replacement algorithm for DBMS caches.
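
ACT-R's standard base-level activation equation is B = ln(Σ_j t_j^(-d)), where t_j is the time since the j-th access and d is a decay parameter (typically 0.5). The sketch below uses that well-known formula as an eviction criterion; how the paper actually integrates ACT-R into its cache may differ.

```python
import math

def actr_activation(access_times, now, d=0.5):
    """ACT-R base-level activation: B = ln(sum_j (now - t_j)^(-d)).

    Each past access at time t_j contributes (now - t_j)^(-d), so frequent
    and recent accesses both raise activation; d controls how fast old
    accesses decay.
    """
    return math.log(sum((now - t) ** (-d) for t in access_times if t < now))

def choose_victim(history, now):
    # Evict the page with the lowest activation, i.e. the one the memory
    # model predicts is least likely to be retrieved again.
    return min(history, key=lambda p: actr_activation(history[p], now))

history = {"pageA": [1, 50, 90], "pageB": [80], "pageC": [10, 20]}
print(choose_victim(history, now=100))  # -> 'pageC': old, stale accesses
```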