• Title/Summary/Keyword: Cache management

A Study on Large Data File Management Using Buffer Cache and Virtual Memory File (가상메모리 화일과 버퍼캐쉬를 이용한 대형 데이타 화일의 처리에 관한 연구)

  • Kim, Byeong-Chul;Shin, Byeong-Seok;Hwang, Hee-Yeung
    • Proceedings of the KIEE Conference / 1991.11a / pp.185-188 / 1991
  • In this paper we have designed and implemented a method of using extended memory and hard disk space as a data buffer for application programs, to allow handling of large data files in the DOS environment. We use a part of conventional DOS memory as a buffer cache, which allows the application program to use extended memory and hard disks transparently. Using the buffer cache also yields some speed improvement for the application program. We have also implemented a number of functions to simplify the pointer operations used by application programs.
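
  • Illustrative aside (not from the paper): a minimal read-through buffer cache in Python, shown only to picture the "small fast buffer in front of a larger, slower store" idea described above. Block size, capacity, and the plain file backing store are assumptions, not the paper's DOS implementation.

    ```python
    # Sketch: a tiny LRU buffer cache fronting a large backing file, so callers
    # read through one interface regardless of where the data currently resides.
    from collections import OrderedDict

    BLOCK_SIZE = 4096      # bytes per cached block (assumption)
    CACHE_BLOCKS = 16      # blocks kept in the small in-memory cache (assumption)

    class BufferedFile:
        def __init__(self, path):
            self.f = open(path, "rb")
            self.cache = OrderedDict()          # block_no -> bytes, in LRU order

        def _block(self, block_no):
            if block_no in self.cache:          # hit: refresh recency
                self.cache.move_to_end(block_no)
                return self.cache[block_no]
            self.f.seek(block_no * BLOCK_SIZE)  # miss: fetch from the backing store
            data = self.f.read(BLOCK_SIZE)
            self.cache[block_no] = data
            if len(self.cache) > CACHE_BLOCKS:  # evict the least recently used block
                self.cache.popitem(last=False)
            return data

        def read(self, offset, length):
            """Read `length` bytes at `offset`, transparently via the cache."""
            out = bytearray()
            while length > 0:
                block_no, in_block = divmod(offset, BLOCK_SIZE)
                chunk = self._block(block_no)[in_block:in_block + length]
                if not chunk:                   # past end of file
                    break
                out += chunk
                offset += len(chunk)
                length -= len(chunk)
            return bytes(out)
    ```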

Design and evaluation of a fuzzy cooperative caching scheme for MANETs

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society / v.21 no.3 / pp.605-619 / 2010
  • Caching of frequently accessed data in a multi-hop ad hoc environment is a technique that can improve data access performance and availability. Cooperative caching, which allows sharing and coordination of cached data among several clients, can further enhance the potential of caching techniques. In this paper, we propose a fuzzy cooperative caching scheme for mobile ad hoc networks. The cache management of the proposed scheme not only adaptively uses CacheData or CachePath based on data similarity and data utility, but also uses a replacement manager based on data profit. The proposed scheme also uses a prefetch manager: when the TTL of cached data expires, the prefetch manager evaluates the popularity index of the data; if the popularity index is larger than a threshold, the data is prefetched, otherwise its space is released. The performance of the proposed scheme is evaluated analytically and compared to that of other cooperative caching schemes.
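
  • Illustrative aside (not from the paper): a short Python sketch of the prefetch-manager rule described above. The popularity index used here (accesses per unit time) and the threshold value are assumptions standing in for the paper's fuzzy measures.

    ```python
    # Sketch: on TTL expiry, prefetch the data again if its popularity index
    # exceeds a threshold, otherwise release its cache space.
    from dataclasses import dataclass

    POPULARITY_THRESHOLD = 0.5   # assumed threshold

    @dataclass
    class CachedItem:
        key: str
        access_count: int
        insert_time: float

    def on_ttl_expired(item, fetch_fn, cache, now):
        """Prefetch-manager rule: refresh popular items on TTL expiry, evict the rest."""
        popularity = item.access_count / max(now - item.insert_time, 1.0)
        if popularity > POPULARITY_THRESHOLD:
            cache[item.key] = fetch_fn(item.key)   # popular enough: prefetch a fresh copy
        else:
            cache.pop(item.key, None)              # unpopular: release the space
    ```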

An Efficient Flash Translation Layer Considering Temporal and Spacial Localities for NAND Flash Memory Storage Systems

  • Kim, Yong-Seok
    • Journal of the Korea Society of Computer and Information / v.22 no.12 / pp.9-15 / 2017
  • This paper presents an efficient FTL for NAND flash based SSDs. The address translation information of page-mapping based FTLs is stored on flash memory pages, and an address translation cache keeps frequently accessed entries. The proposed FTL reduces response time by considering both the temporal and spatial localities of page access patterns in translation cache management. The localities of several well-known traces are evaluated to determine a cache structure that achieves a high hit ratio. A simulation with these traces shows that the presented FTL reduces response time compared to previous FTLs and can be used with relatively small caches.
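
  • Illustrative aside (not from the paper): a rough Python sketch of a translation cache that exploits both localities. Loading a whole group of consecutive mapping entries on a miss serves spatial locality, and LRU ordering serves temporal locality; the group size and capacity are assumptions, not the paper's parameters.

    ```python
    # Sketch: an LRU cache of address-translation entries, filled one group of
    # consecutive logical pages at a time.
    from collections import OrderedDict

    GROUP = 64       # consecutive mapping entries loaded together (assumption)
    CAPACITY = 32    # number of groups kept cached (assumption)

    class TranslationCache:
        def __init__(self, read_translation_page):
            # read_translation_page(group_no) fetches GROUP entries from flash
            self.read_translation_page = read_translation_page
            self.groups = OrderedDict()    # group_no -> list of physical page numbers

        def lookup(self, logical_page):
            group_no, offset = divmod(logical_page, GROUP)
            if group_no not in self.groups:                 # miss: one flash read fills the group
                self.groups[group_no] = self.read_translation_page(group_no)
                if len(self.groups) > CAPACITY:             # evict the least recently used group
                    self.groups.popitem(last=False)
            else:
                self.groups.move_to_end(group_no)           # temporal locality: keep hot groups
            return self.groups[group_no][offset]
    ```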

Regular File Access of Embedded System Using Flash Memory as a Storage (플래시 메모리를 저장매체로 사용하는 임베디드 시스템에서의 정규파일 접근)

  • 이은주;박현주
    • Journal of Information Technology Applications and Management / v.11 no.1 / pp.189-200 / 2004
  • Recently, flash memory, which is small and low-powered, is widely used as the storage of embedded systems, because an embedded system requires portability and fast response. To resolve the difference in access time between storage and RAM, Linux uses disk caching, which copies a part of a file on disk into RAM; embedded systems are no exception. However, the read access time of flash memory is similar to that of RAM, so when a process on an embedded system reads data, accessing cached data in RAM takes about as long as accessing the data directly on flash memory. On an embedded system with limited memory, a disk cache wastes time and memory space on its management and does not reflect the characteristics of flash memory. This paper proposes a regular-file access scheme that limits the use of the page cache in a flash-based file system and reflects the characteristics of flash memory. The proposed algorithm minimizes power consumption because the number of RAM accesses is reduced, and it does not waste memory space because it accesses flash memory directly. Therefore, a performance improvement is expected in systems applying the proposed algorithm.
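
  • Illustrative aside (not from the paper): a minimal Python contrast between the conventional cached read path and the direct read path argued for above. The helper names (`flash_read`, `inode`) are hypothetical.

    ```python
    # Sketch: the conventional path copies every flash page into a RAM page cache,
    # while the direct path reads the device itself, skipping the RAM copy and the
    # cache bookkeeping that the abstract argues are wasted on flash storage.
    def read_via_page_cache(page_cache, flash_read, inode, page_no):
        if (inode, page_no) not in page_cache:        # miss: copy the flash page into RAM
            page_cache[(inode, page_no)] = flash_read(inode, page_no)
        return page_cache[(inode, page_no)]

    def read_direct(flash_read, inode, page_no):
        return flash_read(inode, page_no)             # no RAM copy, no cache management
    ```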

Proxy-based Caching Optimization for Mobile Ad Hoc Streaming Services (모바일 애드 혹 스트리밍 서비스를 위한 프록시 기반 캐싱 최적화)

  • Lee, Chong-Deuk
    • Journal of Digital Convergence / v.10 no.4 / pp.207-215 / 2012
  • This paper proposes a proxy-based caching optimization scheme for improving streaming media services in wireless mobile ad hoc networks. The proposed scheme utilizes a proxy for data packet transmission between the media server and nodes in WLANs, and the proxy is located near the wireless access point. For caching optimization, this paper proposes the NFCO (non-full cache optimization) and CFO (cache full optimization) schemes, which optimize caching performance when streaming is performed through the proxy. The performance of the proposed scheme is compared with a server-based scheme and a rate-distortion scheme. Simulation results show that the proposed scheme performs better than the existing server-only scheme and the rate-distortion scheme.
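
  • Illustrative aside: the abstract does not detail NFCO and CFO, so the following Python outline is only an assumed picture of how a proxy might branch on cache occupancy, admitting segments freely while space remains and running a replacement decision once the cache is full.

    ```python
    # Assumed outline only: admit a streaming segment while the cache has room
    # (not-full path); once full, evict lowest-score segments if that is worthwhile.
    def cache_segment(cache, capacity, segment_id, size, score):
        """cache maps segment_id -> (size, score); returns True if the segment is cached."""
        def used():
            return sum(s for s, _ in cache.values())
        while used() + size > capacity and cache:            # cache-full path
            victim = min(cache, key=lambda k: cache[k][1])
            if cache[victim][1] >= score:
                return False                                  # nothing worth evicting for this segment
            del cache[victim]
        if used() + size <= capacity:
            cache[segment_id] = (size, score)                 # cache-not-full path: just admit
            return True
        return False
    ```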

A Design and Performance Analysis of Web Cache Replacement Policy Based on the Size Heterogeneity of the Web Object (웹 객체 크기 이질성 기반의 웹 캐시 대체 기법의 설계와 성능 평가)

  • Na Yun Ji;Ko Il Seok;Cho Dong Uk
    • The KIPS Transactions: Part C / v.12C no.3 s.99 / pp.443-448 / 2005
  • Efficient use of the web cache is becoming an important factor in deciding system management efficiency in web-based systems. Cache performance depends heavily on the replacement algorithm, which dynamically selects a suitable subset of objects for caching in a finite cache space. In this paper, a web caching algorithm is proposed for the efficient operation of web-based systems. The algorithm is designed around a divided scope that considers the size reference characteristics and size heterogeneity of web objects. In our experiments, the algorithm is compared with conventional replacement algorithms, and we confirmed a performance improvement of more than 15%.
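
  • Illustrative aside (not from the paper): a minimal Python sketch of a size-divided replacement scope. The class boundaries, per-class capacities, and the within-class LRU victim rule are assumptions; the point is only that large and small objects no longer compete for the same slots.

    ```python
    # Sketch: the cache is divided by object-size class, and replacement happens
    # only within the class of the incoming object.
    from collections import OrderedDict

    SIZE_CLASSES = [(0, 10_000), (10_000, 100_000), (100_000, float("inf"))]  # assumed boundaries (bytes)
    CLASS_CAPACITY = [50, 20, 5]                                              # objects per class (assumed)

    classes = [OrderedDict() for _ in SIZE_CLASSES]   # each class keeps its own LRU list

    def size_class(size):
        return next(i for i, (lo, hi) in enumerate(SIZE_CLASSES) if lo <= size < hi)

    def cache_object(obj_id, size):
        c = size_class(size)
        bucket = classes[c]
        if obj_id in bucket:
            bucket.move_to_end(obj_id)                # hit: refresh recency
            return
        if len(bucket) >= CLASS_CAPACITY[c]:
            bucket.popitem(last=False)                # replace only within the same size class
        bucket[obj_id] = size
    ```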

Improving QoS using Cellular-IP/PRC in Hospital Wireless Network

  • Kim, Sung-Hong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.1 no.2 / pp.120-126 / 2006
  • In this paper, we propose a scheme for improving QoS in a hospital wireless network using Cellular-IP/PRC (Paging Route Cache), which integrates the Paging Cache and Route Cache of Cellular-IP. Although Cellular-IP/PRC technology was devised for mobile Internet communication, it is vulnerable in environments with frequent handoffs; a handoff state machine using differentiated handoff improves quality of service in Cellular-IP/PRC, and the suggested algorithm shows better performance than the existing technology in the wireless mobile Internet environment. Assuming that the home base station has ample capacity for mobile nodes and that most mobile nodes from neighboring cells are admitted at the home base station at once, a new call admission method at the base station is proposed, based on an estimate of the mobile node's transmit power, so that speech quality is preserved as received interference increases. The PC (Paging Cache) and RC (Route Cache) used to manage paging and routing in the wireless Internet network are integrated and managed as a single PRC across all nodes, and a handoff state machine is added to the mobile node so that its handoff and roaming states are managed efficiently and connectivity is maintained at each node. Factors influencing traffic in the system environment are analyzed to forecast the utilization and imbalance of each link, and the call blocking probability, call dropping probability, GoS (Grade of Service), and cell capacity efficiency are forecast as QoS measures. The proposed algorithm, which judges from the total transmit and receive power on the downlink whether capacity is limited and accordingly accepts or blocks a call, shows an improvement in QoS performance.
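
  • Illustrative aside: the abstract evaluates call blocking probability and GoS among its QoS measures. As a standard point of reference only (this is not the paper's admission algorithm or traffic model), the Erlang-B formula relates offered load and channel count to blocking probability.

    ```python
    # Standard Erlang-B blocking probability, computed with the usual stable recursion.
    def erlang_b(offered_erlangs, channels):
        """Probability that a new call is blocked on `channels` circuits at the given load."""
        b = 1.0
        for n in range(1, channels + 1):
            b = (offered_erlangs * b) / (n + offered_erlangs * b)
        return b

    # e.g. erlang_b(10.0, 15) ~= 0.0365, i.e. roughly 3.7% of calls blocked
    ```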

A Comparison Study on Data Caching Policies of CCN (콘텐츠 중심 네트워킹의 데이터 캐시 정책 비교 연구)

  • Kim, Dae-Youb
    • Journal of Digital Convergence / v.15 no.2 / pp.327-334 / 2017
  • To enhance network efficiency, various applications and services such as CDN and P2P try to utilize content that has previously been cached somewhere. Content-centric networking (CCN) also utilizes a data caching functionality. However, unlike CDN/P2P, CCN implements this function on network nodes, so any intermediate node can directly respond to request messages for cached data. Hence, it is essential to decide which content is cached as well as which nodes cache the transmitted content. Basically, CCN proposes that every node on the path from the content publisher of a transmitted object to the requester caches the object. However, such an approach is inefficient considering the utilization of cached objects as well as the storage overhead of each node, so various caching mechanisms have been proposed to enhance the storage efficiency of a node. In this paper, we analyze the performance of such mechanisms and compare their characteristics. We also analyze content utilization patterns and apply such patterns to the caching mechanisms to assess their practicality.
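
  • Illustrative aside (not from the paper): a small Python contrast between CCN's default on-path caching and a selective alternative. The probabilistic rule below is a commonly studied example chosen for illustration, not one of the specific mechanisms the paper compares.

    ```python
    # Sketch: default CCN caches the object at every node on the delivery path;
    # a selective policy caches it at only some nodes to reduce redundant copies.
    import random

    class Node:
        def __init__(self, name):
            self.name, self.stored = name, set()
        def store(self, obj):
            self.stored.add(obj)

    def cache_everywhere(path_nodes, obj):
        for node in path_nodes:                 # default: every on-path node keeps a copy
            node.store(obj)

    def cache_probabilistically(path_nodes, obj, p=0.3):
        for node in path_nodes:                 # selective: each node keeps a copy with probability p
            if random.random() < p:
                node.store(obj)
    ```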

Buffer Cache Management for Low Power Consumption (저전력을 위한 버퍼 캐쉬 관리 기법)

  • Lee, Min;Seo, Eui-Seong;Lee, Joon-Won
    • Journal of KIISE: Computer Systems and Theory / v.35 no.6 / pp.293-303 / 2008
  • As the computing environment moves to wireless and handheld systems, power efficiency is getting more important. That is especially the case in embedded handheld systems, where the power consumed by the memory system takes the second largest portion overall. To save energy consumed in the memory system we can utilize the low power modes of SDRAM; in the case of RDRAM, nap mode consumes less than 5% of the power consumed in active or standby mode. However, the hardware controller itself can't use this facility efficiently unless the operating system cooperates. In this paper we focus on how to minimize the number of active units of SDRAM. The operating system allocates its physical pages so that only a few units of SDRAM need to be activated, and the unnecessary SDRAM units can be put into nap mode. This work can be considered a generalized and system-wide version of PAVM (Power-Aware Virtual Memory) research. We take all the physical memory into account, especially the buffer cache, which takes about half of total memory usage on average. Because of the portion of the buffer cache and its importance, the PAVM approach cannot be robust without taking the buffer cache into account. In this paper, we analyze RAM usage and propose a power-aware page allocation policy. In particular, the pages mapped into a process' address space and the buffer cache pages are considered, and the relationship and interactions of these two kinds of pages are analyzed and exploited for energy saving.
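
  • Illustrative aside (not from the paper): a rough Python sketch of the allocation goal described above. The unit count and size are assumptions; the point is that new pages are packed into already-active memory units so that untouched units can stay in nap mode.

    ```python
    # Sketch: place each newly allocated page in the fullest unit that still has
    # room, so as few memory units as possible need to be active.
    UNITS = 8                 # memory units (ranks/banks) in the system (assumption)
    PAGES_PER_UNIT = 1024     # capacity of each unit in pages (assumption)

    used = [0] * UNITS        # pages currently allocated in each unit

    def allocate_page():
        candidates = [u for u in range(UNITS) if used[u] < PAGES_PER_UNIT]
        if not candidates:
            raise MemoryError("no free pages")
        unit = max(candidates, key=lambda u: used[u])   # pack into already-active units first
        used[unit] += 1
        return unit

    def units_that_can_nap():
        return [u for u in range(UNITS) if used[u] == 0]
    ```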

A Strategy To Reduce Network Traffic Using Two-layered Cache Servers for Continuous Media Data on the Wide Area Network (이중 캐쉬 서버를 사용한 실시간 데이터의 광대역 네트워크 대역폭 감소 정책)

  • Park, Yong-Woon;Beak, Kun-Hyo;Chung, Ki-Dong
    • The Transactions of the Korea Information Processing Society / v.7 no.10 / pp.3262-3271 / 2000
  • Continuous media objects, due to their large volume and the real-time constraints on their delivery, are likely to consume much network bandwidth. Generally, proxy servers are used to hold frequently requested objects so as to reduce network traffic to the central server, but most of them are designed for text and image data and do not go well with continuous media data. In this paper, we propose a two-layered network cache management policy for continuous media object delivery on wide area networks. With the proposed cache management scheme, each LAN has one LAN cache and is further divided into a group of sub-LANs, each of which also has its own sub-LAN cache. Further, each object is partitioned into two parts, a front-end and a rear-end partition; they can be loaded in the same cache or separately in different network caches according to their access frequencies. By doing so, cache replacement overhead can be reduced compared to full-size data allocation and replacement, which eventually reduces the backbone network traffic to the origin server.
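
  • Illustrative aside (not from the paper): a minimal Python sketch of the placement idea described above. Each object is split into a front-end and a rear-end partition, and each partition is placed in the sub-LAN cache, the LAN cache, or left at the origin server according to how often it is requested; the thresholds are assumptions.

    ```python
    # Sketch: frequency-based placement of the two partitions of an object across
    # the two cache layers.
    HOT, WARM = 100, 20        # requests per period (assumed thresholds)

    def place_object(obj_id, front_hits, rear_hits, sub_lan_cache, lan_cache):
        """Return where each partition of the object should be stored."""
        placement = {}
        for part, hits in (("front", front_hits), ("rear", rear_hits)):
            key = (obj_id, part)
            if hits >= HOT:
                sub_lan_cache.add(key)              # hottest data closest to the clients
                placement[part] = "sub-LAN cache"
            elif hits >= WARM:
                lan_cache.add(key)                  # moderately popular data one layer up
                placement[part] = "LAN cache"
            else:
                placement[part] = "origin server"   # cold data is not cached
        return placement
    ```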
