• Title/Summary/Keyword: Cache management

Search Results: 212

Trickle Write-Back Scheme for Cache Management in Mobile Computing Environments (이동 컴퓨팅 환경에서 캐쉬 관리를 위한 TWB 기법)

  • Kim, Moon-Jeong;Eom, Young-Ik
    • Journal of KIISE:Computer Systems and Theory / v.27 no.1 / pp.89-100 / 2000
  • Recently, studies on mobile computing environments that enable mobile hosts to move while retaining their network connections have been in progress. In these environments, one of the necessary components is a distributed file system that supports mobile hosts, and its design and implementation raise several issues. Among them are problems caused by network traffic over the limited bandwidth of wireless media, as well as consistency-maintenance issues caused by update conflicts on shared files in the distributed file system. In this paper, we propose the TWB (Trickle Write-Back) scheme, which exploits weak connectivity for cache management on mobile clients. The scheme focuses on saving bandwidth, reducing wasted disk space, and reducing the risks caused by disconnection. To achieve these goals, it lets mobile clients write back intermediate states periodically or on demand while delaying unnecessary write-backs. The scheme is based on the existing distributed file system architecture and preserves transparency.
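
The abstract describes TWB only at a high level. As a rough illustration of the general mechanism (buffering writes, coalescing repeated writes to the same file, and trickling only the latest state back periodically or on demand), here is a minimal Python sketch; the class names, method names, and interval policy are assumptions for illustration, not the paper's design.

```python
import time

class TrickleWriteBackCache:
    """Toy client-side cache that delays and coalesces write-backs,
    trickling only the latest state of each dirty file to the server.
    All names here are illustrative, not taken from the paper."""

    def __init__(self, server, trickle_interval=30.0):
        self.server = server                    # object with an upload(name, data) method
        self.trickle_interval = trickle_interval
        self.dirty = {}                         # file name -> latest buffered contents
        self.last_trickle = time.monotonic()

    def write(self, name, data):
        # Repeated writes to the same file overwrite the buffered copy, so
        # intermediate versions are never sent (unnecessary write-backs are skipped).
        self.dirty[name] = data
        self.maybe_trickle()

    def maybe_trickle(self):
        if time.monotonic() - self.last_trickle >= self.trickle_interval:
            self.trickle()

    def trickle(self):
        # Periodic or on-demand write-back of the intermediate state,
        # sized for a weak (low-bandwidth) connection.
        for name, data in list(self.dirty.items()):
            self.server.upload(name, data)
            del self.dirty[name]
        self.last_trickle = time.monotonic()


class FakeServer:
    def upload(self, name, data):
        print(f"write-back {name}: {len(data)} bytes")


if __name__ == "__main__":
    cache = TrickleWriteBackCache(FakeServer(), trickle_interval=60.0)
    cache.write("report.txt", b"draft 1")
    cache.write("report.txt", b"draft 2")   # coalesced: only the latest copy is kept
    cache.trickle()                         # on-demand write-back sends "draft 2" once
```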


Implementation of MPOA for Supporting Various Protocols over ATM (ATM 상에서 다양한 프로토콜을 지원하기 위한 MPOA의 구현)

  • Lim, Ji-Young;Kim, Mi-Hee;Choi, Jeong-Hyun;Lee, Mee-Jeong;Chae, Ki-Joon;Choi, Kil-Young;Kang, Hun
    • The Transactions of the Korea Information Processing Society / v.7 no.1 / pp.181-199 / 2000
  • In this paper, we implement and test MPOA (MultiProtocol Over ATM), standardized by the ATM Forum, which provides service for various layer-3 protocols as well as legacy LAN applications over ATM networks. The functions of MPCs (MPOA Clients) and MPSs (MPOA Servers), the components of an MPOA system, are implemented. MPCs are located at edge devices and MPOA hosts, while MPSs reside in routers. The implemented MPCs provide functions such as exchanging primitives between an LEC (LAN Emulation Client) and an MPC, managing and maintaining the egress/ingress caches, default transmission through LECs, and shortcut transmission. Assuming that routing, convergence, and NHRP (Next Hop Resolution Protocol) functions exist in the routers, the implemented MPSs provide functions such as exchanging primitives between an LEC and an MPS, converting and exchanging frames between MPOA and NHRP, and managing and maintaining the egress/ingress caches. Test scenarios covering all possible cases were constructed to check correct operation, and the implemented system was tested by simulation according to these scenarios.
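
As a loose illustration of the ingress-cache decision an MPOA client makes (default forwarding through the LEC until a flow becomes busy, then switching to a shortcut), here is a toy Python sketch; the threshold, callbacks, and class names are assumptions and do not follow the MPOA specification in detail.

```python
class IngressCacheEntry:
    def __init__(self):
        self.packet_count = 0
        self.shortcut = None   # placeholder for a resolved ATM shortcut (e.g., a VCC handle)

class ToyMPC:
    """Illustrative-only model of an MPOA client's ingress-cache decision:
    forward over the default LANE path until a flow is 'hot', then switch
    to a shortcut. Helper names and the threshold are assumptions."""

    FLOW_THRESHOLD = 10  # packets before a shortcut is requested (illustrative)

    def __init__(self, resolve_shortcut, send_default, send_shortcut):
        self.cache = {}                     # destination -> IngressCacheEntry
        self.resolve_shortcut = resolve_shortcut
        self.send_default = send_default
        self.send_shortcut = send_shortcut

    def forward(self, dst, packet):
        entry = self.cache.setdefault(dst, IngressCacheEntry())
        entry.packet_count += 1
        if entry.shortcut is not None:
            self.send_shortcut(entry.shortcut, packet)        # cut-through over ATM
        else:
            self.send_default(packet)                         # hop-by-hop via the LEC/router
            if entry.packet_count >= self.FLOW_THRESHOLD:
                entry.shortcut = self.resolve_shortcut(dst)   # MPS/NHRP resolution stands in here

# usage
mpc = ToyMPC(resolve_shortcut=lambda dst: f"vcc-to-{dst}",
             send_default=lambda p: None,
             send_shortcut=lambda vcc, p: None)
for _ in range(12):
    mpc.forward("10.0.0.5", b"payload")
```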


Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon;Park, Jang-Woo;Kim, Sung-Tan;Lee, Sang-Won;Moon, Bong-Ki
    • Journal of KIISE:Databases / v.37 no.6 / pp.308-314 / 2010
  • As the price of flash memory continues to drop and flash SSD controller technology improves, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, it is hard to expect flash SSDs to replace hard disks completely as database storage. Instead, using a flash SSD as a cache for hard disks is more practical, and, in fact, several hybrid storage architectures combining flash memory and hard disks have been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extension of the main buffer in a database system: it stores pages replaced out of the main buffer and returns pages that are re-referenced to the upper buffer layer, improving system performance drastically. In contrast to existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer on the host side, can provide fast random reads for warm pages that are being replaced out of the limited main buffer. In fact, for all the pages missed from the main buffer in a real TPC-C trace, the hit ratio in the extended buffer was more than 60%, which supports our conjecture that this simple extended-buffer approach can be very effective as a cache. In terms of performance per price, our extended-buffer architecture outperforms two alternative approaches of the same cost: 1) a larger main buffer and 2) more hard disks.
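
A minimal Python sketch of the extended-buffer idea follows, assuming simple LRU replacement in both levels: pages evicted from the main buffer are staged in an SSD-backed extended buffer and served from there on re-reference before falling back to disk. Buffer sizes, names, and the replacement policy are illustrative, not the paper's implementation.

```python
from collections import OrderedDict

class ExtendedBufferPool:
    """Toy two-level buffer: a small main buffer in RAM and an 'extended buffer'
    standing in for the flash SSD. Pages replaced out of the main buffer are
    staged in the extended buffer and served from there on re-reference."""

    def __init__(self, main_capacity, ext_capacity, read_from_disk):
        self.main = OrderedDict()          # page_id -> data, LRU order
        self.ext = OrderedDict()           # page_id -> data, LRU order (SSD stand-in)
        self.main_capacity = main_capacity
        self.ext_capacity = ext_capacity
        self.read_from_disk = read_from_disk
        self.ext_hits = 0

    def get(self, page_id):
        if page_id in self.main:                      # main-buffer hit
            self.main.move_to_end(page_id)
            return self.main[page_id]
        if page_id in self.ext:                       # extended-buffer (SSD) hit
            self.ext_hits += 1
            data = self.ext.pop(page_id)
        else:                                         # miss everywhere: go to disk
            data = self.read_from_disk(page_id)
        self._admit_to_main(page_id, data)
        return data

    def _admit_to_main(self, page_id, data):
        self.main[page_id] = data
        if len(self.main) > self.main_capacity:
            victim, vdata = self.main.popitem(last=False)   # LRU victim
            self.ext[victim] = vdata                        # replaced-out page goes to the SSD
            if len(self.ext) > self.ext_capacity:
                self.ext.popitem(last=False)                # the SSD buffer is bounded too

# usage: 'disk' is simulated by a function that fabricates page contents
pool = ExtendedBufferPool(main_capacity=2, ext_capacity=4,
                          read_from_disk=lambda pid: f"page-{pid}".encode())
for pid in [1, 2, 3, 1]:      # page 1 is evicted to the SSD buffer, then hit there
    pool.get(pid)
print(pool.ext_hits)          # 1
```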

Automated Method for the Efficient Management of DNSSEC Signing Keys in Korea (국내 DNSSEC 서명키의 효율적인 관리를 위한 자동화 방안)

  • Choi, Myung Hee;Kim, Seung Joo
    • KIPS Transactions on Computer and Communication Systems / v.4 no.8 / pp.259-270 / 2015
  • In this paper, we study and implement ways for users to easily apply and manage DNSSEC in the domestic (Korean) environment. DNSSEC was proposed to address the vulnerability of DNS cache data to forgery (cache poisoning). However, DNSSEC remains difficult to apply and manage, and domestic adoption is still insufficient. For efficient and reliable management of DNSSEC signing keys, we propose automated signing-key management software together with proactive monitoring of the signing keys. By detailing how DNSSEC signing-key updates and monitoring can proceed smoothly, we expect this study to help domestic users apply and manage DNSSEC easily.
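
As a rough sketch of the kind of check such monitoring software performs, the following Python fragment flags a zone whose RRSIGs are close to expiry and calls a re-signing hook; the margin and the resign_zone()/alert() hooks are assumptions, not the authors' tool.

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: this shows the general shape of a signing-key/signature
# monitor, not the software described in the paper.

RENEW_MARGIN = timedelta(days=7)   # re-sign well before signatures expire (assumed margin)

def signatures_need_refresh(rrsig_expirations, now=None):
    """rrsig_expirations: iterable of datetimes when RRSIGs in the zone expire."""
    now = now or datetime.now(timezone.utc)
    return any(exp - now <= RENEW_MARGIN for exp in rrsig_expirations)

def monitor_zone(zone, rrsig_expirations, resign_zone, alert):
    if signatures_need_refresh(rrsig_expirations):
        alert(f"{zone}: RRSIGs close to expiry, re-signing")
        resign_zone(zone)   # hypothetical hook into the signer (e.g., a wrapper around BIND/OpenDNSSEC)

# usage with fake data
monitor_zone(
    "example.kr",
    [datetime.now(timezone.utc) + timedelta(days=3)],
    resign_zone=lambda z: print(f"re-signed {z}"),
    alert=print,
)
```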

A Data Caching Management Scheme for NDN (데이터 이름 기반 네트워킹의 데이터 캐싱 관리 기법)

  • Kim, DaeYoub
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.291-299 / 2016
  • To enhance network efficiency, named-data networking (NDN) implements data-caching functionality on intermediate network nodes, so that the nodes can directly respond to request messages for cached data. By processing request messages at intermediate nodes, NDN can efficiently reduce the amount of network traffic and relieve congestion near data sources. NDN also provides a data authentication mechanism to prevent the various Internet incidents caused by the absence of authentication. Hence, by applying NDN to various smart IT convergence services, it is expected that the explosive growth of network traffic can be controlled efficiently and more secure services can be provided. Which data are cached, and where the caching nodes are located in the network topology, are fundamentally important factors in NDN. This paper first analyzes previous work that caches content based on its popularity, then investigates the cache hit rate at each node of a network topology, and proposes an improved caching scheme based on the results of this analysis. Finally, it evaluates the performance of the proposal.
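
The following toy Python sketch illustrates a popularity-based caching decision at a single NDN node, which is the baseline the paper starts from; the topology-aware part of the proposed scheme is omitted, and the threshold and names are assumptions.

```python
from collections import Counter, OrderedDict

class PopularityCachingNode:
    """Toy NDN-style content store that caches a Data packet only when the
    content's observed request popularity exceeds a threshold. Where the node
    sits in the topology (which the paper also weighs) is not modelled here."""

    def __init__(self, capacity, popularity_threshold=3):
        self.capacity = capacity
        self.threshold = popularity_threshold
        self.request_counts = Counter()      # content name -> Interests seen
        self.content_store = OrderedDict()   # cached content, LRU order

    def on_interest(self, name):
        self.request_counts[name] += 1
        if name in self.content_store:       # cache hit: answer from this node
            self.content_store.move_to_end(name)
            return self.content_store[name]
        return None                          # forward the Interest upstream

    def on_data(self, name, data):
        # Cache only content that has proven popular at this node.
        if self.request_counts[name] >= self.threshold:
            self.content_store[name] = data
            if len(self.content_store) > self.capacity:
                self.content_store.popitem(last=False)
        return data                          # always forward Data downstream
```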

Enhanced ANTSEC Framework with Cluster based Cooperative Caching in Mobile Ad Hoc Networks

  • Umamaheswari, Subbian;Radhamani, Govindaraju
    • Journal of Communications and Networks / v.17 no.1 / pp.40-46 / 2015
  • In a mobile ad hoc network (MANET), communication between mobile nodes occurs without centralized control. In this environment the mobility of a node is unpredictable, which is a defining characteristic of wireless networks. Because of faulty or malicious nodes, the network is vulnerable to routing misbehavior, and the resource-constrained nature of MANETs leads to increased query delay at the time of data access. In this paper, the AntHocNet+ Security (ANTSEC) framework is proposed, which includes an enhanced cooperative caching scheme embedded with an artificial immune system. The framework improves security by injecting immunity into the data packets, improves the packet delivery ratio, and reduces end-to-end delay using a cross-layer design. The issues of node failure and node malfunction are addressed in the cache management.
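
As a very rough sketch of cluster-based cooperative caching (local cache first, then cluster members, then the data source), here is a toy Python fragment; the ant-based routing and immune-system aspects of ANTSEC are not modelled, and all names are illustrative.

```python
class CooperativeCacheNode:
    """Illustrative cluster-based cooperative cache lookup for a MANET node:
    try the local cache, then cluster neighbours, then the data source."""

    def __init__(self, node_id, fetch_from_source):
        self.node_id = node_id
        self.cache = {}
        self.cluster = []                           # other CooperativeCacheNode objects
        self.fetch_from_source = fetch_from_source

    def lookup(self, key):
        if key in self.cache:                       # local hit
            return self.cache[key], "local"
        for peer in self.cluster:                   # cooperative hit inside the cluster
            if key in peer.cache:
                self.cache[key] = peer.cache[key]   # keep a local copy for next time
                return self.cache[key], f"peer {peer.node_id}"
        value = self.fetch_from_source(key)         # fall back to the (distant) source
        self.cache[key] = value
        return value, "source"

# usage: two nodes in one cluster sharing cached items
a = CooperativeCacheNode("A", fetch_from_source=lambda k: f"data:{k}")
b = CooperativeCacheNode("B", fetch_from_source=lambda k: f"data:{k}")
a.cluster, b.cluster = [b], [a]
b.cache["item1"] = "data:item1"
print(a.lookup("item1"))   # ('data:item1', 'peer B')
```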

Performance enhancement of movement-based registration using cache (캐시를 이용한 이동기준 위치등록의 성능개선)

  • 황광신;유병한;김경수;백장현
    • Proceedings of the Korean Operations and Management Science Society Conference / 2000.04a / pp.683-686 / 2000
  • In this study, we propose and analyze methods that minimize signaling traffic on the radio channel, based on movement-based registration (MBR) and selective paging (SP). First, for a rectangular-cell environment, we present and analyze a scheme that reduces the paging load compared with existing methods by choosing the paging area appropriately when selective paging is applied. We also present and analyze an improved movement-based registration (IMBR) scheme that reduces the number of location registrations by keeping the IDs of the cells a mobile station has already passed through in a cache. The results of this study can be used effectively to select and operate an appropriate location registration method for a given system operating environment.
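
A minimal Python sketch of the cached-cell idea follows, assuming a simple rule in which boundary crossings into already-visited cells are not counted toward the movement threshold; the exact rules in the paper may differ.

```python
class ImprovedMBR:
    """Toy model of movement-based registration with a visited-cell cache (IMBR):
    a location update is triggered after `threshold` boundary crossings, but
    crossings into cells already held in the cache are not counted, which
    suppresses redundant registrations. Details are illustrative only."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.visited = set()          # cell IDs seen since the last registration
        self.movements = 0
        self.registrations = 0

    def enter_cell(self, cell_id):
        if cell_id in self.visited:   # ping-pong between known cells: no new movement counted
            return
        self.visited.add(cell_id)
        self.movements += 1
        if self.movements >= self.threshold:
            self.register(cell_id)

    def register(self, cell_id):
        self.registrations += 1       # location update sent to the network
        self.visited = {cell_id}      # restart the cache from the registered cell
        self.movements = 0

# usage: a user bouncing between two cells triggers no extra registrations
ms = ImprovedMBR(threshold=3)
for cell in ["A", "B", "A", "B", "A", "C", "D"]:
    ms.enter_cell(cell)
print(ms.registrations)   # 1; counting every crossing (plain MBR) would have triggered 2
```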


An Analysis of Multi-processor System Performance Depending on the Input/Output Types (입출력 형태에 따른 다중처리기 시스템의 성능 분석)

  • Moon, Wonsik
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.4 / pp.71-79 / 2016
  • This study proposes a performance model of a shared-bus multiprocessor system and analyzes the effect of input/output types on system performance and on the load of shared resources. The performance model reflects the memory reference time, as affected by the input/output types on shared resources, and the input/output processing time, determined by the input/output processor, the disk buffer, and the device waiting places. It also captures the contribution of input/output types to system performance, enabling a comprehensive analysis. Using the concept of workload from probability theory, the model is operated and analyzed under various conditions of processor capability, cache miss ratio, page fault ratio, disk buffer hit ratio (input/output processor and controller), memory access time, and input/output block size. A simulation is conducted to verify the analytical results.
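
The paper's analytic model is not reproduced in the abstract; as a small illustration of how the listed parameters can combine, the following textbook-style calculation estimates the expected cost of one memory reference. It is not the shared-bus model from the paper.

```python
def effective_access_time(cache_time, memory_time, disk_time,
                          cache_miss_ratio, page_fault_ratio):
    """Illustrative expected memory-reference cost for a single processor:
    a reference hits the cache, otherwise goes to memory, and with some
    probability also faults to disk. This merely combines the parameters
    named in the abstract in the usual textbook way."""
    cache_hit = 1.0 - cache_miss_ratio
    return (cache_hit * cache_time
            + cache_miss_ratio * ((1.0 - page_fault_ratio) * memory_time
                                  + page_fault_ratio * disk_time))

# e.g. 1 ns cache, 100 ns memory, 5 ms disk, 5% cache misses, 0.01% page faults
print(effective_access_time(1e-9, 100e-9, 5e-3, 0.05, 1e-4))   # ~3.1e-08 s (about 31 ns)
```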

Performance Evaluation of Disk I/O for Web Proxy Servers (웹 프락시 서버의 디스크 I/O 성능 평가)

  • Shim Jong-Ik
    • The KIPS Transactions:PartC / v.12C no.4 s.100 / pp.603-608 / 2005
  • Disk I/O is a major performance bottleneck of web proxy servers. Most of today's web proxy servers are designed to run on top of a general-purpose file system. However, a general-purpose file system cannot efficiently handle the web cache workload, which consists mostly of small files, degrading the performance of the entire web proxy server. In this paper we evaluate the potential of raw disk to reduce the disk I/O overhead of web proxy servers. To show this potential, we design a storage management system called the Block-structured Storage Management System (BSMS), and we implement a web proxy server that incorporates BSMS into Squid. Comprehensive experimental evaluations show that raw disk can be a good solution for significantly improving the disk I/O performance of web proxy servers.
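
As a rough illustration of why a block-structured store over a raw device can beat a general-purpose file system for many small cached objects, here is a toy Python sketch that packs objects into fixed-size blocks of one preallocated file with an in-memory index; the block size, layout, and names are assumptions, not BSMS's actual on-disk format.

```python
import os

BLOCK_SIZE = 8192   # fixed block size; the real BSMS layout is not specified here

class BlockStore:
    """Minimal sketch of block-structured storage for a web cache: objects are
    written into fixed-size blocks of one large file (standing in for a raw
    partition), and an in-memory index maps URLs to block numbers, avoiding
    per-object file-system metadata. POSIX-only (uses pread/pwrite)."""

    def __init__(self, path, num_blocks):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        os.ftruncate(self.fd, num_blocks * BLOCK_SIZE)   # preallocate the 'partition'
        self.free = list(range(num_blocks))
        self.index = {}                                  # url -> (block, length)

    def put(self, url, body):
        assert len(body) <= BLOCK_SIZE, "toy store: one block per object"
        block = self.free.pop()
        os.pwrite(self.fd, body, block * BLOCK_SIZE)
        self.index[url] = (block, len(body))

    def get(self, url):
        if url not in self.index:
            return None
        block, length = self.index[url]
        return os.pread(self.fd, length, block * BLOCK_SIZE)

    def delete(self, url):
        block, _ = self.index.pop(url)
        self.free.append(block)          # block reuse needs no file-system metadata update

# usage
store = BlockStore("cache.dat", num_blocks=1024)
store.put("http://example.com/a.gif", b"GIF89a...")
print(store.get("http://example.com/a.gif"))
```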

Efficient Hybrid Transactional Memory Scheme using Near-optimal Retry Computation and Sophisticated Memory Management in Multi-core Environment

  • Jang, Yeon-Woo;Kang, Moon-Hwan;Chang, Jae-Woo
    • Journal of Information Processing Systems / v.14 no.2 / pp.499-509 / 2018
  • Recently, hybrid transactional memory (HyTM) has gained much interest from researchers because it combines the advantages of hardware transactional memory (HTM) and software transactional memory (STM). To provide concurrency control for transactions, existing HyTM-based studies use a Bloom filter, but they fail to overcome its typical false-positive errors. In addition, although the existing studies use a global lock, the efficiency of global lock-based memory allocation is significantly low in multi-core environments. In this paper, we propose an efficient hybrid transactional memory scheme that uses near-optimal retry computation and sophisticated memory management in order to process transactions efficiently in multi-core environments. First, we propose a near-optimal retry-computation algorithm that derives an efficient HTM configuration using machine-learning algorithms, according to the characteristics of a given workload. Second, we provide efficient concurrency control for transactions in different environments by using a sophisticated Bloom filter. Third, we propose a memory-management scheme optimized for the CPU cache line to provide fast transaction processing. Finally, our performance evaluation shows that our HyTM scheme achieves up to 2.5 times better performance on the Stanford Transactional Applications for Multi-Processing (STAMP) benchmarks than state-of-the-art algorithms.
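
The following Python sketch shows the Bloom-filter conflict check that underlies HyTM-style concurrency control, including the false positives the paper tries to mitigate; sizes, hash counts, and names are illustrative and unrelated to the authors' implementation.

```python
import hashlib

class BloomFilter:
    """Small Bloom filter used to summarise a transaction's read/write set.
    Membership tests can report false positives (the problem the paper tries
    to mitigate) but never false negatives."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= (1 << pos)

    def might_contain(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))


def conflicts(write_set_filter, read_addresses):
    """A reader transaction checks its addresses against a writer's filter;
    'True' may be a false positive, which forces an unnecessary retry."""
    return any(write_set_filter.might_contain(addr) for addr in read_addresses)

# usage
writes = BloomFilter()
writes.add(0x7f001000)
print(conflicts(writes, [0x7f001000]))   # True: real conflict
print(conflicts(writes, [0x12345678]))   # usually False; occasionally a false positive
```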