• Title/Summary/Keyword: Cache Management (캐시 관리)

The Study of the Object Replication Management using Adaptive Duplication Object Algorithm (적응적 중복 객체 알고리즘을 이용한 객체 복제본 관리 연구)

  • 박종선;장용철;오수열
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.1
    • /
    • pp.51-59
    • /
    • 2003
  • In distributed object replication systems it is effective to place an object on multiple nodes so that the nodes share the same contents. Each node stores access information in its local cache as it accesses the system, and later fetches and reuses that information when it is needed. Over time, however, coherence problems arise because the data can be updated by other nodes. To keep the system coherent, a mechanism is therefore needed that manages replicas effectively so as to improve the performance and availability of the system. In this paper, the proposed adaptive duplication object (ADO) algorithm keeps objects coherent in a shared-memory environment while achieving bounded parallel performance with no additional cost beyond the coherence cost. Furthermore, to minimize the coherence maintenance cost, which is the biggest overhead of the replication approach, the number of replicas and the placement of each object replica, the key factors that determine that cost, must be managed effectively, and an adaptive duplication object management mechanism that improves overall run time is studied.
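
The abstract describes the ADO idea only at a high level. Purely as an illustration of the kind of bookkeeping it implies, replicas placed on nodes that access an object frequently and every other copy invalidated when one node updates the object, here is a minimal Python sketch; the class name ReplicaManager, the access-count threshold, and the invalidation-on-write policy are assumptions for illustration, not the paper's ADO algorithm.

```python
# Hypothetical sketch of replica placement plus invalidation-based coherence.
# Not the ADO algorithm from the paper; thresholds and structures are illustrative.

class ReplicaManager:
    def __init__(self, replicate_threshold=3):
        self.replicate_threshold = replicate_threshold
        self.access_count = {}   # (node, obj) -> number of accesses
        self.replicas = {}       # obj -> set of nodes holding a copy
        self.value = {}          # obj -> current value at the "home" copy

    def access(self, node, obj):
        """Read an object; replicate it to this node once it is accessed often enough."""
        self.access_count[(node, obj)] = self.access_count.get((node, obj), 0) + 1
        holders = self.replicas.setdefault(obj, set())
        if self.access_count[(node, obj)] >= self.replicate_threshold:
            holders.add(node)    # place a replica near the frequent reader
        return self.value.get(obj)

    def update(self, node, obj, new_value):
        """Write an object; invalidate every other replica to keep copies coherent."""
        self.value[obj] = new_value
        self.replicas[obj] = {node}   # other copies are dropped (invalidated)

if __name__ == "__main__":
    rm = ReplicaManager()
    for _ in range(3):
        rm.access("node-B", "obj-1")          # node-B becomes a replica holder
    rm.update("node-A", "obj-1", "v2")        # update invalidates node-B's copy
    print(rm.replicas["obj-1"])               # {'node-A'}
```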

Improving QoS using Mobility Management in Wireless Internet Environment (무선 인터넷에서 셀룰라 IP 이동성 관리에 의한 QoS 개선)

  • Yoon Young-Ji;Suk Kyung-Hyu;Park Dung-Suk;Hong Sung-Soo;Bae Chul-Soo;Na Sang-Dong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2006.05a
    • /
    • pp.495-500
    • /
    • 2006
  • This paper proposes a Cellular-IP/PRC network with Cellular IP characteristics that uses an integrated paging and route-information management cache to guarantee QoS in a cellular environment. In the proposed call admission scheme, the home base station admits a new call based on a prediction of the mobile node's transmit power, provided that the home base station has sufficient capacity and that call quality can be guaranteed considering the increase in interference that neighboring-cell mobile nodes would receive if the call were admitted at the home base station. The PC (Paging Cache) and RC (Routing Cache), previously used to manage paging and routing in the wireless Internet network, are integrated into a single PRC (Paging Router Cache) that is configured and operated in every node; a handoff state machine is added to the mobile node so that its handoff and roaming states can be managed efficiently, with the related functions performed at the node. The factors affecting traffic in the system environment, the coverage of each link and its degree of imbalance, are estimated, and the limitation of coverage by the downlink is determined; the proposed algorithm is then compared with an algorithm based on transmit and receive power, and the performance improvement is studied in terms of QoS metrics that can predict call blocking probability, call dropping probability, GoS, and cell capacity efficiency.
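
As a rough illustration of merging the Paging Cache and Routing Cache into a single per-node PRC, the hypothetical Python sketch below keeps one entry per mobile node that carries both its paging area and its downlink route, plus a simple handoff state field; all names, fields, and states are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: one PRC entry per mobile node instead of separate
# paging (PC) and routing (RC) caches. States and fields are illustrative.

from dataclasses import dataclass

@dataclass
class PrcEntry:
    paging_area: str      # coarse location used while the node is idle
    downlink_route: str   # next-hop interface used while the node is active
    state: str = "idle"   # simple handoff state: "idle", "active", "handoff"

class PagingRouterCache:
    def __init__(self):
        self.entries = {}  # mobile node id -> PrcEntry

    def on_uplink_packet(self, node_id, paging_area, iface):
        """A packet from the mobile refreshes both paging and routing state at once."""
        self.entries[node_id] = PrcEntry(paging_area, iface, state="active")

    def on_handoff(self, node_id, new_iface):
        """Handoff only rewrites the route; the unified entry avoids a second lookup."""
        entry = self.entries[node_id]
        entry.downlink_route = new_iface
        entry.state = "handoff"

    def route_downlink(self, node_id):
        entry = self.entries.get(node_id)
        if entry is None or entry.state == "idle":
            return None  # fall back to paging over the paging area
        return entry.downlink_route

if __name__ == "__main__":
    prc = PagingRouterCache()
    prc.on_uplink_packet("MN-1", paging_area="PA-3", iface="if0")
    prc.on_handoff("MN-1", new_iface="if2")
    print(prc.route_downlink("MN-1"))  # if2
```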

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon;Park, Jang-Woo;Kim, Sung-Tan;Lee, Sang-Won;Moon, Bong-Ki
    • Journal of KIISE:Databases
    • /
    • v.37 no.6
    • /
    • pp.308-314
    • /
    • 2010
  • As the price of flash memory continues to drop and flash SSD controller technology improves, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, it is hard to expect that flash SSDs will completely replace hard disks as database storage. Instead, the approach of using a flash SSD as a cache for hard disks is more practical, and several hybrid storage architectures combining flash memory and hard disks have been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extension of the main buffer in database systems: it stores the pages replaced out of the main buffer and returns those pages when they are re-referenced in the upper buffer layer, improving system performance drastically. In contrast to the existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer in the upper host, provides fast random reads for the warm pages being replaced out of the limited main buffer. In fact, for all the pages missing from the main buffer in a real TPC-C trace, the hit ratio in the extended buffer can exceed 60%, which supports our conjecture that this simple extended buffer approach can be very effective as a cache. In terms of performance per price, our extended buffer architecture outperforms two alternative approaches of the same cost: 1) a larger main buffer and 2) more hard disks.
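
The two-level buffering idea in this abstract, a main buffer in DRAM with evicted pages parked on a flash SSD and checked again before going to disk, can be sketched as follows; the LRU policy, class names, and sizes are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of a main buffer backed by an SSD-resident extended buffer.
# Pages evicted from the main buffer are kept on the SSD; a miss in the main
# buffer checks the SSD before falling back to (slow) disk.

from collections import OrderedDict

class ExtendedBufferPool:
    def __init__(self, main_size, ssd_size, read_from_disk):
        self.main = OrderedDict()     # page id -> page data (LRU order)
        self.ssd = OrderedDict()      # extended buffer on flash SSD
        self.main_size = main_size
        self.ssd_size = ssd_size
        self.read_from_disk = read_from_disk

    def get(self, page_id):
        if page_id in self.main:                 # hit in main buffer
            self.main.move_to_end(page_id)
            return self.main[page_id]
        if page_id in self.ssd:                  # hit in SSD extended buffer
            data = self.ssd.pop(page_id)
        else:                                    # miss everywhere: go to disk
            data = self.read_from_disk(page_id)
        self._admit(page_id, data)
        return data

    def _admit(self, page_id, data):
        self.main[page_id] = data
        self.main.move_to_end(page_id)
        if len(self.main) > self.main_size:      # evict the coldest main-buffer page
            victim, victim_data = self.main.popitem(last=False)
            self.ssd[victim] = victim_data       # park it in the extended buffer
            if len(self.ssd) > self.ssd_size:
                self.ssd.popitem(last=False)

if __name__ == "__main__":
    pool = ExtendedBufferPool(main_size=2, ssd_size=4,
                              read_from_disk=lambda pid: f"data-{pid}")
    for pid in [1, 2, 3, 1]:                     # page 1 comes back from the SSD tier
        pool.get(pid)
    print(list(pool.main), list(pool.ssd))
```

In the example, page 1 is evicted to the SSD tier and later served from it with a fast random read instead of a hard-disk access, which is the effect the abstract attributes to the extended buffer.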

Segment-based Buffer Management for Multi-level Streaming Service in the Proxy System (프록시 시스템에서 multi-level 스트리밍 서비스를 위한 세그먼트 기반의 버퍼관리)

  • Lee, Chong-Deuk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.11
    • /
    • pp.135-142
    • /
    • 2010
  • QoS in the proxy system is heavily influenced by interference such as congestion, latency, and retransmission. Multi-level streaming services are also affected by temporal synchronization, which degrades service quality. This paper proposes a new segment-based buffer management mechanism that reduces the performance degradation of streaming services caused by these drawbacks of the proxy system and enhances streaming throughput. The proposed mechanism optimizes streaming services by: 1) using a segment-based buffer management mechanism, 2) minimizing overhead due to congestion and interference, and 3) minimizing retransmission due to disconnection and delay. A fuzzy value $\mu$ and a cost weight $\omega$ are used to process the result. The simulation results show that the proposed mechanism outperforms the existing fixed segmentation, pyramid segmentation, and skyscraper segmentation methods in buffer cache control rate, average packet loss rate, and delay saving rate with respect to the stream relevance metric.
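
The abstract does not spell out how the fuzzy value $\mu$ and the cost weight $\omega$ are combined. Purely as an illustration of segment-based eviction driven by such a score, one might write something like the sketch below; the product scoring and all names are assumptions, not the paper's formula.

```python
# Illustrative sketch: a proxy buffer that caches streams as segments and
# evicts the segment with the lowest score. The score mu * omega is only an
# assumed stand-in for the paper's fuzzy-value / cost-weight combination.

class SegmentBuffer:
    def __init__(self, capacity_segments):
        self.capacity = capacity_segments
        self.segments = {}   # (stream id, segment index) -> (data, mu, omega)

    def put(self, stream_id, seg_index, data, mu, omega):
        if len(self.segments) >= self.capacity:
            # evict the segment with the smallest mu * omega score
            victim = min(self.segments,
                         key=lambda k: self.segments[k][1] * self.segments[k][2])
            del self.segments[victim]
        self.segments[(stream_id, seg_index)] = (data, mu, omega)

    def get(self, stream_id, seg_index):
        entry = self.segments.get((stream_id, seg_index))
        return None if entry is None else entry[0]

if __name__ == "__main__":
    buf = SegmentBuffer(capacity_segments=2)
    buf.put("movie-A", 0, b"...", mu=0.9, omega=1.0)
    buf.put("movie-A", 1, b"...", mu=0.2, omega=0.5)   # low relevance, cheap to refetch
    buf.put("movie-B", 0, b"...", mu=0.8, omega=0.9)   # evicts ("movie-A", 1)
    print(("movie-A", 1) in buf.segments)              # False
```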

A Scheme of Efficient Contents Service and Sharing By Associating Media Server with Location-Aware Overlay Network (미디어 서버와 위치-인지 오버레이 네트워크를 연계한 효율적 콘텐츠 공유 및 서비스 방법)

  • Chung, Won-Ho;Lee, Seung Yeon
    • Journal of Broadcast Engineering
    • /
    • v.23 no.1
    • /
    • pp.26-35
    • /
    • 2018
  • The recent development of overlay network technology enables distributed sharing of various types of content. Although an overlay network has great advantages as a huge content repository, it is practically difficult for it to directly provide Internet services such as content streaming. On the other hand, the media server, which is specialized for content services, has excellent service capabilities, but it struggles with the huge volume of content that is constantly created, requires large-scale expansion of servers and storage, and thus demands much effort for efficient management of the huge repository. Hence, associating an overlay network with huge storage and a media server with high-performance content service yields a great synergy effect. In this paper, a location-aware scheme for constructing overlay networks and associating them with a media server is proposed, and a cache-based content management and service policy is then proposed for efficient content service. The performance is analyzed for streaming, one representative content service.
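
A rough sketch of the cache-based service policy the abstract describes: the media server streams a content item from its local cache when possible and otherwise pulls it from the nearest overlay node first (the location-aware part), caching it for later requests. The class and function names, the FIFO-style eviction, and the distance model are illustrative assumptions.

```python
# Hypothetical sketch: media server front-end backed by a location-aware
# overlay. Content is served from the server's cache when present; on a miss
# it is fetched from the overlay replica closest to the server, then cached.

class MediaServer:
    def __init__(self, cache_capacity, overlay_nodes):
        self.cache = {}                     # content id -> bytes
        self.capacity = cache_capacity
        self.overlay_nodes = overlay_nodes  # list of (distance, node) pairs

    def serve(self, content_id):
        if content_id in self.cache:        # serve directly from the media server
            return self.cache[content_id]
        data = self._fetch_from_overlay(content_id)
        if data is not None:
            if len(self.cache) >= self.capacity:
                self.cache.pop(next(iter(self.cache)))  # simple FIFO-style eviction
            self.cache[content_id] = data
        return data

    def _fetch_from_overlay(self, content_id):
        # location awareness: try the closest overlay node first
        for _, node in sorted(self.overlay_nodes, key=lambda pair: pair[0]):
            data = node.get(content_id)
            if data is not None:
                return data
        return None

if __name__ == "__main__":
    near = {"clip-1": b"near-copy"}
    far = {"clip-1": b"far-copy", "clip-2": b"far-only"}
    server = MediaServer(cache_capacity=8,
                         overlay_nodes=[(5, far), (1, near)])
    print(server.serve("clip-1"))   # b'near-copy' (closest replica wins)
    print(server.serve("clip-2"))   # b'far-only', now cached at the server
```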

Efficient Cache Management Scheme with Maintaining Strong Data Consistency in a VANET (VANET에서 효율적이며 엄격한 데이터 일관성을 유지하는 캐쉬 관리 기법)

  • Moon, Sung-Hoon;Park, Kwang-Jin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.5
    • /
    • pp.41-48
    • /
    • 2012
  • A Vehicular Ad-hoc Network (VANET) is a vehicle-specific type of mobile ad-hoc network that provides temporary communications among nearby vehicles. A mobile node in a VANET consumes energy and resources by participating as a member of the network. In a VANET, data replication and cooperative caching have been used as promising solutions to improve system performance. Existing cooperative caching schemes in a VANET mostly focus on weak consistency, which is not always satisfactory. In this paper, we propose an efficient cache management scheme that maintains strong data consistency in a VANET. We design an adaptive scheduling scheme for broadcasting Invalidation Reports (IRs) in order to reduce query delay and communication overhead while maintaining strong data consistency. The simulation results show that our proposed method performs well in terms of query delay and communication overhead.
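
As an illustration of invalidation-report (IR) based cache management, the sketch below has the server broadcast IRs listing updated items and the client drop those entries before answering later queries from its cache. The adaptive IR scheduling itself is abstracted away, and the names and simplified flow are assumptions; this only illustrates the invalidation mechanism, not the paper's strong-consistency protocol.

```python
# Illustrative sketch of invalidation-report (IR) based caching. The server
# periodically broadcasts the ids it updated; clients drop those entries and
# refetch them from the server on the next query.

class Server:
    def __init__(self):
        self.data = {}
        self.updated_since_last_ir = set()

    def update(self, key, value):
        self.data[key] = value
        self.updated_since_last_ir.add(key)

    def broadcast_ir(self, clients):
        ir = set(self.updated_since_last_ir)
        self.updated_since_last_ir.clear()
        for client in clients:
            client.receive_ir(ir)

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def receive_ir(self, invalidated_keys):
        for key in invalidated_keys:
            self.cache.pop(key, None)      # drop stale copies named in the IR

    def query(self, key):
        if key in self.cache:              # not invalidated by any IR seen so far
            return self.cache[key]
        value = self.server.data.get(key)  # fetch the current value from the server
        self.cache[key] = value
        return value

if __name__ == "__main__":
    srv = Server()
    cli = Client(srv)
    srv.update("road-1", "clear")
    print(cli.query("road-1"))     # 'clear', now cached
    srv.update("road-1", "jammed")
    srv.broadcast_ir([cli])        # IR invalidates the cached copy
    print(cli.query("road-1"))     # 'jammed', refetched from the server
```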

Optimizing LRU Lock Management in the Linux Kernel for Improving Parallel Write Throughput in Many-Core CPU Systems (매니코어 CPU 시스템의 병렬 쓰기 성능 향상을 위한 리눅스 커널의 LRU 관리 최적화 기법)

  • Eun-Kyu Byun;Gibeom Gu;Kwang-Jin Oh;Jiwoo Bang
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.7
    • /
    • pp.209-216
    • /
    • 2023
  • Modern HPC systems are equipped with many-core CPUs with dozens of cores. When performing parallel I/O on such a system, the LRU lock management policy of the Linux kernel limits scalability. This study proposes an improved scheme, FinerLRU, to solve this problem. FinerLRU improves the parallel write performance of file systems that use the buffer cache through fine-grained lock management, increasing the number of LRU locks up to the number of cores. The proposed method was implemented in Linux 5.18.11, and its performance was measured on two types of CPUs with different characteristics, Intel Icelake Xeon and Intel Knights Landing; a performance improvement of about a factor of two was obtained on both types of systems.
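
The core idea, splitting one global LRU lock into many finer locks so that writers on different cores rarely contend, can be illustrated outside the kernel as follows. This user-level Python sketch partitions pages across per-partition locks by hash; it is only an analogy to the kernel-level FinerLRU change, not its code, and the partition count and capacities are assumptions.

```python
# User-level analogy of fine-grained LRU locking: pages are hashed to one of
# N partitions, each with its own lock and LRU list, so threads touching
# different partitions do not serialize on a single global lock.

import threading
from collections import OrderedDict

class PartitionedLru:
    def __init__(self, num_partitions, capacity_per_partition):
        self.locks = [threading.Lock() for _ in range(num_partitions)]
        self.lists = [OrderedDict() for _ in range(num_partitions)]
        self.capacity = capacity_per_partition

    def touch(self, page_id):
        idx = hash(page_id) % len(self.lists)
        with self.locks[idx]:                  # only this partition is locked
            lru = self.lists[idx]
            lru[page_id] = None
            lru.move_to_end(page_id)
            if len(lru) > self.capacity:
                lru.popitem(last=False)        # evict the partition's coldest page

if __name__ == "__main__":
    lru = PartitionedLru(num_partitions=8, capacity_per_partition=1024)
    threads = [threading.Thread(target=lambda t=t: [lru.touch((t, i)) for i in range(10000)])
               for t in range(4)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    print(sum(len(l) for l in lru.lists), "pages cached")
```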

Implementation and Performance Analysis of Event Processing and Buffer Managing Techniques for DDS (고성능 데이터 발간/구독 미들웨어의 이벤트, 버퍼 처리 기술 및 성능 분석)

  • Yoon, Gunjae;Choi, Hoon
    • Journal of KIISE
    • /
    • v.44 no.5
    • /
    • pp.449-459
    • /
    • 2017
  • Data Distribution Service (DDS) is a communication middleware that supports flexible, scalable, and real-time communication. This paper describes several techniques to improve the performance of DDS middleware. Detailed events for the internal behavior of the middleware are defined, and a DDS message is disassembled into several submessages, independent and meaningful units, for event-driven structuring that reduces processing complexity. A history cache management technique is also described; it exploits the fact that status accesses and random accesses to the history cache occur frequently in DDS. These methods have been implemented in EchoDDS, the DDS implementation developed by our team, and showed improved performance.
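
As a loose illustration of the event-driven structuring described here, the sketch below splits an incoming message into submessages and dispatches each to a small handler table. The submessage kinds, handler names, and the dictionary standing in for the history cache are placeholders, not EchoDDS internals.

```python
# Illustrative sketch of event-driven submessage dispatch. A received message
# is treated as a list of (kind, payload) submessages, each routed to its own
# handler, so processing stays independent per meaningful unit.

def handle_data(payload, history_cache):
    history_cache.setdefault(payload["instance"], []).append(payload["sample"])

def handle_heartbeat(payload, history_cache):
    pass  # e.g. trigger acknowledgements; omitted in this sketch

HANDLERS = {
    "DATA": handle_data,
    "HEARTBEAT": handle_heartbeat,
}

def process_message(submessages, history_cache):
    """Dispatch each submessage as an independent event."""
    for kind, payload in submessages:
        handler = HANDLERS.get(kind)
        if handler is not None:
            handler(payload, history_cache)

if __name__ == "__main__":
    cache = {}  # instance -> list of samples (a stand-in for the history cache)
    process_message(
        [("DATA", {"instance": "sensor-1", "sample": 42}),
         ("HEARTBEAT", {"count": 7})],
        cache)
    print(cache)   # {'sensor-1': [42]}
```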

Performance Evaluation of Disk I/O for Web Proxy Servers (웹 프락시 서버의 디스크 I/O 성능 평가)

  • Shim Jong-Ik
    • The KIPS Transactions:PartC
    • /
    • v.12C no.4 s.100
    • /
    • pp.603-608
    • /
    • 2005
  • Disk I/O is a major performance bottleneck of web proxy servers. Most of today's web proxy servers are designed to run on top of a general-purpose file system, but a general-purpose file system cannot efficiently handle the web cache workload of many small files, degrading the performance of the entire web proxy server. In this paper we evaluate the performance potential of raw disk for reducing the disk I/O overhead of web proxy servers. To show this potential, we design a storage management system called the Block-structured Storage Management System (BSMS) and implement a web proxy server that incorporates BSMS into Squid. Comprehensive experimental evaluations show that raw disk can be a good solution for significantly improving the disk I/O performance of web proxy servers.
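
To make the raw-disk idea concrete, the following hypothetical sketch stores cached objects in fixed-size blocks inside one flat file (standing in for a raw partition) with an in-memory index, instead of one small file per object. The block size, index layout, and names are assumptions for illustration, not BSMS itself.

```python
# Hypothetical sketch of block-structured cache storage on a single flat file
# (a stand-in for a raw disk partition). Objects are padded to whole blocks
# and located through an in-memory index, avoiding per-object small files.

import os

BLOCK_SIZE = 4096

class BlockStore:
    def __init__(self, path):
        self.f = open(path, "w+b")
        self.index = {}        # key -> (first block number, object length)
        self.next_block = 0

    def put(self, key, data):
        nblocks = (len(data) + BLOCK_SIZE - 1) // BLOCK_SIZE
        start = self.next_block
        self.next_block += nblocks
        self.f.seek(start * BLOCK_SIZE)
        self.f.write(data.ljust(nblocks * BLOCK_SIZE, b"\0"))  # pad to block boundary
        self.index[key] = (start, len(data))

    def get(self, key):
        if key not in self.index:
            return None
        start, length = self.index[key]
        self.f.seek(start * BLOCK_SIZE)
        return self.f.read(length)

if __name__ == "__main__":
    store = BlockStore("cache.img")
    store.put("http://example.com/a.html", b"<html>small object</html>")
    print(store.get("http://example.com/a.html"))
    store.f.close()
    os.remove("cache.img")
```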

A Method of efficient connection setting for Mobile IP with high mobility (이동성이 잦은 Mobile IP를 위한 효율적인 연결 설정 기법)

  • Rho Kyung-Taeg;Kim Hye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.4 s.32
    • /
    • pp.167-172
    • /
    • 2004
  • Although Mobile IP as proposed in the IETF is effective, it is inefficient when mobile hosts communicate with each other while roaming frequently within a specific area. Considerable latency is incurred because the mobile hosts must register, establish a secure path over the Internet, and transmit data along that path, and this inefficiency is aggravated further when the mobile hosts have high mobility. This paper therefore proposes a method that uses an anchor foreign agent through an anchor chain scheme, which combines pointer forwarding and a cache method together with a border router, to address this problem in mobility management within a specific area.
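
Purely as an illustration of the pointer-forwarding-plus-cache idea mentioned here, and not the paper's protocol, the sketch below lets an anchor foreign agent keep a short forwarding chain to the mobile host's current agent, so local moves only append a pointer instead of triggering a full home registration; the chain-length limit and all names are assumptions.

```python
# Illustrative sketch of anchor-based pointer forwarding. Movements inside a
# region append a forwarding pointer at the previous agent; delivery follows
# the chain from the anchor, and the chain is collapsed when it grows long.

MAX_CHAIN = 3

class MobilityRegion:
    def __init__(self, anchor_fa):
        self.anchor = anchor_fa
        self.next_hop = {}        # foreign agent -> next foreign agent in the chain

    def register_move(self, old_fa, new_fa):
        """Local handoff: only the previous agent learns a forwarding pointer."""
        self.next_hop[old_fa] = new_fa
        if self._chain_length() > MAX_CHAIN:
            # collapse: the anchor points straight at the current agent
            current = self._current_fa()
            self.next_hop = {self.anchor: current}

    def deliver(self, packet):
        """Follow pointers from the anchor to the mobile host's current agent."""
        return self._current_fa(), packet

    def _current_fa(self):
        fa = self.anchor
        while fa in self.next_hop:
            fa = self.next_hop[fa]
        return fa

    def _chain_length(self):
        fa, hops = self.anchor, 0
        while fa in self.next_hop:
            fa, hops = self.next_hop[fa], hops + 1
        return hops

if __name__ == "__main__":
    region = MobilityRegion("FA-anchor")
    region.register_move("FA-anchor", "FA-1")
    region.register_move("FA-1", "FA-2")
    print(region.deliver("pkt"))   # ('FA-2', 'pkt')
```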
