• Title/Summary/Keyword: Cache data

Search Result 487

Improving the Performance of Network Management Protocol SNMP (네트워크 관리 프로토콜 SNMP의 성능 향상)

  • Na, Ho-Jin;Cho, Kyung-San
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.2
    • /
    • pp.99-107
    • /
    • 2010
  • SNMP (Simple Network Management Protocol) is the standard protocol most commonly used for effective network management, supporting the increasing size of networks and the variety of network elements such as routers, switches, and servers. However, SNMP has performance drawbacks: network overhead, processing latency, and inefficient data retrieval. In this paper, we propose two schemes to improve the performance of SNMP: 1) a scheme that reduces the amount of redundant OID information within an SNMP-GetBulk response message, and 2) a newly proposed SNMP-GetUpdate message combined with a cache in the NMS. Through analysis of real experiments, we show that the first scheme reduces the network overhead and the second scheme improves the processing latency and the retrieval of SNMP MIB tables; therefore the scalability of network management can be improved.
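As an illustration of the first scheme, the sketch below strips redundant OID prefixes from a GetBulk-style response by delta-encoding each OID against the previous one; the encoding, function names, and sample OIDs are assumptions for illustration, not the message format defined in the paper.

```python
# Illustrative sketch: delta-encode OIDs that share a common prefix so a
# GetBulk-style response does not repeat the full OID for every varbind.
# The OID strings and function names are hypothetical, not the paper's format.

def compress_oids(oids):
    """Replace each OID's shared leading sub-identifiers with a skip count."""
    compressed = []
    prev = []
    for oid in oids:
        parts = oid.split(".")
        common = 0
        while common < min(len(prev), len(parts)) and prev[common] == parts[common]:
            common += 1
        compressed.append((common, ".".join(parts[common:])))  # (shared length, suffix)
        prev = parts
    return compressed

def decompress_oids(compressed):
    """Rebuild the full OID list from (shared length, suffix) pairs."""
    oids = []
    prev = []
    for common, suffix in compressed:
        parts = prev[:common] + (suffix.split(".") if suffix else [])
        oids.append(".".join(parts))
        prev = parts
    return oids

if __name__ == "__main__":
    # Consecutive rows of an ifTable walk share long OID prefixes.
    response = [
        "1.3.6.1.2.1.2.2.1.10.1",
        "1.3.6.1.2.1.2.2.1.10.2",
        "1.3.6.1.2.1.2.2.1.16.1",
        "1.3.6.1.2.1.2.2.1.16.2",
    ]
    packed = compress_oids(response)
    assert decompress_oids(packed) == response
    saved = sum(len(o) for o in response) - sum(len(s) for _, s in packed)
    print("suffixes only:", packed)
    print("characters saved:", saved)
```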

The Study of the Object Replication Management using Adaptive Duplication Object Algorithm (적응적 중복 객체 알고리즘을 이용한 객체 복제본 관리 연구)

  • 박종선;장용철;오수열
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.1
    • /
    • pp.51-59
    • /
    • 2003
  • In distributed object replication systems, it is effective to place replicated objects on multiple nodes so that the nodes share the same contents. Each node stores access information in its local cache when it accesses the system, and then fetches and reuses it when needed. Over time, however, coherence problems arise because the data can be updated by other nodes. To keep the system coherent, a mechanism is needed that manages replicas effectively so as to improve the performance and availability of the system. In this paper, to maintain coherence under shared-memory conditions, the proposed adaptive duplication object (ADO) algorithm maintains the objects while allowing limited parallelism, with no additional cost other than the coherence cost. Also, to minimize the coherence maintenance cost, which is the biggest overhead of the replication method, the objects must be managed effectively with respect to the number of replicas and the location of each replica, which are the most important factors determining the cost. Accordingly, an adaptive duplication object management mechanism that improves the overall run time is studied.
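The abstract weighs replica count and placement against coherence cost. The sketch below is a generic cost-based placement heuristic under assumed per-node read and update counts; it is an illustration of that trade-off, not the ADO algorithm itself.

```python
# Generic cost-model sketch: replicate an object on a node only when the reads
# it serves locally outweigh the coherence (update-propagation) cost of keeping
# one more copy. The counts and cost weights are assumed, not taken from ADO.

def choose_replica_nodes(read_counts, update_count, coherence_cost_per_update):
    """Return the nodes where holding a replica lowers total cost."""
    replicas = []
    for node, reads in sorted(read_counts.items(), key=lambda kv: -kv[1]):
        extra_coherence = update_count * coherence_cost_per_update
        remote_read_cost = reads  # assume unit cost per remote read avoided
        if remote_read_cost > extra_coherence:
            replicas.append(node)
    return replicas or [max(read_counts, key=read_counts.get)]  # keep at least one copy

if __name__ == "__main__":
    reads_per_node = {"n1": 120, "n2": 45, "n3": 8}   # hypothetical access profile
    print(choose_replica_nodes(reads_per_node, update_count=10,
                               coherence_cost_per_update=2))   # -> ['n1', 'n2']
```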

An Efficient P2P Service using Distributed Caches in MANETs (모바일 애드-혹 망에서 분산 캐시를 이용한 효율적인 P2P 서비스 방법)

  • Oh, Sun-Jin;Lee, Young-Dae
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.9 no.3
    • /
    • pp.165-171
    • /
    • 2009
  • With the rapid growth of mobile ad-hoc network (MANET) and P2P service technologies, many attempts to integrate MANETs with P2P services and to develop such applications have recently been introduced. Implementing a stable P2P service, however, is a very difficult challenge because of the high mobility of users in a MANET. In this paper, we propose an efficient mobile P2P service which shares and manages multimedia data files in a mobile environment and uses distributed caches that store files according to their popularity in order to achieve high performance. The performance of the proposed P2P service is evaluated with an analytic model and compared with that of an existing DHT-based P2P service in a peer-to-peer network.
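As a rough illustration of popularity-aware caching on a peer, the sketch below admits files to a node's local cache according to observed request counts; the capacity, file sizes, and policy details are assumptions, not the scheme evaluated in the paper.

```python
# Illustrative sketch of popularity-aware caching on a peer: files are admitted
# to the local cache in order of observed request popularity until capacity is
# reached, and less popular cached files are evicted first.
from collections import Counter

class PopularityCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.used = 0
        self.popularity = Counter()          # request counts per file
        self.cached = {}                     # file -> size in blocks

    def request(self, name, size):
        self.popularity[name] += 1
        if name in self.cached:
            return "hit"
        # evict strictly less popular files until the new file fits
        victims = sorted(self.cached, key=lambda f: self.popularity[f])
        while self.used + size > self.capacity and victims:
            victim = victims.pop(0)
            if self.popularity[victim] >= self.popularity[name]:
                break                        # never evict something more popular
            self.used -= self.cached.pop(victim)
        if self.used + size <= self.capacity:
            self.cached[name] = size
            self.used += size
            return "miss (cached)"
        return "miss (not cached)"

if __name__ == "__main__":
    cache = PopularityCache(capacity_blocks=10)
    for f in ["a", "a", "b", "a", "c", "b", "d"]:
        print(f, cache.request(f, size=4))
```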

A Cache Management Technique for an Efficient Video Proxy Server (효율적인 비디오 프록시 서버를 위한 캐시 관리 방법)

  • Lee, Jun-Pyo;Park, Sung-Han
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.4
    • /
    • pp.82-88
    • /
    • 2009
  • A video proxy server located near clients can store frequently requested video data in its storage space in order to significantly reduce initial latency and network traffic. However, due to the limited storage space of a video proxy server, an appropriate video selection method is needed to store the videos that are frequently requested by users. Thus, we present a virtual caching technique to efficiently store videos in a video proxy server. For this purpose, we employ a virtual memory in the video proxy server. When a video is requested by a user, it is first loaded into the virtual memory and then delivered to the user. A video loaded in the virtual memory is deleted or moved into the storage space of the video proxy server depending on the request conditions. In addition, the virtual memory is divided into segment areas in order to store segments efficiently and to avoid fragmentation. The simulation results show that the proposed method performs better than other methods in terms of block hit rate and the number of block deletions.
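The staging behaviour described above might look roughly like the following sketch: a requested video first occupies segment-aligned slots in a staging area (the "virtual memory") and is promoted to proxy storage only if it is requested again while staged. Segment size, capacities, and the promotion rule are assumptions.

```python
# Sketch of a two-level video proxy cache: new videos are staged in a
# segment-aligned buffer first and promoted to proxy storage only if they are
# requested again while staged. Segment size and thresholds are assumed values.
SEGMENT_BLOCKS = 4  # staging area is managed in fixed-size segment slots

class VideoProxyCache:
    def __init__(self, staging_segments, storage_segments):
        self.staging = {}                 # video -> segments occupied
        self.storage = {}
        self.staging_free = staging_segments
        self.storage_free = storage_segments

    def _segments(self, blocks):
        return -(-blocks // SEGMENT_BLOCKS)   # round up to whole segments

    def request(self, video, blocks):
        if video in self.storage:
            return "storage hit"
        if video in self.staging:
            segs = self.staging[video]
            if self.storage_free >= segs:     # promote on re-request
                self.storage[video] = segs
                self.storage_free -= segs
                self.staging_free += self.staging.pop(video)
                return "staging hit (promoted)"
            return "staging hit"
        segs = self._segments(blocks)
        if self.staging_free >= segs:         # stage the new video, segment-aligned
            self.staging[video] = segs
            self.staging_free -= segs
            return "miss (staged)"
        return "miss (bypassed)"

if __name__ == "__main__":
    proxy = VideoProxyCache(staging_segments=8, storage_segments=16)
    for v in ["v1", "v2", "v1", "v3", "v1"]:
        print(v, proxy.request(v, blocks=10))
```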

The Node Scheduling of Multi-Threaded Process for CC-NUMA System (CC-NUMA 시스템을 위한 다중 스레드 프로세스의 노드 스케줄링 설계 및 구현)

  • Kim, Jeong-Nyeo;Kim, Hae-Jin;Lee, Cheol-Hoon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.488-496
    • /
    • 2000
  • This paper describes the design and implementation of node scheduling for MX Server, a CC-NUMA system. COMSIX, the operating system of MX Server, is designed to suit the CC-NUMA architecture. MX Server consists of up to 8 nodes, and the nodes are connected by an SCI ring. The node scheduling scheme considers data locality to improve the performance of the Oracle8i DBMS on the CC-NUMA architecture. For a DBMS such as Oracle8i, a multi-threaded process may be tied to a particular disk. We have developed a CG binding function with which such a multi-threaded process is bound to a node. Currently, a CC-NUMA platform is not available to us, so instead of MX Server we developed the node scheduling scheme for multi-threaded processes on a PC test-bed server platform and tested it completely.
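As a schematic illustration of the locality idea only (not the actual CG binding interface of COMSIX), the sketch below binds a multi-threaded process to the node whose locally attached disk holds its data; the topology and names are invented.

```python
# Schematic sketch of locality-aware node binding: a multi-threaded process is
# bound to the node whose locally attached disk holds the data it works on.
# The topology and the bind step are hypothetical stand-ins for the CG binding
# function described in the abstract.
NODE_LOCAL_DISKS = {          # node -> disks attached to that node (assumed)
    0: {"disk0", "disk1"},
    1: {"disk2", "disk3"},
}

def pick_node(primary_disk):
    """Return the node with local access to the process's primary disk."""
    for node, disks in NODE_LOCAL_DISKS.items():
        if primary_disk in disks:
            return node
    return 0   # fall back to node 0 when no node owns the disk locally

def bind_process(pid, primary_disk):
    node = pick_node(primary_disk)
    # A real implementation would invoke the kernel's node-binding primitive here.
    print(f"binding pid {pid} (data on {primary_disk}) to node {node}")
    return node

if __name__ == "__main__":
    bind_process(pid=1234, primary_disk="disk2")
```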

Design of a Virtual Machine based on the Lua interpreter for the On-Board Control Procedure Execution Environment (탑재운영절차서 실행환경을 위한 Lua 인터프리터 기반의 가상머신 설계)

  • Kang, Sooyeon;Koo, Cheolhea;Ju, Gwanghyeok;Park, Sihyeong;Kim, Hyungshin
    • Journal of Satellite, Information and Communications
    • /
    • v.9 no.4
    • /
    • pp.127-133
    • /
    • 2014
  • In this paper, we present the design, functions, and performance analysis of a virtual machine (VM) based on the Lua interpreter for the On-Board Control Procedure Execution Environment (OEE). Development of the OEE is required in order to operate the lunar explorer mission planned by the Korea Aerospace Research Institute (KARI) autonomously. The concept of the On-Board Control Procedure (OBCP) is already applied to deep space missions with long propagation delays and limited data transmission capacity, since it ensures the autonomy of the mission without ground intervention. The interpreter is the execution engine of the VM: it interprets high-level program code line by line and executes the corresponding VM instructions, so its execution speed is much slower than that of natively compiled code. To mitigate this, we design and implement the OEE using the register-based Lua interpreter as its execution engine. We present experimental results for a range of additional hardware configurations, such as use of the cache and the floating point unit. We expect these results to be utilized for the OBCP scheduling policy and for systems based on the Lua interpreter.
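To illustrate why a register-based interpreter core dispatches fewer instructions than a stack machine for the same expression, here is a toy register VM loop; the opcode set and encoding are invented for illustration and are unrelated to the real Lua bytecode.

```python
# Toy register-based VM loop: each instruction names its operand and result
# registers directly, so 'a = b + c' is a single ADD instead of the push/push/
# add/store sequence a stack machine would execute. Opcodes are invented here
# and are not the real Lua 5.x instruction set.
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, a, b, c = program[pc]
        if op == "LOADK":          # R[a] = constant b
            registers[a] = b
        elif op == "ADD":          # R[a] = R[b] + R[c]
            registers[a] = registers[b] + registers[c]
        elif op == "MUL":          # R[a] = R[b] * R[c]
            registers[a] = registers[b] * registers[c]
        elif op == "RETURN":       # return R[a]
            return registers[a]
        pc += 1
    return None

if __name__ == "__main__":
    # computes (2 + 3) * 10; each arithmetic step is one register instruction
    prog = [
        ("LOADK", 0, 2, None),
        ("LOADK", 1, 3, None),
        ("ADD",   2, 0, 1),
        ("LOADK", 3, 10, None),
        ("MUL",   4, 2, 3),
        ("RETURN", 4, None, None),
    ]
    print(run(prog, registers=[0] * 8))   # -> 50
```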

An Implementation and Performance Evaluation of a RAID System Based on Embedded Linux (내장형 리눅스 기반 RAID 시스템의 구현 및 성능평가)

  • Baek, Sung-Hoon;Park, Chong-Won
    • The KIPS Transactions:PartA
    • /
    • v.9A no.4
    • /
    • pp.451-458
    • /
    • 2002
  • In this article, we design and implement the software and hardware of an embedded RAID system, and present its merits and drawbacks through performance evaluation. The proposed hardware consists of three Fibre Channel controllers for the interface with Fibre Channel disks and hosts. Embedded Linux, in which the RAID software is implemented, is ported to the hardware. A SCSI target-mode device driver and a target-mode SCSI module are designed so that our RAID system appears as a block device to a host computer. Linux Multi-device provides the RAID functions of the system. A data cache module is implemented for high performance and for the interconnection between Linux Multi-device and the target-mode SCSI module. The RAID 5 module of Multi-device is modified to improve read performance. The benchmark shows that the new RAID 5 module is superior to the original one in overall performance.
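For reference, the sketch below shows the XOR parity relation a RAID 5 module relies on: the parity strip is the XOR of the data strips, so a strip lost with a failed disk can be rebuilt from the survivors. The strip layout is a simplified assumption, not the Multi-device implementation.

```python
# XOR parity as used by RAID 5 (simplified, fixed strip size): the parity strip
# is the byte-wise XOR of the data strips, so any single missing strip can be
# rebuilt from the remaining strips plus parity.
from functools import reduce

def xor_strips(strips):
    """Byte-wise XOR of equally sized strips."""
    return bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*strips))

def rebuild_missing(surviving_strips, parity):
    """Recover the one missing data strip from the survivors and the parity."""
    return xor_strips(surviving_strips + [parity])

if __name__ == "__main__":
    d0, d1, d2 = b"AAAA", b"BBBB", b"1234"     # three data strips in one stripe
    parity = xor_strips([d0, d1, d2])
    assert rebuild_missing([d0, d2], parity) == d1   # disk holding d1 failed
    print("rebuilt strip:", rebuild_missing([d0, d2], parity))
```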

Meltdown Threat Dynamic Detection Mechanism using Decision-Tree based Machine Learning Method (의사결정트리 기반 머신러닝 기법을 적용한 멜트다운 취약점 동적 탐지 메커니즘)

  • Lee, Jae-Kyu;Lee, Hyung-Woo
    • Journal of Convergence for Information Technology
    • /
    • v.8 no.6
    • /
    • pp.209-215
    • /
    • 2018
  • In this paper, we propose a method to detect and block Meltdown malicious code, which is increasing rapidly, using a dynamic sandbox tool. Although patches are available for the Meltdown vulnerability, they are sometimes intentionally not applied because of the performance degradation they cause. Therefore, for infrastructures without such patches, we propose a machine learning based method that overcomes the limitations of existing signature detection. First, to understand the principle of Meltdown, we analyze operating system and CPU mechanisms such as virtual memory, memory privilege checks, pipelining and speculative execution, and the CPU cache. We then extract data using the Linux strace tool for detecting Meltdown malware. Finally, we implement a decision-tree based dynamic detection mechanism to identify Meltdown malicious code efficiently.
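As a sketch of the final step, the snippet below trains a scikit-learn decision tree on per-trace feature vectors (for example, counts of selected system calls and signals taken from strace logs); the feature columns and the tiny training set are fabricated placeholders, not the paper's data.

```python
# Minimal sketch of decision-tree classification over strace-derived features.
# The feature columns (hypothetical counts of selected syscalls/signals per
# trace) and the toy labels below are placeholders, not the paper's dataset.
from sklearn.tree import DecisionTreeClassifier

# each row: hypothetical counts of selected syscalls/signals seen in one strace log
X_train = [
    [12,  1,  0],    # benign traces
    [30,  2,  1],
    [15, 40, 25],    # traces with meltdown-like probing behaviour
    [10, 35, 30],
]
y_train = [0, 0, 1, 1]      # 0 = benign, 1 = suspected Meltdown

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

unknown_trace = [[14, 33, 20]]
print("suspected Meltdown" if clf.predict(unknown_trace)[0] == 1 else "benign")
```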

Development of a Distributed File System for Multi-Cloud Rendering (멀티 클라우드 렌더링을 위한 분산 파일 시스템 개발)

  • Bahn, Hyokyung;Cho, Kyungwoon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.1
    • /
    • pp.77-82
    • /
    • 2023
  • Multi-cloud rendering has been attracting attention recently as the computational load of rendering fluctuates over time and each rendering process can be performed independently. However, it is challenging in multi-cloud rendering to deliver large amounts of input data instantly with consistency constraints. In this paper, we develop a new distributed file system for multi-cloud rendering. In our file system, a local machine maintains a file server that manages versions of rendering input files, and each cloud node maintains a rendering cache manager, which performs distributed cooperative caching by considering file versions. Measurement studies with rendering workloads show that the proposed file system performs better than NFS and the uploading schemes by 745% and 56%, respectively, in terms of I/O throughput and execution time.
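The version-aware lookup described above might look roughly like this on a rendering node: a cached copy is served only when its version matches the file server's current version, and peer caches are consulted before falling back to the server. All class and method names here are assumptions, not the system's actual interface.

```python
# Rough sketch of version-aware cooperative caching on a rendering node: a
# cached copy is used only if its version matches the file server's current
# version; otherwise peers are consulted before falling back to the server.
class FileServer:
    """Stands in for the local-machine file server that tracks file versions."""
    def __init__(self):
        self.files = {}          # path -> (version, data)

    def publish(self, path, data):
        version = self.files.get(path, (0, b""))[0] + 1
        self.files[path] = (version, data)

    def current_version(self, path):
        return self.files[path][0]

    def fetch(self, path):
        return self.files[path]

class RenderCacheNode:
    def __init__(self, server, peers=()):
        self.server = server
        self.peers = list(peers)
        self.cache = {}          # path -> (version, data)

    def lookup(self, path):
        want = self.server.current_version(path)
        entry = self.cache.get(path)
        if entry and entry[0] == want:
            return entry[1], "local cache"
        for peer in self.peers:                  # cooperative step
            peer_entry = peer.cache.get(path)
            if peer_entry and peer_entry[0] == want:
                self.cache[path] = peer_entry
                return peer_entry[1], "peer cache"
        entry = self.server.fetch(path)          # last resort: the file server
        self.cache[path] = entry
        return entry[1], "file server"

if __name__ == "__main__":
    server = FileServer()
    server.publish("scene.blend", b"v1 geometry")
    node_a = RenderCacheNode(server)
    node_b = RenderCacheNode(server, peers=[node_a])
    print(node_a.lookup("scene.blend")[1])   # file server
    print(node_b.lookup("scene.blend")[1])   # peer cache
    server.publish("scene.blend", b"v2 geometry")
    print(node_b.lookup("scene.blend")[1])   # file server (stale copies skipped)
```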

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to improve the speed of the models that estimate weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) used in the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on a high-performance multi-core CPU with 8 physical cores and 16 logical threads. Nonetheless, the server cannot be dedicated even to the handling of a single county, indicating that very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. In order to reduce this overhead, several caching and parallelization techniques were applied to measure the performance and to check their applicability. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than that for computation, suggesting the need for a technique that reduces disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them across several servers; however, each server must have its own cache for the input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models such as PRISM with large computation loads.
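A small sketch of the first two ideas reported above: memoizing the loading of input grids so repeated reads avoid disk I/O, and spreading independent per-county work across processes. The file names, grid shapes, and degree-day computation are placeholders, not the system's actual data or code.

```python
# Sketch of two speed-up ideas: (1) cache loaded input grids in memory so
# repeated reads avoid disk I/O, and (2) spread independent per-county work
# across processes. Paths, grid shapes and the degree-day formula are placeholders.
from functools import lru_cache
from multiprocessing import Pool
import numpy as np

@lru_cache(maxsize=32)
def load_grid(path):
    """Load a daily temperature grid once; later calls reuse the cached array."""
    # np.load(path) in a real run; a synthetic grid keeps the sketch self-contained
    rng = np.random.default_rng(hash(path) % (2**32))
    return rng.uniform(5.0, 30.0, size=(100, 100))

def growing_degree_days(county):
    """Accumulate (Tmean - 10C, floored at 0) over a run of daily grids."""
    total = np.zeros((100, 100))
    for day in range(1, 31):
        tmean = load_grid(f"tmean_day{day:02d}.npy")   # hypothetical file name
        total += np.maximum(tmean - 10.0, 0.0)
    return county, float(total.mean())

if __name__ == "__main__":
    counties = [f"county_{i}" for i in range(10)]       # e.g. the Seomjin River Basin
    with Pool(processes=4) as pool:                     # per-county work in parallel
        for county, gdd in pool.map(growing_degree_days, counties):
            print(county, round(gdd, 1))
```

Each worker process keeps its own grid cache, which mirrors the observation above that every server needs its own input-data cache to avoid contending for the same resource.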