• Title/Summary/Keyword: Caching System


AN ADVANCED DISK BLOCK CACHING ALGORITHM FOR DISK I/O SUB-SYSTEM

  • Jung, Soo-Mok;Rho, Kyung-Taeg
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.11 no.3 / pp.43-52 / 2007
  • A hard disk, a form of external storage, is capacious and economical. Despite these attractive characteristics and continued efforts to improve its performance, a hard disk remains far slower than a processor, and its progress has been gradual because its operation is mechanical, whereas processor performance has advanced rapidly along with semiconductor technology. The disk I/O sub-system has therefore become a bottleneck for overall computer system performance, which is why research on the disk I/O sub-system continues. In this paper, we propose a multi-level LRU scheme and apply it to computer systems that have both a buffer cache and a disk cache. Applying the proposed scheme decreases the average access time to disk blocks. The efficiency of the proposed algorithm was verified by simulation results.
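
The abstract names a multi-level LRU scheme but gives no implementation detail, so the following is only a minimal sketch, assuming a two-level hierarchy (a buffer cache in front of a disk cache) in which a block evicted from the upper level is demoted to the lower one; the class and parameter names (LRUCache, TwoLevelLRU, read_from_disk) are illustrative, not taken from the paper.

```python
from collections import OrderedDict

class LRUCache:
    """One LRU level mapping block_id -> data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def get(self, block_id):
        if block_id not in self.blocks:
            return None
        self.blocks.move_to_end(block_id)            # mark as most recently used
        return self.blocks[block_id]

    def pop(self, block_id):
        return self.blocks.pop(block_id, None)

    def put(self, block_id, data):
        """Insert a block; return the evicted (id, data) pair, if any."""
        self.blocks[block_id] = data
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:
            return self.blocks.popitem(last=False)   # drop the least recently used block
        return None

class TwoLevelLRU:
    """Buffer cache (level 1) in front of a disk cache (level 2)."""

    def __init__(self, l1_capacity, l2_capacity, read_from_disk):
        self.l1 = LRUCache(l1_capacity)
        self.l2 = LRUCache(l2_capacity)
        self.read_from_disk = read_from_disk         # fallback when both levels miss

    def read(self, block_id):
        data = self.l1.get(block_id)
        if data is not None:
            return data                              # level-1 hit
        data = self.l2.pop(block_id)                 # level-2 hit: promote the block
        if data is None:
            data = self.read_from_disk(block_id)     # miss in both levels
        demoted = self.l1.put(block_id, data)
        if demoted is not None:
            self.l2.put(*demoted)                    # level-1 victim moves down a level
        return data
```

Under this sketch, hot blocks stay in the RAM-resident first level, while blocks pushed out of it still get a second chance in the disk cache before a full disk access is needed.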


The design of Hybrid Storage System with High-speed Caching (캐싱 기법을 접목시킨 HSS(Hybrid Storage System) 프레임워크 설계)

  • Jae, Eun-kyeung;Jung, Gi-man;Son, Jae-gi;Kim, Young-Hwan
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.327-328 / 2013
  • Recently, as many IT companies build, on the basis of the software and hardware expertise they have accumulated, the IT infrastructure needed to provide cloud computing, competition in the cloud computing market has intensified. Finding ways to deliver large volumes of data to users at high speed has therefore become one of the prominent issues in cloud computing environments. This paper presents a design that integrates high-speed caching into the internal software architecture of an HSS (Hybrid Storage System) so that data can be delivered to users at high speed.

Regular File Access of Embedded System Using Flash Memory as a Storage (플래시 메모리를 저장매체로 사용하는 임베디드 시스템에서의 정규파일 접근)

  • 이은주;박현주
    • Journal of Information Technology Applications and Management / v.11 no.1 / pp.189-200 / 2004
  • Recently, flash memory, which is small and consumes little power, has been widely used as the storage of embedded systems, because embedded systems require portability and fast response. To bridge the access-time gap between storage and RAM, Linux uses disk caching, which copies parts of on-disk files into RAM, and embedded systems are no exception. However, the READ access time of flash memory is close to that of RAM, so for a process on an embedded system, reading cached data in RAM takes about as long as reading the data directly from flash memory. On an embedded system with limited memory, maintaining a disk cache therefore wastes time and memory space on cache management and does not reflect the characteristics of flash memory. This paper proposes regular-file access that limits use of the page cache in a flash-memory-based file system and reflects the characteristics of flash memory. The proposed algorithm reduces the number of RAM accesses, minimizing power consumption, and does not waste memory space because it accesses the flash memory directly. A performance improvement is therefore expected in systems that apply the proposed algorithm.
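
A rough sketch only of the read path described above, assuming (this split is not spelled out in the abstract) that read-only pages bypass the page cache while pages that may be written are still buffered; the names FlashAwareReader and read_page and the flash_device/page_cache parameters are illustrative.

```python
class FlashAwareReader:
    """Read-only pages are fetched straight from flash instead of being copied
    into a RAM page cache, because a flash READ costs roughly as much as a RAM
    access; pages that may be written still go through the cache, since flash
    writes remain expensive."""

    def __init__(self, flash_device, page_cache):
        self.flash = flash_device    # assumed to expose read_page(page_no) -> bytes
        self.cache = page_cache      # dict-like: page_no -> bytes

    def read_page(self, page_no, writable=False):
        if writable:
            if page_no not in self.cache:            # buffer pages that may change
                self.cache[page_no] = self.flash.read_page(page_no)
            return self.cache[page_no]
        # Read-only access bypasses the page cache, avoiding the RAM copy and the
        # cache-management overhead the abstract describes as wasted effort.
        return self.flash.read_page(page_no)
```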


Web Service Proxy Architecture using WS-Eventing for Reducing SOAP Traffic

  • Terefe, Mati Bekuma;Oh, Sangyoon
    • Journal of Information Technology and Architecture / v.10 no.2 / pp.159-167 / 2013
  • Web Services offer many benefits over other types of middleware in distributed computing. However, Web Services consume large amounts of network bandwidth because they use an XML-based protocol that is heavier than binary protocols. Although there has been much research on minimizing the network traffic and bandwidth usage of Web Services messages, none of it has solved the problem completely. In this paper, we propose a transparent proxy with a cache that avoids the repeated transfer of identical SOAP data sent by a Web Service to an application. To maintain cache consistency, we introduce a publish/subscribe paradigm between the proxy and the Web Service using WS-Eventing. A system implemented on the proposed architecture does not compromise Web Service standards. Our evaluation shows that caching SOAP messages not only reduces network traffic but also decreases request delays.
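
A minimal sketch of the kind of caching proxy the abstract describes, with the WS-Eventing subscription reduced to a single notification callback; the names CachingSoapProxy, forward_request, and on_notification are illustrative, and SOAP parsing is omitted.

```python
import hashlib

class CachingSoapProxy:
    """Serves repeated SOAP responses from a local cache; cached entries are
    dropped when the Web Service publishes a change notification (the actual
    WS-Eventing subscription handshake is abstracted away here)."""

    def __init__(self, forward_request):
        self.forward_request = forward_request   # callable(request bytes) -> response bytes
        self.cache = {}                          # request digest -> cached response

    @staticmethod
    def _key(soap_request):
        return hashlib.sha256(soap_request).hexdigest()

    def handle_request(self, soap_request):
        key = self._key(soap_request)
        if key in self.cache:
            return self.cache[key]               # hit: the service is not contacted
        response = self.forward_request(soap_request)
        self.cache[key] = response               # remember for identical future requests
        return response

    def on_notification(self, affected_keys=None):
        """Invoked when the service publishes an update via WS-Eventing."""
        if affected_keys is None:
            self.cache.clear()                   # coarse-grained invalidation
        else:
            for key in affected_keys:
                self.cache.pop(key, None)
```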

An Enhanced Searching Algorithm over Unstructured Mobile P2P Overlay Networks

  • Shah, Babar;Kim, Ki-Il
    • Journal of information and communication convergence engineering / v.11 no.3 / pp.173-178 / 2013
  • To discover objects of interest in unstructured peer-to-peer networks, peers rely on flooding query messages, which creates enormous network traffic. This article evaluates the performance of an unstructured Gnutella-like protocol over mobile ad-hoc networks and proposes modifications to improve it. The paper presents an enhanced mechanism for an unstructured Gnutella-like network with improved peer features to better meet the mobility requirements of ad-hoc networks. The proposed system introduces a novel caching optimization technique and an enhanced ultrapeer selection scheme to make communication between peers and ultrapeers more efficient. The paper also describes an enhanced query mechanism for efficient searching that applies multiple-walker random walks with a jump and replication technique. According to the simulation results, the proposed system yields better performance than Gnutella, XL-Gnutella, and random walk in terms of query success rate, query response time, network load, and overhead.
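
A compact sketch, under simplifying assumptions, of a multiple-walker random walk with jump and replication; the walkers run one after another here, and the peer attributes (local_objects, cache, neighbours, known_peers) are illustrative rather than the paper's data structures.

```python
import random

def random_walk_search(start_peer, target_key, walkers=4, ttl=16, jump_prob=0.1):
    """Each walker usually steps to a random neighbour, occasionally jumps to a
    random known peer, and stops when it finds the object or its TTL runs out;
    every peer on a successful walk caches (replicates) the result."""
    for _ in range(walkers):
        peer, path = start_peer, []
        for _ in range(ttl):
            path.append(peer)
            if target_key in peer.local_objects or target_key in peer.cache:
                obj = peer.local_objects.get(target_key) or peer.cache.get(target_key)
                for visited in path:                     # replicate along the walk
                    visited.cache[target_key] = obj
                return obj
            if peer.known_peers and random.random() < jump_prob:
                peer = random.choice(peer.known_peers)   # long-range jump
            elif peer.neighbours:
                peer = random.choice(peer.neighbours)    # ordinary random step
            else:
                break                                    # dead end, try the next walker
    return None                                          # all walkers failed
```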

A Caching Scheme of Meta-Information for the Linux Cluster File System (리눅스 클러스터 파일 시스템을 위한 메타정보 캐쉬 기법)

  • 홍재연;김형식
    • Proceedings of the Korean Information Science Society Conference / 2002.10c / pp.316-318 / 2002
  • Cluster file systems provide high availability and fault tolerance and scale well, so their range of applications, such as multimedia services, has been expanding. The single system image commonly provided by cluster systems lets directories and files be accessed regardless of where they are stored, but access times vary with the actual storage location. This paper proposes a method for shortening access times in a cluster file system by caching meta-information about files and directories. We present cache placement and replacement schemes suited to the access patterns of files and directories, and give an algorithm for maintaining cache consistency. The proposed method is expected to shorten access times effectively in applications such as multimedia services.
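
A minimal sketch of an invalidation-based meta-information cache of the kind the paper describes; the paper's specific placement and replacement policies are not reproduced here, and the names MetaCache and fetch_remote_meta are illustrative.

```python
class MetaCache:
    """Per-node cache of file/directory meta-information: entries are cached on
    first lookup and dropped when another node reports a change, a simple
    invalidation-based consistency scheme."""

    def __init__(self, fetch_remote_meta, capacity=1024):
        self.fetch_remote_meta = fetch_remote_meta   # callable(path) -> metadata, assumed
        self.capacity = capacity
        self.entries = {}                            # path -> cached metadata

    def lookup(self, path):
        meta = self.entries.get(path)
        if meta is None:
            meta = self.fetch_remote_meta(path)      # ask the node that stores it
            if len(self.entries) >= self.capacity:
                self.entries.pop(next(iter(self.entries)))   # naive FIFO eviction
            self.entries[path] = meta
        return meta

    def invalidate(self, path):
        """Called when an update notification for this path arrives."""
        self.entries.pop(path, None)
```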


An Efficient Caching Strategy in Data Broadcasting (데이터 방송 환경에서의 효율적인 캐슁 정책)

  • Kim, Su-Yeon;Choe, Yang-Hui
    • Journal of KIISE: Computer Systems and Theory / v.26 no.12 / pp.1476-1484 / 1999
  • To provide richer information and interactivity, television broadcasters have recently begun to transmit supplementary digital data in addition to the traditional content (the A/V stream). Such broadcast data need to be cached by the client system to provide a reasonable response time for user requests. Most previous studies on data broadcasting assume that a fixed set of items is disseminated, so their results are unsuitable when the broadcast items change frequently. For environments where data accesses are unlikely to repeat and the probability distribution of user accesses is hard to predict, this paper proposes a client-side cache management scheme that admits every received page into the cache unconditionally and, when replacement is needed, evicts the page whose next broadcast instance is closest in time. The proposed policy reduces expected response time by minimizing the expected cache miss penalty, and it can be applied without difficulty in environments where broadcast providers use different scheduling algorithms.
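
Because the abstract states the policy directly (admit every received page, evict the page whose next broadcast is nearest), it can be sketched in a few lines; the next_broadcast_time function stands in for the broadcast schedule and is an assumption of this sketch.

```python
def choose_victim(cache, now, next_broadcast_time):
    """Pick the cached page whose next broadcast instance is closest in time:
    it is the cheapest page to lose, because it will be rebroadcast soonest."""
    return min(cache, key=lambda page: next_broadcast_time(page, now))

def on_page_received(cache, capacity, page, data, now, next_broadcast_time):
    """Unconditional admission: every received page enters the cache, and a
    victim is evicted only when the cache is already full."""
    if page not in cache and len(cache) >= capacity:
        del cache[choose_victim(cache, now, next_broadcast_time)]
    cache[page] = data
```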

A Study on the Improvement of Military Information Communication Network Efficiency Using CCN (CCN을 활용한 군 정보통신망 효율성 향상 방안)

  • Kim, Hui-Jung;Kwon, Tae-Wook
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.5 / pp.799-806 / 2020
  • The rapid growth of smartphone and Internet of Things (IoT) connections and the explosive, mobile-video-centered demand for data increase day by day, and this growth in data usage creates many problems for the IP-based system. In a pull-based environment, where information requesters must converge on the specific servers of information providers to receive information, bottlenecks and heavy data-processing loads arise. CCN, a future networking technology, has emerged as an alternative: by caching content at intermediate nodes, it reduces the bottlenecks that occur when popular content is requested and increases network efficiency. Applying CCN to military information and communication networks can relieve the traffic concentration of such pull-based networks, which operate many kinds of surveillance equipment such as scientific surveillance systems, and deliver content more efficiently.
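
A toy sketch of the in-network caching behaviour CCN relies on, reduced to a content store plus an upstream forwarding callback; Pending Interest Table and FIB handling are omitted, and the names CcnNode and forward_interest are illustrative.

```python
class CcnNode:
    """A node answers an Interest from its local content store when it can;
    otherwise it forwards the Interest upstream and caches the returned Data,
    so later requests for popular content are served inside the network."""

    def __init__(self, forward_interest):
        self.content_store = {}                  # content name -> data
        self.forward_interest = forward_interest # callable(name) -> data, assumed

    def on_interest(self, name):
        data = self.content_store.get(name)
        if data is not None:
            return data                          # in-network cache hit
        data = self.forward_interest(name)       # go toward the content producer
        self.content_store[name] = data          # cache on the return path
        return data
```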

Parallel Multithreaded Processing for Data Set Summarization on Multicore CPUs

  • Ordonez, Carlos;Navas, Mario;Garcia-Alvarado, Carlos
    • Journal of Computing Science and Engineering / v.5 no.2 / pp.111-120 / 2011
  • Data mining algorithms should exploit new hardware technologies to accelerate computations. Such a goal is difficult to achieve in a database management system (DBMS) due to its complex internal subsystems and because data mining numeric computations on large data sets are difficult to optimize. This paper explores taking advantage of the existing multithreaded capabilities of multicore CPUs, as well as caching in RAM, to efficiently compute summaries of a large data set, a fundamental data mining problem. We introduce parallel algorithms working on multiple threads, which overcome the row aggregation processing bottleneck of accessing secondary storage while maintaining linear time complexity with respect to data set size. Our proposal is based on a combination of table scans and parallel multithreaded processing among multiple cores in the CPU. We introduce several database-style and hardware-level optimizations: caching row blocks of the input table, managing available RAM, interleaving I/O and CPU processing, and tuning the number of working threads. We experimentally benchmark our algorithms with large data sets on a DBMS running on a computer with a multicore CPU. We show that our algorithms outperform existing DBMS mechanisms in computing aggregations of multidimensional data summaries, especially as dimensionality grows. Furthermore, we show that local memory allocation (RAM block size) does not have a significant impact when the thread management algorithm distributes the workload among a fixed number of threads. Our proposal is unique in the sense that we do not modify or require access to the DBMS source code; instead, we extend the DBMS with analytic functionality by developing User-Defined Functions.
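
A simplified sketch of the block-wise parallel summarization pattern described above, computing common sufficient statistics (row count, per-column sums and sums of squares) per row block and merging the partial results; the paper implements this inside the DBMS with User-Defined Functions, whereas Python threads are used here only to show the structure of the computation.

```python
from concurrent.futures import ThreadPoolExecutor

def block_summary(block):
    """Partial aggregates for one block of rows; they merge associatively."""
    n, sums, squares = 0, None, None
    for row in block:
        if sums is None:
            sums = [0.0] * len(row)
            squares = [0.0] * len(row)
        n += 1
        for j, value in enumerate(row):
            sums[j] += value
            squares[j] += value * value
    return n, sums, squares

def summarize(row_blocks, threads=4):
    """Summarize row blocks in worker threads and merge the partial results."""
    total_n, total_sums, total_squares = 0, None, None
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for n, sums, squares in pool.map(block_summary, row_blocks):
            if n == 0:
                continue
            if total_sums is None:
                total_n, total_sums, total_squares = n, sums[:], squares[:]
            else:
                total_n += n
                for j in range(len(sums)):
                    total_sums[j] += sums[j]
                    total_squares[j] += squares[j]
    return total_n, total_sums, total_squares
```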

Performance Evaluation of TCP over Wireless Links (무선 링크에서의 TCP 성능 평가)

  • Park, Jin-Young;Chae, Ki-Joon
    • Journal of KIISE: Information Networking / v.27 no.2 / pp.160-174 / 2000
  • Nowadays, TCP, the most widely used transport protocol, is tuned to perform well in traditional networks where packet losses occur mostly because of congestion. TCP performs reliable end-to-end packet transmission under the assumption of a low packet error rate. Networks with wireless links, however, suffer from significant losses due to high error rates and handoffs. TCP responds to all losses by invoking its congestion control and avoidance algorithms, resulting in inefficient use of network bandwidth and degraded end-to-end performance. Several methods have been proposed to solve this problem. In this paper, we analyse and compare these methods and propose an appropriate model for improving TCP performance in networks with wireless links. The model uses the TCP selective acknowledgement (SACK) option between the TCP end points together with a caching method at the base station. Our simulation results show that using the TCP SACK option with base-station caching significantly reduces unnecessary duplicate retransmissions and recovers from packet losses effectively.
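
A rough, snoop-style sketch of what base-station caching with local loss recovery could look like; the paper's exact mechanism is not reproduced, and the names BaseStationCache and send_to_mobile as well as the three-duplicate-ACK threshold are assumptions of this sketch.

```python
class BaseStationCache:
    """Segments heading to the mobile host are cached at the base station;
    repeated ACKs from the wireless side trigger a local retransmission so the
    fixed-network sender does not see the wireless loss and shrink its window."""

    def __init__(self, send_to_mobile):
        self.send_to_mobile = send_to_mobile   # callable(seq, segment), assumed
        self.segments = {}                     # seq -> cached segment
        self.dup_acks = {}                     # ack number -> times seen

    def on_segment_from_sender(self, seq, segment):
        self.segments[seq] = segment           # cache before forwarding
        self.send_to_mobile(seq, segment)

    def on_ack_from_mobile(self, ack):
        """Return True if the ACK should be forwarded on to the sender."""
        for seq in [s for s in self.segments if s < ack]:
            del self.segments[seq]             # acknowledged, no longer needed
        self.dup_acks[ack] = self.dup_acks.get(ack, 0) + 1
        if self.dup_acks[ack] >= 3 and ack in self.segments:
            # Several ACKs for the same sequence number suggest a loss on the
            # wireless link: retransmit locally and hide the loss from the sender.
            self.send_to_mobile(ack, self.segments[ack])
            return False
        return True
```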
