• Title/Summary/Keyword: Cache System

A Divided Scope Web Cache Replacement Technique Based on Object Reference Characteristics (객체 참조 특성 기반의 분할된 영역 웹 캐시 대체 기법)

  • Ko, Il-Seok;Leem, Chun-Seong;Na, Yun-Ji;Cho, Dong-Wook
    • The KIPS Transactions:PartC / v.10C no.7 / pp.879-884 / 2003
  • Web caches are generally used to improve the performance of web-based systems, and the replacement technique has a great influence on cache performance. A web cache replacement technique differs from memory-level replacement in that the unit of replacement is the web object, whose user reference characteristics vary greatly. A web cache replacement technique should therefore reflect these object reference characteristics, but existing web caching techniques do not do so sufficiently. The main concerns of this study are reference characteristic analysis, improvement of the object hit ratio, and improvement of response time. We first analyze the reference characteristics of web objects through log analysis, and then divide the web cache storage area according to the results of that analysis. The experimental results confirm that the proposed technique improves both the object hit ratio and the response time compared with a conventional technique.
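
The abstract does not give the exact partitioning rule, so the following is only a minimal sketch of the idea, assuming objects are pre-classified by their observed reference characteristics and each class gets its own fixed-size cache region that is replaced independently with LRU; the class DividedWebCache and the region_sizes parameter are illustrative names, not the authors' implementation.

```python
from collections import OrderedDict

class DividedWebCache:
    """Web cache whose storage is split into regions, one per object class.

    The class of an object (derived from its reference characteristics)
    selects the region; each region is replaced independently with LRU.
    """

    def __init__(self, region_sizes):
        # region_sizes: {class_name: capacity in bytes}
        self.capacity = dict(region_sizes)
        self.regions = {c: OrderedDict() for c in region_sizes}  # url -> size
        self.used = {c: 0 for c in region_sizes}

    def get(self, cls, url):
        region = self.regions[cls]
        if url in region:
            region.move_to_end(url)          # refresh LRU position on a hit
            return True
        return False

    def put(self, cls, url, size):
        region, cap = self.regions[cls], self.capacity[cls]
        while self.used[cls] + size > cap and region:
            _, victim_size = region.popitem(last=False)   # evict LRU object
            self.used[cls] -= victim_size
        if size <= cap:
            region[url] = size
            self.used[cls] += size
```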

Improving QoS using Cellular-IP/PRC in Hospital Wireless Network

  • Kim, Sung-Hong
    • The Journal of the Korea institute of electronic communication sciences / v.1 no.2 / pp.120-126 / 2006
  • In this paper, we propose a method for improving QoS in a hospital wireless network using Cellular-IP/PRC (Paging Route Cache), which integrates the Paging Cache and Route Cache of Cellular-IP. Although Cellular-IP/PRC technology was devised for mobile Internet communication, it is vulnerable in environments with frequent handoffs. A handoff state machine using differentiated handoff improves the quality of service in Cellular-IP/PRC, and the suggested algorithm shows better performance than the existing technology in a wireless mobile Internet environment. The PC (Paging Cache) and RC (Route Cache), which were used separately to manage paging and routing in the wireless Internet network, are combined into a single PRC (Paging Route Cache) that is managed in an integrated way at all nodes, and a handoff state machine is added to each mobile node so that its handoff and roaming states can be managed efficiently. A new call admission method at the home base station is also considered, based on estimates of the transmit power of mobile nodes: it judges from the downlink whether capacity is limited and admits or blocks a new call accordingly. Factors that influence traffic in the system environment are analyzed; the utilization and imbalance of each link are forecast; and the call blocking probability, call dropping probability, GoS (Grade of Service), and cell capacity efficiency are evaluated, showing that the proposed algorithm improves QoS.
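
The abstract only names the components, so the following is a minimal sketch, assuming a single PRC table per node that holds both paging and routing entries plus a per-mobile handoff state machine; all class, state, and field names are illustrative, not taken from the paper.

```python
from enum import Enum, auto

class HandoffState(Enum):
    ACTIVE = auto()     # mobile has an up-to-date route entry
    HANDOFF = auto()    # differentiated handoff in progress
    ROAMING = auto()    # idle mobile tracked only by paging information

class PagingRouteCache:
    """Single per-node cache replacing the separate PC and RC of Cellular-IP."""

    def __init__(self):
        self.entries = {}   # mobile_id -> {"downlink_port": ..., "state": ...}

    def update_route(self, mobile_id, downlink_port):
        # Data or route-update packets refresh the routing information.
        self.entries[mobile_id] = {"downlink_port": downlink_port,
                                   "state": HandoffState.ACTIVE}

    def start_handoff(self, mobile_id):
        if mobile_id in self.entries:
            self.entries[mobile_id]["state"] = HandoffState.HANDOFF

    def page(self, mobile_id):
        # Paging uses the same integrated table instead of a separate PC.
        entry = self.entries.get(mobile_id)
        return entry["downlink_port"] if entry else None
```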

A Technique of Replacing XML Semantic Cache (XML 시맨틱 캐쉬의 교체 기법)

  • Hong, Jung-Woo;Kang, Hyun-Chul
    • The Journal of Society for e-Business Studies / v.12 no.3 / pp.211-234 / 2007
  • In e-business, XML is a major data format, and it is essential to process queries against XML data efficiently. XML query caching has received much attention for improving query performance, and employing it requires an efficient cache replacement technique. Previous techniques took as the replacement unit either the whole query result or a path in the query result. The former is simple to employ but inefficient, whereas the latter is more efficient but the size difference among potential victims is large, which limits caching efficiency. In this paper, we propose a new technique in which the element in the query result is the replacement unit, overcoming the limitations of the previous techniques. The proposed technique can enhance cache efficiency to a great extent because it does not pick a victim whose size is too large for the new cached item, the variance in victim size is small, and the unused space in the cache storage remains small. A technique of XML semantic cache replacement is presented, based on a replacement function that takes into account the cache hit ratio, last access time, fetch time, size of the XML semantic region, size of the element in the XML semantic region, and so on. We implemented a prototype XML semantic cache system that employs the proposed technique and conducted a detailed set of experiments over a LAN environment. The experimental results show that the proposed technique outperforms the previous ones.
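
The abstract lists the factors of the replacement function but not its exact form, so the following is only a minimal sketch, assuming a score in which a low hit count, an old last-access time, a cheap re-fetch, and a large element size (relative to its semantic region) make an element a better victim; the weighting and field names are illustrative.

```python
import time
from dataclasses import dataclass, field

@dataclass
class CachedElement:
    """One element of a cached XML query result (the replacement unit)."""
    element_id: str
    size: int                 # bytes occupied in the semantic region
    region_size: int          # size of the enclosing XML semantic region
    fetch_time: float         # seconds it took to fetch from the server
    hits: int = 0
    last_access: float = field(default_factory=time.time)

def victim_score(e: CachedElement, now: float) -> float:
    """Higher score = better eviction candidate (illustrative weighting)."""
    age = now - e.last_access
    # Prefer victims that are rarely hit, long unused, cheap to re-fetch,
    # and large relative to their semantic region.
    return (age * (e.size / e.region_size)) / (1.0 + e.hits + e.fetch_time)

def choose_victims(elements, needed_bytes):
    """Pick just enough elements, best victims first, to free needed_bytes."""
    now = time.time()
    freed, victims = 0, []
    for e in sorted(elements, key=lambda x: victim_score(x, now), reverse=True):
        if freed >= needed_bytes:
            break
        victims.append(e)
        freed += e.size
    return victims
```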

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters / v.13 no.5 / pp.293-299 / 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along inside a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvatures, and other potential causes of gas explosions. Two major data access patterns appear when an analyzer accesses the pipeline signal data. The first is a sequential pattern, where the analyzer reads the sensor data only once in a sequential fashion. The second is a repetitive pattern, where the analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern when analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well for the sequential pattern, but not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, by treating pipeline sensor data as multiple time-series and caching the time-series data efficiently in T-Cache. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of a signal cache line as the caching unit, which is a set of time-series signal data for a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that uses no caching, indicating that the caching overhead of T-Cache is negligible.
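
The abstract defines the signal cache line only as "a set of time-series signal data for a fixed distance", so the following is a minimal sketch under that assumption: the pipeline is divided into fixed-length distance blocks, each block's samples form one cache line, and lines are kept client-side under LRU. The line length, the fetch callback, and all names are illustrative rather than the paper's actual interface.

```python
from collections import OrderedDict

LINE_LENGTH_M = 10.0      # assumed fixed distance covered by one cache line

class TCache:
    """Client-side cache of signal cache lines keyed by distance block."""

    def __init__(self, fetch_from_server, max_lines=256):
        self.fetch = fetch_from_server          # (start_m, end_m) -> list of samples
        self.max_lines = max_lines
        self.lines = OrderedDict()              # block index -> samples

    def _line(self, block):
        if block in self.lines:
            self.lines.move_to_end(block)       # LRU refresh on repetitive reads
            return self.lines[block]
        start = block * LINE_LENGTH_M
        data = self.fetch(start, start + LINE_LENGTH_M)   # one server round trip
        self.lines[block] = data
        if len(self.lines) > self.max_lines:
            self.lines.popitem(last=False)      # evict least recently used line
        return data

    def read_range(self, start_m, end_m):
        """Return all cached samples covering [start_m, end_m)."""
        first = int(start_m // LINE_LENGTH_M)
        last = int((end_m - 1e-9) // LINE_LENGTH_M)
        out = []
        for block in range(first, last + 1):
            out.extend(self._line(block))
        return out
```

Repeated calls to read_range over the same fixed range (the repetitive pattern) hit only the in-memory lines, which is the behavior the paper's experiments measure.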

Design of Global Buffer Manager in SAN-based Cluster File Systems (SAN 환경의 대용량 클러스터 파일 시스템을 위한 광역 버퍼 관리기의 설계)

  • Lee, Kyu-Woong
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.11 / pp.2404-2410 / 2011
  • This paper describes the design overview of the cluster file system SANique™, which is based on a SAN (Storage Area Network) environment. The design issues and problems of a conventional global buffer manager under a large set of clustered computing hosts are also illustrated. We propose an efficient global buffer management method that provides greater scalability and availability. In the proposed method, we reuse the list of lock information maintained by our cluster lock manager: the global buffer manager can easily find and determine the location of a requested data block cache based on that lock information. We present the pseudo code of the global buffer manager and illustrate global cache operation in a cluster environment.
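
The paper's pseudo code is not reproduced in the abstract, so the following is only a minimal sketch of the idea it describes, assuming the cluster lock manager records which host holds a lock on each block and the global buffer manager consults that table before touching the SAN disk; all names and callbacks are illustrative.

```python
class ClusterLockManager:
    """Tracks, per data block, which host currently holds a lock on it."""

    def __init__(self):
        self.lock_table = {}            # block_id -> host_id

    def holder(self, block_id):
        return self.lock_table.get(block_id)

class GlobalBufferManager:
    """Locates a cached block by reusing the lock manager's information."""

    def __init__(self, lock_manager, local_host, fetch_remote, read_disk):
        self.locks = lock_manager
        self.local_host = local_host
        self.local_cache = {}               # block_id -> bytes
        self.fetch_remote = fetch_remote    # (host_id, block_id) -> bytes
        self.read_disk = read_disk          # block_id -> bytes

    def get_block(self, block_id):
        holder = self.locks.holder(block_id)
        if holder == self.local_host and block_id in self.local_cache:
            return self.local_cache[block_id]           # local cache hit
        if holder is not None and holder != self.local_host:
            return self.fetch_remote(holder, block_id)  # served from a remote cache, no disk I/O
        data = self.read_disk(block_id)                 # no one caches it: read from SAN disk
        self.local_cache[block_id] = data
        return data
```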

A Caching Mechanism for Remote Queries in Distributed Directory Systems (분산 디렉토리 시스템에서의 원격 질의에 대한 캐싱 기법)

  • Lee, Kang-Woo
    • The Transactions of the Korea Information Processing Society / v.7 no.1 / pp.50-56 / 2000
  • In this paper, to improve the speed of query processing in a distributed directory system, we propose a caching mechanism that stores remote queries and their result objects in the cache of the local site. First, the cached information stored in the distributed directory system is classified into application data and system data, and the cache system architecture is designed according to this classification. Second, a least-TTL replacement mechanism, which uses a value weighted by geographical information and access frequency, is developed for each cache. Finally, performance is evaluated by comparing the proposed caching mechanism with other mechanisms (LRU and LFU replacement). Our least-TTL mechanism shows a performance improvement of 25% over LRU and 30% over LFU.
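
The abstract names the weighting factors without giving the exact formula, so the following is only a minimal sketch, assuming the victim is the cached query with the smallest weighted score combining remaining TTL, geographical distance to the origin site, and access frequency; the weights and structure are illustrative.

```python
import time

class LeastTTLCache:
    """Remote-query cache replaced by a weighted least-TTL policy (illustrative)."""

    def __init__(self, capacity, w_ttl=1.0, w_dist=1.0, w_freq=1.0):
        self.capacity = capacity
        self.w = (w_ttl, w_dist, w_freq)
        self.entries = {}   # query -> {"result", "expires", "distance", "hits"}

    def _score(self, e, now):
        w_ttl, w_dist, w_freq = self.w
        remaining_ttl = max(0.0, e["expires"] - now)
        # Low score = cheap to lose: about to expire, nearby origin, rarely used.
        return w_ttl * remaining_ttl + w_dist * e["distance"] + w_freq * e["hits"]

    def get(self, query):
        e = self.entries.get(query)
        if e and e["expires"] > time.time():
            e["hits"] += 1
            return e["result"]
        return None

    def put(self, query, result, ttl, distance):
        now = time.time()
        if query not in self.entries and len(self.entries) >= self.capacity:
            victim = min(self.entries, key=lambda q: self._score(self.entries[q], now))
            del self.entries[victim]
        self.entries[query] = {"result": result, "expires": now + ttl,
                               "distance": distance, "hits": 0}
```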

Eager Data Transfer Mechanism for Reducing Communication Latency in User-Level Network Protocols

  • Won, Chul-Ho;Lee, Ben;Park, Kyoung;Kim, Myung-Joon
    • Journal of Information Processing Systems / v.4 no.4 / pp.133-144 / 2008
  • Clusters have become a popular alternative for building high-performance parallel computing systems. Today's high-performance system area network (SAN) protocols, such as VIA and IBA, significantly reduce user-to-user communication latency by implementing protocol stacks outside the operating system kernel. However, emerging parallel applications require a further significant improvement in communication latency. Since the time required to transfer data between host memory and the network interface (NI) makes up a large portion of the overall communication latency, reducing data transfer time is crucial for achieving low-latency communication. In this paper, an Eager Data Transfer (EDT) mechanism is proposed to reduce the time for data transfers between the host and the network interface. EDT employs cache coherence interface hardware to transfer data directly between the host and the NI. An EDT-based network interface was modeled and simulated on the Linux-based complete system simulation environment Linux/SimOS. Our simulation results show that the EDT approach significantly reduces the data transfer time compared to DMA-based approaches. The EDT-based NI attains a 17% to 38% reduction in user-to-user message time compared to cache-coherent DMA-based NIs for a range of message sizes (64 bytes to 4 Kbytes) in a SAN environment.
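
The abstract describes a hardware mechanism rather than an algorithm, so the following is only a toy latency model of the idea, assuming that with EDT the host-to-NI copy is taken off the send critical path because data is staged at the NI (via the cache coherence interface) while the host writes it; every parameter and number below is illustrative, not from the paper's measurements.

```python
def send_latency_dma(msg_bytes, copy_ns_per_byte, sw_overhead_ns, wire_ns):
    # DMA-based NI: the host-to-NI copy happens after send() is issued,
    # so it sits on the critical path of the message.
    return msg_bytes * copy_ns_per_byte + sw_overhead_ns + wire_ns

def send_latency_edt(sw_overhead_ns, wire_ns):
    # EDT-style NI: data was already moved to the NI as the host produced it,
    # so only protocol overhead and wire time remain on the critical path.
    return sw_overhead_ns + wire_ns

if __name__ == "__main__":
    for size in (64, 1024, 4096):
        dma = send_latency_dma(size, copy_ns_per_byte=2, sw_overhead_ns=500, wire_ns=800)
        edt = send_latency_edt(sw_overhead_ns=500, wire_ns=800)
        print(f"{size:5d} B  DMA {dma:6d} ns  EDT {edt:6d} ns")
```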

Cache Replacement Policies Considering Small-Writes and Reference Counts for Software RAID Systems (소프트웨어 RAID 파일 시스템에 작은 쓰기와 참조 횟수를 고려한 캐쉬 교체 정책)

  • Kim, Jong-Hoon;Noh, Sam-Hyuk;Won, Yoo-Hun
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2849-2860 / 1997
  • In this paper, we present efficient cache replacement policies for a software RAID file system. Their performance is compared with two other policies previously proposed for conventional file systems and adapted to the software RAID file system. As in hardware RAID systems, we found small writes to be the performance bottleneck in software RAID file systems, and we propose cache replacement policies to tackle this small-write problem. Using trace-driven simulations, we show that the proposed policies improve performance in terms of the average response time and the average system busy time.
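
The abstract names the two factors (small writes and reference counts) without the exact policy, so the following is only a minimal sketch, assuming that dirty blocks whose eviction would force a partial-stripe (small) write are kept longer and that blocks with low reference counts are evicted first; the stripe layout, penalty weight, and names are illustrative.

```python
class SoftwareRaidCache:
    """Block cache whose eviction penalizes small-write victims (illustrative)."""

    def __init__(self, capacity, stripe_blocks=4, small_write_penalty=10.0):
        self.capacity = capacity
        self.stripe_blocks = stripe_blocks          # data blocks per RAID stripe
        self.penalty = small_write_penalty
        self.blocks = {}    # block_id -> {"refs": int, "dirty": bool}

    def _dirty_in_stripe(self, block_id):
        stripe = block_id // self.stripe_blocks
        return sum(1 for b, e in self.blocks.items()
                   if b // self.stripe_blocks == stripe and e["dirty"])

    def _eviction_cost(self, block_id):
        e = self.blocks[block_id]
        cost = e["refs"]                            # frequently referenced: keep
        if e["dirty"] and self._dirty_in_stripe(block_id) < self.stripe_blocks:
            cost += self.penalty                    # flushing it now means a small write
        return cost

    def access(self, block_id, write=False):
        if block_id not in self.blocks:
            if len(self.blocks) >= self.capacity:
                victim = min(self.blocks, key=self._eviction_cost)
                del self.blocks[victim]             # (flush to the RAID would happen here)
            self.blocks[block_id] = {"refs": 0, "dirty": False}
        entry = self.blocks[block_id]
        entry["refs"] += 1
        entry["dirty"] = entry["dirty"] or write
```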

Sharing Pattern Analysis of an OLTP Application

  • Lee, Kangwoo;Kim, Hiecheol
    • Journal of Korea Society of Industrial Information Systems / v.7 no.5 / pp.121-128 / 2002
  • Although multiprocessor systems have been widely used in recent years to run commercial workloads, their data sharing patterns are rarely explored due to several difficulties. In this paper, we perform an in-depth sharing pattern analysis of a representative OLTP application, the TPC-B benchmark, running on a cache-coherent shared-memory multiprocessor system. In addition, to illustrate the effects on performance, the number of cache misses was measured for various numbers of processors, cache sizes, and cache block sizes. From these measurements, we found that the shared data in TPC-B exhibit sharing characteristics quite different from those of scientific applications.

Hybrid Buffer Replacement Scheme Considering Reference Pattern in Multimedia Storage Systems (멀티미디어 저장 시스템에서 참조 유형을 고려한 혼성 버퍼 교체 기법)

  • Ryu, Yeon-Seung
    • Journal of Korea Multimedia Society / v.5 no.1 / pp.47-56 / 2002
  • Previous buffer cache schemes for multimedia storage systems exploited only the sequential references of multimedia files and did not consider looping references. However, in some video applications, such as foreign-language learning, the user marks a scene as a loop area and the application then automatically plays that scene back several times. In this paper, we propose a new buffer replacement scheme for multimedia storage systems, called HBM (Hybrid Buffer Management), that handles both sequential and looping references. The proposed scheme assumes that the application layer informs the file system of each file's reference pattern; HBM then applies an appropriate replacement policy to each file. Our simulation experiments show that HBM outperforms previous buffer cache schemes such as DISTANCE and LRU.
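
The abstract states only that the file system is told each file's reference pattern and applies a suitable policy per file, so the following is a minimal sketch under that assumption: blocks of sequentially read files are evicted first (they will not be re-read), while blocks of looping files stay resident under LRU so the loop keeps hitting; all names are illustrative and this is not the paper's actual HBM algorithm.

```python
from collections import OrderedDict

class HybridBufferManager:
    """Per-file buffer replacement: the application declares each file's pattern."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pattern = {}                  # file_id -> "sequential" | "looping"
        self.buffers = OrderedDict()       # (file_id, block_no) -> data

    def declare_pattern(self, file_id, pattern):
        # Hint passed down from the application layer to the file system.
        self.pattern[file_id] = pattern

    def _evict_one(self):
        # Prefer a block from a sequential file: it will not be re-read.
        seq = [k for k in self.buffers if self.pattern.get(k[0]) == "sequential"]
        victim = seq[-1] if seq else next(iter(self.buffers))   # else fall back to LRU
        del self.buffers[victim]

    def access(self, file_id, block_no, read_block):
        key = (file_id, block_no)
        if key in self.buffers:
            self.buffers.move_to_end(key)          # loop-area blocks stay warm
            return self.buffers[key]
        if len(self.buffers) >= self.capacity:
            self._evict_one()
        data = read_block(file_id, block_no)       # miss: fetch from storage
        self.buffers[key] = data
        return data
```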
