• Title/Summary/Keyword: 캐시 데이터 (cache data)

Search Results: 256

A method of access pattern classification for decision of caching and prefetching policies (캐시 및 선반입 정책 결정을 위한 액세스 패턴 분류 방법)

  • 석성우;김재열;서대화
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.10c
    • /
    • pp.38-40
    • /
    • 2000
  • Applications that must process very large volumes of data, such as large-scale databases, multimedia servers, and complex scientific computation, are increasing, and research to improve the speed of file services is therefore being pursued in many directions. To improve access time, one of the two major factors that determine file service speed, we studied a method that analyzes access patterns in real time, extracts their features, and manages them as a preliminary step toward more efficient caching and prefetching.

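The entry above only outlines the idea of classifying access patterns on-line to steer caching and prefetching decisions. Below is a minimal sketch of that idea; the class names, history length, and thresholds are assumptions for illustration, not taken from the paper.

```python
from collections import deque

class AccessPatternClassifier:
    """Classify a file's recent block accesses to steer cache/prefetch policy.

    Hypothetical illustration: thresholds and class names are assumed, not the paper's.
    """

    def __init__(self, history=32):
        self.offsets = deque(maxlen=history)

    def record(self, offset):
        self.offsets.append(offset)

    def classify(self):
        if len(self.offsets) < 4:
            return "unknown"
        seq = list(self.offsets)
        steps = [b - a for a, b in zip(seq, seq[1:])]
        if all(s == 1 for s in steps):
            return "sequential"   # prefetch the next blocks aggressively
        if len(set(seq)) < len(seq) // 2:
            return "looping"      # keep the repeated window resident in the cache
        return "random"           # cache on demand, no prefetch

# usage: decide whether to prefetch after each request
clf = AccessPatternClassifier()
for off in [0, 1, 2, 3, 4, 5]:
    clf.record(off)
print(clf.classify())  # -> "sequential"
```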

Efficient Cache Architecture for Transactional Memory (트랜잭셔널 메모리를 위한 효율적인 캐시 구조)

  • Choi, Dong-Min;Kim, Seung-Hun;Ro, Won-Woo
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.4
    • /
    • pp.1-8
    • /
    • 2011
  • Traditional transactional memory systems can no longer guarantee the performance of diverse applications with overflowed transactions, since tracking the data required for logging becomes difficult. In particular, such systems suffer increased communication delay to maintain, starting from the first-level cache, the state needed to detect conflicts on overflowed transactions. To address this point, we have focused on the cache architecture of the system to reduce the overhead caused by overflows and cache misses. In this paper, we present Supportive Cache, which reduces additional overhead during transactions. Supportive Cache performs a parallel look-up with the L1 private cache and uses the same replacement policy as the L1 private cache. We evaluate the performance of the proposed design by comparing LogTM-SE with and without Supportive Cache. The simulation results show that our system improves performance by 37% on average over the original LogTM-SE using the same hardware resources.
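The Supportive Cache is described only at the architectural level above. The sketch below is a rough software model, not the paper's hardware design: a small secondary cache probed alongside the L1 and managed with the same LRU replacement policy.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache model keyed by block address."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def lookup(self, addr):
        if addr in self.data:
            self.data.move_to_end(addr)    # refresh LRU position on a hit
            return True
        return False

    def insert(self, addr):
        self.data[addr] = True
        self.data.move_to_end(addr)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

def access(l1, supportive, addr):
    """Probe the L1 and the supportive cache 'in parallel'; on a miss in both,
    fill the L1 (real hardware would also update transactional bookkeeping)."""
    if l1.lookup(addr) or supportive.lookup(addr):
        return "hit"
    l1.insert(addr)
    return "miss"
```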

Cache Management using an Adaptive Parity Group Configuration in RAID 5 Controller (적응형 패리티 그룹 구성을 이용한 RAID 5 제어기에서의 캐시 운영)

  • Huh, Jung-Ho;Song, Ja-Young;Chang, Tae-Mu
    • The KIPS Transactions:PartA
    • /
    • v.10A no.2
    • /
    • pp.83-92
    • /
    • 2003
  • RAID 5 is a widely used technique for constructing disk systems of high reliability and performance. This paper proposes APGOC (Adaptive Parity Group On Cache), a cache organization that addresses the "small write" problem of RAID 5, especially in OLTP (On-Line Transaction Processing) environments. In our approach, when a user process makes a file request to the kernel, information on the read/write characteristics is added to the file data structure of the file system. With this information, data and parity caches can be managed interchangeably through parity fetching. Therefore we can enhance cache utilization and improve disk request response time. Our method is analyzed and evaluated through simulation. Compared with previous work, we observed about a 6~13% performance enhancement.
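The entry above refers to the RAID 5 "small write" problem and to keeping data and parity blocks in one cache. The small worked example below shows the standard RAID 5 read-modify-write parity update (not APGOC itself) and why having the old data and parity already cached, as APGOC tries to arrange via parity fetching, removes the two extra disk reads.

```python
def small_write_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Standard RAID 5 parity update for a small write:
        new_parity = old_parity XOR old_data XOR new_data
    If old_data and old_parity are already in the cache, the two pre-read disk
    I/Os disappear and a small write costs only two writes (data + parity)."""
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

old_data   = bytes([0b1010])
new_data   = bytes([0b0110])
old_parity = bytes([0b1100])
print(bin(small_write_parity(old_data, new_data, old_parity)[0]))  # -> 0b0
```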

Analysis and Improvement of I/O Performance Degradation by Journaling in a Virtualized Environment (가상화 환경에서 저널링 기법에 의한 입출력 성능저하 분석 및 개선)

  • Kim, Sunghwan;Lee, Eunji
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.6
    • /
    • pp.177-181
    • /
    • 2016
  • This paper analyzes the host cache effectiveness in full virtualization, particularly associated with journaling of guests. We observe that the journal access of guests degrades cache performance significantly due to the write-once access pattern and the frequent sync operations. To remedy this problem, we design and implement a novel caching policy, called PDC (Pollution Defensive Caching), that detects the journal accesses and prevents them from entering the host cache. The proposed PDC is implemented in QEMU-KVM 2.1 on Linux 4.14 and provides 3-32% performance improvement for various file and I/O benchmarks.
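PDC is described above only behaviorally: detect guest journal accesses and keep them out of the host cache. The sketch below illustrates an admission filter of that kind under the assumption that journal blocks can be recognized by their block range; the detection heuristic and class names are invented for illustration, not the paper's mechanism.

```python
class PollutionDefensiveCache:
    """Toy host page cache with an admission filter for journal-like blocks."""

    def __init__(self, capacity, journal_ranges):
        self.capacity = capacity
        self.cache = {}                       # block -> data
        self.journal_ranges = journal_ranges  # [(start_block, end_block), ...]

    def _is_journal(self, block):
        return any(lo <= block <= hi for lo, hi in self.journal_ranges)

    def write(self, block, data):
        if self._is_journal(block):
            # write-once journal data: bypass the cache so it is not polluted
            self._write_through(block, data)
            return
        if block not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive eviction, for brevity
        self.cache[block] = data

    def _write_through(self, block, data):
        pass  # would go straight to the backing image file in a real host
```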

Design and Implementation of Host-side Cache Migration Engine for High Performance Storage in A Virtualization Environment (가상화 환경에서 스토리지 성능 향상을 위한 호스트 캐시 마이그레이션 엔진 설계 및 구현)

  • Park, Joon Young;Park, Hyunchan;Yoo, Chuck
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.6
    • /
    • pp.278-283
    • /
    • 2016
  • Due to the recent explosive increase in the amount of data produced, cloud storage systems are required to offer high and stable performance. However, VM (Virtual Machine) migration may lower storage service performance. In particular, when a host-side flash cache is used in a cloud system, a VM migration loses the existing warmed-up cache, and the new cache suffers a problematic cold start. In this paper, we first demonstrate and analyze the cold start problem and then propose Cachemior (Cache migrator), which enables an efficient hot start of the flash cache.
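Cachemior's internals are not given in the abstract; the sketch below only illustrates the general "warm the destination cache" idea, copying the hottest cached blocks from the source host to the destination host around a VM migration. All names and the data layout are assumptions.

```python
def migrate_cache(src_cache: dict, dst_cache: dict, dst_capacity: int) -> dict:
    """Copy the most frequently hit entries of the source host's flash cache to
    the destination host so the migrated VM starts 'hot' instead of cold.
    src_cache maps block -> (data, hit_count); purely illustrative."""
    hottest = sorted(src_cache.items(), key=lambda kv: kv[1][1], reverse=True)
    for block, (data, hits) in hottest[:dst_capacity]:
        dst_cache[block] = (data, hits)
    return dst_cache

# usage: warm the destination host's cache with the 2 hottest blocks
src = {10: (b"a", 50), 11: (b"b", 5), 12: (b"c", 90)}
print(sorted(migrate_cache(src, {}, dst_capacity=2)))  # -> [10, 12]
```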

Considering Data Reference Pattern in Buffer Cache for Continuous Media File System (연속미디어 파일 시스템의 버퍼 캐시에서 데이터 참조 유형의 고려)

  • Cho, Kyung-Woon;Ryu, Yeon-Seung;Koh, Kern
    • The KIPS Transactions:PartA
    • /
    • v.9A no.2
    • /
    • pp.163-170
    • /
    • 2002
  • Previous buffer cache schemes for continuous media file systems exploited only the sequentiality of continuous media accesses and did not consider looping references. However, in some video applications such as foreign language learning, users mark a scene as a loop area and the application automatically plays back the scene several times. In this paper, we propose a novel buffer cache scheme for continuous media file systems in which sequential and looping references exist together. The proposed scheme increases the cache hit ratio by detecting the reference pattern of each file and applying an appropriate replacement policy to each file.
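The scheme above is stated abstractly: detect each file's reference pattern, then apply a suitable replacement policy per file. Below is a loose sketch; the particular policy mapping (MRU-style eviction for one-pass sequential streams, pinning the marked loop window for looping files) is an assumption used for illustration, not the paper's exact rules.

```python
def detect_pattern(offsets):
    """Rough per-file pattern detector: strictly increasing offsets suggest a
    one-pass sequential playback, repeated offsets suggest a looping reference
    (e.g. a replayed scene)."""
    if offsets == sorted(offsets) and len(set(offsets)) == len(offsets):
        return "sequential"
    if any(offsets.count(o) > 1 for o in offsets):
        return "looping"
    return "other"

def choose_victim(cached_blocks, pattern, loop_window=()):
    """Pick an eviction victim under a per-file policy (illustrative mapping)."""
    if pattern == "sequential":
        # already-played data will not be reused: evict the block just consumed
        return max(cached_blocks)
    if pattern == "looping":
        # keep the marked loop area resident; evict blocks outside it first
        outside = [b for b in cached_blocks if b not in loop_window]
        return max(outside) if outside else max(cached_blocks)
    return min(cached_blocks)  # fallback: evict the oldest block

print(detect_pattern([3, 4, 5, 3, 4, 5]))          # -> "looping"
print(choose_victim({1, 2, 9}, "looping", (1, 2)))  # -> 9
```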

Time Window based Cache Replacement Strategy using Popularity and Life of News-Demand Data (NOD(News On Demand) 데이터의 인기도와 생명주기를 이용하는 시간 윈도우에 기반한 캐시 재배치 기법)

  • 최태욱;박성호;김영주;정기동
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10a
    • /
    • pp.101-103
    • /
    • 1998
  • Unlike VOD (Video on Demand) data, the NOD data that make up news articles differ in media type and size, temporal access locality, and user interactivity; in addition, new articles are created continually, and users access popular and recent articles more frequently. In this paper, we analyze the log files of an electronic newspaper currently in service, show that the popularity of NOD news articles deviates from a Zipf distribution, and present an access probability distribution that follows the life cycle of news articles. Because access to NOD data is skewed, data caching can be expected to improve NOD server performance; however, since the life cycle of a news article is short and user access behavior changes with the time of day, caching based on popularity alone causes frequent data replacement and therefore high cache management cost. Accordingly, this paper proposes a metric that combines the popularity and life cycle observed in the skewed access to news articles, and a cache replacement scheme based on it.

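As a worked illustration of combining popularity with an article's life cycle (the paper's exact metric is not reproduced in the entry above), one common form is a popularity count discounted by the article's age relative to its expected life. The exponential decay and the 24-hour life below are assumptions.

```python
import math

def cache_priority(hit_count, age_hours, life_hours=24.0):
    """Illustrative replacement score: recent, popular articles rank high, and
    the score decays as the article approaches the end of its life cycle."""
    return hit_count * math.exp(-age_hours / life_hours)

# within one time window, evict the cached article with the lowest priority
articles = {"a1": (120, 2.0), "a2": (300, 30.0), "a3": (40, 1.0)}  # id -> (hits, age)
victim = min(articles, key=lambda k: cache_priority(*articles[k]))
print(victim)  # the least valuable article under this score
```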

Image Cache for FPGA-based Real-time Image Warping (FPGA 기반 실시간 영상 워핑을 위한 영상 캐시)

  • Choi, Yong Joon;Ryoo, Jung Rae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.6
    • /
    • pp.91-100
    • /
    • 2016
  • In FPGA-based real-time image warping systems, image caches are utilized for fast readout of image pixel data and reduction of the memory access rate. However, a cache algorithm for a general computer system is not suitable for real-time operation because of the time delays from cache misses and the on-line computational complexity. In this paper, a simple image cache algorithm is presented for an FPGA-based real-time image warping system. Because the pixel data access sequence is determined by the 2D coordinate transformation and repeats identically at every image frame, the cache load sequence is programmed off-line to guarantee that no cache miss occurs, and the reduced on-line computation results in a simple cache controller. An overall system structure using an FPGA is presented, and experimental results are provided to show the accuracy and validity of the proposed cache algorithm.
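The key observation above is that the warping transform is fixed, so the pixel fetch order is known in advance and a cache load schedule can be computed off-line. Below is a software sketch of that precomputation; the row-block granularity and the function names are assumptions, and a real design would emit a load sequence for the FPGA cache controller rather than Python lists.

```python
def build_load_schedule(inverse_map, out_width, out_height, line_height=8):
    """Precompute, per output row, which source row-blocks must already be in
    the cache so that run-time pixel reads never miss.
    inverse_map(x, y) is the fixed 2D warping transform returning (src_x, src_y)."""
    schedule = []
    for y in range(out_height):
        needed = {int(inverse_map(x, y)[1]) // line_height for x in range(out_width)}
        schedule.append(sorted(needed))  # row-blocks to (pre)load before output row y
    return schedule

# usage with a toy translation-only warp: every output row needs one source block
schedule = build_load_schedule(lambda x, y: (x + 3, y + 5), out_width=640, out_height=480)
print(schedule[0])  # source row-blocks required for the first output row
```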

Cache Sensitive T-tree Index Structure (캐시를 고려한 T-트리 인덱스 구조)

  • Lee Ig-hoon;Kim Hyun Chul;Hur Jae Yung;Lee Sang-goo;Shim JunHo;Chang Juho
    • Journal of KIISE:Databases
    • /
    • v.32 no.1
    • /
    • pp.12-23
    • /
    • 2005
  • In the past decade, advances in the speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. To reduce memory access latency, cache memory is incorporated in the memory subsystem, but cache memories can reduce memory latency only when the requested data are found in the cache, which depends mainly on the memory access pattern of the application. Previous research has shown that B+-trees perform much faster than T-trees because B+-trees are more cache conscious than T-trees, and has proposed 'Cache Sensitive B+-trees' (CSB+-trees), which are more cache conscious than B+-trees. The goal of this paper is to make T-trees as cache conscious as CSB+-trees. We propose a new index structure called 'Cache Sensitive T-trees' (CST-trees). We implemented CST-trees and compared their performance with that of other index structures.
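The entry above does not give the CST-tree layout, but the general "cache sensitive" trick, as in CSB+-trees, is to size a node to one or a few cache lines, keep its keys contiguous, and store all children of a node as one contiguous group so a single base pointer replaces per-child pointers. The sketch below illustrates that layout; the field names and sizes are assumptions inspired by CSB+-trees, not the paper's exact structure.

```python
CACHE_LINE_BYTES = 64
KEY_BYTES = 4
KEYS_PER_NODE = CACHE_LINE_BYTES // KEY_BYTES - 2  # leave room for bookkeeping

class CSTNode:
    """Cache-line-sized node: keys stored contiguously; all children kept in one
    contiguous group so one base reference (plus an index) reaches any child."""
    __slots__ = ("keys", "child_group")

    def __init__(self):
        self.keys = []            # at most KEYS_PER_NODE sorted keys
        self.child_group = None   # list of CSTNode, allocated as one group

    def find_child(self, key):
        # binary search within one cache line's worth of keys
        lo, hi = 0, len(self.keys)
        while lo < hi:
            mid = (lo + hi) // 2
            if key < self.keys[mid]:
                hi = mid
            else:
                lo = mid + 1
        return None if self.child_group is None else self.child_group[lo]
```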

Analysis of time-series user request pattern dataset for MEC-based video caching scenario (MEC 기반 비디오 캐시 시나리오를 위한 시계열 사용자 요청 패턴 데이터 세트 분석)

  • Akbar, Waleed;Muhammad, Afaq;Song, Wang-Cheol
    • KNOM Review
    • /
    • v.24 no.1
    • /
    • pp.20-28
    • /
    • 2021
  • Extensive use of social media applications and mobile devices continues to increase data traffic. Social media platforms such as YouTube, Daily Motion, and Netflix generate an endless and massive amount of multimedia traffic, specifically video traffic. On these platforms, only a few popular videos are requested many times compared to the rest, and these popular videos should be cached in the users' vicinity to meet continuous demand. MEC has emerged as an essential paradigm for handling consistent user demand and caching videos in user proximity. The problem is to understand how the user demand pattern varies with time. This paper analyzes three publicly available datasets, MovieLens 20M, MovieLens 100K, and The Movies Dataset, to find the user request pattern over time. We extract hourly, daily, monthly, and yearly trends from all the datasets. The resulting patterns can be used in other research when generating and analyzing user request patterns in MEC-based video caching scenarios.
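The trend-extraction step described above is straightforward to reproduce. A minimal pandas sketch follows; the column names match the public MovieLens ratings file, but the file path is a placeholder and the bucketing choices are assumptions, not necessarily the paper's exact procedure.

```python
import pandas as pd

# MovieLens ratings.csv columns: userId, movieId, rating, timestamp (Unix seconds)
ratings = pd.read_csv("ratings.csv")  # placeholder path to the downloaded dataset
ratings["ts"] = pd.to_datetime(ratings["timestamp"], unit="s")

hourly  = ratings.groupby(ratings["ts"].dt.hour).size()            # requests per hour of day
daily   = ratings.groupby(ratings["ts"].dt.date).size()            # requests per calendar day
monthly = ratings.groupby(ratings["ts"].dt.to_period("M")).size()  # requests per month
yearly  = ratings.groupby(ratings["ts"].dt.year).size()            # requests per year

print(hourly.idxmax())  # busiest hour of the day in the trace
```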