• Title/Summary/Keyword: Cache data

Determination of a Grain Size for Reducing Cache Miss Rate of Direct-Mapped Caches (직접 사상 캐쉬의 캐쉬 실패율을 감소시키기 위한 성김도 정책)

  • Jung, In-Bum;Kong, Ki-Sok;Lee, Joon-Won
    • Journal of KIISE: Computer Systems and Theory / v.27 no.7 / pp.665-674 / 2000
  • In data-parallel programs with high cache locality, the choice of grain size affects cache performance. Even when the chosen grain sizes provide a fair load balance among processors, grain sizes that ignore the underlying caching effect cause address interference between grains allocated to the same processor. This interference hurts cache locality because it produces cache conflict misses. To address this problem, we propose a best grain size derived from the cache size and the number of processors, based on the characteristics of direct-mapped caches. Because the proposed method does not map grains to the same location in the cache, cache conflict misses are reduced. Simulation results show that the proposed grain size substantially improves the performance of the tested data-parallel programs by reducing cache misses on direct-mapped caches.
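
A minimal sketch of the grain-size idea, assuming the simple rule of splitting the direct-mapped cache evenly among the processors (the paper derives its own formula; the function below is only an illustration):

```python
def best_grain_size(cache_size: int, num_processors: int, element_size: int = 8) -> int:
    """Hypothetical rule: give each processor an equal share of the
    direct-mapped cache, so grains assigned to the same processor map to
    disjoint cache regions and do not conflict with each other."""
    share = cache_size // num_processors           # bytes of cache per processor
    return max(element_size, (share // element_size) * element_size)

# Example: 32 KiB direct-mapped cache, 4 processors, 8-byte elements
print(best_grain_size(32 * 1024, 4))               # -> 8192 bytes per grain
```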

Load Balancing Method for Query Processing Based on Cache Management in the Grid Database (그리드 데이터베이스에서 질의 처리를 위한 캐쉬 관리 기반의 부하분산 기법)

  • Shin, Soong-Sun;Back, Sung-Ha;Eo, Sang-Hun;Lee, Dong-Wook;Kim, Gyoung-Bae;Chung, Weon-Il;Bae, Hae-Young
    • Journal of Korea Multimedia Society / v.11 no.7 / pp.914-927 / 2008
  • Grid database management systems are used for large-scale data processing, high availability, and data integration in grid computing. They also dispatch queries to distributed nodes for efficient query processing. However, when query processing is concentrated on one node, the workload becomes imbalanced and query throughput decreases. In this paper we propose a load balancing method for query processing based on cache management in grid databases. The method has a cache manager manage the caches of the nodes: the cache manager assigns each node to an area group and maintains the cached meta information of the nodes in that group. Each node caches meta information, which the cache manager propagates to other nodes, and the workload is distributed using this cached meta information. Experiments show a clear improvement over existing methods when the proposed algorithm is adopted.
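
As an illustration only (the names and the routing policy below are assumptions, not the paper's protocol), a cache manager that keeps per-node meta information and routes each query to the least-loaded node known to cache the requested item might look like this:

```python
from collections import defaultdict

class CacheManager:
    """Hypothetical sketch: route queries using cached meta information."""
    def __init__(self):
        self.meta = defaultdict(set)   # data item -> nodes known to cache it
        self.load = defaultdict(int)   # node -> queries currently assigned

    def propagate(self, node: str, item: str) -> None:
        # A node advertises that it now caches `item` (meta-info propagation).
        self.meta[item].add(node)

    def route(self, item: str, all_nodes: list) -> str:
        # Prefer the least-loaded node that caches the item; otherwise fall
        # back to the least-loaded node overall.
        candidates = self.meta[item] or set(all_nodes)
        target = min(candidates, key=lambda n: self.load[n])
        self.load[target] += 1
        return target

cm = CacheManager()
cm.propagate("node-b", "table:orders")
print(cm.route("table:orders", ["node-a", "node-b", "node-c"]))  # -> node-b
```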

A Cache-Conscious Compression Index Based on the Level of Compression Locality (압축 지역성 수준에 기반한 캐쉬 인식 압축 색인)

  • Kim, Won-Sik;Yoo, Jae-Jun;Lee, Jin-Soo;Han, Wook-Shin
    • Journal of Korea Multimedia Society / v.13 no.7 / pp.1023-1043 / 2010
  • As main memory gets cheaper, it becomes increasingly affordable to keep the entire index of a DBMS in memory and access it there. Since the speed gap between the CPU and main memory keeps growing, much research aims to reduce the cost of main memory accesses. Cache-conscious trees are one such approach: by compressing the data in each node they reduce the number of cache misses and thus the cost of main memory access. Existing cache-conscious trees apply a single fixed compression technique regardless of the properties of the data in a node. This paper first proposes the DC-tree, which uses several compression techniques and changes the data layout within a node according to the properties of its data in order to reduce cache misses. Second, it proposes the level of compression locality, a formula that characterizes the properties of the data in a node. Third, it proposes Forced Partial Decomposition (FPD), which reduces the number of cache misses. In terms of the number of cache misses, DC-trees outperform the B+-tree by 1.7x, the simple prefix B+-tree by 1.5x, and the pkB-tree by 1.3x. Since DC-trees can be adopted in commercial main-memory database systems, we believe they are a practical result.
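
The level of compression locality is given by a formula in the paper; as a stand-in, the sketch below scores a node by how much of its key bytes are covered by a common prefix and picks a layout accordingly (the metric, threshold, and layout names are assumptions, not the paper's definitions):

```python
import os

def compression_locality(keys: list) -> float:
    """Stand-in metric: fraction of key bytes covered by the node's common
    prefix. The paper's actual formula may differ."""
    if not keys:
        return 0.0
    prefix = os.path.commonprefix(keys)
    total = sum(len(k) for k in keys)
    return len(prefix) * len(keys) / total if total else 0.0

def choose_layout(keys: list, threshold: float = 0.5) -> str:
    # High locality -> prefix-compress entries so more fit per cache line;
    # low locality -> store keys plain to avoid wasted decompression work.
    return "prefix-compressed" if compression_locality(keys) >= threshold else "plain"

print(choose_layout([b"cache_aa", b"cache_ab", b"cache_ac"]))  # prefix-compressed
```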

Mitigating Cache Pollution Attack in Information Centric Mobile Internet

  • Chen, Jia;Yue, Liang;Chen, Jing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5673-5691 / 2019
  • Information-centric mobile networks can significantly improve data retrieval efficiency by caching content at the mobile edge. However, a cache pollution attack can severely disrupt data retrieval by deliberately requesting unpopular content. To tackle this problem, we design an algorithm for mitigating cache pollution attacks in information-centric mobile networks. In particular, a content popularity distribution statistic is proposed to detect abnormal behavior. A probabilistic caching strategy based on the detected abnormal behavior is then applied to dynamically maintain the steady-state distribution of content visiting probability and thereby achieve the defense. Experimental results show that, under a false-locality cache pollution attack, the proposed scheme achieves a higher request hit ratio and lower latency than the CacheShield approach and a baseline with no mitigation.
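
A rough sketch of popularity-driven probabilistic caching in the spirit described above (the admission probability and eviction rule here are simplifications, not the paper's detector or steady-state maintenance):

```python
import random
from collections import Counter

class ProbabilisticCache:
    """Admit content with popularity-weighted probability, so rarely
    requested (possibly attacker-driven) content seldom enters the cache."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.requests = Counter()   # running popularity statistics
        self.store = {}

    def on_request(self, name: str, content: bytes) -> None:
        self.requests[name] += 1
        p = self.requests[name] / sum(self.requests.values())
        if name not in self.store and random.random() < p:
            if len(self.store) >= self.capacity:
                # evict the least popular cached item
                victim = min(self.store, key=lambda n: self.requests[n])
                del self.store[victim]
            self.store[name] = content
```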

Low-power Data Cache using Selective Way Precharge (데이터 캐시의 선택적 프리차지를 통한 에너지 절감)

  • Choi, Byeong-Chang;Suh, Hyo-Joong
    • The KIPS Transactions: Part A / v.16A no.1 / pp.27-34 / 2009
  • Power saving combined with high performance is one of the hot issues in mobile systems. Various technologies have been introduced to build low-power processors, including sub-micron semiconductor fabrication, voltage scaling, and speed scaling. In this paper, we introduce a new method that reduces energy loss in the data cache. Our method gains both speed and energy benefits by selectively precharging only the predicted way while way selection proceeds concurrently. Simulation results show that our method saves 10.2% energy compared to the way prediction method and 56.4% compared to a conventional data cache structure.
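
Selective way precharge is a circuit-level technique, but its energy accounting can be mimicked with a toy model: precharge only the predicted way, and pay for the remaining ways only on a misprediction (the MRU predictor and the counts below are illustrative assumptions, not the paper's design):

```python
def precharges(accesses, num_ways=4):
    """accesses: list of (set_index, actual_way) in program order."""
    mru = {}                       # set index -> most recently used way (predictor)
    total = 0
    for set_idx, actual_way in accesses:
        predicted = mru.get(set_idx, 0)
        total += 1                 # precharge the predicted way only
        if predicted != actual_way:
            total += num_ways - 1  # mispredicted: precharge the remaining ways
        mru[set_idx] = actual_way
    return total

trace = [(0, 1), (0, 1), (0, 2), (1, 0)]
print(precharges(trace), "way precharges vs", len(trace) * 4, "for full precharge")
```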

High-Performance FFT Using Data Reorganization (데이터 재구성 기법을 이용한 고성능 FFT)

  • Park Neungsoo;Choi Yungho
    • The KIPS Transactions: Part A / v.12A no.3 s.93 / pp.215-222 / 2005
  • The efficient utilization of cache memories is a key factor in achieving high performance when computing large signal transforms. Non-unit-stride access in the computation of large DFTs causes cache conflict misses, resulting in poor cache performance and a severe degradation of overall performance. In this paper, we propose a dynamic data layout approach that takes the memory hierarchy into account. In our approach, data reorganization is performed between computation stages to reduce the number of cache misses. We also develop an efficient search algorithm that, given the DFT size and the data access stride, finds the factorization tree with the minimum execution time among the possible trees. Our approach is applied to computing the fast Fourier transform (FFT). Experiments were performed on the Pentium 4, Athlon 64, Alpha 21264, and UltraSPARC III, and the results show that our FFT achieves up to a 3.37x performance improvement over previous FFT packages.
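
The core of the data-reorganization idea can be sketched as an explicit gather into a contiguous buffer before a transform stage (NumPy is used here purely for illustration; the paper's search over factorization trees decides when such copies pay off):

```python
import numpy as np

def strided_dft_stage(x: np.ndarray, stride: int) -> np.ndarray:
    """Illustrative stage: gather a strided slice into a contiguous buffer
    so the transform runs on unit-stride data, avoiding the conflict misses
    that non-unit-stride access would cause."""
    contiguous = np.ascontiguousarray(x[::stride])  # explicit reorganization
    return np.fft.fft(contiguous)

x = np.arange(1 << 16, dtype=np.complex128)
y = strided_dft_stage(x, stride=8)
```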

Separate Factor Caching Scheme for Mobile Web Service (모바일 웹 서비스를 위한 요소분할 캐싱 기법)

  • Sim, Kun-Jung;Kang, Eui-Sun;Kim, Jong-Keun;Ko, Hee-Ae;Lim, Young-Hwan
    • The KIPS Transactions: Part D / v.14D no.4 s.114 / pp.447-458 / 2007
  • The objective of this study is to provide faster mobile web service by improving the performance of the contents cache used in the existing Mobile Gate system. A Mark-Up page transcoded by the Contents Generator consists of two elements. One depends only on the requested DIDL page and the Mark-Up type. The other depends on the requested DIDL page, the Mark-Up type, the display size of the mobile device requesting the service, the supported image type, and the supported color depth. The conventional contents cache saved the entire Mark-Up page, holding both elements together. Storage space was therefore used ineffectively: when only one element changed while the others stayed the same, reusable elements were saved repeatedly in the cache memory, so fewer transcoded Mark-Up pages fit in the same cache size. In this study, Mark-Up pages transcoded by the Contents Generator are therefore divided into the two elements, which are cached separately. For replacing cached data with new data, the study applies the LFU and LRU algorithms. The proposed method achieves faster cache performance by allowing more transcoded Mark-Up pages to be stored in the same cache space.
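
A minimal sketch of factor-separated caching with an LRU policy (the transcoding functions and the assembly step are hypothetical stand-ins; the paper also considers LFU):

```python
from functools import lru_cache

def transcode_skeleton(page, markup):
    # Stand-in for the element that depends only on (DIDL page, Mark-Up type).
    return f"<{markup}>{page}:"

def transcode_device_part(page, markup, width, height, img_type, depth):
    # Stand-in for the element that also depends on the device's display
    # size, supported image type, and color depth.
    return f"[{width}x{height} {img_type}@{depth}bpp]</{markup}>"

# Each factor is cached independently (LRU), so a change in display size
# invalidates only the device-dependent part, never the reusable skeleton.
@lru_cache(maxsize=256)
def cached_skeleton(page, markup):
    return transcode_skeleton(page, markup)

@lru_cache(maxsize=1024)
def cached_device_part(page, markup, width, height, img_type, depth):
    return transcode_device_part(page, markup, width, height, img_type, depth)

def serve(page, markup, width, height, img_type, depth):
    return cached_skeleton(page, markup) + cached_device_part(
        page, markup, width, height, img_type, depth)

print(serve("news.didl", "WML", 128, 160, "PNG", 16))
```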

Design and evaluation of a fuzzy cooperative caching scheme for MANETs

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society / v.21 no.3 / pp.605-619 / 2010
  • Caching frequently accessed data in a multi-hop ad hoc environment is a technique that can improve data access performance and availability. Cooperative caching, which allows several clients to share and coordinate cached data, can further enhance the potential of caching techniques. In this paper, we propose a fuzzy cooperative caching scheme for mobile ad hoc networks. The cache management of the proposed scheme not only adaptively uses CacheData or CachePath based on data similarity and data utility, but also uses a replacement manager based on data profit. The scheme additionally uses a prefetch manager: when the TTL of cached data expires, the prefetch manager evaluates the popularity index of the data; if the popularity index exceeds a threshold, the data is prefetched, and otherwise its space is released. The performance of the proposed scheme is evaluated analytically and compared to that of other cooperative caching schemes.
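
The TTL-expiry rule stated above translates almost directly into code; here a simple request-rate index stands in for the paper's popularity measure (a sketch under that assumption, not the fuzzy scheme itself):

```python
import time

class PrefetchingCache:
    def __init__(self, ttl: float, threshold: float):
        self.ttl, self.threshold = ttl, threshold
        self.items = {}  # key -> [value, hit_count, time_cached]

    def on_ttl_expiry(self, key, fetch):
        value, hits, t0 = self.items[key]
        popularity = hits / max(time.time() - t0, 1e-9)  # requests per second
        if popularity > self.threshold:
            # Popular item: prefetch a fresh copy before it is asked for again.
            self.items[key] = [fetch(key), 0, time.time()]
        else:
            del self.items[key]  # unpopular: release the cache space
```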

Efficient Schemes for Cache Consistency Maintenance in a Mobile Database System (이동 데이터베이스 시스템에서 효율적인 캐쉬 일관성 유지 기법)

  • Lim, Sang-Min;Kang, Hyun-Chul
    • The KIPS Transactions: Part D / v.8D no.3 / pp.221-232 / 2001
  • Due to the rapid advance of wireless communication technology, demand for data services in mobile environments is gradually increasing. Caching at a mobile client can reduce bandwidth consumption and query response time, but the mobile client must then maintain cache consistency. It can be efficient for the server to broadcast a periodic cache invalidation report within a cell. When a long period of disconnection prevents a mobile client from checking the validity of its cache using the invalidation report alone, the client can ask the server to check its cache validity; some schemes for doing so are more efficient than others depending on the number of available channels and of mobile clients involved. In this paper, we propose new cache consistency schemes that are efficient especially (1) when channel capacity is sufficient for the mobile clients involved or (2) when it is not, and we evaluate their performance.
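
A minimal sketch of the invalidation-report pattern the paper builds on (the window length and message shapes are assumptions; the paper's schemes add server-side validity checks for long disconnections):

```python
import time

BROADCAST_WINDOW = 60.0   # seconds covered by one invalidation report (assumed)

def apply_report(cache: dict, report: dict, last_sync: float) -> str:
    """Drop invalidated entries if the report still covers our downtime;
    otherwise the client must ask the server to validate its cache."""
    if time.time() - last_sync > BROADCAST_WINDOW:
        return "uplink-validate"          # disconnected too long for the report
    for item_id in report["invalidated"]:
        cache.pop(item_id, None)
    return "validated"
```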

Warp-Based Load/Store Reordering to Improve GPU Time Predictability

  • Huangfu, Yijie;Zhang, Wei
    • Journal of Computing Science and Engineering / v.11 no.2 / pp.58-68 / 2017
  • While graphics processing units (GPUs) can be used to improve the performance of real-time embedded applications that require high throughput, it is challenging to estimate the worst-case execution time (WCET) of GPU programs because modern GPUs are designed to improve average-case performance rather than time predictability. In this paper, a reordering framework is proposed to regulate accesses to the GPU data cache, which improves the accuracy of GPU L1 data cache miss-rate estimation with low performance overhead. With the improved miss-rate estimation, tighter WCET estimates can be achieved for GPU programs.
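
As a toy illustration of the reordering idea (not the paper's hardware design), pending requests can be released to the L1 data cache in warp-id order, which makes the access sequence, and hence the miss analysis, deterministic:

```python
def reorder(requests):
    """requests: list of (warp_id, address) in arrival order. A stable sort
    preserves each warp's program order while fixing the inter-warp order."""
    return sorted(requests, key=lambda r: r[0])

pending = [(2, 0x100), (0, 0x200), (1, 0x140), (0, 0x204)]
print(reorder(pending))  # warp 0's accesses replay first, then warp 1, then warp 2
```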