• Title/Summary/Keyword: cache storage


Caching Algorithm for Core Network Offloading in Smallcell Environment (소형셀 환경에서 코어망 오프로딩을 위한 캐시 알고리즘)

  • Jung, So-Yi;Kim, Jae-Hyun
    • Journal of the Institute of Electronics and Information Engineers, v.52 no.3, pp.32-38, 2015
  • In this paper, we propose a smallcell local caching algorithm that exploits the user's context in a smallcell environment. The proposed system reduces traffic to the core network and the network cost while improving performance. The algorithm precaches suitable files based on the smallcell's regional characteristics and the target user's preferences, and it adjusts the storage allocation to make effective use of the limited cache storage capacity. To evaluate the performance of the proposed cache algorithm, we define cache efficiency as the decrement of core network traffic. Simulation results show that the proposed algorithm improves performance by about 200% compared to an existing web cache scheme.
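
The abstract gives no pseudocode; as a rough illustration only, the sketch below greedily precaches files ranked by a blended context score under a fixed storage budget. The score weights and file fields are assumptions made for illustration, not the authors' algorithm.

```python
# Illustrative sketch of context-aware precaching under a fixed budget.
# The weights and file metadata fields are assumed, not from the paper.

def precache(files, capacity, w_region=0.6, w_user=0.4):
    """Select files to precache, ranked by a blended context score.

    files: list of dicts with 'name', 'size', 'regional_pop', 'user_pref'
    capacity: total cache budget (same unit as 'size')
    """
    ranked = sorted(
        files,
        key=lambda f: w_region * f["regional_pop"] + w_user * f["user_pref"],
        reverse=True,
    )
    selected, used = [], 0
    for f in ranked:
        if used + f["size"] <= capacity:  # greedy fill of the limited storage
            selected.append(f["name"])
            used += f["size"]
    return selected

catalog = [
    {"name": "a.mp4", "size": 40, "regional_pop": 0.9, "user_pref": 0.2},
    {"name": "b.mp4", "size": 30, "regional_pop": 0.4, "user_pref": 0.8},
    {"name": "c.mp4", "size": 50, "regional_pop": 0.1, "user_pref": 0.1},
]
print(precache(catalog, capacity=80))  # -> ['a.mp4', 'b.mp4']
```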

Separate Factor Caching Scheme for Mobile Web Service (모바일 웹 서비스를 위한 요소분할 캐싱 기법)

  • Sim, Kun-Jung;Kang, Eui-Sun;Kim, Jong-Keun;Ko, Hee-Ae;Lim, Young-Hwan
    • The KIPS Transactions: Part D, v.14D no.4 s.114, pp.447-458, 2007
  • The objective of this study is to provide faster mobile web service by improving the performance of the contents cache used in the existing Mobile Gate system. Mark-up pages transcoded by the Contents Generator were found to contain two kinds of elements. One depends only on the requested DIDL page and mark-up type. The other also depends on the display size of the mobile device requesting the service, the types of images it supports, and their color depth. The conventional contents cache saved the entire mark-up page containing both kinds of elements, so reusable elements were stored repeatedly in the cache whenever any single element changed, even though all the others were identical; storage space was wasted, and fewer transcoded mark-up pages could be kept in the same cache memory. In this study, therefore, mark-up pages transcoded by the Contents Generator are divided into the two kinds of elements, which are cached separately. To replace cached data with new data, the study applies two algorithms, LFU and LRU. The proposed method achieves faster cache performance by allowing more transcoded mark-up pages to be stored in the same cache storage space.
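
As a rough illustration of the separate-factor idea, the sketch below caches the page-dependent and device-dependent parts under different keys, so the reusable part is stored once per page rather than once per device profile. The key fields, the LRU policy, and the `transcode` callback are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: cache the two factors of a transcoded page separately so the
# page-dependent part is shared across device profiles.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.data = capacity, OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:  # evict the least recently used
            self.data.popitem(last=False)

page_part   = LRUCache(capacity=100)  # depends on DIDL page + mark-up type
device_part = LRUCache(capacity=100)  # also depends on display size,
                                      # image type, and color depth

def fetch(didl, markup, display, img, depth, transcode):
    """Reassemble a page from the two cached factors, transcoding on miss."""
    p = page_part.get((didl, markup))
    d = device_part.get((didl, markup, display, img, depth))
    if p is None or d is None:
        p, d = transcode(didl, markup, display, img, depth)
        page_part.put((didl, markup), p)
        device_part.put((didl, markup, display, img, depth), d)
    return p + d
```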

Design of A On-Chip Caches for RISC Processors (RISC 프로세서 On-Chip Cache의 설계)

  • 홍인식;임인칠
    • Journal of the Korean Institute of Telematics and Electronics, v.27 no.8, pp.1201-1210, 1990
  • This paper proposes on-chip instruction and data cache memories for a RISC (reduced instruction set computer) architecture that support fast instruction fetch and data read/write, enabling the RISC processor under research to achieve high performance. When executing HLL (high-level language) programs, heavily used local scalar variables are stored in a large register file, but arrays, structures, and global scalar variables are difficult for the compiler to allocate to registers. These problems can be solved by on-chip instruction/data caches. In addition, pad delay on each instruction fetch cycle lowers the processor's performance. The cache memories are designed in CMOS technology, and SRAM (static RAM), which saves layout area and power dissipation, is used for instruction and data storage. To achieve high speed and efficiently support the RISC processor's pipelined architecture, hardwired logic is used throughout the circuits in the cache blocks. Schematic capture and timing simulation of the proposed cache memories were performed on an Apollo DN4000 workstation using Mentor Graphics CAD tools.
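
The paper describes a hardware design, but the tag/index lookup that such a cache implements can be modeled in a few lines. The following behavioral sketch of a direct-mapped cache uses assumed sizes and is in no way the authors' CMOS circuit.

```python
# Behavioral sketch of a direct-mapped cache lookup (tag/index split).
# Line size and line count are illustrative assumptions.

LINE_SIZE = 16        # bytes per cache line
NUM_LINES = 64        # lines in the cache

tags = [None] * NUM_LINES

def lookup(addr):
    """Return 'hit' or 'miss' for a byte address."""
    block = addr // LINE_SIZE
    index = block % NUM_LINES     # which line the block maps to
    tag   = block // NUM_LINES    # disambiguates blocks sharing a line
    if tags[index] == tag:
        return "hit"
    tags[index] = tag             # fill on miss (data fetch omitted)
    return "miss"

print([lookup(a) for a in (0, 4, 1024, 0)])
# ['miss', 'hit', 'miss', 'miss'] -- 1024 conflicts with 0 on line 0
```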


A Cache Privacy Protection Mechanism based on Dynamic Address Mapping in Named Data Networking

  • Zhu, Yi;Kang, Haohao;Huang, Ruhui
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.12, pp.6123-6138, 2018
  • Named data networking (NDN) is a new network architecture designed for the next-generation Internet. Router-side content caching is one of NDN's key features: it can reduce redundant transmission, accelerate content distribution, and alleviate congestion. However, it also introduces several security problems. One important risk is cache privacy leakage: by measuring content retrieval time, an adversary can infer its neighboring users' interest in private content. Focusing on this problem, we propose a cache privacy protection mechanism (named CPPM-DAM) that uses a Bloom filter to distinguish legitimate users from adversaries. An optimization of the storage cost is further provided to make the mechanism more practical. Simulation results with ndnSIM show that CPPM-DAM effectively protects cache privacy.
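
A Bloom filter, which CPPM-DAM uses to identify legitimate users, answers set-membership queries with no false negatives and a tunable false-positive rate. The sketch below is a generic Bloom filter; the hash construction and sizes are assumed for illustration and are not the paper's parameters.

```python
# Generic Bloom filter sketch of the membership test CPPM-DAM relies on.

import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # k indexes derived from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # may yield false positives, never false negatives
        return all(self.bits[p] for p in self._positions(item))

legit = BloomFilter()
legit.add("user-A")
print("user-A" in legit, "eavesdropper" in legit)  # True False (w.h.p.)
```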

A Technique for Improving the Performance of Cache Memories

  • Cho, Doosan
    • International Journal of Internet, Broadcasting and Communication, v.13 no.3, pp.104-108, 2021
  • To improve performance in IoT and edge computing systems, memory is usually organized hierarchically. As the distance from the CPU grows, access slows in the order of registers, cache memory, main memory, and storage, and energy consumption likewise increases. It is therefore important to place frequently used data in the upper levels of the hierarchy to improve both performance and energy consumption. Such a technique, however, must address the cache performance degradation caused by the loss of spatial locality when the data access stride is large. This study proposes a technique that selectively places data with a large access stride in a software-controlled cache. The proposed technique improves spatial locality by reducing the data access interval and consequently improves cache performance.
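
As a rough sketch of stride-based placement, the code below classifies a reference stream by its dominant stride and routes large-stride data to a software-controlled cache. The threshold and the classification rule are illustrative assumptions, not the paper's method.

```python
# Sketch: references whose dominant stride exceeds a cache line go to a
# software-controlled buffer; others stay on the hardware cache path.

CACHE_LINE = 64  # bytes; threshold is an assumption

def dominant_stride(addresses):
    """Most common distance between consecutive accesses."""
    diffs = [b - a for a, b in zip(addresses, addresses[1:])]
    return max(set(diffs), key=diffs.count) if diffs else 0

def place(name, addresses):
    stride = dominant_stride(addresses)
    if abs(stride) > CACHE_LINE:
        return f"{name}: stride {stride} -> software-controlled cache"
    return f"{name}: stride {stride} -> hardware cache"

print(place("row-major",    [0, 8, 16, 24]))    # small stride, good locality
print(place("column-major", [0, 4096, 8192]))   # large stride, poor locality
```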

Web Proxy Cache Replacement Algorithms using Object Type Partition (개체 타입별 분할공간을 이용한 웹 프락시 캐시의 대체 알고리즘)

  • Soo-haeng, Lee;Sang-bang, Choi
    • The Journal of Korean Institute of Communications and Information Sciences, v.27 no.5C, pp.399-410, 2002
  • A web cache, functionally another term for a proxy server, is located between clients and servers. Although the link between clients and the proxy server, usually a LAN, provides ample bandwidth, the web cache has limited storage. Because of this limited capacity, objects already in the cache may be deleted to make room for new ones according to rules called a replacement algorithm. Hit rate and byte-hit rate are the usual metrics for evaluating replacement algorithms, but most algorithms satisfy only one of them, and sometimes neither. In this paper, we propose two replacement algorithms that achieve both a high hit rate and a high byte-hit rate. In the first, as a basic model, the cache is partitioned according to file types. In the second, the cache has two levels: the upper level is managed by the basic algorithm, while the lower level is used collectively by all file types as a shared area. To demonstrate the performance of the proposed algorithms, we evaluate their hit rate and byte-hit rate using trace-driven simulation.
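
The two-level structure of the second algorithm can be sketched as per-type upper-level partitions that demote their LRU victims into a shared lower level. The partition sizes, the LRU policy, and the demotion rule here are illustrative assumptions.

```python
# Sketch of a type-partitioned upper level with a shared lower level.

from collections import OrderedDict

class Partition:
    def __init__(self, capacity):
        self.capacity, self.items = capacity, OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, size):
        """Insert; return any (key, size) pairs evicted to make room."""
        self.items[key] = size
        self.items.move_to_end(key)
        victims = []
        # keep at least the new object even if it alone exceeds capacity
        while sum(self.items.values()) > self.capacity and len(self.items) > 1:
            victims.append(self.items.popitem(last=False))
        return victims

upper = {"image": Partition(100), "text": Partition(50)}  # typed upper level
shared = Partition(200)                                   # shared lower level

def access(url, obj_type, size):
    if upper[obj_type].get(url) or shared.get(url):
        return "hit"
    for victim in upper[obj_type].put(url, size):
        shared.put(*victim)       # demote upper-level victims to shared area
    return "miss"

print(access("/a.gif", "image", 60))  # miss
print(access("/b.gif", "image", 60))  # miss; /a.gif demoted to shared
print(access("/a.gif", "image", 60))  # hit via the shared lower level
```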

A Buffer Cache Replacement Algorithm for Considering both Hybrid Main Memory and Storage (하이브리드 메인 메모리와 스토리지의 특성을 고려한 버퍼 캐시 교체 정책)

  • Kang, Dong Hyun;Eom, Young Ik
    • Journal of KIISE, v.42 no.8, pp.947-953, 2015
  • PRAM is being considered as a potential successor to DRAM because of characteristics such as byte-addressability, non-volatility, and high density. To exploit these benefits, PRAM-based buffer cache replacement algorithms have been actively studied. However, most previous studies exploit the byte-level performance of PRAM only to a limited extent, focusing instead on its limited lifetime and its slower access latency compared to DRAM. In this paper, we propose a novel buffer cache replacement algorithm that fully considers both the byte-level performance of PRAM and the performance of secondary storage. To take advantage of small writes on PRAM, the proposed scheme keeps pages that are frequently accessed with small writes on PRAM and selectively migrates pages from DRAM to PRAM. As a result, our scheme significantly reduces the number of PRAM writes. Experimental results on real workloads indicate that our scheme reduces the number of PRAM writes by up to 92% and improves performance by up to 62% compared to CLOCK.
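
The selective migration idea can be sketched as follows: pages that accumulate frequent small writes move from DRAM to PRAM, while large writes keep a page in DRAM. The thresholds below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of selective DRAM -> PRAM page migration driven by small writes.

SMALL_WRITE = 64      # bytes; "small" write threshold (assumed)
HOT_COUNT   = 4       # small writes before migration (assumed)

location = {}         # page -> "DRAM" or "PRAM"
small_writes = {}     # page -> count of consecutive small writes

def write(page, nbytes):
    location.setdefault(page, "DRAM")        # pages start in DRAM
    if nbytes <= SMALL_WRITE:
        small_writes[page] = small_writes.get(page, 0) + 1
        if small_writes[page] >= HOT_COUNT and location[page] == "DRAM":
            location[page] = "PRAM"          # migrate small-write-hot page
    else:
        small_writes[page] = 0               # large writes favor DRAM
        location[page] = "DRAM"
    return location[page]

for _ in range(4):
    dest = write("inode-7", 16)
print(dest)  # PRAM, after repeated small writes
```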

A Study on Write Cache Policy using a Flash Memory (플래시 메모리를 사용한 쓰기 캐시 정책 연구)

  • Kim, Young-Jin;Anggorosesar, Aldhino;Lee, Jeong-Bae;Rim, Kee-Wook
    • Annual Conference of KIPS, 2009.11a, pp.77-78, 2009
  • In this paper, we study a pattern-aware write cache policy that uses NAND flash memory in disk-based mobile storage systems. Mobile storage workloads mix many sequential accesses with fewer non-sequential ones; our design redirects the non-sequential accesses to NAND flash memory and the sequential ones to the disk. Experimental results show that our policy improves overall I/O performance by significantly reducing the overhead of a non-volatile cache compared to a traditional one.
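
A minimal sketch of the pattern-aware routing follows, assuming a one-request history to detect sequentiality; the abstract does not specify the actual detector.

```python
# Sketch: writes sequential to the previous request go to the disk,
# non-sequential ones are absorbed by the NAND flash write cache.

last_end = None   # LBA just past the previous write (assumed 1-deep history)

def route(lba, length):
    """Return the device a write of `length` blocks at `lba` goes to."""
    global last_end
    sequential = (last_end is not None and lba == last_end)
    last_end = lba + length
    return "disk" if sequential else "flash"

workload = [(0, 8), (8, 8), (16, 8), (500, 4), (504, 4), (90, 2)]
print([route(lba, n) for lba, n in workload])
# ['flash', 'disk', 'disk', 'flash', 'disk', 'flash']
```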

Policy for Selective Flushing of Smartphone Buffer Cache using Persistent Memory (영속 메모리를 이용한 스마트폰 버퍼 캐시의 선별적 플러시 정책)

  • Lim, Soojung;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.1, pp.71-76, 2022
  • The buffer cache bridges the performance gap between memory and storage, but its effectiveness on smartphones is limited by the periodic flushes performed to prevent data loss. This paper shows that a selective flushing technique backed by a small persistent memory can significantly reduce the flushing overhead of the smartphone buffer cache. Our I/O analysis of smartphone applications shows that a small set of hot data accounts for most file writes, while a large proportion of file data receives only a single write. The proposed selective flushing policy flushes frequently updated data to persistent memory and flushes only single-write data to storage. This reduces storage write traffic and also improves the space efficiency of the persistent memory. Simulations with I/O traces of popular smartphone applications show that the proposed policy reduces write traffic to storage by 24.8% on average and by up to 37.8%.
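
The policy can be sketched as a per-page update counter consulted at flush time: frequently updated pages flush to persistent memory, single-write pages to storage. The hot-data threshold below is an illustrative assumption.

```python
# Sketch of selective flushing by update frequency.

HOT_THRESHOLD = 2   # updates needed to count as "hot" (assumed)

update_count = {}   # page -> number of writes since the last flush

def write(page):
    update_count[page] = update_count.get(page, 0) + 1

def flush():
    """Periodic flush: return {page: destination} and reset the counters."""
    plan = {
        page: "persistent-memory" if n >= HOT_THRESHOLD else "storage"
        for page, n in update_count.items()
    }
    update_count.clear()
    return plan

for p in ("db.journal", "db.journal", "photo.jpg"):
    write(p)
print(flush())
# {'db.journal': 'persistent-memory', 'photo.jpg': 'storage'}
```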

The Effect of Absorbing Hot Write References on FTLs for Flash Storage Supporting High Data Integrity (데이터 무결성을 보장하는 플래시 저장 장치에서 잦은 쓰기 참조 흡수가 플래시 변환 계층에 미치는 영향)

  • Shim, Myoung-Sub;Doh, In-Hwan;Moon, Young-Je;Lee, Hyo-J.;Choi, Jong-Moo;Lee, Dong-Hee;Noh, Sam-H.
    • Journal of KIISE: Computing Practices and Letters, v.16 no.3, pp.336-340, 2010
  • Flash storage is prevalent as portable storage in computing systems. Given the detachability of flash storage devices, data integrity becomes an important issue. To ensure strict data integrity, file systems synchronously write all file data to storage, which produces hot write references. In this study, we concentrate on the effect of these hot write references on flash storage, and we consider how absorbing them in a nonvolatile write cache affects the performance of the FTL schemes inside the device. To this end, we quantify the performance of typical FTL schemes on workloads that contain hot write references through a wide range of experiments in a real system environment. From the results, we conclude that the impact of the underlying FTL scheme on flash storage performance is dramatically reduced by absorbing the hot write references in a nonvolatile write cache.
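
The studied effect can be sketched as a small nonvolatile cache placed in front of the FTL that absorbs repeated synchronous writes to hot sectors. The cache size and admission rule below are illustrative assumptions.

```python
# Sketch: a nonvolatile write cache absorbs hot synchronous writes so
# they never reach the FTL.

CACHE_SIZE = 4
nv_cache = {}            # sector -> data held in the nonvolatile cache
ftl_writes = 0           # writes that actually reach the FTL

def sync_write(sector, data):
    """Synchronous write path with the nonvolatile cache in front."""
    global ftl_writes
    if sector in nv_cache or len(nv_cache) < CACHE_SIZE:
        nv_cache[sector] = data          # hot rewrites are absorbed here
        return
    ftl_writes += 1                      # overflow goes to the FTL

for i in range(10):
    sync_write(7, f"metadata-v{i}")      # the same hot sector, 10 syncs
print(len(nv_cache), ftl_writes)         # 1 0 -> the FTL never sees them
```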