• Title/Summary/Keyword: Cache data

An Efficient Variable Rearrangement Technique for STT-RAM Based Hybrid Caches

  • Youn, Jonghee M.;Cho, Doosan
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.2
    • /
    • pp.67-78
    • /
    • 2016
  • The emerging Spin-Transfer Torque RAM (STT-RAM) is a promising component that can improve cache efficiency thanks to its high storage density and low leakage power. However, state-of-the-art STT-RAM is not yet ready to replace SRAM because of the cost of its write operations, which require longer latency and more power than the same operations in SRAM. A hybrid cache combining SRAM and STT-RAM has therefore been proposed to obtain the benefits of STT-RAM while using SRAM to minimize its drawbacks. To use the hybrid cache efficiently, write-intensive data must be placed in SRAM so that the penalty of STT-RAM writes is avoided. We propose a technique that optimizes the placement of data in main memory to exploit the respective strengths of SRAM and STT-RAM in the hybrid cache: write-intensive data are loaded into SRAM and read-intensive data into STT-RAM. In addition, the technique optimizes temporal locality to minimize conflict misses. As a result, it improves the performance and energy consumption of the hybrid cache architecture to a certain extent.
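
To make the placement idea concrete, here is a minimal sketch (not the authors' actual compiler pass): variables are classified by profiled read/write counts, and write-heavy ones are assigned to the SRAM-backed region. The profile format, the 0.5 write-share threshold, and the region labels are illustrative assumptions.

```python
# Illustrative sketch of intensity-driven placement for an SRAM/STT-RAM
# hybrid cache; profile counts, threshold, and region names are assumptions,
# not the paper's algorithm.

def classify_variables(profile, write_bias=0.5):
    """Split variables into SRAM (write-intensive) and STT-RAM
    (read-intensive) placement groups based on profiled access counts."""
    placement = {}
    for var, (reads, writes) in profile.items():
        total = reads + writes
        # Variables whose write share exceeds the bias go to SRAM, so the
        # costly STT-RAM write operations are avoided for them.
        placement[var] = "SRAM" if total and writes / total > write_bias else "STT-RAM"
    return placement

if __name__ == "__main__":
    # (reads, writes) gathered from a hypothetical profiling run
    profile = {"frame_buf": (120, 900), "coeff_table": (5000, 2), "tmp": (40, 35)}
    for var, region in classify_variables(profile).items():
        print(f"{var:12s} -> {region}")
```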

A Cache Consistency Algorithm for Client Caching Data Management Systems (클라이언트 캐슁 데이터 관리 시스템을 위한 캐쉬 일관성 알고리즘)

  • Kim Chi-Yeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2006.05a
    • /
    • pp.1043-1046
    • /
    • 2006
  • Cached data management at clients is required to guarantee the correctness of client applications. There are two categories of cache consistency algorithms: detection-based and avoidance-based. Detection-based schemes allow stale data to be accessed and then check the validity of any cached data before a transaction can be allowed to commit. In contrast, under avoidance-based algorithms, transactions never have the opportunity to access stale data. In this paper, we propose a new avoidance-based cache consistency algorithm that makes use of versions. The proposed method maintains two versions at clients and servers, so it requires no callback messages and can reduce the transaction abort ratio compared with single-versioned algorithms. In addition, the proposed method can decrease cache misses by using a mix of invalidation and propagation for remote update actions.
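
A rough sketch of the versioned idea follows, under assumed semantics the abstract does not spell out: the server keeps the current and previous value of each object and, on a remote update, either propagates the new value or sends an invalidation to caching clients. Class names and the hot-key policy are illustrative only.

```python
# Minimal two-version sketch loosely inspired by the abstract; the
# propagation-vs-invalidation policy and all names are assumptions.

class TwoVersionServer:
    def __init__(self):
        self.store = {}      # key -> (current_value, previous_value, version)
        self.cachers = {}    # key -> set of clients caching the key

    def read(self, client, key):
        cur, _, ver = self.store[key]
        self.cachers.setdefault(key, set()).add(client)
        client.cache[key] = (cur, ver)
        return cur

    def write(self, key, value, hot=False):
        cur, _, ver = self.store.get(key, (None, None, 0))
        self.store[key] = (value, cur, ver + 1)   # keep previous version too
        # Mixed policy: propagate new values of frequently read ("hot") keys,
        # otherwise just invalidate, saving bandwidth.
        for c in self.cachers.get(key, set()):
            if hot:
                c.cache[key] = (value, ver + 1)   # propagation
            else:
                c.cache.pop(key, None)            # invalidation

class Client:
    def __init__(self):
        self.cache = {}

if __name__ == "__main__":
    srv, c1 = TwoVersionServer(), Client()
    srv.store["x"] = (10, None, 1)
    print(srv.read(c1, "x"))       # 10, now cached at the client
    srv.write("x", 11, hot=True)   # propagated: the client sees the new version
    print(c1.cache["x"])
```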

A Study on the Data Retrieval By Using a Cache Forward/Backward Technique (캐쉬 Forward/Backward기법을 이용한 데이터 검색에 관한 연구)

  • Kim Soo-Jang
    • 한국정보통신설비학회:학술대회논문집
    • /
    • 2003.08a
    • /
    • pp.229-233
    • /
    • 2003
  • Recently, as the number of Internet users has grown rapidly, providing fast service has become a major concern. In particular, store, delete, and update operations in a database system may impose long wait times on users and thus cause complaints. This paper discusses the cache of the application server, which is widely used in three-tier architectures. A conventional application server stores data in its own cache and serves repeated requests for the same data from that cache. We instead propose managing the connected clients, building a cache at each client, and, when the application server or the database server cannot provide the service, locating the client that holds the most recent data and serving the data from that client's cache.
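
The fallback path described above might look roughly like the following sketch; the registry, timestamps, and function names are assumptions rather than the paper's design.

```python
# Sketch of the fallback lookup: if the server-side cache misses and the
# database is unavailable, ask a registry for the client holding the freshest
# copy. All names here are illustrative assumptions.

import time

class ClientCacheRegistry:
    def __init__(self):
        self.copies = {}   # key -> {client_id: (value, timestamp)}

    def record(self, client_id, key, value):
        self.copies.setdefault(key, {})[client_id] = (value, time.time())

    def freshest(self, key):
        """Return (client_id, value) of the most recently updated copy, or None."""
        holders = self.copies.get(key)
        if not holders:
            return None
        client_id, (value, _) = max(holders.items(), key=lambda kv: kv[1][1])
        return client_id, value

def fetch(key, server_cache, registry, db_available):
    if key in server_cache:
        return server_cache[key]               # normal path: server-side cache
    if db_available:
        raise NotImplementedError("query the database here")
    fallback = registry.freshest(key)          # server/DB down: use a client copy
    return fallback[1] if fallback else None

if __name__ == "__main__":
    reg = ClientCacheRegistry()
    reg.record("client-A", "order:42", {"status": "shipped"})
    print(fetch("order:42", server_cache={}, registry=reg, db_available=False))
```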

Buffer Cache Management of Smartphones Exploiting Write-Only-Once Characteristics (1회성 쓰기 참조 특성을 고려하는 스마트폰 버퍼캐쉬 관리 기법)

  • Kim, Dohee;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.6
    • /
    • pp.129-134
    • /
    • 2015
  • This paper analyzes the file access characteristics of smartphone apps and finds that a large portion of file writes are performed only once. Based on this observation, we present a new buffer cache management scheme that takes this characteristic into account. A buffer cache improves storage performance by keeping hot file data in memory, thereby servicing subsequent requests without storage accesses; however, it must flush modified data to storage in order to survive system crashes. The proposed scheme evicts cache data that has been written only once when it is flushed, thus improving cache space utilization. Simulation experiments show that the proposed scheme improves the cache hit ratio by 5-33% and reduces power consumption by 27-92%.
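
A minimal sketch of the flush-time eviction policy, assuming a simple dictionary-backed page cache; the class and field names are not from the paper.

```python
# Pages written exactly once are dropped from the cache when they are flushed,
# freeing space for data that is more likely to be re-referenced.

class WriteOnceAwareCache:
    def __init__(self):
        self.pages = {}         # page_id -> data
        self.write_counts = {}  # page_id -> number of writes since load
        self.dirty = set()

    def write(self, page_id, data):
        self.pages[page_id] = data
        self.write_counts[page_id] = self.write_counts.get(page_id, 0) + 1
        self.dirty.add(page_id)

    def flush(self, storage):
        """Persist dirty pages; evict those that were written only once."""
        for page_id in list(self.dirty):
            storage[page_id] = self.pages[page_id]
            if self.write_counts[page_id] == 1:
                # Written once and now safely on storage: keeping it cached is
                # unlikely to pay off, so reclaim the space immediately.
                del self.pages[page_id]
                del self.write_counts[page_id]
        self.dirty.clear()

if __name__ == "__main__":
    cache, storage = WriteOnceAwareCache(), {}
    cache.write("p1", b"log record")                     # written once -> evicted
    cache.write("p2", b"v1"); cache.write("p2", b"v2")   # rewritten -> kept
    cache.flush(storage)
    print(sorted(cache.pages), sorted(storage))
```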

A Hardware Cache Prefetching Scheme for Multimedia Data with Intermittently Irregular Strides (단속적(斷續的) 불규칙 주소간격을 갖는 멀티미디어 데이타를 위한 하드웨어 캐시 선인출 방법)

  • Chon Young-Suk;Moon Hyun-Ju;Jeon Joongnam;Kim Sukil
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.11
    • /
    • pp.658-672
    • /
    • 2004
  • Multimedia applications are required to process huge amounts of data at high speed in real time. Memory reference instructions such as loads and stores are the main factor limiting high-speed execution on a processor. To improve memory reference speed, cache prefetching schemes reduce the cache miss ratio and the total execution time by fetching data into the cache before it is expected to be referenced. In this study, we present an advanced data cache prefetching scheme that improves the conventional RPT (reference prediction table) based scheme. It takes the cache line size into account when calculating the address stride of references made by the same instruction, and it enhances the prefetching algorithm so that the benefit of prefetching is maintained even when an irregular address stride is inserted into a series of uniform strides. Experiments on multimedia benchmark programs show that the cache miss ratio improves by 29% on average compared to the conventional RPT scheme, while bus usage increases by a relatively small amount (0.03%).
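
The stride handling could be sketched as below, assuming a 64-byte cache line and a simplified confidence state machine; the actual RPT fields and prefetch-issue policy in the paper are more involved.

```python
# RPT-style prefetcher sketch: strides are measured in cache lines rather than
# bytes, and a single irregular stride does not reset the learned pattern.

LINE = 64  # assumed cache line size in bytes

class RPTEntry:
    def __init__(self, addr):
        self.last_line = addr // LINE
        self.stride = 0
        self.confident = False      # True after two identical strides
        self.missed_once = False    # tolerate one irregular access

def access(table, pc, addr, prefetch_queue):
    line = addr // LINE
    e = table.get(pc)
    if e is None:
        table[pc] = RPTEntry(addr)
        return
    stride = line - e.last_line
    if stride == e.stride and stride != 0:
        e.confident, e.missed_once = True, False
        prefetch_queue.append((line + stride) * LINE)   # prefetch the next line
        e.last_line = line
    elif e.confident and not e.missed_once:
        # One irregular stride: remember it happened but keep the old pattern.
        e.missed_once = True
    else:
        e.stride, e.confident, e.missed_once = stride, False, False
        e.last_line = line

if __name__ == "__main__":
    table, pq = {}, []
    # regular one-line stride with one irregular access in the middle
    for a in [0, 64, 128, 4096, 192, 256]:
        access(table, pc=0x400, addr=a, prefetch_queue=pq)
    print([hex(p) for p in pq])
```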

Design of Push Agent Model Using Dual Cache for Increasing Hit-Ratio of Data Search (데이터 검색의 적중률 향상을 위한 이중 캐시의 푸시 에이전트 모델 설계)

  • Kim Kwang-jong;Ko Hyun;Kim Young-ja;Lee Yon-sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.6 s.38
    • /
    • pp.153-166
    • /
    • 2005
  • Existing single-cache structures show different hit ratios depending on the replacement strategy used, and an improved cache structure is needed to reduce network traffic and provide a higher hit ratio. This paper therefore designs a push agent model using a dual cache to increase the hit ratio, reducing the server overload and network traffic caused by repeated requests for the same persistent information. The model proposes a dual cache structure in which replacement proceeds gradually across two cache storage spaces. We also present new cache replacement techniques and algorithms that update and delete data based on the Log(Size)+LRU, LFU, and PLC replacement strategies for effective data search in the cache. Finally, the performance of the dual-cache push agent model is evaluated through experiments.
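
Of the replacement strategies named above, Log(Size)+LRU is the easiest to sketch: evict from the largest floor(log2(size)) bucket, breaking ties by recency. The single-level class below is an assumption-laden illustration and does not model the paper's dual-cache coordination.

```python
# Log(Size)+LRU replacement sketch for a single cache level.

import math
from collections import OrderedDict

class LogSizeLRUCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = OrderedDict()   # key -> size, ordered oldest -> newest

    def _evict_one(self):
        max_bucket = max(int(math.log2(s)) for s in self.items.values())
        # OrderedDict iterates oldest-first, so the first match in the largest
        # size bucket is also the least recently used one.
        victim = next(k for k, s in self.items.items()
                      if int(math.log2(s)) == max_bucket)
        self.used -= self.items.pop(victim)

    def put(self, key, size):
        if key in self.items:
            self.used -= self.items.pop(key)
        while self.used + size > self.capacity and self.items:
            self._evict_one()
        self.items[key] = size
        self.used += size

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)   # refresh recency on a hit
            return True
        return False

if __name__ == "__main__":
    c = LogSizeLRUCache(capacity_bytes=100)
    c.put("a", 60); c.put("b", 30); c.get("a")
    c.put("c", 40)        # needs space: "a" (largest size bucket) is evicted
    print(list(c.items))  # ['b', 'c']
```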

A Cache Consistency Scheme to Consider Period of Data in Mobile Computing Environments (이동 컴퓨팅 환경에서 데이터의 주기성을 고려한 캐쉬 일관성 기법)

  • Lim, Jong-Won;Hwang, Byung-Yeon
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.4
    • /
    • pp.421-431
    • /
    • 2007
  • Due to the rapid development of wireless communication technology, demand for data services in mobile computing environments is steadily increasing. In these environments, mobile hosts use a cache to overcome the limitations of the wireless link: caching at the mobile host reduces bandwidth consumption and query response time, but the host must maintain cache consistency. In this paper, we propose a cache consistency strategy that considers the periodicity of data usage. The server enables effective broadcasting by classifying data into two groups, periodic and non-periodic. With this classification, periodic data need not be discarded from the cache after a disconnection, because its expiration time is known. By storing IR (Invalidation Report) messages, the scheme selectively validates cached data after a disconnection. Consequently, the proposed scheme shows a significant improvement in total bandwidth consumption over the conventional scheme.
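
A sketch of the reconnect-time validation implied by the abstract, assuming each cached item carries a periodic flag and (for periodic items) a next-update time, and that IR messages buffered during the disconnection are replayed; the field names and rules are assumptions.

```python
# Drop only the cache entries that can no longer be trusted after reconnection.

import time

def revalidate(cache, saved_irs, disconnect_start, now=None):
    now = now if now is not None else time.time()
    for key, entry in list(cache.items()):
        if entry.get("periodic"):
            # Periodic data stays valid until its next scheduled update.
            if now >= entry["next_update"]:
                del cache[key]
        else:
            # Non-periodic data is dropped only if an IR reported a change
            # while the host was disconnected.
            if any(ir["key"] == key and ir["ts"] >= disconnect_start
                   for ir in saved_irs):
                del cache[key]

if __name__ == "__main__":
    t0 = 1_000.0
    cache = {
        "weather": {"periodic": True, "next_update": t0 + 600},   # still fresh
        "news":    {"periodic": True, "next_update": t0 + 10},    # period passed
        "profile": {"periodic": False},
    }
    irs = [{"key": "profile", "ts": t0 + 50}]
    revalidate(cache, irs, disconnect_start=t0, now=t0 + 100)
    print(sorted(cache))   # ['weather']
```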

Energy-efficient Set-associative Cache Using Bi-mode Way-selector (에너지 효율이 높은 이중웨이선택형 연관사상캐시)

  • Lee, Sungjae;Kang, Jinku;Lee, Juho;Youn, Jiyong;Lee, Inhwan
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.1 no.1
    • /
    • pp.1-10
    • /
    • 2012
  • The way-lookup cache and the way-tracking cache are considered to be the most energy-efficient when used for level 1 and level 2 caches, respectively. This paper proposes an energy-efficient set-associative cache using the bi-mode way-selector that combines the way selecting techniques of the way-tracking cache and the way-lookup cache. The simulation results using an Alpha 21264-based system show that the bi-mode way-selecting L1 instruction cache consumes 27.57% of the energy consumed by the conventional set-associative cache and that it is as energy-efficient as the way-lookup cache when used for L1 instruction cache. The bi-mode way-selecting L1 data cache consumes 28.42% of the energy consumed by the conventional set-associative cache, which means that it is more energy-efficient than the way-lookup cache by 15.54% when used for L1 data cache. The bi-mode way-selecting L2 cache consumes 15.41% of the energy consumed by the conventional set-associative cache, which means that it is more energy-efficient than the way-tracking cache by 16.16% when used for unified L2 cache. These results show that the proposed cache can provide the best level of energy-efficiency regardless of the cache level.
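
The way-selection idea can be approximated in software as below, counting data-way reads as a crude energy proxy; the structures, replacement policy, and counts are assumptions and do not reproduce the paper's evaluation.

```python
# A tracked way is read directly; otherwise tags are checked first so that
# only the matching data way is activated.

class BiModeWaySelectCache:
    def __init__(self, num_sets=64, ways=4, line=64):
        self.num_sets, self.ways, self.line = num_sets, ways, line
        self.tags = [[None] * ways for _ in range(num_sets)]
        self.tracked = {}        # block address -> way it was last found in
        self.data_way_reads = 0  # proxy for data-array energy

    def access(self, addr):
        block = addr // self.line
        s, tag = block % self.num_sets, block // self.num_sets
        way = self.tracked.get(block)
        if way is not None and self.tags[s][way] == tag:
            self.data_way_reads += 1          # way-tracking mode: one way read
            return True
        for w in range(self.ways):            # way-lookup mode: tags first,
            if self.tags[s][w] == tag:        # then a single data way
                self.data_way_reads += 1
                self.tracked[block] = w
                return True
        victim = hash(block) % self.ways      # simplistic replacement choice
        self.tags[s][victim] = tag
        self.tracked[block] = victim
        self.data_way_reads += 1              # the fill writes one data way
        return False

if __name__ == "__main__":
    c = BiModeWaySelectCache()
    for a in [0, 0, 4096, 0, 4096]:
        c.access(a)
    # 5 data-way reads, versus 4 ways x 5 accesses for a conventional
    # parallel-access set-associative cache.
    print("data-way reads:", c.data_way_reads)
```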

A Study on Performance Improvement in Cellular IP Using Combined Cache and Different Handoff (통합 캐시 및 차별화된 핸드오프를 이용한 셀룰러 IP의 성능개선에 관한 연구)

  • Seo Jeong Hwa;Kim Nam
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.12B
    • /
    • pp.1063-1069
    • /
    • 2004
  • There are some problems with the paging and routing cache (PRC) and quasi-soft handoff methods that have been proposed to improve the performance of real-time data transmission. This paper offers a practical solution by proposing a new method that uses a combined cache (CC) and applies a different handoff procedure according to the type of data. The combined cache does not maintain handoff state but only idle/active state. Simulations show better performance than the above methods in terms of control packet traffic load, initial received data packet traffic load, and the arrival time of real-time packets at the moment of handoff.
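
A very rough sketch of the combined-cache idea: a single cache keeps a route and an idle/active flag per mobile host instead of separate paging and routing caches, and handoff handling is dispatched by data type. The procedures and names below are placeholders, not the paper's Cellular IP protocol.

```python
# Combined cache with idle/active state and data-type-dependent handoff.

combined_cache = {}   # mobile_host -> {"route": port, "state": "idle"|"active"}

def update_entry(host, route, active):
    combined_cache[host] = {"route": route,
                            "state": "active" if active else "idle"}

def handoff(host, new_route, realtime):
    entry = combined_cache[host]
    if realtime:
        # Real-time traffic: switch the route immediately to keep latency low.
        entry["route"] = new_route
    else:
        # Non-real-time traffic: record the new route and switch lazily,
        # e.g. when the next packet from the host arrives.
        entry["pending_route"] = new_route

if __name__ == "__main__":
    update_entry("MH1", route=1, active=True)
    handoff("MH1", new_route=2, realtime=True)
    print(combined_cache["MH1"])
```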

A Cache Privacy Protection Mechanism based on Dynamic Address Mapping in Named Data Networking

  • Zhu, Yi;Kang, Haohao;Huang, Ruhui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.6123-6138
    • /
    • 2018
  • Named Data Networking (NDN) is a new network architecture designed for the next-generation Internet. Router-side content caching is one of its key features: it reduces redundant transmission, accelerates content distribution, and alleviates congestion. However, it also introduces several security problems. One important risk is cache privacy leakage: by measuring content retrieval times, an adversary can infer which private content its neighboring users are interested in. Focusing on this problem, we propose a cache privacy protection mechanism (named CPPM-DAM) that uses a Bloom filter to distinguish legitimate users from adversaries. An optimization of the storage cost is further provided to make the mechanism more practical. Simulation results with ndnSIM show that CPPM-DAM effectively protects cache privacy.
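
One plausible (assumed) reading of the Bloom-filter check: the router records identifiers of legitimate users and serves cached content only to members, so an adversary cannot probe cache contents through response-time differences. The hashing scheme, identifier format, and serving policy below are illustrative assumptions, not CPPM-DAM itself.

```python
# Bloom-filter membership check gating cache hits for registered users only.

import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

def serve(request_id, content_name, cache, legit_users):
    if request_id in legit_users and content_name in cache:
        return cache[content_name], "cache"         # fast path for members only
    return f"<fetched {content_name}>", "upstream"  # others always go upstream

if __name__ == "__main__":
    legit = BloomFilter(); legit.add("user-1")
    cache = {"/video/a": "<data>"}
    print(serve("user-1", "/video/a", cache, legit))    # served from cache
    print(serve("attacker", "/video/a", cache, legit))  # forced upstream
```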