• Title/Summary/Keyword: cache access history


PSS Movement Prediction Algorithm for Seamless Handover (휴대인터넷에서 seamless handover를 위한 단말 이동 예측 알고리즘)

  • Lee, Ho-Jeong; Yun, Chan-Young; Oh, Young-Hwan
    • Journal of the Institute of Electronics Engineers of Korea TC / v.43 no.12 s.354 / pp.53-60 / 2006
  • The WiBro handover procedure is based on the IEEE 802.16e hard handover scheme. When a PSS performs a handover, it checks the condition of neighboring cells and their RAS IDs from the neighbor advertisement message, and the serving RAS then transmits HO-notification messages to the neighbor RASs. Sending HO-notification messages to every neighbor RAS generates a large amount of signaling traffic, and the hard handover also causes significant packet loss, so the user suffers service degradation. LPM handover supports seamless handover by buffering data packets during the handover, and schemes have been proposed that combine LPM handover with pre-authentication to reserve the target RAS in advance; however, these schemes still generate heavy signaling traffic. In this paper, we propose a PSS movement prediction scheme to reduce this signaling traffic. The target RAS is determined from past handover records stored in a history cache. When the serving RAS receives the HO-notification-RSP message from the target RAS, the target RAS informs the crossover node, and the crossover node bicasts data packets. After the handover completes, the target RAS forwards the buffered data packets. As a result, the scheme reduces signaling traffic while increasing the handover success rate: when the history cache prediction succeeds, total traffic decreases by about 48%, and when it fails, total traffic increases by about 6%.
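
The core of the scheme is the history-cache lookup that picks a single target RAS from past handovers, so that only that RAS needs an HO-notification. Below is a minimal Python sketch of that step, assuming a hypothetical cache keyed by (PSS ID, serving RAS) and a caller-supplied send function; the names and the fallback to notifying every neighbor RAS are illustrative assumptions, not the paper's implementation.

```python
class HandoverHistoryCache:
    """Remembers, per (pss_id, serving_ras), which RAS the PSS moved to last time."""

    def __init__(self):
        self._cache = {}  # (pss_id, serving_ras) -> last observed target RAS

    def record(self, pss_id, serving_ras, target_ras):
        # Called after a completed handover to update the movement history.
        self._cache[(pss_id, serving_ras)] = target_ras

    def predict(self, pss_id, serving_ras):
        # Returns the predicted target RAS, or None on a history-cache miss.
        return self._cache.get((pss_id, serving_ras))


def notify_for_handover(cache, pss_id, serving_ras, neighbor_ras_ids, send_ho_notification):
    """Send HO-notification only to the predicted target RAS on a cache hit;
    fall back to notifying every neighbor RAS (the baseline behavior) on a miss."""
    target = cache.predict(pss_id, serving_ras)
    if target is not None and target in neighbor_ras_ids:
        send_ho_notification(target)   # a single message instead of one per neighbor RAS
        return [target]
    for ras in neighbor_ras_ids:       # miss: behave like the original hard handover scheme
        send_ho_notification(ras)
    return list(neighbor_ras_ids)
```

On a hit only one HO-notification is sent instead of one per neighbor; on a miss the sketch simply falls back to notifying all neighbors, as in the baseline scheme.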

Implementation and Performance Analysis of Event Processing and Buffer Managing Techniques for DDS (고성능 데이터 발간/구독 미들웨어의 이벤트, 버퍼 처리 기술 및 성능 분석)

  • Yoon, Gunjae; Choi, Hoon
    • Journal of KIISE / v.44 no.5 / pp.449-459 / 2017
  • Data Distribution Service (DDS) is a communication middleware that supports flexible, scalable, real-time communication. This paper describes several techniques for improving the performance of DDS middleware. Detailed events for the internal behavior of the middleware are defined, and a DDS message is disassembled into several independent, meaningful submessages for event-driven structuring in order to reduce processing complexity. A history cache management technique is also proposed; it exploits the fact that status access and random access to the history cache occur frequently in DDS. These methods have been implemented in EchoDDS, the DDS implementation developed by our team, and the evaluation showed improved performance.
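
The history-cache technique mentioned in the abstract exploits how often status queries and random (per-sample) access occur. The following is a minimal sketch of that idea, assuming a cache indexed by sequence number with status counters maintained incrementally rather than recomputed on each query; the class and method names are illustrative and are not EchoDDS or DDS API names.

```python
from collections import OrderedDict

class HistoryCache:
    """Toy history cache optimized for frequent status access and random access."""

    def __init__(self, depth):
        self.depth = depth
        self._samples = OrderedDict()  # sequence number -> sample entry, oldest first
        self._unread = 0               # status counter updated on every change, never recomputed

    def add(self, seq_num, sample):
        if len(self._samples) >= self.depth:
            _, oldest = self._samples.popitem(last=False)  # KEEP_LAST-style eviction
            if not oldest["read"]:
                self._unread -= 1
        self._samples[seq_num] = {"data": sample, "read": False}
        self._unread += 1

    def read(self, seq_num):
        # Random access by sequence number is a dictionary lookup, not a linear scan.
        entry = self._samples.get(seq_num)
        if entry is not None and not entry["read"]:
            entry["read"] = True
            self._unread -= 1
        return None if entry is None else entry["data"]

    def status(self):
        # Status access returns the cached counters directly.
        return {"stored": len(self._samples), "unread": self._unread}
```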

Using Cache Access History for Reducing False Conflicts in Signature-Based Eager Hardware Transactional Memory (시그니처 기반 이거 하드웨어 트랜잭셔널 메모리에서의 캐시 접근 이력을 이용한 거짓 충돌 감소)

  • Kang, Jinku; Lee, Inhwan
    • Journal of KIISE / v.42 no.4 / pp.442-450 / 2015
  • This paper proposes a method for reducing false conflicts in signature-based eager hardware transactional memory (HTM). The method tracks information on all cache blocks accessed by a transaction. If that information provides evidence that a given transactional request from another core does not actually conflict, the method prevents the false conflict by forcing the HTM to ignore the signature-based decision. The method is very effective in reducing false conflicts and the associated unnecessary transaction stalls and aborts, and can be used to improve the performance of a multicore processor that implements signature-based eager HTM. When running the STAMP benchmark on a 16-core processor that implements LogTM-SE, the method improves speed (reduces execution time) by 20.6% on average.
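
The key step is letting exact information about the cache blocks a transaction has touched override a positive signature test, since signature aliasing is what produces false conflicts. A minimal Python sketch of that filtering decision follows, assuming a simple hash-based signature and an exact per-transaction block set; in hardware the exact record would be kept alongside the cache, so these structures are only illustrative, not LogTM-SE's.

```python
SIG_BITS = 1024  # signature width; hardware uses a Bloom-filter-like bit vector

class TransactionAccessTracker:
    """Per-core transactional state: conservative signature plus exact accessed-block set."""

    def __init__(self):
        self.signature = 0            # may report blocks never accessed (false positives)
        self.accessed_blocks = set()  # exact record of cache blocks touched by the transaction

    def record_access(self, block_addr):
        self.signature |= 1 << (hash(block_addr) % SIG_BITS)
        self.accessed_blocks.add(block_addr)

    def conflicts_with(self, block_addr):
        # Step 1: the usual signature test. A clear miss is always safe.
        if not (self.signature >> (hash(block_addr) % SIG_BITS)) & 1:
            return False
        # Step 2: the exact record vetoes false positives caused by signature aliasing,
        # so requests from other cores are not stalled or aborted unnecessarily.
        return block_addr in self.accessed_blocks
```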

Implementation of Memory Efficient Flash Translation Layer for Open-channel SSDs

  • Oh, Gijun; Ahn, Sungyong
    • International journal of advanced smart convergence / v.10 no.1 / pp.142-150 / 2021
  • Open-channel SSD is a new type of Solid-State Disk (SSD) that reduces the garbage collection overhead and write amplification caused by the physical constraints of NAND flash memory by exposing the internal structure of the SSD to the host. However, the host-level Flash Translation Layer (FTL) provided for open-channel SSDs in the current Linux kernel consumes host memory excessively, because it uses a page-level mapping table to translate logical addresses to physical addresses. Therefore, in this paper we implement a selective mapping table loading scheme that loads only the currently required part of the mapping table from the SSD into a mapping table cache, instead of the entire table. In addition, to increase the hit ratio of the mapping table cache, filesystem information and the mapping table access history are used in the cache replacement policy. The proposed scheme is implemented in the host-level FTL of the Linux kernel and evaluated using an open-channel SSD emulator. According to the evaluation results, it achieves 80% of the I/O performance of the previous host-level FTL while using only 32% of the memory.
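
The central mechanism is demand loading of mapping-table pieces into a bounded cache instead of keeping the whole page-level table in host memory. A minimal sketch follows, assuming fixed-size mapping segments, a caller-supplied `read_segment_from_ssd` loader, and plain LRU standing in for the paper's replacement policy (which additionally uses filesystem information and access history); none of these names come from the actual Linux host-level FTL code.

```python
from collections import OrderedDict

ENTRIES_PER_SEGMENT = 1024  # placeholder: logical pages covered by one mapping segment

class MappingTableCache:
    """Keeps only the currently needed mapping-table segments in host memory."""

    def __init__(self, capacity_segments, read_segment_from_ssd):
        self.capacity = capacity_segments
        self.read_segment = read_segment_from_ssd  # callback: segment_id -> list of physical addresses
        self.segments = OrderedDict()              # segment_id -> entries, kept in LRU order

    def lookup(self, lpn):
        """Translate a logical page number to a physical address, loading on demand."""
        seg_id, offset = divmod(lpn, ENTRIES_PER_SEGMENT)
        seg = self.segments.get(seg_id)
        if seg is None:
            if len(self.segments) >= self.capacity:
                self.segments.popitem(last=False)  # evict the least recently used segment
            seg = self.read_segment(seg_id)        # demand-load the missing segment from the SSD
            self.segments[seg_id] = seg
        else:
            self.segments.move_to_end(seg_id)      # record the access for the replacement policy
        return seg[offset]
```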

Scalar First Replacement Strategy for Reference Prediction Table Used in Prefetching Streaming Data (스트리밍 데이터의 선인출에 사용되는 참조예측표의 스칼라 우선 교체 전략)

  • Lim, Chul-hoo; Chon, Young-Suk; Kim, Suk-il; Jeon, Joong-nam
    • The KIPS Transactions:PartA / v.11A no.3 / pp.163-172 / 2004
  • Multimedia applications tend to access their data in a streaming pattern with regular intervals. This characteristic can be exploited to prefetch multimedia data into the cache memory and thereby improve execution speed. The reference prediction prefetch algorithm predicts the memory address likely to be used next, based on the history of memory references stored in the reference prediction table. This paper proposes a strategy for managing the reference prediction table, which holds entries for both scalar and stream data reference instructions. We observe that scalar reference instructions do not contribute to the prefetching algorithm. Therefore, when an entry of the reference prediction table must be replaced, the proposed algorithm preferentially selects a scalar reference entry before a stream reference entry. This keeps stream reference entries in the table longer than the FIFO replacement policy does, and ultimately improves the performance of data prefetching.
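
The proposal changes only the victim-selection step when the reference prediction table is full: scalar-reference entries are evicted first, and a stream entry is evicted only when no scalar entry remains. A minimal sketch follows, assuming each entry has already been classified as stream or scalar; the field names and the per-class FIFO tie-break are illustrative assumptions.

```python
from collections import deque

class ReferencePredictionTable:
    """Reference prediction table with scalar-first replacement (FIFO within each class)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.order = deque()  # instruction addresses (PCs) in insertion order
        self.entries = {}     # PC -> {"last_addr": ..., "stride": ..., "is_stream": ...}

    def insert(self, pc, last_addr, stride, is_stream):
        if pc not in self.entries and len(self.entries) >= self.capacity:
            self._evict()
        if pc not in self.entries:
            self.order.append(pc)
        self.entries[pc] = {"last_addr": last_addr, "stride": stride, "is_stream": is_stream}

    def _evict(self):
        # Prefer the oldest scalar-reference entry; only when every entry belongs to a
        # stream reference does the policy fall back to plain FIFO.
        victim = next((pc for pc in self.order if not self.entries[pc]["is_stream"]), None)
        if victim is None:
            victim = self.order[0]
        self.order.remove(victim)
        del self.entries[victim]
```

Keeping stream entries resident longer is what lets their regular strides keep driving prefetches, which is where the reported improvement over plain FIFO replacement comes from.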