• Title/Summary/Keyword: Processor locality

Bounding Worst-Case DRAM Performance on Multicore Processors

  • Ding, Yiqiang;Wu, Lan;Zhang, Wei
    • Journal of Computing Science and Engineering
    • /
    • v.7 no.1
    • /
    • pp.53-66
    • /
    • 2013
  • Bounding the worst-case DRAM performance of a real-time application is a challenging problem that is critical for computing the worst-case execution time (WCET), especially on multicore processors, where the DRAM memory is usually shared by all of the cores. Typically, the DRAM commands of consecutive DRAM accesses can be pipelined on DRAM devices, depending on the spatial locality of the data they fetch. By modeling the effect of DRAM command pipelining, we propose a basic approach to bounding the worst-case DRAM performance. An enhanced approach then reduces the overestimation caused by invalid DRAM access sequences by checking the timing order of the co-running applications on a dual-core processor. Compared with the conservative approach, which assumes that no DRAM command pipelining exists, our experimental results show that the basic approach bounds the WCET more tightly, by 15.73% on average. The experimental results also indicate that the enhanced approach further improves the tightness of the WCET bound by 4.23% on average over the basic approach.
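
The pipelining effect the authors exploit can be illustrated with a toy latency bound: accesses that hit the open row (spatial locality) let consecutive commands overlap, while the conservative bound charges every access its full command sequence. A minimal sketch, assuming hypothetical DDR timing values and a simplified overlap model rather than the paper's analysis:

```python
# Toy worst-case DRAM latency bound, with and without command pipelining.
# Timing values and the overlap factor are illustrative, not the paper's.
T_ACT, T_CAS, T_PRE = 15, 15, 15     # hypothetical command latencies (ns)

def worst_case_latency(accesses, pipelined):
    """accesses: list of row-buffer outcomes, 'hit' (same row) or 'miss'
    (row conflict, which needs PRE + ACT + CAS)."""
    total = 0
    for outcome in accesses:
        if outcome == 'hit':
            cost = T_CAS                     # row hit: column access only
            if pipelined:
                cost = T_CAS // 2            # overlaps the previous command
        else:
            cost = T_PRE + T_ACT + T_CAS     # full command sequence
        total += cost
    return total

seq = ['miss', 'hit', 'hit', 'miss', 'hit']
print(worst_case_latency(seq, pipelined=False))  # conservative bound: 135
print(worst_case_latency(seq, pipelined=True))   # tighter bound: 111
```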

Gated Recurrent Unit based Prefetching for Graph Processing (그래프 프로세싱을 위한 GRU 기반 프리페칭)

  • Shivani Jadhav;Farman Ullah;Jeong Eun Nah;Su-Kyung Yoon
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.2
    • /
    • pp.6-10
    • /
    • 2023
  • Data that is likely to be accessed can be predicted and stored in the cache ahead of time to prevent cache misses, reducing the processor's request and wait times; the processor can then work without stalling, hiding memory latency. By exploiting the temporal and spatial locality of memory accesses, a prefetcher predicts which memory address will be accessed next. We propose a prefetcher that applies the GRU model, which is well suited to handling time-series data. The currently accessed address is represented in binary, and the differences (deltas) between consecutive memory accesses are used as training data for the Gated Recurrent Unit model. Using the GRU model with the learned memory access patterns, the proposed data prefetcher then predicts the memory address to be accessed next. Compared against a multi-layer perceptron, our prefetcher showed better results (a sketch of the delta-based predictor follows this entry).

  • PDF
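
A minimal sketch of the delta-based GRU predictor described above, written in PyTorch; the embedding and hidden sizes, the history window, and the synthetic repeating delta pattern are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DeltaGRUPrefetcher(nn.Module):
    """Predict the next address delta from a window of previous deltas."""
    def __init__(self, num_deltas, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_deltas, embed_dim)   # delta vocabulary
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_deltas)      # next-delta logits

    def forward(self, delta_seq):                # (batch, window) of delta ids
        h, _ = self.gru(self.embed(delta_seq))   # (batch, window, hidden)
        return self.head(h[:, -1, :])            # logits for the next delta

# Toy training data: a hypothetical repeating access-stride pattern.
deltas = torch.tensor([64, 64, 192] * 32)
vocab = {int(d): i for i, d in enumerate(torch.unique(deltas))}
ids = torch.tensor([vocab[int(d)] for d in deltas])

model = DeltaGRUPrefetcher(num_deltas=len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
window = 8
for step in range(200):                          # slide over the delta stream
    start = step % (len(ids) - window)
    x = ids[start:start + window].unsqueeze(0)   # (1, window) delta history
    y = ids[start + window].unsqueeze(0)         # the delta that follows
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Predicted next-delta class for the most recent history window:
pred = model(ids[-window:].unsqueeze(0)).argmax(dim=-1)
```

The prefetch address is then the last accessed address plus the delta that the predicted class maps back to.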

A Study on Modular MIN (Modular MIN에 관한 연구)

  • 장창수;최창훈;유창하
    • The Journal of the Korea Contents Association
    • /
    • v.2 no.2
    • /
    • pp.103-111
    • /
    • 2002
  • In parallel application programs with localized communication, even though MINs have low diameters, overall system performance degrades compared to hypercube and tree structures. The reason is that MINs cannot provide a clustering mechanism to exploit locality of reference. The proposed MIN, however, can be constructed to suit localized communication by providing shortcut paths and multiple paths inside the processor-memory cluster that has frequent data communication. The proposed MIN therefore achieves better performance in parallel application programs with localized communication (see the routing sketch following this entry).

  • PDF
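
The shortcut idea reduces to a routing-cost rule: traffic between nodes in the same processor-memory cluster takes the short intra-cluster path, while inter-cluster traffic traverses the full multistage network. A sketch under assumed cluster and network sizes (the abstract does not fix these values):

```python
import math

CLUSTER_SIZE = 4   # hypothetical nodes per processor-memory cluster
N = 64             # hypothetical total nodes; full MIN is log2(N) stages

def hops(src, dst):
    """Stage count for a message from node src to node dst."""
    if src // CLUSTER_SIZE == dst // CLUSTER_SIZE:
        return 1                  # shortcut path inside the cluster
    return int(math.log2(N))      # traverse every stage of the MIN

print(hops(5, 6))    # same cluster: 1 hop
print(hops(5, 60))   # different clusters: 6 hops
```

With mostly-local traffic, the average path length approaches the one-hop shortcut, which is the performance gain the abstract claims.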

Remote Cache Replacement Policy based on Processor Locality (프로세서 지역성에 기반 한 원격 캐시 교체 정책)

  • 한상윤;곽종옥;전주식
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10a
    • /
    • pp.4-6
    • /
    • 2004
  • In this paper, we propose a new remote cache replacement policy to improve the performance of distributed-memory multiprocessor systems augmented with a remote cache. In a multi-level memory hierarchy that maintains multi-level inclusion (MLI), the LRU replacement policy generally lets the LRU information of the upper-level caches diverge from that of the lower-level cache, so a replacement in the lower level can evict a cache line still in use in the upper level and degrade overall system performance. To overcome this drawback of LRU replacement, the proposed remote cache replacement policy exploits the remote-memory access locality of the processors in each node, reducing the miss rate on useful upper-level cache lines and thereby improving multiprocessor system performance. In program-driven simulation, applying the proposed remote cache replacement policy reduced the number of invalidations and cache misses by 5% on average and up to 10% compared with the conventional LRU replacement policy (a sketch of the eviction rule follows this entry).

  • PDF
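
A minimal sketch of the eviction rule the abstract suggests: the remote cache keeps LRU order but prefers victims that no upper-level (processor) cache still holds, so inclusion-driven invalidations of lines in active use are avoided. The data structures and the exact preference rule are illustrative assumptions, not the paper's implementation:

```python
from collections import OrderedDict

class RemoteCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()      # LRU order: least recent first
        self.in_upper = set()           # lines a processor cache still holds

    def access(self, addr, cached_by_processor):
        if addr in self.lines:
            self.lines.move_to_end(addr)         # refresh LRU position
        else:
            if len(self.lines) >= self.capacity:
                self._evict()
            self.lines[addr] = True
        if cached_by_processor:
            self.in_upper.add(addr)

    def _evict(self):
        # Prefer the least-recent line NOT held by an upper-level cache;
        # fall back to plain LRU only if every line is held above.
        for addr in self.lines:
            if addr not in self.in_upper:
                del self.lines[addr]
                return
        victim, _ = self.lines.popitem(last=False)
        self.in_upper.discard(victim)   # inclusion forces invalidation above
```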

Determination of a Grain Size for Reducing Cache Miss Rate of Direct-Mapped Caches (직접 사상 캐쉬의 캐쉬 실패율을 감소시키기 위한 성김도 정책)

  • Jung, In-Bum;Kong, Ki-Sok;Lee, Joon-Won
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.27 no.7
    • /
    • pp.665-674
    • /
    • 2000
  • In data parallel programs with high cache locality, the choice of grain size affects cache performance. Even when the chosen grain sizes provide fair load balance among processors, grain sizes that ignore the underlying caching effect cause address interference between the grains allocated to a processor. This address interference hurts cache locality because it produces cache conflict misses. To address this problem, we propose a best grain size derived from the cache size and the number of processors, based on the characteristics of direct-mapped caches. Because the proposed method does not map grains to the same locations in the cache, cache conflict misses are reduced. Simulation results show that the proposed best grain size substantially improves the performance of the tested data parallel programs by reducing cache misses on direct-mapped caches (see the sketch following this entry).

  • PDF
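
One plausible reading of the rule, sketched below: derive the grain size from the cache size and the processor count so that the grains assigned to one processor map to disjoint index ranges of a direct-mapped cache. The formula and the element size are illustrative assumptions, not the paper's exact derivation:

```python
def best_grain_size(cache_size_bytes, num_processors, elem_bytes=8):
    """Grain size (in elements) so one processor's grains do not collide
    in a direct-mapped cache of the given size."""
    # Splitting the cache footprint evenly across processors keeps each
    # processor's grains in distinct direct-mapped index ranges.
    return cache_size_bytes // (num_processors * elem_bytes)

# Hypothetical system: 64 KB direct-mapped cache, 8 processors.
print(best_grain_size(64 * 1024, 8))   # -> 1024 elements per grain
```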

Efficient Implementation of SVM-Based Speech/Music Classifier by Utilizing Temporal Locality (시간적 근접성 향상을 통한 효율적인 SVM 기반 음성/음악 분류기의 구현 방법)

  • Lim, Chung-Soo;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.2
    • /
    • pp.149-156
    • /
    • 2012
  • Support vector machines (SVMs) are well known for their pattern recognition capability, but proper care should be taken to mitigate their inherent implementation cost, which stems from high computational intensity and memory requirements, especially in embedded systems where only limited resources are available. Since the memory requirement, determined by the dimensionality and the number of support vectors, is generally too large for an embedded-system cache to accommodate, frequent accesses to main memory occur whenever the cache cannot supply the requested data to the processor. These frequent main-memory accesses degrade overall performance and increase energy consumption, because a memory access typically takes longer and consumes more energy than a cache or register access. In this paper, we propose a technique that reduces the number of main-memory accesses by optimizing the data access pattern of the SVM-based classifier so that the temporal locality of the accesses increases, fully utilizing data already loaded into the processor chip. Experiments confirm the improvement of the proposed technique in terms of the number of memory accesses, overall execution time, and energy consumption.
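
The access-pattern idea can be sketched as dimension blocking in the decision-function evaluation: each block of the input frame is reused across all support vectors while it is still cache-resident, instead of streaming every full support vector past cold input data. The linear-kernel form and block size are illustrative assumptions, not the paper's exact transformation:

```python
import numpy as np

def svm_score_blocked(x, support_vectors, alphas, bias, block=64):
    """x: (d,) input frame; support_vectors: (n_sv, d); alphas: (n_sv,)."""
    n_sv, d = support_vectors.shape
    partial = np.zeros(n_sv)
    # Walk the feature dimension in cache-sized blocks: x[j:j+block] stays
    # hot while every support vector consumes it (temporal locality).
    for j in range(0, d, block):
        partial += support_vectors[:, j:j + block] @ x[j:j + block]
    return float(alphas @ partial + bias)

rng = np.random.default_rng(0)
sv = rng.standard_normal((200, 512))          # 200 support vectors, d = 512
score = svm_score_blocked(rng.standard_normal(512), sv,
                          rng.standard_normal(200), bias=0.1)
```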

Shortest-Frame-First Scheduling Algorithm of Threads On Multithreaded Models (다중스레드 모델에서 최단 프레임 우선 스레드 스케줄링 알고리즘)

  • Sim, Woo-Ho;Yoo, Weon-Hee;Yang, Chang-Mo
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.5
    • /
    • pp.575-582
    • /
    • 2000
  • Because the FIFO thread scheduling used in existing multithreaded models does not consider locality in programs, it can degrade execution performance through frequent context-switching overhead and delayed execution of relatively short frames. Quantum-unit scheduling improves performance slightly, but it still suffers from reduced processor utilization and longer delays because of its heavy dependence on the priority of the quantum units. In this paper, we propose a shortest-frame-first (SFF) thread scheduling algorithm. Our algorithm selects and schedules the frame expected to take the shortest execution time, using thread size and synchronization information analyzed at compile time; the relative execution time of each frame can thus be estimated at compile time. With SFF thread scheduling on multithreaded models, we can expect faster execution, better processor utilization, higher throughput, and shorter waiting times than with FIFO scheduling (a scheduler sketch follows this entry).

  • PDF
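
A minimal sketch of an SFF ready queue: frames are ordered by their compile-time execution-time estimates, and the expected-shortest frame is dispatched first. The estimate is assumed to come from the compile-time analysis the paper describes; the queue itself is an illustrative structure:

```python
import heapq

class SFFScheduler:
    def __init__(self):
        self._ready = []    # min-heap keyed on estimated execution time
        self._tie = 0       # preserves FIFO order among equal estimates

    def enqueue(self, frame_id, est_time):
        heapq.heappush(self._ready, (est_time, self._tie, frame_id))
        self._tie += 1

    def next_frame(self):
        if not self._ready:
            return None
        return heapq.heappop(self._ready)[2]   # shortest estimate wins

sched = SFFScheduler()
sched.enqueue("frame_a", est_time=120)
sched.enqueue("frame_b", est_time=40)
sched.enqueue("frame_c", est_time=80)
assert sched.next_frame() == "frame_b"   # the shortest frame runs first
```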

Energy-aware Instruction Cache Design using Partitioning (분할 기법을 이용한 저전력 명령어 캐쉬 설계)

  • Kim, Jong-Myon;Jung, Jae-Wook;Kim, Cheol-Hong
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.241-251
    • /
    • 2007
  • Energy consumption in the instruction cache accounts for a significant portion of total processor energy consumption, so reducing it is important when designing embedded processors. This paper proposes a method for reducing dynamic energy consumption in the instruction cache by partitioning it into smaller (less energy-consuming) sub-caches. When a request arrives at the proposed cache, only one sub-cache is accessed, selected by exploiting the locality of applications; the other sub-caches are not accessed, which reduces dynamic energy. In addition, the proposed cache reduces dynamic energy consumption by eliminating the energy consumed in tag matching. We evaluated the energy efficiency by running the cycle-accurate simulator SimpleScalar with power parameters obtained from CACTI. Simulation results show that the proposed cache reduces dynamic energy consumption by 37%~60% compared to a traditional direct-mapped instruction cache.
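
The energy argument can be illustrated with a toy access counter: each fetch activates only the sub-cache its address maps to, so per-access dynamic energy is that of a small sub-cache rather than the whole cache. The geometry and relative energy numbers below are hypothetical, not the CACTI values used in the paper:

```python
NUM_SUBCACHES = 4
LINES_PER_SUB = 128
E_SUB, E_FULL = 1.0, 3.2   # hypothetical relative dynamic energy per access

def fetch(addr, stats):
    line = (addr >> 5) % (NUM_SUBCACHES * LINES_PER_SUB)   # 32-byte lines
    sub = line // LINES_PER_SUB    # only this sub-cache is powered up
    stats["energy"] += E_SUB       # a monolithic cache would pay E_FULL
    return sub

stats = {"energy": 0.0}
for pc in range(0, 4096, 4):       # sequential fetch stream stays local
    fetch(pc, stats)
print(stats["energy"], (4096 // 4) * E_FULL)   # 1024.0 vs. 3276.8
```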