• Title/Summary/Keyword: 온칩 메모리 (on-chip memory)

Search results: 23

Low-Power Data Cache Architecture and Microarchitecture-level Management Policy for Multimedia Application (멀티미디어 응용을 위한 저전력 데이터 캐쉬 구조 및 마이크로 아키텍쳐 수준 관리기법)

  • Yang Hoon-Mo;Kim Cheong-Gil;Park Gi-Ho;Kim Shin-Dug
    • The KIPS Transactions:PartA
    • /
    • v.13A no.3 s.100
    • /
    • pp.191-198
    • /
    • 2006
  • Today's portable, battery-operated consumer electronics tend to integrate more and more multimedia processing capabilities. In these devices, multimedia systems-on-chip handle specific algorithms that require intensive processing and consume significant power. As a result, the power efficiency of multimedia processing devices becomes increasingly important. In this paper, we propose a reconfigurable data cache architecture, in which data allocation is constrained with software support, and evaluate its performance and power efficiency. Compared with conventional cache architectures, power consumption is reduced significantly, while the miss rate of the proposed architecture remains very close to that of conventional caches. The reduction in power consumption for the reconfigurable data cache architecture is 33.2%, 53.3%, and 70.4% when compared with direct-mapped, 2-way, and 4-way caches, respectively.
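
The entry above steers data placement in software. As a rough illustration of why restricting where a region's data may live saves dynamic energy, the toy Python model below (my own reading of the mechanism; the geometry, the `way_mask` hint, and the probe-count energy proxy are all hypothetical, not the authors' design) probes only the ways a software-supplied mask allows:

```python
# Toy model: a set-associative cache where a software-supplied region hint
# restricts each access to a subset of ways. All geometry is hypothetical.
WAYS, SETS, LINE = 4, 64, 32

cache = [[None] * WAYS for _ in range(SETS)]   # tag store
probes = {"restricted": 0, "conventional": 0}  # probe count as an energy proxy

def access(addr, way_mask):
    """way_mask marks the ways this software-classified region may occupy."""
    s = (addr // LINE) % SETS
    tag = addr // (LINE * SETS)
    allowed = [w for w in range(WAYS) if way_mask & (1 << w)]
    probes["restricted"] += len(allowed)   # only the allowed ways are probed
    probes["conventional"] += WAYS         # a normal cache probes all of them
    for w in allowed:
        if cache[s][w] == tag:
            return True                    # hit
    cache[s][allowed[0]] = tag             # naive fill into the first allowed way
    return False

# e.g. streaming multimedia data confined to a single way
for a in range(0, 4096, 4):
    access(a, 0b0001)
print(probes)                              # restricted is 1/4 of conventional here
```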

Low-Power 2-level Cache Architectures for Embedded System (내장형 시스템을 위한 저전력 2-레벨 캐쉬 메모리의 설계)

  • Jong-Min Lee;Soon-Tae Kim;Kyung-Ah Kim;Su-Ho Park;Yong-Ho Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2008.11a
    • /
    • pp.806-809
    • /
    • 2008
  • An on-chip cache plays the important role of reducing accesses to external memory. This work proposes a two-level cache memory architecture designed for embedded systems. The level-1 (L1) cache adopts a small size, direct mapping, and a write-through policy. In contrast, the level-2 (L2) cache adopts a conventional cache size, set associativity, and a write-back policy. As a result, the L1 cache can be accessed within one cycle, and the L2 cache is effective in lowering the global miss rate. To reduce the energy consumed by the frequent L2 cache accesses caused by the write-through policy between the two cache levels, this work proposes a one-way access technique. The proposed two-level cache architecture shows, on average, a 26% performance improvement and gains of 43% in energy consumption and 77% in the energy-delay product.
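
One plausible reading of the one-way access technique, sketched below as an assumption rather than the authors' design: because the write-through L1 forwards every store to L2, an L1 line can record which L2 way holds its block at fill time, so the store probes exactly one L2 way. The `L1Line` class and probe counters are hypothetical.

```python
# Toy model of "one-way access" under the assumption described above.
L2_WAYS = 4
l2_probes = {"one_way": 0, "full": 0}

class L1Line:
    def __init__(self, tag, l2_way):
        self.tag = tag
        self.l2_way = l2_way             # remembered from the L2 hit at fill

def write_through(line, l2_set):
    l2_probes["one_way"] += 1            # the single remembered way is accessed
    l2_probes["full"] += L2_WAYS         # a naive write-through probes every way
    l2_set[line.l2_way] = line.tag       # update the data in that one way

l2_set = [None] * L2_WAYS
for store in range(1000):
    write_through(L1Line(tag=store, l2_way=store % L2_WAYS), l2_set)
print(l2_probes)                         # {'one_way': 1000, 'full': 4000}
```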

Design of Crossbar Switch On-chip Bus for Performance Improvement of SoC (SoC의 성능 향상을 위한 크로스바 스위치 온칩 버스 설계)

  • Heo, Jung-Burn;Ryoo, Kwang-Ki
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.3
    • /
    • pp.684-690
    • /
    • 2010
  • Most existing SoCs have a shared-bus architecture, which is prone to bottlenecks. The more IPs an SoC contains, the lower its performance, so overall performance is affected more by on-chip communication than by CPU speed. In this paper, we propose a crossbar switch bus architecture to reduce bottlenecks and improve performance. The crossbar switch bus supports up to 8 masters and 16 slaves and enables parallel communication through a multiple-channel bus architecture. Each slave has an arbiter that stores priority information about the masters, which prevents a single master from monopolizing a slave and supports efficient communication. We compared the WISHBONE on-chip shared-bus architecture with the crossbar switch bus architecture on an SoC platform consisting of an OpenRISC processor, a VGA/LCD controller, an AC97 controller, a debug interface, and a memory interface; performance improved by 26.58% over the shared bus.
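
As a small illustration of per-slave arbitration on a crossbar (my own toy for a single bus cycle, not the authors' Verilog; the `PRIORITY` table and `arbitrate` function are hypothetical), each slave picks the highest-priority requesting master, and disjoint master-slave pairs proceed in parallel:

```python
# One arbitration cycle on a toy crossbar: each slave owns an arbiter with
# its own master-priority list (identical here for brevity; each could differ).
PRIORITY = {s: [0, 1, 2, 3] for s in range(4)}

def arbitrate(requests):
    """requests: {master: wanted slave} -> {slave: granted master}."""
    grants = {}
    for slave in set(requests.values()):
        contenders = [m for m, s in requests.items() if s == slave]
        grants[slave] = min(contenders, key=PRIORITY[slave].index)
    return grants

print(arbitrate({0: 2, 1: 2, 2: 0, 3: 1}))
# masters 0 and 1 contend for slave 2 (0 wins on priority), while
# slaves 0 and 1 are granted to masters 2 and 3 in the same cycle
```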

Count-Min HyperLogLog : Cardinality Estimation Algorithm for Big Network Data (Count-Min HyperLogLog : 네트워크 빅데이터를 위한 카디널리티 추정 알고리즘)

  • Sinjung Kang;DaeHun Nyang
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.3
    • /
    • pp.427-435
    • /
    • 2023
  • Cardinality estimation is used in a wide range of applications and is a fundamental problem in processing large volumes of data. As the Internet moves into the era of big data, functions performing cardinality estimation must operate using only on-chip cache memory. Various methods have been proposed to use this memory efficiently. However, because of noise between estimators (the per-flow data structures), these algorithms lose accuracy. In this paper, we focus on minimizing that noise. We propose a design with multiple data structures, in which each estimator produces as many estimates as there are structures and the minimum value, the one with the least noise, is chosen. Experiments show that the proposed algorithm outperforms the best existing work under the same tight memory budget, such as 1 bit per flow.
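
The core idea, one HyperLogLog-style register row per array and a Count-Min-style minimum across arrays, can be sketched as below. This illustrates the principle, not the paper's exact layout; the sizes `D`, `W`, `M` and the hash construction are made up. Sharing a row with other flows can only inflate an estimate, so the minimum over the arrays is the least-noisy one.

```python
import hashlib

D, W, M = 4, 256, 64          # d arrays, w register rows each, m registers per row

sketch = [[[0] * M for _ in range(W)] for _ in range(D)]

def h(*xs):
    return int.from_bytes(hashlib.sha1(repr(xs).encode()).digest()[:8], "big")

def add(flow, element):
    v = h("val", flow, element)
    j = v % M                                  # low bits pick the register
    rank = 58 - (v >> 6).bit_length() + 1      # leading-zero rank of the rest
    for i in range(D):
        row = sketch[i][h(i, flow) % W]        # this flow's row in array i
        row[j] = max(row[j], rank)

def estimate(flow):
    raw = lambda row: 0.709 * M * M / sum(2.0 ** -r for r in row)  # raw HLL estimate
    return min(raw(sketch[i][h(i, flow) % W]) for i in range(D))   # Count-Min step

for e in range(5000):
    add("flow-A", e)
print(round(estimate("flow-A")))               # ~5000, within HLL error
```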

Design and Performance Analysis of High Performance Processor-Memory Integrated Architectures (고성능 프로세서-메모리 혼합 구조의 설계 및 성능 분석)

  • Kim, Young-Sik;Kim, Shin-Dug;Han, Tack-Don
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.10
    • /
    • pp.2686-2703
    • /
    • 1998
  • The widening performance gap between processor and memory has caused the emergence of a promising architecture: processor-memory (P-M) integration. In this paper, various design issues for P-M integration are studied. First, an analytical model of the DRAM access time is constructed, considering both the bank conflict ratio and the DRAM page hit ratio. Then, using the proposed model, the points of both performance improvement and performance bottleneck are identified when designing on-chip DRAM architectures. This paper proposes a new architecture, called the delayed precharge bank architecture, to improve memory system performance by increasing the DRAM page hit ratio. An efficient bank interleaving mechanism is also adapted to the proposed architecture. Execution-driven simulation verifies that this architecture performs better than the hierarchical multi-bank architecture as well as the conventional bank architecture. Eight SPEC95 benchmarks are used for simulation, varying the cache architecture parameters, the number of DRAM banks, and the delayed-time quantum.
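
A toy open-page model (made-up timings and address mapping, not the paper's analytical model) shows why delaying the precharge raises the page hit ratio: while a row is left open, a same-row access skips both the precharge and the activate.

```python
T_PRE, T_ACT, T_CAS = 3, 3, 2          # hypothetical cycle counts
BANKS = 8

def run(addresses, delayed_precharge):
    open_row = [None] * BANKS
    cycles = 0
    for a in addresses:
        bank = (a >> 10) % BANKS       # 1 KiB column span per bank (toy mapping)
        row = a >> 13                  # higher bits select the row
        if not delayed_precharge:
            cycles += T_ACT + T_CAS    # close-page: the row is always precharged
        elif open_row[bank] == row:
            cycles += T_CAS            # page hit: column access only
        else:
            penalty = T_PRE if open_row[bank] is not None else 0
            cycles += penalty + T_ACT + T_CAS   # must close the old row first
            open_row[bank] = row
    return cycles

stream = [i * 4 for i in range(10000)]          # sequential, page-friendly
print(run(stream, False), run(stream, True))    # open rows cut total cycles
```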

SEC-DED-DAEC codes without mis-correction for protecting on-chip memories (오정정 없이 온칩 메모리 보호를 위한 SEC-DED-DAEC 부호)

  • Jun, Hoyoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.10
    • /
    • pp.1559-1562
    • /
    • 2022
  • As electronic device technology scales down into the deep submicron regime to achieve high-density, low-power, high-performance integrated circuits, multiple-bit upsets caused by soft errors have become a major threat to on-chip memory systems. To address the soft error problem, single error correction, double error detection, and double adjacent error correction (SEC-DED-DAEC) codes have recently been proposed. However, these codes do not resolve the mis-correction problem. We propose a SEC-DED-DAEC code without mis-correction. The decoder for the proposed code is implemented in hardware and verified. The results show that there is no mis-correction in the proposed code and that the decoder can be employed in on-chip memory systems.
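
The decoding principle can be sketched as follows. The toy H-matrix is mine, not the paper's construction: its columns are the single-error syndromes, a double-adjacent error has syndrome H[i] ^ H[i+1], and mis-correction is precisely a collision between two different correctable syndromes. Odd-weight (Hsiao-style) columns keep the even-weight pair syndromes from ever matching a single-bit one; the adjacent pairs here are also all distinct.

```python
H = [0b0111, 0b0001, 0b1011, 0b0010]     # hypothetical 4-column example

def syndrome(bits):
    s = 0
    for i, b in enumerate(bits):
        if b:
            s ^= H[i]                    # received word times H-transpose over GF(2)
    return s

def decode(bits):
    s = syndrome(bits)
    if s == 0:
        return bits, "no error"
    singles = {h: (i,) for i, h in enumerate(H)}
    pairs = {H[i] ^ H[i + 1]: (i, i + 1) for i in range(len(H) - 1)}
    flip = singles.get(s) or pairs.get(s)
    if flip is None:
        return bits, "detected, uncorrectable"
    out = list(bits)
    for i in flip:
        out[i] ^= 1                      # correct the located bit(s)
    return out, f"corrected bit(s) {flip}"

# inject a double-adjacent error into the all-zero codeword
print(decode([0, 1, 1, 0]))              # ([0, 0, 0, 0], 'corrected bit(s) (1, 2)')
```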

Error correction codes to manage multiple bit upset in on-chip memories (온칩 메모리 내 다중 비트 이상에 대처하기 위한 오류 정정 부호)

  • Jun, Hoyoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.11
    • /
    • pp.1747-1750
    • /
    • 2022
  • As semiconductor processes shrink into the deep submicron regime to achieve high-density, low-power, high-performance integrated circuits, multiple-bit upset (MBU) caused by soft errors is one of the major challenges for on-chip memory systems. To address MBU, single error correction, double error detection, and double adjacent error correction (SEC-DED-DAEC) codes have recently been proposed. However, these codes do not resolve mis-correction. We propose a SEC-DED-DAEC-TAED (triple adjacent error detection) code without mis-correction. The H-matrix generated by the proposed heuristic algorithm to realize the code is implemented in hardware and verified. The results show that there is no mis-correction in the proposed code and that the 2-stage pipelined decoder can be employed in on-chip memory systems.
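
A toy greedy search in the spirit of such a heuristic (my own sketch under stated assumptions, not the paper's algorithm) accepts a candidate column only if all single and double-adjacent syndromes stay unique and every triple-adjacent syndrome stays nonzero and outside the correctable set, which is exactly what rules out mis-correction:

```python
R = 6                                    # syndrome width, hypothetical

def valid(cols):
    singles = list(cols)
    pairs = [cols[i] ^ cols[i + 1] for i in range(len(cols) - 1)]
    triples = [cols[i] ^ cols[i + 1] ^ cols[i + 2] for i in range(len(cols) - 2)]
    correctable = singles + pairs
    return (len(set(correctable)) == len(correctable)   # unique correctable syndromes
            and 0 not in correctable
            and all(t != 0 and t not in correctable for t in triples))

cols = []
for cand in range(1, 1 << R):
    if bin(cand).count("1") % 2 == 1 and valid(cols + [cand]):
        cols.append(cand)                # odd-weight candidates, greedy accept
print(len(cols), [format(c, "06b") for c in cols])
```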

An Area Efficient Low Power Data Cache for Multimedia Embedded Systems (멀티미디어 내장형 시스템을 위한 저전력 데이터 캐쉬 설계)

  • Kim Cheong-Ghil;Kim Shin-Dug
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.101-110
    • /
    • 2006
  • One of the most effective ways to improve cache performance is to exploit both the temporal and spatial locality exhibited by a program's execution characteristics. This paper proposes a data cache with a small footprint that achieves low power but high performance on multimedia applications. The basic architecture is a split cache consisting of a direct-mapped cache with a small block size and a fully-associative buffer with a large block size. To overcome the disadvantage of the small cache space, two mechanisms are added that exploit the operational behavior of multimedia applications: adaptive multi-block prefetching, which initiates fetches of various sizes, and efficient block filtering, which removes rarely reused data. Simulations on MediaBench show that the proposed 5KB cache provides equivalent performance and reduces energy consumption by up to 40% compared with a 16KB 4-way set-associative cache.
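
The split organization can be illustrated with the toy below (stand-in geometry and a hypothetical `streaming` hint, not the paper's 5KB design or its prefetch and filter heuristics): a small direct-mapped side catches temporal reuse, while a tiny fully-associative buffer with large lines amortizes misses on spatial streams.

```python
from collections import OrderedDict

SMALL_LINE, SETS = 8, 64                 # direct-mapped side: small blocks
BIG_LINE, BUF_ENTRIES = 64, 8            # fully-associative side: large blocks
dm = [None] * SETS                       # direct-mapped tag store
buf = OrderedDict()                      # big-line tags kept in LRU order

def access(addr, streaming):
    idx = (addr // SMALL_LINE) % SETS
    if dm[idx] == addr // SMALL_LINE:
        return True                      # hit in the direct-mapped side
    if addr // BIG_LINE in buf:
        buf.move_to_end(addr // BIG_LINE)
        return True                      # hit in the spatial buffer
    if streaming:                        # spatial data -> large-block buffer
        buf[addr // BIG_LINE] = True
        if len(buf) > BUF_ENTRIES:
            buf.popitem(last=False)      # evict the LRU big line
    else:                                # temporal data -> direct-mapped side
        dm[idx] = addr // SMALL_LINE
    return False

hits = sum(access(a, streaming=True) for a in range(0, 8192, 4))
print(f"streaming hit rate: {hits / 2048:.2f}")   # large lines amortize misses
```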

Analysis of Tensor Processing Unit and Simulation Using Python (텐서 처리부의 분석 및 파이썬을 이용한 모의실행)

  • Lee, Jongbok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.3
    • /
    • pp.165-171
    • /
    • 2019
  • The study of computer architecture has shown that major improvements in cost-energy-performance stem from domain-specific hardware development. This paper analyzes the tensor processing unit (TPU), an ASIC that accelerates the inference of artificial neural networks (NNs). The core of the TPU is a MAC matrix multiply unit capable of high-speed operation, together with software-managed on-chip memory. The execution model of the TPU can meet the response-time requirements of neural network applications better than existing CPU and GPU execution models, with a small area and low power consumption, even though it has many MACs and a large memory. Using the TPU with the TensorFlow benchmark framework achieves higher performance and better power efficiency than a CPU or GPU. In this paper, we analyze the TPU, simulate the Python-modeled OpenTPU, and synthesize the matrix multiply unit, which is the key hardware.
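
The weight-stationary schedule of such a MAC array can be sketched as a wavefront, as below (a toy model, not OpenTPU): weights stay pinned in a k x m grid of PEs, activations are skewed so that row r reaches PE (i, j) at cycle t = r + i + j, and each PE performs one MAC per cycle while partial sums flow through the array.

```python
import numpy as np

def systolic_matmul(A, W):
    n, k = A.shape                       # n activation rows fed over time
    _, m = W.shape                       # k x m grid of weight-holding PEs
    out = np.zeros((n, m), dtype=A.dtype)
    for t in range(n + k + m - 2):       # cycles until the pipeline drains
        for i in range(k):
            for j in range(m):
                r = t - i - j            # activation row at PE (i, j) this cycle
                if 0 <= r < n:
                    out[r, j] += A[r, i] * W[i, j]   # one MAC per PE per cycle
    return out

A = np.arange(6, dtype=np.int32).reshape(2, 3)
W = np.ones((3, 4), dtype=np.int32)
assert (systolic_matmul(A, W) == A @ W).all()        # matches a plain matmul
```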

Efficient On-Chip Idle Cache Utilization Technique in Chip Multi-Processor Architecture (칩 멀티 프로세서 구조에서 온칩 유휴 캐시의 효과적인 활용 방안)

  • Kwak, Jong Wook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.10
    • /
    • pp.13-21
    • /
    • 2013
  • Recently, although the number of cores on a chip multiprocessor keeps increasing, multi-programming and multi-threaded programming techniques that utilize all of the cores are still insufficient. Therefore, some idle cores inevitably exist, which wastes the caches dedicated to those idle cores, the so-called idle caches. In this research, we propose a methodology to exploit idle caches effectively as victim caches for the on-chip memory resources. In simulation results, we achieved 19.4% and 10.2% IPC improvement in 4-core and 16-core configurations, respectively, compared to the previous technique.
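
A minimal sketch of the victim-cache idea (toy sizes, not the paper's CMP model): lines evicted from a busy core's cache are parked in an idle core's cache and recalled on a later miss, turning a would-be off-chip access into an on-chip one.

```python
from collections import OrderedDict

own, OWN_N = OrderedDict(), 4        # busy core's private cache (tiny on purpose)
victim, VICTIM_N = OrderedDict(), 8  # idle core's cache, repurposed as a victim cache

def access(tag):
    if tag in own:
        own.move_to_end(tag)
        return "hit"
    src = "victim hit" if victim.pop(tag, None) else "miss (off-chip)"
    own[tag] = True
    if len(own) > OWN_N:
        ev, _ = own.popitem(last=False)      # evict the busy core's LRU line
        victim[ev] = True                    # ...and park it on-chip
        if len(victim) > VICTIM_N:
            victim.popitem(last=False)
    return src

for t in [0, 1, 2, 3, 4, 0]:                 # line 0 is evicted, then recalled
    print(t, access(t))                      # final access: "victim hit"
```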