• Title/Summary/Keyword: memory access rate

Search Results: 105

The Influence of $O_2$ Gas on the Etch Characteristics of FePt Thin Films in $CH_4/O_2/Ar$ gas

  • Lee, Il-Hoon; Lee, Tea-Young; Chung, Chee-Won
    • Proceedings of the Korean Vacuum Society Conference / 2012.02a / pp.408-408 / 2012
  • It is well known that magnetic random access memory (MRAM) is a nonvolatile memory device using ferromagnetic materials. MRAM has merits such as fast access time, unlimited read/write endurance, and nonvolatility. Although DRAM has many advantages including high storage density, fast access time, and low power consumption, its data are lost when the power is turned off. Owing to these attractive advantages, MRAM is being spotlighted as an alternative device for the future. MRAM consists of a magnetic tunnel junction (MTJ) stack and complementary metal-oxide semiconductor (CMOS) circuitry. MTJ stacks are composed of various magnetic materials, and FePt thin films are used as the pinned layer of the MTJ stack. To date, inductively coupled plasma reactive ion etching (ICPRIE) of MTJ stacks has shown better results in terms of etch rate and etch profile than other methods such as ion milling, chemically assisted ion etching (CAIE), and reactive ion etching (RIE). In order to improve etch profiles without redeposition, a better etching process for MTJ stacks needs to be developed using different etch gases and etch parameters. In this research, the influence of $O_2$ gas on the etching characteristics of FePt thin films was investigated. FePt thin films were etched using ICPRIE in $CH_4/O_2/Ar$ gas mixtures. The etch rate and etch selectivity were investigated at various $O_2$ concentrations, and the etch profiles were studied while varying etch parameters such as coil rf power, dc-bias voltage, and gas pressure. TiN was employed as a hard mask, and field emission scanning electron microscopy (FESEM) was used to observe the etch profiles.


Design of a Low Memory Bandwidth Inter Predictor Using Implicit Weighted Prediction Technique (묵시적 가중 예측기법을 이용한 저 메모리 대역폭 인터 예측기 설계)

  • Kim, Jinyoung; Ryoo, Kwangki
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.12 / pp.2725-2730 / 2012
  • In this paper, to improve H.264/AVC hardware performance, we propose an inter predictor hardware design using a multi-reference-frame selector and an implicit weighted predictor. Previous reference frames are reused to achieve low memory bandwidth. The size of the reference memory in the predictor was reduced by about 46% and the external memory access rate was reduced by about 24% compared with the reference software JM16.0. We designed the proposed system in Verilog-HDL and synthesized the inter predictor circuit using the Magnachip 0.18 um CMOS standard cell library. The synthesis result shows a gate count of about 2,061k, and the design can run at 91 MHz.
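Implicit weighted prediction, as referenced in this abstract, derives the two reference-picture weights from picture order count (POC) distances instead of coding them in the bitstream, which is what lets the predictor reuse already-fetched reference data without extra signaling. The following C sketch shows the weight derivation and bi-prediction in simplified form (several corner cases of the H.264 specification are omitted); it illustrates the general technique only and is not the paper's Verilog-HDL design.

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified H.264 implicit weighted bi-prediction (B-slice).
 * Weights are derived from POC distances, so no weights are coded
 * in the bitstream.  Corner-case clipping is simplified for brevity. */
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

static void implicit_weights(int poc_cur, int poc_ref0, int poc_ref1,
                             int *w0, int *w1)
{
    int tb = clip3(-128, 127, poc_cur - poc_ref0);
    int td = clip3(-128, 127, poc_ref1 - poc_ref0);
    if (td == 0) { *w0 = *w1 = 32; return; }           /* equal-distance fallback */
    int tx  = (16384 + (abs(td) >> 1)) / td;
    int dsf = clip3(-1024, 1023, (tb * tx + 32) >> 6);  /* distance scale factor  */
    *w1 = dsf >> 2;
    *w0 = 64 - *w1;
    if (*w1 < -64 || *w1 > 128) { *w0 = *w1 = 32; }     /* fall back to average   */
}

/* Bi-directional prediction of one sample from the two reference samples. */
static int predict(int p0, int p1, int w0, int w1)
{
    return (p0 * w0 + p1 * w1 + 32) >> 6;
}

int main(void)
{
    int w0, w1;
    implicit_weights(4 /* current POC */, 0 /* ref0 POC */, 8 /* ref1 POC */, &w0, &w1);
    printf("w0=%d w1=%d pred=%d\n", w0, w1, predict(100, 120, w0, w1));
    return 0;
}
```

Because the weights follow from POCs already known to the decoder, no per-block weight tables have to be stored or fetched, which is one reason the technique pairs well with a reduced reference memory.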

Dual-Port SDRAM Optimization with Semaphore Authority Management Controller

  • Kim, Jae-Hwan; Chong, Jong-Wha
    • ETRI Journal / v.32 no.1 / pp.84-92 / 2010
  • This paper proposes the semaphore authority management (SAM) controller to optimize dual-port SDRAM (DPSDRAM) in mobile multimedia systems. Recently, DPSDRAM with a shared bank enabling high-speed data exchange between two processors has been developed for mobile multimedia systems based on dual processors. However, the latency of DPSDRAM caused by the semaphore that prevents access contention at the shared bank slows down the data transfer rate and reduces the memory bandwidth. The SAM methodology increases the data transfer rate by minimizing the semaphore latency. The SAM removes the latency of reading the semaphore register of the DPSDRAM and reduces the latency of waiting for the authority over the shared bank to be changed. It also reduces the number of authority requests and the number of authority changes. Experimental results using a 1 Gb DPSDRAM (OneDRAM) with SAM controllers at 66 MHz show a 1.6-fold improvement in the data transfer rate between the two processors compared with the traditional controller. In addition, the SAM shows a bandwidth enhancement of up to 38% for port A and 31% for port B compared with the traditional controller.
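The abstract describes the SAM idea only at a high level, so the following C sketch is a toy illustration, under assumed behavior, of why caching the authority state and batching shared-bank accesses reduces semaphore reads and authority changes. All names and numbers in it are hypothetical; it does not reproduce the SAM controller or the OneDRAM interface.

```c
#include <stdio.h>

/* Toy model of shared-bank authority handling between two ports (A and B).
 * It only illustrates the general idea that caching the authority state and
 * batching accesses makes semaphore reads and authority hand-overs rarer,
 * which is the effect the SAM controller targets. */

typedef enum { PORT_A = 0, PORT_B = 1 } port_t;

static port_t  authority = PORT_A;    /* models the DPSDRAM semaphore register */
static unsigned semaphore_reads  = 0; /* how often the register was polled     */
static unsigned authority_changes = 0;

/* Naive controller: polls the semaphore register before every single access. */
static void naive_access(port_t p)
{
    semaphore_reads++;
    if (authority != p) { authority = p; authority_changes++; }
}

/* Batching controller: keeps a local copy of the authority state and requests
 * a hand-over once per burst instead of once per word. */
static void batched_access(port_t p, unsigned burst_len)
{
    static int local_owner = -1;          /* cached authority, -1 = unknown   */
    if (local_owner != (int)p) {
        semaphore_reads++;                /* one poll per burst, not per word */
        if (authority != p) { authority = p; authority_changes++; }
        local_owner = (int)p;
    }
    (void)burst_len;                      /* the data transfer is not modelled */
}

int main(void)
{
    for (int i = 0; i < 1000; i++) naive_access(i & 1 ? PORT_B : PORT_A);
    printf("naive:   %u reads, %u changes\n", semaphore_reads, authority_changes);

    semaphore_reads = authority_changes = 0;
    for (int i = 0; i < 10; i++)          /* same traffic grouped into bursts */
        batched_access(i & 1 ? PORT_B : PORT_A, 100);
    printf("batched: %u reads, %u changes\n", semaphore_reads, authority_changes);
    return 0;
}
```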

Patterning issues for the fabrication of sub-micron memory capacitors' electrodes (초미세 메모리 커패시터의 전극형성을 위한 식각 기술)

  • 김현우
    • Proceedings of the Materials Research Society of Korea Conference / 2003.11a / pp.160-160 / 2003
  • This paper describes some of the key issues associated with the patterning of metal electrodes of sub-micron dynamic random access memory (DRAM) devices, especially at a critical dimension (CD) of 0.15 $\mu m$. Due to reactive ion etching (RIE) lag, the Pt etch rate decreased drastically below a CD of 0.20 $\mu m$, and thus the storage node electrode with a CD of 0.15 $\mu m$ could not be fabricated using Pt electrodes. Accordingly, we have proposed novel techniques to surmount these difficulties. The Ru electrode for the stack-type structure is introduced, and alternative schemes based on the introduction of a concave-type structure using Pt or Ru as the electrode material are outlined.


ASIC Design of Frame Sync Algorithm Using Memory for Wireless ATM (무선 ATM망에서 메모리를 이용한 프레임 동기 알고리즘의 ASIC 설계)

  • 황상철; 김종원
    • Proceedings of the IEEK Conference / 1998.06a / pp.82-85 / 1998
  • Because ATM was originally designed for the optical fiber environment with a bit error rate (BER) of $10^{-11}$, it is difficult to maintain ATM cell extraction capability in wireless environments where the BER ranges from $10^{-6}$ to $10^{-3}$. Therefore, an algorithm for ATM cell extraction in the wireless environment must be proposed. In this paper, a frame structure and synchronization algorithm satisfying the above condition are explained, and a new ASIC implementation method for this algorithm is proposed. The known method using shift registers needs so many gates that it is not suitable for ASIC implementation, but in the proposed method a considerable reduction in gate count can be achieved by using random access memory.


High Performance Data Cache Memory Architecture (고성능 데이터 캐시 메모리 구조)

  • Kim, Hong-Sik; Kim, Cheong-Ghil
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.4 / pp.945-951 / 2008
  • In this paper, a new high-performance data cache scheme that improves exploitation of both spatial and temporal locality is proposed. The proposed data cache consists of a hardware prefetch unit and two sub-caches: a direct-mapped (DM) cache with a large block size and a fully associative buffer with a small block size. Spatial locality is exploited by fetching and storing large blocks into the direct-mapped cache, and is enhanced by prefetching a neighboring block when a DM cache hit occurs. Temporal locality is exploited by storing small blocks from the DM cache in the fully associative buffer, according to their activity in the DM cache, when they are replaced. Experimental results on SPEC2000 programs show that the proposed scheme can reduce the average miss ratio by 12.53% to 23.62% and the AMAT by 14.67% to 18.60% compared with previous schemes such as the direct-mapped cache, the 4-way set-associative cache, and the SMI (selective mode intelligent) cache [8].
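To make the lookup flow concrete, the following C sketch models the organization described above with illustrative sizes (the paper's actual block sizes and capacities are not given here): a direct-mapped cache of large blocks is checked first, a small fully associative buffer of small blocks is checked on a DM miss, and a neighboring-block prefetch is issued on a DM hit.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch of the lookup flow of the proposed data cache: a direct-mapped (DM)
 * cache with large blocks plus a small fully associative buffer with small
 * blocks, and a prefetch of the neighbouring block on a DM hit.  All sizes
 * below are illustrative assumptions, not the paper's parameters. */

#define DM_SETS     64      /* direct-mapped cache: 64 large blocks           */
#define DM_BLOCK    64      /* 64-byte large blocks exploit spatial locality  */
#define FA_ENTRIES   8      /* small fully associative buffer, 8 small blocks */
#define FA_BLOCK     8      /* 8-byte small blocks exploit temporal locality  */

typedef struct { bool valid; uint32_t tag; } dm_line_t;
typedef struct { bool valid; uint32_t tag; } fa_line_t;

static dm_line_t dm[DM_SETS];
static fa_line_t fa[FA_ENTRIES];
static unsigned  fa_next;             /* simple FIFO replacement in the buffer */

static void prefetch_neighbour(uint32_t block_addr)
{
    (void)block_addr;  /* issued by the prefetch unit in hardware; omitted here */
}

/* Returns true on a hit in either structure. */
static bool cache_access(uint32_t addr)
{
    uint32_t big_block   = addr / DM_BLOCK;
    uint32_t dm_index    = big_block % DM_SETS;
    uint32_t dm_tag      = big_block / DM_SETS;
    uint32_t small_block = addr / FA_BLOCK;

    if (dm[dm_index].valid && dm[dm_index].tag == dm_tag) {
        prefetch_neighbour(big_block + 1);            /* spatial locality boost */
        return true;
    }
    for (unsigned i = 0; i < FA_ENTRIES; i++)         /* check associative buffer */
        if (fa[i].valid && fa[i].tag == small_block)
            return true;

    /* Miss: in the real scheme the replaced large block would have its active
     * small blocks copied into the buffer; here we simply fill both levels. */
    fa[fa_next] = (fa_line_t){ true, small_block };
    fa_next = (fa_next + 1) % FA_ENTRIES;
    dm[dm_index] = (dm_line_t){ true, dm_tag };
    return false;
}

int main(void)
{
    unsigned hits = 0, n = 0;
    for (uint32_t a = 0; a < 4096; a += 4, n++) hits += cache_access(a);
    for (uint32_t a = 0; a < 4096; a += 4, n++) hits += cache_access(a);
    printf("hit ratio: %.2f\n", (double)hits / n);
    return 0;
}
```

The large DM blocks capture spatial locality cheaply, while the small associative entries retain only the recently active fragments of evicted lines, which is where the temporal-locality gain comes from.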

An Adaptive Garbage Collection Policy for Flash-Memory Storage System in Embedded Systems (실시간 시스템에서의 플래시 메모리 저장 장치를 위한 적응적 가비지 컬렉션 정책)

  • Park, Song-Hwa; Lee, Jung-Hoon; Lee, Won-Oh; Kim, Hee-Earn
    • IEMEK Journal of Embedded Systems and Applications / v.12 no.3 / pp.121-130 / 2017
  • NAND flash memory has the advantages of non-volatility, low power consumption, and fast access time. However, it does not support in-place updates, and the number of erase cycles per block is limited. Moreover, the unit of read/write operations is a page while the unit of erase operations is a block, so the erase operation is slower than the other operations. The proposed garbage collection policy, AGC, focuses not only on reducing garbage collection time to guarantee real-time behavior but also on wear-leveling to extend the flash memory lifetime. To achieve these goals, we define three garbage collection operating modes: Fast Mode, Smart Mode, and Wear-leveling Mode. The proposed policy selects the garbage collection mode depending on the system CPU usage rate. Fast Mode selects the dirtiest block as the victim block to minimize the erase operation time. Smart Mode, in contrast, selects the victim block by considering both the number of invalid pages and the block erase count, to minimize the erase operation time and the deviation of block erase counts. Wear-leveling Mode operates similarly to Smart Mode, and additionally groups and relocates pages with similar update times. We implemented the proposed policy and compared its performance with existing policies. Simulation results show that the proposed policy outperforms the cost-benefit policy with a 55% reduction in operation time and outperforms the greedy policy with an 87% reduction in the deviation of erase counts. Above all, the proposed policy adapts to the CPU usage rate and guarantees the real-time performance of the system.
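The mode selection and victim scoring can be pictured with a short C sketch. The CPU-usage thresholds and the Smart Mode scoring formula below are assumptions for illustration; the abstract states only that the mode depends on the CPU usage rate and that Smart Mode weighs invalid pages against erase counts.

```c
#include <stdio.h>

/* Sketch of CPU-usage-driven mode selection and victim-block scoring.
 * Thresholds and the Smart Mode score are illustrative assumptions,
 * not the AGC policy's actual parameters. */

typedef struct {
    int invalid_pages;   /* reclaimable pages in the block */
    int erase_count;     /* wear level of the block        */
} block_t;

typedef enum { FAST_MODE, SMART_MODE, WEAR_LEVELING_MODE } gc_mode_t;

/* Pick the GC mode from the current CPU usage (assumed thresholds). */
static gc_mode_t select_mode(double cpu_usage)
{
    if (cpu_usage > 0.8) return FAST_MODE;           /* busy: minimize GC time  */
    if (cpu_usage > 0.4) return SMART_MODE;          /* also consider wear      */
    return WEAR_LEVELING_MODE;                       /* idle: focus on leveling */
}

/* Fast Mode: the dirtiest block yields the most space per erase. */
static int score_fast(const block_t *b)  { return b->invalid_pages; }

/* Smart Mode: trade reclaimed space against wear (illustrative weighting). */
static int score_smart(const block_t *b) { return b->invalid_pages - b->erase_count; }

static int pick_victim(const block_t *blocks, int n, gc_mode_t mode)
{
    int best = 0;
    for (int i = 1; i < n; i++) {
        int s_i = (mode == FAST_MODE) ? score_fast(&blocks[i])    : score_smart(&blocks[i]);
        int s_b = (mode == FAST_MODE) ? score_fast(&blocks[best]) : score_smart(&blocks[best]);
        if (s_i > s_b) best = i;
    }
    return best;
}

int main(void)
{
    block_t blocks[] = { {60, 30}, {50, 5}, {10, 1} };
    gc_mode_t mode = select_mode(0.5);               /* moderate load -> Smart  */
    printf("mode=%d victim=%d\n", (int)mode, pick_victim(blocks, 3, mode));
    return 0;
}
```

Under Fast Mode the first block (60 invalid pages) would win, while Smart Mode prefers the second, less-worn block, which is the trade-off the three modes are meant to balance.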

Garbage Collection Method for NAND Flash Memory based on Analysis of Page Ratio (페이지 비율 분석 기반의 NAND 플래시 메모리를 위한 가비지 컬렉션 기법)

  • Lee, Seung-Hwan; Ok, Dong-Seok; Yoon, Chang-Bae; Lee, Tae-Hoon; Chung, Ki-Dong
    • Journal of KIISE:Computing Practices and Letters / v.15 no.9 / pp.617-625 / 2009
  • NAND flash memory is widely used in embedded systems because of its many attractive features, such as small size, light weight, low power consumption, and fast access speed. However, it requires garbage collection, which includes erase operations. The erase operation is very slow, and the number of erase operations allowed for each block is limited. The proposed garbage collection method focuses on minimizing the total number of erase operations, the deviation of erase counts across blocks, and the garbage collection time. NAND flash memory consists of three types of pages: valid, invalid, and free pages. To achieve the above goals, we use the page ratio both to decide when to perform garbage collection and to select the victim block. Additionally, we implement an allocation method and a group management method. Simulation results show that the proposed policy performs better than Greedy or CAT, with up to an 82% reduction in the deviation of erase counts and a 75% reduction in garbage collection time.
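A minimal C sketch of a page-ratio-driven policy is given below. The trigger thresholds and the victim-selection rule are assumed for illustration; the abstract states only that the valid/invalid/free page ratios drive both the triggering decision and the victim choice.

```c
#include <stdio.h>

/* Sketch of using page ratios both to trigger garbage collection and to
 * choose the victim block.  Thresholds and the victim rule are assumptions. */

#define PAGES_PER_BLOCK 64

typedef struct { int valid, invalid, free_pages; } block_t;

/* Trigger GC when too little of the device is free and enough is invalid. */
static int gc_needed(int total_free, int total_invalid, int total_pages)
{
    double free_ratio    = (double)total_free    / total_pages;
    double invalid_ratio = (double)total_invalid / total_pages;
    return free_ratio < 0.10 && invalid_ratio > 0.20;    /* assumed thresholds */
}

/* Victim: the block with the most invalid pages reclaims the most space per
 * erase and has the least valid data to copy out beforehand. */
static int pick_victim(const block_t *b, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (b[i].invalid > b[best].invalid) best = i;
    return best;
}

int main(void)
{
    block_t blocks[] = { {40, 20, 4}, {10, 50, 4}, {60, 2, 2} };
    int n = 3, free_total = 0, invalid_total = 0;
    for (int i = 0; i < n; i++) {
        free_total    += blocks[i].free_pages;
        invalid_total += blocks[i].invalid;
    }
    if (gc_needed(free_total, invalid_total, n * PAGES_PER_BLOCK))
        printf("victim block: %d\n", pick_victim(blocks, n));
    return 0;
}
```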

Transient Overloads Control Mechanism for Virtual Memory System (가상 메모리 시스템의 일시적인 과부하 완화 기법)

  • Go, Young-Woong; Lee, Jae-Yong; Hong, Cheol-Ho; Yu, Hyukc
    • The KIPS Transactions:PartA / v.8A no.4 / pp.319-330 / 2001
  • In a virtual memory system, when a process attempts to access a page that is not resident in memory, the system generates and handles a page fault, which causes an unpredictable delay. A virtual memory system is therefore not appropriate for real-time systems, because it can increase the deadline miss ratio of real-time tasks. In multimedia systems, a virtual memory system may degrade the QoS (quality of service) of multimedia applications. Furthermore, in a general-purpose operating system, whenever a new task is dynamically loaded, the virtual memory system suffers from extensive page faults that cause a transient overload state. In this paper, we present an efficient overload control mechanism called RBPFH (Rate-Based Page Fault Handling). A significant feature of the RBPFH algorithm is page fault dispersion, which keeps the page fault rate from exceeding an available bound by monitoring current system resources. Furthermore, whenever the amount of available system resources changes, the RBPFH algorithm dynamically adjusts the page fault handling rate. The RBPFH algorithm was implemented in the Linux operating system and its performance measured. The results demonstrate RBPFH's superior performance in supporting multimedia applications. Experimental results show that RBPFH achieves a 10%∼20% reduction in the deadline miss ratio and a 50%∼60% reduction in average delay.
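Rate-based page fault dispersion can be illustrated with a token-bucket-style limiter: faults are serviced only while tokens remain in the current interval, and the refill rate is re-derived from the available system resources. The C sketch below uses assumed numbers and a made-up refill rule; it is not the RBPFH implementation described in the paper.

```c
#include <stdio.h>

/* Sketch of rate-based page fault dispersion.  A token bucket refilled in
 * proportion to the currently available system resources bounds how many
 * page faults are serviced per interval; faults beyond the bound are
 * deferred to a later interval.  Refill rule and numbers are assumptions. */

typedef struct {
    double tokens;        /* page faults that may still be serviced now   */
    double rate;          /* faults per interval, adjusted to system load */
} fault_limiter_t;

/* Re-derive the admissible fault handling rate from free resources. */
static void adjust_rate(fault_limiter_t *l, double free_cpu, double free_mem)
{
    double headroom = free_cpu < free_mem ? free_cpu : free_mem;  /* 0..1 */
    l->rate = 10.0 + 90.0 * headroom;     /* 10..100 faults per interval  */
}

/* Called once per interval tick. */
static void refill(fault_limiter_t *l)
{
    l->tokens = l->rate;
}

/* Returns 1 if the fault may be handled now, 0 if it must be deferred. */
static int admit_fault(fault_limiter_t *l)
{
    if (l->tokens >= 1.0) { l->tokens -= 1.0; return 1; }
    return 0;
}

int main(void)
{
    fault_limiter_t l = {0};
    adjust_rate(&l, 0.30, 0.50);    /* 30% CPU and 50% memory headroom      */
    refill(&l);
    int handled = 0, deferred = 0;
    for (int i = 0; i < 200; i++)   /* a burst of 200 faults in one interval */
        admit_fault(&l) ? handled++ : deferred++;
    printf("handled=%d deferred=%d (rate=%.0f)\n", handled, deferred, l.rate);
    return 0;
}
```

Deferring the excess faults spreads the paging work across intervals, which is what keeps the transient overload from translating directly into missed deadlines.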


System Integrity Monitoring System using Kernel-based Virtual Machine (커널 기반 가상머신을 이용한 시스템 무결성 모니터링 시스템)

  • Nam, Hyun-Woo; Park, Neung-Soo
    • The KIPS Transactions:PartC / v.18C no.3 / pp.157-166 / 2011
  • The virtualization layer executes at a higher privilege level than the kernel layer and is therefore well suited to monitoring operating systems. However, existing virtualization-based monitoring systems provide only simple information, such as CPU or memory usage rates. In this paper, a monitoring system using full virtualization is proposed that can monitor a virtual machine's dynamic kernel objects, such as memory, registers, the GDT, the IDT, and the system call table. To verify the approach, the proposed system was implemented on KVM (Kernel-based Virtual Machine), a full virtualization technique that is applied directly to the Linux kernel without any modification. The proposed system consists of a KvmAccess module that accesses KVM's internal objects and an API that provides the monitoring results to other external modules. In experiments, the CPU utilization for monitoring operations in the proposed monitoring system is 0.35% when the system is monitored with a 1-second period. The proposed monitoring system thus incurs little performance degradation.
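The integrity check itself, independent of how guest memory is read, amounts to hashing a snapshot of a static kernel object and comparing it against a trusted baseline. The C sketch below illustrates this on a mock system call table; the KvmAccess module and KVM's internal interfaces are not reproduced, and all values shown are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Generic illustration of an integrity check: hash a snapshot of a static
 * kernel object (here, a mock system call table) taken at a trusted point in
 * time and compare it periodically.  Reading guest memory through KVM is what
 * the paper's KvmAccess module does; that part is not reproduced here. */

#define SYSCALL_ENTRIES 16

static uint64_t fnv1a(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint64_t h = 1469598103934665603ULL;          /* FNV-1a 64-bit offset basis */
    for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

int main(void)
{
    /* Mock guest system call table: handler addresses captured from guest memory. */
    uint64_t table[SYSCALL_ENTRIES];
    for (int i = 0; i < SYSCALL_ENTRIES; i++)
        table[i] = 0xffffffff81000000ULL + 16u * (unsigned)i;

    uint64_t baseline = fnv1a(table, sizeof table);   /* trusted snapshot        */

    table[7] = 0xdeadbeefULL;                         /* simulated rootkit hook  */

    if (fnv1a(table, sizeof table) != baseline)
        printf("system call table modified: integrity violation detected\n");
    return 0;
}
```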