• Title/Summary/Keyword: Memory Buffer

A Buffer Architecture based on Dynamic Mapping table for Write Performance of Solid State Disk (동적 사상 테이블 기반의 버퍼구조를 통한 Solid State Disk의 쓰기 성능 향상)

  • Cho, In-Pyo;Ko, So-Hyang;Yang, Hoon-Mo;Park, Gi-Ho;Kim, Shin-Dug
    • The KIPS Transactions: Part A / v.18A no.4 / pp.135-142 / 2011
  • This research designs an effective buffer structure and management scheme for flash memory based high-performance SSDs (Solid State Disks). Conventional SSDs tend to show asymmetrical performance for read and write operations, in addition to allowing only a limited number of erase operations. To minimize the number of erase operations and the write latency, the degree of interleaving over multiple flash memory chips should be maximized. Thus, to increase the interleaving effect, an effective buffer structure is proposed for an SSD with a hybrid address mapping scheme and super-block management. The proposed buffer operation is designed to improve performance and enhance the flash memory life cycle. Its management is based on a new selection scheme that distinguishes random from sequential accesses depending on execution characteristics, and on a method that enlarges the sequential access unit by aggressive merging. Experiments show that the mapping table newly developed for the MBA is more efficient than basic simple management in terms of maintenance and performance. The overall performance increases by around 35% in comparison with basic simple management.
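
A minimal sketch of the kind of sequential/random classification such a buffer manager might use; the names and threshold below are illustrative assumptions, not the paper's design. A write is treated as part of a sequential stream when its start LBA continues the previous request and the accumulated run is long enough, and only then would it be routed to the sequential buffer region that feeds large, interleaving-friendly flushes.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative threshold; a real design would tune this to the flash geometry. */
#define SEQ_THRESHOLD 8   /* consecutive sectors needed to call a stream sequential */

typedef struct {
    unsigned long last_lba;   /* LBA right after the previous request           */
    unsigned long run_len;    /* length (sectors) of the current sequential run */
} stream_state;

/* Classify a write request and decide which buffer region should receive it. */
static bool is_sequential(stream_state *s, unsigned long lba, unsigned long len)
{
    bool contiguous = (lba == s->last_lba);   /* continues the previous request? */

    s->run_len  = contiguous ? s->run_len + len : len;
    s->last_lba = lba + len;
    return contiguous && s->run_len >= SEQ_THRESHOLD;
}

int main(void)
{
    stream_state s = {0, 0};
    unsigned long reqs[][2] = { {100, 4}, {104, 4}, {108, 4}, {5000, 4} };

    for (int i = 0; i < 4; i++) {
        bool seq = is_sequential(&s, reqs[i][0], reqs[i][1]);
        printf("LBA %lu -> %s buffer\n", reqs[i][0], seq ? "sequential" : "random");
    }
    return 0;
}
```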

Duplication-Aware Garbage Collection for Flash Memory-Based Virtual Memory Systems (플래시 메모리 기반의 가상 메모리 시스템을 위한 중복성을 고려한 GC 기법)

  • Ji, Seung-Gu;Shin, Dong-Kun
    • Journal of KIISE: Computer Systems and Theory / v.37 no.3 / pp.161-171 / 2010
  • As embedded systems adopt monolithic kernels, NAND flash memory is used as the swap space of virtual memory systems. While flash memory has the advantages of low power consumption, shock resistance, and non-volatility, it requires garbage collection due to its erase-before-write characteristic. The efficiency of the garbage collection scheme largely affects the performance of flash memory. This paper proposes a novel garbage collection technique that exploits data redundancy between main memory and flash memory in flash memory-based virtual memory systems. The proposed scheme also takes data locality into consideration to minimize garbage collection overhead. Experimental results demonstrate that the proposed garbage collection scheme improves performance by 37% on average compared to previous schemes.
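
The following toy sketch illustrates the duplication-aware idea described above, under my own assumption (not taken from the paper) that each flash page in the swap area carries a flag indicating whether an identical copy currently resides in DRAM; during victim cleaning such pages are invalidated instead of copied, saving page copies.

```c
#include <stdio.h>
#include <stdbool.h>

#define PAGES_PER_BLOCK 4   /* toy value; real blocks hold far more pages */

/* Hypothetical descriptor for a flash page used as swap space. */
typedef struct {
    bool valid;     /* page still referenced by the swap map       */
    bool in_dram;   /* an identical copy currently resides in DRAM */
} swap_page;

/* Copying a valid page to a free block (details omitted in this sketch). */
static void copy_page(int blk, int pg)
{
    printf("block %d page %d: copied to a new block\n", blk, pg);
}

/*
 * Duplication-aware cleaning of one victim block: valid pages whose data is
 * duplicated in DRAM are simply invalidated instead of copied, reducing the
 * number of page copies performed during garbage collection.
 */
static void clean_victim(int blk, swap_page pages[PAGES_PER_BLOCK])
{
    for (int pg = 0; pg < PAGES_PER_BLOCK; pg++) {
        if (!pages[pg].valid)
            continue;                   /* already invalid: nothing to do     */
        if (pages[pg].in_dram) {
            pages[pg].valid = false;    /* recoverable from DRAM: drop it     */
            printf("block %d page %d: duplicate in DRAM, not copied\n", blk, pg);
        } else {
            copy_page(blk, pg);         /* only non-duplicated data is moved  */
        }
    }
    /* the whole block can now be erased */
}

int main(void)
{
    swap_page victim[PAGES_PER_BLOCK] = {
        {true, true}, {true, false}, {false, false}, {true, true}
    };
    clean_victim(7, victim);
    return 0;
}
```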

Fabrications and properties of MFIS capacitor using SiON buffer layer (SiON buffer layer를 이용한 MFIS Capacitor의 제작 및 특성)

  • 정상현;정순원;인용일;김광호
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2001.07a / pp.70-73 / 2001
  • MFIS (metal-ferroelectric-insulator-semiconductor) structures using silicon oxynitride (SiON) buffer layers were fabricated and demonstrated nonvolatile memory operation. The SiON films were formed on p-Si(100) by rapid thermal processing (RTP) in an O₂+N₂ ambient at 1100°C. The gate leakage current density of the Al/SiON/Si(100) capacitor was on the order of 10⁻⁸ A/cm² over the range of ±2.5 MV/cm. The C-V characteristics of the Al/LiNbO₃/SiON/Si(100) capacitor showed a hysteresis loop due to the ferroelectric nature of the LiNbO₃ thin films. The typical dielectric constant of the LiNbO₃ film in the MFIS device was about 24. The memory window width was about 1.2 V in the electric field range of ±300 kV/cm.

Performance Evaluation of HMB-Supported DRAM-Less NVMe SSDs (HMB를 지원하는 DRAM-Less NVMe SSD의 성능 평가)

  • Kim, Kyu Sik;Kim, Tae Seok
    • KIPS Transactions on Computer and Communication Systems / v.8 no.7 / pp.159-166 / 2019
  • Unlike conventional solid-state drives, DRAM-less SSDs omit DRAM from the controller to lower cost and power consumption. This inevitably degrades performance, a problem that can be alleviated by the host memory buffer (HMB) feature of NVMe, which allows an SSD to use part of the host's DRAM. In this paper, we show that commercial DRAM-less SSDs indeed exhibit lower I/O performance than SSDs equipped with DRAM, but that their performance improves when the HMB feature is utilized. Through various experiments and analyses, we also show that DRAM-less SSDs mainly exploit the host DRAM as a mapping table cache, rather than as a read cache or write buffer, to improve I/O performance.
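
A hedged sketch of why a host memory buffer helps most as a mapping-table cache: with part of the logical-to-physical (L2P) table cached in borrowed host DRAM, an address translation that would otherwise require a NAND read becomes a memory lookup. The direct-mapped layout, sizes, and names below are illustrative assumptions, not the organization of any real controller.

```c
#include <stdio.h>

#define HMB_ENTRIES 1024u       /* L2P entries that fit in the borrowed host DRAM */
#define NO_LPN      0xFFFFFFFFu

typedef struct {
    unsigned lpn;   /* which logical page this slot currently describes */
    unsigned ppn;   /* its physical page number                         */
} l2p_entry;

static l2p_entry hmb_cache[HMB_ENTRIES];   /* stands in for the host-resident cache */

/* Reading a mapping entry from NAND is slow; this stub just fabricates one. */
static unsigned load_l2p_from_nand(unsigned lpn)
{
    return lpn * 4u + 1u;                  /* pretend physical page number */
}

/* Translate a logical page number, preferring the HMB-resident cache. */
static unsigned translate(unsigned lpn)
{
    unsigned idx = lpn % HMB_ENTRIES;      /* direct-mapped for simplicity */

    if (hmb_cache[idx].lpn != lpn) {       /* miss: pay the NAND read cost */
        hmb_cache[idx].lpn = lpn;
        hmb_cache[idx].ppn = load_l2p_from_nand(lpn);
    }
    return hmb_cache[idx].ppn;             /* hit: served from host DRAM   */
}

int main(void)
{
    for (unsigned i = 0; i < HMB_ENTRIES; i++)
        hmb_cache[i].lpn = NO_LPN;

    printf("LPN 42 -> PPN %u (first access loads from NAND)\n", translate(42));
    printf("LPN 42 -> PPN %u (second access hits the HMB cache)\n", translate(42));
    return 0;
}
```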

Improving Periodic Flush Overhead of File Systems Using Non-volatile Buffer Cache (비휘발성 버퍼 캐시를 이용한 파일 시스템의 주기적인 flush 오버헤드 개선)

  • Lee, Eunji;Kang, Hyojung;Koh, Kern;Bahn, Hyokyung
    • Journal of KIISE / v.41 no.11 / pp.878-884 / 2014
  • The file I/O buffer cache plays an important role in narrowing the wide speed gap between main memory and secondary storage. However, data loss or inconsistencies may occur if the system crashes before data that has been updated in the buffer cache is flushed to storage. Thus, most operating systems adopt a daemon that periodically flushes dirty data to secondary storage. In this study, we show that periodic flushes account for 30-70% of the total write traffic to storage, and we remove this inefficiency by implementing a small non-volatile buffer cache. Specifically, we present space-efficient management techniques, such as delta-write and fragment-grouping, and show that storage write traffic and throughput can be improved by 44.2% and 23.6%, respectively, with only a small amount of NVRAM.
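
A small illustration of the delta-write idea mentioned above, using structures of my own rather than the paper's code: only the byte range of a block that actually changed is copied into the scarce NVRAM, so a dirty block whose update touched a few bytes consumes only those bytes of non-volatile space.

```c
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16   /* toy block size; real file-system blocks are 4 KB */

/* Assumed record format for one delta stored in the small NVRAM area. */
typedef struct {
    int  offset;                /* where the modified range starts */
    int  length;                /* how many bytes were modified    */
    char delta[BLOCK_SIZE];     /* the modified bytes themselves   */
} nvram_delta;

/* Compare the clean and dirty images and record only the changed range. */
static int delta_write(const char *clean, const char *dirty, nvram_delta *out)
{
    int first = -1, last = -1;

    for (int i = 0; i < BLOCK_SIZE; i++) {
        if (clean[i] != dirty[i]) {
            if (first < 0) first = i;
            last = i;
        }
    }
    if (first < 0) return 0;                    /* nothing changed */

    out->offset = first;
    out->length = last - first + 1;
    memcpy(out->delta, dirty + first, (size_t)out->length);
    return out->length;                         /* bytes consumed in NVRAM */
}

int main(void)
{
    char clean[BLOCK_SIZE] = "aaaaaaaaaaaaaaa";
    char dirty[BLOCK_SIZE] = "aaaaXYZaaaaaaaa";
    nvram_delta d;

    int used = delta_write(clean, dirty, &d);
    printf("stored %d delta bytes at offset %d instead of %d\n",
           used, d.offset, BLOCK_SIZE);
    return 0;
}
```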

A Combined BTB Architecture for effective branch prediction (효율적인 분기 예측을 위한 공유 구조의 BTB)

  • Lee Yong-hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.7 / pp.1497-1501 / 2005
  • Branch instructions, which change the sequential instruction flow, cause pipeline stalls in a microprocessor. The pipeline hazards due to branch instructions are among the most serious problems degrading microprocessor performance. A branch target buffer (BTB) predicts whether a branch will be taken and supplies the address of the next instruction on the basis of that prediction. If the branch target buffer predicts correctly, the instruction flow is not stalled, which leads to better microprocessor performance. In this paper, the architecture of a tag memory that the branch target buffer and the TLB can share is presented. Because the two separate tag memories used for the BTB and the TLB are replaced by a single combined tag memory, a smaller chip size and faster prediction can be expected. This shared-tag architecture is more advantageous for microprocessors that use more address bits and exploit more instruction-level parallelism.
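
The conceptual benefit of a shared tag array can be sketched as follows; the page-granularity tagging, entry count, and field widths are simplifying assumptions for illustration, not the paper's exact organization. One tag comparison against the fetch address yields both the BTB prediction and the TLB translation.

```c
#include <stdio.h>
#include <stdbool.h>

#define ENTRIES    64
#define PAGE_SHIFT 12

/* One entry of the combined structure: a single shared tag, two data fields. */
typedef struct {
    bool     valid;
    unsigned tag;          /* shared tag: high bits of the virtual address */
    unsigned btb_target;   /* BTB data array: predicted branch target      */
    unsigned tlb_frame;    /* TLB data array: physical frame number        */
} combined_entry;

static combined_entry table[ENTRIES];

/* One tag comparison yields both prediction and translation on a hit. */
static bool lookup(unsigned vaddr, unsigned *target, unsigned *frame)
{
    unsigned idx = (vaddr >> PAGE_SHIFT) % ENTRIES;
    unsigned tag = vaddr >> PAGE_SHIFT;

    if (table[idx].valid && table[idx].tag == tag) {
        *target = table[idx].btb_target;
        *frame  = table[idx].tlb_frame;
        return true;
    }
    return false;
}

int main(void)
{
    unsigned vaddr = 0x00403a10;
    unsigned idx = (vaddr >> PAGE_SHIFT) % ENTRIES;

    table[idx] = (combined_entry){ true, vaddr >> PAGE_SHIFT, 0x00403b00, 0x1234 };

    unsigned target, frame;
    if (lookup(vaddr, &target, &frame))
        printf("hit: predicted target 0x%x, frame 0x%x\n", target, frame);
    return 0;
}
```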

Improving Compiler to Prevent Buffer Overflow Attack (버퍼오버플로우 공격 방지를 위한 컴파일러 기법)

  • Kim, Jong-Ewi;Lee, Seong-Uck;Hong, Man-Pyo
    • The KIPS Transactions: Part C / v.9C no.4 / pp.453-458 / 2002
  • Recently, the number of attacks that exploit buffer overflow vulnerabilities has been increasing. Although the buffer overflow problem has been known for a long time, it continues to present a serious security threat. There are three ways to defend against buffer overflow attacks: first, allow overwrites but prevent unauthorized changes of control flow; second, do not allow overwriting at all; third, allow the change of control flow but prevent execution of the injected code. This paper takes the first approach, allowing overwrites while preventing unauthorized changes of control flow, by extending the compiler. The previous defense method has two defects. First, it burdens the program with overhead because much of its work is performed during execution of the process. Second, each time a function is called, its return address is stored in a reserved memory area created by the compiler, which wastes too much memory. The newly proposed method extends the compiler by adding processing after compile and link time. By performing these additional steps after compilation and linking but before execution, the work required at run time is reduced, and so is the overhead.
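
As a hand-written illustration of the general "allow the overwrite, but catch the unauthorized control-flow change" category of defense (not the specific post-compile/link scheme proposed in the paper), the sketch below saves a trusted copy of the return address at function entry and verifies it before returning; a compiler-based scheme would emit equivalent checks automatically. It relies on the GCC/Clang builtin __builtin_return_address.

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch of return-address checking (GCC/Clang only). A trusted copy of the
 * return address is pushed to a separate shadow area at function entry and
 * compared just before returning; a mismatch means the on-stack return
 * address was overwritten. This is an illustration of the defense category,
 * not the paper's implementation.
 */
#define SHADOW_DEPTH 128

static void *shadow_stack[SHADOW_DEPTH];
static int   shadow_top;

static void shadow_push(void *ret_addr)
{
    shadow_stack[shadow_top++] = ret_addr;        /* save the trusted copy */
}

static void shadow_check(void *ret_addr)
{
    if (shadow_stack[--shadow_top] != ret_addr) { /* overwritten by an overflow? */
        fprintf(stderr, "return address corrupted, aborting\n");
        abort();
    }
}

static void vulnerable_function(const char *input)
{
    shadow_push(__builtin_return_address(0));

    char buf[16];
    /* A bounded copy; an unbounded strcpy overflowing into the on-stack
       return address is the case the check below would catch. */
    snprintf(buf, sizeof buf, "%s", input);
    printf("buffer holds: %s\n", buf);

    shadow_check(__builtin_return_address(0));
}

int main(void)
{
    vulnerable_function("hello");
    printf("returned normally\n");
    return 0;
}
```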

A Buffer Cache Replacement Algorithm for Considering both Hybrid Main Memory and Storage (하이브리드 메인 메모리와 스토리지의 특성을 고려한 버퍼 캐시 교체 정책)

  • Kang, Dong Hyun;Eom, Young Ik
    • Journal of KIISE / v.42 no.8 / pp.947-953 / 2015
  • PRAM is being considered as a potential successor to DRAM because of characteristics such as byte-addressability, non-volatility, and high density. To gain these benefits, buffer cache replacement algorithms based on PRAM have been actively studied. However, most previous studies exploit the byte-level performance of PRAM only to a limited degree, focusing instead on its limited lifetime and its slower access latency compared to DRAM. In this paper, we propose a novel buffer cache replacement algorithm that fully considers the byte-level performance of PRAM as well as the performance of secondary storage. To take advantage of small writes on PRAM, the proposed scheme keeps pages that are frequently accessed with small writes on PRAM and allows selective page migration from DRAM to PRAM. As a result, our scheme significantly reduces the number of PRAM writes. Experimental results with real workloads indicate that our scheme reduces the number of PRAM writes by up to 92% and improves performance by up to 62% compared to CLOCK.
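
A toy placement policy in the spirit of the scheme described above, with thresholds and counters that are my assumptions rather than the paper's parameters: pages whose recent updates are consistently small are migrated from DRAM to PRAM so that subsequent small writes land on the byte-addressable medium, while pages receiving large writes stay in DRAM.

```c
#include <stdio.h>

#define SMALL_WRITE_BYTES 64    /* writes at or below this count as "small" */
#define MIGRATE_AFTER     4     /* small writes needed before migrating     */

typedef enum { IN_DRAM, IN_PRAM } placement;

typedef struct {
    placement where;
    int       small_writes;     /* recent small-write count for this page */
} cached_page;

/* Record one write to a cached page and decide whether to migrate it. */
static void on_write(cached_page *pg, int bytes)
{
    if (bytes <= SMALL_WRITE_BYTES)
        pg->small_writes++;
    else
        pg->small_writes = 0;           /* a large write resets the history */

    if (pg->where == IN_DRAM && pg->small_writes >= MIGRATE_AFTER) {
        pg->where = IN_PRAM;            /* small-write-dominant: move to PRAM */
        printf("page migrated DRAM -> PRAM\n");
    }
}

int main(void)
{
    cached_page pg = { IN_DRAM, 0 };
    int writes[] = { 32, 16, 8, 48, 4096 };

    for (int i = 0; i < 5; i++)
        on_write(&pg, writes[i]);

    printf("final placement: %s\n", pg.where == IN_PRAM ? "PRAM" : "DRAM");
    return 0;
}
```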

In-depth Analysis and Performance Improvement of a Flash Disk-based Matrix Transposition Algorithm (플래시 디스크 기반 행렬전치 알고리즘 심층 분석 및 성능개선)

  • Lee, Hyung-Bong;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications / v.12 no.6 / pp.377-384 / 2017
  • The range of matrix applications is too broad to enumerate. A typical matrix application area in computer science is image processing. In particular, radar scanning equipment implemented on a small embedded system requires real-time matrix transposition for image processing, and since its memory is small, a general matrix transposition algorithm cannot be applied. In this case, matrix transposition must be performed in disk space, such as on a flash disk, using a limited memory buffer. In this paper, we analyze and improve a recently published flash disk-based matrix transposition algorithm, named the asymmetric sub-matrix transposition algorithm. The performance analysis shows that the asymmetric sub-matrix transposition algorithm performs worse than the conventional sub-matrix transposition algorithm, but the improved asymmetric sub-matrix transposition algorithm outperforms the sub-matrix transposition algorithm on 13 of the 16 experimental data sets.
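
A minimal in-memory version of sub-matrix (tile-wise) transposition, the buffering pattern these algorithms build on: the matrix is processed in TILE x TILE blocks so that only one block at a time must fit in the limited buffer. On a flash disk the source and destination would be files and each tile a bounded group of reads and writes; the sizes here are toy values and the code is an illustration, not the paper's algorithm.

```c
#include <stdio.h>

#define N    8     /* matrix dimension                  */
#define TILE 4     /* tile edge; buffer holds TILE*TILE */

static void transpose_tiled(const int *src, int *dst)
{
    int buf[TILE * TILE];                       /* the limited memory buffer */

    for (int bi = 0; bi < N; bi += TILE) {
        for (int bj = 0; bj < N; bj += TILE) {
            /* read one tile from the source ("flash read" in the disk case) */
            for (int i = 0; i < TILE; i++)
                for (int j = 0; j < TILE; j++)
                    buf[i * TILE + j] = src[(bi + i) * N + (bj + j)];

            /* write it back transposed, to the mirrored tile position */
            for (int i = 0; i < TILE; i++)
                for (int j = 0; j < TILE; j++)
                    dst[(bj + j) * N + (bi + i)] = buf[i * TILE + j];
        }
    }
}

int main(void)
{
    int src[N * N], dst[N * N];

    for (int i = 0; i < N * N; i++)
        src[i] = i;

    transpose_tiled(src, dst);
    printf("src[1*N+5]=%d, dst[5*N+1]=%d\n", src[1 * N + 5], dst[5 * N + 1]);
    return 0;
}
```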

An Optical Asynchronous Transfer Mode(ATM) Switching System Using Free Space Optics and an Output Buffer Memory (자유공간 광학과 출력 버퍼 메모리를 이용한 광 Asynchronous Transfer Mode(ATM) 교환방식)

  • 지윤규;이상신
    • The Journal of Korean Institute of Communications and Information Sciences / v.16 no.4 / pp.326-334 / 1991
  • We propose an optical Asynchronous Transfer Mode (ATM) switching system using free-space optics and an output buffer memory. The distributor system in the switching fabric was analyzed using the Huygens-Fresnel principle and lens transformation. For monochromatic illumination, a pattern similar to the Fourier transform of the input distribution was observed across the output plane. A spatially broadened intensity distribution across the output plane can be expected when the system is illuminated with a partially coherent, quasi-monochromatic beam. Spatially coherent pulses as short as 100 fs can propagate through the distributor without severe spatial broadening.
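
For the buffering side of the architecture (as opposed to the free-space optical distributor), a generic output-buffered switch can be sketched as below: each output port owns a FIFO, every cell arriving in a time slot is placed into the FIFO of its destination port, and each port transmits one cell per slot. The port count, queue depth, and drop-on-full behavior are illustrative assumptions, not parameters from the paper.

```c
#include <stdio.h>

#define PORTS     4
#define QUEUE_LEN 8

/* One output buffer: a fixed-size circular FIFO of cell identifiers. */
typedef struct {
    int cells[QUEUE_LEN];
    int head, tail, count;
} out_fifo;

static out_fifo ports[PORTS];

static void enqueue(int port, int cell_id)
{
    out_fifo *q = &ports[port];
    if (q->count == QUEUE_LEN) {            /* buffer full: the cell is lost */
        printf("port %d: cell %d dropped\n", port, cell_id);
        return;
    }
    q->cells[q->tail] = cell_id;
    q->tail = (q->tail + 1) % QUEUE_LEN;
    q->count++;
}

/* One time slot: each output port transmits at most one buffered cell. */
static void transmit_slot(void)
{
    for (int p = 0; p < PORTS; p++) {
        out_fifo *q = &ports[p];
        if (q->count == 0)
            continue;
        printf("port %d: transmitting cell %d\n", p, q->cells[q->head]);
        q->head = (q->head + 1) % QUEUE_LEN;
        q->count--;
    }
}

int main(void)
{
    /* three cells destined for port 2 arrive in the same slot */
    enqueue(2, 101);
    enqueue(2, 102);
    enqueue(2, 103);

    transmit_slot();     /* sends cell 101 */
    transmit_slot();     /* sends cell 102 */
    return 0;
}
```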
