• Title/Summary/Keyword: SSDs

Implementation of Memory Efficient Flash Translation Layer for Open-channel SSDs

  • Oh, Gijun;Ahn, Sungyong
    • International journal of advanced smart convergence
    • /
    • v.10 no.1
    • /
    • pp.142-150
    • /
    • 2021
  • The open-channel SSD is a new type of Solid-State Disk (SSD) that reduces the garbage collection overhead and write amplification caused by the physical constraints of NAND flash memory by exposing the internal structure of the SSD to the host. However, the host-level Flash Translation Layer (FTL) provided for open-channel SSDs in the current Linux kernel consumes host memory excessively because it uses a page-level mapping table to translate logical addresses to physical addresses. Therefore, in this paper, we implement a selective mapping table loading scheme that loads only the currently required part of the mapping table from the SSD into a mapping table cache, instead of keeping the entire table in memory. In addition, to increase the hit ratio of the mapping table cache, filesystem information and mapping table access history are used in the cache replacement policy. The proposed scheme is implemented in the host-level FTL of the Linux kernel and evaluated using an open-channel SSD emulator. According to the evaluation results, it achieves 80% of the I/O performance while using only 32% of the memory of the previous host-level FTL.
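
The selective loading described above boils down to a demand-loaded mapping-table cache. The following is a minimal sketch of that idea, not the paper's implementation: the segment size, cache capacity, flash-read stub, and plain LRU replacement (the paper additionally uses filesystem hints and access history) are all assumptions.

```c
/* Demand-loaded page-mapping cache with simple LRU eviction (sketch). */
#include <stdio.h>

#define ENTRIES_PER_SEG 1024     /* logical pages covered by one segment */
#define CACHED_SEGS     4        /* host memory budget: 4 cached segments */

struct map_seg {
    long id;                     /* segment index, -1 if slot is empty   */
    long last_use;               /* LRU timestamp                        */
    unsigned int l2p[ENTRIES_PER_SEG];
};

static struct map_seg cache[CACHED_SEGS];
static long tick;

/* Stand-in for reading one mapping-table segment from the SSD's flash. */
static void load_seg_from_flash(struct map_seg *s, long id)
{
    s->id = id;
    for (int i = 0; i < ENTRIES_PER_SEG; i++)
        s->l2p[i] = (unsigned int)(id * ENTRIES_PER_SEG + i);   /* dummy map */
}

static struct map_seg *get_seg(long lpn)
{
    long id = lpn / ENTRIES_PER_SEG;
    struct map_seg *victim = &cache[0];

    for (int i = 0; i < CACHED_SEGS; i++) {
        if (cache[i].id == id) {               /* cache hit */
            cache[i].last_use = ++tick;
            return &cache[i];
        }
        if (cache[i].last_use < victim->last_use)
            victim = &cache[i];                /* track the LRU victim */
    }
    load_seg_from_flash(victim, id);           /* cache miss: demand load */
    victim->last_use = ++tick;
    return victim;
}

int main(void)
{
    for (int i = 0; i < CACHED_SEGS; i++) cache[i].id = -1;

    long lpns[] = { 10, 2000, 10, 9000, 123456, 10 };
    for (int i = 0; i < 6; i++) {
        struct map_seg *s = get_seg(lpns[i]);
        printf("LPN %ld -> PPN %u\n", lpns[i], s->l2p[lpns[i] % ENTRIES_PER_SEG]);
    }
    return 0;
}
```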

Optimizing Garbage Collection Overhead of Host-level Flash Translation Layer for Journaling Filesystems

  • Son, Sehee;Ahn, Sungyong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.2
    • /
    • pp.27-35
    • /
    • 2021
  • A NAND flash memory-based SSD needs internal software, the Flash Translation Layer (FTL), to provide the traditional block device interface to the host because of its physical constraints, such as erase-before-write and large erase blocks. However, because useful host-side information cannot be delivered to the FTL through the narrow block device interface, SSDs suffer from a variety of problems such as increased garbage collection overhead, long tail latency, and unpredictable I/O latency. In contrast, a new type of SSD, the open-channel SSD, exposes the internal structure of the SSD to the host so that the underlying NAND flash memory can be managed directly by a host-level FTL. In particular, classifying I/O data with host-side information can reduce garbage collection overhead. In this paper, we propose a new scheme that reduces the garbage collection overhead of open-channel SSDs by separating the journal of a journaling filesystem from other file data. Because the journal has a different lifespan from other file data, the Write Amplification Factor (WAF) caused by garbage collection can be reduced. The proposed scheme is implemented by modifying the host-level FTL of Linux and evaluated with both Fio and Filebench. According to the experimental results, the proposed scheme improves I/O performance by 46~50% while reducing the WAF of open-channel SSDs by more than 33% compared to the previous FTL.
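
The core mechanism above is write-stream separation: journal pages and ordinary data pages are appended to different flash blocks so that pages with similar lifespans are invalidated and erased together. Below is a minimal sketch of that placement policy with invented block geometry and a hypothetical stream tag; it is not the authors' FTL code.

```c
/* Separate open flash blocks per write stream (sketch). */
#include <stdio.h>

#define PAGES_PER_BLOCK 64

enum stream { STREAM_DATA = 0, STREAM_JOURNAL = 1, NUM_STREAMS };

struct open_block {
    int block_no;
    int next_page;
};

static struct open_block open_blk[NUM_STREAMS] = {
    { .block_no = 0, .next_page = 0 },
    { .block_no = 1, .next_page = 0 },
};
static int next_free_block = 2;

/* Place one logical page; journal and data never share a flash block. */
static void ftl_write(long lpn, enum stream s)
{
    struct open_block *b = &open_blk[s];

    if (b->next_page == PAGES_PER_BLOCK) {    /* block full: open a new one */
        b->block_no = next_free_block++;
        b->next_page = 0;
    }
    printf("LPN %ld (%s) -> block %d page %d\n",
           lpn, s == STREAM_JOURNAL ? "journal" : "data",
           b->block_no, b->next_page++);
}

int main(void)
{
    ftl_write(100, STREAM_JOURNAL);    /* e.g. journal descriptor/commit pages */
    ftl_write(2048, STREAM_DATA);
    ftl_write(101, STREAM_JOURNAL);
    ftl_write(2049, STREAM_DATA);
    return 0;
}
```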

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon;Park, Jang-Woo;Kim, Sung-Tan;Lee, Sang-Won;Moon, Bong-Ki
    • Journal of KIISE:Databases
    • /
    • v.37 no.6
    • /
    • pp.308-314
    • /
    • 2010
  • As the price of flash memory continues to drop and flash SSD controller technology advances, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, flash SSDs are unlikely to replace hard disks completely as database storage. Instead, using a flash SSD as a cache for hard disks is more practical, and several hybrid storage architectures combining flash memory and hard disks have been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extended buffer for the main buffer in database systems: it stores pages replaced out of the main buffer and returns pages that are re-referenced to the upper buffer layer, improving system performance drastically. In contrast to existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer on the host side, provides fast random reads for the warm pages being replaced out of the limited main buffer. In fact, for the pages missing from the main buffer in a real TPC-C trace, the hit ratio in the extended buffer exceeded 60%, which supports our conjecture that this simple extended buffer can be very effective as a cache. In terms of performance per price, our extended buffer architecture outperforms two alternative approaches of the same cost: 1) a larger main buffer and 2) more hard disks.
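
The idea above can be pictured as a two-level buffer: pages evicted from the DRAM main buffer are staged in an SSD-resident extended buffer, and a main-buffer miss probes the SSD before falling back to the HDD. The sketch below illustrates that flow with tiny invented capacities and naive LRU arrays; it is not the paper's buffer manager.

```c
/* Two-level buffer: DRAM main buffer backed by an SSD extended buffer (sketch). */
#include <stdio.h>
#include <string.h>

#define MAIN_CAP 2     /* DRAM buffer frames        */
#define EXT_CAP  4     /* SSD extended-buffer slots */

static long main_buf[MAIN_CAP], ext_buf[EXT_CAP];
static int main_n, ext_n;

static int find(long *arr, int n, long page)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == page) return i;
    return -1;
}

/* Push a page to the front of an LRU array, evicting the tail if full. */
static long push_front(long *arr, int *n, int cap, long page, int *evicted)
{
    long victim = -1;
    *evicted = 0;
    if (*n == cap) { victim = arr[cap - 1]; (*n)--; *evicted = 1; }
    memmove(&arr[1], &arr[0], (size_t)(*n) * sizeof(long));
    arr[0] = page;
    (*n)++;
    return victim;
}

static void access_page(long page)
{
    int ev;
    if (find(main_buf, main_n, page) >= 0) {
        printf("page %ld: DRAM hit\n", page);
        return;
    }
    if (find(ext_buf, ext_n, page) >= 0)
        printf("page %ld: SSD extended-buffer hit (fast random read)\n", page);
    else
        printf("page %ld: miss, read from HDD\n", page);

    long victim = push_front(main_buf, &main_n, MAIN_CAP, page, &ev);
    if (ev)                            /* replaced-out warm page goes to SSD */
        push_front(ext_buf, &ext_n, EXT_CAP, victim, &ev);
}

int main(void)
{
    long trace[] = { 1, 2, 3, 1, 4, 2, 1 };
    for (int i = 0; i < 7; i++) access_page(trace[i]);
    return 0;
}
```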

A Hetero-Mirroring Scheme to Improve I/O Performance of High-Speed Hybrid Storage (고속 하이브리드 저장장치의 입출력 성능개선을 위한 헤테로-미러링 기법)

  • Byun, Si-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.12
    • /
    • pp.4997-5006
    • /
    • 2010
  • Flash-memory-based SSDs (Solid State Disks) are among the best media for the storage devices of portable and desktop computers. Their features include non-volatility, low power consumption, and fast read access times, which are sufficient to make flash memory a major database storage component for desktop and server computers. However, traditional storage management schemes based on HDDs (Hard Disk Drives) and RAID (Redundant Array of Independent Disks) need to be improved because SSD write operations are relatively slow, and can even freeze the device, compared with fast read operations. To achieve this goal, we propose a new storage management scheme called Hetero-Mirroring, based on the traditional HDD mirroring scheme. Hetero-Mirroring improves RAID-1 performance by balancing write workloads and delaying write operations to avoid SSD freezing. Our test results show that the scheme significantly reduces write and freezing overheads, improving the performance of the traditional SSD RAID-1 scheme by 18 percent and its response time by 38 percent.
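
One way to read the scheme above is as an asymmetric RAID-1 pair: reads are steered to the fast SSD, while SSD-side writes are queued and flushed lazily so that a write burst cannot freeze the device, with the HDD copy written immediately for durability. The sketch below illustrates that reading with invented queue sizes; it is an assumption-laden interpretation, not the published Hetero-Mirroring code.

```c
/* Asymmetric SSD/HDD mirror with delayed SSD writes (sketch). */
#include <stdio.h>

#define WQ_CAP 8

static long ssd_write_queue[WQ_CAP];
static int wq_len;

static void hdd_write(long blk) { printf("HDD  write blk %ld\n", blk); }
static void ssd_write(long blk) { printf("SSD  write blk %ld\n", blk); }
static void ssd_read(long blk)  { printf("SSD  read  blk %ld\n", blk); }

/* Flush queued SSD writes when the device is idle or the queue is full. */
static void flush_ssd_queue(void)
{
    for (int i = 0; i < wq_len; i++)
        ssd_write(ssd_write_queue[i]);
    wq_len = 0;
}

static void mirror_write(long blk)
{
    hdd_write(blk);                      /* durable copy goes out immediately */
    if (wq_len == WQ_CAP)
        flush_ssd_queue();               /* avoid unbounded delay             */
    ssd_write_queue[wq_len++] = blk;     /* delay SSD copy to dodge freezing  */
}

static void mirror_read(long blk)
{
    /* If the block is still only queued for the SSD, serve it from the HDD. */
    for (int i = 0; i < wq_len; i++)
        if (ssd_write_queue[i] == blk) { printf("HDD  read  blk %ld\n", blk); return; }
    ssd_read(blk);                       /* otherwise read from the fast SSD  */
}

int main(void)
{
    mirror_write(7);
    mirror_read(7);     /* still queued -> served from HDD */
    flush_ssd_queue();
    mirror_read(7);     /* now mirrored -> served from SSD */
    return 0;
}
```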

File-System-Level SSD Caching for Improving Application Launch Time (응용프로그램의 기동시간 단축을 위한 파일 시스템 수준의 SSD 캐싱 기법)

  • Han, Changhee;Ryu, Junhee;Lee, Dongeun;Kang, Kyungtae;Shin, Heonshik
    • Journal of KIISE
    • /
    • v.42 no.6
    • /
    • pp.691-698
    • /
    • 2015
  • Application launch time is an important performance metric for user experience in desktop and laptop environments, and it depends mostly on the performance of secondary storage. Launch times can be reduced by using a solid-state drive (SSD) instead of a hard disk drive (HDD); however, considering the cost-performance trade-off, using SSDs as caches for slow HDDs is a practical alternative for reducing application launch times. We propose a new SSD caching scheme that migrates data blocks from HDDs to SSDs. Our scheme operates entirely at the file system level and does not require the extra layer for mapping SSD-cached data that is essential in most other schemes. In particular, it does not incur the mapping overheads that place significant burdens on main memory, the CPU, and SSD space for the mapping table. Experiments with 8 popular applications demonstrate that our scheme yields a 56% performance gain in application launch when data blocks are migrated along with their metadata.
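
One plausible reading of "no extra mapping layer" is that the file system's own block pointers encode the target device, so migrating a launch-related block simply rewrites the inode pointer in place. The sketch below illustrates that reading with invented structures; it is not the authors' file-system modification.

```c
/* Migrate blocks to the SSD by rewriting file-system block pointers (sketch). */
#include <stdio.h>

enum dev { DEV_HDD = 0, DEV_SSD = 1 };

struct blk_ptr {             /* a block address as stored in the inode */
    enum dev dev;
    long     blkno;
};

struct inode {
    struct blk_ptr blocks[4];
};

static long next_ssd_blk;    /* trivial SSD-space allocator (assumption) */

/* Copy one block to the SSD and retarget the inode pointer in place. */
static void migrate_to_ssd(struct inode *ino, int idx)
{
    long dst = next_ssd_blk++;
    printf("copy HDD blk %ld -> SSD blk %ld\n", ino->blocks[idx].blkno, dst);
    ino->blocks[idx].dev = DEV_SSD;      /* later reads follow this pointer */
    ino->blocks[idx].blkno = dst;
}

int main(void)
{
    struct inode app = { .blocks = {
        { DEV_HDD, 100 }, { DEV_HDD, 101 }, { DEV_HDD, 102 }, { DEV_HDD, 103 } } };

    migrate_to_ssd(&app, 0);             /* e.g. blocks read at launch time */
    migrate_to_ssd(&app, 1);

    for (int i = 0; i < 4; i++)
        printf("block[%d] -> %s %ld\n", i,
               app.blocks[i].dev == DEV_SSD ? "SSD" : "HDD",
               app.blocks[i].blkno);
    return 0;
}
```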

Performance Analysis of NVMe SSDs and Design of Direct Access Engine on Virtualized Environment (가상화 환경에서 NVMe SSD 성능 분석 및 직접 접근 엔진 개발)

  • Kim, Sewoog;Choi, Jongmoo
    • KIISE Transactions on Computing Practices
    • /
    • v.24 no.3
    • /
    • pp.129-137
    • /
    • 2018
  • An NVMe (Non-Volatile Memory Express) SSD (Solid State Drive) is high-performance storage that uses flash memory as the storage medium, PCIe as the interface, and NVMe as the protocol on that interface. It supports multiple I/O queues, which makes it feasible to process parallel I/Os on multi-core systems and to provide higher bandwidth than SATA SSDs. Hence, the NVMe SSD is considered a next-generation storage device for data centers and cloud computing systems. However, in virtualized systems, the performance of NVMe SSDs is not fully utilized because of bottlenecks in the software I/O stack. In particular, when the I/O stack of a hypervisor or host operating system such as Xen or KVM is used, I/O performance degrades seriously due to the doubled I/O stack between the host and the virtual machine. In this paper, we propose a new I/O engine, called the Direct-AIO (Direct Asynchronous I/O) engine, that can access the NVMe SSD directly to improve I/O performance in the QEMU emulator. We implement the proposed engine and analyze the I/O performance differences between the existing I/O engine and the Direct-AIO engine.
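
The paper's Direct-AIO engine itself is not reproduced here, but the general ingredient it builds on is issuing I/O to the NVMe device while bypassing the buffered path, for example by combining O_DIRECT with Linux native AIO (libaio). The sketch below shows that building block only; the device path and sizes are assumptions, and it must be built with -laio.

```c
/* Direct, asynchronous read from an NVMe block device via libaio (sketch). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/nvme0n1";          /* example path (assumption)   */
    int fd = open(dev, O_RDONLY | O_DIRECT);   /* bypass the host page cache  */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;                                 /* O_DIRECT needs aligned I/O  */
    if (posix_memalign(&buf, 4096, 4096)) { close(fd); return 1; }

    io_context_t ctx = 0;
    if (io_setup(1, &ctx) < 0) {               /* create a kernel AIO context */
        fprintf(stderr, "io_setup failed\n");
        free(buf); close(fd); return 1;
    }

    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev;
    io_prep_pread(&cb, fd, buf, 4096, 0);      /* async 4 KiB read at offset 0 */

    if (io_submit(ctx, 1, cbs) == 1 &&         /* queue, then reap completion  */
        io_getevents(ctx, 1, 1, &ev, NULL) == 1)
        printf("read completed, res = %ld\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}
```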

Dynamic Bandwidth Distribution Method for High Performance Non-volatile Memory in Cloud Computing Environment (클라우드 환경에서 고성능 저장장치를 위한 동적 대역폭 분배 기법)

  • Kwon, Piljin;Ahn, Sungyong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.3
    • /
    • pp.97-103
    • /
    • 2020
  • Linux cgroups play a fundamental role in sharing system resources among multiple containers in container-based cloud computing environments. For I/O resources in particular, Linux cgroups provide a mechanism for sharing I/O bandwidth in proportion to I/O weight. However, the current mechanism, which relies on the BFQ I/O scheduler, seriously degrades I/O performance on high-bandwidth storage devices such as NVMe SSDs. In this paper, we propose a new feedback-based I/O bandwidth sharing scheme for Linux cgroups that allocates I/O credits to containers according to their I/O weights and adjusts the amount of credits to the performance fluctuations of the NVMe SSD. The proposed scheme is implemented on Linux kernel 5.3 and evaluated. The evaluation results show that it shares I/O bandwidth among containers in proportion to their I/O weights while achieving more than twice the I/O performance of the existing scheme.
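
The feedback loop described above can be summarized as: estimate a per-period credit budget from the throughput the SSD actually delivered, then split that budget among containers in proportion to their cgroup I/O weights. The sketch below uses invented weights and a stand-in throughput measurement; it is not the kernel patch itself.

```c
/* Weight-proportional credit distribution with throughput feedback (sketch). */
#include <stdio.h>

#define NCONT 3

static int weight[NCONT] = { 100, 300, 600 };    /* cgroup io.weight values */

/* Redistribute the per-period credit budget in proportion to weight. */
static void distribute(long budget, long credit[NCONT])
{
    long wsum = 0;
    for (int i = 0; i < NCONT; i++) wsum += weight[i];
    for (int i = 0; i < NCONT; i++) credit[i] = budget * weight[i] / wsum;
}

int main(void)
{
    long budget = 1000;                  /* initial guess: credits per period */
    long credit[NCONT];

    for (int period = 0; period < 3; period++) {
        distribute(budget, credit);
        printf("period %d, budget %ld:", period, budget);
        for (int i = 0; i < NCONT; i++) printf("  c%d=%ld", i, credit[i]);
        printf("\n");

        /* Feedback: pretend the device completed 'done' credits worth of I/O
         * and nudge the next budget toward that measured capability.        */
        long done = 1200;                /* stand-in for measured throughput */
        budget = (budget + done) / 2;
    }
    return 0;
}
```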

A Study on the Disruptive Technology of Secondary Memory Unit: Focus on the HDD vs SSD Case (보조기억장치의 와해성 기술 사례에 관한 연구: HDD 대 SSD 사례를 중심으로)

  • Lee, Sang-Hyun
    • Journal of the Korea Convergence Society
    • /
    • v.4 no.1
    • /
    • pp.21-26
    • /
    • 2013
  • Because disruptive technologies have received little attention in domestic research, the purpose of this study is to aid understanding of them through an empirical analysis of cases selected from the computer data storage industry. The analysis shows that SSDs, which threaten the existence of HDDs, meet the conditions of a disruptive technology as first presented by Christensen (1992). SSDs are not only technologically superior to HDDs but can also be mass-produced thanks to their applicability to a vast array of product categories, made possible by their miniaturization, light weight, and safety. This diversity of applicable fields enables mass production, leading to further decreases in unit price and ultimately sustaining the diffusion of the technology. By presenting empirical cases that aid the understanding of disruptive technology, this study contributes to both academia and the business world.

Variation of Effective SSD According to Electron Energies and Irradiated Field Sizes (전자선 에너지 및 조사야에 따른 유효선원 피부 간 거리 변화)

  • Yang, Chil-Yong;Yum, Ha-Yong;Jung, Tae-Sik
    • Radiation Oncology Journal
    • /
    • v.5 no.2
    • /
    • pp.157-163
    • /
    • 1987
  • It is known that a fixed source-to-skin distance (SSD) cannot be used when the treatment field is sloped or larger than the second collimator in electron beam irradiation, and that the inverse square law with an effective SSD should be adopted instead. Effective SSDs were measured for different field sizes at electron energies of 6, 9, 12, 15, and 18 MeV using the NELAC 1018D linear accelerator of Kosin Medical Center. We found the following important properties of the effective SSD. 1. At 6 MeV, the minimum effective SSD was 58.8 cm for the small $6\times6$ cm field and the maximum was 94.9 cm for the large $25\times25$ cm field, a difference of 36.1 cm; the dose rate at the measuring point varied considerably even with a small change in SSD for the small field ($6\times6$ cm) and low energy (6 MeV). 2. For a given electron energy, the effective SSD increased with field size. 3. For each field size, the effective SSD gradually increased with electron energy, reached a maximum at 12 or 15 MeV, and decreased again at 18 MeV. Therefore, the effective SSD should be measured for each energy and field size in practical radiotherapy.
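
For reference, the inverse square law the abstract refers to is usually written as follows (standard textbook form, not taken from the paper): with $D_0$ the calibrated dose rate, $D_g$ the dose rate with an additional air gap $g$, and $d_m$ the depth of maximum dose,

```latex
\[
  \frac{D_g}{D_0}
  = \left( \frac{SSD_{\mathrm{eff}} + d_m}{SSD_{\mathrm{eff}} + d_m + g} \right)^{2}
\]
% The effective SSD is obtained by plotting sqrt(D_0/D_g) against the gap g:
% the points fall on a straight line of slope 1/(SSD_eff + d_m), so
% SSD_eff = 1/slope - d_m.
```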

An Analysis on the Performance of TRIM Commands on SSDs and its Application to the Ext4 File System (SSD에서의 TRIM 명령어 처리 성능 분석 및 Ext4 파일 시스템으로의 적용)

  • Son, Hyobong;Lee, Youngjae;Kim, Yongserk;Kim, Jin-Soo
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.1
    • /
    • pp.52-57
    • /
    • 2015
  • In this paper, we analyze the performance of TRIM commands on various SSDs and, based on the analysis results, enhance the handling of TRIM commands in the Ext4 file system. We observed that TRIM performance improves as the size of the LBA range increases, when the sector numbers are aligned and contiguous, and when more LBA ranges are delivered in a single TRIM command. However, even though performance is better when multiple LBA ranges are conveyed in a single TRIM command, the Ext4 file system issues a separate TRIM command for every LBA range. We therefore modify the Ext4 file system to convey at most 64 LBA ranges per TRIM command. Evaluations with Filebench show that the performance of file deletion operations improves by up to 35%.
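
The 64-range limit mentioned above matches the layout of the ATA DATA SET MANAGEMENT (TRIM) payload: each range occupies 8 bytes (a 48-bit starting LBA plus a 16-bit sector count), so one 512-byte data block carries 64 ranges. The sketch below packs a batch of freed ranges into such a block; it only builds the payload in memory, does not issue a real command, and the freed ranges are invented.

```c
/* Pack up to 64 LBA ranges into one TRIM payload block (sketch). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TRIM_ENTRIES_PER_BLOCK 64       /* 512 bytes / 8 bytes per entry */

struct lba_range { uint64_t lba; uint16_t nsect; };

/* Pack up to 64 ranges into one 512-byte TRIM data block. */
static int pack_trim_block(uint8_t block[512], const struct lba_range *r, int n)
{
    memset(block, 0, 512);
    if (n > TRIM_ENTRIES_PER_BLOCK) n = TRIM_ENTRIES_PER_BLOCK;
    for (int i = 0; i < n; i++) {
        uint64_t entry = (r[i].lba & 0xFFFFFFFFFFFFULL) |   /* bits 0-47: LBA */
                         ((uint64_t)r[i].nsect << 48);      /* bits 48-63: count */
        for (int b = 0; b < 8; b++)                         /* little-endian layout */
            block[i * 8 + b] = (uint8_t)(entry >> (8 * b));
    }
    return n;
}

int main(void)
{
    struct lba_range freed[] = {
        { 2048, 256 }, { 10240, 1024 }, { 65536, 8 },
    };
    uint8_t payload[512];
    int packed = pack_trim_block(payload, freed, 3);
    printf("packed %d LBA ranges into one TRIM data block\n", packed);
    return 0;
}
```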