• Title/Summary/Keyword: read ahead

Improving the Read Performance of Compressed File Systems Considering Kernel Read-ahead Mechanism (커널의 미리읽기를 고려한 압축파일시스템의 읽기성능향상)

  • Ahn, Sung-Yong;Hyun, Seung-Hwan;Koh, Kern
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.678-682 / 2010
  • Compressed file systems are frequently used in embedded systems to increase cost efficiency. One of their drawbacks, however, is low read performance. Moreover, the kernel read-ahead mechanism, which improves the read throughput of the storage device, actually hurts the read performance of a compressed file system by increasing read latency. The main reason is that a compressed file system pays a very large read-ahead miss penalty due to decompression overhead. To solve this problem, this paper proposes a new read technique for compressed file systems that takes the kernel read-ahead mechanism into account. The proposed technique improves device read throughput by issuing bulk reads to the device and reduces the decompression overhead of the compressed file system through selective decompression. We implement the technique by modifying CramFS and evaluate the implementation on Linux kernel 2.6.21. Performance evaluation results show that the proposed technique reduces the average major page fault handling latency by 28%.
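As a rough sketch of the bulk-read-plus-selective-decompression idea summarized above, the following C fragment separates the two costs: the whole compressed extent is assumed to have been fetched by one large device read, and only the faulting page is decompressed. All names (cext, fake_decompress, read_page_selective) and the extent layout are hypothetical and are not taken from CramFS or from the paper.

```c
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096
#define EXT_PAGES 16   /* file pages covered by one compressed extent */

/* One compressed extent, fetched from the device by a single bulk read. */
struct cext {
    uint8_t  cdata[EXT_PAGES * PAGE_SIZE]; /* compressed bytes              */
    uint32_t page_off[EXT_PAGES];          /* where each page's data starts */
    uint32_t page_len[EXT_PAGES];          /* compressed length per page    */
};

/* Stand-in for a real per-page decompressor such as zlib's uncompress(). */
static void fake_decompress(const uint8_t *in, uint32_t len, uint8_t *out)
{
    memcpy(out, in, len < PAGE_SIZE ? len : PAGE_SIZE);
}

/* Serve one faulting page: the extent is already in memory thanks to the
 * bulk read, so only the requested page pays the decompression cost; the
 * neighbouring pages stay compressed until they are actually needed. */
static void read_page_selective(const struct cext *e, unsigned page,
                                uint8_t *out)
{
    fake_decompress(e->cdata + e->page_off[page], e->page_len[page], out);
}
```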

A Local Buffer Allocation Scheme for Multimedia Data on Linux (리눅스 상에서 멀티미디어 데이타를 고려한 지역 버퍼 할당 기법)

  • 신동재;박성용;양지훈
    • Journal of KIISE: Computing Practices and Letters / v.9 no.4 / pp.410-419 / 2003
  • The buffer cache of a general-purpose operating system such as Linux manages file data using a global block replacement policy and read-ahead. As a result, multimedia data, which have low locality of reference and varying consumption rates, show a low cache hit ratio and consume additional buffers because of read-ahead. In this paper, we design and implement a new buffer allocation algorithm for multimedia data on Linux. Our approach keeps one read-ahead cache per opened multimedia file and dynamically changes the read-ahead group size based on the buffer consumption rate of that file, which distributes resources fairly and optimizes buffer consumption. The paper compares the system's performance with that of Linux 2.4.17 in terms of buffer consumption and buffer hit ratio.
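A minimal sketch of the per-file adaptation described above, under the assumption that the kernel tracks, for each open multimedia file, how much of the prefetched data was actually consumed. The structure, thresholds, and limits below are illustrative, not the paper's or Linux 2.4's actual read-ahead code.

```c
#include <stddef.h>

#define RA_MIN_PAGES   4
#define RA_MAX_PAGES 128

/* Per-open-file read-ahead state (hypothetical). */
struct file_ra_state {
    size_t group_size;   /* current read-ahead group size in pages      */
    size_t consumed;     /* pages consumed since the last adjustment    */
    size_t prefetched;   /* pages prefetched since the last adjustment  */
};

/* Called periodically: if the application consumed most of what was
 * prefetched, enlarge the group; if much of it went unused, shrink it. */
static void adjust_group_size(struct file_ra_state *ra)
{
    if (ra->prefetched == 0)
        return;
    if (ra->consumed * 4 >= ra->prefetched * 3) {      /* >= 75% used */
        if (ra->group_size * 2 <= RA_MAX_PAGES)
            ra->group_size *= 2;
    } else if (ra->consumed * 2 < ra->prefetched) {    /* < 50% used  */
        if (ra->group_size / 2 >= RA_MIN_PAGES)
            ra->group_size /= 2;
    }
    ra->consumed = ra->prefetched = 0;                 /* start a new window */
}
```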

A Prediction-Based Data Read Ahead Policy using Decision Tree for improving the performance of NAND flash memory based storage devices (낸드 플래시 메모리 기반 저장 장치의 성능 향상을 위해 결정트리를 이용한 예측 기반 데이터 미리 읽기 정책)

  • Lee, Hyun-Seob
    • Journal of Internet of Things and Convergence / v.8 no.4 / pp.9-15 / 2022
  • NAND flash memory is used as the medium for various storage devices because of its high data processing speed and low power consumption. However, since the read speed of NAND flash is about ten times faster than its write speed, various studies have been conducted to narrow this speed gap; in particular, flash-dedicated buffer management policies have been studied to improve write speed. Yet SSDs (solid state disks), which have recently been used for a wide range of purposes, are now more vulnerable on the read side than on the write side. In this paper, we examine why read performance can lag behind write performance in SSDs composed of NAND flash memory and study buffer management policies to improve it. The proposed buffer management policy improves the speed of a flash-based storage device by analyzing the pattern of read requests and pre-reading, from NAND flash memory, the data expected to be requested in the future. The effectiveness of this read-ahead policy is demonstrated through simulation.
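The paper trains a decision tree over observed read patterns; purely as an illustration, the hand-written tree below shows the general shape of such a policy: a few features of the recent read stream decide how many pages to read ahead. The features, thresholds, and window sizes are assumptions, not values from the paper.

```c
/* Features summarizing the recent read stream (hypothetical). */
struct read_features {
    double   seq_ratio;      /* fraction of recent reads that were sequential */
    double   reuse_ratio;    /* fraction of prefetched pages actually used    */
    unsigned avg_req_pages;  /* average request size in pages                 */
};

/* A tiny, hand-written decision tree: walk a few branches and return the
 * number of pages to read ahead from NAND flash for the next request. */
static unsigned readahead_pages(const struct read_features *f)
{
    if (f->seq_ratio > 0.8) {            /* strongly sequential stream */
        if (f->avg_req_pages >= 8)
            return 64;                   /* large prefetch window       */
        return 32;
    }
    if (f->seq_ratio > 0.4 && f->reuse_ratio > 0.5)
        return 8;                        /* mildly sequential: prefetch a little */
    return 0;                            /* looks random: do not prefetch */
}
```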

Improving Prefetching Effects by Exploiting Reference Patterns (참조패턴을 이용한 선반입의 개선)

  • Lee, Hyo-Jeong;Doh, In-Hwan;Noh, Sam-H.
    • Journal of KIISE: Computing Practices and Letters / v.14 no.2 / pp.226-230 / 2008
  • Prefetching is one of the most widely used techniques for improving I/O performance, but it has been reported that prefetching can produce adverse results for some reference patterns. This paper proposes a prefetching framework that can easily be layered on existing prefetching techniques. The framework, called IPRP (Improving Prefetching Effects by Exploiting Reference Patterns), detects reference patterns online and controls prefetching according to the characteristics of the detected pattern. In our experiments, we applied IPRP to Linux read-ahead prefetching. IPRP clearly prevented adverse results in cases where Linux read-ahead prefetching increases total execution time by about 40% to 70%. In cases where Linux read-ahead prefetching is beneficial, IPRP with read-ahead delivered a similar or slightly better execution time. These results show that IPRP can efficiently complement and improve legacy prefetching techniques.
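IPRP itself lives inside the kernel, but the control loop it describes (detect the reference pattern online, then adjust prefetching to match) can be illustrated at user level with the standard posix_fadvise() hints, which encourage or suppress Linux read-ahead per file descriptor. The sketch below is an analogy to that mechanism, not the paper's implementation, and the pattern classification is assumed to come from elsewhere.

```c
#define _XOPEN_SOURCE 600
#include <fcntl.h>

enum pattern { PAT_SEQUENTIAL, PAT_RANDOM, PAT_UNKNOWN };

/* Map a detected reference pattern to a read-ahead hint for this fd.
 * posix_fadvise() is a standard POSIX call; on Linux, SEQUENTIAL roughly
 * doubles the read-ahead window and RANDOM disables file read-ahead. */
static void apply_prefetch_policy(int fd, enum pattern p)
{
    switch (p) {
    case PAT_SEQUENTIAL:
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL); /* prefetch aggressively */
        break;
    case PAT_RANDOM:
        posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);     /* stop wasteful prefetch */
        break;
    default:
        posix_fadvise(fd, 0, 0, POSIX_FADV_NORMAL);     /* fall back to default  */
        break;
    }
}
```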

An Efficient Cleaning Scheme for File Defragmentation on Log-Structured File System (로그 구조 파일 시스템의 파일 단편화 해소를 위한 클리닝 기법)

  • Park, Jonggyu;Kang, Dong Hyun;Seo, Euiseong;Eom, Young Ik
    • Journal of KIISE / v.43 no.6 / pp.627-635 / 2016
  • When many processes issue write operations alternately on a Log-structured File System (LFS), the created files can become fragmented at the file system layer even though LFS allocates new blocks for each process sequentially. Unfortunately, this file fragmentation degrades read performance because it increases the number of block I/Os. In addition, read-ahead operations, which increase the amount of data requested at a time, exacerbate the performance degradation. In this paper, we propose a new cleaning method for LFS that minimizes file fragmentation. During the LFS cleaning process, our method sorts valid data blocks by inode number before copying them to a new segment, which relocates fragmented blocks contiguously. In our experiments, the proposed cleaning method eliminates 60% of the file fragmentation present before cleaning and consequently improves sequential read throughput by 21% when read-ahead is applied.
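A minimal sketch of the sorting step described above, assuming the cleaner has collected descriptors of the victim segment's valid blocks: ordering them by (inode number, file offset) before they are copied out makes each file's blocks land contiguously in the new segment. The structure and field names are illustrative, not from the paper's implementation.

```c
#include <stdlib.h>
#include <stdint.h>

/* One live block found while cleaning a victim segment (hypothetical). */
struct valid_blk {
    uint64_t ino;      /* owning inode number         */
    uint64_t fofs;     /* offset within the file      */
    uint64_t old_pba;  /* current physical block no.  */
};

/* Order blocks by inode first, then by file offset. */
static int cmp_by_inode(const void *a, const void *b)
{
    const struct valid_blk *x = a, *y = b;
    if (x->ino != y->ino)
        return x->ino < y->ino ? -1 : 1;
    if (x->fofs != y->fofs)
        return x->fofs < y->fofs ? -1 : 1;
    return 0;
}

/* Sort the victim segment's live blocks before writing them out, so the
 * new segment stores each file's blocks contiguously and later sequential
 * reads (and read-ahead) hit fewer separate extents. */
static void sort_valid_blocks(struct valid_blk *blks, size_t n)
{
    qsort(blks, n, sizeof(*blks), cmp_by_inode);
}
```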

Performance Optimization in GlusterFS on SSDs (SSD 환경 아래에서 GlusterFS 성능 최적화)

  • Kim, Deoksang;Eom, Hyeonsang;Yeom, Heonyoung
    • KIISE Transactions on Computing Practices / v.22 no.2 / pp.95-100 / 2016
  • In the current era of big data and cloud computing, the amount of data in use keeps increasing, and various systems are being developed to process this big data rapidly. A distributed file system is often used to store the data, and GlusterFS is one of the popular distributed file systems. As computer technology has advanced, NAND flash SSDs (Solid State Drives), which are high-performance storage devices, have become cheaper, so datacenter operators try to use SSDs in their systems and to deploy GlusterFS on them. However, since GlusterFS is designed for HDDs (Hard Disk Drives), performance degrades due to structural problems when SSDs are used instead; these problems include the use of the I/O-cache, Read-ahead, and Write-behind translators. By removing these features, which do not suit SSDs and their strength in random I/O, we achieved performance improvements of up to 255% for 4 KB random reads and up to 50% for 64 KB random reads.

Performance Evaluation and Analysis of NVM Storage for Ultra-Light Internet of Things (초경량 사물인터넷을 위한 비휘발성램 스토리지 성능평가 및 분석)

  • Lee, Eunji;Yoo, Seunghoon;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.6 / pp.181-186 / 2015
  • With the rapid growth of semiconductor technologies, small devices with powerful computing abilities are becoming a reality. As this environment has a limited power supply, NVM storage, which offers high density and low power consumption, is preferred to HDDs or SSDs. However, legacy software layers optimized for HDDs should be revisited. Specifically, as storage performance approaches DRAM performance, existing I/O mechanisms and software configurations should be reassessed. This paper explores the challenges and implications of using NVM storage through a broad range of experiments. We measure the performance of a system whose NVM storage is emulated by DRAM with proper timing parameters and compare it with that of HDD storage environments under various configurations. Our experimental results show that even with storage as fast as DRAM, the performance gain is not large for read operations, because current I/O mechanisms do a good job of hiding the slow performance of HDDs. To assess the potential benefit of fast storage media, we vary the I/O configuration and perform experiments that quantify the effects of existing I/O mechanisms such as buffer caching, read-ahead, synchronous I/O, direct I/O, block I/O, and byte-addressable I/O on systems with NVM storage.
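For readers who want to reproduce this style of comparison on commodity Linux, the sketch below shows two of the switches such experiments vary, buffered versus direct I/O and read-ahead on versus off, using standard POSIX/Linux calls. It is generic illustration code, not the authors' measurement harness.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>

/* Open the test file either through the page cache (buffered) or with
 * O_DIRECT, and optionally hint the kernel to skip file read-ahead. */
static int open_test_file(const char *path, int use_direct, int use_readahead)
{
    int flags = O_RDONLY | (use_direct ? O_DIRECT : 0);
    int fd = open(path, flags);
    if (fd < 0)
        return -1;
    if (!use_readahead)
        /* POSIX_FADV_RANDOM turns off file read-ahead on Linux. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);
    return fd;
}

/* O_DIRECT requires buffers (and transfer sizes) aligned to the device's
 * logical block size; 4096 bytes is a common safe choice. */
static void *alloc_aligned(size_t bytes)
{
    void *buf = NULL;
    if (posix_memalign(&buf, 4096, bytes) != 0)
        return NULL;
    return buf;
}
```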

An Efficient AMI Simulator Design adapted in Smart Grid (스마트그리드에서의 효율적인 AMI 구현을 위한 통합 시뮬레이터 설계)

  • Yang, Il-Kwon;Choi, Seung-Hwan;Lee, Sang-Ho
    • The Transactions of The Korean Institute of Electrical Engineers / v.62 no.10 / pp.1368-1375 / 2013
  • The Smart Grid, which can monitor and diagnose the power grid in real time so that it operates efficiently, has been pursued systematically as one alternative for addressing these issues by combining advanced information and communication technology with the electrical network. An electric company that introduces smart grid technology can read electricity meters remotely by means of two-way communication between the meter and the central system, enabling both the customer and the utility to participate in reasonable electrical energy utilization. AMI has thus become one of the core foundations for realizing the Smart Grid. It is hard to test the entire AMI process before full deployment because it spans a broad range of components, from the customer to the utility operation system, and requires handling and managing massive amounts of data. Therefore, we design an efficient AMI network model and a simulator for the performance evaluation required to simulate the network model under conditions similar to the real environment. This tool supports evaluating the efficiency of AMI network equipment and deployment, and additionally estimates the appropriate number of deployments and the proper capacities.

Efficient Management of PCM-based Swap Systems with a Small Page Size

  • Park, Yunjoo;Bahn, Hyokyung
    • JSTS: Journal of Semiconductor Technology and Science / v.15 no.5 / pp.476-484 / 2015
  • Due to recent advances in non-volatile memory technologies such as PCM, a new memory hierarchy for computer systems is expected to appear. In this paper, we explore the performance of PCM-based swap systems and discuss how such a system can be managed efficiently. Specifically, we introduce three management techniques. First, we show that the page fault handling time can be reduced by attaching PCM to DIMM slots, thereby eliminating the software stack overhead of block I/O and the context switch time. Second, we show that it is effective to reduce the page size and turn off the read-ahead option under a PCM swap system in which the page fault handling time is sufficiently small. Third, we show that performance is not degraded even with a small DRAM memory in front of a PCM swap device, which significantly reduces DRAM energy consumption compared to HDD-based swap systems. We expect that the results of this paper will lead to a transition from the legacy swap system structure of "large memory - slow swap" to a new paradigm of "small memory - fast swap."
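The paper's changes (a smaller page size and read-ahead turned off for the PCM swap device) require kernel modification, but the read-ahead half has a rough stock-kernel analogue: the vm.page-cluster sysctl controls how many pages (2^page-cluster) are brought in per swap fault, and setting it to 0 swaps in a single page at a time. The snippet below only illustrates that analogue; it is not the authors' modified kernel and needs root privileges.

```c
#include <stdio.h>

/* Disable swap read-ahead on a running Linux system by setting
 * vm.page-cluster to 0, i.e. 2^0 = 1 page per swap-in. */
int main(void)
{
    FILE *f = fopen("/proc/sys/vm/page-cluster", "w");
    if (!f) {
        perror("open /proc/sys/vm/page-cluster");
        return 1;
    }
    fputs("0\n", f);   /* one page per swap fault: no swap read-ahead */
    fclose(f);
    return 0;
}
```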

Research on Conditional Execution Out-of-order Instruction Issue Microprocessor Using Register Renaming Method (레지스터 리네이밍 방법을 사용하는 조건부 실행 비순차적 명령어 이슈 마이크로프로세서에 관한 연구)

  • 최규백;김문경;홍인표;이용석
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.9A / pp.763-773 / 2003
  • In this paper, we present a register renaming method for microprocessors with conditional execution and out-of-order instruction issue. Register renaming reduces false data dependencies (write-after-read (WAR) and write-after-write (WAW)). To implement such a microprocessor with register renaming, we use a register file in which in-order state physical registers and look-ahead state physical registers are shared among all logical registers, and we design an in-order state indicator, a renaming state indicator, a physical register assignment indicator, a condition prediction buffer, and a reorder buffer. Using this hardware, the processor can perform register renaming and track the in-order state. The presented method uses fewer hardware resources than conventional register renaming methods, eliminates an associative lookup, and provides a short recovery time.
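As a toy illustration of the bookkeeping that register renaming requires, the sketch below keeps a logical-to-physical map and a free list and allocates a fresh physical register for every renamed destination, which is what removes WAR and WAW dependences. The paper's additional structures (in-order and look-ahead state indicators, the condition prediction buffer, the reorder buffer) are not modeled here; sizes and names are assumptions.

```c
#include <stdint.h>

#define NLOG  32   /* logical (architectural) registers */
#define NPHYS 64   /* physical registers                */

struct rename_state {
    uint8_t map[NLOG];         /* logical reg -> current physical reg   */
    uint8_t free_list[NPHYS];  /* pool of unassigned physical registers */
    int     free_top;          /* number of entries in free_list        */
};

/* Rename one destination register: allocate a fresh physical register so a
 * later write cannot clobber a value an earlier in-flight instruction still
 * needs (removing WAR and WAW dependences). Returns the new physical
 * register, or -1 when none is free and the front end must stall. */
static int rename_dest(struct rename_state *rs, unsigned logical)
{
    if (rs->free_top == 0)
        return -1;                           /* no free register: stall    */
    uint8_t phys = rs->free_list[--rs->free_top];
    rs->map[logical] = phys;                 /* later readers see this map */
    return phys;
}
```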