• Title/Summary/Keyword: Buffer Cache (버퍼 캐시)

Search results: 68

An Efficient Latency Hiding method using accumulation buffer (누적 버퍼를 활용한 효율적인 Latency Hiding기법)

  • Lee, Min-Woo;Han, Tack-Don
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.297-300 / 2012
  • Many techniques have been proposed to improve cache performance, and latency hiding in particular has been studied extensively as a way to use the cache efficiently. Several approaches have been investigated, such as hiding write latency with a write buffer and hiding latency through multithreading, and research on latency hiding is still actively under way. This paper likewise proposes an accumulation buffer for efficient latency hiding. We examine the utilization of the accumulation buffer to determine how effectively it hides latency, and we focus on the additional benefits gained from using the buffer.

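To make the write-buffer style of latency hiding mentioned in the abstract concrete, here is a minimal trace-driven sketch in Python. The buffer size, latency value, and the `WriteBuffer` class are illustrative assumptions only, not the accumulation-buffer design proposed in the paper.

```python
# Minimal sketch (assumption: a simple trace-driven model, not the paper's design).
# A write buffer hides store latency: the CPU enqueues the write and keeps running
# while the buffer drains to memory in the background. Reads must check the buffer
# first so they see the newest value (store forwarding).
from collections import deque

MEM_LATENCY = 100          # cycles to reach memory (illustrative)
BUFFER_SIZE = 4            # entries in the write buffer (illustrative)

class WriteBuffer:
    def __init__(self):
        self.entries = deque()          # (address, value) pairs awaiting memory

    def write(self, addr, value):
        """Return the stall cycles incurred by this store."""
        if len(self.entries) < BUFFER_SIZE:
            self.entries.append((addr, value))
            return 0                    # latency hidden: CPU continues immediately
        # buffer full: stall until the oldest entry drains to memory
        self.entries.popleft()
        self.entries.append((addr, value))
        return MEM_LATENCY

    def read(self, addr):
        """Forward the value from the buffer if present (newest match wins)."""
        for a, v in reversed(self.entries):
            if a == addr:
                return v, 0
        return None, MEM_LATENCY        # miss: pay the memory latency

wb = WriteBuffer()
stalls = sum(wb.write(a, a * 2) for a in range(6))
print("stall cycles with a 4-entry write buffer:", stalls)
print("read addr 2 (forwarded from buffer):", wb.read(2))
```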

Extended Buffer Management with Flash Memory SSDs (플래시메모리 SSD를 이용한 확장형 버퍼 관리)

  • Sim, Do-Yoon;Park, Jang-Woo;Kim, Sung-Tan;Lee, Sang-Won;Moon, Bong-Ki
    • Journal of KIISE: Databases / v.37 no.6 / pp.308-314 / 2010
  • As the price of flash memory continues to drop and flash SSD controller technology advances, high-performance flash SSDs at affordable prices are flourishing in the storage market. Nevertheless, it is hard to expect that flash SSDs will completely replace hard disks as database storage. Instead, using a flash SSD as a cache for hard disks is more practical, and several hybrid storage architectures combining flash memory and hard disks have been suggested in the literature. In this paper, we propose a new approach that uses a flash SSD as an extended buffer for the main buffer in database systems: it stores the pages replaced out of the main buffer and returns the pages that are re-referenced to the upper buffer layer, improving system performance drastically. In contrast to existing approaches that use a flash SSD as a cache in the lower storage layer, our approach, which uses the flash SSD as an extended buffer in the upper host, provides fast random reads for the warm pages being replaced out of the limited main buffer. In fact, for all the pages missed from the main buffer in a real TPC-C trace, the hit ratio in the extended buffer could be more than 60%, which supports our conjecture that this simple extended-buffer approach can be very effective as a cache. In terms of performance per price, our extended-buffer architecture outperforms two alternative approaches of the same cost: 1) a larger main buffer and 2) more hard disks.
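
The mechanism described in this abstract, where pages evicted from the DRAM main buffer are staged on a flash SSD and re-referenced pages are served from there instead of the hard disk, can be pictured as a two-level buffer. The sketch below assumes plain LRU replacement at both levels; the `TwoLevelBuffer` class and its sizes are hypothetical, not the paper's implementation.

```python
# Minimal sketch of the "extended buffer" idea (assumption: plain LRU at both
# levels; the paper's actual replacement and write-back policies may differ).
# Pages evicted from the DRAM main buffer are staged on the flash SSD, so a
# later re-reference is a fast SSD read instead of a random hard-disk read.
from collections import OrderedDict

class TwoLevelBuffer:
    def __init__(self, main_size, ext_size):
        self.main = OrderedDict()   # page id -> data, LRU order (DRAM main buffer)
        self.ext = OrderedDict()    # page id -> data, LRU order (flash SSD buffer)
        self.main_size, self.ext_size = main_size, ext_size

    def get(self, page):
        if page in self.main:                     # DRAM hit
            self.main.move_to_end(page)
            return "main"
        if page in self.ext:                      # SSD extended-buffer hit
            self.ext.pop(page)
            self._admit(page)
            return "ext"
        self._admit(page)                         # miss: read from the hard disk
        return "disk"

    def _admit(self, page):
        self.main[page] = object()
        if len(self.main) > self.main_size:
            victim, _ = self.main.popitem(last=False)   # evict the LRU page
            self.ext[victim] = object()                 # stage it on the SSD
            if len(self.ext) > self.ext_size:
                self.ext.popitem(last=False)            # it falls out of the SSD too

buf = TwoLevelBuffer(main_size=2, ext_size=4)
print([buf.get(p) for p in [1, 2, 3, 1, 2]])  # pages 1 and 2 come back from the SSD
```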

Advanced Disk Block Caching Algorithm for Disk I/O sub-system (디스크 입출력 서브시스템을 위한 개선된 디스크 블록 캐싱 알고리즘)

  • Jung, Soo-Mok;Rho, Kyung-Taeg
    • Journal of the Korea Society of Computer and Information / v.12 no.6 / pp.139-146 / 2007
  • A hard disk, which can be classified as external storage, is usually capacious and economical. Despite these attractive characteristics and continued efforts to improve its performance, however, a hard disk is far slower than a processor, and its improvement has been gradual because its operation is mechanical, whereas processor performance has advanced rapidly with semiconductor technology. Consequently, the disk I/O subsystem has become a bottleneck for overall system performance, and research on the disk I/O subsystem continues in order to improve it. In this paper, we propose a multi-level LRU scheme and apply it to computer systems equipped with both a buffer cache and a disk cache. By applying the proposed scheme to such systems, the average access time to disk blocks can be decreased. The efficiency of the proposed algorithm was verified by simulation results.

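The benefit claimed here, a lower average block access time when both the buffer cache and the disk cache are managed well, can be seen with a small back-of-the-envelope model. The latencies and hit ratios below are assumed for illustration and are not taken from the paper.

```python
# Back-of-the-envelope model (assumed numbers, not from the paper) of why a better
# block-replacement policy at the buffer-cache and disk-cache levels lowers the
# average access time for a disk block.
T_BUF, T_DCACHE, T_DISK = 0.0002, 0.5, 10.0   # ms: buffer cache, disk cache, platter

def avg_access_ms(h_buf, h_dcache):
    """h_buf: buffer-cache hit ratio; h_dcache: disk-cache hit ratio on the misses."""
    miss = 1.0 - h_buf
    return h_buf * T_BUF + miss * (h_dcache * T_DCACHE + (1.0 - h_dcache) * T_DISK)

print(avg_access_ms(0.80, 0.50))   # baseline policy:   ~1.05 ms per block
print(avg_access_ms(0.85, 0.60))   # improved multi-level policy: ~0.65 ms per block
```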

Return address stack for protecting from buffer overflow attack (버퍼오버플로우 공격 방지를 위한 리턴주소 스택)

  • Cho, Byungtae;Kim, Hyungshin
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.10 / pp.4794-4800 / 2012
  • Much research has been carried out to resist buffer overflow attacks. However, the attack still poses one of the most important issues in the field of system security, because programmers keep using library functions that contain security holes, and once a buffer overflow vulnerability is found, security patches are distributed only after the attack has already spread widely. In this paper, we propose a new cache-level return address stack architecture for resisting buffer overflow attacks. We implemented our hardware on the SimpleScalar simulator and verified its functionality. Our circuit overcomes various disadvantages of previous works with small overhead.
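
The protection idea, keeping a separate copy of each return address and comparing it on every return, can be illustrated in software as a shadow stack. The sketch below is only an analogy for the paper's cache-level hardware return address stack; the `ShadowStack` class and the addresses are made up for the example.

```python
# Software illustration only (assumption): the paper adds a cache-level hardware
# return-address stack, but the same check can be phrased as a shadow stack that
# is pushed on call and compared on return to detect an overwritten address.
class ShadowStack:
    def __init__(self):
        self._stack = []

    def on_call(self, return_addr):
        self._stack.append(return_addr)            # protected copy of the address

    def on_return(self, return_addr_from_memory):
        expected = self._stack.pop()
        if return_addr_from_memory != expected:     # program stack was overwritten
            raise RuntimeError("return address corrupted: possible buffer overflow")
        return expected

ss = ShadowStack()
ss.on_call(0x4005D0)
ss.on_return(0x4005D0)          # normal return: addresses match
ss.on_call(0x4006A0)
try:
    ss.on_return(0xDEADBEEF)    # smashed stack frame is detected
except RuntimeError as e:
    print(e)
```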

Performance Evaluation of Catalog Management Schemes for Distributed Main Memory Database : Compilation Cases (분산 주기억장치 데이타베이스에서 컴파일 시 카탈로그 관리 기법의 성능평가)

  • 정한라;홍의경
    • Proceedings of the Korean Information Science Society Conference / 2001.10a / pp.118-120 / 2001
  • Research on distributed DBMSs has mainly been conducted for relational DBMSs that assume data resides on disk. Because disk-resident systems differ greatly from main-memory DBMSs in aspects such as query optimization, buffer management, and index management, those results are difficult to apply directly to a distributed system built on a main-memory DBMS. Among the many issues that must be considered when extending a centralized main-memory DBMS to a distributed system, this paper examines catalog structures with and without a cache and evaluates the performance of catalog management schemes through simulation. For the evaluation we adopt a partitioned catalog scheme that takes site autonomy into account. The experimental results show that a catalog using a cache performs better than one without a cache.

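The effect measured in this study, fewer remote lookups when the partitioned catalog is cached locally, can be shown with a toy cache-aside lookup. The `CatalogCache` class, table names, and site names below are hypothetical and do not reproduce the paper's simulation model.

```python
# Toy sketch (assumed model, not the paper's simulation): a compile-time catalog
# lookup that consults a local catalog cache before asking the remote site that
# owns the partitioned catalog entry.
REMOTE_CATALOG = {"orders": "site_2", "customers": "site_3"}   # hypothetical data

class CatalogCache:
    def __init__(self):
        self.cache = {}
        self.remote_lookups = 0

    def locate(self, table):
        if table in self.cache:               # cached: no network round trip
            return self.cache[table]
        self.remote_lookups += 1              # miss: contact the owning site
        site = REMOTE_CATALOG[table]
        self.cache[table] = site
        return site

c = CatalogCache()
for t in ["orders", "orders", "customers", "orders"]:
    c.locate(t)
print("remote catalog lookups:", c.remote_lookups)   # 2 instead of 4
```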

A Low-Power Texture Mapping Technique for Mobile 3D Graphics (모바일 3D 그래픽스를 위한 저전력 텍스쳐 맵핑 기법)

  • Kim, Hyun-Hee;Kim, Ji-Hong
    • Journal of the Korea Society of Computer and Information / v.14 no.2 / pp.45-57 / 2009
  • Texture mapping is a technique used for adding realism to an image in 3D graphics. However, this technique becomes a bottleneck of the 3D graphics pipeline because it requires large processing power and high memory bandwidth. A texture cache is used to reduce memory latency in texture mapping. As portable devices become smaller and more power-constrained, it is important to reduce the area and power consumption of the texture cache. In this paper, we propose using a small texture cache to reduce its area and power consumption, and we propose prefetching techniques and a victim cache to keep performance comparable to that of large texture caches. Simulation results show that the proposed small texture cache of 1-2 KB can reduce area and power consumption by up to 70% and 60%, respectively, compared to a conventional 16 KB cache, while maintaining performance.
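
One of the two techniques the paper uses to keep a small texture cache fast, the victim cache, can be sketched as a tiny fully associative buffer that catches lines evicted from a small direct-mapped cache. The sizes and the `SmallCacheWithVictim` class below are assumptions for illustration; the paper additionally uses prefetching and evaluates real texture-access traces.

```python
# Minimal victim-cache sketch (assumed sizes and organization). Blocks evicted
# from a small direct-mapped texture cache drop into a tiny fully associative
# victim cache, so conflict misses can be served without going to texture memory.
from collections import deque

class SmallCacheWithVictim:
    def __init__(self, n_sets=4, victim_entries=2):
        self.n_sets = n_sets
        self.lines = [None] * n_sets                  # direct-mapped: one tag per set
        self.victim = deque(maxlen=victim_entries)    # FIFO victim cache

    def access(self, block):
        idx = block % self.n_sets
        if self.lines[idx] == block:
            return "hit"
        if block in self.victim:                      # saved by the victim cache
            self.victim.remove(block)
            if self.lines[idx] is not None:
                self.victim.append(self.lines[idx])
            self.lines[idx] = block
            return "victim-hit"
        if self.lines[idx] is not None:               # conflict miss: keep the evicted line
            self.victim.append(self.lines[idx])
        self.lines[idx] = block
        return "miss"

c = SmallCacheWithVictim()
print([c.access(b) for b in [0, 4, 0, 4]])   # the ping-pong pattern becomes victim hits
```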

Design of Global Buffer Manager in SAN-based Cluster File Systems (SAN 환경의 대용량 클러스터 파일 시스템을 위한 광역 버퍼 관리기의 설계)

  • Lee, Kyu-Woong
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.11 / pp.2404-2410 / 2011
  • This paper describes a design overview of the cluster file system SANique™, which is based on a SAN (Storage Area Network) environment. The design issues and problems of a conventional global buffer manager under a large set of clustered computing hosts are also illustrated. We propose an efficient global buffer management method that provides greater scalability and availability. In the proposed method, we reuse the list of lock information maintained by our cluster lock manager: the global buffer manager can easily determine the location of a requested data block's cached copy from that lock information. We present the pseudo code of the global buffer manager and illustrate global cache operation in a cluster environment.
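
The key idea stated here, reusing the cluster lock manager's lock list to find which host already caches a requested block, can be outlined as follows. The data structures and function below are a hypothetical sketch in the spirit of the paper's pseudo code, not SANique's actual interfaces.

```python
# Pseudo-code style sketch (assumed interfaces; SANique's lock and messaging
# layers are not shown). The lock manager already records which host holds each
# block's lock, so the global buffer manager reuses that list to decide which
# peer cache to ask before falling back to a SAN disk read.
lock_table = {"blk_17": "hostB", "blk_42": "hostC"}        # lock holder per block
peer_caches = {"hostB": {"blk_17": b"...data..."}, "hostC": {}}

def read_block(block_id, local_cache):
    if block_id in local_cache:                      # 1. local buffer cache
        return local_cache[block_id], "local"
    holder = lock_table.get(block_id)                # 2. who last locked the block?
    if holder is not None and block_id in peer_caches.get(holder, {}):
        data = peer_caches[holder][block_id]         #    fetch from that peer's cache
        local_cache[block_id] = data
        return data, f"peer:{holder}"
    data = b"<read from SAN disk>"                   # 3. last resort: disk I/O
    local_cache[block_id] = data
    return data, "disk"

print(read_block("blk_17", {})[1])   # peer:hostB
print(read_block("blk_42", {})[1])   # disk (holder known, but the page is not cached)
```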

Energy and Performance-Efficient Dynamic Load Distribution for Mobile Heterogeneous Storage Devices (에너지 및 성능 효율적인 이종 모바일 저장 장치용 동적 부하 분산)

  • Kim, Young-Jin;Kim, Ji-Hong
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.9-17 / 2009
  • In this paper, we propose a dynamic load distribution technique at the operating system level for mobile storage systems with a heterogeneous storage pair of a small form-factor hard disk and a flash memory, aiming to save energy as well as to enhance I/O performance. The proposed technique combines file placement and buffer cache management to decide how the load can be distributed across the hard disk and the flash memory in an energy- and performance-aware way. Through extensive simulations we demonstrate that the proposed technique yields better results on heterogeneous mobile storage devices than existing techniques.
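
A very rough sketch of the file-placement half of such a load-distribution policy is given below: route each file to the flash memory or the hard disk according to its access pattern. The thresholds and features are illustrative assumptions only and do not reproduce the paper's energy/performance model.

```python
# Hedged sketch (thresholds and features are illustrative assumptions, not the
# paper's model): route each file to the flash memory or the hard disk depending
# on how its access pattern trades off energy and I/O performance.
def place_file(size_kb, read_ratio, random_ratio):
    """Return 'flash' or 'disk' for a file with the given (assumed) statistics."""
    # small, read-mostly, random workloads waste disk spin-ups and seeks
    if size_kb <= 256 and read_ratio >= 0.7 and random_ratio >= 0.5:
        return "flash"
    # large or write-heavy sequential streams are cheap for the disk and wear flash
    return "disk"

print(place_file(size_kb=64,   read_ratio=0.9, random_ratio=0.8))   # flash
print(place_file(size_kb=4096, read_ratio=0.3, random_ratio=0.1))   # disk
```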

A Global Buffer Manager for a Shared Disk File System in SAN Clusters (SAN 환경에서 공유 디스크 파일 시스템을 위한 전역 버퍼 관리자)

  • 박선영;손덕주;신범주;김학영;김명준
    • Journal of KIISE: Computing Practices and Letters / v.10 no.2 / pp.134-145 / 2004
  • With the rapid growth in the amount of data transferred on the Internet, traditional storage systems have reached the limits of their capacity and performance. A SAN (Storage Area Network), which connects hosts to disks with Fibre Channel switches, provides a powerful solution for scaling data storage and servers. In this environment, maintaining data consistency among hosts is an important issue because multiple hosts share the files on disks attached to the SAN. To preserve data consistency, each host can execute disk I/O whenever disk read and write operations are requested. However, frequent disk I/O requests degrade the overall performance of a SAN cluster. In this paper, we introduce the SANtopia global buffer manager, which improves the performance of a SAN cluster by reducing the number of disk I/Os. We describe the design and algorithms of the SANtopia global buffer manager, which provides a buffer-cache sharing mechanism among the hosts in the SAN cluster. Micro-benchmark results measuring the performance of block I/O operations show that the global buffer manager achieves a speed-up by a factor of 1.8-12.8 compared with the existing method using disk I/O operations. File system micro-benchmark results also show that the SANtopia file system with the global buffer manager improves performance by a factor of 1.06 for directories and 1.14 for files compared with the file system without a global buffer manager.
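
The consistency problem motivating this work, stale copies in other hosts' buffer caches once a shared block is written, can be illustrated with a toy write-invalidate step. The dictionaries and behaviour below are assumptions for exposition; SANtopia's actual protocol is not shown here.

```python
# Toy sketch of the consistency concern (assumed write-invalidate behaviour):
# when one host writes a shared block, the copies cached by the other hosts must
# be dropped so that later reads do not return stale data.
host_caches = {"host1": {"blk_9": "v1"}, "host2": {"blk_9": "v1"}, "host3": {}}

def write_block(writer, block_id, new_value):
    host_caches[writer][block_id] = new_value          # update the writer's cache
    for host, cache in host_caches.items():            # invalidate every other copy
        if host != writer:
            cache.pop(block_id, None)

write_block("host1", "blk_9", "v2")
print(host_caches)   # only host1 still caches blk_9, now as v2
```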

Designing Hybrid HDD using SLC/MLC combined Flash Memory (SLC/MLC 혼합 플래시 메모리를 이용한 하이브리드 하드디스크 설계)

  • Hong, Seong-Cheol;Shin, Dong-Kun
    • Journal of KIISE: Computing Practices and Letters / v.16 no.7 / pp.789-793 / 2010
  • Recently, flash memory-based non-volatile cache (NVC) has emerged as an effective solution for improving both the I/O performance and the energy consumption of storage systems. To obtain significant performance and energy gains from NVC, it is preferable to use multi-level cell (MLC) flash memory, since it provides a large NVC capacity at low cost. However, the number of available program/erase cycles of MLC flash memory is smaller than that of single-level cell (SLC) flash memory, which limits the lifespan of the NVC. To overcome this limitation, SLC/MLC combined flash memory is a promising medium for NVC. In this paper, we propose an effective management scheme for the heterogeneous SLC and MLC regions of the combined flash memory.
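
A simple way to picture managing the two regions is to steer write-hot blocks into the small, high-endurance SLC region and everything else into the dense MLC region. The capacity constant and ranking rule below are illustrative assumptions, not the management scheme proposed in the paper.

```python
# Illustrative sketch only (the paper's actual placement/migration policy is not
# reproduced): in an SLC/MLC combined NVC, write-hot blocks go to the SLC region,
# which endures far more program/erase cycles, while colder blocks use the dense
# MLC region.
SLC_CAPACITY = 2           # blocks that fit in the SLC region (illustrative)

def assign_region(blocks_by_write_count):
    """blocks_by_write_count: {block_id: write_count}. Returns {block_id: region}."""
    ranked = sorted(blocks_by_write_count, key=blocks_by_write_count.get, reverse=True)
    return {b: ("SLC" if i < SLC_CAPACITY else "MLC") for i, b in enumerate(ranked)}

print(assign_region({"a": 120, "b": 3, "c": 45, "d": 1}))
# {'a': 'SLC', 'c': 'SLC', 'b': 'MLC', 'd': 'MLC'}
```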