• Title/Summary/Keyword: Prefetching System

Flash memory system with spatial smart buffer for the substitution of a hard-disk (하드디스크 대용을 위한 공간적 스마트 버퍼 플래시 메모리 시스템)

  • Jung, Bo-Sung;Jung, Jung-Hoon
    • Journal of the Korea Society of Computer and Information / v.14 no.3 / pp.41-49 / 2009
  • Flash memory has become increasingly important, and demand for it as a storage medium continues to grow, owing to its low power consumption, low price, and large capacity. This research designs a high-performance flash memory structure that can substitute for a hard disk by dynamically prefetching data with aggressive spatial locality through a spatial smart buffer system. The proposed buffer system for NAND flash memory consists of three parts: a fully associative victim buffer for temporal locality, a fully associative spatial buffer for spatial locality, and a dynamic fetching unit. We propose a new dynamic prefetching algorithm that exploits aggressive spatial locality, so that when flash memory is used in place of a hard disk, the proposed system overcomes many of flash memory's drawbacks through the new structure and algorithm. According to the simulation results, compared with the smart buffer system, the average miss ratio is reduced by about 26% for MediaBench applications, and the average memory access time is improved by about 35% for MediaBench applications and by over 30% for SPEC2000 applications.
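
A minimal sketch of the kind of split buffer the abstract describes: one part kept for temporal locality, one for spatially prefetched neighbours, and a fetch unit whose prefetch depth adapts to how useful previous prefetches were. The class, buffer sizes, and the doubling/halving heuristic are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch (assumed structure and heuristic, not the paper's algorithm).
from collections import OrderedDict

class SmartBuffer:
    def __init__(self, flash_read, victim_slots=8, spatial_slots=32):
        self.flash_read = flash_read   # callable: block number -> data (assumed interface)
        self.victim = OrderedDict()    # demand-fetched blocks, kept for temporal locality
        self.spatial = OrderedDict()   # prefetched neighbours, kept for spatial locality
        self.victim_slots = victim_slots
        self.spatial_slots = spatial_slots
        self.run_length = 1            # dynamically adjusted prefetch depth

    def read(self, block):
        if block in self.victim:
            self.victim.move_to_end(block)            # temporal hit
            return self.victim[block]
        if block in self.spatial:
            # A prefetched block was actually used: prefetch more aggressively next time.
            self.run_length = min(self.run_length * 2, 8)
            return self.spatial[block]
        # Miss: back off the prefetch depth, then fetch the block plus a run
        # of spatially adjacent blocks from flash.
        self.run_length = max(self.run_length // 2, 1)
        data = None
        for b in range(block, block + self.run_length + 1):
            d = self.flash_read(b)
            if b == block:
                data = d
                self._insert(self.victim, self.victim_slots, b, d)
            else:
                self._insert(self.spatial, self.spatial_slots, b, d)
        return data

    @staticmethod
    def _insert(buf, limit, key, value):
        buf[key] = value
        buf.move_to_end(key)
        if len(buf) > limit:
            buf.popitem(last=False)                   # evict the least recently used entry
```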

Design of Caching Scheme for Mobile Underground Geospatial Information Map System (모바일용 지하공간정보지도 관리 시스템에서 응답속도 향상을 위한 캐싱 기법)

  • Kim, Yong-Tae;Kouh, Hoon-Joon
    • Journal of Convergence for Information Technology / v.12 no.1 / pp.7-14 / 2022
  • Unlike general maps, an underground geospatial information map is a system for viewing underground information in 3D. The system is managed as tile maps to keep the data lightweight, but the various underground structures are represented as 3D data, so the data volume is large. When a client mobile program requests a tile map, the service server fetches the requested tile from the DB server and transmits it to the client, which causes a transmission delay problem. In this paper, we design a tile caching method that improves the response time for the tile map data provided to clients in the mobile underground geospatial information system. We propose a method in which the service server predicts and prefetches the next tile map while the client is viewing the current one, and the prefetched data is stored in the memory of the client's mobile terminal, thereby alleviating the transmission delay problem.
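
As a rough illustration of the prefetch-into-client-memory idea, the sketch below keeps a small tile cache on the client and prefetches the eight neighbouring tiles in the background while the current tile is being viewed. The interface (fetch_tile, tile keys, cache size) is assumed, not taken from the paper.

```python
# Illustrative sketch (assumed interface, not the paper's implementation);
# thread-safety is kept minimal for brevity.
import concurrent.futures

class TileCache:
    def __init__(self, fetch_tile, max_tiles=64):
        self.fetch_tile = fetch_tile          # callable: (x, y, zoom) -> tile bytes
        self.cache = {}                       # (x, y, zoom) -> tile bytes
        self.max_tiles = max_tiles
        self.pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

    def view(self, x, y, zoom):
        tile = self.cache.get((x, y, zoom))
        if tile is None:
            tile = self.fetch_tile(x, y, zoom)        # synchronous fetch on a miss
            self._store((x, y, zoom), tile)
        # Prefetch the 8 neighbouring tiles in the background while this one is viewed.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    self.pool.submit(self._prefetch, x + dx, y + dy, zoom)
        return tile

    def _prefetch(self, x, y, zoom):
        if (x, y, zoom) not in self.cache:
            self._store((x, y, zoom), self.fetch_tile(x, y, zoom))

    def _store(self, key, tile):
        if len(self.cache) >= self.max_tiles:
            self.cache.pop(next(iter(self.cache)))    # evict the oldest entry
        self.cache[key] = tile
```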

Table Comparison Prefetching using Available I/O Bandwidth in Parallel File System (병렬 파일 시스템에서의 가용 입출력 대역폭을 고려한 테이블 비교 선반입 정책)

  • 김재열;석성우;조종현;서대화
    • Proceedings of the Korean Information Science Society Conference / 2000.10c / pp.630-632 / 2000
  • Caching and prefetching are key factors that determine the performance of parallel file systems, which must handle heavy file I/O. For prefetching, which strongly affects system performance when the requested files are large relative to the cache, this paper proposes a table comparison method as the algorithm for deciding which data to prefetch, together with a technique that takes the currently available I/O bandwidth into account when deciding whether and when to prefetch the predicted data. Simulation comparisons of the proposed prefetching algorithm against other prefetching algorithms show improved file system performance.

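A hedged sketch of the two ideas in the abstract above, table-based prediction plus a bandwidth check before issuing the prefetch. The table layout, history length, and utilisation threshold are assumptions for illustration only.

```python
# Illustrative sketch (assumed data structures, not the paper's method): keep a
# table of previously observed block-access sequences, compare the current
# sequence against it to predict the next block, and issue the prefetch only if
# enough I/O bandwidth is currently unused.
class TablePrefetcher:
    def __init__(self, io_monitor, bandwidth_threshold=0.7, history=3):
        self.table = {}                   # tuple of recent blocks -> block that followed last time
        self.recent = []
        self.history = history
        self.io_monitor = io_monitor      # assumed: callable returning current I/O utilisation in [0, 1]
        self.threshold = bandwidth_threshold

    def on_access(self, block, prefetch):
        key = tuple(self.recent)
        if len(key) == self.history:
            self.table[key] = block       # remember what followed this sequence
        self.recent = (self.recent + [block])[-self.history:]
        predicted = self.table.get(tuple(self.recent))
        # Prefetch only when a prediction exists and I/O bandwidth is available.
        if predicted is not None and self.io_monitor() < self.threshold:
            prefetch(predicted)
```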

Efficient Prefetching and Asynchronous Writing for Flash Memory (플래시 메모리를 위한 효율적인 선반입과 비동기 쓰기 기법)

  • Park, Kwang-Hee;Kim, Deok-Hwan
    • Journal of KIISE: Computing Practices and Letters / v.15 no.2 / pp.77-88 / 2009
  • As the NAND flash memory used as the storage system of mobile devices grows in size, the performance of address translation and life-cycle management in the FTL (Flash Translation Layer) that interacts with the file system becomes very important. In this paper, we propose continuity counters, which record the number of contiguous physical blocks whose logical addresses are consecutive, to reduce the number of address translations. We also propose a prefetching method that preloads frequently accessed pages into main memory to enhance the I/O performance of flash memory. In addition, we use a 2-bit write prediction and an asynchronous writing method to predict addresses repeatedly referenced by the host and to avoid write overhead. The experiments show that the proposed method improves I/O performance and extends the life cycle of flash memory. Address translation in the proposed CFTL (Clustered Flash Translation Layer) is about 20% faster than in conventional FTLs, and CFTL reduces write time by about 50% compared with conventional FTLs.
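
The continuity-counter idea can be pictured as a logical-to-physical map in which one entry covers a whole run of consecutively mapped blocks. The sketch below is an assumed structure, not the paper's CFTL.

```python
# Illustrative sketch (assumed structure, not the paper's CFTL): each map entry
# carries a continuity count, i.e. how many consecutive logical blocks map to
# consecutive physical blocks, so one entry answers translations for a whole run.
import bisect

class ContinuityMap:
    def __init__(self):
        self.starts = []     # sorted logical start addresses of runs
        self.entries = {}    # logical start -> (physical start, run length)

    def add_run(self, logical_start, physical_start, length):
        bisect.insort(self.starts, logical_start)
        self.entries[logical_start] = (physical_start, length)

    def translate(self, logical):
        i = bisect.bisect_right(self.starts, logical) - 1
        if i < 0:
            return None                               # no run covers this address
        start = self.starts[i]
        physical_start, length = self.entries[start]
        offset = logical - start
        return physical_start + offset if offset < length else None
```

For example, add_run(100, 540, 4) lets logical blocks 100 through 103 all translate through the single entry stored for block 100, which is the saving a continuity counter provides.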

Prefetching based on the Type-Level Access Pattern in Object-Relational DBMSs (객체관계형 DBMS에서 타입수준 액세스 패턴을 이용한 선인출 전략)

  • Han, Wook-Shin;Moon, Yang-Sae;Whang, Kyu-Young
    • Journal of KIISE: Databases / v.28 no.4 / pp.529-544 / 2001
  • Prefetching is an effective method for minimizing the number of round trips between the client and the server in database management systems. In this paper we propose the new notions of the type-level access pattern and type-level access locality, and develop an efficient prefetching policy based on them. A type-level access pattern is a sequence of attributes that are referenced in accessing the objects; type-level access locality is the phenomenon that regular and repetitive type-level access patterns exist. Existing prefetching methods are based on object-level or page-level access patterns, which consist of the object-ids or page-ids of the objects accessed. The drawback of these methods is that they work only when exactly the same objects or pages are accessed repeatedly. In contrast, even when the same objects are not accessed repeatedly, our technique effectively prefetches objects if the same attributes are referenced repeatedly, i.e., if there is type-level access locality. Many navigational applications in Object-Relational Database Management Systems (ORDBMSs) exhibit type-level access locality, so our technique can be employed in ORDBMSs to reduce the number of round trips and thereby significantly enhance performance. We have conducted extensive experiments in a prototype ORDBMS to show the effectiveness of our algorithm. Experimental results using the OO7 benchmark and a real GIS application show that our technique provides orders-of-magnitude improvements in round trips and several-fold improvements in overall performance over on-demand fetching and over context-based prefetching, a state-of-the-art prefetching method. These results indicate that our approach is a practical method that can be implemented in commercial ORDBMSs.

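A rough sketch of type-level prefetching as the abstract characterizes it: learn which attributes are referenced for objects of a given type, then fetch that whole attribute set in one round trip for the next object of the same type. The client/server interface and caching details are assumptions, not the paper's algorithm.

```python
# Illustrative sketch (assumed interfaces, not the paper's algorithm).
from collections import defaultdict

class TypeLevelPrefetcher:
    def __init__(self, fetch_attributes):
        self.fetch_attributes = fetch_attributes   # assumed: (oid, [attr]) -> {attr: value}, one round trip
        self.patterns = defaultdict(list)          # type name -> learned attribute sequence
        self.cache = defaultdict(dict)             # oid -> {attr: value}

    def access(self, oid, type_name, attr):
        # Learn the type-level access pattern for this type.
        pattern = self.patterns[type_name]
        if attr not in pattern:
            pattern.append(attr)
        # Serve from cache if a previous prefetch already brought the value over.
        if attr in self.cache[oid]:
            return self.cache[oid][attr]
        # Otherwise fetch the whole learned attribute set for this object at once.
        values = self.fetch_attributes(oid, pattern)
        self.cache[oid].update(values)
        return self.cache[oid][attr]
```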

Memory Latency Hiding Techniques (메모리 지연을 감추는 기법들)

  • Ki, An-Do
    • Electronics and Telecommunications Trends / v.13 no.3 s.51 / pp.61-70 / 1998
  • The obvious way to make a computer system more powerful is to make the processor as fast as possible, and the next step is to adopt a large number of such fast processors. Such a multiprocessor system is useful only if it distributes the workload uniformly and its processors are fully utilized. To achieve higher processor utilization, memory access latency must be reduced as much as possible, and the remaining latency must be hidden. The actual latency can be reduced by using fast logic, and the effective latency can be reduced by using caches. This article discusses what the memory latency problem is, shows how serious it is through analytical and simulation results, and surveys existing techniques for coping with it, such as write buffers, relaxed consistency models, multithreading, data locality optimization, data forwarding, and data prefetching.
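
As a simple software analogue of the latency-hiding idea (not an example from the article), the sketch below overlaps the fetch of the next data item with the processing of the current one, so most of the fetch latency is hidden behind useful work.

```python
# An assumed software analogue of latency hiding via prefetching (not from the article).
import concurrent.futures

def process_stream(keys, fetch, process):
    """fetch(key) has high latency; process(data) is the useful work."""
    if not keys:
        return
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch, keys[0])       # start the first fetch
        for next_key in keys[1:]:
            data = future.result()                 # wait for the in-flight fetch
            future = pool.submit(fetch, next_key)  # prefetch the next item...
            process(data)                          # ...while this one is processed
        process(future.result())                   # handle the final item
```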

An Area Efficient Low Power Data Cache for Multimedia Embedded Systems (멀티미디어 내장형 시스템을 위한 저전력 데이터 캐쉬 설계)

  • Kim Cheong-Ghil;Kim Shin-Dug
    • The KIPS Transactions: Part A / v.13A no.2 s.99 / pp.101-110 / 2006
  • One of the most effective ways to improve cache performance is to exploit both the temporal and spatial locality exhibited by a program's execution characteristics. This paper proposes a data cache that occupies little space yet delivers low power and high performance for multimedia applications. The basic architecture is a split cache consisting of a direct-mapped cache with a small block size and a fully associative buffer with a large block size. To overcome the disadvantage of the small cache space, two mechanisms are added based on the operational behavior of multimedia applications: adaptive multi-block prefetching, which initiates fetches of various sizes, and efficient block filtering, which removes rarely reused data. Simulations on MediaBench show that the proposed 5KB cache provides equivalent performance and reduces energy consumption by up to 40% compared with a 16KB 4-way set-associative cache.
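
A hedged sketch of the two added mechanisms: an adaptive fetch-size decision driven by how sequential recent misses are, and a filter that keeps blocks out of the cache until they show reuse. The heuristics and thresholds are assumptions, not the paper's design.

```python
# Illustrative sketch (assumed heuristics, not the paper's design).

class AdaptiveFetcher:
    """Decide how many blocks to fetch on a miss from recent miss behaviour."""
    def __init__(self, max_blocks=4):
        self.last_miss = None
        self.fetch_size = 1
        self.max_blocks = max_blocks

    def on_miss(self, block):
        # Sequential misses suggest streaming multimedia data: widen the fetch.
        if self.last_miss is not None and block == self.last_miss + 1:
            self.fetch_size = min(self.fetch_size * 2, self.max_blocks)
        else:
            self.fetch_size = 1
        self.last_miss = block
        return list(range(block, block + self.fetch_size))

class BlockFilter:
    """Only blocks referenced at least twice are kept; rarely reused data is filtered out."""
    def __init__(self):
        self.refs = {}

    def touch(self, block):
        self.refs[block] = self.refs.get(block, 0) + 1

    def keep(self, block):
        return self.refs.get(block, 0) >= 2
```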

Modeling of a storage subsystem in multimedia information system

  • Lim, Cheol-Su
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.11 / pp.2521-2530 / 1997
  • In this paper, we present a video-on-demand (VOD) system design model that addresses and integrates a number of inter-related issues. Through analysis and performance evaluation, we investigate various aspects of disk and buffer management in the given model. Based on the analysis results, we suggest that a distributed buffering scheme with intermediate buffers may be useful for transforming bursty disk accesses into a continuous stream for glitch-free VOD playback. Through simulation, we also illustrate that large-scale multimedia storage design techniques such as prefetching, clustered striping, and real-time disk scheduling, integrated with the distributed buffering mechanism, may enhance the end-to-end real-time performance of VOD systems over wide-area networks.

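One way to picture the distributed buffering idea above is an intermediate buffer that disk prefetching refills in bursts while the player drains it at a steady rate. The sketch below uses assumed block counts and a thread-based producer, not the paper's model.

```python
# Illustrative sketch (assumed parameters, not the paper's model): smooth bursty
# disk reads into a continuous stream through an intermediate buffer.
import queue
import threading
import time

def stream_video(read_block, play_block, n_blocks, burst=8, buffer_blocks=32):
    buf = queue.Queue(maxsize=buffer_blocks)

    def prefetcher():
        i = 0
        while i < n_blocks:
            for _ in range(min(burst, n_blocks - i)):   # fetch one burst from disk
                buf.put(read_block(i))                  # blocks when the buffer is full
                i += 1
            time.sleep(0)                               # yield between bursts

    threading.Thread(target=prefetcher, daemon=True).start()
    for _ in range(n_blocks):
        play_block(buf.get())                           # steady consumption at playback rate
```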

A Dynamic Prefetching Scheme for Handling Small Files based on Hadoop Distributed File System (하둡 분산 파일 시스템 기반 소용량 파일 처리를 위한 동적 프리페칭 기법)

  • Yoo, Sang-Hyun;Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2014.07a / pp.329-332 / 2014
  • With the spread of cloud computing, the need has emerged for distributed file systems that, unlike conventional file systems, handle large files efficiently. Among them, the Hadoop Distributed File System (HDFS) guarantees availability and fault tolerance, unlike earlier distributed file systems, and supports streaming data access patterns, so it can store large files efficiently. Because of these advantages it has been widely adopted as the file system for cloud computing. In practice, however, small files make up a larger share of HDFS data sets than large files, and these many small files not only incur high processing costs but also degrade memory performance. Prefetching the small files can mitigate this problem, but existing data prefetching techniques are difficult to apply to HDFS, so we propose a data prefetching technique for HDFS.

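The abstract above does not describe the proposed mechanism, so the following is only a generic small-file prefetching sketch over an HDFS-like client interface: when one small file in a directory is read, its small siblings are prefetched on the assumption they will be read together. All names, the size limit, and the directory heuristic are assumptions.

```python
# Generic sketch only, not the paper's technique; fs.list and fs.read are an
# assumed HDFS-like client interface.
SMALL_FILE_LIMIT = 1 * 1024 * 1024      # treat files under 1 MB as "small" (assumed cutoff)

class SmallFilePrefetcher:
    def __init__(self, fs):
        self.fs = fs                    # assumed: fs.list(dir) -> [(path, size)], fs.read(path) -> bytes
        self.cache = {}

    def read(self, path):
        if path in self.cache:
            return self.cache.pop(path)
        data = self.fs.read(path)
        directory = path.rsplit("/", 1)[0]
        for sibling, size in self.fs.list(directory):
            if sibling != path and size <= SMALL_FILE_LIMIT and sibling not in self.cache:
                self.cache[sibling] = self.fs.read(sibling)   # prefetch sibling small files
        return data
```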

Differentiated Service for Hypermedia data on the Web (하이퍼미디어 데이터를 위한 차별화된 서비스 연구)

  • Rhee, Yoon-Jung;Kim, Tai-Yun
    • Proceedings of the Korea Information Processing Society Conference / 2001.10b / pp.1481-1484 / 2001
  • Most HTTP server implementations do not distinguish among requests for hypermedia data from different clients. As commercialization of Web sites becomes increasingly common, providing quality of service to paying members is often an important issue for hosts. For some uses, such as Web prefetching or multiple-priority schemes, different levels of service are desirable. We propose server-side TCP connection management mechanisms that provide two levels of Web service, high and regular, by setting different timeouts for inactive connections. This mechanism can effectively provide different service classes even without operating system or network support.

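A minimal sketch of the server-side policy the abstract describes: idle persistent connections are closed after a timeout that depends on the client's service level. The timeout values and the manager interface are assumptions for illustration.

```python
# Illustrative sketch (assumed policy values, not the paper's implementation).
import time

IDLE_TIMEOUT = {"high": 60.0, "regular": 10.0}     # seconds before an idle connection is closed

class ConnectionManager:
    def __init__(self):
        self.connections = {}       # conn_id -> (service_level, last_activity)

    def register(self, conn_id, service_level):
        self.connections[conn_id] = (service_level, time.monotonic())

    def touch(self, conn_id):
        # Call on every request received over the connection.
        level, _ = self.connections[conn_id]
        self.connections[conn_id] = (level, time.monotonic())

    def reap_idle(self, close):
        """Call periodically; close(conn_id) is invoked for connections idle past their class timeout."""
        now = time.monotonic()
        for conn_id, (level, last) in list(self.connections.items()):
            if now - last > IDLE_TIMEOUT[level]:
                close(conn_id)
                del self.connections[conn_id]
```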