• Title/Summary/Keyword: Pre-fetch

Design and Implementation of the Multi-level Pre-fetch and Deferred-flush in BADA-III for GIS Applications (GIS 응용을 위한 바다-III의 다단계 사전인출과 지연쓰기의 설계 및 구현)

  • Park, Jun-Ho;Park, Sung-Chul;Shim, Kwang-Hoon;Seong, Jun-Hwa;Park, Young-Chul
    • Journal of the Korean Association of Geographic Information Studies / v.1 no.2 / pp.67-79 / 1998
  • Most GIS applications are read-intensive on a large number of spatial objects, and when the spatial objects are composite objects, the objects contained within them are accessed as well. In GIS applications, creation, deletion, and update operations on spatial objects occur rarely, but when they do occur they involve a large number of spatial objects. Considering these characteristics of GIS applications, this paper proposes the concept of the multi-level pre-fetch query for retrieving a large number of spatial objects efficiently, along with a deferred-flush facility for writing newly created persistent objects into the database with optimal performance, and presents the design and implementation of both ideas in the object-oriented DBMS BADA-III. The multi-level pre-fetch query retrieves the objects that satisfy the query, together with the objects contained within them up to the level specified by the user, and registers the retrieved objects in the client cache. The deferred-flush writes a large number of application-created composite objects to the database with minimal server overhead and a minimal number of client-server communications. These two facilities are well suited to applications that search or create a large number of composite objects, such as GIS applications.
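
A minimal sketch of the multi-level pre-fetch idea described in this abstract, assuming a hypothetical client cache keyed by object ID and objects that expose their contained objects; the BADA-III client/server protocol itself is not shown here.

```python
# Hypothetical sketch: retrieve the objects that satisfy a query and, up to a
# user-specified level, the objects contained within them, registering
# everything in a client-side cache in one pass.

class SpatialObject:
    def __init__(self, oid, contained=None):
        self.oid = oid
        self.contained = contained or []   # references to contained objects

def multi_level_prefetch(query_result, level, client_cache):
    """Register the query result and its contained objects, up to `level` levels deep."""
    frontier = list(query_result)
    for _ in range(level + 1):             # level 0 registers only the query result itself
        next_frontier = []
        for obj in frontier:
            if obj.oid not in client_cache:
                client_cache[obj.oid] = obj          # register in the client cache
                next_frontier.extend(obj.contained)
        frontier = next_frontier
    return client_cache

# Usage: pre-fetch a composite road object and its segments in one pass.
road = SpatialObject("road-1", [SpatialObject("seg-1"), SpatialObject("seg-2")])
cache = multi_level_prefetch([road], level=1, client_cache={})
print(sorted(cache))                       # ['road-1', 'seg-1', 'seg-2']
```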

Branch Prediction Latency Hiding Scheme using Branch Pre-Prediction and Modified BTB (분기 선예측과 개선된 BTB 구조를 사용한 분기 예측 지연시간 은폐 기법)

  • Kim, Ju-Hwan;Kwak, Jong-Wook;Jhon, Chu-Shik
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.1-10 / 2009
  • A precise branch predictor has a profound impact on system performance in modern processor architectures. Recent work shows that prediction latency, as well as prediction accuracy, has a critical impact on overall system performance; however, prediction latency tends to be overlooked. In this paper, we propose a branch pre-prediction policy to tolerate branch prediction latency. The proposed solution allows the branch predictor to proceed with its predictions without any information from the fetch engine, separating the prediction engine from the fetch stage. In addition, we propose a modified BTB structure to support our solution. Simulation results show that the proposed solution can hide most of the prediction latency while still providing the same level of prediction accuracy. Furthermore, the proposed solution performs even better than the ideal case, i.e., a predictor that always takes a single-cycle prediction latency. In our experiments, the IPC improvement is up to 11.92%, and 5.15% on average, compared to a conventional predictor system.
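
A rough software model of the branch pre-prediction idea, assuming a hypothetical BTB represented as a dictionary and a fixed 4-byte instruction width; the paper's hardware design and its modified BTB organization are not reproduced here.

```python
# Conceptual model: the predictor walks its own predicted instruction stream
# ahead of the fetch engine, so a prediction is already queued when fetch asks.

from collections import deque

class PrePredictor:
    def __init__(self, btb, depth=4):
        self.btb = btb                 # pc -> (predicted_taken, target); hypothetical BTB
        self.queue = deque()           # predictions produced ahead of the fetch stage
        self.depth = depth

    def run_ahead(self, pc):
        """Fill the prediction queue without waiting on the fetch engine."""
        while len(self.queue) < self.depth:
            taken, target = self.btb.get(pc, (False, pc + 4))
            self.queue.append((pc, taken, target))
            pc = target if taken else pc + 4

    def predict(self, pc):
        """Fetch-stage lookup: no added latency if pc is already at the queue head."""
        if self.queue and self.queue[0][0] == pc:
            return self.queue.popleft()[1:]
        self.queue.clear()             # mis-steer: restart run-ahead from the actual pc
        self.run_ahead(pc)
        return self.queue.popleft()[1:]

btb = {0x100: (True, 0x200)}
pred = PrePredictor(btb)
pred.run_ahead(0x100)
print(pred.predict(0x100))             # (True, 0x200), served from the run-ahead queue
```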

An efficient pipelined architecture for 3D graphics accelerator (3차원 그래픽 가속기의 효율적인 파이프라인 설계)

  • 우현재;정종철;이문기
    • Proceedings of the IEEK Conference / 2002.06b / pp.357-360 / 2002
  • This paper proposes an efficient pipelined architecture for a 3D graphics accelerator that reduces the cache miss ratio. Because a cache miss takes considerable time, about 20-30 cycles, we reduce the cache miss ratio by using pre-fetch. Simulation results show that the cache miss ratio depends on the sizes of the tile, the cache memory, and the auxiliary cache memory. We can reduce the cache miss ratio by up to 6.6%.
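
A hedged sketch of the tile-oriented pre-fetch idea, with hypothetical `fetch` and `render` callables; in hardware the pre-fetch into the auxiliary cache would overlap with rendering of the current tile rather than run sequentially as it does in this toy version.

```python
# Conceptual sketch: while the current tile is rendered, the data for the next
# tile is placed in an auxiliary cache so that fewer misses hit the main path.
# `fetch` and `render` stand in for the memory system and the rasterization
# pipeline; both are assumptions, not the paper's actual interfaces.

def render_frame(tiles, fetch, render):
    aux_cache = {}
    for i, tile in enumerate(tiles):
        data = aux_cache.pop(tile, None)
        if data is None:                     # miss: pay the full fetch latency
            data = fetch(tile)
        if i + 1 < len(tiles):               # pre-fetch the next tile's data
            aux_cache[tiles[i + 1]] = fetch(tiles[i + 1])
        render(tile, data)

# Usage with trivial stand-ins:
render_frame(["tile0", "tile1"],
             fetch=lambda t: f"data({t})",
             render=lambda t, d: print(t, d))
```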

A Mobile Streaming Agent Method for Demanding Video Services (동영상 주문 서비스를 위한 이동 에이전트 스트리밍 기법)

  • Lee, Tae-Gyu;Ko, Myung-Sook
    • Proceedings of the Korea Contents Association Conference / 2010.05a / pp.463-465 / 2010
  • This paper describes a wireless streaming platform architecture and method for supporting real-time video delivery in response to video-on-demand requests from mobile users on wireless mobile networks. Wireless channels have resource limitations such as frequent network disconnections and transmission delays. To overcome these limitations and support the mobile real-time streaming services that mobile users require, we propose a download agent system with a pre-fetch transmission method and a packet loss conservation method.
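
A minimal sketch of the download-agent idea named in the abstract, assuming a hypothetical `fetch_segment` callable that returns None when a segment is lost in transit: segments are pre-fetched into a playout buffer ahead of the playhead, and a lost segment is re-requested once before playback as a simple stand-in for the packet loss conservation method.

```python
from collections import OrderedDict

class StreamingAgent:
    def __init__(self, fetch_segment, prefetch_window=5):
        self.fetch_segment = fetch_segment   # hypothetical: index -> data, or None on loss
        self.prefetch_window = prefetch_window
        self.buffer = OrderedDict()          # playout buffer filled ahead of the playhead

    def prefetch(self, playhead):
        """Pull segments ahead of the playback position while the channel is up."""
        for i in range(playhead, playhead + self.prefetch_window):
            if i not in self.buffer:
                self.buffer[i] = self.fetch_segment(i)

    def play(self, playhead):
        """Return the segment at the playhead, re-requesting it once if it was lost."""
        data = self.buffer.pop(playhead, None)
        if data is None:                     # lost in transit: simple recovery stand-in
            data = self.fetch_segment(playhead)
        return data

# Usage with a stand-in channel that loses segment 2 on the first attempt.
attempts = {}
def channel(i):
    attempts[i] = attempts.get(i, 0) + 1
    return None if i == 2 and attempts[i] == 1 else f"seg-{i}"

agent = StreamingAgent(channel, prefetch_window=3)
agent.prefetch(0)
print([agent.play(i) for i in range(3)])     # ['seg-0', 'seg-1', 'seg-2'] after one retry
```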

Power Performance of Instruction Pre-Fetch Unit (명령어 선 인출기의 전력 성능)

  • 송영규;오형철
    • Proceedings of the IEEK Conference / 1999.06a / pp.365-368 / 1999
  • In this paper, we investigate the effect of adopting branch-penalty compensation schemes on the power performance of TLBs (Translation Look-aside Buffers) and instruction caches. We found that the double-buffer branch-penalty compensation scheme can reduce the power consumption of the TLBs and instruction caches considered by 14% to 21.3%. The power consumption is estimated through simulation at the architectural level, using the Kamble/Ghose method.

Agent-based Wireless Streaming Transfer Method (에이전트 기반 무선 스트리밍 전송 기법)

  • Lee, Tae-Gyu;Ko, Myung-Sook
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.679-682 / 2011
  • This paper proposes a wireless streaming platform architecture and method for supporting real-time video delivery in response to video requests from mobile users on wireless networks. Wireless transmission channels have resource limitations such as frequent network disconnections and transmission delays. To overcome these limitations and support the mobile real-time streaming services that mobile users require, we propose a transfer agent system with a pre-fetch transmission caching method and a packet loss conservation method.

An Index Structure for Main-memory Storage Systems using The Level Pre-fetching

  • Lee, Seok-Jae;Yoon, Jong-Hyun;Song, Seok-Il;Yoo, Jae-Soo
    • International Journal of Contents / v.3 no.1 / pp.19-23 / 2007
  • Recently, several main-memory index structures have been proposed to reduce the impact of secondary cache misses. In main-memory storage systems, secondary cache misses have a substantial effect on the performance of index structures. However, recent approaches still suffer from secondary cache misses when visiting each level of the index tree. In this paper, we propose a new index structure that minimizes the total cache miss latency. The proposed index structure prefetches the grandchildren of the current node. Its basic structure is based on that of the CSB+-Tree, which uses the concept of a node group to increase fan-out, but the insert algorithm of the proposed index structure significantly reduces the cost of a split. The superiority of our algorithm is shown through performance evaluation.
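
A conceptual sketch of the level pre-fetching access pattern during a tree search, using a hypothetical Node class; `prefetch()` here is only a stand-in for the hardware cache-line prefetch that a real CSB+-Tree-style implementation would issue for the grandchild node group, and the paper's split-reducing insert algorithm is not shown.

```python
class Node:
    def __init__(self, keys, children=None, values=None):
        self.keys = keys
        self.children = children        # node group: children stored contiguously
        self.values = values

    @property
    def is_leaf(self):
        return self.children is None

    def child_for(self, key):
        i = 0                           # first separator key greater than the search key
        while i < len(self.keys) and key >= self.keys[i]:
            i += 1
        return self.children[i]

def prefetch(node):
    """Stand-in for a hardware cache-line prefetch of the node's memory."""
    pass

def search(root, key):
    node = root
    while not node.is_leaf:
        child = node.child_for(key)
        if not child.is_leaf:           # pre-fetch the grandchild on the search path
            prefetch(child.child_for(key))
        node = child
    return node.values[node.keys.index(key)] if key in node.keys else None

# Two-level example: root -> leaves.
leaves = [Node([1, 2], values=["a", "b"]), Node([5, 7], values=["c", "d"])]
root = Node([5], children=leaves)
print(search(root, 7))                  # "d"
```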

Application-Oriented Context Pre-fetch Method for Enhancing Inference Performance in Ontology-based Context Management (온톨로지 기반의 상황정보관리에서 추론 성능 향상을 위한 어플리케이션 지향적 상황정보 선인출 기법)

  • Lee Jae-Ho;Park In-Suk;Lee Dong-Man;Hyun Soon-Joo
    • Journal of KIISE: Computing Practices and Letters / v.12 no.4 / pp.254-263 / 2006
  • Ontology-based context models are widely used in ubiquitous computing environments because they have advantages in acquiring conceptual context through inferencing, context sharing, and context reuse. Among these benefits, inferencing enables context-aware applications to use conceptual contexts that cannot be acquired by sensors. However, inferencing causes processing delay and thus becomes the major obstacle to the implementation of context-aware applications, and the delay grows as the amount of context increases. In this paper, we propose a context pre-fetch method that reduces the amount of context to be processed in working memory in an attempt to speed up inferencing. For this, we extend the query-tree method to identify the contexts relevant to the queries of a context-aware application. By keeping the pre-fetched contexts in working memory optimal, the processing delay of inferencing is reduced without losing the benefits of the ontology-based context model. We apply the proposed scheme to our ubiquitous computing middleware, Active Surroundings, and demonstrate the performance enhancement through experiments.
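
A hedged sketch of the pre-fetch idea, assuming contexts stored as (subject, predicate, object) triples and application queries that declare which predicates they touch; the paper's actual query-tree extension is not reproduced here.

```python
# Hypothetical sketch: only context triples whose predicates appear in the
# application's registered queries are pre-fetched into the (small) working
# memory used by the inference engine, instead of loading the whole store.

def relevant_predicates(queries):
    """Collect the predicates an application's queries can touch."""
    preds = set()
    for query in queries:
        preds.update(query["predicates"])
    return preds

def prefetch_contexts(context_store, queries):
    wanted = relevant_predicates(queries)
    return [triple for triple in context_store if triple[1] in wanted]

context_store = [
    ("alice", "locatedIn", "room101"),
    ("room101", "partOf", "building7"),
    ("alice", "heartRate", "72"),
]
queries = [{"predicates": {"locatedIn", "partOf"}}]   # e.g. "who is in building7?"
working_memory = prefetch_contexts(context_store, queries)
print(working_memory)   # only location-related triples enter working memory
```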

Removal of As, Cadmium and Lead in Sandy Soil with Sonification-Electrokinetic Remediation (초음파동전기기법을 이용한 비소, 카드뮴, 납으로 오염된 사질토 정화 연구)

  • Oh, SeungJin;Oh, Minah;Lee, Jai-Young
    • Journal of Soil and Groundwater Environment / v.18 no.7 / pp.1-11 / 2013
  • Soil pollution by toxic heavy metals such as arsenic, cadmium, and lead is increasing due to industrialization and economic activity. Many ongoing studies apply electrokinetic remediation to contaminated fine-grained soils with small particle sizes. For sandy soils with large particle sizes, however, electrokinetic remediation has been reported to be less effective than for fine soils because of the particles' low surface charge and high permeability. In this study, electrokinetic remediation combined with ultrasonication was applied to sandy soil contaminated with arsenic, cadmium, and lead, and its removal efficiency was evaluated against conventional electrokinetic remediation. First, a preliminary test (Pre-Test) of ultrasonic extraction showed a desorption effect of 5-15% for the contaminants in the soil. A batch test (Batch-Test) was then conducted with an output frequency of 200 kHz, a reaction time of 30 minutes, and 500 g of contaminated soil. The removal efficiencies for arsenic, cadmium, and lead were 25.55%, 8.01%, and 34.90%, respectively, whereas the removal efficiencies for As, Cd, and Pb were less than 1% in EK1 (the control group).

Research on Web Cache Infection Methods and Countermeasures (웹 캐시 감염 방법 및 대응책 연구)

  • Hong, Sunghyuck;Han, Kun-Hee
    • Journal of Convergence for Information Technology / v.9 no.2 / pp.17-22 / 2019
  • Caching is a technique that improves the client's response time and reduces bandwidth, making it effective. However, the caching technique, like other techniques, has vulnerabilities. Web caching is convenient, but it can be exploited through hacking and cause problems. Web cache problems are mainly caused by cache misses and excessive cache line fetches. When cache misses are frequent and excessive, the cache becomes a vulnerability, causing errors such as corruption of secure data and creating problems for both the client and the user's system. If users are aware of cache infections and the countermeasures against such errors, they will no longer experience cache errors or infection problems. Therefore, this study proposes countermeasures against four kinds of cache infections and errors, and suggests countermeasures against web cache infections.