• Title/Summary/Keyword: data cache

Performance Analysis of Cellular IP Using Combined Cache and Alternative Handoff Method for Realtime Data Transmission (실시간 데이터를 지원하는 통합 캐시 및 차별화된 핸드오프를 이용한 셀룰러 IP의 성능분석)

  • Seo, Jeong-Hwa;Han, Tae-Young;Kim, Nam
    • The Journal of the Korea Contents Association / v.3 no.2 / pp.65-72 / 2003
  • In this paper, we propose a new scheme that uses a Combined Cache (CC), which merges the Paging and Routing Cache (PRC), together with a handoff method differentiated by the type of transmitted data, to achieve efficient realtime communication. The PRC and the quasi-soft handoff method reduce path duplication, but they increase the network traffic load because of the handoff-state packets of the Mobile Host (MH). Moreover, they use the same handoff method regardless of the type of transmitted data. These problems are solved by operating the CC with a semi-soft handoff method for realtime data transmission and with a hard handoff method for non-realtime data transmission. Simulation results show better performance, owing to the reduced number of control packets, when the number of cells is below 20, and the packet arrival time and packet loss decrease significantly for realtime data transmission.

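A rough sketch of the differentiated-handoff rule described above, reduced to its decision logic; the enum values and function name are illustrative, not from the paper.

    # Hedged sketch: pick a handoff method per flow type, as the abstract
    # describes (semi-soft for realtime, hard for non-realtime). All names
    # here are illustrative, not from the paper.
    from enum import Enum

    class Handoff(Enum):
        SEMI_SOFT = "semi-soft"   # old and new paths overlap briefly
        HARD = "hard"             # switch paths at once, no duplication

    def choose_handoff(is_realtime: bool) -> Handoff:
        # Realtime flows get semi-soft handoff to cut packet loss and delay;
        # non-realtime flows take hard handoff to avoid duplicate traffic.
        return Handoff.SEMI_SOFT if is_realtime else Handoff.HARD

    assert choose_handoff(True) is Handoff.SEMI_SOFT
    assert choose_handoff(False) is Handoff.HARD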

Design of an ARM9 Compatible 32bit RISC Microprocessor (ARM9 호환 32bit RISC Microprocessor의 설계)

  • Hwang, Bo-Sik;Nam, Hyoung-Gin
    • Proceedings of the IEEK Conference / 2005.11a / pp.885-888 / 2005
  • In this study, we designed an ARM9-compatible RISC microprocessor using VHDL. The microprocessor supports a Harvard architecture with separate instruction and data caches. The state machine was optimized for multi-cycle instructions, and a data-forwarding mechanism was adopted to reduce the stall cycles caused by data hazards. Assembly programs were uploaded into a ROM block for system-level simulation, and proper operation of the designed microprocessor was confirmed by inspecting the contents of the internal registers as well as the RAM block. Furthermore, the simulation results clearly indicate that the operation speed of the processor is enhanced by reducing the execution cycles required for multiplication-related instructions.

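The data-forwarding mechanism mentioned here is a standard pipeline technique; a minimal sketch of the forwarding decision for a classic five-stage pipeline (generic, not the paper's VHDL design) might look like this.

    # Hedged sketch of the data-forwarding check the abstract mentions, for a
    # classic 5-stage pipeline (generic technique, not the paper's design).
    def forward_operand(src_reg, ex_mem_dst, mem_wb_dst):
        """Return which pipeline latch should feed an EX-stage operand."""
        if src_reg is not None and src_reg == ex_mem_dst:
            return "EX/MEM"   # newest value: forward the ALU result
        if src_reg is not None and src_reg == mem_wb_dst:
            return "MEM/WB"   # forward the value about to be written back
        return "REGFILE"      # no hazard: read the register file normally

    # r3 is produced by the instruction now in EX/MEM, so forward from there.
    assert forward_operand(3, ex_mem_dst=3, mem_wb_dst=7) == "EX/MEM"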

Locally weighted linear regression prefetching method for hybrid memory system (하이브리드 메모리 시스템의 지역 가중 선형회귀 프리페치 방법)

  • Tang, Qian;Kim, Jeong-Geun;Kim, Shin-Dug
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.12-15 / 2020
  • Data access characteristics directly affect the efficiency of system execution. This research designs an accurate predictor using historical memory-access information, so that frequently accessed data can be migrated in advance from slow storage (SSD/HDD) to fast memory (DRAM/CPU cache), thereby reducing data-access latency and improving overall performance. To this end, we design a locally weighted linear regression prefetch scheme that copes with the irregular access patterns of large graph-processing applications on a DRAM-PCM hybrid memory structure. By analyzing the test results, appropriate structural parameters can be selected, which greatly improves cache-prefetching performance and results in overall performance improvement.
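
Locally weighted linear regression itself is a well-known technique; a minimal sketch of how it could predict the next address from recent history follows, with the features, bandwidth, and one-step extrapolation all being assumptions rather than the authors' design.

    # Hedged sketch of locally weighted linear regression (LWLR) used to
    # predict the next memory address from recent history, in the spirit of
    # the abstract; details are assumptions, not the authors' scheme.
    import numpy as np

    def lwlr_predict(history, bandwidth=4.0):
        """Predict the next value of a series by locally weighted regression."""
        t = np.arange(len(history), dtype=float)            # time steps 0..n-1
        y = np.asarray(history, dtype=float)
        q = float(len(history))                             # query: next step
        w = np.exp(-((t - q) ** 2) / (2 * bandwidth ** 2))  # weight recent points more
        X = np.column_stack([np.ones_like(t), t])           # linear model y = a + b*t
        W = np.diag(w)
        theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted least squares
        return theta[0] + theta[1] * q                      # extrapolate one step

    # A regular 64-byte stride is extrapolated exactly:
    addrs = [4096 + 64 * i for i in range(8)]
    print(round(lwlr_predict(addrs)))  # -> 4608, the next address in the stride

A prefetcher built on this would then fetch the line containing the predicted address before the demand access arrives.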

General Web Cache Implementation Using NIO (NIO를 이용한 범용 웹 캐시 구현)

  • Lee, Chul-Hui;Shin, Yong-Hyeon
    • Journal of Advanced Navigation Technology / v.20 no.1 / pp.79-85 / 2016
  • In the recent web environment, network traffic has increased rapidly due to mobile devices and social networks such as smartphones and Facebook. In this paper, we improve the web response time of an existing system by using NIO direct buffers and DMA. This overcomes drawbacks of Java such as reduced CPU performance due to blocking I/O and garbage collection of buffers. Key values whose priorities change frequently are kept in a hash map so that they can be manipulated easily, and a priority-modification algorithm is applied. Large response data are split and stored in fast direct buffers, improving performance. This paper shows that the proposed NIO-based method achieves much better performance in many test situations, for both cache hits and cache misses.
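
The Java NIO specifics (direct buffers, DMA) do not compress into a short sketch, but the cache bookkeeping the abstract describes, a hash map whose entries carry mutable priorities and large bodies split across fixed-size buffers, can be roughed out as follows; every name and constant is illustrative.

    # Hedged, language-neutral sketch of the cache bookkeeping described
    # above: a hash map keyed by URL whose entries carry a mutable priority,
    # with large bodies split into fixed-size chunks (standing in for the
    # direct buffers). Purely illustrative.
    CHUNK = 64 * 1024  # stand-in for a direct-buffer slice size

    class WebCache:
        def __init__(self):
            self.entries = {}  # url -> {"chunks": [...], "priority": int}

        def put(self, url, body: bytes):
            chunks = [body[i:i + CHUNK] for i in range(0, len(body), CHUNK)]
            self.entries[url] = {"chunks": chunks, "priority": 0}

        def get(self, url):
            entry = self.entries.get(url)
            if entry is None:
                return None                    # cache miss
            entry["priority"] += 1             # bump priority on each hit
            return b"".join(entry["chunks"])

        def evict_one(self):
            # Drop the lowest-priority entry when space is needed.
            victim = min(self.entries, key=lambda u: self.entries[u]["priority"])
            del self.entries[victim]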

A Cache Consistency Control for B-Tree Indices in a Database Sharing System (데이타베이스 공유 시스템에서 B-트리 인덱스를 위한 캐쉬 일관성 제어)

  • On, Gyeong-O;Jo, Haeng-Rae
    • The KIPS Transactions:PartD / v.8D no.5 / pp.593-604 / 2001
  • A database sharing system (DSS) refers to a system for high-performance transaction processing. In a DSS, the processing nodes are coupled via a high-speed network and share a common database at the disk level. Each node has local memory and a separate copy of the operating system. To reduce the number of disk accesses, each node caches data pages and index pages in its memory buffer. In general, B-tree index pages are accessed more often, and thus cached at more processing nodes, than their corresponding data pages. The B-tree also supports complicated operations such as Fetch, Fetch Next, Insertion, and Deletion. Therefore, an efficient cache-consistency scheme supporting a high level of concurrency is required. In this paper, we propose cache-consistency schemes that use identifiers of index pages and the page_LSN of the leaf page. The proposed schemes can improve system throughput by reducing the message traffic between nodes and avoiding index re-traversal.

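A minimal sketch of LSN-based validation in the spirit of this abstract, assuming a per-page authoritative LSN that a node can query; the names and the cache interface are illustrative.

    # Hedged sketch of LSN-based index-page validation: a node checks the
    # page_LSN of its cached copy against the authoritative LSN before using
    # it, and refetches only when stale. Illustrative names throughout.
    class CachedIndexPage:
        def __init__(self, page_id, data, page_lsn):
            self.page_id = page_id
            self.data = data
            self.page_lsn = page_lsn

    def read_index_page(cache, page_id, fetch_page, current_lsn):
        """Return a valid copy of the page, refetching if the cache is stale."""
        page = cache.get(page_id)
        if page is not None and page.page_lsn == current_lsn(page_id):
            return page                      # cached copy is still consistent
        page = fetch_page(page_id)           # stale or absent: reread it
        cache[page_id] = page
        return page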

Performance Evaluation of Deferred Locking With Shadow Transaction (그림자 트랜잭션을 이용한 지연 로킹 기법의 성능 평가)

  • 권혁민
    • The Journal of Information Technology / v.3 no.3 / pp.117-134 / 2000
  • A client-server DBMS based on a data-shipping model can exploit client resources effectively by allowing inter-transaction caching. However, inter-transaction caching raises the need for a transactional cache-consistency maintenance (TCCM) protocol, since each client is able to cache a portion of the database dynamically. Detection-based TCCM schemes can reduce the message overhead required for cache consistency if they validate clients' replicas asynchronously, and thus they can show high throughput rates. However, they tend to show high transaction-abort ratios, since transactions can access invalid replicas. To cope with this drawback, this paper develops the new notion of a shadow transaction, a backup transaction that is kept ready to replace an aborted transaction. This paper proposes a new detection-based TCCM scheme named DL-ST based on the notion of the shadow transaction. Using a simulation model, this paper evaluates the effect of the shadow transaction in terms of transaction throughput rate and abort ratio.

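A minimal sketch of the control flow the shadow-transaction notion suggests, under the assumption that validation happens once after execution; this is not the DL-ST protocol itself.

    # Hedged sketch of the shadow-transaction control flow implied by the
    # abstract: execute the primary, and if validation detects that it read
    # invalid replicas, promote the standby shadow instead of restarting
    # from scratch. A full scheme would also revalidate the shadow.
    class Txn:
        def __init__(self, work):
            self.work = work          # callable holding the transaction body

        def execute(self):
            return self.work()

    def run_with_shadow(make_txn, validate):
        primary, shadow = make_txn(), make_txn()   # identical operation lists
        result = primary.execute()
        if validate(primary):
            return result                          # primary commits
        return shadow.execute()                    # primary aborts: use shadow

    # Example: validation fails, so the shadow's result is returned.
    print(run_with_shadow(lambda: Txn(lambda: "ok"), lambda txn: False))  # ok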

A Study on the Prediction Accuracy Bounds of Instruction Prefetching (명령어 선인출 예측 정확도의 한계에 관한 연구)

  • Kim, Seong-Baeg;Min, Sang-Lyul;Kim, Chong-Sang
    • Journal of KIISE:Computer Systems and Theory / v.27 no.8 / pp.719-729 / 2000
  • Prefetching aims at reducing memory latency by fetching, in advance, data that are likely to be requested by the processor in the near future. The effectiveness of prefetching is determined by how accurately the needed instructions and data are predicted. Most previous studies on prefetching were limited to proposing a particular prefetch scheme and evaluating its performance, paying little attention to the theoretical aspects of prefetching. This paper focuses on the theoretical aspects of instruction prefetching. For this purpose, we propose a clairvoyant prefetch model that makes use of perfect history information. Based on this theoretical model, we analyze upper limits on the prefetch prediction accuracies of the SPEC benchmarks. The results show that the prefetch prediction accuracy is very high when there is no cache. However, as the size of the instruction cache increases, the prefetch prediction accuracy drops drastically. For example, in the case of the spice benchmark, the prefetch prediction accuracy drops from 53% to 39% when the cache size increases from 2 Kbyte to 16 Kbyte (assuming a 16-byte block size). These results indicate that, as the cache size increases, most localities are captured by the cache, and instruction prefetching based on the references that miss in the cache suffers from prediction inaccuracies.

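One way to make such an upper bound concrete, assuming a simple one-block-lookahead predictor over the miss trace, is sketched below; this is a guess at the flavor of the model, not the paper's clairvoyant formulation.

    # Hedged sketch of an upper bound on prefetch prediction accuracy over a
    # miss trace: with perfect history, a one-block-lookahead predictor can
    # at best guess each block's most frequent successor. Illustrative only.
    from collections import Counter, defaultdict

    def prediction_accuracy_bound(miss_trace):
        successors = defaultdict(Counter)
        for cur, nxt in zip(miss_trace, miss_trace[1:]):
            successors[cur][nxt] += 1
        correct = sum(c.most_common(1)[0][1] for c in successors.values())
        return correct / max(1, len(miss_trace) - 1)

    # A repeating loop (A B C A B C ...) is perfectly predictable:
    print(prediction_accuracy_bound(list("ABCABCABC")))  # -> 1.0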

A Strategy To Reduce Network Traffic Using Two-layered Cache Servers for Continuous Media Data on the Wide Area Network (이중 캐쉬 서버를 사용한 실시간 데이터의 광대역 네트워크 대역폭 감소 정책)

  • Park, Yong-Woon;Beak, Kun-Hyo;Chung, Ki-Dong
    • The Transactions of the Korea Information Processing Society / v.7 no.10 / pp.3262-3271 / 2000
  • Continuous media objects, due to their large volume and the real-time constraints on their delivery, are likely to consume much network bandwidth. Generally, proxy servers are used to hold frequently requested objects so as to reduce the network traffic to the central server, but most of them are designed for text and image data and do not go well with continuous media data. So, in this paper, we propose a two-layered network cache management policy for continuous media object delivery on wide area networks. Under the proposed cache management scheme, each LAN has one LAN cache, and each LAN is further divided into a group of sub-LANs, each of which also has its own sub-LAN cache. Furthermore, each object is partitioned into two parts, a front-end and a rear-end partition, which can be loaded in the same cache or separately in different network caches according to their access frequencies. By doing so, the cache replacement overhead can be reduced compared with full-size data allocation and replacement, which eventually reduces the backbone network traffic to the origin server.

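A minimal sketch of the frequency-based placement rule the abstract implies, with wholly assumed thresholds and tier names.

    # Hedged sketch of the placement rule suggested by the abstract: an
    # object's front-end partition moves to the nearby sub-LAN cache once it
    # is popular enough, while the rear-end partition may stay one layer up.
    # Thresholds and tier names are assumptions.
    def place_partitions(access_count, hot=100, warm=10):
        if access_count >= hot:        # very popular: keep both parts close
            return {"front": "sub-LAN cache", "rear": "sub-LAN cache"}
        if access_count >= warm:       # moderately popular: split the object
            return {"front": "sub-LAN cache", "rear": "LAN cache"}
        return {"front": "LAN cache", "rear": "origin server"}

    print(place_partitions(42))  # {'front': 'sub-LAN cache', 'rear': 'LAN cache'}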

Effective Reference Probability Incorporating the Effect of Expiration Time in Web Cache (웹 캐쉬에서 만기시간의 영향을 고려한 유효참조확률)

  • Lee, Jeong-Joon;Moon, Yang-Se;Whang, Kyu-Young;Hong, Eui-Kyung
    • Journal of KIISE:Databases / v.28 no.4 / pp.688-701 / 2001
  • Web caching has become an important problem in addressing the performance issues of web applications. In this paper, we propose a method that enhances the performance of web caching by incorporating the expiration time of web data. We introduce the notion of the effective reference probability, which incorporates the effect of the expiration time into the reference probability used in existing cache replacement algorithms. We formally define the effective reference probability and derive it theoretically using a probabilistic model. By simply replacing reference probabilities with effective reference probabilities in existing cache replacement algorithms, we can take the effect of the expiration time into account. Performance evaluation through experiments shows that replacement algorithms using the effective reference probability always outperform the existing ones. The reason is that the proposed method precisely reflects the theoretical probability of obtaining the cache effect and thus incorporates the influence of the expiration time more effectively. In particular, when the cache fraction is 0.05 and data updates are comparatively frequent (i.e., the update frequency is more than 1/10 of the reference frequency), the performance enhancement is more than 30% in LRU-2 and 13% in Aggarwal's method (PSS integrating a refresh overhead factor). These results show that the effective reference probability contributes significantly to the performance enhancement of web caches in the presence of expiration times.

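One simple way to fold expiration into a reference probability, assuming Poisson reference and update arrivals, is sketched below; the paper derives its own probabilistic model, so treat this closed form as an assumption.

    # Hedged sketch: one way to discount a reference probability by the risk
    # of expiration. Assuming Poisson references (rate lam_ref) and updates
    # (rate lam_upd), the next reference beats the next expiration with
    # probability lam_ref / (lam_ref + lam_upd). An assumed model, not the
    # paper's derivation.
    def effective_reference_probability(p_ref, lam_ref, lam_upd):
        still_valid = lam_ref / (lam_ref + lam_upd)
        return p_ref * still_valid

    # Updates at one tenth of the reference rate shave ~9% off the raw
    # reference probability, so caching such objects pays off less.
    print(effective_reference_probability(0.2, 1.0, 0.1))  # ~0.1818

A replacement policy such as LRU-2 or PSS would then rank pages by this effective value instead of the raw reference probability.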

Performance Analysis of Caching Instructions on SVLIW Processor and VLIW Processor (SVLIW 프로세서와 VLIW 프로세서의 명령어 캐싱에 따른 성능 분석)

  • Ji, Sung-Hyun;Park, No-Kwang;Kim, Suk-Il
    • Journal of IKEEE / v.1 no.1 s.1 / pp.101-110 / 1997
  • SVLIW processor architectures can resolve resource collisions and data dependencies between instructions while scheduling VLIW instructions at run-time. As a result, long NOP word instructions can be removed from the object code produced for the processor. Thus, cache misses occur less often on an SVLIW processor than on a VLIW processor with the same cache size. Less frequent cache misses incur less frequent memory accesses, so the total number of execution cycles needed to complete an application is shortened compared with the VLIW processor. This feature eventually compensates for the SVLIW processor's longer instruction pipeline stages. In this paper, we formulate and compare execution-cycle models of the two architectures. Simulation results show that the longer the memory access takes on a cache miss, the shorter the total execution cycles of the SVLIW processor become relative to those of the VLIW processor.

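A minimal sketch of the kind of execution-cycle comparison described here, with all parameters (word counts, miss ratios, penalties, pipeline depths) invented for illustration.

    # Hedged sketch of an execution-cycle comparison in the spirit of the
    # abstract: both models charge one cycle per fetched word plus a miss
    # penalty; the SVLIW code is shorter (NOP words removed) and misses less,
    # but pays for extra pipeline stages. Parameters are illustrative.
    def total_cycles(words, miss_ratio, miss_penalty, pipeline_depth):
        return words + words * miss_ratio * miss_penalty + pipeline_depth

    vliw  = total_cycles(words=1000, miss_ratio=0.10, miss_penalty=20, pipeline_depth=5)
    svliw = total_cycles(words=700,  miss_ratio=0.08, miss_penalty=20, pipeline_depth=7)
    print(vliw, svliw)  # 3005.0 1827.0 -> the SVLIW side wins as misses get costlier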