• Title/Summary/Keyword: data cache


A Theoretical Superscalar Microprocessor Performance Model with Limited Functional Units Using Instruction Dependencies (한정된 연산유닛에서 명령어 종속성을 이용하는 수퍼스칼라 프로세서의 이론적 성능 모델)

  • Lee, Jong-Bok
    • The Transactions of The Korean Institute of Electrical Engineers / v.59 no.2 / pp.423-428 / 2010
  • In the initial design phase of superscalar microprocessors, a performance model is necessary. A theoretical performance model is very useful because the performance for various architecture parameters can be obtained simply by evaluating equations, without repeating simulations. Previous studies established theoretical performance models using the relation between the instruction window size and the issue width, together with the penalties due to branch mispredictions and cache misses. However, those studies assumed an unlimited number of functional units, which is insufficient for real applications. This paper proposes a theoretical performance model for superscalar microprocessors that also works with a limited number of functional units. To enhance the accuracy of the limited-functional-unit model, instruction dependency rates are employed. Using trace-driven data from the SPEC 2000 integer programs as input, this paper shows that the theoretically computed performance of a superscalar microprocessor with a limited number of functional units closely matches the measured performance.
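
As a rough illustration only: the abstract does not state the model's equations, but analytical models of this kind typically bound sustainable IPC by the issue width, the functional-unit throughput (here discounted by an instruction dependency rate), and window-limited ILP, then fold in misprediction and miss penalties. Every name and formula in the sketch below is an assumption, not the paper's actual model.

```python
# Hypothetical analytical IPC estimate (illustrative; not the paper's equations).
import math

def estimated_ipc(issue_width, window_size, num_fus, dep_rate,
                  branch_mpki, branch_penalty, miss_mpki, miss_penalty):
    # Window-limited ILP: assume extractable parallelism grows roughly
    # with the square root of the instruction window.
    window_ilp = math.sqrt(window_size)
    # Functional-unit limit, discounted by the fraction of instructions
    # stalled on in-flight producers (dep_rate in [0, 1]).
    fu_limit = num_fus * (1.0 - dep_rate)
    # Ideal throughput is bounded by the tightest of the three resources.
    ideal_ipc = min(issue_width, window_ilp, fu_limit)
    # Add stall cycles per instruction from mispredictions and cache misses.
    stall_cpi = (branch_mpki * branch_penalty + miss_mpki * miss_penalty) / 1000.0
    return 1.0 / (1.0 / ideal_ipc + stall_cpi)

print(estimated_ipc(issue_width=4, window_size=64, num_fus=3, dep_rate=0.3,
                    branch_mpki=5, branch_penalty=12, miss_mpki=10, miss_penalty=100))
```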

A Caching Scheme to Support Session Locality in Hierarchical SIP Networks

  • Choi, KwangHee;Kim, Hyunwoo
    • Journal of Korea Society of Industrial Information Systems / v.18 no.1 / pp.1-9 / 2013
  • Most calls received by a called user are placed by a fixed group of calling users. This call pattern is known as call locality; Internet sessions, including IP telephony calls, show the same pattern, which we define as session locality. In this paper, we propose a caching scheme to support session locality in hierarchical SIP networks. The proposed scheme can be applied easily by adding only one cache field to the data structure of the SIP mobility agent. The scheme reduces the signaling cost, database access cost, and session setup delay required to locate a called user. Moreover, it distributes the load on the home registrar across the SIP mobility agents. Our performance evaluation shows that the proposed caching scheme outperforms the hierarchical SIP scheme when the session-to-mobility ratio is high.
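
For intuition, a hierarchical SIP mobility agent exploiting session locality might keep a per-callee contact cache and only fall back to the home registrar on a miss. The sketch below is a minimal assumption of that behavior; the class and method names are hypothetical, not the paper's design.

```python
# Hypothetical session-locality cache in a SIP mobility agent (illustrative).

class SipMobilityAgent:
    def __init__(self, home_registrar):
        self.home_registrar = home_registrar  # assumed fallback location service
        self.cache = {}                       # callee URI -> cached contact

    def locate(self, callee_uri):
        """Resolve a callee's contact, preferring the local cache."""
        if callee_uri in self.cache:
            return self.cache[callee_uri]     # hit: no registrar round trip
        contact = self.home_registrar.lookup(callee_uri)  # miss: query registrar
        self.cache[callee_uri] = contact      # cache for subsequent sessions
        return contact

    def invalidate(self, callee_uri):
        """Drop a stale entry when the callee re-registers elsewhere."""
        self.cache.pop(callee_uri, None)
```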

Dynamic Prefetch Filtering Schemes to Enhance Utilization of Data Cache (데이터 캐시의 활용도를 높이는 동적 선인출 필터링 기법)

  • 전영숙;이병권;김석일;전중남
    • Proceedings of the Korean Information Science Society Conference / 2004.10a / pp.562-564 / 2004
  • Cache prefetching is an effective way to reduce the latency caused by memory references. However, overly aggressive prefetching pollutes the cache and offsets the benefits of prefetching. To reduce cache pollution, this study compares and evaluates four filtering methods that dynamically consult a filter table to decide whether each prefetch command should be executed. We propose an ideal filtering structure for the comparative study, a binary-state structure that mitigates the lock-in problem observed in previous work, and a block-address lookup scheme for more precise filtering. Experiments on widely used general-purpose and multimedia benchmark programs show that the cache miss rate decreases by 5.6% on average with the binary-state structure and by 7.9% with the block-address lookup structure.

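The abstract describes a filter table consulted before each prefetch, with a binary (single-bit) state to avoid lock-in and block-address indexing for precision. The sketch below is a minimal reading of that idea under assumed parameters (table size, block size, direct-mapped indexing); it is not the paper's exact structure.

```python
# Hypothetical binary-state prefetch filter indexed by block address (illustrative).

class PrefetchFilter:
    def __init__(self, entries=1024, block_bits=6):
        self.entries = entries
        self.block_bits = block_bits       # assumed 64-byte blocks
        self.allow = [True] * entries      # one bit of state per entry

    def _index(self, addr):
        return (addr >> self.block_bits) % self.entries

    def should_prefetch(self, addr):
        """Consult the table before issuing a prefetch for this block."""
        return self.allow[self._index(addr)]

    def feedback(self, addr, was_useful):
        """Update the single-bit state from whether the prefetch was used.
        One bit avoids lock-in: a single observation can re-enable or
        disable prefetching for the corresponding blocks."""
        self.allow[self._index(addr)] = was_useful
```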

A Lifetime-based Cache Policy for Stable Data Distribution in NDN (NDN에서 안정성 있는 데이터 배포를 위한 Lifetime 기반의 캐시 정책)

  • Choi, Su-Ho;Joe, In-Whee
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.92-95 / 2019
  • A CDN is a network that caches data across multiple distributed servers and then delivers it quickly. However, a CDN is constrained by server performance, and when a data item's Lifetime is nearly exhausted, the data must be fetched anew from the central server or the data producer. NDN, in contrast, caches data at each router, distributing traffic and reducing delay; however, depending on the Lifetime of the cached data, requesters may fail to receive it. In this paper, when a data requester cannot receive the data within its Lifetime, the Lifetime of the cached data is extended by considering the requester's network conditions and the period of its Interest packets, with the goal of stable data distribution. Simulations with the proposed cache policy confirm that extending a nearly expired Lifetime allows the data requester to receive the data in full.
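
As an illustrative sketch only: an NDN router's content store could extend a nearly expired entry's lifetime when Interests for it keep arriving at a regular period. The names, the one-period extension rule, and the expiry margin below are assumptions, not the paper's exact policy.

```python
# Hypothetical lifetime-extending content store for an NDN router (illustrative).
import time

class ContentStore:
    def __init__(self, extend_margin=1.0):
        self.store = {}             # name -> (data, expiry timestamp)
        self.last_interest = {}     # name -> arrival time of previous Interest
        self.extend_margin = extend_margin

    def on_interest(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry is None:
            return None             # miss: forward the Interest upstream
        data, expiry = entry
        period = now - self.last_interest.get(name, now)
        self.last_interest[name] = now
        # If the entry is about to expire but the consumer is still polling,
        # extend the lifetime by one observed Interest period.
        if expiry - now < self.extend_margin and period > 0:
            self.store[name] = (data, expiry + period)
        return data
```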

Lifetime Extension Method for Non-Volatile Memory based Deep Learning System by analyzing Data Write Pattern (데이터 쓰기 패턴 분석을 통한 비휘발성 메모리 기반 딥러닝 시스템의 수명 연장 기법)

  • Choi, Juhee
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.1-6 / 2022
  • Modern computer systems usually have special hardware for the operations used in deep learning workloads, even in edge computing environments. Non-volatile memories (NVMs) have been considered as alternative memory storage because they consume little static energy and occupy a small area. However, there is an obstacle to adopting NVMs directly: an NVM cell has limited write endurance, so the lifetime of an NVM-based memory system is much shorter than that of a conventional memory system. To overcome this problem for deep learning systems, this paper proposes a novel method to extend the lifetime based on an analysis of deep learning workloads. If an incoming block contains more than a predefined number of frequently used values, the cacheline is classified as a write-friendly block. During victim selection, such a cacheline has a lower probability of being chosen as the victim. The experimental results show that the lifetime is increased by about 50% and energy consumption is decreased by 3%, with only a small performance penalty.
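
A toy version of the classification-plus-victim-selection step might look as follows; the frequent-value set, the threshold, and the LRU baseline are all assumed for illustration and are not taken from the paper.

```python
# Hypothetical write-friendly-aware victim selection (illustrative).

FREQUENT_VALUES = {0, 1, -1}   # assumed frequent values in deep learning data
THRESHOLD = 8                  # assumed count above which a line is write friendly

def is_write_friendly(line_values):
    """Classify an incoming block by how many of its words hold frequent values."""
    return sum(1 for v in line_values if v in FREQUENT_VALUES) >= THRESHOLD

def select_victim(cache_set):
    """Prefer evicting lines that are NOT write friendly; break ties by LRU age.
    cache_set: list of dicts with 'write_friendly' (bool) and 'lru_age' (int)."""
    candidates = [line for line in cache_set if not line['write_friendly']]
    return max(candidates or cache_set, key=lambda line: line['lru_age'])
```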

Communication Resource Allocation Strategy of Internet of Vehicles Based on MEC

  • Ma, Zhiqiang
    • Journal of Information Processing Systems / v.18 no.3 / pp.389-401 / 2022
  • The Internet of Vehicles (IoV) is growing rapidly, and the large volume of data exchanged has led to long mobile network communication delays and high energy consumption. A strategy for IoV communication resource allocation based on mobile edge computing (MEC) is thus proposed. First, a model of the cloud-edge collaborative caching and resource allocation system for the IoV is designed, in which vehicles can offload tasks to MEC servers or neighboring vehicles. Then, the communication model and the computation model of the IoV system are analyzed, and an optimization objective that minimizes delay and energy consumption is constructed. Finally, the on-board computing tasks are encoded, the optimization problem is transformed into a knapsack problem, and the optimal resource allocation strategy is obtained through a genetic algorithm. Simulation results on the MATLAB platform show that the proposed strategy offloads tasks to MEC servers or neighboring vehicles, making full use of system resources. In the scenarios tested, the energy consumption does not exceed 300 J and 180 J, respectively, with an average delay of 210 ms, effectively reducing system overhead and improving response speed.
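
The abstract names the building blocks (bitstring encoding, knapsack constraint, genetic search over a delay-energy objective) without giving details, so the sketch below is a generic instance of that combination under invented costs and capacities, not the paper's algorithm.

```python
# Hypothetical GA for knapsack-style task offloading (illustrative).
import random

tasks = [{'load': random.uniform(1, 5),          # assumed compute demand
          'local_cost': random.uniform(5, 10),   # delay+energy if run locally
          'mec_cost': random.uniform(1, 4)}      # delay+energy if offloaded
         for _ in range(20)]
CAPACITY = 30.0                                  # assumed MEC server capacity

def fitness(bits):
    """Total delay-energy cost; over-capacity solutions are penalized."""
    used = sum(t['load'] for t, b in zip(tasks, bits) if b)
    cost = sum(t['mec_cost'] if b else t['local_cost'] for t, b in zip(tasks, bits))
    return cost + (1000.0 if used > CAPACITY else 0.0)

def evolve(pop_size=50, generations=200, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in tasks] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(tasks))     # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g
                             for g in child])         # bit-flip mutation
        pop = parents + children
    return min(pop, key=fitness)

print('best cost:', fitness(evolve()))
```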

Data Replication and Migration Scheme for Load Balancing in Distributed Memory Environments (분산 인-메모리 환경에서 부하 분산을 위한 데이터 복제와 이주 기법)

  • Choi, Kitae;Yoon, Sangwon;Park, Jaeyeol;Lim, Jongtae;Bok, Kyoungsoo;Yoo, Jaesoo
    • KIISE Transactions on Computing Practices / v.22 no.1 / pp.44-49 / 2016
  • Recently, data has grown dramatically along with the spread of social media and digital devices, and distributed in-memory processing systems have been used to process large amounts of data efficiently. However, if load is concentrated on a particular node in a distributed environment, that node's performance degrades significantly. In this paper, we propose a load balancing scheme that distributes load in a distributed memory environment. The proposed scheme replicates hot data to multiple nodes to manage node load, and migrates data with consideration of node load when nodes are added or removed. A client reduces the number of accesses to the central server by accessing the data node directly through the metadata of the hot data. To show the superiority of the proposed scheme, we compare it with an existing load balancing scheme through performance evaluation.
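
The client-side path described above (hot keys resolved through locally cached metadata, cold keys through the central server) can be sketched as follows; all names are hypothetical, and the replica-selection rule is an assumption.

```python
# Hypothetical client routing with hot-data metadata (illustrative).
import random

class Client:
    def __init__(self, central_server):
        self.central = central_server
        self.hot_metadata = {}          # key -> nodes holding replicas

    def get(self, key):
        nodes = self.hot_metadata.get(key)
        if nodes:
            return random.choice(nodes).read(key)  # direct hit, no central hop
        node = self.central.locate(key)            # cold key: ask central server
        value = node.read(key)
        if self.central.is_hot(key):               # server flags replicated keys
            self.hot_metadata[key] = self.central.replicas(key)
        return value
```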

A mobile data caching synchronization strategy based on in-demand replacement priority (수요에 따른 교체 우선 순위 기반 모바일 데이터베이스 캐쉬 동기화 정책)

  • Zhao, Jinhua;Xia, Ying;Lee, Soon-Jo;Bae, Hae-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.2 / pp.13-21 / 2012
  • Mobile data caching is commonly used as an effective way to improve the speed of local transaction processing and to reduce server load. In a mobile database environment, caching is especially crucial because of low bandwidth, high latency, and intermittent connectivity. Many mobile data caching strategies have been proposed to handle these problems over the last few years. However, with the widespread adoption of smartphones, these approaches cannot serve the greatly increased data demands efficiently. In this paper, to make full use of cached data, lower the volume of wireless transmission, and raise the transaction success rate, we design a new mobile data caching synchronization strategy based on in-demand replacement priority. We experimentally verify that our techniques significantly reduce the volume of wireless transmission and improve the transaction success rate, especially when mobile clients request large amounts of data.
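
The abstract does not define the priority function, so the sketch below assumes the simplest reading: a replacement priority driven by per-item demand (request counts), with the least-demanded entry evicted first. Names and the eviction rule are illustrative.

```python
# Hypothetical demand-priority cache replacement (illustrative).

class DemandCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}        # key -> value
        self.demand = {}      # key -> request count (the replacement priority)

    def get(self, key):
        if key in self.data:
            self.demand[key] = self.demand.get(key, 0) + 1
            return self.data[key]
        return None           # miss: caller fetches from the server

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=lambda k: self.demand.get(k, 0))
            del self.data[victim]          # evict the least-demanded entry
            self.demand.pop(victim, None)
        self.data[key] = value
```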

Modeling of Data References with Temporal Locality and Popularity Bias (시간 지역성과 인기 편향성을 가진 데이터 참조의 모델링)

  • Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.119-124 / 2023
  • This paper proposes a new reference model that can represent data accesses with temporal locality and popularity bias. Among existing reference models, the LRU-stack model can express temporal locality, the property that more recently referenced data has a higher probability of being referenced again, but it cannot account for differences in the popularity of the data. Conversely, the independent reference model can reflect the differing popularity of data items, but it cannot model changes in data reference trends over time. The reference model presented in this paper overcomes the limitations of these two models, reflecting both the popularity bias of data and its changes over time. The paper also examines the relationship between cache replacement algorithms and the reference model, and shows the optimality of the proposed model.
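
For intuition, the two classic models the paper contrasts can each be written as a tiny trace generator, as below: the LRU-stack model draws the next reference by stack depth (temporal locality), while the independent reference model draws i.i.d. from a fixed popularity distribution (here Zipf, as a common assumption). How the paper combines the two is not stated in the abstract, so no combined generator is attempted.

```python
# Illustrative generators for the two classic reference models being contrasted.
import random

def lru_stack_model(num_items, length, depth_probs):
    """Temporal locality: depth_probs[d] is the probability of re-referencing
    the item currently at LRU stack depth d (depth 0 = most recent)."""
    stack = list(range(num_items))
    trace = []
    for _ in range(length):
        d = random.choices(range(num_items), weights=depth_probs)[0]
        item = stack.pop(d)       # reference the item at the chosen depth...
        stack.insert(0, item)     # ...and move it to the top of the stack
        trace.append(item)
    return trace

def independent_reference_model(num_items, length, zipf_s=1.0):
    """Popularity bias: each reference drawn i.i.d. from a Zipf distribution."""
    weights = [1.0 / (i + 1) ** zipf_s for i in range(num_items)]
    return random.choices(range(num_items), weights=weights, k=length)
```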

Implementation and Evaluation of Proxy Caching Mechanisms with Video Quality Adjustment

  • Sasabe, Masahiro;Taniguchi, Yoshiaki;Wakamiya, Naoki;Murata, Masayuki;Miyahara, Hideo
    • Proceedings of the IEEK Conference / 2002.07a / pp.121-124 / 2002
  • The proxy mechanism widely used in WWW systems offers low-delay data delivery by means of a "proxy server". By applying the proxy mechanism to a video streaming system, we expect that high-quality and low-delay video distribution can be accomplished without placing extra load on the system. In addition, it is effective to adapt the quality of cached video data in the proxy when user requests are diverse due to heterogeneity in available bandwidth, end-system performance, and users' preferences regarding perceived video quality. We have proposed proxy caching mechanisms to accomplish high-quality and highly interactive video streaming services. In our proposed system, a video stream is divided into blocks for efficient use of the cache buffer, and the proxy server is assumed to be able to adjust the quality of a cached or retrieved video block to the request through video filters. In this paper, to verify the practicality of our mechanisms, we implemented them on a real system and conducted experiments. Evaluations from several performance perspectives show that the proposed mechanisms can provide users with a low-latency, high-quality video streaming service in a heterogeneous environment.

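As a closing illustration (block layout, quality levels, and the filter interface are assumptions, not the authors' implementation), a quality-adjusting proxy of this kind could serve each requested block at the client's quality level, filtering down from a higher-quality cached copy when one exists:

```python
# Hypothetical quality-adjusting proxy cache for video blocks (illustrative).

class VideoProxy:
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}   # (video_id, block_no) -> (quality_level, block bytes)

    def get_block(self, video_id, block_no, quality):
        key = (video_id, block_no)
        if key in self.cache:
            cached_q, block = self.cache[key]
            if cached_q >= quality:
                # Hit: adjust the cached block down to the requested quality.
                return self.filter(block, cached_q, quality)
        # Miss, or the cached copy is of lower quality than requested.
        block = self.origin.fetch(video_id, block_no, quality)
        self.cache[key] = (quality, block)
        return block

    def filter(self, block, from_q, to_q):
        """Stand-in for a real video filter (e.g., requantization or frame
        dropping); here it simply truncates proportionally for illustration."""
        return block if from_q == to_q else block[: len(block) * to_q // from_q]
```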