• Title/Summary/Keyword: Trace-driven simulation

Performance Study of Multicore Digital Signal Processor Architectures (멀티코어 디지털 신호처리 프로세서의 성능 연구)

  • Lee, Jongbok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.13 no.4, pp.171-177, 2013
  • Driven by the demand for high-speed 3D graphics rendering, video file format conversion, compression, and encryption/decryption technologies, the importance of digital signal processor (DSP) systems is growing rapidly. Satisfying real-time constraints requires high-performance DSPs; therefore, as in general-purpose computer systems, DSPs should be designed as multicore architectures as well. Using the UTDSP benchmarks as input, trace-driven simulation has been performed and analyzed extensively for 2- to 16-core DSP architectures, with cores ranging from a simple RISC to in-order and out-of-order superscalar processors over various window sizes.
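
As a rough illustration of what "trace-driven" means here, the sketch below replays a pre-recorded event trace against a trivial timing model. The trace format, the round-robin core mapping, and the cycle costs are all assumptions for illustration, not the paper's simulator.

```python
# Minimal trace-driven multicore simulation sketch. Hypothetical trace format:
# one (core_id, op, latency) event per entry; real UTDSP traces are far richer.
from collections import defaultdict

def simulate(trace_events, num_cores):
    """Replay a pre-recorded trace on a simple per-core timing model."""
    core_cycles = defaultdict(int)                 # per-core cycle counters
    for core_id, op, latency in trace_events:
        core_cycles[core_id % num_cores] += latency
    # With fully independent cores, total time is bounded by the busiest core.
    return max(core_cycles.values(), default=0)

trace = [(0, "mac", 1), (1, "load", 3), (0, "load", 3), (2, "mac", 1)]
for n in (2, 4, 8, 16):
    print(n, "cores ->", simulate(trace, n), "cycles")
```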

Reducing Outgoing Traffic of Proxy Cache by Using Client-Cluster

  • Kim, Kyung-Baek; Park, Dae-Yeon
    • Journal of Communications and Networks, v.8 no.3, pp.330-338, 2006
  • Many web cache systems and policies concerning them have been proposed. These studies, however, consider large objects less useful than small objects in terms of performance and evict them as soon as possible. Even if this approach increases the hit rate, the byte hit rate decreases, and connections over congested links to outside networks waste more bandwidth in obtaining large objects. This paper puts forth a client-cluster approach for improving the web cache system. The client-cluster is composed of the residual resources of clients and uses them as exclusive storage for large objects. The proposed system achieves not only a high hit rate but also a high byte hit rate, while reducing outgoing traffic. A distributed hash table (DHT)-based peer-to-peer lookup protocol is used to manage the client-cluster; owing to the natural characteristics of this protocol, the proposed system is self-organizing, fault-tolerant, well-balanced, and scalable. Additionally, large objects are managed with an index-based allocation method, which balances the load across all clients. The performance of the cache system is examined via trace-driven simulation, and an effective enhancement of proxy cache performance is demonstrated.
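
To make the DHT lookup idea concrete, here is a minimal consistent-hashing sketch that maps a large object's URL to the client responsible for caching it. The client names, hash choice, and ring layout are illustrative, not the paper's protocol.

```python
# Sketch of DHT-style object-to-client mapping via consistent hashing.
import hashlib
from bisect import bisect

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ClientCluster:
    def __init__(self, clients):
        # Place each client at a hashed position on a ring.
        self.ring = sorted((h(c), c) for c in clients)

    def locate(self, object_url: str) -> str:
        """Return the client responsible for storing this object."""
        keys = [k for k, _ in self.ring]
        idx = bisect(keys, h(object_url)) % len(self.ring)
        return self.ring[idx][1]

cluster = ClientCluster(["client-a", "client-b", "client-c"])
print(cluster.locate("http://example.com/large-video.iso"))
```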

Traffic Forecast Assisted Adaptive VNF Dynamic Scaling

  • Qiu, Hang; Tang, Hongbo; Zhao, Yu; You, Wei; Ji, Xinsheng
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.11, pp.3584-3602, 2022
  • NFV enables flexible, rapid software deployment and management of network functions in the cloud network, providing network services in the form of chained virtual network functions (VNFs). However, using VNFs to provide quality-guaranteed services is still a challenge because of the inherent difficulty of intelligently scaling VNFs to handle traffic fluctuations. Most existing works scale VNFs with fixed-capacity instances; that is, they use instances of the same size and determine a suitable deployment location without considering the cloud network's resource distribution. This paper proposes a traffic-forecast-assisted proactive VNF scaling approach that adapts instance capacity to node resources. We first model VNF scaling as integer quadratic programming and then propose a proactive adaptive VNF scaling (PAVS) approach. The approach employs an efficient LSTM-based traffic forecasting method to predict upcoming traffic demands. With the obtained demands, we design a resource-aware VNF instance deployment algorithm to scale out under-provisioned VNFs and a redundant-instance management mechanism to scale in over-provisioned VNFs. Trace-driven simulation demonstrates that the proposed approach can respond to traffic fluctuations in advance and reduce total cost significantly.
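
The control loop behind proactive scaling can be sketched as below. The paper employs an LSTM forecaster; a simple moving average stands in here so the example stays self-contained, and the capacity unit and traffic numbers are made up.

```python
# Sketch of a proactive scale-out/scale-in loop driven by a forecast.
import math

def forecast(history, window=3):
    """Stand-in forecaster: moving average over the last few samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def scale(capacity, history, unit=100):
    """Add/remove whole instances of size `unit` against forecast demand."""
    demand = forecast(history)
    if demand > capacity:                          # under-provisioned: scale out
        capacity += math.ceil((demand - capacity) / unit) * unit
    elif capacity - demand >= unit:                # over-provisioned: scale in
        capacity -= unit
    return capacity

traffic = [250, 280, 320, 410, 390, 180, 120, 100, 90]
cap = 300
for t in range(1, len(traffic)):
    cap = scale(cap, traffic[:t])                  # forecast from past samples
    print(f"t={t} actual={traffic[t]} capacity={cap}")
```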

Scheduling of Artificial Intelligence Workloads in Cloud Environments Using Genetic Algorithms (유전 알고리즘을 이용한 클라우드 환경의 인공지능 워크로드 스케줄링)

  • Kwon, Seokmin; Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.24 no.3, pp.63-67, 2024
  • Recently, artificial intelligence (AI) workloads spanning industries such as smart logistics, FinTech, and entertainment are being executed on the cloud. In this paper, we address the scheduling of various AI workloads on a multi-tenant cloud system composed of heterogeneous GPU clusters. Traditional scheduling decreases GPU utilization in such environments, significantly degrading system performance. To resolve this, we present a new scheduling approach based on genetic-algorithm optimization, implemented within a process-based event simulation framework. Trace-driven simulations with diverse AI workload traces collected from Alibaba's MLaaS cluster demonstrate that the proposed scheduler significantly improves GPU utilization compared with conventional scheduling.
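
A genetic algorithm for job-to-GPU assignment can be sketched roughly as follows. The chromosome encoding (one GPU index per job), the fitness (makespan), and the operator parameters are illustrative, not the paper's implementation.

```python
# Sketch of GA-based job-to-GPU assignment minimizing makespan.
import random

JOBS = [4, 7, 3, 9, 2, 6, 5, 8]   # hypothetical job lengths (GPU-hours)
NUM_GPUS = 3

def makespan(assign):
    loads = [0] * NUM_GPUS
    for job, gpu in zip(JOBS, assign):
        loads[gpu] += job
    return max(loads)

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    pop = [[random.randrange(NUM_GPUS) for _ in JOBS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                    # lower makespan = fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(JOBS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:   # point mutation
                child[random.randrange(len(JOBS))] = random.randrange(NUM_GPUS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("assignment:", best, "makespan:", makespan(best))
```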

An On-chip Cache and Main Memory Compression System Optimized by Considering the Compression Rate Distribution of Compressed Blocks (압축블록의 압축률 분포를 고려해 설계한 내장캐시 및 주 메모리 압축시스템)

  • Yim, Keun-Soo; Lee, Jang-Soo; Hong, In-Pyo; Kim, Ji-Hong; Kim, Shin-Dug; Lee, Yong-Surk; Koh, Kern
    • Journal of KIISE: Computer Systems and Theory, v.31 no.1_2, pp.125-134, 2004
  • Recently, an on-chip compressed cache system was presented to alleviate the processor-memory performance gap by reducing the on-chip cache miss rate and expanding memory bandwidth. This research presents an extended on-chip compressed cache system that also significantly expands main memory capacity. Several techniques are applied to expand main memory capacity, on-chip cache capacity, and memory bandwidth as well as to reduce decompression time and metadata size. To evaluate the performance of the proposed system against existing systems, we use an execution-driven simulation method built by modifying a superscalar microprocessor simulator; this methodology is more accurate than the previous trace-driven simulation method. The simulation results show that the proposed system reduces execution time by 4-23% compared with a conventional memory system, without counting the benefits obtained from main memory expansion. The expansion rates of the data and code areas of main memory are 57-120% and 27-36%, respectively.
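
As a back-of-the-envelope illustration of how a per-block compression-rate distribution translates into extra capacity, consider the sketch below. The histogram values are hypothetical and unrelated to the paper's measurements.

```python
# Effective-capacity calculation from a hypothetical distribution of per-block
# compression ratios (compressed size / original size).
def expansion_rate(ratio_histogram):
    """ratio_histogram: {compression_ratio: fraction_of_blocks}."""
    avg_ratio = sum(r * f for r, f in ratio_histogram.items())
    return 1.0 / avg_ratio - 1.0   # extra capacity as a fraction of original

data_blocks = {0.25: 0.2, 0.5: 0.5, 1.0: 0.3}  # 30% incompressible blocks
print(f"data area expansion: {expansion_rate(data_blocks):.0%}")
```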

Efficient Cache Management Scheme in Database based on Block Classification (블록 분류에 기반한 데이타베이스의 효율적 캐쉬 관리 기법)

  • Sin, Il-Hoon; Koh, Kern
    • Journal of KIISE: Computer Systems and Theory, v.29 no.7, pp.369-376, 2002
  • Although LRU is not adequate for databases with non-uniform reference patterns, it has been adopted in most database systems in the absence of a proper alternative. We analyze the database block reference pattern with a realistic database trace. Based on this analysis, we propose a new cache replacement policy. Trace analysis shows that extremely non-popular blocks make up about 70% of all blocks. The influence of recency on a block's re-reference likelihood is strong at first due to temporal locality, but it decreases rapidly and eventually becomes negligible as stack distance increases. Based on this observation, the RCB (Reference Characteristic Based) cache replacement policy proposed in this paper classifies all blocks into four groups by recency and re-reference likelihood and applies a different priority evaluation method to each group. RCB evicts non-popular blocks more quickly than the others and evaluates the priority of blocks that have not been referenced for a long time by frequency. In a trace-driven simulation, RCB delivers better performance than the existing policies (LRU, 2Q, LRU-K, LRFU); compared to LRU in particular, it reduces miss count by 5-12.7%. The time complexity of RCB is O(1), the same as LRU and 2Q and superior to LRU-K (O(log$_2$N)) and LRFU (O(1) ~ O(log$_2$N)).
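
A simplified sketch of the four-group idea: classify each block by recency and re-reference likelihood and treat the groups differently at eviction time. The thresholds and group names below are illustrative, not the paper's exact RCB rules.

```python
# Simplified RCB-style block classification with made-up thresholds.
def classify(stack_distance, ref_count, recency_threshold=100, freq_threshold=2):
    recent = stack_distance < recency_threshold
    popular = ref_count >= freq_threshold
    if recent and popular:
        return "hot"            # protect from eviction
    if recent:
        return "warm"           # recency still predictive here
    if popular:
        return "cold-popular"   # rank by frequency, not recency
    return "non-popular"        # evict quickly

print(classify(stack_distance=40, ref_count=5))    # -> hot
print(classify(stack_distance=500, ref_count=1))   # -> non-popular
```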

Efficient Exploration of On-chip Bus Architectures and Memory Allocation (온 칩 버스 구조와 메모리 할당에 대한 효율적인 설계 공간 탐색)

  • Kim, Sungcham; Im, Chaeseok; Ha, Soonhoi
    • Journal of KIISE: Computer Systems and Theory, v.32 no.2, pp.55-67, 2005
  • Separating computation from communication in system design allows the designer to explore the communication architecture independently of component selection and mapping. In this paper we present an iterative two-step exploration methodology for bus-based on-chip communication architecture and memory allocation, assuming that memory traces from the processing elements are given from the mapping stage. The proposed method uses a static performance estimation technique to reduce the large design space drastically and quickly, and then applies trace-driven simulation to the reduced set of design candidates for accurate performance estimation. Since local memory traffic as well as shared memory traffic is involved in bus contention, memory allocation is treated as an important axis of the design space in our technique. The viability and efficiency of the proposed methodology are validated with two real-life examples: a 4-channel digital video recorder (DVR) and an equalizer for an OFDM DVB-T receiver.
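
The two-step methodology (cheap static estimation to prune, then trace-driven simulation on the survivors) can be sketched as a generic search loop. The cost models below are stand-in stubs, not the paper's estimation or simulation models.

```python
# Sketch of prune-then-simulate design-space exploration.
def explore(candidates, estimate, simulate, keep=5):
    pruned = sorted(candidates, key=estimate)[:keep]   # fast static estimation
    return min((simulate(c), c) for c in pruned)       # slow, accurate ranking

# Toy design space: (bus_width, shared_mem_banks) pairs with fake cost models.
space = [(w, b) for w in (32, 64, 128) for b in (1, 2, 4)]
est = lambda c: 1000 / c[0] + 50 / c[1]                # crude latency proxy
sim = lambda c: est(c) * 1.1 + c[0] * c[1] * 0.01      # pretend-accurate cost
print(explore(space, est, sim))
```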

Reducing the Overhead of the Virtual Address Translation Process (가상주소 변환 과정에 대한 부담의 줄임)

  • U, Jong-Jeong
    • The Transactions of the Korea Information Processing Society, v.3 no.1, pp.118-126, 1996
  • The memory hierarchy is a useful mechanism for improving memory access speed and enlarging the program space by layering memories and separating program spaces from memory spaces. However, it needs at least two memory accesses for each data reference: a TLB (Translation Lookaside Buffer) access for address translation and a data cache access for the desired data. If the cache size exceeds the product of the page size and the cache associativity, it is difficult to access the TLB and the cache in parallel, which lengthens the critical timing path in the processor. To achieve such parallel accesses, we present a hybrid-mapped TLB that combines a direct-mapped TLB with a very small fully-associative TLB. The former reduces TLB access time, while the latter removes the former's conflict misses. Trace-driven simulation shows that, under the given workloads, the proposed TLB is effective even when the added fully-associative TLB has only four entries, because the effect of its increased misses is offset by its speed benefits.
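
The hybrid lookup path can be sketched as below: a direct-mapped main TLB backed by a tiny fully-associative one that absorbs the entries it displaces. The sizes and the victim-fill policy are illustrative assumptions.

```python
# Sketch of a hybrid-mapped TLB: direct-mapped main structure plus a small
# fully-associative (LRU) structure that catches its conflict misses.
from collections import OrderedDict

class HybridTLB:
    def __init__(self, direct_entries=64, assoc_entries=4):
        self.direct = [None] * direct_entries       # each slot: (vpn, pfn)
        self.assoc = OrderedDict()                  # LRU map: vpn -> pfn
        self.assoc_entries = assoc_entries

    def lookup(self, vpn):
        slot = vpn % len(self.direct)
        entry = self.direct[slot]
        if entry and entry[0] == vpn:               # fast direct-mapped hit
            return entry[1]
        if vpn in self.assoc:                       # hit in the small FA TLB
            self.assoc.move_to_end(vpn)
            return self.assoc[vpn]
        return None                                 # miss: walk page tables

    def fill(self, vpn, pfn):
        slot = vpn % len(self.direct)
        victim = self.direct[slot]
        if victim:                                  # displaced entry moves
            self.assoc[victim[0]] = victim[1]       # into the FA TLB
            if len(self.assoc) > self.assoc_entries:
                self.assoc.popitem(last=False)      # evict LRU entry
        self.direct[slot] = (vpn, pfn)
```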


A Low Power 3D Graphics Accelerator Considering Both Active and Standby Modes for Mobile Devices (모바일기기의 동작모드와 대기모드를 모두 고려한 저전력 3차원 그래픽 가속기)

  • Kim, Young-Sik
    • Journal of KIISE: Computer Systems and Theory, v.34 no.2, pp.57-64, 2007
  • This paper proposes a low-power texture cache for mobile 3D graphics accelerators. For such accelerators it is very important to reduce leakage power in standby mode and to reduce the memory access latency of texture mapping in active mode, which requires large memory bandwidth. The proposed structure reduces leakage power by using variable threshold values for power-mode transitions according to the texture filtering algorithm selected by the application program, which also yields a run-time gain for texture mapping. In trace-driven cache simulation, the proposed structure shows up to a 7% performance gain over the previous MSA cache under a new performance metric that considers both normalized leakage power and run-time impact.
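
A sketch of the power-mode control idea: after a filtering-mode-dependent number of idle cycles, the cache drops into a low-leakage standby state. The threshold values and mode names are hypothetical, not the paper's derived parameters.

```python
# Sketch of idle-cycle-driven power-mode transitions for a texture cache.
THRESHOLDS = {"nearest": 50, "bilinear": 200, "trilinear": 400}  # made up

class TextureCachePower:
    def __init__(self, filtering="bilinear"):
        self.threshold = THRESHOLDS[filtering]
        self.idle_cycles = 0
        self.mode = "active"

    def tick(self, accessed: bool):
        if accessed:
            self.idle_cycles = 0
            self.mode = "active"        # wake on access (small wake-up penalty)
        else:
            self.idle_cycles += 1
            if self.idle_cycles >= self.threshold:
                self.mode = "standby"   # gate leakage when long-idle
        return self.mode

cache = TextureCachePower("nearest")
for _ in range(60):
    cache.tick(accessed=False)
print(cache.mode)                        # "standby" after 50 idle cycles
```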

Efficient Buffer Allocation Policy for the Adaptive Block Replacement Scheme (적응력있는 블록 교체 기법을 위한 효율적인 버퍼 할당 정책)

  • Choi, Jong-Moo; Cho, Seong-Je; Noh, Sam-Hyuk; Min, Sang-Lyul; Cho, Yoo-Kun
    • Journal of KIISE: Computer Systems and Theory, v.27 no.3, pp.324-336, 2000
  • This paper proposes an efficient buffer management scheme to enhance the performance of the disk I/O system. Without any user-level information, the proposed scheme automatically detects the block reference patterns of applications by associating block attributes with a block's forward distance. Based on the detected patterns, the scheme applies an appropriate replacement policy to each application. We also present a new block allocation scheme that improves buffer cache performance when the kernel needs to allocate a cache block on a cache miss: it analyzes the cache hit ratio of each application based on its block reference pattern and allocates cache blocks so as to maximize the system-wide hit ratio. All of these procedures are performed online and automatically at the system level. We evaluate the scheme by trace-driven simulation. Experimental results show that our scheme significantly improves cache-block hit ratios compared to traditional schemes while incurring low overhead.
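
A simplified sketch of online pattern detection from a block trace. The paper's detector works from forward distance and block attributes; this heuristic only inspects re-reference gaps, with illustrative thresholds and policy names.

```python
# Sketch of block reference-pattern detection from a trace of block IDs.
def detect_pattern(block_trace):
    seen = {}
    repeats, gaps = 0, []
    for pos, blk in enumerate(block_trace):
        if blk in seen:
            repeats += 1
            gaps.append(pos - seen[blk])   # distance since previous reference
        seen[blk] = pos
    if repeats == 0:
        return "sequential"                # never re-referenced: MRU-like policy
    if gaps and max(gaps) - min(gaps) <= 1:
        return "looping"                   # regular re-reference period
    return "other"                         # fall back to LRU

print(detect_pattern([1, 2, 3, 4, 5]))              # sequential
print(detect_pattern([1, 2, 3, 1, 2, 3, 1, 2, 3]))  # looping
```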
