• Title/Summary/Keyword: Benchmarks

380 search results

A Performance Study of Embedded Multicore Processor Architectures (임베디드 멀티코어 프로세서의 성능 연구)

  • Lee, Jongbok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.1 / pp.163-169 / 2013
  • Recently, the importance of embedded systems has been growing rapidly. To satisfy the real-time constraints of such systems, high-performance embedded processors are required. Therefore, as in general-purpose computer systems, embedded processors should also be designed with multicore architectures. Using MiBench benchmarks as input, trace-driven simulations have been performed and analyzed extensively for 2-core to 16-core embedded processor architectures with different types of cores, from simple RISC to in-order and out-of-order superscalar processors. As a result, the achievable performance is as high as 23 times that of a single-core embedded RISC processor.
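
For readers unfamiliar with this kind of study, the comparison can be organized as a trace-driven sweep over core counts and core types. The Python sketch below is a minimal illustration only; the CPI values, the round-robin trace split, and the stand-in trace are assumptions, not the simulator or parameters used in the paper.

```python
# Illustrative trace-driven comparison of multicore configurations.
# CPI values and the round-robin trace partitioning are assumptions,
# not the paper's simulation setup.

TRACE = list(range(1_000_000))          # stand-in for a MiBench instruction trace

CPI = {"risc": 1.4, "inorder_ss": 0.9, "ooo_ss": 0.5}   # assumed cycles/instruction

def simulate(trace, core_type, n_cores):
    """Return total cycles when the trace is split round-robin over n_cores."""
    per_core = (len(trace) + n_cores - 1) // n_cores     # instructions on busiest core
    return per_core * CPI[core_type]

baseline = simulate(TRACE, "risc", 1)    # single-core RISC reference
for core_type in CPI:
    for n in (2, 4, 8, 16):
        cycles = simulate(TRACE, core_type, n)
        print(f"{n:2d}x {core_type:10s}: speedup {baseline / cycles:5.1f}x over 1-core RISC")
```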

Measuring the Managerial Efficiency of Insurance Companies in Saudi Arabia: A Data Envelopment Analysis Approach

  • NAUSHAD, Mohammad;FARIDI, Mohammad Rishad;FAISAL, Shaha
    • The Journal of Asian Finance, Economics and Business / v.7 no.6 / pp.297-304 / 2020
  • This paper applies Data Envelopment Analysis (DEA) to compute the managerial efficiency of 30 insurance companies listed on the Saudi stock exchange over the four years from 2015 to 2018. The sample includes both conventional and Takaful insurance companies. The insurance sector of KSA is one of the largest sectors in the country, contributing a substantial share of the non-oil economy. Efficiency measurement and evaluation will provide the sector with a venue for introspection and with benchmark frontiers. The present study utilizes the basic Banker-Charnes-Cooper (BCC) and Charnes-Cooper-Rhodes (CCR) models of DEA. Two inputs (general & administrative expenses and policy & acquisition costs) and two outputs (net premium earned and investment income & other income) were taken for the efficiency calculations. The final outcomes of the study reveal that a good number of insurance companies operating in KSA are efficient on the managerial efficiency scale. Three firms remain the leaders on the managerial efficiency frontier, and no company was found with zero or negative efficiency. It is expected that the outcome of the study will provide benchmarks to managers and a road map for further improvement.
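
For reference, the input-oriented CCR efficiency of a single decision-making unit (DMU) is the optimum of a small linear program. The sketch below solves that standard LP with scipy; the input/output figures are made up for illustration and are not the study's insurance data.

```python
# Input-oriented CCR efficiency (envelopment form) for one DMU:
#   min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0
# Data are illustrative, not the insurance-company figures from the study.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 6.0, 5.0],      # input 1 (e.g., G&A expenses) per DMU
              [3.0, 2.0, 4.0]])     # input 2 (e.g., policy & acquisition costs)
Y = np.array([[5.0, 7.0, 6.0],      # output 1 (e.g., net premium earned)
              [2.0, 3.0, 2.5]])     # output 2 (e.g., investment & other income)

def ccr_efficiency(dmu: int) -> float:
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_in = np.c_[-X[:, dmu], X]                      # X @ lam - theta*x0 <= 0
    A_out = np.c_[np.zeros(Y.shape[0]), -Y]          # -Y @ lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, dmu]]
    bounds = [(0, None)] * (n + 1)                   # theta, lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]                                  # theta = 1.0 means efficient

for j in range(X.shape[1]):
    print(f"DMU {j}: CCR efficiency = {ccr_efficiency(j):.3f}")
```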

Dynamic modeling of nonlocal compositionally graded temperature-dependent beams

  • Ebrahimi, Farzad;Fardshad, Ramin Ebrahimi
    • Advances in aircraft and spacecraft science / v.5 no.1 / pp.141-164 / 2018
  • In this paper, the thermal effect on the buckling and free vibration characteristics of functionally graded (FG) size-dependent Timoshenko nanobeams subjected to in-plane thermal loading is investigated by presenting a Navier-type solution for the first time. Material properties of the FG nanobeam are assumed to vary continuously along the thickness according to the power-law form and to be temperature-dependent. The small-scale effect is taken into consideration based on the nonlocal elasticity theory of Eringen. The nonlocal equations of motion are derived based on Timoshenko beam theory through Hamilton's principle and are solved analytically. According to the numerical results, the proposed modeling provides accurate frequency results for FG nanobeams as compared to some cases in the literature. The detailed mathematical derivations are presented and numerical investigations are performed, with emphasis placed on the effects of several parameters such as thermal loading, material distribution profile, small-scale effects, aspect ratio, and mode number on the critical buckling temperature and normalized natural frequencies of temperature-dependent FG nanobeams. It is explicitly shown that the thermal buckling and vibration behaviour of FG nanobeams is significantly influenced by these effects. Numerical results are presented to serve as benchmarks for future analyses of FG nanobeams.
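
For readers unfamiliar with the modeling ingredients named in the abstract, the usual power-law grading rule and Eringen's differential nonlocal constitutive relation take the following textbook forms (standard expressions, not equations reproduced from the paper):

```latex
% Power-law variation of a temperature-dependent property P through the thickness z
P(z,T) = \bigl[P_c(T) - P_m(T)\bigr]\left(\frac{z}{h} + \frac{1}{2}\right)^{p} + P_m(T),
\qquad -\frac{h}{2} \le z \le \frac{h}{2},

% Eringen's differential nonlocal elasticity (e_0 a: nonlocal parameter)
\bigl(1 - (e_0 a)^2 \nabla^2\bigr)\,\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}.
```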

Analysis and Improvement of I/O Performance Degradation by Journaling in a Virtualized Environment (가상화 환경에서 저널링 기법에 의한 입출력 성능저하 분석 및 개선)

  • Kim, Sunghwan;Lee, Eunji
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.6 / pp.177-181 / 2016
  • This paper analyzes the host cache effectiveness in full virtualization, particularly associated with journaling of guests. We observe that the journal access of guests degrades cache performance significantly due to the write-once access pattern and the frequent sync operations. To remedy this problem, we design and implement a novel caching policy, called PDC (Pollution Defensive Caching), that detects the journal accesses and prevents them from entering the host cache. The proposed PDC is implemented in QEMU-KVM 2.1 on Linux 4.14 and provides 3-32% performance improvement for various file and I/O benchmarks.
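
A minimal sketch of the admission idea described above follows. The detection heuristic and interfaces are assumptions made purely for illustration; the actual PDC policy is implemented inside QEMU-KVM, not in Python.

```python
# Pollution-defensive admission sketch: writes that look like guest journal
# traffic (write-once, immediately synced) bypass the host page cache.
# The journal-detection heuristic below is an assumption for illustration only.

class PollutionDefensiveCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}                 # block -> data (host page cache stand-in)
        self.journal_blocks = set()     # blocks classified as journal writes

    def classify_journal(self, block, synced):
        # Heuristic: a block written once and fsync'ed right away is journal-like.
        return synced and block not in self.cache

    def write(self, block, data, synced=False):
        if self.classify_journal(block, synced):
            self.journal_blocks.add(block)
            return "bypass"             # send straight to backing storage
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))   # naive eviction
        self.cache[block] = data
        return "cached"

pdc = PollutionDefensiveCache(capacity=1024)
print(pdc.write(10, b"data page"))            # cached
print(pdc.write(999, b"journal rec", True))   # bypass
```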

Acceleration of Simulated Fault Injection Using a Checkpoint Forwarding Technique

  • Na, Jongwhoa;Lee, Dongwoo
    • ETRI Journal / v.39 no.4 / pp.605-613 / 2017
  • Simulated fault injection (SFI) is widely used to assess the effectiveness of fault tolerance mechanisms in safety-critical embedded systems (SCESs) because of its advantages such as controllability and observability. However, the long test time of SFI, due to the large number of test cases and the complex simulation models of modern SCESs, has been identified as a limiting factor. We present a method that can accelerate an SFI tool using a checkpoint forwarding (CF) technique. To evaluate the performance of CF-based SFI (CF-SFI), we have developed a CF mechanism using Verilog fault-injection tools and two systems under test (SUTs): a single-core-based co-simulation model and a triple modular redundant co-simulation model. Both systems use the Verilog simulation model of the OpenRISC 1200 processor and can execute the embedded benchmarks from MiBench. We investigate the effectiveness of the CF mechanism and evaluate the two SUTs by measuring the test time as well as the failure rates. Compared to SFI with no CF mechanism, the proposed CF-SFI approach reduces the test time of the two SUTs by 29%-45%.
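
The checkpoint-forwarding idea can be summarized as: instead of re-running the simulation from reset for every injected fault, restore the nearest saved checkpoint before the injection time and simulate only the remaining interval. The Python sketch below illustrates that bookkeeping only; the checkpoint schedule, cost model, and interfaces are assumptions, not the Verilog tooling described in the paper.

```python
# Checkpoint forwarding for simulated fault injection (illustrative only).
import bisect

CHECKPOINT_TIMES = [0, 1000, 2000, 3000, 4000]      # assumed checkpoint schedule

def simulate_interval(start, end):
    """Stand-in for cycle-accurate simulation cost between two time points."""
    return end - start                               # 1 "unit" of work per cycle

def run_campaign(injection_times, horizon, use_forwarding):
    total_work = 0
    for t_inject in injection_times:
        if use_forwarding:
            idx = bisect.bisect_right(CHECKPOINT_TIMES, t_inject) - 1
            start = CHECKPOINT_TIMES[idx]            # restore nearest checkpoint
        else:
            start = 0                                # classic SFI: replay from reset
        total_work += simulate_interval(start, horizon)
    return total_work

faults = [1500, 2500, 3500, 4500]
print("baseline :", run_campaign(faults, horizon=5000, use_forwarding=False))
print("CF-SFI   :", run_campaign(faults, horizon=5000, use_forwarding=True))
```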

Research on Low-energy Adaptive Clustering Hierarchy Protocol based on Multi-objective Coupling Algorithm

  • Li, Wuzhao;Wang, Yechuang;Sun, Youqiang;Mao, Jie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1437-1459 / 2020
  • Wireless Sensor Networks (WSN) are distributed sensor networks whose terminals are sensors that can sense and monitor the environment. Sensors are typically battery-powered and deployed where the batteries are difficult to replace. Therefore, minimizing node energy consumption and extending the network's life cycle are problems that must be faced. The low-energy adaptive clustering hierarchy (LEACH) protocol is an adaptive clustering topology algorithm that makes the nodes in the network consume energy in a relatively balanced way and prolongs the network lifetime. In this paper, a novel multi-objective LEACH protocol is proposed. To solve it, we design a multi-objective coupling algorithm based on the bat algorithm (BA), the glowworm swarm optimization algorithm (GSO), and the bacterial foraging optimization algorithm (BFO). The advantages of BA, GSO, and BFO are inherited in the multi-objective coupling algorithm (MBGF), which is tested on the ZDT and SCH benchmarks; the results show that MBGF is superior. The multi-objective coupling algorithm is then applied to the multi-objective LEACH protocol, and experimental results show that the multi-objective LEACH protocol can greatly reduce node energy consumption and prolong the network life cycle.
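
For context, the classic LEACH protocol rotates the cluster-head role by comparing a random number with the threshold T(n) = p / (1 - p * (r mod 1/p)) for nodes that have not served recently. The sketch below shows only that baseline election step, not the multi-objective MBGF-coupled variant proposed in the paper.

```python
# Classic LEACH cluster-head election threshold (baseline protocol only).
import random

P = 0.05                 # desired fraction of cluster heads per round
EPOCH = round(1 / P)     # rotation period: every node serves once per EPOCH rounds

def threshold(round_no: int) -> float:
    return P / (1 - P * (round_no % EPOCH))

def elect_cluster_heads(nodes, round_no, served_recently):
    heads = []
    for node in nodes:
        if node in served_recently:          # nodes rest until the epoch ends
            continue
        if random.random() < threshold(round_no):
            heads.append(node)
    return heads

nodes = list(range(100))
print("round 0 cluster heads:", elect_cluster_heads(nodes, 0, served_recently=set()))
```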

A Performance Study of Multi-core Out-of-Order Superscalar Processor Architecture (멀티코어 비순차 수퍼스칼라 프로세서의 성능 연구)

  • Lee, Jong-Bok
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.10 / pp.1502-1507 / 2012
  • In order to overcome hardware complexity and power consumption problems, the multi-core architecture has recently become prevalent. For hardware simplicity, a RISC processor is usually adopted as the unit core processor. However, if the performance of the unit core processor is enhanced, the overall performance of the multi-core processor architecture can be further increased. In this paper, an out-of-order superscalar processor is utilized as the core of the multi-core processor architecture. Using SPEC 2000 benchmarks as input, trace-driven simulations have been performed extensively for 2 to 16 out-of-order superscalar cores. As a result, the 16-core out-of-order superscalar processor with a window size of 16 achieved a 17.4 times speedup over the single-core out-of-order superscalar processor, and a 50 times speedup over the single-core RISC processor. When compared for the same number of cores on average, the multi-core out-of-order superscalar processor achieved a 3.2 times speedup over the multi-core RISC processor and a 1.6 times speedup over the multi-core in-order superscalar processor.

Efficient Hybrid Transactional Memory Scheme using Near-optimal Retry Computation and Sophisticated Memory Management in Multi-core Environment

  • Jang, Yeon-Woo;Kang, Moon-Hwan;Chang, Jae-Woo
    • Journal of Information Processing Systems / v.14 no.2 / pp.499-509 / 2018
  • Recently, hybrid transactional memory (HyTM) has gained much interest from researchers because it combines the advantages of hardware transactional memory (HTM) and software transactional memory (STM). To provide concurrency control of transactions, existing HyTM-based studies use a bloom filter. However, they fail to overcome the typical false positive errors of a bloom filter. Although the existing studies use a global lock, the efficiency of global lock-based memory allocation is significantly low in a multi-core environment. In this paper, we propose an efficient hybrid transactional memory scheme using near-optimal retry computation and sophisticated memory management in order to efficiently process transactions in a multi-core environment. First, we propose a near-optimal retry computation algorithm that provides an efficient HTM configuration using machine learning algorithms, according to the characteristics of a given workload. Second, we provide efficient concurrency control for transactions in different environments by using a sophisticated bloom filter. Third, we propose a memory management scheme optimized for the CPU cache line, in order to provide fast transaction processing. Finally, our performance evaluation shows that our HyTM scheme achieves up to 2.5 times better performance on the Stanford Transactional Applications for Multi-Processing (STAMP) benchmarks than state-of-the-art algorithms.
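
The role of the bloom filter in HyTM conflict detection, and the false positives the abstract refers to, can be seen in a few lines. The sketch below is a generic bloom-filter signature for transactional read/write sets; it is an illustration only and not the "sophisticated" filter proposed by the authors.

```python
# Bloom-filter signatures for transactional read/write-set conflict checks.
# A set bit shared by two signatures signals a *possible* conflict, which is
# where the false positives mentioned in the abstract come from.
import hashlib

class BloomSignature:
    def __init__(self, bits=256, hashes=3):
        self.bits, self.hashes, self.word = bits, hashes, 0

    def _positions(self, addr):
        for i in range(self.hashes):
            h = hashlib.blake2b(f"{addr}:{i}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "little") % self.bits

    def add(self, addr):
        for pos in self._positions(addr):
            self.word |= 1 << pos

    def may_conflict(self, other):
        return (self.word & other.word) != 0    # any shared bit => possible conflict

tx1, tx2 = BloomSignature(), BloomSignature()
tx1.add(0x1000)                 # tx1 writes address 0x1000
tx2.add(0x2000)                 # tx2 accesses a different address
print(tx1.may_conflict(tx2))    # usually False, occasionally True (false positive)
```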

Compiling Haskell to Java via an Intermediate Code L (중간언어 L-코드를 이용한 Haskell-Java 언어 번역기 구현)

  • Choi, Kwang-Hoon;Han, Tai-Sook
    • Journal of KIISE:Software and Applications / v.28 no.12 / pp.955-965 / 2001
  • We propose a systematic method of compiling Haskell, based on the Spineless Tagless G-machine (STGM), for the Java Virtual Machine (JVM). We introduce an intermediate language called L-code to identify each micro-operation of the machine by its own instruction; each macro-operation of the machine is identified by a binding. Each instruction of the L-code can be easily translated into Java statements. After our representation decisions are made, the L-code program obtained from an STG program is translated into a Java program according to our compilation rules. Our experiments show that the execution times of the translated benchmarks are competitive with those in the Haskell interpreter Hugs, particularly when the Glasgow Haskell Compiler's STG-level optimizations are applied.

Bagging deep convolutional autoencoders trained with a mixture of real data and GAN-generated data

  • Hu, Cong;Wu, Xiao-Jun;Shu, Zhen-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5427-5445 / 2019
  • While deep neural networks have achieved remarkable performance in representation learning, a huge amount of labeled training data is usually required by supervised deep models such as convolutional neural networks. In this paper, we propose a new representation learning method, namely generative adversarial network (GAN) based bagging deep convolutional autoencoders (GAN-BDCAE), which can map data to diverse hierarchical representations in an unsupervised fashion. Boosting the size of the training data, training deep models, and aggregating diverse learning machines are the three principal avenues towards increasing the representation learning capabilities of neural networks, and we focus on combining these three techniques. To this aim, we adopt GAN for realistic unlabeled sample generation and bagging deep convolutional autoencoders (BDCAE) for robust feature learning. The proposed method improves the discriminative ability of the learned feature embedding for solving subsequent pattern recognition problems. We evaluate our approach on three standard benchmarks and demonstrate the superiority of the proposed method compared to traditional unsupervised learning methods.
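
The bagging-and-aggregation step of such an approach can be sketched independently of the network details: pool real and GAN-generated samples, train each member on a bootstrap resample, and concatenate the learned codes. In the sketch below a tiny stand-in encoder (PCA) replaces the deep convolutional autoencoder purely to make the ensemble structure concrete; the data, sizes, and member count are assumptions.

```python
# Bagging structure of a GAN-BDCAE-style ensemble, with PCA as a stand-in
# for each deep convolutional autoencoder (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
real_data = rng.normal(size=(500, 64))          # stand-in real samples
gan_data = rng.normal(size=(500, 64))           # stand-in GAN-generated samples
pool = np.vstack([real_data, gan_data])         # mixture used for training

def train_bagged_encoders(data, n_members=5, code_dim=8):
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(data), size=len(data))   # bootstrap resample
        enc = PCA(n_components=code_dim).fit(data[idx])    # "autoencoder" stand-in
        members.append(enc)
    return members

def embed(members, x):
    # Aggregate by concatenating every member's code for the same input.
    return np.hstack([m.transform(x) for m in members])

encoders = train_bagged_encoders(pool)
print(embed(encoders, real_data[:3]).shape)     # (3, n_members * code_dim)
```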