• Title/Summary/Keyword: Parallel Programs

TBBench: A Micro-Benchmark Suite for Intel Threading Building Blocks

  • Marowka, Ami
    • Journal of Information Processing Systems
    • /
    • v.8 no.2
    • /
    • pp.331-346
    • /
    • 2012
  • Task-based programming is becoming the state-of-the-art method of choice for extracting the desired performance from multi-core chips. It expresses a program in terms of lightweight logical tasks rather than heavyweight threads. Intel Threading Building Blocks (TBB) is a task-based parallel programming paradigm for multi-core processors. The performance gain of this paradigm depends to a great extent on the efficiency of its parallel constructs. The overheads incurred by these parallel constructs determine the feasibility of creating large-scale parallel programs, especially in the case of fine-grained parallelism. This paper presents a study of TBB parallelization overheads. For this purpose, a TBB micro-benchmark suite called TBBench has been developed. We use TBBench to evaluate the parallelization overheads of TBB on different multi-core machines and with different compilers. We report the relative overheads in detail and analyze the results.

Deterministic Parallelism for Symbolic Execution Programs based on a Name-Freshness Monad Library

  • Ahn, Ki Yung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.2
    • /
    • pp.1-9
    • /
    • 2021
  • In this paper, we extend a generic library framework based on the state monad to exploit deterministic parallelism in the purely functional language Haskell, and we provide benchmarks for the extended features on a multicore machine. Although purely functional programs are known to be well suited to exploiting parallelism, unintended sequential data dependencies can prohibit effective parallelism. Symbolic execution programs usually implement fresh name generation in order to prevent confusion between variables with the same name in different scopes. Such implementations are often based on sequential state management, which works against parallelism. We provide reusable primitives to help develop parallel symbolic execution programs with unbound-generics, a generic name-binding library for Haskell, avoiding sequential dependencies in fresh name generation. Our parallel extension does not modify the internal implementation of the unbound-generics library, so it cannot degrade existing serial implementations of symbolic execution based on unbound-generics. Therefore, our extension can be applied selectively, only to the parts of the source code that need parallel speedup.
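
The idea can be pictured with a rough Haskell sketch. It is not the library extension or the primitives proposed in the paper: it simply gives each parallel branch of a symbolic computation its own unbound-generics name supply, sparked with the Eval monad, so that fresh-name generation in one branch never waits on the state of the other. The branch function and the use of different base strings to keep the supplies disjoint are assumptions made for the example.

```haskell
-- A rough sketch, not the paper's extension: each parallel branch runs in
-- its own FreshM name supply, so fresh-name generation carries no
-- sequential dependency between branches.
import Control.Parallel.Strategies (runEval, rpar, rseq)
import Unbound.Generics.LocallyNameless (FreshM, Name, fresh, runFreshM, s2n)

-- Stand-in for one branch of a symbolic execution: generate n fresh names.
branch :: String -> Int -> FreshM [Name ()]
branch base n = mapM (fresh . s2n) (replicate n base)

-- Run two branches with independent name supplies, sparked in parallel.
-- Real work would be forced further (e.g. with a deepseq strategy).
twoBranches :: Int -> ([Name ()], [Name ()])
twoBranches n = runEval $ do
  xs <- rpar (runFreshM (branch "x" n))
  ys <- rpar (runFreshM (branch "y" n))
  _  <- rseq xs
  _  <- rseq ys
  return (xs, ys)
```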

Performance Comparison between Haskell Eval Monad and Cloud Haskell (Haskell Eval 모나드와 Cloud Haskell 간의 성능 비교)

  • Kim, Yeoneo;An, Hyungjun;Byun, Sugwoo;Woo, Gyun
    • Journal of KIISE
    • /
    • v.44 no.8
    • /
    • pp.791-802
    • /
    • 2017
  • Competition in the modern CPU market has shifted from raising the clock speed of a single core to increasing the number of cores. Accordingly, there is growing interest in parallel programming that makes full use of the resources of many-core processors. In this paper, we compare parallel programming models in Haskell to find a suitable model for many-core environments. Specifically, we used the Eval monad and Cloud Haskell to develop two versions of two parallel programs: plagiarism detection and K-means. We then evaluated the performance of the developed programs in 32-core and 120-core environments. The results of our experiment show that the Eval monad is highly efficient in environments with a small number of cores. On the other hand, as the number of cores increases, the Cloud Haskell version shows a 37% improvement in runtime and a 134% improvement in scalability over the Eval monad.
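
For reference, the Eval monad side of this comparison follows the standard Control.Parallel.Strategies pattern shown below; the function f and the pairwise split are placeholders, and the Cloud Haskell counterpart would instead spawn processes across nodes with the distributed-process package rather than sparking local evaluations.

```haskell
import Control.Parallel.Strategies (runEval, rpar, rseq)

-- Evaluate two pieces of work in parallel on a single multicore node.
parPair :: (a -> b) -> a -> a -> (b, b)
parPair f x y = runEval $ do
  fx <- rpar (f x)   -- spark (f x) for parallel evaluation
  fy <- rpar (f y)   -- spark (f y)
  _  <- rseq fx      -- wait for both (to weak head normal form)
  _  <- rseq fy
  return (fx, fy)
```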

A Performance Comparison between Coarray and MPI for Parallel Wave Propagation Modeling and Reverse-time Migration (코어레이와 MPI를 이용한 병렬 파동 전파 모델링과 거꿀 참반사 보정 성능 비교)

  • Ryu, Donghyun;Kim, Ahreum;Ha, Wansoo
    • Geophysics and Geophysical Exploration
    • /
    • v.19 no.3
    • /
    • pp.131-135
    • /
    • 2016
  • Coarray is a parallel processing technique introduced in the Fortran 2008 standard that can implement parallel processing with simple syntax. In this research, we examined the applicability of Coarray to parallel seismic processing by comparing the performance of seismic data processing programs written with Coarray and with MPI. We compared calculation times using seismic wave propagation modeling and one-to-one communication times using a domain decomposition technique. We also compared the performance of parallel reverse-time migration programs using Coarray and MPI. The test results show that the computing speed of the Coarray method is similar to that of MPI, whereas MPI has superior communication speed.

Implementation and Performance Analysis of High Performance Computing Library for Parallel Processing (병렬처리를 위한 고성능 라이브러리의 구현과 성능 평가)

  • 김영태;이용권
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.7
    • /
    • pp.379-386
    • /
    • 2004
  • We designed a portable parallel library, HPCL (High Performance Computing Library), with the following objectives: (1) to keep a close relationship between the parallel code and the original sequential code, which helps the parallel code track future versions of the sequential code, and (2) to enhance the performance of the parallel code. The library is an interface, written in the C and Fortran programming languages, between MPI (Message Passing Interface) and parallel programs in Fortran. Performance results were obtained on PC clusters and an IBM SP4.

A Design of An Optimizer For Conversion of Parallel Constructs of Data Parallel Language Programs (자료 병렬 언어 프로그램의 병렬 구조 변환을 위한 최적화기 설계)

  • Gu, Mi-Sun;Park, Myeong-Sun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.3
    • /
    • pp.792-803
    • /
    • 1999
  • Most data parallel language compilers are source-to-source translators. Most compilers of HPF, which is recognized as the standard data parallel language, convert a parallel program in HPF into a Fortran 77 program with message-passing primitives inserted. However, they currently generate a significant amount of inefficient code in the course of the conversion. In particular, a FORALL construct is converted into several DO loops, so the loop overhead of the generated code increases considerably. In this paper, we define and use a relation distance vector to keep the necessary information. We then evaluate and analyze the execution times of the code converted by our method and by the PARADIGM method for various array sizes.

Static Analysis of AND-parallelism in Logic Programs based on Abstract Interpretation (추상해석법을 이용한 논리언어의 AND-병렬 태스크 추출 기법)

  • Kim, Hiecheol;Lee, Yong-Doo
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 1997.11a
    • /
    • pp.79-89
    • /
    • 1997
  • Logic programming has many advantages as a paradigm for parallel programming because it offers ease of programming while retaining high expressive power due to its declarative semantics. In parallel logic programming, one of the important issues is compile-time parallelism detection. Static data-dependency analysis has been widely used to gather the information needed for the detection of AND-parallelism. However, static data-dependency analysis cannot fully detect AND-parallelism because it lacks some necessary functions, such as the propagation of groundness. As an alternative approach, abstract interpretation provides a promising way to deal with AND-parallelism detection, although a full-blown abstract interpretation is computationally inefficient since it inherently employs complex operations that are not necessary for gathering information on AND-parallelism. In this paper, we propose an abstract domain which provides a precise and efficient way to use abstract interpretation for the detection of AND-parallelism in logic programs.
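
The groundness propagation mentioned above can be illustrated with a toy abstract domain. The Haskell sketch below uses a simple three-point lattice and a naive independence test; it is a simplification for illustration, not the abstract domain proposed in the paper.

```haskell
-- A toy groundness domain: the kind of information an abstract interpreter
-- propagates in order to decide whether two goals share any possibly
-- unbound variable and can therefore run as independent AND-parallel tasks.
data Groundness = Ground   -- definitely bound to a ground term
                | Free     -- definitely an unbound variable
                | Unknown  -- no information (top of the lattice)
  deriving (Eq, Show)

-- Least upper bound: merging information from two execution paths.
lub :: Groundness -> Groundness -> Groundness
lub a b | a == b    = a
        | otherwise = Unknown

-- Two goals may run AND-parallel if every variable they share is ground.
independent :: [(String, Groundness)] -> [String] -> [String] -> Bool
independent env vars1 vars2 =
  all (\v -> lookup v env == Just Ground) (filter (`elem` vars2) vars1)
```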

Improving Haskell GC-Tuning Time Using Divide-and-Conquer (분할 정복법을 이용한 Haskell GC 조정 시간 개선)

  • An, Hyungjun;Kim, Hwamok;Liu, Xiao;Kim, Yeoneo;Byun, Sugwoo;Woo, Gyun
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.6 no.9
    • /
    • pp.377-384
    • /
    • 2017
  • The performance improvement of single-core processors has reached its limit since circuit density cannot be increased any further due to overheating. Therefore, multicore and manycore architectures have emerged as viable approaches, and parallel programming has become more important. Haskell, a purely functional language, is gaining popularity in this situation since it naturally supports parallel programming owing to beneficial features including implicit parallelism in evaluating expressions and monadic tools supporting parallel constructs. However, the performance of Haskell parallel programs is strongly influenced by the performance of the run-time system, including the garbage collector. Although a memory profiling tool, namely GC-tune, has been suggested, a more systematic way of using this tool is needed. Since GC-tune finds the optimal memory size by executing the target program with all the different possible GC options, the GC-tuning time takes too long. This paper suggests a basic divide-and-conquer method that reduces the number of GC-tune executions by reducing the search area by one-quarter at every search step. Applying this method to two parallel programs, a maximal independent set program and a K-means program, the memory tuning time is reduced by a factor of 7.78 with 98% accuracy on average.
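
One way to read the quartering step is sketched below. The measure action (running the target program with a candidate pair of GHC RTS memory sizes, e.g. -A and -H, and timing it), the rectangular search area, and the stopping condition are assumptions made for illustration; they are not the interface of GC-tune or the paper's implementation.

```haskell
import Data.List (minimumBy)
import Data.Ord  (comparing)

type Size = Int      -- a candidate memory size (e.g. in KB)
type Time = Double   -- measured running time

-- Divide-and-conquer search over a 2D (size1 x size2) rectangle:
-- measure the centre of each quadrant, keep the fastest quadrant,
-- and recurse, so the search area shrinks to a quarter at every step.
searchGC :: ((Size, Size) -> IO Time)       -- run & time one configuration
         -> (Size, Size) -> (Size, Size)    -- lower and upper corners
         -> IO (Size, Size)
searchGC measure (aLo, hLo) (aHi, hHi)
  | aHi - aLo <= 1 && hHi - hLo <= 1 = return (aLo, hLo)
  | otherwise = do
      let aMid = (aLo + aHi) `div` 2
          hMid = (hLo + hHi) `div` 2
          quadrants = [ ((aLo,  hLo ), (aMid, hMid))
                      , ((aMid, hLo ), (aHi,  hMid))
                      , ((aLo,  hMid), (aMid, hHi ))
                      , ((aMid, hMid), (aHi,  hHi )) ]
      times <- mapM (\(lo, hi) -> measure (centre lo hi)) quadrants
      let (best, _) = minimumBy (comparing snd) (zip quadrants times)
      uncurry (searchGC measure) best
  where
    centre (a1, h1) (a2, h2) = ((a1 + a2) `div` 2, (h1 + h2) `div` 2)
```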

Generic Scheduling Method for Distributed Parallel Systems (분산병렬 시스템에서 유전자 알고리즘을 이용한 스케쥴링 방법)

  • Kim, Hwa-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.1B
    • /
    • pp.27-32
    • /
    • 2003
  • This paper presents the Genetic Algorithm based Task Scheduling (GATS) method for scheduling programs with diverse embedded parallelism types on distributed parallel systems, which consist of a set of loosely coupled parallel and vector machines connected via high-speed networks. Distributed parallel processing tries to solve computationally intensive problems that have several types of parallelism on a suite of high-performance parallel machines in a manner that best utilizes the capabilities of each machine. When scheduling in distributed parallel systems, the matching of parallelism characteristics between tasks and parallel machines, rather than load balancing alone, should be handled carefully, together with the minimization of communication cost, in order to obtain more speedup. This paper proposes initialization methods for the initial population and knowledge-based mutation methods to accommodate this parallelism-type matching in the genetic algorithm.
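
A toy fitness function along these lines is sketched below. The parallelism kinds, the matching weights, and the flat per-task communication penalty are all assumptions made for illustration; they are not the GATS formulation from the paper.

```haskell
-- A chromosome assigns each task to a machine; the fitness rewards matching
-- a task's parallelism type to the machine's strength and charges a flat
-- communication cost per task, rather than balancing load alone.
data ParKind = DataParallel | TaskParallel | VectorParallel
  deriving (Eq, Show)

data Task    = Task    { taskKind :: ParKind, work  :: Double }
data Machine = Machine { bestKind :: ParKind, speed :: Double }

type Assignment = [Int]   -- machine index chosen for each task

fitness :: [Task] -> [Machine] -> Double -> Assignment -> Double
fitness tasks machines commCost assignment =
  sum (zipWith score tasks assignment)
  where
    score t i =
      let m     = machines !! i
          match = if taskKind t == bestKind m then 1.0 else 0.5
      in match * speed m * work t - commCost
```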