• Title/Summary/Keyword: Space Partitioning


Implementation and Performance Evaluation of Parallel Programming Translator for High Performance Fortran (High Performance Fortran 병렬 프로그래밍 변환기의 구현 및 성능 평가)

  • Kim, Jung-Gwon;Hong, Man-Pyo;Kim, Dong-Gyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.4
    • /
    • pp.901-915
    • /
    • 1999
  • Parallel computers are known to offer excellent performance per cost while satisfying scalability and high performance. However, parallel machines have enjoyed only limited success because of the difficulty of parallel programming and the lack of portability between parallel machines. Recently, researchers have sought to develop data-parallel languages that provide machine-independent programming systems. A data-parallel language such as High Performance Fortran provides a basis for writing a parallel program in a global name space by partitioning data and computation and generating message-passing functions. In this paper, we describe the Parallel Programming Translator (PPTran), a source-to-source data-parallel compiler that generates an MPI SPMD parallel program from an HPF input program through four phases (data dependence analysis, data partitioning, computation partitioning, and code generation with explicit message passing), and we verify the performance of PPTran.

  • PDF
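Although the abstract does not give PPTran's distribution scheme, HPF's standard BLOCK distribution illustrates the kind of data and computation partitioning such a translator emits. The sketch below is a plain-Python illustration under our own naming (`block_range` and `local_sum` are hypothetical helpers, not part of PPTran): it computes the index range each process owns and applies the owner-computes rule.

```python
def block_range(n, nprocs, rank):
    # Index range [lo, hi) owned by `rank` under an HPF-style BLOCK
    # distribution of n elements over nprocs processes.
    base, extra = divmod(n, nprocs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

def local_sum(data, nprocs, rank):
    # Owner-computes rule: each process iterates only over the
    # indices it owns; partial results would then be combined via
    # message passing (e.g., an MPI reduction).
    lo, hi = block_range(len(data), nprocs, rank)
    return sum(data[lo:hi])
```

Summing the per-rank partial results reproduces the sequential answer, which is the invariant an SPMD translation must preserve.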

Safety Assessment on Disposal of HLW from P&T Cycle (핵변환 잔류 고준위 방사성 폐기물 처분 성능 평가)

  • 이연명;황용수;강철형
    • Tunnel and Underground Space
    • /
    • v.11 no.2
    • /
    • pp.132-145
    • /
    • 2001
  • The purpose of this study is to quantify the advantages and disadvantages, in terms of environmental friendliness, of partitioning in the nuclear fuel cycle. To this end, a preliminary study on the quantitative effect of partitioning on the permanent disposal of spent PWR and CANDU fuel (HLW) was carried out. Before any analysis, a reference radionuclide release scenario from a potential repository embedded in crystalline rock was developed. First, the features, events and processes (FEPs) that lead to the release of nuclides from waste disposed of in a repository and their transport to and through the biosphere were identified. Based on the selected FEPs, the ‘Well Scenario’, which might be the worst-case scenario, was set up. For the given scenario, annual individual doses to a local resident exposed to the radioactive hazard were estimated and compared to those from direct disposal. Even though partitioning and transmutation could be an ideal way to reduce the inventory, which eventually shortens the release time, lowers the peaks in the annual dose, and minimizes the repository area through proper handling of nuclides, it must overcome major disadvantages such as technical issues in the partitioning and transmutation system, cost, public acceptance, and environmental issues. In this regard, some relevant issues are also discussed to indicate directions for further study.

  • PDF

Adaptive Random Testing through Iterative Partitioning with Enlarged Input Domain (입력 도메인 확장을 이용한 반복 분할 기반의 적응적 랜덤 테스팅 기법)

  • Shin, Seung-Hun;Park, Seung-Kyu
    • The KIPS Transactions:PartD
    • /
    • v.15D no.4
    • /
    • pp.531-540
    • /
    • 2008
  • Adaptive Random Testing (ART) is a family of test case generation algorithms designed to achieve better fault-detection capability than Random Testing (RT) by spreading test cases evenly across the input domain. Two ART algorithms, Distance-based ART (D-ART) and Restricted Random Testing (RRT), were shown to have a significant computational drawback: they consume quadratic runtime. To reduce the computation of D-ART and RRT, a strategy of iterative partitioning of the input domain was proposed, which achieved moderate computation cost with relatively high fault-detection performance. These algorithms, however, still produce non-uniform distributions of test cases, which hampers scalability. In this paper we analyze the distribution of test cases under the iterative partitioning strategy and propose a new input domain enlargement method that makes the test cases much more evenly distributed. Simulation results show that the proposed method improves the mean relative F-measure by about 3 percent for a 2-dimensional input domain and by 10 percent for a 3-dimensional space.
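For context, the D-ART idea the abstract contrasts against can be sketched as fixed-size-candidate-set ART: each new test case is the candidate whose nearest already-executed test case is farthest away. This is a generic illustration under our own function names, not the iterative-partitioning method the paper proposes.

```python
import random

def fscs_art_next(executed, candidates):
    # Fixed-size-candidate-set ART: return the candidate whose nearest
    # executed test case is farthest away (maximin Euclidean distance).
    def min_dist(c):
        return min(sum((a - b) ** 2 for a, b in zip(c, e)) ** 0.5
                   for e in executed)
    return max(candidates, key=min_dist)

def generate_tests(n, dim=2, k=10, seed=1):
    # Build n test cases in the unit hypercube; each step draws k random
    # candidates and keeps the most isolated one. The per-step cost grows
    # with the number of executed tests, hence the quadratic total runtime
    # mentioned in the abstract.
    rng = random.Random(seed)
    tests = [[rng.random() for _ in range(dim)]]
    while len(tests) < n:
        cands = [[rng.random() for _ in range(dim)] for _ in range(k)]
        tests.append(fscs_art_next(tests, cands))
    return tests
```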

CHARMS: A Mapping Heuristic to Explore an Optimal Partitioning in HW/SW Co-Design (CHARMS: 하드웨어-소프트웨어 통합설계의 최적 분할 탐색을 위한 매핑 휴리스틱)

  • Adeluyi, Olufemi;Lee, Jeong-A
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.9
    • /
    • pp.1-8
    • /
    • 2010
  • The key challenge in HW/SW co-design is how to choose the appropriate HW/SW partitioning from the vast array of possible options in the mapping set. In this paper we present a unique and efficient approach to this problem, the Customized Heuristic Algorithm for Reducing Mapping Sets (CHARMS). CHARMS uses sensitivity to individual task computational complexity, as well as computed weighted values of metrics that influence system performance, to streamline the mapping sets and extract the optimal cases. Using an H.263 encoder, we show that CHARMS sieves out 95.17% of the sub-optimal mapping sets, leaving the designer with the best 4.83% of cases to select from for run-time implementation.

An Efficient Pattern Partitioning Method in Multi-dimensional Feature Space (다차원 특징 공간에서의 효율적 패턴 분할 기법)

  • Kim, Jin-Il
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.3
    • /
    • pp.833-841
    • /
    • 1998
  • The aim of this study is to propose an efficient method for partitioning a multi-dimensional feature space into pattern subspaces for the automated generation of fuzzy rules. The suggested method is based on sequential subdivision of the fuzzy subspace, and the size of the constructed pattern space is variable. Under this procedure, the n-dimensional pattern space, after considering the distribution of characteristic patterns, is partitioned into two different pattern subspaces. From the two subspaces, the pattern space for further subdivision is chosen; this subdivision procedure then repeats recursively until the stopping condition is fulfilled. The result of this study is applied to bands 2, 4, and 7 of Landsat TM satellite imagery, and satisfactory results are obtained.

  • PDF
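The recursive bisection idea described above can be sketched under simplifying assumptions (axis-aligned splits of the widest dimension and a pattern-count stopping condition; the paper's actual split criterion considers the pattern distribution, so treat this as an illustration only):

```python
def split_space(patterns, bounds):
    # Bisect the widest dimension of the bounding box into two subspaces.
    widths = [hi - lo for lo, hi in bounds]
    axis = widths.index(max(widths))
    mid = (bounds[axis][0] + bounds[axis][1]) / 2.0
    left = [p for p in patterns if p[axis] < mid]
    right = [p for p in patterns if p[axis] >= mid]
    lb, rb = list(bounds), list(bounds)
    lb[axis] = (bounds[axis][0], mid)
    rb[axis] = (mid, bounds[axis][1])
    return (left, lb), (right, rb)

def partition(patterns, bounds, min_count=2):
    # Repeatedly choose the subspace holding the most patterns and
    # subdivide it, until every region holds at most min_count patterns
    # (a simple stand-in for the paper's stopping condition).
    regions = [(patterns, bounds)]
    while True:
        regions.sort(key=lambda r: len(r[0]), reverse=True)
        pats, b = regions[0]
        if len(pats) <= min_count:
            return regions
        regions = regions[1:] + list(split_space(pats, b))
```

Each split keeps the total pattern count unchanged while shrinking the densest region, mirroring the "choose one subspace and subdivide further" loop in the abstract.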

Improvement of the subcooled boiling model using a new net vapor generation correlation inferred from artificial neural networks to predict the void fraction profiles in the vertical channel

  • Tae Beom Lee;Yong Hoon Jeong
    • Nuclear Engineering and Technology
    • /
    • v.54 no.12
    • /
    • pp.4776-4797
    • /
    • 2022
  • In the one-dimensional thermal-hydraulic (TH) codes, a subcooled boiling model to predict the void fraction profiles in a vertical channel consists of wall heat flux partitioning, the vapor condensation rate, the bubbly-to-slug flow transition criterion, and drift-flux models. Model performance has been investigated in detail, and necessary refinements have been incorporated into the Safety and Performance Analysis Code (SPACE) developed by the Korean nuclear industry for the safety analysis of pressurized water reactors (PWRs). The necessary refinements to models related to pumping factor, net vapor generation (NVG), vapor condensation, and drift-flux velocity were investigated in this study. In particular, a new NVG empirical correlation was also developed using artificial neural network (ANN) techniques. Simulations of a series of subcooled flow boiling experiments at pressures ranging from 1 to 149.9 bar were performed with the refined SPACE code, and reasonable agreement with the experimental data for the void fraction in the vertical channel was obtained. From the root-mean-square (RMS) error analysis for the predicted void fraction in the subcooled boiling region, the results with the refined SPACE code produce the best predictions for the entire pressure range compared to those using the original SPACE and RELAP5 codes.

A Memory-based Learning using Repetitive Fixed Partitioning Averaging (반복적 고정분할 평균기법을 이용한 메모리기반 학습기법)

  • Yih, Hyeong-Il
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.11
    • /
    • pp.1516-1522
    • /
    • 2007
  • We previously proposed the FPA (Fixed Partition Averaging) method to improve the storage requirement and classification rate of Memory Based Reasoning. The algorithm performed reasonably well in many areas, but it led to some memory overhead and lengthy computation in multi-class areas. We propose a Repetitive FPA algorithm that repetitively partitions the pattern space in multi-class areas. The proposed method exhibits performance comparable to k-NN with far fewer patterns, and better results than the EACH system, which implements the NGE theory.

  • PDF
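The FPA reduction the abstract builds on can be sketched as fixed grid partitioning plus class-wise averaging. This 2-D toy version is our own simplification, not the authors' implementation: it keeps one averaged representative per cell and class, then classifies with 1-NN over the representatives.

```python
def fpa_representatives(patterns, labels, bins=4):
    # Partition the unit square into a bins x bins grid and average the
    # patterns of each class within every cell (FPA-style reduction).
    cells = {}
    for p, y in zip(patterns, labels):
        key = (min(int(p[0] * bins), bins - 1),
               min(int(p[1] * bins), bins - 1), y)
        cells.setdefault(key, []).append(p)
    # One (mean_vector, label) pair per occupied (cell, class).
    return [([sum(c) / len(pts) for c in zip(*pts)], y)
            for (_, _, y), pts in cells.items()]

def classify(x, reps):
    # 1-NN over the averaged representatives instead of all stored patterns.
    return min(reps,
               key=lambda r: sum((a - b) ** 2 for a, b in zip(x, r[0])))[1]
```

The storage saving is the point: the representative list is never larger, and usually far smaller, than the raw pattern set.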

A Hashing Method Using PCA-based Clustering (PCA 기반 군집화를 이용한 해슁 기법)

  • Park, Cheong Hee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.6
    • /
    • pp.215-218
    • /
    • 2014
  • In hashing-based methods for approximate nearest neighbor (ANN) search, data points are mapped to k-bit binary codes and nearest neighbors are searched in the binary embedding space. In this paper, we present a hashing method using a PCA-based clustering method, Principal Direction Divisive Partitioning (PDDP). PDDP is a clustering method that repeatedly partitions the cluster with the largest variance into two clusters using the first principal direction. The proposed hashing method utilizes the first principal direction as the projection direction for binary coding. Experimental results demonstrate that the proposed method is competitive with other hashing methods.
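A single PDDP step as described above can be sketched in pure Python: find the first principal direction by power iteration and split by the sign of the projection; each such split contributes one bit of the binary code. This is an illustrative sketch, not the paper's code.

```python
import random

def first_principal_direction(pts, iters=100, seed=0):
    # Leading eigenvector of the (unnormalized) covariance matrix,
    # computed by power iteration: v <- normalize(X^T (X v)).
    d = len(pts[0])
    m = [sum(p[i] for p in pts) / len(pts) for i in range(d)]
    x = [[p[i] - m[i] for i in range(d)] for p in pts]
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in range(d)]
    for _ in range(iters):
        s = [sum(r[i] * v[i] for i in range(d)) for r in x]                  # X v
        w = [sum(s[k] * x[k][i] for k in range(len(x))) for i in range(d)]   # X^T X v
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return m, v

def pddp_split(pts):
    # One PDDP step: split the cluster by the sign of the projection of
    # each (centered) point onto the first principal direction. The same
    # sign serves as one bit of the point's hash code.
    m, v = first_principal_direction(pts)
    left, right = [], []
    for p in pts:
        proj = sum((p[i] - m[i]) * v[i] for i in range(len(p)))
        (left if proj < 0 else right).append(p)
    return left, right
```

In the full method, the split with the largest variance is chosen at each round, so k rounds yield a k-bit code per point.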

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality. Further, it is called a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed within a mode. In this paper, we address a HW/SW partitioning problem: the HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimal total system cost for the allocation/mapping of processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. The key to solving the problem is the degree to which the potential parallelism among module executions is exploited. However, because the search space of this parallelism is inherently excessively large, and to keep schedulability analysis simple, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single-mode and multi-mode multi-task applications, respectively, compared with the conventional method.

Fuzzy Decision Tree Induction to Obliquely Partition a Feature Space (특징공간을 사선 분할하는 퍼지 결정트리 유도)

  • Lee, Woo-Hang;Lee, Keon-Myung
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.3
    • /
    • pp.156-166
    • /
    • 2002
  • Decision tree induction is a useful machine learning approach for extracting classification rules from a set of feature-based examples. According to the partitioning style of the feature space, decision trees are categorized into univariate and multivariate decision trees. Due to observation error, uncertainty, subjective judgment, and so on, real-world data are prone to errors in their feature values. To make decision trees robust against such errors, there have been various attempts to incorporate fuzzy techniques into decision tree construction. Several studies have incorporated fuzzy techniques into univariate decision trees; for multivariate decision trees, however, little research has been done along these lines. This paper proposes a fuzzy decision tree induction method that builds fuzzy multivariate decision trees, named fuzzy oblique decision trees. To show the effectiveness of the proposed method, it also presents some experimental results.