• Title/Summary/Keyword: Code Optimization


Optimization Using Partial Redundancy Elimination in SSA Form (SSA Form에서 부분 중복 제거를 이용한 최적화)

  • Kim, Ki-Tae;Yoo, Weon-Hee
    • The KIPS Transactions:PartD / v.14D no.2 / pp.217-224 / 2007
  • In order to determine values and types statically, CTOC uses the SSA Form, which separates variables according to their assignments. The SSA Form is widely used as a compiler intermediate representation for data flow analysis as well as code optimization. However, the conventional SSA Form is oriented more toward variables than toward expressions. Accordingly, redundant expressions are eliminated to optimize the expressions of the SSA Form. This paper defines partially redundant expressions in order to obtain more optimized code and implements a technique for eliminating such expressions.
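  • A minimal, illustrative sketch of partial redundancy elimination on a diamond-shaped control-flow graph is given below; it is not the CTOC implementation described above, and the block names, the expression a + b, and the temporary names are hypothetical.

    # Toy CFG: a + b is computed in 'then' and again in 'join', but not in 'else',
    # so the computation in 'join' is only partially redundant.
    cfg = {
        "entry": {"succs": ["then", "else"], "code": []},
        "then":  {"succs": ["join"], "code": [("t1", "a + b")]},
        "else":  {"succs": ["join"], "code": []},
        "join":  {"succs": [], "code": [("x", "a + b")]},
    }

    def eliminate_partial_redundancy(cfg, expr, join, preds):
        """Insert expr in predecessors where it is unavailable, then replace the
        recomputation in the join block with a phi of the predecessor copies."""
        phi_args = []
        for i, p in enumerate(preds):
            have = [dst for dst, e in cfg[p]["code"] if e == expr]
            if have:
                phi_args.append(have[0])
            else:
                tmp = f"t_pre{i}"                  # hypothetical temporary name
                cfg[p]["code"].append((tmp, expr)) # hoist the computation
                phi_args.append(tmp)
        cfg[join]["code"] = [
            (dst, f"phi({', '.join(phi_args)})") if e == expr else (dst, e)
            for dst, e in cfg[join]["code"]
        ]

    eliminate_partial_redundancy(cfg, "a + b", "join", ["then", "else"])
    for name, blk in cfg.items():
        print(name, blk["code"])
    # 'join' now reads the value through a phi instead of recomputing a + b.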

DNA Computing Adopting DNA coding Method to solve Traveling Salesman Problem (Traveling Salesman Problem을 해결하기 위한 DNA 코딩 방법을 적용한 DNA 컴퓨팅)

  • Kim, Eun-Gyeong;Yun, Hyo-Gun;Lee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.1 / pp.105-111 / 2004
  • DNA computing has been used to solve the TSP (Traveling Salesman Problem). However, when typical DNA computing is applied to the TSP, it cannot efficiently express vertices and the weights between vertices. In this paper, we propose ACO (Algorithm for Code Optimization), which applies a DNA coding method to DNA computing in order to express the vertices and the weights between vertices of a TSP efficiently. We applied ACO to the TSP; as a result, ACO could express variable-length DNA codes and the weights between vertices more efficiently than Adleman's DNA computing algorithm. In addition, compared to Adleman's algorithm, ACO reduced the search time and the biological error rate by 50% and could find a shortest path in a short time.
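  • As a rough illustration of weight-encoded DNA computing for the TSP (edges carried as strands whose length grows with the edge weight, with the shortest strand read out as in Adleman-style selection), the sketch below may help; it is not the ACO method of the paper, and the city names, weights, and vertex codes are made up.

    import itertools

    # Hypothetical 4-city instance; weights are symmetric and made up.
    weights = {("A", "B"): 2, ("B", "C"): 4, ("C", "D"): 1, ("D", "A"): 3,
               ("A", "C"): 5, ("B", "D"): 2}
    weights.update({(b, a): w for (a, b), w in weights.items()})

    vertex_code = {"A": "ACGT", "B": "CGTA", "C": "GTAC", "D": "TACG"}

    def edge_strand(u, v):
        # Fixed vertex codes plus a weight section whose length encodes the weight.
        return vertex_code[u] + "A" * weights[(u, v)] + vertex_code[v]

    def tour_strand(tour):
        # Concatenate the edge strands along the closed tour.
        return "".join(edge_strand(u, v) for u, v in zip(tour, tour[1:] + tour[:1]))

    # "Gel electrophoresis" step: among all candidate tours, keep the shortest strand.
    cities = ["A", "B", "C", "D"]
    best = min((t for t in itertools.permutations(cities) if t[0] == "A"),
               key=lambda t: len(tour_strand(t)))
    print("shortest tour:", " -> ".join(best), "strand length:", len(tour_strand(best)))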

Simulation, design optimization, and experimental validation of a silver SPND for neutron flux mapping in the Tehran MTR

  • Saghafi, Mahdi;Ayyoubzadeh, Seyed Mohsen;Terman, Mohammad Sadegh
    • Nuclear Engineering and Technology / v.52 no.12 / pp.2852-2859 / 2020
  • This paper deals with the simulation-based design optimization and experimental validation of the characteristics of an in-core silver Self-Powered Neutron Detector (SPND). Optimized dimensions of the SPND are determined by combining Monte Carlo simulations and analytical methods. As a first step, the Monte Carlo transport code MCNPX is used to follow the trajectory and fate of the neutrons emitted from an external source. This simulation seamlessly integrates various phenomena, including neutron slowing-down and shielding effects. Then, the expected number of beta particles and their energy spectrum following a neutron capture reaction in the silver emitter are fetched from the TENDEL database using the JANIS software interface and combined with the data from the first step to yield the origin and spectrum of the source electrons. Eventually, the MCNPX transport code is used for the Monte Carlo calculation of the ballistic current of beta particles in the various regions of the SPND. Then, the output current and the maximum insulator thickness that avoids breakdown are determined. The optimum SPND design is then manufactured and experimental tests are conducted. The calculated design parameters of this detector were found to be in good agreement with the experimental results.
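  • For orientation only, a back-of-the-envelope estimate of the quantity such a workflow ultimately produces (capture rate in the emitter times the fraction of betas collected) can be written down directly; the sketch below is a simplified illustration with assumed flux, geometry, and collection fraction, not the paper's MCNPX/JANIS calculation.

    import math

    e = 1.602e-19          # C, elementary charge
    phi = 1.0e13           # n/cm^2/s, assumed thermal neutron flux
    sigma = 63.3e-24       # cm^2, thermal capture cross section of natural silver
    rho, A = 10.49, 107.87 # g/cm^3 and g/mol for silver
    N_A = 6.022e23         # 1/mol, Avogadro's number

    r, L = 0.05, 10.0      # cm, assumed emitter radius and length
    V = math.pi * r**2 * L # cm^3, emitter volume
    N_atoms = rho / A * N_A * V          # silver atoms in the emitter

    capture_rate = N_atoms * sigma * phi # (n,gamma) captures per second
    f_collected = 0.3                    # assumed fraction of betas reaching the collector

    I = e * capture_rate * f_collected   # A, steady-state output current estimate
    print(f"estimated SPND current ~ {I:.2e} A")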

Capacity Design of Eccentrically Braced Frame Using Multiobjective Optimization Technique (다목적 최적화 기법을 이용한 편심가새골조의 역량설계)

  • Hong, Yun-Su;Yu, Eunjong
    • Journal of the Computational Structural Engineering Institute of Korea / v.33 no.6 / pp.419-426 / 2020
  • The structural design of a steel eccentrically braced frame (EBF) was developed and analyzed in this study through multiobjective optimization (MOO). For the optimal design, NSGA-II, one of the genetic algorithms, was utilized. The quantity of structural material and the interstory drift were selected as the objective functions of the MOO. The constraints include the strength ratios and the link rotation angles required by the structural standards, and they are imposed as penalty functions so that the values of the objective functions increase drastically when a condition is violated. The regulations in the code provision for the EBF system are based on the concept of capacity design: only the link members are allowed to yield, whereas the remaining members are intended to withstand the member forces within their elastic ranges. However, although the Pareto front obtained from the MOO satisfies the regulations in the code provision, the actual nonlinear behavior shows that the plastic deformation is concentrated in the link member of a certain story, resulting in the formation of a soft story, which violates the capacity design concept in the design code. To address this problem, another constraint based on the Eurocode was added to ensure that the maximum value of the shear overstrength factors of all links does not exceed 1.25 times the minimum value. When this constraint was added, the resulting Pareto front complied with both the design regulations and the capacity design concept. Ratios of the link length to the beam span ranged from 10% to 14%, which falls within the category of shear links. The overall design is dominated by the constraint on the ratio of the links' overstrength factors. Design quantities required by the design code, such as the interstory drift and the member strength ratios, were conservative compared to the allowable values.
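  • A hedged sketch of how such constraints could be folded into the objective functions as penalty terms for NSGA-II is shown below; the structural analysis is replaced by a stub, and the limit values, member weights, and overstrength factors are assumptions, not the paper's results.

    PENALTY = 1.0e6   # large penalty added when a constraint is violated

    def analyze(design):
        """Stub for the structural analysis: returns steel weight, maximum
        interstory drift ratio, member strength ratios, link rotation angles,
        and the shear overstrength factor of each link (placeholder values)."""
        return {"weight": sum(design), "drift": 0.012,
                "strength_ratios": [0.85, 0.92], "link_rotations": [0.06, 0.07],
                "overstrength": [1.40, 1.55]}

    def objectives(design):
        r = analyze(design)
        f1, f2 = r["weight"], r["drift"]          # the two objectives of the MOO

        # Code-provision constraints as penalties (limit values are assumed).
        if any(s > 1.0 for s in r["strength_ratios"]):
            f1 += PENALTY; f2 += PENALTY
        if any(g > 0.08 for g in r["link_rotations"]):   # shear-link rotation limit
            f1 += PENALTY; f2 += PENALTY
        if r["drift"] > 0.015:                           # interstory drift limit
            f1 += PENALTY; f2 += PENALTY

        # Added Eurocode-style constraint: the maximum link overstrength factor
        # may not exceed 1.25 times the minimum.
        if max(r["overstrength"]) > 1.25 * min(r["overstrength"]):
            f1 += PENALTY; f2 += PENALTY
        return f1, f2

    print(objectives([120.0, 95.0, 80.0]))   # hypothetical story-wise section weights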

Generating Intermediate Representation of IDL Using the CFE (CFE를 사용한 IDL 중간 표현 생성)

  • Park, Chan-Mo;Song, Gi-Beom;Hong, Seong-Pyo;Lee, Hyok;Lee, Jeong-Ki;Lee, Joon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.05a / pp.192-197 / 1999
  • Programmers who write distributed programs face a dilemma when writing the systems communication code. If the code is written by hand, or partly by hand, the speed of the application may be maximized, but the human effort required to implement and maintain the system is greatly increased. On the other hand, if the code is generated by a CORBA IDL compiler, the programming effort is reduced, but the performance of the application may be poor. Therefore, the code generated by a CORBA IDL compiler needs to be optimized. We introduce techniques that have been used for typical programming languages into the compilation of IDL. We separate compilation into three phases. The first phase parses interface definitions in IDL, manages nested scopes, and generates an AST (Abstract Syntax Tree). The second phase performs the optimization. The third phase generates code in the target language. In this paper, we focus on the first phase. From the AST, we separate each interface definition into an interface representation and a message representation. This supports separate optimization of the code in the second phase.
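  • The sketch below gives a rough picture of what such a first phase might produce for a toy IDL subset: an interface parsed into an AST whose interface part and message (operation) part are kept separate. It is not the CFE, and the grammar handled here is deliberately minimal.

    import re

    idl = """
    interface Calculator {
        long add(in long a, in long b);
        long sub(in long a, in long b);
    };
    """

    def parse_interface(text):
        name = re.search(r"interface\s+(\w+)", text).group(1)
        ops = []
        for ret, op, params in re.findall(r"(\w+)\s+(\w+)\(([^)]*)\);", text):
            args = [tuple(p.split()[-2:]) for p in params.split(",") if p.strip()]
            ops.append({"name": op, "returns": ret, "params": args})
        # The interface representation and the message representation are kept
        # separate so that a later phase can optimize the messages independently.
        return {"interface": {"name": name}, "messages": ops}

    ast = parse_interface(idl)
    print(ast["interface"])
    for msg in ast["messages"]:
        print(msg)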


DEVELOPMENT OF A CORE THERMO-FLUID ANALYSIS CODE FOR PRISMATIC GAS COOLED REACTORS

  • Tak, Nam-Il;Lee, Sung Nam;Kim, Min-Hwan;Lim, Hong Sik;Noh, Jae Man
    • Nuclear Engineering and Technology / v.46 no.5 / pp.641-654 / 2014
  • A new computer code, named CORONA (Core Reliable Optimization and thermo-fluid Network Analysis), was developed for the core thermo-fluid analysis of a prismatic gas cooled reactor. The CORONA code is targeted for whole-core thermo-fluid analysis of a prismatic gas cooled reactor, with fast computation and reasonable accuracy. In order to achieve this target, the development of CORONA focused on (1) an efficient numerical method, (2) efficient grid generation, and (3) parallel computation. The key idea for the efficient numerical method of CORONA is to solve a three-dimensional solid heat conduction equation combined with one-dimensional fluid flow network equations. The typical difficulties in generating computational grids for a whole core analysis were overcome by using a basic unit cell concept. A fast calculation was finally achieved by a block-wise parallel computation method. The objective of the present paper is to summarize the motivation and strategy, numerical approaches, verification and validation, parallel computation, and perspective of the CORONA code.
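  • A much-reduced illustration of the modeling idea (a one-dimensional coolant energy balance coupled to a solid temperature through a thermal resistance, rather than a full three-dimensional conduction plus flow-network solution) is sketched below; all geometry, property, and power values are assumed placeholders, not CORONA inputs.

    n_axial = 10
    dz = 0.079                 # m, axial node height (assumed)
    m_dot = 0.02               # kg/s, helium flow per channel (assumed)
    cp = 5193.0                # J/kg/K, helium specific heat
    T_in = 763.0               # K, core inlet temperature (assumed)
    q_node = 500.0             # W deposited in the solid around each node (assumed)
    R_solid = 0.05             # K/W, solid-to-coolant thermal resistance (assumed)

    T_fluid = [T_in]
    T_solid = []
    for i in range(n_axial):
        # 1D fluid energy balance: all node power ends up in the coolant.
        T_next = T_fluid[-1] + q_node / (m_dot * cp)
        T_bulk = 0.5 * (T_fluid[-1] + T_next)
        # Quasi-steady conduction: solid temperature follows from the resistance.
        T_solid.append(T_bulk + q_node * R_solid)
        T_fluid.append(T_next)

    for i, (tf, ts) in enumerate(zip(T_fluid[1:], T_solid)):
        print(f"node {i}: coolant {tf:6.1f} K   solid {ts:6.1f} K")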

ONE-DIMENSIONAL ANALYSIS OF THERMAL STRATIFICATION IN THE AHTR COOLANT POOL

  • Zhao, Haihua;Peterson, Per F.
    • Nuclear Engineering and Technology / v.41 no.7 / pp.953-968 / 2009
  • It is important to accurately predict the temperature and density distributions in large stratified enclosures both for design optimization and for accident analysis. Current reactor system analysis codes only provide lumped-volume based models that can give very approximate results. Previous scaling analysis has shown that stratified mixing processes in large stably stratified enclosures can be described using one-dimensional differential equations, with the vertical transport by jets modeled using integral techniques. This allows very large reductions in computational effort compared to three-dimensional CFD simulation. The BMIX++ (Berkeley mechanistic MIXing code in C++) code was developed to implement these ideas. This paper summarizes the major models of the BMIX++ code, presents the two-plume mixing experiment simulation as one validation example, and describes the code's application to the liquid salt buffer pool system in the AHTR (Advanced High Temperature Reactor) design. Three design options have been simulated, and they exhibit significantly different stratification patterns. One of the design options shows the mildest thermal stratification and is identified as the best design option. This application shows that the BMIX++ code has the capability to provide reactor designers with insights into complex mixing behavior using mechanistic methods. Similar analysis is possible for liquid-metal cooled reactors.
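  • A toy one-dimensional "filling box" calculation conveys the flavor of such models: a buoyant jet feeds a hot layer under the free surface, and the interface moves downward as hot fluid accumulates. The sketch below is not BMIX++, and the geometry, flow rate, and entrainment fraction are assumed placeholders rather than AHTR values.

    pool_height = 4.0      # m, pool depth (assumed)
    area = 2.0             # m^2, pool cross-section (assumed)
    T_cold = 600.0         # K, initial pool temperature (assumed)
    T_jet = 650.0          # K, jet discharge temperature (assumed)
    Q_jet = 1.0e-3         # m^3/s, jet volumetric flow (assumed)
    entrain = 0.5          # entrained ambient flow as a fraction of Q_jet (assumed)
    dt, t_end = 10.0, 3600.0

    hot_vol, hot_T, t = 0.0, T_jet, 0.0
    while t < t_end:
        q_in = Q_jet * (1.0 + entrain)                 # jet plus entrained cold fluid
        T_mix = (Q_jet * T_jet + Q_jet * entrain * T_cold) / q_in
        hot_T = (hot_vol * hot_T + q_in * dt * T_mix) / (hot_vol + q_in * dt)
        hot_vol += q_in * dt
        t += dt

    interface = max(pool_height - hot_vol / area, 0.0)
    print(f"hot layer: {pool_height - interface:.2f} m thick at {hot_T:.1f} K")
    print(f"cold pool below {interface:.2f} m stays at {T_cold:.1f} K")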

A NOVEL APPROACH TO FIND OPTIMIZED NEUTRON ENERGY GROUP STRUCTURE IN MOX THERMAL LATTICES USING SWARM INTELLIGENCE

  • Akbari, M.;Khoshahval, F.;Minuchehr, A.;Zolfaghari, A.
    • Nuclear Engineering and Technology / v.45 no.7 / pp.951-960 / 2013
  • The energy group structure has a significant effect on the results of multigroup transport calculations. $UO_2-PuO_2$ (MOX) is a recently developed fuel which consumes recycled plutonium. For such a fuel, which contains various resonant nuclides, the selection of the energy group structure is more crucial compared to $UO_2$ fuels. In this paper, in order to improve the accuracy of the integral results in MOX thermal lattices calculated by the WIMSD-5B code, a swarm intelligence method is employed to optimize the energy group structure of the WIMS library. In this process, the NJOY code system is used to generate the 69-group cross sections of the WIMS code for the specified energy structure. In addition, the multiplication factor and spectral indices are compared against the results of the continuous-energy MCNP-4C code to evaluate the energy group structure. Calculations performed on four different types of $H_2O$-moderated $UO_2-PuO_2$ (MOX) lattices show that the optimized energy structure yields more accurate results than the original WIMS structure.
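  • A hedged sketch of using particle swarm optimization to tune a few group boundaries is given below; the expensive evaluation (NJOY plus WIMSD-5B against a continuous-energy MCNP reference) is replaced by a stub objective, and the swarm size, bounds, and coefficients are conventional PSO choices rather than the paper's settings.

    import random

    random.seed(0)
    n_particles, n_groups, iters = 12, 3, 60
    lo, hi = 0.1, 10.0                     # eV, assumed search range for boundaries

    def objective(boundaries):
        """Stub for the discrepancy between the multigroup result and the
        continuous-energy reference; here just a smooth function with a
        made-up minimum at 0.625, 1.0, and 4.0 eV."""
        target = [0.625, 1.0, 4.0]
        return sum((b - t) ** 2 for b, t in zip(sorted(boundaries), target))

    pos = [[random.uniform(lo, hi) for _ in range(n_groups)] for _ in range(n_particles)]
    vel = [[0.0] * n_groups for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)

    w, c1, c2 = 0.7, 1.5, 1.5              # standard PSO inertia/acceleration terms
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_groups):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)

    print("optimized boundaries (eV):", [round(b, 3) for b in sorted(gbest)])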

Implementation of Dead Code Elimination in CTOC (CTOC에서 죽은 코드 제거 구현)

  • Kim, Ki-Tae;Kim, Je-Min;Yoo, Won-Hee
    • Journal of the Korea Society of Computer and Information / v.12 no.2 s.46 / pp.1-8 / 2007
  • Although Java bytecode has numerous advantages, it also has shortcomings such as slow execution speed and difficulty of analysis. Therefore, in order for a Java class file to be executed effectively in an execution environment such as the network, it is necessary to convert it into optimized code. For this purpose, we implemented CTOC. In order to determine values and types statically, CTOC uses the SSA Form, which separates variables according to their assignments. It also uses a tree form for statements. However, due to the insertion of $\phi$-functions in the process of conversion into the SSA Form, the number of nodes increases. This paper presents dead code elimination on the SSA Form to obtain more optimized code. We add a new live field to each node and perform dead code elimination on the tree structures. Test results confirm that the number of nodes decreases after dead code elimination.
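  • The sketch below shows the general mark-and-sweep flavor of such an elimination: each statement node carries a live field, statements with side effects are marked live, liveness propagates to the definitions of the variables they use, and unmarked assignments are removed. The tiny IR is hypothetical, not CTOC's tree form.

    stmts = [
        {"id": 0, "def": "a",  "uses": [],    "effect": False, "live": False},  # a = 1
        {"id": 1, "def": "b",  "uses": ["a"], "effect": False, "live": False},  # b = a + 1
        {"id": 2, "def": "c",  "uses": [],    "effect": False, "live": False},  # c = 7 (dead)
        {"id": 3, "def": None, "uses": ["b"], "effect": True,  "live": False},  # print(b)
    ]

    def eliminate_dead_code(stmts):
        defs = {s["def"]: s for s in stmts if s["def"]}
        worklist = [s for s in stmts if s["effect"]]
        for s in worklist:
            s["live"] = True                     # side-effecting statements are roots
        while worklist:
            s = worklist.pop()
            for v in s["uses"]:
                d = defs.get(v)
                if d and not d["live"]:
                    d["live"] = True             # liveness propagates to definitions
                    worklist.append(d)
        return [s for s in stmts if s["live"]]   # sweep: drop unmarked assignments

    for s in eliminate_dead_code(stmts):
        print(s["id"], s["def"], s["uses"])      # node 2 (c = 7) has been eliminated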


Code Automatic Analysis Technique for Virtualization-based Obfuscation and Deobfuscation (가상화 기반 난독화 및 역난독화를 위한 코드 자동 분석 기술)

  • Kim, Soon-Gohn
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.6 / pp.724-731 / 2018
  • Code obfuscation is a technique that makes a program difficult to understand in order to hinder program analysis and to prevent forgery or tampering. Deobfuscation is a technique that takes an obfuscated program as input and recovers the meaning of the original through reverse engineering. This paper analyzes obfuscation and deobfuscation techniques for binary code in a virtualization-based environment. Based on VMAttack, static code analysis, dynamic code analysis, and optimization techniques are analyzed in detail for virtualization-based obfuscation and deobfuscation. Through this paper, we expect that various studies on virtualization and obfuscation can be carried out. In particular, research is expected to extend from stack-based virtual machines to register-based virtual machines by adding the capabilities required to run on the latter.
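  • As a toy illustration of virtualization-based obfuscation, the sketch below translates a small computation into a custom bytecode that only a bundled stack-based interpreter understands, so a static analyst sees a handler dispatch loop instead of the original instructions; the opcode set and program are made up and unrelated to VMAttack's actual handlers.

    # Custom bytecode for: result = (x + 3) * 2
    PUSH_ARG, PUSH_CONST, ADD, MUL, RET = range(5)
    bytecode = [(PUSH_ARG, 0), (PUSH_CONST, 3), (ADD, None),
                (PUSH_CONST, 2), (MUL, None), (RET, None)]

    def vm_run(bytecode, args):
        """Stack-based dispatch loop: each opcode maps to a small handler."""
        stack = []
        for op, operand in bytecode:
            if op == PUSH_ARG:
                stack.append(args[operand])
            elif op == PUSH_CONST:
                stack.append(operand)
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == MUL:
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == RET:
                return stack.pop()

    print(vm_run(bytecode, [5]))   # prints 16, i.e. (5 + 3) * 2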