• Title/Summary/Keyword: Codes Benchmarking

Comparison of Physics Model for 600 MeV Protons and 290 MeV·n⁻¹ Oxygen Ions on Carbon in MCNPX

  • Lee, Arim;Kim, Donghyun;Jung, Nam-Suk;Oh, Joo-Hee;Oranj, Leila Mokhtari;Lee, Hee-Seock
    • Journal of Radiation Protection and Research / v.41 no.2 / pp.123-131 / 2016
  • Background: With the increasing number of particle accelerator facilities in operation or under construction, accurate calculations using Monte Carlo codes have become more important in the shielding design and radiation safety evaluation of accelerator facilities. Materials and Methods: Calculations with different physics models were carried out for two cases: using physics models only and using the mix-and-match method of the MCNPX code. The conditions studied were the interactions of 600 MeV protons and 290 MeV·n⁻¹ oxygen ions with a carbon target. Two cross-section libraries, JENDL High Energy File 2007 (JENDL/HE-2007) and LA150, were tested in these calculations. For the oxygen-ion interactions, the results obtained with the LAQGSM physics model and the JENDL/HE-2007 library were compared with D. Satoh's experimental data. Monte Carlo calculations with the PHITS and FLUKA codes were also carried out for further benchmarking. Results and Discussion: It was clearly found that the physics models, especially the intra-nuclear cascade model, strongly affect the proton-induced secondary neutron spectrum calculated with the MCNPX code. The choice of physics model for heavy-ion interactions did not make a large difference in secondary particle production. Conclusion: The variation of secondary neutron spectra and particle transport with the physics models in the MCNPX code was studied, and the results of this study can be used for shielding design and radiation safety evaluation.
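
A benchmark of this kind is usually summarized by the calculation-to-experiment (C/E) ratio of the secondary neutron spectrum in each energy bin. The following minimal sketch shows how tallied spectra from two physics-model runs could be compared against measured data; the file names, two-column data layout, and the Bertini/ISABEL run labels are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Hypothetical inputs: energy bin centers (MeV) and secondary neutron fluence per bin
    # from two MCNPX runs (different intra-nuclear cascade models) and from a measurement.
    # The two-column whitespace-separated layout is an assumption for this sketch.
    def load_spectrum(path):
        data = np.loadtxt(path)          # columns: energy [MeV], fluence per source particle
        return data[:, 0], data[:, 1]

    energy, calc_bertini = load_spectrum("mcnpx_bertini_tally.txt")
    _,      calc_isabel  = load_spectrum("mcnpx_isabel_tally.txt")
    _,      measured     = load_spectrum("satoh_experiment.txt")

    # Calculation-to-experiment ratio per bin; values near 1 indicate agreement.
    ce_bertini = calc_bertini / measured
    ce_isabel  = calc_isabel / measured

    for e, r1, r2 in zip(energy, ce_bertini, ce_isabel):
        print(f"{e:8.1f} MeV   C/E(Bertini)={r1:5.2f}   C/E(ISABEL)={r2:5.2f}")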

Uncertainty and sensitivity analysis in reactivity-initiated accident fuel modeling: synthesis of organisation for economic co-operation and development (OECD)/nuclear energy agency (NEA) benchmark on reactivity-initiated accident codes phase-II

  • Marchand, Olivier;Zhang, Jinzhao;Cherubini, Marco
    • Nuclear Engineering and Technology / v.50 no.2 / pp.280-291 / 2018
  • In the framework of the OECD/NEA Working Group on Fuel Safety, an RIA fuel-rod-code benchmark (Phase I) was organized in 2010-2013. It consisted of four experiments on highly irradiated fuel rodlets tested under different experimental conditions. This benchmark revealed the need to better understand the basic models incorporated in each code for realistic simulation of the complicated integral RIA tests with high-burnup fuel rods. A second phase of the benchmark (Phase II) was thus launched early in 2014 and organized into two complementary activities: (1) comparison of the results of different simulations on simplified cases, to provide additional bases for understanding the differences in the modelling of the phenomena concerned; and (2) assessment of the uncertainty of the results. The present paper provides a summary and conclusions of the second activity of Benchmark Phase II, which is based on an input uncertainty propagation methodology. The main conclusion is that uncertainties cannot fully explain the differences between the code predictions. Finally, based on the conclusions of the RIA benchmark Phases I and II, some recommendations are made.
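
As a rough illustration of what an input uncertainty propagation exercise involves, the sketch below samples uncertain inputs from assumed distributions, evaluates a placeholder fuel-response model, and reports output statistics. The parameter names, distributions, and the model itself are invented for illustration and do not reproduce any code used in the benchmark.

    import numpy as np

    rng = np.random.default_rng(42)
    n_runs = 200                                   # number of sampled code runs

    # Assumed (illustrative) input uncertainties: mean and relative std. dev.
    inputs = {
        "clad_thickness_mm":     (0.57,  0.02),    # normal, 2% std. dev.
        "fuel_conductivity":     (3.5,   0.05),    # W/m/K, 5% std. dev.
        "injected_energy_cal_g": (120.0, 0.03),    # cal/g, 3% std. dev.
    }

    def fuel_model(x):
        """Placeholder response: peak clad temperature (K).
        Stands in for a full RIA fuel-performance code."""
        return (600.0
                + 4.0 * x["injected_energy_cal_g"]
                - 50.0 * x["fuel_conductivity"]
                + 100.0 * x["clad_thickness_mm"])

    samples = []
    for _ in range(n_runs):
        x = {k: rng.normal(mu, mu * rel) for k, (mu, rel) in inputs.items()}
        samples.append(fuel_model(x))

    samples = np.array(samples)
    print(f"mean = {samples.mean():.1f} K, std = {samples.std(ddof=1):.1f} K")
    print(f"95th percentile = {np.percentile(samples, 95):.1f} K")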

Neutronics analysis of JSI TRIGA Mark II reactor benchmark experiments with SuperMC3.3

  • Tan, Wanbin;Long, Pengcheng;Sun, Guangyao;Zou, Jun;Hao, Lijuan
    • Nuclear Engineering and Technology / v.51 no.7 / pp.1715-1720 / 2019
  • The Jozef Stefan Institute (JSI) TRIGA Mark II reactor uses fuel that is a homogeneous mixture of uranium and zirconium hydride. Since its upgrade, a series of fresh-fuel steady-state benchmark experiments has been conducted. The benchmark results provide data for testing computational neutronics codes, which are important for reactor design and safety analysis. In this work, we investigated the neutronics characteristics of the JSI TRIGA Mark II reactor using SuperMC: the effective multiplication factor and two safety parameters, namely the control rod worth and the fuel temperature reactivity coefficient. The modeling and real-time cross-section generation methods of SuperMC were evaluated in the investigation. The analysis indicated the following: the effective multiplication factor was influenced by the choice of cross-section data library; the control rod worth was better evaluated with Monte Carlo codes; and the experimental fuel temperature reactivity coefficient was smaller than the calculated results because of the change in water temperature. All results were in good agreement with the experimental values. Hence, SuperMC could be used for the design and benchmarking of other TRIGA Mark II reactors.
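
For reference, the two safety parameters mentioned above are commonly derived from pairs of effective multiplication factors: reactivity is (k_eff - 1)/k_eff, the integral rod worth is the reactivity difference between rod-out and rod-in states, and the fuel temperature coefficient is the reactivity change per unit temperature change. The sketch below applies these standard relations to invented k_eff values; the numbers are not results from the paper.

    def reactivity(k_eff):
        """Reactivity in absolute units (delta-k/k)."""
        return (k_eff - 1.0) / k_eff

    # Hypothetical multiplication factors (not from the paper).
    k_rod_out, k_rod_in = 1.00350, 0.99120      # control rod fully out / fully in
    k_cold, k_hot       = 1.00350, 1.00050      # fuel at 300 K / 400 K

    # Integral control rod worth, often quoted in pcm (1 pcm = 1e-5 delta-k/k).
    rod_worth_pcm = (reactivity(k_rod_out) - reactivity(k_rod_in)) * 1e5

    # Fuel temperature reactivity coefficient, pcm per kelvin.
    alpha_pcm_per_K = (reactivity(k_hot) - reactivity(k_cold)) * 1e5 / (400.0 - 300.0)

    print(f"rod worth = {rod_worth_pcm:.0f} pcm")
    print(f"fuel temperature coefficient = {alpha_pcm_per_K:.1f} pcm/K")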

An LLVM-Based Implementation of Static Analysis for Detecting Self-Modifying Code and Its Evaluation (자체 수정 코드를 탐지하는 정적 분석방법의 LLVM 프레임워크 기반 구현 및 실험)

  • Yu, Jae-IL;Choi, Kwang-hoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.2 / pp.171-179 / 2022
  • Self-modifying code is code that changes itself during execution. This technique is frequently abused by malicious code to bypass static analysis, so identifying self-modifying code is important for detecting such malware effectively. Until now, self-modifying code has mainly been analyzed with dynamic analysis methods, but these are time-consuming and costly. If static analysis can detect self-modifying code, it will be of great help in malicious code analysis. In this paper, we propose a static analysis method that detects self-modifying code in binary executable programs converted to LLVM IR, and we apply the method to a self-modifying-code benchmark that we constructed. The experiments in this paper show that the designed static analysis is effective for well-formed LLVM IR obtained by compiling the benchmark programs. However, it has a limitation: it is difficult to detect self-modifying code in the unstructured LLVM IR obtained by lifting and transforming binaries. Overcoming this requires an effective way to lift binary code.
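
The paper's analysis works inside the LLVM framework; as a rough illustration of the underlying idea (flagging writes whose target lies in code), the toy script below scans textual LLVM IR for store instructions whose operand names a symbol that is also a defined function. This is a simplification invented here, not the authors' algorithm, and it ignores pointer provenance, aliasing, and the distinction between value and pointer operands.

    import re
    import sys

    # Toy detector: flag `store` instructions that reference a symbol which is
    # also defined as a function in the same module. A real analysis would track
    # pointer provenance through casts and GEPs and check only the pointer operand.
    DEFINE_RE = re.compile(r"^define\b[^@]*@([\w.$]+)\s*\(", re.MULTILINE)
    STORE_RE  = re.compile(r"^\s*store\b.*?@([\w.$]+)", re.MULTILINE)

    def find_suspicious_stores(ir_text):
        functions = set(DEFINE_RE.findall(ir_text))
        hits = []
        for match in STORE_RE.finditer(ir_text):
            symbol = match.group(1)
            if symbol in functions:
                hits.append((symbol, match.group(0).strip()))
        return hits

    if __name__ == "__main__":
        ir = open(sys.argv[1]).read()          # path to a textual .ll file
        for symbol, line in find_suspicious_stores(ir):
            print(f"possible self-modification of @{symbol}: {line}")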

CFD ANALYSIS OF HEAVY LIQUID METAL FLOW IN THE CORE OF THE HELIOS LOOP

  • Batta, A.;Cho, Jae-Hyun;Class, A.G.;Hwang, Il-Soon
    • Nuclear Engineering and Technology / v.42 no.6 / pp.656-661 / 2010
  • Lead alloys are very attractive nuclear coolants due to their thermal-hydraulic, chemical, and neutronic properties. Using the HELIOS (Heavy Eutectic liquid metal Loop for Integral test of Operability and Safety of PEACER) facility, a thermal-hydraulic benchmarking study has been conducted for the prediction of pressure loss in lead-alloy-cooled advanced nuclear energy systems (LACANES). The loop has several complex components that cannot be readily characterized with available pressure loss correlations. Among these components is the core, composed of a vessel, a barrel, heaters separated by complex spacers, and the plenum. Because of the complex shape of the core, its pressure loss is comparable to that of the rest of the loop. Detailed CFD simulations employing different CFD codes were used to determine the pressure loss, and it was found that the spacers contribute nearly 90 percent of the total pressure loss. Spacers are usually accounted for in system codes; however, because correlations for the exact spacer geometry are lacking, the accuracy of such models relies strongly on the assumptions used to model the spacers. CFD can be used to determine an appropriate correlation. However, applying CFD also requires careful choice of turbulence models and numerical meshes, which were selected based on extensive experience with liquid-metal flow simulations at the KALLA laboratory. In this paper, consistent results from CFX and Star-CD are obtained and compared with measured data. The measured pressure loss of the core was obtained with a differential pressure transducer located between the core inlet and outlet at a flow rate of 13.57 kg/s.
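
As background on how such component losses are typically book-kept, the sketch below sums local form losses of the type Δp = K·ρ·v²/2 over core components and reports each component's share. The loss coefficients, coolant density, and velocity are illustrative assumptions, not values from the HELIOS benchmark.

    # Local (form) pressure losses: dp = K * rho * v^2 / 2, summed per component.
    RHO_LBE = 10200.0          # kg/m^3, lead-bismuth eutectic near 300 C (approximate)
    VELOCITY = 1.2             # m/s, assumed bulk velocity in the heated section

    # Hypothetical loss coefficients per core component (not from the paper).
    loss_coefficients = {
        "inlet plenum":   0.8,
        "barrel entry":   0.5,
        "heater bundle":  1.5,
        "spacers":       25.0,   # dominant contribution, roughly a ~90% share
        "outlet plenum":  0.7,
    }

    dynamic_pressure = 0.5 * RHO_LBE * VELOCITY**2
    losses_pa = {name: k * dynamic_pressure for name, k in loss_coefficients.items()}
    total = sum(losses_pa.values())

    for name, dp in losses_pa.items():
        print(f"{name:14s} {dp / 1000.0:7.2f} kPa  ({100.0 * dp / total:4.1f} %)")
    print(f"{'total':14s} {total / 1000.0:7.2f} kPa")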

Experiments to build tera-scale cluster (Tera-scale cluster 개발을 위한 실험)

  • Hong, Jeong-Woo;Park, Hyung-Woo;Lee, Sang-San
    • Proceedings of the Korea Information Processing Society Conference / 2001.10a / pp.313-316 / 2001
  • At the end of 1999, the TeraCluster project was initiated at the KISTI Supercomputing Center to explore the possibility of PC clusters as a scientific computing platform to replace the Cray T3E system at KISTI by the year 2002. To understand whether an application scales to a teraflops-sized cluster system, running tests is unavoidable. Extensive performance tests using well-known benchmark codes that reflect the characteristics of real applications were carried out with different combinations of CPUs, system boards, and network devices. The lessons learned show the relationship between system performance and the differing needs of various applications, suggesting how-tos for building a large-scale cluster system. The 64-node and 16-node clusters with Alpha EV6 (466 MHz) and Pentium III (667 MHz) processors and inter-node networks of Fast Ethernet, SCI [1], and Myrinet [2] were evaluated. More detailed specifications of the Linux clusters are described in Table 1.
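
Scalability in tests like these is usually summarized as parallel speedup and efficiency over increasing node counts. The short sketch below computes both from benchmark wall-clock times; the node counts and timings are invented for illustration.

    # Parallel speedup S(n) = T(1)/T(n) and efficiency E(n) = S(n)/n,
    # computed from benchmark wall-clock times (values are illustrative).
    timings_s = {            # nodes -> elapsed seconds for the same problem size
        1:  5120.0,
        4:  1350.0,
        16:  390.0,
        64:  130.0,
    }

    t1 = timings_s[1]
    for nodes, t in sorted(timings_s.items()):
        speedup = t1 / t
        efficiency = speedup / nodes
        print(f"{nodes:3d} nodes: speedup = {speedup:6.1f}, efficiency = {efficiency:5.1%}")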

Identification of Microservices to Develop Cloud-Native Applications (클라우드네이티브 애플리케이션 구축을 위한 마이크로서비스 식별 방법)

  • Choi, Okjoo;Kim, Yukyong
    • Journal of Software Assessment and Valuation / v.17 no.1 / pp.51-58 / 2021
  • Microservices are not only developed independently, but can also be run and deployed independently, ensuring more flexible scaling and efficient collaboration in a cloud computing environment. This impact has led to a surge in migrating to microservices-oriented application environments in recent years. In order to introduce microservices, the problem of identifying microservice units in a single application built with a single architecture must first be solved. In this paper, we propose an algorithm-based approach to identify microservices from legacy systems. A graph is generated using the meta-information of the legacy code, and a microservice candidate is extracted by applying a clustering algorithm. Modularization quality is evaluated using metrics for the extracted microservice candidates. In addition, in order to validate the proposed method, candidate services are derived using codes of open software that are widely used for benchmarking, and the level of modularity is evaluated using metrics. It can be identified as a smaller unit of microservice, and as a result, the module quality has improved.