• Title/Summary/Keyword: Uncertainty-quantification


Verification of Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE)

  • Khuwaileh, Bassam; Williams, Brian; Turinsky, Paul; Hartanto, Donny
    • Nuclear Engineering and Technology, v.51 no.4, pp.968-976, 2019
  • This paper presents a number of verification case studies for a recently developed sensitivity/uncertainty code package. The code package, ROMUSE (Reduced Order Modeling based Uncertainty/Sensitivity Estimator), is an effort to provide an analysis tool to be used in conjunction with reactor core simulators, in particular the Virtual Environment for Reactor Applications (VERA) core simulator. ROMUSE has been written in C++ and is currently capable of performing various types of parameter perturbations and associated sensitivity analysis, uncertainty quantification, surrogate model construction, and subspace analysis. The current version 2.0 has the capability to interface with the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) code, which gives ROMUSE access to the various algorithms implemented within DAKOTA, most importantly model calibration. The verification study is performed via two basic problems and two reactor physics models. The first problem is used to verify the ROMUSE single-physics gradient-based range finding capability using an abstract quadratic model. The second problem is the Brusselator problem, a coupled problem representative of multi-physics problems, which is used to test the capability of constructing surrogates via ROMUSE-DAKOTA. Finally, light water reactor pin cell and sodium-cooled fast reactor fuel assembly problems are simulated via SCALE 6.1 to test the ROMUSE capability for uncertainty quantification and sensitivity analysis purposes.
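
As a rough illustration of the gradient-based range-finding (active subspace) idea that the quadratic verification problem exercises, the sketch below is a minimal stand-alone example, not ROMUSE code: the quadratic test function, its dimensions, and all variable names are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Abstract quadratic test model f(x) = x^T A x with a low-rank A, so that its
# gradient 2*A*x is confined to a low-dimensional subspace of the input space.
n_dim, rank = 10, 2
B = rng.standard_normal((n_dim, rank))
A = B @ B.T

def grad_f(x):
    return 2.0 * A @ x

# Gradient-based range finding: sample gradients over the input range, build
# their second-moment matrix, and keep the dominant eigenvectors.
samples = rng.uniform(-1.0, 1.0, size=(500, n_dim))
grads = np.array([grad_f(x) for x in samples])
C = grads.T @ grads / len(samples)

eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# The eigenvalue spectrum drops sharply after `rank`, exposing the active subspace.
active_basis = eigvec[:, :rank]
print("eigenvalues:", np.round(eigval, 4))
print("active subspace basis shape:", active_basis.shape)
```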

McCARD/MIG stochastic sampling calculations for nuclear cross section sensitivity and uncertainty analysis

  • Ho Jin Park
    • Nuclear Engineering and Technology, v.54 no.11, pp.4272-4279, 2022
  • In this study, a cross section stochastic sampling (S.S.) capability is implemented into both the McCARD continuous energy Monte Carlo code and the MIG multiple-correlated data sampling code. The ENDF/B-VII.1 covariance-data-based 30-group cross section sets and the SCALE6 covariance-data-based 44-group cross section sets are sampled by the MIG code. Through various uncertainty quantification (UQ) benchmark calculations, the McCARD/MIG results are verified to be consistent with the McCARD stand-alone sensitivity/uncertainty (S/U) results and the XSUSA S.S. results. UQ analyses for the Three Mile Island Unit 1, Peach Bottom Unit 2, and Kozloduy-6 fuel pin problems are conducted to provide the uncertainties of keff and of the microscopic and macroscopic cross sections by the McCARD/MIG code system. Moreover, the SNU S/U formulations for uncertainty propagation in a MC depletion analysis are validated through a comparison with the McCARD/MIG S.S. results for the UAM Exercise I-1b burnup benchmark. It is therefore concluded that the SNU formulation based on the S/U method has the capability to accurately estimate the uncertainty propagation in a MC depletion analysis.
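
The stochastic-sampling workflow described above can be pictured with the toy sketch below. This is not McCARD/MIG code: the 3-group nominal cross sections, the covariance matrix, and the linear keff response are invented here for illustration; in the real workflow the MIG-sampled cross section sets would drive McCARD transport calculations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical few-group data: nominal cross sections and a relative covariance matrix.
sigma0 = np.array([1.50, 0.80, 0.30])          # nominal group cross sections (barns)
rel_cov = np.array([[0.0004, 0.0001, 0.0000],
                    [0.0001, 0.0009, 0.0002],
                    [0.0000, 0.0002, 0.0016]])  # relative covariance
cov = rel_cov * np.outer(sigma0, sigma0)        # absolute covariance

def keff(sigma):
    # Stand-in for a Monte Carlo transport calculation: a smooth response of keff
    # to the sampled cross sections (the real response comes from the MC code).
    w = np.array([0.3, 0.5, 0.2])
    return 1.0 + w @ (sigma - sigma0)

# Stochastic sampling: draw perturbed cross-section sets and propagate each one.
n_samples = 1000
sets = rng.multivariate_normal(sigma0, cov, size=n_samples)
k = np.array([keff(s) for s in sets])

print(f"mean keff = {k.mean():.5f}")
print(f"keff uncertainty (1 sigma) = {k.std(ddof=1):.5f}")
```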

Uncertainty analysis of UAM TMI-1 benchmark by STREAM/RAST-K

  • Jaerim Jang; Yunki Jo; Deokjung Lee
    • Nuclear Engineering and Technology, v.56 no.5, pp.1562-1573, 2024
  • This study rigorously examined uncertainty in the TMI-1 benchmark within the Uncertainty Analysis in Modeling (UAM) benchmark suite using the STREAM/RAST-K two-step method. It presents two pivotal advancements in computational techniques: (1) development of an uncertainty quantification (UQ) module and a specialized library for the pin-based pointwise energy slowing-down method (PSM), and (2) application of Principal Component Analysis (PCA) for UQ. To evaluate the new computational framework, we conducted verification tests using SCALE 6.2.2. Results demonstrated that STREAM's performance closely matched SCALE 6.2.2, with a negligible uncertainty discrepancy of ±0.0078% in TMI-1 pin cell calculations. To assess the reliability of the PSM covariance library, we performed verification tests comparing calculations with Carlvik's two-term rational approximation (EQ 2-term) covariance library. These calculations included both pin-based and fuel assembly (FA-wise) computations, encompassing hot zero-power and hot full-power operational conditions. The uncertainties calculated using the EQ 2-term and PSM resonance treatments were consistent, showing a deviation within ±0.054%. Additionally, the data compression process yielded compression ratios of 88.210% and 92.926% for the on-the-fly and data-saving approaches, respectively, in TMI fuel assembly calculations. In summary, this study provides a comprehensive explanation of the PCA process used for UQ calculations and offers valuable insights into the robustness and reliability of the newly developed computational methods, supported by rigorous verification tests.
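
The PCA step used for UQ data compression can be sketched as below with synthetic correlated samples; this is a generic illustration of PCA-based compression, not the STREAM/RAST-K implementation, and the sample sizes and variance threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic correlated perturbation data standing in for a cross-section covariance library.
n_samples, n_params = 400, 200
latent = rng.standard_normal((n_samples, 10))          # low intrinsic dimension
mixing = rng.standard_normal((10, n_params))
data = latent @ mixing + 0.01 * rng.standard_normal((n_samples, n_params))

# PCA via SVD of the centered data.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)

# Keep the fewest components explaining 99.9% of the variance.
k = int(np.searchsorted(explained, 0.999)) + 1
scores = centered @ Vt[:k].T                           # compressed representation

# Compression ratio: fraction of the original storage that can be discarded.
compression_ratio = 1.0 - (scores.size + Vt[:k].size) / data.size
print(f"components kept: {k}")
print(f"compression ratio: {compression_ratio:.3%}")
```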

Development of uncertainty quantification module for VVER analysis in STREAM/RAST-V two-step method

  • Jaerim Jang; Yunki Jo; Deokjung Lee
    • Nuclear Engineering and Technology, v.56 no.8, pp.3276-3285, 2024
  • This paper introduces an Uncertainty Quantification (UQ) module specifically designed for VVER analysis through the implementation of the STREAM/RAST-V two-step approach. The aim was to expand the range of application by developing a UQ module tailored to VVER analysis. This research presents two new computational functionalities: (1) development of a library for the pin-based pointwise energy slowing-down method (PSM), and (2) extension of the analysis scope to hexagonal-geometry fuel assemblies. The proposed UQ scheme was evaluated through verification against the UAM benchmark and a code-to-code comparison with SCALE 6.2.2; STREAM provides an accuracy comparable to that of SCALE 6.2.2. Additionally, a PSM covariance library was utilized in the calculations, achieving 0.7941% and 0.7907% accuracies in the hot full power and hot zero power calculations, respectively. To assess the UQ sequences in the two-step method, the STREAM/RAST-V calculation scheme was verified against the STREAM lattice code. In conclusion, this study furnishes comprehensive insights into the development of the UQ module within the two-step method for VVER analysis and validates its performance using the UAM benchmark.

Analyzing nuclear reactor simulation data and uncertainty with the group method of data handling

  • Radaideh, Majdi I.; Kozlowski, Tomasz
    • Nuclear Engineering and Technology, v.52 no.2, pp.287-295, 2020
  • The group method of data handling (GMDH) is considered one of the earliest deep learning methods. Deep learning has gained additional interest in today's applications due to its capability to handle complex and high-dimensional problems. In this study, multi-layer GMDH networks are used to perform uncertainty quantification (UQ) and sensitivity analysis (SA) of nuclear reactor simulations. GMDH is utilized as a surrogate/metamodel to replace high-fidelity computer models with cheap-to-evaluate surrogate models, which facilitate UQ and SA tasks (e.g. variance decomposition, uncertainty propagation, etc.). GMDH performance is validated through two UQ applications in reactor simulations: (1) a low-dimensional input space (two-phase flow in a reactor channel), and (2) a high-dimensional space (8-group homogenized cross-sections). In both applications, GMDH networks show very good performance, with small mean absolute and squared errors as well as high accuracy in capturing the target variance. GMDH is utilized afterward to perform UQ tasks such as variance decomposition through Sobol indices and GMDH-based uncertainty propagation with a large number of samples. GMDH performance is also compared to that of other surrogates, including Gaussian processes and polynomial chaos expansions. The comparison shows that GMDH has competitive performance with the other methods for the low-dimensional problem and reliable performance for the high-dimensional problem.
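
The surrogate-then-Sobol workflow can be illustrated with the minimal sketch below. The Ishigami test function stands in for both the reactor simulation and its cheap surrogate (in the paper a trained multi-layer GMDH network would take that place), and first-order Sobol indices are estimated with a standard pick-and-freeze estimator; all sample sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def surrogate(x):
    # Ishigami test function, standing in for a cheap surrogate of the reactor
    # simulation (in the paper, a trained GMDH network would be called here).
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

# Two independent sample matrices for the pick-and-freeze Sobol estimator.
n, d = 20000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
yA = surrogate(A)
var_y = yA.var()

# First-order Sobol index S_i: covariance between runs that share only x_i.
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]            # freeze x_i at the values used in A
    S_i = np.cov(yA, surrogate(ABi))[0, 1] / var_y
    print(f"S_{i+1} ~ {S_i:.3f}")
```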

Enhancing the radar-based mean areal precipitation forecasts to improve urban flood predictions and uncertainty quantification

  • Nguyen, Duc Hai; Kwon, Hyun-Han; Yoon, Seong-Sim; Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference, 2020.06a, pp.123-123, 2020
  • The present study aims to correct radar-based mean areal precipitation forecasts to improve urban flood predictions and to analyze the uncertainty of the water levels contributed at each stage of the process. For this purpose, a long short-term memory (LSTM) network is used to reproduce three-hour mean areal precipitation (MAP) forecasts from the quantitative precipitation forecasts (QPFs) of the McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation (MAPLE). The Gangnam urban catchment, located in Seoul, South Korea, was selected as the case study. A database was established based on 24 heavy rainfall events, 22 grid points from the MAPLE system, and the observed MAP values estimated from five ground rain gauges of the KMA Automatic Weather System. The corrected MAP forecasts were input into the developed coupled 1D/2D model to predict water levels and the relevant inundation areas. The results indicate the viability of the proposed framework for generating three-hour MAP forecasts and urban flooding predictions. To analyze the uncertainty contributions of the sources in the process, Bayesian Markov Chain Monte Carlo (MCMC) with the delayed rejection and adaptive Metropolis algorithm is applied. On this basis, the uncertainty contributions of the stages, such as the QPE input, the QPF MAP source, the LSTM-corrected MAP input, and the coupled model, are discussed.
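
The Bayesian MCMC step can be pictured with the plain random-walk Metropolis sketch below. The paper uses the delayed rejection adaptive Metropolis (DRAM) variant; this simplified sketch, with synthetic forecast/observation data, a single multiplicative bias parameter, and a fixed noise standard deviation, only illustrates the posterior-sampling idea.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "observed" vs "forecast" mean areal precipitation (mm), for illustration only.
forecast = rng.gamma(shape=2.0, scale=5.0, size=50)
true_bias, noise_sd = 1.2, 2.0
observed = true_bias * forecast + rng.normal(0.0, noise_sd, size=50)

def log_posterior(bias):
    if bias <= 0.0:
        return -np.inf                      # flat prior on bias > 0
    resid = observed - bias * forecast
    return -0.5 * np.sum((resid / noise_sd) ** 2)

# Plain random-walk Metropolis (DRAM adds delayed rejection and proposal adaptation).
n_iter, step = 5000, 0.05
chain = np.empty(n_iter)
current, logp = 1.0, log_posterior(1.0)
for i in range(n_iter):
    proposal = current + step * rng.standard_normal()
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:
        current, logp = proposal, logp_prop
    chain[i] = current

burned = chain[1000:]
print(f"posterior bias: {burned.mean():.3f} +/- {burned.std(ddof=1):.3f}")
```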


Long-term Simulation and Uncertainty Quantification of Water Temperature in Soyanggang Reservoir due to Climate Change (기후변화에 따른 소양호의 수온 장기 모의 및 불확실성 정량화)

  • Yun, Yeojeong; Park, Hyungseok; Chung, Sewoong; Kim, Yongda; Ohn, Ilsang; Lee, Seoro
    • Journal of Korean Society on Water Environment, v.36 no.1, pp.14-28, 2020
  • Future climate change may affect the hydro-thermal and biogeochemical characteristics of dam reservoirs, the most important water resources in Korea. Thus, scientific projection of the impact of climate change on the reservoir environment, factoring in uncertainties, is crucial for sustainable water use. The purpose of this study was to predict the future water temperature and stratification structure of the Soyanggang Reservoir in response to a total of 42 scenarios, combining two climate scenarios, seven GCM models, one surface runoff model, and three wind scenarios of the hydrodynamic model, and to quantify the uncertainty of each modeling step and scenario. Although there are differences depending on the scenario, the annual reservoir water temperature tended to rise steadily. In the RCP 4.5 and 8.5 scenarios, the upper water temperature is expected to rise by 0.029 ℃ (±0.012)/year and 0.048 ℃ (±0.014)/year, respectively. These rates correspond to 88.1 % and 85.7 % of the air temperature rise rate. Meanwhile, the lower water temperature is expected to rise by 0.016 ℃ (±0.009)/year and 0.027 ℃ (±0.010)/year, respectively, which is approximately 48.6 % and 46.3 % of the air temperature rise rate. Additionally, as the water temperature rises, the stratification strength of the reservoir is expected to become stronger, and the number of days when the temperature difference between the upper and lower layers exceeds 5 ℃ increases in the future. As a result of the uncertainty quantification, the GCM models showed the highest contribution with 55.8 %, followed by the RCP scenarios with 30.8 % and the W2 model with 12.8 %.
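
The contribution analysis across GCMs, RCP scenarios, and wind (W2) settings can be pictured with the toy main-effect decomposition below. Only the 0.029 and 0.048 ℃/year rates come from the abstract; the GCM and wind spreads, and the averaging-based decomposition itself, are assumptions made here for illustration and need not match the study's method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic warming-rate ensemble indexed by (RCP scenario, GCM, wind scenario);
# shape mirrors the 2 x 7 x 3 = 42 scenario combinations in the study.
n_rcp, n_gcm, n_wind = 2, 7, 3
rcp_eff = np.array([0.029, 0.048]).reshape(-1, 1, 1)          # deg C / year (from abstract)
gcm_eff = rng.normal(0.0, 0.012, size=(1, n_gcm, 1))           # assumed GCM spread
wind_eff = rng.normal(0.0, 0.004, size=(1, 1, n_wind))         # assumed wind-scenario spread
rates = rcp_eff + gcm_eff + wind_eff

def main_effect_variance(axis_keep):
    # Average over all factors except the one of interest, then take the variance
    # of those factor-level means (the factor's main-effect contribution).
    other_axes = tuple(a for a in range(3) if a != axis_keep)
    return rates.mean(axis=other_axes).var()

names = ["RCP scenario", "GCM", "wind/W2 scenario"]
contrib = np.array([main_effect_variance(a) for a in range(3)])
for name, share in zip(names, contrib / contrib.sum() * 100.0):
    print(f"{name:>18s}: {share:5.1f} % of explained variance")
```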

A Probabilistic Approach to Quantifying Uncertainties in the In-vessel Steam Explosion During Severe Accidents at a Nuclear Power Plant

  • Mun, Ju-Hyun; Kang, Chang-Sun; Park, Gun-Chul
    • Proceedings of the Korean Nuclear Society Conference, 1995.05a, pp.509-516, 1995
  • The uncertainty analysis for the in-vessel steam explosion during severe accidents at a nuclear power plant is performed using a probabilistic approach. This approach consists of four steps: 1) screening, 2) quantification of uncertainty, 3) propagation of uncertainty, and 4) output analysis. Specific methods that satisfy the sub-objectives of each step are prepared and presented. Compared with existing approaches, the unique feature of this one is the improved estimation of uncertainties through quantification, which ensures the defensibility of the resultant failure probability distributions. Using the approach, the containment failure probability due to an in-vessel steam explosion is calculated. The results of the analysis show that 1) the pour diameter is the most dominant factor and the slug condensed phase fraction is the least, and 2) the fraction of molten core is the second most dominant factor, which is identified as a distinct feature of this study compared with previous studies.
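
Steps 2 through 4 of the approach (quantification, propagation, and output analysis) can be pictured with the generic Monte Carlo sketch below. Every distribution, the load/capacity response, and the parameter values are invented placeholders, not the study's steam-explosion models; the sketch only shows how sampled inputs are propagated to a failure probability and a crude sensitivity ranking.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Step 2 - quantification: assumed input distributions for illustrative parameters.
pour_diameter = rng.lognormal(mean=np.log(0.10), sigma=0.30, size=n)   # m
molten_fraction = rng.uniform(0.2, 0.8, size=n)                        # fraction of core molten
conversion_ratio = rng.triangular(0.001, 0.01, 0.03, size=n)           # thermal-to-mechanical

# Step 3 - propagation: a toy load/capacity response (not the study's physics).
explosion_load = 1.0e3 * molten_fraction * conversion_ratio * (pour_diameter / 0.10)
vessel_capacity = rng.normal(loc=15.0, scale=2.0, size=n)

# Step 4 - output analysis: failure probability and a simple sensitivity ranking.
failure = explosion_load > vessel_capacity
print(f"failure probability ~ {failure.mean():.3e}")
for name, x in [("pour diameter", pour_diameter),
                ("molten fraction", molten_fraction),
                ("conversion ratio", conversion_ratio)]:
    print(f"corr(load, {name:>16s}) = {np.corrcoef(explosion_load, x)[0, 1]:+.2f}")
```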


Stochastic vibration analysis of functionally graded beams using artificial neural networks

  • Trinh, Minh-Chien; Jun, Hyungmin
    • Structural Engineering and Mechanics, v.78 no.5, pp.529-543, 2021
  • Inevitable uncertainties in the geometry configuration, boundary conditions, and material properties may cause the structural dynamics to deviate from the expected responses. This paper aims to examine the influence of these uncertainties on the vibration of functionally graded beams. Finite element procedures are presented for Timoshenko beams and utilized to generate reliable datasets. A prerequisite to the uncertainty quantification of the beam vibration using Monte Carlo simulation is the generation of large datasets, which requires executing the numerical procedure many times and leads to high computational cost. Utilizing artificial neural networks to model the beam vibration can therefore be a good approach. Initially, the optimal network for each beam configuration is determined based on numerical performance and probabilistic criteria. Instead of executing the finite element procedure thousands of times in the stochastic analysis, these optimal networks serve as efficient alternatives, with which the convergence of the Monte Carlo simulation as well as the sensitivity and probabilistic vibration characteristics of each beam exposed to randomness are investigated. The simple procedure presented here is efficient for quantifying the uncertainty of different stochastic behaviors of composite structures.
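
A minimal version of the surrogate-assisted Monte Carlo procedure might look like the sketch below, where a closed-form simply supported Euler-Bernoulli frequency formula stands in for the Timoshenko finite element solver and scikit-learn's MLPRegressor stands in for the selected optimal network; the beam dimensions, material statistics, and sample sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

def fundamental_frequency(E, rho, h):
    # Closed-form fundamental frequency of a simply supported Euler-Bernoulli beam,
    # standing in for the Timoshenko finite element solver used in the paper.
    L, b = 1.0, 0.05                        # beam length and width (m)
    I, A = b * h**3 / 12.0, b * h
    return (np.pi / (2.0 * L**2)) * np.sqrt(E * I / (rho * A))

# Small training set generated from the "expensive" model.
n_train = 500
E = rng.normal(70e9, 3.5e9, n_train)        # Young's modulus (Pa)
rho = rng.normal(2700.0, 135.0, n_train)    # density (kg/m^3)
h = rng.normal(0.02, 0.001, n_train)        # thickness (m)
X = np.column_stack([E, rho, h])
y = fundamental_frequency(E, rho, h)

# Train a simple neural-network surrogate (the paper selects an optimal network per case).
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0).fit(Xs, y)

# Monte Carlo on the cheap surrogate instead of the finite element model.
n_mc = 50_000
Xmc = np.column_stack([rng.normal(70e9, 3.5e9, n_mc),
                       rng.normal(2700.0, 135.0, n_mc),
                       rng.normal(0.02, 0.001, n_mc)])
f_mc = net.predict((Xmc - X.mean(axis=0)) / X.std(axis=0))
print(f"frequency: mean = {f_mc.mean():.1f} Hz, std = {f_mc.std(ddof=1):.1f} Hz")
```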

Quantitative uncertainty analysis for the climate change impact assessment using the uncertainty delta method (기후변화 영향평가에서의 Uncertainty Delta Method를 활용한 정량적 불확실성 분석)

  • Lee, Jae-Kyoung
    • Journal of Korea Water Resources Association, v.51 no.spc, pp.1079-1089, 2018
  • The majority of existing studies for quantifying uncertainties in climate change impact assessments suggest only the uncertainties of each stage, and not the total uncertainty and its propagation through the whole procedure. Therefore, this study proposes a new method, the Uncertainty Delta Method (UDM), which can quantify uncertainties using the variances of projections (as the UDM is derived from the first-order Taylor series expansion), allowing a comprehensive quantification of uncertainty at each stage and also providing the levels of uncertainty propagation, as follows: the total uncertainty, the level of uncertainty increase at each stage, and the percentage of uncertainty at each stage. For quantifying the uncertainties at each stage as well as the total uncertainty, all the stages - two emission scenarios (ES), three Global Climate Models (GCMs), two downscaling techniques, and two hydrological models - of the climate change assessment for water resources are considered. The total uncertainty was 5.45, and the ESs had the largest uncertainty (4.45). Additionally, uncertainty propagates and gradually increases stage by stage: the total uncertainty of 5.45 consists of 4.45 from the emission scenarios, 0.45 from the climate models, 0.27 from the downscaling techniques, and 0.28 from the hydrological models. These results indicate that the projection of future water resources can be very different depending on which emission scenarios are selected. Moreover, using the Fractional Uncertainty Method (FUM) of Hawkins and Sutton (2009), the major uncertainty contributor (the emission scenarios, with a FUM uncertainty of 0.52) matched the UDM results. Therefore, the UDM proposed in this study can support comprehension and appropriate analysis of the uncertainty surrounding the climate change impact assessment, and makes possible a better understanding of water resources projections under future climate change.
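
For reference, a generic first-order Taylor-series (delta-method) variance propagation, of the kind the UDM is stated to be derived from, takes the form below; the exact UDM expressions in the paper may differ, but under such a first-order treatment with independent sources the stage-wise contributions add, consistent with the reported 4.45 + 0.45 + 0.27 + 0.28 = 5.45.

```latex
% Generic first-order (delta-method) propagation for Y = f(X_1, ..., X_n),
% expanded around the mean vector \mu and assuming independent sources:
\operatorname{Var}[Y] \;\approx\; \sum_{i=1}^{n}
  \left( \left. \frac{\partial f}{\partial X_i} \right|_{\boldsymbol{\mu}} \right)^{2}
  \operatorname{Var}[X_i]
```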