• Title/Summary/Keyword: Variance based sensitivity analysis methods


Comparison of Sensitivity Analysis Methods for Building Energy Simulations in Early Design Phases: Once-at-a-time (OAT) vs. Variance-based Methods

  • Kim, Sean Hay
    • KIEAE Journal
    • /
    • v.16 no.2
    • /
    • pp.17-22
    • /
    • 2016
  • Purpose: Sensitivity analysis offers a good guideline for designing energy-conscious buildings fitted to a specific building configuration; it is, however, still too expensive to be part of the regular design process. One-at-a-time (OAT) is the most common and simplest sensitivity analysis method. This study aims to establish reasonable grounds that OAT can serve as an alternative to the variance-based method in some early design scenarios, even though the variance-based method is known to be adequate for handling nonlinear responses and interaction effects between input variables, which characterize most building energy simulations. Method: A test model representing the early design phase is built in DOE2 energy simulations, and sensitivity ranks from the OAT and variance-based methods are compared at three U.S. sites. Result: Parameters in the upper ranks by OAT do not differ much from those by the main effect index. Given that in practice designers would choose the most energy-saving design option first, this rank similarity between the two methods seems acceptable in the early design phase.
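
The OAT-vs-main-effect comparison above can be sketched in a few lines. The snippet below uses a hypothetical toy energy model (not the paper's DOE2 model) and ranks three inputs by an OAT range effect and by a brute-force first-order Sobol (main effect) index; for a near-additive model like this one, the two rankings coincide:

```python
import numpy as np

# Toy stand-in for an early-design energy model (NOT the paper's DOE2 model):
# annual energy use as a function of three hypothetical inputs.
def energy_use(x):
    u_value, shgc, infil = x
    return 50.0 * u_value + 30.0 * shgc + 10.0 * infil + 5.0 * u_value * shgc

lo = np.array([0.2, 0.2, 0.1])
hi = np.array([1.0, 0.8, 0.7])
base = (lo + hi) / 2.0

# OAT: sweep one input over its range while holding the others at baseline.
oat_effect = []
for i in range(3):
    x_lo, x_hi = base.copy(), base.copy()
    x_lo[i], x_hi[i] = lo[i], hi[i]
    oat_effect.append(abs(energy_use(x_hi) - energy_use(x_lo)))

# Variance-based main effect: S_i = Var[E(y | x_i)] / Var(y), estimated by
# brute-force Monte Carlo conditioning on a grid of x_i values.
rng = np.random.default_rng(0)
var_y = np.var([energy_use(x) for x in rng.uniform(lo, hi, size=(5000, 3))])
main_effect = []
for i in range(3):
    cond_means = []
    for xi in np.linspace(lo[i], hi[i], 200):
        xs = rng.uniform(lo, hi, size=(200, 3))
        xs[:, i] = xi
        cond_means.append(np.mean([energy_use(x) for x in xs]))
    main_effect.append(np.var(cond_means) / var_y)

oat_rank = list(np.argsort(oat_effect)[::-1])
sobol_rank = list(np.argsort(main_effect)[::-1])
```

The rank agreement here mirrors the paper's finding; with strong interactions or nonlinearity the two rankings can diverge, which is exactly the case the variance-based method is designed for.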

Sensitivity and Reliability Analysis of Plate Structures (판 구조물의 감도해석 및 신뢰성해석)

  • 김지호;양영순
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 1991.10a
    • /
    • pp.57-62
    • /
    • 1991
  • To develop an efficient method for calculating the design sensitivity and reliability of complicated structures such as ship structures, the probabilistic finite element method (PFEM) is introduced to formulate a deterministic design sensitivity analysis method and is incorporated with second-moment reliability methods such as MVFOSM, AFOSM and SORM. The probabilistic design sensitivity analysis needed in reliability-based design is also performed. The reliability analysis is carried out for initial yielding failure, using the derivatives obtained in the deterministic design sensitivity analysis. The present PFEM-based reliability method shows good agreement with the Monte Carlo method in terms of the variance of the response and the associated probability of failure, even at the first or first few iteration steps. The probabilistic design sensitivity analysis explicitly evaluates the contribution of each random variable to the probability of failure. Further, the variation of the reliability index can be easily predicted from the variation of the mean and variance of the random variables.
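
A minimal sketch of the simplest of the second-moment methods named above, the mean-value first-order second-moment (MVFOSM) index: linearize the limit state at the mean point and compare against Monte Carlo. The limit state and statistics here are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical limit state g(X) = resistance - load effect; failure when g <= 0.
mu = np.array([250.0, 150.0])      # means of [resistance, load effect]
sigma = np.array([25.0, 30.0])     # std devs, variables assumed independent

def g(x):
    return x[0] - x[1]

# Finite-difference design sensitivities dg/dx_i at the mean point
eps = 1e-6
grad = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2 * eps)
                 for e in np.eye(2)])

# MVFOSM reliability index: beta = g(mu) / std of linearized g
beta = g(mu) / np.sqrt(np.sum((grad * sigma) ** 2))

# Crude Monte Carlo check of the implied failure probability Phi(-beta)
rng = np.random.default_rng(1)
samples = rng.normal(mu, sigma, size=(200000, 2))
pf_mc = np.mean(samples[:, 0] - samples[:, 1] <= 0)
```

For a linear limit state with normal variables MVFOSM is exact, which is why the Monte Carlo estimate matches; AFOSM and SORM refine this for nonlinear limit states.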


Development and Application of a Performance Prediction Model for Home Care Nursing Based on a Balanced Scorecard using the Bayesian Belief Network (Bayesian Belief Network 활용한 균형성과표 기반 가정간호사업 성과예측모델 구축 및 적용)

  • Noh, Wonjung;Seomun, GyeongAe
    • Journal of Korean Academy of Nursing
    • /
    • v.45 no.3
    • /
    • pp.429-438
    • /
    • 2015
  • Purpose: This study was conducted to develop key performance indicators (KPIs) for home care nursing (HCN) based on a balanced scorecard, and to construct a performance prediction model of strategic objectives using a Bayesian Belief Network (BBN). Methods: This methodological study included four steps: establishment of KPIs, performance prediction modeling, development of a performance prediction model using the BBN, and simulation of a suggested nursing management strategy. An HCN expert group and a staff group participated. The content validity index was analyzed using STATA 13.0, and the BBN was analyzed using HUGIN 8.0. Results: We generated a list of KPIs composed of 4 perspectives, 10 strategic objectives, and 31 KPIs. In the validity test of the performance prediction model, the factor with the greatest variance for increasing profit was maximum cost reduction of HCN services; the factor with the smallest variance for increasing profit was minimum image improvement for HCN. In the sensitivity analysis, the probabilities of the expert group did not affect the sensitivity. Furthermore, simulation predicted a 10% image improvement to be the most effective way to increase profit. Conclusion: KPIs of HCN can estimate financial and non-financial performance. The performance prediction model for HCN will be useful to improve performance.
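
The "simulation of a suggested strategy" step can be illustrated with a minimal two-node discrete network. Everything below is invented for illustration (the study used HUGIN 8.0 with expert-elicited tables): one parent node for image improvement and one child node for profit, with the strategy simulated by raising the parent's probability by 0.10:

```python
# Hypothetical two-node Bayesian belief network (all probabilities invented).
p_image_high = 0.5                        # P(image improvement = high)
p_profit_given = {True: 0.7, False: 0.4}  # P(profit up | image high / low)

def p_profit_up(p_high):
    # Marginalize profit over the image node: sum_x P(profit|x) P(x)
    return p_high * p_profit_given[True] + (1 - p_high) * p_profit_given[False]

baseline = p_profit_up(p_image_high)
# "Simulation of a 10% image improvement": raise P(image = high) by 0.10
improved = p_profit_up(p_image_high + 0.10)
delta = improved - baseline
```

Propagating the intervention through the conditional table gives the predicted change in the profit objective; a real BSC model would do the same over many more nodes.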

Design optimization of a nuclear main steam safety valve based on an E-AHF ensemble surrogate model

  • Chaoyong Zong;Maolin Shi;Qingye Li;Fuwen Liu;Weihao Zhou;Xueguan Song
    • Nuclear Engineering and Technology
    • /
    • v.54 no.11
    • /
    • pp.4181-4194
    • /
    • 2022
  • Main steam safety valves are commonly used in nuclear power plants to provide final protection from overpressure events. Blowdown and dynamic stability are two critical characteristics of safety valves. However, due to the parameter sensitivity and multi-parameter features of safety valves, designing and optimizing them with traditional methods is generally difficult and inefficient. To overcome these problems, a surrogate-model-based valve design optimization is carried out in this study; of particular interest are methods of valve surrogate modeling, global sensitivity analysis of valve parameters, and valve performance optimization. To construct the surrogate model, Design of Experiments (DoE) and Computational Fluid Dynamics (CFD) simulations of the safety valve were performed successively, and an ensemble surrogate model (E-AHF) was built for valve blowdown and stability predictions. With the developed E-AHF model, global sensitivity analysis (GSA) of the valve parameters was performed, identifying five primary parameters that affect valve performance. Finally, the k-sigma method was used to conduct robust optimization of the valve. After optimization, the valve remains stable, the minimum blowdown of the safety valve is greatly reduced from 13.30% to 2.70%, and the corresponding variance is reduced from 1.04 to 0.65, confirming the feasibility and effectiveness of the proposed optimization method.
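
The k-sigma idea, minimizing mean response plus k times its standard deviation under input uncertainty, can be sketched on a one-dimensional stand-in surrogate (not the E-AHF model; the response function, noise level, and k = 3 are all assumptions for illustration):

```python
import numpy as np

# Hypothetical 1-D surrogate for a "blowdown"-like response; true optimum at x = 1.5.
def blowdown(x):
    return 2.0 + (x - 1.5) ** 2

s, k = 0.2, 3.0                     # input uncertainty x ~ N(x0, s); k-sigma weight
rng = np.random.default_rng(2)

def robust_objective(x0, n=4000):
    # Monte Carlo estimate of mean + k * std of the surrogate response
    y = blowdown(rng.normal(x0, s, n))
    return y.mean() + k * y.std()

# Robust optimum over a candidate grid of nominal designs
candidates = np.linspace(0.0, 3.0, 61)
best = min(candidates, key=robust_objective)
```

Designs away from the flat region are penalized twice, through a worse mean and a larger spread, which is how k-sigma trades nominal performance for robustness.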

Analyzing nuclear reactor simulation data and uncertainty with the group method of data handling

  • Radaideh, Majdi I.;Kozlowski, Tomasz
    • Nuclear Engineering and Technology
    • /
    • v.52 no.2
    • /
    • pp.287-295
    • /
    • 2020
  • The group method of data handling (GMDH) is considered one of the earliest deep learning methods. Deep learning has gained additional interest in today's applications due to its capability to handle complex and high-dimensional problems. In this study, multi-layer GMDH networks are used to perform uncertainty quantification (UQ) and sensitivity analysis (SA) of nuclear reactor simulations. GMDH is utilized as a surrogate/metamodel to replace high-fidelity computer models with cheap-to-evaluate surrogate models, which facilitate UQ and SA tasks (e.g. variance decomposition, uncertainty propagation). GMDH performance is validated through two UQ applications in reactor simulations: (1) a low-dimensional input space (two-phase flow in a reactor channel), and (2) a high-dimensional space (8-group homogenized cross-sections). In both applications, GMDH networks show very good performance, with small mean absolute and squared errors as well as high accuracy in capturing the target variance. GMDH is then used to perform UQ tasks such as variance decomposition through Sobol indices and GMDH-based uncertainty propagation with a large number of samples. GMDH performance is also compared to other surrogates, including Gaussian processes and polynomial chaos expansions. The comparison shows that GMDH has competitive performance with the other methods for the low-dimensional problem and reliable performance for the high-dimensional problem.
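
Once a cheap surrogate exists, Sobol variance decomposition is just repeated surrogate evaluation. The sketch below uses the standard Saltelli pick-freeze scheme on a stand-in linear surrogate (the paper's surrogates are trained GMDH networks); for this toy model the exact first-order indices are 1/5.25, 4/5.25 and 0.25/5.25:

```python
import numpy as np

# Stand-in surrogate (NOT a GMDH network): cheap to evaluate many times.
def surrogate(X):
    return X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]

d, n = 3, 20000
rng = np.random.default_rng(3)
A = rng.uniform(0, 1, (n, d))       # two independent sample matrices
B = rng.uniform(0, 1, (n, d))
yA, yB = surrogate(A), surrogate(B)
var_y = np.var(np.concatenate([yA, yB]))

# First-order Sobol indices via pick-freeze: column i of A is replaced by
# column i of B, so yB and surrogate(ABi) share only input i.
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(yB * (surrogate(ABi) - yA)) / var_y)
```

With a surrogate in place of the reactor code, the (d + 2) x n model evaluations this scheme needs become affordable, which is the whole point of surrogate-based UQ.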

Characterization of the Spatial Variability of Paper Formation Using a Continuous Wavelet Transform

  • Keller, D.Steven;Luner, Philip;Pawlak, Joel J.
    • Journal of Korea Technical Association of The Pulp and Paper Industry
    • /
    • v.32 no.5
    • /
    • pp.14-25
    • /
    • 2000
  • In this investigation, a wavelet transform analysis was used to decompose beta-radiographic formation images into spectral and spatial components. Conventional formation analysis may use spectral analysis, based on Fourier transformation or variance vs. zone size, to describe the grammage distribution of features such as flocs, streaks, and mean fiber orientation. However, these methods have limited utility for the analysis of statistically nonstationary data sets, where variance is not uniform with position, e.g. paper machine CD profiles (especially those that contain streaks). A continuous wavelet transform was used to analyze formation data arrays obtained from radiographic imaging of handsheets and cross-machine paper samples. The response of the analytical method to grammage, floc size distribution, and mean fiber orientation, and its sensitivity to feature localization, were assessed. From the wavelet analysis, the change in scale of grammage variation as a function of position was used to demonstrate regular and isolated differences in the formed structure.
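
The advantage over Fourier analysis, localizing scale content in position, shows up already in one dimension. The sketch below applies a Mexican-hat (Ricker) continuous wavelet transform to a synthetic "grammage" profile whose fine-scale content exists only in one half (the paper works on 2-D radiographic images; this 1-D profile is an illustration only):

```python
import numpy as np

def ricker(n, a):
    # Mexican-hat wavelet of width parameter a, sampled on n points
    t = np.arange(n) - (n - 1) / 2.0
    u = t / a
    return (2 / (np.sqrt(3 * a) * np.pi ** 0.25)) * (1 - u ** 2) * np.exp(-u ** 2 / 2)

x = np.linspace(0, 1, 512)
profile = np.sin(2 * np.pi * 8 * x)                 # coarse, "floc-scale" variation
profile[256:] += np.sin(2 * np.pi * 32 * x[256:])   # fine "streaks" in one half only

# CWT: convolve the profile with the wavelet at a range of scales
scales = np.arange(1, 40)
cwt = np.array([np.convolve(profile, ricker(min(10 * a, 512), a), mode="same")
                for a in scales])

# The small-scale row localizes where the fine-scale energy lives
fine_scale = cwt[3]     # scale a = 4, tuned near the period-16 component
```

A Fourier spectrum of the same profile would report the fine-scale peak but not *where* it occurs; the wavelet coefficients retain that positional information.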


A Study on Envelope Design Variables for Energy Conservation of General Hospital Ward Area by Sensitivity Analysis (민감도 분석을 통한 종합병원 병동부의 에너지 절감 외피 설계요소 도출)

  • Oh, Jihyun;Kwon, Soonjung;Kim, Sunsook
    • Journal of The Korea Institute of Healthcare Architecture
    • /
    • v.23 no.1
    • /
    • pp.7-14
    • /
    • 2017
  • Purpose: Since large hospitals are among the most intensive energy users of all building types in Korea, it is important to investigate and apply appropriate energy conservation measures. There are many studies on energy conservation measures for HVAC systems in hospitals, but few useful guidelines exist for envelope design variables. The building envelope is one of the important factors affecting building energy consumption and patients' comfort. The purpose of this study is to identify the most influential envelope design variables for each end-use energy demand. Methods: 100 samples were generated by the LHS (Latin Hypercube Sampling) method. After energy performance simulation, global sensitivity analysis was performed by the regression method. DesignBuilder, Simlab 2.2 and jEPlus were used in this process. Results: The most influential variables are SHGC, SHGC and VT for heating, cooling, and lighting, respectively. However, the most influential variable for total energy demand is WWR (Window-to-Wall Ratio). The analysis was conducted based on the coefficient-of-variance results. Implications: The six envelope design variables were ranked according to end-use energy demand.
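
The LHS-plus-regression workflow can be sketched end to end: stratified samples, a simulated response, and standardized regression coefficients (SRCs) as the sensitivity measure. The toy demand model, variable names, and coefficients below are assumptions standing in for the DesignBuilder/jEPlus runs:

```python
import numpy as np

rng = np.random.default_rng(4)
n, names = 100, ["WWR", "SHGC", "U-value"]

# Simple Latin Hypercube on [0, 1]^3: one stratum per row, shuffled per column
strata = rng.permuted(np.tile(np.arange(n), (3, 1)), axis=1).T
u = (strata + rng.uniform(size=(n, 3))) / n

# Hypothetical stand-in for the energy simulation (coefficients invented)
def demand(u):
    return 40 * u[:, 0] + 25 * u[:, 1] + 10 * u[:, 2] + rng.normal(0, 1, len(u))

y = demand(u)

# Regression-based global SA: regress standardized y on standardized inputs;
# |beta| of each input is its standardized regression coefficient (SRC)
X = (u - u.mean(0)) / u.std(0)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]),
                           (y - y.mean()) / y.std(), rcond=None)
src = dict(zip(names, np.abs(beta[1:])))
ranking = sorted(src, key=src.get, reverse=True)
```

SRC rankings are only trustworthy when the regression R-squared is high, i.e. the response is close to linear in the inputs, which is worth checking before reporting the ranks.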

Planning Accelerated Degradation Tests: the Case of Gamma Degradation Process (열화가 감마과정을 따르는 경우 가속열화시험의 최적 계획)

  • Lim, Heonsang;Lim, Dae-Eun
    • Journal of Korean Society for Quality Management
    • /
    • v.43 no.2
    • /
    • pp.169-184
    • /
    • 2015
  • Purpose: This paper is concerned with optimally designing accelerated degradation test (ADT) plans based on a gamma process for the degradation model. Methods: By minimizing the asymptotic variance of the MLE of the q-th quantile of the lifetime distribution at the use condition, the test stress levels and the proportion of test units allocated to each stress level are optimally determined. Results: Optimal ADT plans are developed for various combinations of parameters. In addition, a method for determining the sample size is developed, and sensitivity analysis procedures are illustrated with an example. Conclusion: It is important to design ADT optimally based on a gamma process when the degradation process must always be nonnegative and strictly increasing over time.
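
The gamma process's defining property, independent nonnegative gamma-distributed increments, is what makes it suitable for monotone degradation. A simulation sketch with illustrative parameters (not the paper's) generates degradation paths and reads off a lifetime quantile as first passage of a failure threshold:

```python
import numpy as np

# Gamma degradation process: increment over dt ~ Gamma(shape_rate * dt, scale),
# so every path is nonnegative and nondecreasing. Parameters are illustrative.
rng = np.random.default_rng(5)
shape_rate, scale, threshold = 1.0, 0.5, 10.0
dt, n_steps, n_paths = 0.1, 1500, 2000

increments = rng.gamma(shape_rate * dt, scale, size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# Lifetime = first time the cumulative degradation crosses the threshold
lifetimes = dt * (np.argmax(paths >= threshold, axis=1) + 1)

# The q-th lifetime quantile that the ADT plan is designed around (q = 0.10)
q10 = np.quantile(lifetimes, 0.10)
```

In an actual ADT plan the shape rate would depend on stress level through an acceleration model, and the quantile's asymptotic variance, not a simulated one, drives the choice of stress levels and unit allocation.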

Development of the Korean Geriatric Loneliness Scale (KGLS) (한국 노인의 외로움 측정도구 개발)

  • Lee, Si Eun
    • Journal of Korean Academy of Nursing
    • /
    • v.49 no.5
    • /
    • pp.643-654
    • /
    • 2019
  • Purpose: The purpose of this study was to develop and psychometrically test the Korean Geriatric Loneliness Scale (KGLS). Methods: The initial items were based on in-depth interviews with 10 older adults. Psychometric testing was then conducted with 322 community-dwelling older adults aged 65 or older. Content, construct, and criterion-related validity, classification by cutoff point, internal consistency reliability, and test-retest reliability were analyzed. Results: Exploratory factor analysis showed three factors comprising 15 items and explaining 91.6% of the total variance. The three distinct factors were loneliness associated with family relationships (34.3%), social loneliness (32.4%), and lack of belonging (24.9%). Confirmatory factor analysis validated 14 items in the three-factor structure. Receiver operating characteristic analysis demonstrated that a KGLS cutoff point of 32 was associated with a sensitivity of 71.0%, a specificity of 80.2%, and an area under the curve of .83. Reliability, as verified by the test-retest intraclass correlation coefficient, was .89, and Cronbach's α was .90. Conclusion: As its validity and reliability have been verified through various methods, the KGLS can contribute to assessing loneliness in South Korean older adults.
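
The ROC cutoff step generalizes to any score: compute sensitivity and specificity at each candidate threshold and pick the one maximizing Youden's J. The score distributions below are simulated stand-ins, not the KGLS data:

```python
import numpy as np

# Simulated scale scores for a reference "lonely" group and a comparison group
# (means, spreads, and group sizes are all invented for illustration).
rng = np.random.default_rng(6)
lonely = rng.normal(36, 5, 300)
not_lonely = rng.normal(27, 5, 300)

def sens_spec(cutoff):
    sensitivity = np.mean(lonely >= cutoff)      # true positives flagged
    specificity = np.mean(not_lonely < cutoff)   # true negatives cleared
    return sensitivity, specificity

# Youden's J = sensitivity + specificity - 1; the ROC-optimal cutoff maximizes it
cutoffs = np.arange(20, 45)
best = max(cutoffs, key=lambda c: sum(sens_spec(c)) - 1)
s, sp = sens_spec(best)
```

With equal-variance normal groups the optimal cutoff lands near the midpoint of the two means; skewed or unequal-variance scores shift it, which is why the empirical ROC search is used.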

Combining Adaptive Filtering and IF Flows to Detect DDoS Attacks within a Router

  • Yan, Ruo-Yu;Zheng, Qing-Hua;Li, Hai-Fei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.3
    • /
    • pp.428-451
    • /
    • 2010
  • Traffic-matrix-based anomaly detection and DDoS attack detection in networks are research focuses in the network security and traffic measurement community. In this paper, a new type of unidirectional flow called the IF flow is first proposed. The merits and features of IF flows are analyzed in detail, and two efficient methods are then introduced in our DDoS attack detection and evaluation scheme. The first method uses the residual variance ratio to detect DDoS attacks after a Recursive Least Squares (RLS) filter is applied to predict IF flows. The second method uses a generalized likelihood ratio (GLR) statistical test to detect DDoS attacks after a Kalman filter is applied to estimate IF flows. Based on the two complementary methods, an evaluation formula is proposed to assess the seriousness of ongoing DDoS attacks on router ports. Furthermore, the sensitivity of three types of traffic (IF flow, input link, and output link) to DDoS attacks is analyzed and compared. Experiments show that the IF flow has more power to expose anomalies than the other two types of traffic. Finally, the two proposed methods are compared in terms of detection rate, processing speed, etc., and also compared in detail with Principal Component Analysis (PCA) and Cumulative Sum (CUSUM) methods. The results demonstrate that the adaptive filter methods have a higher detection rate, a lower false alarm rate, and a smaller detection lag time.
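
The first method's idea, predict the flow with an adaptive filter and alarm when residual variance jumps, can be sketched with a scalar RLS predictor on synthetic traffic (the flow series, forgetting factor, and alarm threshold below are assumptions, not the paper's IF-flow data or parameters):

```python
import numpy as np

# Synthetic per-interval flow counts: steady traffic, then a noisy flood.
rng = np.random.default_rng(7)
normal = 100 + rng.normal(0, 5, 300)
attack = 180 + rng.normal(0, 25, 100)
series = np.concatenate([normal, attack])

# Scalar RLS with forgetting factor lam, fitting y_t ~ w * y_{t-1}
w, P, lam = 1.0, 1000.0, 0.98
residuals = []
for t in range(1, len(series)):
    x, y = series[t - 1], series[t]
    e = y - w * x                      # one-step prediction residual
    k = P * x / (lam + x * P * x)      # RLS gain
    w += k * e
    P = (P - k * x * P) / lam
    residuals.append(e)

res = np.array(residuals)
base_var = res[50:290].var()           # residual variance under normal traffic

# Residual variance ratio over sliding windows inside the attack period
ratios = np.array([res[t - 30:t].var() / base_var for t in range(330, 400)])
alarm = bool(np.any(ratios > 3.0))
```

The filter tracks slow drifts in legitimate traffic, so the ratio stays near 1 normally; a flood inflates the one-step residuals faster than the filter can adapt, pushing the ratio well past the threshold.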