• Title/Summary/Keyword: Variance reduction

Variance Reduction via Adaptive Control Variates (ACV)

  • Lee, Jae-Yeong
    • Journal of the Korea Society for Simulation
    • /
    • v.5 no.1
    • /
    • pp.91-106
    • /
    • 1996
  • Control Variates (CV) is a very useful technique for variance reduction in a wide class of queueing network simulations. However, the loss in variance reduction caused by estimating the optimum control coefficients is an increasing function of the number of control variables. Therefore, in some situations it is necessary to select an optimal subset of control variables to maximize the variance reduction. In this paper, we develop the Adaptive Control Variates (ACV) method, which selects an optimal set of control variates adaptively during the simulation. ACV is useful for maximizing simulation efficiency when repeated simulations are needed to find an optimal solution. One such example is Simulated Annealing (SA), because the SA algorithm must repeatedly evaluate the objective function at each temperature. ACV can also be applied to queueing network optimization problems to find optimal input parameters (such as service rates) that maximize the throughput rate under a cost constraint. A short control-variate sketch follows this entry.

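The control-variate adjustment that ACV builds on can be illustrated with a minimal sketch. This is not the paper's ACV algorithm; the M/M/1 waiting-time model, the choice of the previous service time as the single control variate, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mm1_waits(lam=0.9, mu=1.0, n=20000):
    """Waiting times in an M/M/1 queue via the Lindley recursion."""
    inter = rng.exponential(1.0 / lam, n)      # interarrival times A_i
    service = rng.exponential(1.0 / mu, n)     # service times S_i
    w = np.zeros(n)
    for i in range(1, n):
        w[i] = max(0.0, w[i - 1] + service[i - 1] - inter[i])
    return w, service

w, s = mm1_waits()
y = w[1:]                     # response: waiting time of customer i
x = s[:-1]                    # control variate: previous service time, known mean 1/mu = 1
beta = np.cov(y, x)[0, 1] / np.var(x)       # estimated optimal control coefficient
y_cv = y - beta * (x - 1.0)                 # control-variate adjusted response

print("crude estimate :", y.mean())
print("CV estimate    :", y_cv.mean())
print("variance ratio :", y_cv.var() / y.var())
```

The loss mentioned in the abstract shows up here as the extra noise introduced by estimating `beta` from the same run; with several candidate controls, ACV's job is to pick the subset for which that trade-off remains favorable.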

Classification Using Sliced Inverse Regression and Sliced Average Variance Estimation

  • Lee, Hakbae
    • Communications for Statistical Applications and Methods
    • /
    • v.11 no.2
    • /
    • pp.275-285
    • /
    • 2004
  • We explore classification analysis using graphical methods, such as sliced inverse regression and sliced average variance estimation, based on dimension reduction. Useful information about classification analysis is obtained through the dimension reduction performed by sliced inverse regression and sliced average variance estimation. Two examples are illustrated, and the classification rates obtained by sliced inverse regression and sliced average variance estimation are compared with those of discriminant analysis and logistic regression.
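A hedged sketch of the standard SIR computation for a categorical response (classes used as slices) is given below; it follows the textbook recipe rather than the authors' code, and the function name and defaults are illustrative.

```python
import numpy as np

def sir_directions(X, y, n_dirs=2):
    """Sliced inverse regression for a categorical response:
    classes serve as slices; returns the leading e.d.r. directions."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # whiten the predictors: Z = (X - mu) @ cov^{-1/2}
    evals, evecs = np.linalg.eigh(cov)
    evals = np.clip(evals, 1e-12, None)
    cov_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    Z = (X - mu) @ cov_inv_sqrt

    # weighted covariance matrix of the slice (class) means
    classes, counts = np.unique(y, return_counts=True)
    M = np.zeros((X.shape[1], X.shape[1]))
    for c, n_c in zip(classes, counts):
        m = Z[y == c].mean(axis=0)
        M += (n_c / len(y)) * np.outer(m, m)

    # leading eigenvectors of M, mapped back to the original scale
    _, vecs = np.linalg.eigh(M)
    return cov_inv_sqrt @ vecs[:, ::-1][:, :n_dirs]
```

Projecting the predictors onto the returned directions (`X @ sir_directions(X, y)`) yields low-dimensional features that a classifier such as discriminant analysis or logistic regression can then use, which is the comparison the paper reports.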

Analysis of inconsistent source sampling in Monte Carlo weight-window variance reduction methods

  • Griesheimer, David P.;Sandhu, Virinder S.
    • Nuclear Engineering and Technology
    • /
    • v.49 no.6
    • /
    • pp.1172-1180
    • /
    • 2017
  • The application of Monte Carlo (MC) to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS) method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
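The weight-window mechanics referred to above (splitting overweight particles, playing Russian roulette with underweight ones) can be sketched briefly; the window bounds, survival weight, and return convention are illustrative assumptions, not MCNP's or CADIS's implementation.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of surviving particle weights after applying a
    weight window [w_low, w_high] to a particle of the given weight."""
    if weight > w_high:
        # split: replace the particle by n copies of roughly equal weight
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_surv
        w_surv = (w_low + w_high) / 2.0
        if rng.random() < weight / w_surv:
            return [w_surv]        # survivor carries the survival weight
        return []                  # particle killed
    return [weight]                # inside the window: unchanged
```

Inconsistent source sampling, in this framing, simply means that freshly born particles may already fall outside `[w_low, w_high]` and be split or rouletted immediately, which is the cost the abstract's consistency argument is about.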

An Application of Variance Reduction Technique for Stochastic Network Reliability Evaluation

  • 하경재;김원경
    • Journal of the Korea Society for Simulation
    • /
    • v.10 no.2
    • /
    • pp.61-74
    • /
    • 2001
  • Reliability evaluation of a large-scale network becomes very complicated as the size of the network grows. Moreover, if the component reliability is not constant but follows a probability distribution, it is almost impossible to compute the network reliability analytically. This paper studies network evaluation methods to overcome such difficulties. To this end, an efficient path-set algorithm is developed that finds the path sets connecting the source and terminal nodes. In addition, various variance reduction techniques are applied when computing the system reliability by simulation to enhance its performance. A large-scale network is given as a numerical example, and comparisons of the path-set algorithm and the variance reduction techniques are discussed. A small simulation sketch follows this entry.

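The simulation side of this approach can be illustrated with one of the classical variance reduction techniques, antithetic variates, on a toy two-terminal network; the bridge topology, edge reliabilities, and sample size below are made-up values, and the paper's path-set algorithm is not reproduced.

```python
import numpy as np

# illustrative bridge network: edges with independent working probabilities
EDGES = {("s", "a"): 0.9, ("s", "b"): 0.9, ("a", "b"): 0.8,
         ("a", "t"): 0.9, ("b", "t"): 0.9}

def connected(up_edges):
    """Is terminal 't' reachable from source 's' using only working edges?"""
    adj = {}
    for (u, v) in up_edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, stack = {"s"}, ["s"]
    while stack:
        node = stack.pop()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return "t" in seen

def reliability_antithetic(n_pairs=20000, seed=1):
    """Estimate s-t reliability with antithetic variates: each uniform draw U
    and its mirror 1-U generate negatively correlated network states."""
    rng = np.random.default_rng(seed)
    edges, probs = list(EDGES), np.array(list(EDGES.values()))
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random(len(edges))
        for sample in (u, 1.0 - u):            # antithetic pair
            up = [e for e, s, p in zip(edges, sample, probs) if s < p]
            total += connected(up)
    return total / (2 * n_pairs)

print("estimated s-t reliability:", reliability_antithetic())
```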

THE VARIANCE ESTIMATORS FOR CALIBRATION ESTIMATOR IN UNIT NONRESPONSE

  • Son, Chang-Kyoon;Jung, Hun-Jo
    • Journal of applied mathematics & informatics
    • /
    • v.9 no.2
    • /
    • pp.869-877
    • /
    • 2002
  • In the presence of unit nonresponse, we perform the calibration estimation procedure for the population total corresponding to the levels of auxiliary information and derive the Taylor and jackknife variance estimators for it. We study nonresponse bias reduction and variance stabilization, and then show the efficiency of the Taylor and jackknife variance estimators through a simulation study.
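The jackknife idea behind one of the variance estimators can be sketched generically; the delete-one scheme below is applied to a simple mean rather than to the paper's calibration estimator, and the data are simulated placeholders.

```python
import numpy as np

def jackknife_variance(sample, estimator):
    """Delete-one jackknife variance estimate for a generic statistic."""
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    replicates = np.array([estimator(np.delete(sample, i)) for i in range(n)])
    return (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)

rng = np.random.default_rng(2)
y = rng.exponential(3.0, size=200)
print("jackknife variance of the mean:", jackknife_variance(y, np.mean))
print("classical estimate s^2/n      :", y.var(ddof=1) / len(y))
```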

A PRACTICAL LOOK AT MONTE CARLO VARIANCE REDUCTION METHODS IN RADIATION SHIELDING

  • Olsher, Richard H.
    • Nuclear Engineering and Technology
    • /
    • v.38 no.3
    • /
    • pp.225-230
    • /
    • 2006
  • With the advent of inexpensive computing power over the past two decades, applications of Monte Carlo radiation transport techniques have proliferated dramatically. At Los Alamos, the Monte Carlo codes MCNP5 and MCNPX are used routinely on personal computer platforms for radiation shielding analysis and dosimetry calculations. These codes feature a rich palette of variance reduction (VR) techniques. The motivation of VR is to exchange user efficiency for computational efficiency. It has been said that a few hours of user time often reduces computational time by several orders of magnitude. Unfortunately, user time can stretch into many hours, as most VR techniques require significant user experience and intervention for proper optimization. It is the purpose of this paper to outline VR strategies, tested in practice, optimized for several common radiation shielding tasks, with the hope of reducing user setup time for similar problems. A strategy is defined in this context to mean a collection of MCNP radiation transport physics options and VR techniques that work synergistically to optimize a particular shielding task. Examples are offered in the areas of source definition, skyshine, streaming, and transmission.

Efficiency of Estimation for Parameters by Use of Variance Reduction Techniques

  • Kwon Chi-myung
    • Journal of the Korea Society for Simulation
    • /
    • v.14 no.3
    • /
    • pp.129-136
    • /
    • 2005
  • We develop a variance reduction technique applicable to a simulation experiment whose purpose is to estimate the parameters of a first-order linear model. This method utilizes control variates obtained during the course of simulation runs under Schruben and Margolin's method (the S-M method). The performance of this method is shown to be similar to that of the S-M method in estimating the main effects, and superior to it in estimating the overall mean response of a given model. We consider that the proposed method may yield better results than the S-M method when the selected control variates are highly correlated with the response at each design point. A brief sketch follows this entry.

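A minimal sketch of applying a control-variate adjustment at each design point before fitting a first-order metamodel is shown below; the design, the synthetic control with known mean zero, and the least-squares fit are illustrative assumptions and do not reproduce the Schruben-Margolin assignment rule.

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative first-order metamodel: E[Y | d] = b0 + b1 * d
true_b0, true_b1 = 5.0, 2.0
design = np.repeat(np.array([-1.0, 0.0, 1.0]), 50)   # 3 design points, 50 reps each

# simulated response plus a correlated control variate with known mean 0
control = rng.normal(0.0, 1.0, design.size)
response = true_b0 + true_b1 * design + 1.5 * control + rng.normal(0.0, 1.0, design.size)

# control-variate adjustment applied per design point before fitting the metamodel
adjusted = response.copy()
for d in np.unique(design):
    idx = design == d
    beta = np.cov(response[idx], control[idx])[0, 1] / np.var(control[idx])
    adjusted[idx] = response[idx] - beta * control[idx]   # known E[control] = 0

X = np.column_stack([np.ones_like(design), design])
crude_fit, *_ = np.linalg.lstsq(X, response, rcond=None)
cv_fit, *_ = np.linalg.lstsq(X, adjusted, rcond=None)
print("crude fit (b0, b1):", crude_fit)
print("CV-adjusted fit   :", cv_fit)
```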

Variance Reduction Techniques of Monte Carlo Simulation for the Power System Reliability Evaluation (A Study of Variance Reduction Methods in Simulation Techniques for Reliability Calculation of Large Power Systems Considering Non-Exponential Functions)

  • Kim, Dong-Hyeon;Jung, Young-Soo;Kim, Jin-O
    • Proceedings of the KIEE Conference
    • /
    • 1996.07b
    • /
    • pp.887-889
    • /
    • 1996
  • This paper presents variance reduction techniques for Monte Carlo simulation considering non-exponential distributions in power system reliability evaluation. Generally, the state residence times of the components of a power system are assumed to be exponentially distributed. Sometimes, however, this assumption may cause large errors in the reliability index evaluation. A non-exponential distribution can be approximated by a combination of several Erlangian distributions, whose inverse transform is easily calculated using the composition method. This paper proposes a new approach to deal with non-exponential distributions and to reduce the simulation time by means of variance reduction techniques such as control variates and antithetic variates. A short sampling sketch follows this entry.

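Sampling a non-exponential residence time from a combination of Erlangian components via the composition method can be sketched as follows; the two-component mixture, its weights, shapes, and rates are illustrative values only.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_erlang_mixture(weights, shapes, rates, size):
    """Composition method: pick an Erlang component by its mixture weight,
    then sample that component as a sum of exponential stages."""
    weights = np.asarray(weights, dtype=float)
    comp = rng.choice(len(weights), size=size, p=weights / weights.sum())
    samples = np.empty(size)
    for i, c in enumerate(comp):
        # Erlang(k, rate) is the sum of k independent Exp(rate) stages
        samples[i] = rng.exponential(1.0 / rates[c], size=shapes[c]).sum()
    return samples

# illustrative two-component approximation of a non-exponential residence time
times = sample_erlang_mixture(weights=[0.3, 0.7], shapes=[2, 5],
                              rates=[1.0, 0.5], size=10000)
print("mean residence time:", times.mean())
```

Control variates or antithetic variates would then be layered on top of such sampled residence times, as the abstract describes.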

Estimation of the Noise Variance in Image and Noise Reduction

  • Kim, Yeong-Hwa;Nam, Ji-Ho
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.5
    • /
    • pp.905-914
    • /
    • 2011
  • In the field of image processing, removing noise contamination from the original image is essential. However, for various reasons, the occurrence of noise is practically impossible to prevent completely, so reducing the noise contained in images remains important. In this study, we estimate the level of the noise variance based on a measurement of the relative strength of the noise, and we propose a noise reduction algorithm that uses a sigma filter. The proposed statistical noise reduction methodology provides significantly improved results over the usual sigma filtering, regardless of the level of the noise variance.
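A sketch of the classic sigma filter that the proposed method builds on is given below; it assumes the noise standard deviation is already known, whereas the paper's contribution is to estimate that level from the image, and the test image and noise level are placeholders.

```python
import numpy as np

def sigma_filter(image, sigma, window=3):
    """Classic sigma filter: each pixel is replaced by the mean of the
    neighbouring pixels whose values lie within +/- 2*sigma of its own value."""
    image = np.asarray(image, dtype=float)
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + window, j:j + window]
            mask = np.abs(patch - image[i, j]) <= 2.0 * sigma
            out[i, j] = patch[mask].mean()
    return out

rng = np.random.default_rng(5)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))     # smooth test image
noisy = clean + rng.normal(0.0, 20.0, clean.shape)    # known noise std = 20
denoised = sigma_filter(noisy, sigma=20.0)
print("noise variance before:", np.var(noisy - clean))
print("noise variance after :", np.var(denoised - clean))
```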

Data Visualization using Linear and Non-linear Dimensionality Reduction Methods

  • Kim, Junsuk;Youn, Joosang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.12
    • /
    • pp.21-26
    • /
    • 2018
  • As large amounts of data can now be stored efficiently, methods for extracting meaningful features from big data have become important. In particular, techniques for converting high-dimensional data to low-dimensional data are crucial for data visualization. In this study, principal component analysis (PCA, a linear dimensionality reduction technique) and Isomap (a non-linear dimensionality reduction technique) are introduced and applied to neural big data obtained by functional magnetic resonance imaging (fMRI). First, we investigate how well the physical properties of the stimuli are maintained after the dimensionality reduction processes. We also compare the residual variance to quantify the amount of information that is not explained. As a result, the dimensionality reduction using Isomap retains more information than principal component analysis. Our results demonstrate that it is necessary to consider not only linear but also nonlinear characteristics in big data analysis.
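A short sketch of a PCA/Isomap comparison with a residual-variance criterion, using a synthetic nonlinear data set in place of the fMRI data; the swiss-roll data, neighbor count, and the particular residual-variance proxy (based on Euclidean input distances) are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# nonlinear toy data standing in for the high-dimensional fMRI features
X, color = make_swiss_roll(n_samples=1000, random_state=0)

pca_2d = PCA(n_components=2).fit_transform(X)
iso_2d = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

def residual_variance(high, low):
    """Residual variance proxy: 1 - r^2 between original and embedded pairwise distances."""
    d_high, d_low = pdist(high), pdist(low)
    r = np.corrcoef(d_high, d_low)[0, 1]
    return 1.0 - r ** 2

print("PCA residual variance   :", residual_variance(X, pca_2d))
print("Isomap residual variance:", residual_variance(X, iso_2d))
```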