• Title/Summary/Keyword: Stochastic Sampling


Optimal Sampling Plans of Reliability Using the Complex Number Function in the Complex System

  • Oh, Chung Hwan; Lee, Jong Chul; Cho, Nam Ho
    • Journal of Korean Society for Quality Management, v.20 no.1, pp.158-167, 1992
  • This paper presents new techniques for optimal sampling plans of reliability, applying mathematical complex numbers (real and imaginary parts) in a complex reliability system. The research formulation presents a mathematical model which preserves all essential aspects of the main and auxiliary factors of the research objectives. It is important to formulate the problem in good agreement with the objective of the research, considering the main and auxiliary factors which affect system performance. This model was repeatedly tested to determine the required statistical characteristics, which in themselves determine the actual and standard distributions. Evaluation programs and techniques are developed for establishing criteria for sampling plans of reliability effectiveness, and the evaluation of system performance was based on the complex stochastic process (derived by the Runge-Kutta method, by Kolmogorov's criterion, and by the transform of a solution to a Sturm-Liouville equation). The special structure of this mathematical model is exploited to develop the optimal sampling plans of reliability in the complex system.


Importance Sampling Embedded Experimental Frame Design for Efficient Monte Carlo Simulation (효율적인 몬테 칼로 시뮬레이션을 위한 중요 샘플링 기법이 내장된 실험 틀 설계)

  • Seo, Kyung-Min; Song, Hae-Sang
    • The Journal of the Korea Contents Association, v.13 no.4, pp.53-63, 2013
  • This paper presents an importance sampling (IS) embedded experimental frame (EF) design for efficient Monte Carlo (MC) simulation. To realize the IS principle, the proposed EF contains two embedded sub-models, an Importance Sampler (IS) and a Bias Compensator (BC). The IS and BC models stand between the existing system model and the EF, which enhances model reusability. Furthermore, the proposed EF achieves fast stochastic simulation compared with the crude MC technique. From two abstract case studies using the proposed EF, we obtain interesting experimental results showing a remarkable enhancement of simulation performance. Finally, we expect that this work will serve various content areas in enhancing simulation performance and, moreover, will be utilized as a tool to understand and analyze social phenomena.
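The IS principle the abstract describes can be sketched independently of the paper's experimental-frame design: draw samples from a biased proposal density concentrated on the rare event, then compensate with likelihood-ratio weights so the estimator stays unbiased. A minimal sketch (the rare event, proposal, and sample sizes are illustrative assumptions, not the paper's case studies):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Rare event: P(X > 4) for X ~ N(0,1); true value 0.5*erfc(4/sqrt(2)) ~ 3.17e-5.
# Crude MC with N samples sees only a handful of hits, so its estimate is noisy.
x_crude = rng.standard_normal(N)
p_crude = np.mean(x_crude > 4.0)

# Importance sampler: draw from the shifted proposal q = N(4,1), which puts
# half its samples in the rare region, then reweight by phi(x)/q(x) = exp(8-4x)
# (the bias-compensation step).
x_is = rng.standard_normal(N) + 4.0
weights = np.exp(8.0 - 4.0 * x_is)
p_is = np.mean((x_is > 4.0) * weights)

p_true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
```

With the same number of samples, the weighted estimator's relative error drops from order one to well below a percent, which is the kind of speedup over crude MC the abstract refers to.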

Uncertainty quantification of PWR spent fuel due to nuclear data and modeling parameters

  • Ebiwonjumi, Bamidele; Kong, Chidong; Zhang, Peng; Cherezov, Alexey; Lee, Deokjung
    • Nuclear Engineering and Technology, v.53 no.3, pp.715-731, 2021
  • Uncertainties are calculated for pressurized water reactor (PWR) spent nuclear fuel (SNF) characteristics. The deterministic code STREAM is currently being used as an SNF analysis tool to obtain isotopic inventory, radioactivity, decay heat, and neutron and gamma source strengths. The SNF analysis capability of STREAM was recently validated; however, an uncertainty analysis had yet to be conducted. To estimate the uncertainty due to nuclear data, STREAM is used to perturb the nuclear cross section (XS) and resonance integral (RI) libraries produced by NJOY99. The perturbation of XS and RI involves stochastic sampling of ENDF/B-VII.1 covariance data. To estimate the uncertainty due to modeling parameters (fuel design and irradiation history), surrogate models are built based on polynomial chaos expansion (PCE), and variance-based sensitivity indices (i.e., Sobol' indices) are employed to perform global sensitivity analysis (GSA). The calculation results indicate that the uncertainty of SNF characteristics due to modeling parameters is also very important and can contribute significantly relative to the uncertainty due to nuclear data. In addition, the surrogate model offers a computationally efficient approach, with significantly reduced computation time, to accurately evaluate uncertainties of SNF integral characteristics.
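Variance-based global sensitivity analysis of the kind cited here can be illustrated with a Saltelli-type sampling estimator of first-order Sobol' indices; the toy model below is a hypothetical stand-in for the PCE surrogate, not the STREAM calculation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 100_000, 2

def model(x):
    # Hypothetical surrogate: y = x1 + 2*x2, so x2 explains 80% of the variance.
    return x[:, 0] + 2.0 * x[:, 1]

# Two independent sample matrices, as in Saltelli's scheme.
A = rng.standard_normal((N, d))
B = rng.standard_normal((N, d))
fA, fB = model(A), model(B)
total_var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]   # replace column i of A with B's column i
    S.append(np.mean(fB * (model(AB) - fA)) / total_var)
# First-order indices: S ~ [0.2, 0.8]
```

The index for each input is the fraction of the output variance it explains alone; when a cheap surrogate replaces the full code, this sampling loop becomes affordable even for many inputs.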

Propagation of radiation source uncertainties in spent fuel cask shielding calculations

  • Ebiwonjumi, Bamidele; Mai, Nhan Nguyen Trong; Lee, Hyun Chul; Lee, Deokjung
    • Nuclear Engineering and Technology, v.54 no.8, pp.3073-3084, 2022
  • The propagation of radiation source uncertainties in spent nuclear fuel (SNF) cask shielding calculations is presented in this paper. The uncertainty propagation employs the depletion and source term outputs of the deterministic code STREAM as input to the transport simulation of the Monte Carlo (MC) codes MCS and MCNP6. The dose rate uncertainties arising from two sources, nuclear data and modeling parameters, are quantified. The nuclear data uncertainties are obtained from stochastic sampling of the cross-section covariance and perturbed fission product yields. Uncertainties induced by perturbed modeling parameters consider the design parameters and operating conditions. Uncertainties from the two sources result in perturbed depleted nuclide inventories and radiation source terms, which are then propagated to the dose rate on the cask surface. The uncertainty analysis results show that the neutron and secondary photon doses have uncertainties dominated by the cross sections and modeling parameters, while the fission yields have a relatively insignificant effect. In contrast, the primary photon dose is mostly influenced by the fission yields and modeling parameters, while the cross-section data have a relatively negligible effect. Overall, the neutron, secondary photon, and primary photon doses can have uncertainties of up to about 13%, 14%, and 6%, respectively.
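The stochastic sampling step that drives this kind of propagation can be sketched generically: draw correlated parameter perturbations from a covariance matrix (via its Cholesky factor) and push each sample through the downstream calculation. All numbers and the response function below are illustrative assumptions, not the STREAM/MCS data:

```python
import numpy as np

rng = np.random.default_rng(7)

mu = np.array([1.0, 0.5])            # nominal parameter values (hypothetical)
cov = np.array([[0.0100, 0.0040],    # their covariance matrix (hypothetical)
                [0.0040, 0.0025]])
L = np.linalg.cholesky(cov)          # cov = L @ L.T

def response(p):
    # Stand-in for the downstream transport/dose-rate calculation.
    return p[:, 0] ** 2 * p[:, 1]

# Stochastic sampling: one correlated perturbed parameter set per history.
samples = mu + rng.standard_normal((50_000, 2)) @ L.T
y = response(samples)
rel_unc = y.std() / y.mean()         # relative uncertainty of the response
```

The spread of the sampled responses directly gives the propagated uncertainty, including the effect of parameter correlations that a one-at-a-time perturbation would miss.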

Bayesian Analysis of a Stochastic Beta Model in Korean Stock Markets (확률베타모형의 베이지안 분석)

  • Kho, Bong-Chan; Yae, Seung-Min
    • The Korean Journal of Financial Management, v.22 no.2, pp.43-69, 2005
  • This study provides empirical evidence that a stochastic beta model based on Bayesian analysis outperforms the existing conditional beta model and GARCH model in terms of estimation accuracy and explanatory power in the cross-section of stock returns in Korea. Betas estimated by the stochastic beta model explain 30~50% of the cross-sectional variation in stock returns, whereas other time-varying beta models account for less than 3%. Such a difference in explanatory power across models turns out to come from the fact that the stochastic beta model absorbs the variation due to market anomalies such as size, BE/ME, and idiosyncratic volatility. These results support the rational asset pricing model in that market anomalies are closely related to the variation of expected returns generated by time-varying betas.
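A stochastic (time-varying) beta of this kind is often written as a random-walk state with the market model as the observation equation. A minimal Kalman-filter sketch on simulated data (all parameter values are illustrative assumptions, and the paper's actual estimation is Bayesian rather than this plain filter):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000

# Simulate a random-walk beta and daily returns: r_t = beta_t * rm_t + eps_t.
beta_true = 1.0 + np.cumsum(rng.normal(0.0, 0.02, T))
rm = rng.normal(0.0, 0.01, T)                  # market returns
r = beta_true * rm + rng.normal(0.0, 0.005, T) # stock returns

# Scalar Kalman filter for the state beta_t (random-walk transition).
b, P = 1.0, 1.0                                # state estimate and its variance
q, s2 = 0.02 ** 2, 0.005 ** 2                  # state and observation noise
est = np.empty(T)
for t in range(T):
    P += q                                     # predict: beta_t = beta_{t-1} + eta
    K = P * rm[t] / (rm[t] ** 2 * P + s2)      # Kalman gain (H_t = rm_t)
    b += K * (r[t] - b * rm[t])                # update with today's return
    P *= 1.0 - K * rm[t]
    est[t] = b
```

The filtered path tracks the drifting beta, which is the behavior a constant-beta regression (and hence its cross-sectional fit) cannot capture.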


Parallel processing in structural reliability

  • Pellissetti, M.F.
    • Structural Engineering and Mechanics, v.32 no.1, pp.95-126, 2009
  • The present contribution addresses the parallelization of advanced simulation methods for structural reliability analysis, which have recently been developed for large-scale structures with a high number of uncertain parameters. In particular, the Line Sampling method and the Subset Simulation method are considered. The proposed parallel algorithms exploit the parallelism associated with the possibility of simultaneously performing independent FE analyses. For the Line Sampling method, a parallelization scheme is proposed both for the actual sampling process and for the statistical gradient estimation method used to identify the so-called important direction of the Line Sampling scheme. Two parallelization strategies are investigated for the Subset Simulation method: the first consists in the embarrassingly parallel advancement of distinct Markov chains; in this case the speedup is bounded by the number of chains advanced simultaneously. The second parallel Subset Simulation algorithm utilizes the concept of speculative computing. Speedup measurements for the FE model of a multistory building (24,000 DOFs) show the reduction of the wall-clock time to a very viable amount (<10 minutes for Line Sampling and about 1 hour for Subset Simulation). The measurements, conducted on clusters of multi-core nodes, also indicate a strong sensitivity of the parallel performance to the load level of the nodes, in terms of the number of simultaneously used cores. This performance degradation is related to memory bottlenecks during the modal analysis required in each FE analysis.
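The first strategy, advancing independent samples or chains in parallel and combining the results afterwards, can be sketched in a few lines. The toy integrand below stands in for the expensive FE analyses, so the example shows only the embarrassingly parallel structure, not the paper's cluster speedups (threads are used here for portability, whereas the paper distributes FE analyses across processes and nodes):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_chain(args):
    # One independent "chain": a private RNG and its own batch of samples.
    seed, n = args
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return hits / n

n_chains, n_per_chain = 8, 200_000
with ThreadPoolExecutor(max_workers=n_chains) as pool:
    parts = list(pool.map(run_chain, [(s, n_per_chain) for s in range(n_chains)]))

# Combine the independent estimates afterwards (here: an MC estimate of pi).
pi_est = 4.0 * sum(parts) / n_chains
```

Because the chains never communicate, the achievable speedup is capped by the number of chains, exactly the bound the abstract notes for this strategy.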

Indirect Kalman Filter based Sensor Fusion for Error Compensation of Low-Cost Inertial Sensors and Its Application to Attitude and Position Determination of Small Flying robot (저가 관성센서의 오차보상을 위한 간접형 칼만필터 기반 센서융합과 소형 비행로봇의 자세 및 위치결정)

  • Park, Mun-Soo; Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems, v.13 no.7, pp.637-648, 2007
  • This paper presents a sensor fusion method based on an indirect Kalman filter (IKF) for error compensation of low-cost inertial sensors and its application to the determination of attitude and position of small flying robots. First, an analysis of the measurement error characteristics under zero input is performed, focusing on the bias due to temperature variation, to derive a simple nonlinear bias model of low-cost inertial sensors. Moreover, from experimental results showing that the coefficients of this bias model possess non-deterministic (stochastic) uncertainties, the bias of low-cost inertial sensors is characterized as consisting of both deterministic and stochastic bias terms. Then, the IKF is derived to improve the long-term stability dominated by the stochastic bias error, fusing low-cost inertial sensor measurements compensated by the deterministic bias model with non-inertial sensor measurements. In addition, in the case of intermittent non-inertial sensor measurements due to an unreliable data link, the upper and lower bounds of the state estimation error covariance matrix of the discrete-time IKF are analyzed by solving a stochastic algebraic Riccati equation, and it is shown that they depend on the throughput of the data link and the sampling period. To evaluate the performance of the proposed method, experimental results of the IKF for the attitude determination of a small flying robot are presented in comparison with those of an extended Kalman filter that compensates only for the deterministic bias error model.
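The indirect (error-state) formulation can be illustrated with a one-axis toy problem: integrate the biased gyro, let a Kalman filter estimate the attitude error and gyro bias from intermittent reference measurements, and feed the attitude correction back. All noise levels and rates below are illustrative assumptions, not the paper's sensor data:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T = 0.01, 5000                  # 100 Hz gyro, 50 s run
bias_true, omega = 0.05, 0.5        # rad/s gyro bias and true turn rate

# Error state x = [attitude error, gyro bias]; F is its transition matrix.
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([(0.01 * dt) ** 2, 1e-10])
H = np.array([[1.0, 0.0]])
R = np.array([[0.01 ** 2]])         # non-inertial reference measurement noise

theta_true, theta_int = 0.0, 0.0
x, P = np.zeros(2), np.diag([0.1, 0.01])
for k in range(T):
    theta_true += omega * dt
    gyro = omega + bias_true + rng.normal(0.0, 0.01)
    theta_int += gyro * dt                    # uncompensated integration drifts
    x = F @ x                                 # predict the error state
    P = F @ P @ F.T + Q
    if k % 50 == 49:                          # intermittent attitude reference
        z = theta_int - (theta_true + rng.normal(0.0, 0.01))
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        theta_int -= x[0]                     # feedback: correct the attitude
        x[0] = 0.0                            # and reset the error state
bias_est = x[1]
```

The filter works on the small error state rather than the full attitude, which is what distinguishes the indirect form from a direct (total-state) Kalman filter.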

Stable modal identification for civil structures based on a stochastic subspace algorithm with appropriate selection of time lag parameter

  • Wu, Wen-Hwa; Wang, Sheng-Wei; Chen, Chien-Chou; Lai, Gwolong
    • Structural Monitoring and Maintenance, v.4 no.4, pp.331-350, 2017
  • Based on an alternative stabilization diagram obtained by varying the time lag parameter in the stochastic subspace identification analysis, this study investigates measurements from several cases of civil structures to extend the applicability of a recently noticed criterion for ensuring stable identification results. This criterion demands that the time lag parameter be no less than a critical threshold determined by the ratio of the sampling rate to the fundamental system frequency, and it is first validated for applications with single measurements from stay cables, bridge decks, and buildings. As for multiple measurements, it is found that the predicted threshold works well for the cases of stay cables and buildings, but evidently overestimates for the case of bridge decks. This discrepancy is further explained by the fact that the deck vibrations are induced by multiple excitations coming independently from the passing traffic. Cable vibration signals covering sensor locations close to both the deck and pylon ends of a cable-stayed bridge provide convincing evidence to corroborate this important finding.
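Covariance-driven stochastic subspace identification, with the time lag parameter constrained by the criterion above, can be sketched for a single channel: build a Hankel matrix of output correlations up to lag 2i, factor it by SVD, and recover the poles from the shift structure of the observability matrix. The signal below is a synthetic 2 Hz mode, an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f0, N = 50.0, 2.0, 20_000            # sampling rate, modal frequency, samples
t = np.arange(N) / fs
y = np.sin(2.0 * np.pi * f0 * t) + 0.05 * rng.standard_normal(N)

i = 30                                   # time lag parameter (Hankel half-size)
# Output correlations R(tau) for tau = 1 .. 2i.
R = np.array([np.mean(y[tau:] * y[:-tau]) for tau in range(1, 2 * i + 1)])
# Hankel matrix H[p, q] = R(p + q + 1) (scalar blocks for one channel).
H = np.array([[R[p + q] for q in range(i)] for p in range(i)])

U, s, Vt = np.linalg.svd(H)
n = 2                                    # model order: one mode = 2 states
O = U[:, :n] * np.sqrt(s[:n])            # observability matrix
# Shift invariance: O[1:] = O[:-1] @ A  ->  least squares for the state matrix.
A, *_ = np.linalg.lstsq(O[:-1], O[1:], rcond=None)
mu = np.linalg.eigvals(A)
f_id = np.abs(np.angle(mu[0])) * fs / (2.0 * np.pi)   # identified frequency
```

Consistent with the criterion in the abstract, i = 30 here exceeds the ratio of the sampling rate to the fundamental frequency (50/2 = 25); values well below that threshold tend to destabilize the identified poles.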

Measuring the efficiency and determinants of rice production in Myanmar: a translog stochastic frontier approach

  • Wai, Khine Zar; Hong, Seungjee
    • Korean Journal of Agricultural Science, v.48 no.1, pp.59-71, 2021
  • This study investigated the extent to which rice producers from the Ayeyarwaddy Region of Myanmar could improve their productivity if inputs were used efficiently in rice cultivation. To achieve this objective, simple random sampling was used to collect data from 300 rice growers in the study area. The data were analyzed with the translog stochastic frontier approach to understand production efficiencies. The study further estimated the factors that influence the efficiency levels of rice farmers. The empirical results reveal that the average technical, allocative, and economic efficiencies were 76.11%, 47.85%, and 34.15%, respectively. This suggests that there is considerable room for improving rice production through better utilization of the available resources at the current level of technology. The study suggests that strengthening agricultural training programs and adopting improved rice varieties may reduce overall inefficiencies among rice farmers in Myanmar. Factors such as age, household size, education, farming experience, farm size, rice variety, training, and off-farm income have a significant impact on increasing or decreasing farmers' efficiency. Efficiency can be improved by establishing farmer field school programs to increase the scale of operations. The government should encourage young educated people to participate in paddy production and also intervene to reduce input prices and control the quality of seeds.
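For reference, the translog stochastic frontier commonly estimated in such studies takes the following form (standard textbook notation, not necessarily the paper's exact specification):

```latex
\ln y_i \;=\; \beta_0 \;+\; \sum_{j} \beta_j \ln x_{ij}
\;+\; \tfrac{1}{2} \sum_{j} \sum_{k} \beta_{jk} \ln x_{ij} \ln x_{ik}
\;+\; v_i \;-\; u_i
```

where $y_i$ is output, $x_{ij}$ are inputs, $v_i \sim N(0, \sigma_v^2)$ is statistical noise, and $u_i \ge 0$ is a one-sided inefficiency term. Technical efficiency is $TE_i = \exp(-u_i)$, so an average technical efficiency of 76.11% corresponds to $\exp(-u_i)$ averaged over farms.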

Islamic Bank Efficiency in Indonesia: Stochastic Frontier Analysis

  • OCTRINA, Fajra; MARIAM, Alia Gantina Siti
    • The Journal of Asian Finance, Economics and Business, v.8 no.1, pp.751-758, 2021
  • This research measures the efficiency level of Islamic banking in Indonesia and analyzes the factors that can affect that efficiency level. A purposive sampling technique was used to determine the sample, with the criteria that a bank had been operating since 2010 and consistently published its financial reports during the research period from 2011 to 2019; the total sample obtained was therefore 11 banks. The efficiency analysis is done using Stochastic Frontier Analysis (SFA) with linear programming, with Frontier 4.1 and EViews 9 as test tools to identify the factors that affect efficiency. The efficiency test involves inputs and outputs, while the influence test uses bank-specific variables comprising bank size, bank financial ratios, and a macroeconomic variable. The results show that only two banks come close to being fully efficient, so the results still do not indicate that Islamic banks work efficiently. The influence test shows that the factors affecting Islamic banking efficiency in Indonesia are bank size, the Capital Adequacy Ratio (CAR), Non-Performing Financing (NPF), and the Financing to Deposit Ratio (FDR), while other factors were not influential over the study period.