• Title/Summary/Keyword: Stochastic Approximation


Auto Regulated Data Provisioning Scheme with Adaptive Buffer Resilience Control on Federated Clouds

  • Kim, Byungsang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.11
    • /
    • pp.5271-5289
    • /
    • 2016
  • On large-scale data analysis platforms deployed on cloud infrastructures over the Internet, the instability of data transfer times and the dynamics of processing rates demand a more sophisticated data distribution scheme, one that maximizes parallel efficiency by balancing the load among participating computing elements and eliminating their idle time. In particular, under real-time constraints and with a limited data buffer (in-memory storage), a more controllable mechanism is needed to prevent both overflow and underflow of the finite buffer. In this paper, we propose an auto-regulated data provisioning model based on a receiver-driven data pull model. In this model, we provide a synchronized data replenishment mechanism that implicitly avoids data buffer overflow and explicitly regulates data buffer underflow by adequately adjusting the buffer resilience. To estimate the optimal buffer resilience, we exploit an adaptive buffer resilience control scheme that minimizes both the data buffer space and the idle time of the processing elements, based on directly measured sample-path analysis. The simulation results show that the proposed scheme provides an allowable approximation of the numerical results. It is also efficient enough to apply in a dynamic environment where the stochastic characteristics of the data transfer time and the data processing rate cannot be postulated, or where both fluctuate.
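The receiver-driven pull idea above can be sketched as a toy simulation. Everything here (rates, thresholds, the `simulate_pull_provisioning` helper) is invented for illustration and is not the authors' scheme:

```python
import random

def simulate_pull_provisioning(steps=1000, capacity=50, resilience=10, seed=1):
    """Toy receiver-driven pull loop. The consumer drains the buffer at a
    fluctuating rate; the receiver pulls only enough items to refill up to
    the resilience level, so the finite buffer can never overflow.
    Returns (underflow_events, overflow_events)."""
    rng = random.Random(seed)
    level, underflow, overflow = resilience, 0, 0
    for _ in range(steps):
        # consumer side: the processing rate fluctuates step to step
        level -= min(level, rng.randint(0, 3))
        if level == 0:
            underflow += 1                  # processing element sits idle
        # receiver side: pull up to the resilience level (never past capacity)
        if level < resilience:
            request = min(resilience - level, capacity - level)
            # transfer time fluctuates: only part of the request may arrive
            level += rng.randint(0, request)
        if level > capacity:
            overflow += 1                   # impossible by construction
    return underflow, overflow

underflow, overflow = simulate_pull_provisioning()
print(underflow, overflow)
```

Because the pull request is capped at the free buffer space, overflow is avoided implicitly, while the resilience threshold is what a controller would tune to keep underflow (idle time) low.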

Quantification Analysis Problem using Mean Field Theory in Neural Network (평균장 이론을 이용한 전량화분석 문제의 최적화)

  • Jo, Gwang-Su
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.3
    • /
    • pp.417-424
    • /
    • 1995
  • This paper applies an MFT (Mean Field Theory) neural network with continuous variables to the quantification analysis problem. The quantification analysis problem, one of the important problems in statistics, is NP-complete and arises in the optimal location of objects in the design space according to given similarities only. Starting with a reformulation of the quantification problem as a penalty problem, this paper proposes a "one-variable stochastic simulated annealing (one-variable SSA)" based on the mean field approximation. This makes it possible to evaluate the spin average faster than computing the real values in the MFT neural network with continuous variables. Experimental results show the feasibility of this approach for overcoming the difficulty of evaluating the spin average value, which is expressed as an integral in such models.
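The mean-field spin-average idea can be illustrated with a generic mean-field annealing loop. This is a textbook-style sketch of the closed-form spin average v_i = tanh(h_i / T), not the paper's one-variable SSA; the coupling matrix and schedule are invented:

```python
import math

def mean_field_anneal(W, T0=2.5, Tmin=0.05, alpha=0.9, sweeps=3):
    """Generic mean-field annealing: replace each stochastic spin by its
    mean (spin average) v_i = tanh(h_i / T) with local field
    h_i = sum_j W[i][j] * v_j, and cool the temperature geometrically."""
    n = len(W)
    v = [0.01] * n                           # near-disordered start
    T = T0
    while T > Tmin:
        for _ in range(sweeps):
            for i in range(n):
                h = sum(W[i][j] * v[j] for j in range(n) if j != i)
                v[i] = math.tanh(h / T)      # closed-form spin average
        T *= alpha
    return v

# two ferromagnetically coupled pairs of spins: each pair should align
W = [[0, 2, 0, 0],
     [2, 0, 0, 0],
     [0, 0, 0, 2],
     [0, 0, 2, 0]]
v = mean_field_anneal(W)
print([round(s, 2) for s in v])
```

The tanh update is exactly the point of the mean-field approximation: the thermal average of a spin is available in closed form, so no stochastic sampling (or integral evaluation) is needed at each step.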


Mobile Robot Localization and Mapping using a Gaussian Sum Filter

  • Kwok, Ngai Ming;Ha, Quang Phuc;Huang, Shoudong;Dissanayake, Gamini;Fang, Gu
    • International Journal of Control, Automation, and Systems
    • /
    • v.5 no.3
    • /
    • pp.251-268
    • /
    • 2007
  • A Gaussian sum filter (GSF) is proposed in this paper for simultaneous localization and mapping (SLAM) in mobile robot navigation. In particular, the SLAM problem is tackled for cases when only bearing measurements are available. Within the stochastic mapping framework using an extended Kalman filter (EKF), a Gaussian probability density function (pdf) is assumed to describe the range-and-bearing sensor noise. In the case of a bearing-only sensor, a sum of weighted Gaussians is used to represent the non-Gaussian robot-landmark range uncertainty, resulting in a bank of EKFs for estimation of the robot and landmark locations. In our approach, the Gaussian parameters are designed on the basis of minimizing the representation error. The computational complexity of the GSF is reduced by applying the sequential probability ratio test (SPRT) to remove under-performing EKFs. Extensive experimental results are included to demonstrate the effectiveness and efficiency of the proposed techniques.
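The core GSF representation, a weighted sum of Gaussians standing in for a non-Gaussian density, can be sketched generically. The component placement below is an arbitrary illustration, not the paper's error-minimizing design:

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, mus, sigmas):
    """Weighted sum of Gaussians: the density representation a GSF
    maintains, one EKF per component in the filter bank."""
    return sum(w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))

# approximate a broad non-Gaussian range prior on [1, 10] with K equally
# weighted components; the spacing-versus-width trade-off controls the
# representation error
K = 5
mus = [1 + (i + 0.5) * (10 - 1) / K for i in range(K)]
sigmas = [(10 - 1) / (2 * K)] * K
weights = [1.0 / K] * K

# the mixture still integrates to ~1 (Riemann-sum check)
area = sum(mixture_pdf(i * 0.01, weights, mus, sigmas) * 0.01
           for i in range(-500, 2000))
print(round(area, 3))
```

In a full bearing-only SLAM filter, each component would seed one EKF hypothesis about the landmark range, and pruning (here, the SPRT in the paper) would discard components whose weights collapse.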

Uncertainty Estimation of AR Model Parameters Using a Bayesian technique (Bayesian 기법을 활용한 AR Model 매개변수의 불확실성 추정)

  • Park, Chan-Young;Park, Jong-Hyeon;Park, Min-Woo;Kwon, Hyun-Han
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2016.05a
    • /
    • pp.280-280
    • /
    • 2016
  • The autoregressive (AR) model is widely used to forecast future values of a series from its history. An AR model expresses the current value of a variable as a function of its past values, and parameter estimation is an essential step when using such time-series models. Common estimation methods include stochastic approximation, the method of least squares, the autocorrelation method, and the method of maximum likelihood. Maximum likelihood, the method most widely used for AR models, is regarded as the most efficient when the sample size is sufficiently large, but the numerical solution process is often complicated and a solution may not be obtainable; with small samples it generally yields poorly matching results. Because Korean rainfall and streamflow records are often short, parameter estimates obtained by maximum likelihood carry inherent uncertainty, yet there are limits to presenting that uncertainty quantitatively. In this study, a Bayesian technique is applied to AR model parameter estimation to provide the posterior distribution of the parameters, expressing the parameter uncertainty interval quantitatively and thereby yielding more reliable forecasts from the time-series analysis.
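The Bayesian posterior idea can be sketched for the simplest case, an AR(1) coefficient with known noise variance and a flat prior, using random-walk Metropolis sampling. All settings below are illustrative assumptions, not the authors' setup:

```python
import math
import random

def ar1_log_likelihood(phi, x, sigma=1.0):
    """Conditional log-likelihood of AR(1): x[t] = phi * x[t-1] + e[t]."""
    ll = 0.0
    for t in range(1, len(x)):
        r = x[t] - phi * x[t - 1]
        ll += -0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return ll

def metropolis_ar1(x, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis over phi with a flat prior; the second half
    of the chain is returned as posterior samples."""
    rng = random.Random(seed)
    phi, ll = 0.0, ar1_log_likelihood(0.0, x)
    chain = []
    for i in range(n_iter):
        prop = phi + rng.gauss(0.0, step)
        ll_prop = ar1_log_likelihood(prop, x)
        if math.log(rng.random()) < ll_prop - ll:   # accept/reject
            phi, ll = prop, ll_prop
        if i >= n_iter // 2:
            chain.append(phi)
    return chain

# synthetic series with true phi = 0.6
rng = random.Random(42)
x = [0.0]
for _ in range(300):
    x.append(0.6 * x[-1] + rng.gauss(0.0, 1.0))

chain = metropolis_ar1(x)
mean_phi = sum(chain) / len(chain)
s = sorted(chain)
lo, hi = s[int(0.025 * len(s))], s[int(0.975 * len(s))]
print(round(mean_phi, 2), round(lo, 2), round(hi, 2))
```

The interval (lo, hi) is the 95% credible interval, the quantitative uncertainty statement that a point estimate such as maximum likelihood does not provide on its own.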


High rate diffusion-scale approximation for counters with extendable dead time

  • Dubi, Chen;Atar, Rami
    • Nuclear Engineering and Technology
    • /
    • v.51 no.6
    • /
    • pp.1616-1625
    • /
    • 2019
  • Measuring the occurrence times of random events, aimed at determining the statistical properties of the governing stochastic process, is a basic topic in science and engineering and has been the subject of numerous mathematical modeling approaches. Often, true statistical properties deviate from measured ones due to the so-called dead time phenomenon, where for a certain period following a detection the detection system is not operational. Understanding the dead time effect is especially important in radiation measurements, which are often characterized by high count rates and a non-reducible detector dead time (originating in the physics of particle detection). The effect of dead time can be interpreted as a suitably rarefied subsequence of the original time sequence. This paper provides a limit theorem for a high-rate (diffusion-scale) counter with extendable (Type II) dead time, where the underlying counting process is a renewal process with finite second moment for the inter-event distribution. The results are very general, in the sense that they allow a general inter-arrival time and a random dead time with general distribution. Following the theoretical results, we demonstrate their applicability in three applications: serially connected components, multiplicity counting, and measurements of aerosol spatial distribution.
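The extendable (Type II, paralyzable) dead time mechanism can be simulated directly for the special case of Poisson arrivals, where the classical count-rate formula m = lam * exp(-lam * tau) is known. This toy check is far simpler than the paper's diffusion-scale renewal setting:

```python
import math
import random

def paralyzable_counts(arrivals, tau):
    """Type II (extendable) dead time: every arrival, recorded or not,
    restarts the dead period, so an arrival is recorded only if it comes
    at least tau after the previous arrival."""
    counted, last = 0, -float("inf")
    for t in arrivals:
        if t - last >= tau:
            counted += 1
        last = t                 # lost events still extend the dead time
    return counted

rng = random.Random(7)
lam, tau, horizon = 50.0, 0.01, 200.0      # high rate: lam * tau = 0.5
t, arrivals = 0.0, []
while t < horizon:
    t += rng.expovariate(lam)
    arrivals.append(t)

measured = paralyzable_counts(arrivals, tau) / horizon
expected = lam * math.exp(-lam * tau)      # classical Poisson result
print(round(measured, 1), round(expected, 1))
```

The key line is `last = t` unconditionally: in the extendable model even lost events prolong the dead period, which is what distinguishes Type II from non-extendable (Type I) dead time.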

Monte Carlo burnup and its uncertainty propagation analyses for VERA depletion benchmarks by McCARD

  • Park, Ho Jin;Lee, Dong Hyuk;Jeon, Byoung Kyu;Shim, Hyung Jin
    • Nuclear Engineering and Technology
    • /
    • v.50 no.7
    • /
    • pp.1043-1050
    • /
    • 2018
  • For an efficient Monte Carlo (MC) burnup analysis, an accurate high-order depletion scheme that accounts for the nonlinear flux variation within a coarse burnup-step interval is crucial, along with an accurate depletion equation solver. In the Seoul National University MC code, McCARD, the high-order depletion schemes of the quadratic depletion method (QDM) and the linear extrapolation/quadratic interpolation (LEQI) method, and a depletion equation solver based on the Chebyshev rational approximation method (CRAM), have been newly implemented in addition to the existing constant extrapolation/backward extrapolation (CEBE) method using the matrix exponential method (MEM) solver with substeps. In this paper, the quadratic extrapolation/quadratic interpolation (QEQI) method is proposed as a new high-order depletion scheme. To examine the effectiveness of the newly implemented depletion modules in McCARD, four problems in the VERA depletion benchmarks are solved by CEBE/MEM, CEBE/CRAM, LEQI/MEM, QEQI/MEM, and QDM for gadolinium isotopes. The comparisons show that QEQI/MEM predicts $k_{\infty}$ most accurately among the test cases. In addition, statistical uncertainty propagation analyses for a VERA pin cell problem are conducted by the sensitivity-and-uncertainty and stochastic sampling methods.
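A depletion step amounts to applying a matrix exponential to a nuclide vector. As a minimal stand-in for the MEM/CRAM solvers mentioned above (not McCARD's implementation), a two-nuclide chain can be solved by eigendecomposition and checked against the analytic Bateman solution:

```python
import numpy as np

def expm_via_eig(A, t):
    """exp(A*t) via eigendecomposition; valid when A is diagonalizable,
    as it is for this simple chain with distinct decay constants."""
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

# two-nuclide chain N1 -> N2 -> out, with decay constants l1, l2
l1, l2 = 0.3, 0.1
A = np.array([[-l1, 0.0],
              [ l1, -l2]])
N0 = np.array([1.0, 0.0])
t = 5.0
N = expm_via_eig(A, t) @ N0

# analytic Bateman solution for the same chain
N1 = np.exp(-l1 * t)
N2 = l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
print(np.round(N, 6), round(float(N1), 6), round(float(N2), 6))
```

Production solvers avoid raw eigendecomposition because real burnup matrices are huge and stiff; CRAM replaces the exponential with a rational approximation that is accurate precisely for the strongly negative eigenvalues such matrices have.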

Target strength of Antarctic krill and ice krill using the SDWBA model (SDWBA 모델을 이용한 남극 크릴과 아이스 크릴의 반사강도 연구)

  • Wuju, SON;Hyoung Sul, LA;Wooseok, OH;Jongmin, JOO
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.58 no.4
    • /
    • pp.352-358
    • /
    • 2022
  • We explored the frequency response of krill target strength (TS) for Antarctic krill (Euphausia superba) and ice krill (Euphausia crystallorophias) using the stochastic distorted-wave Born approximation (SDWBA) model. The results showed that the orientation distribution and the fatness factor can significantly affect the frequency response of TS. Krill TS clearly depends on these acoustic properties, which can affect biomass estimates for the two krill species. The results provide insight into the importance of understanding TS variation for estimating Antarctic krill and ice krill biomass, and their ecology in relation to environmental features of the Southern Ocean.

An Improved Reliability-Based Design Optimization using Moving Least Squares Approximation (이동최소자승근사법을 이용한 개선된 신뢰도 기반 최적설계)

  • Kang, Soo-Chang;Koh, Hyun-Moo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.1A
    • /
    • pp.45-52
    • /
    • 2009
  • In conventional structural design, deterministic optimization that satisfies codified constraints is performed to ensure safety and maximize economic efficiency. However, uncertainties are inevitable due to the stochastic nature of structural materials and applied loads, so deterministic optimization that ignores them can lead to unreliable designs. Recently, there has been much research on reliability-based design optimization (RBDO), which takes both reliability and optimization into account. RBDO involves the evaluation of probabilistic constraints, which can be estimated using the reliability index approach (RIA) or the performance measure approach (PMA). PMA is generally known to be more stable and efficient than RIA. Despite significant advances in PMA, RBDO still requires large computation times for large-scale applications. In this paper, a new RBDO method is presented to achieve a more stable and efficient algorithm. The idea is to integrate a response surface method (RSM) with PMA, using the moving least squares (MLS) method to approximate the limit state equation. Through a mathematical example and the ten-bar truss problem, the proposed method shows better convergence and efficiency than other approaches.
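The moving least squares idea, a local weighted least-squares fit evaluated at each query point, can be sketched in one dimension with a linear basis and Gaussian weights. The test function and bandwidth below are illustrative assumptions, not the paper's limit state:

```python
import math

def mls_approximate(x_eval, xs, ys, h=0.4):
    """1-D moving least squares with linear basis [1, x] and Gaussian
    weights: fit a weighted least-squares line centred on x_eval and
    evaluate it there."""
    w = [math.exp(-((x_eval - xi) / h) ** 2) for xi in xs]
    # weighted normal equations for the basis (1, x)
    s0 = sum(w)
    s1 = sum(wi * xi for wi, xi in zip(w, xs))
    s2 = sum(wi * xi * xi for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    a = (t0 * s2 - t1 * s1) / det          # local intercept
    b = (s0 * t1 - s1 * t0) / det          # local slope
    return a + b * x_eval

# sample a smooth limit-state-like function at 21 nodes on [0, 4]
xs = [i * 0.2 for i in range(21)]
ys = [math.sin(x) for x in xs]
err = max(abs(mls_approximate(x, xs, ys) - math.sin(x))
          for x in [0.5, 1.0, 1.7, 2.3, 3.1])
print(round(err, 4))
```

Because the weights move with the evaluation point, MLS gives a smooth global surrogate from purely local fits, which is what makes it attractive as a response surface inside a PMA loop.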

A Study on the Nonlinear Deterministic Characteristics of Stock Returns (주식 수익률의 비선형 결정론적 특성에 관한 연구)

  • Chang, Kyung-Chun;Kim, Hyun-Seok
    • The Korean Journal of Financial Management
    • /
    • v.21 no.1
    • /
    • pp.149-181
    • /
    • 2004
  • In this study we perform empirical tests on KOSPI returns to investigate the existence of nonlinear characteristics in the generating process of stock returns. The empirical tests fall into three categories: tests of nonlinear dependence, of a nonlinear stochastic process, and of nonlinear deterministic chaos. According to the nonlinearity analysis, stock returns are not normally distributed but leptokurtic, and appear to exhibit nonlinear dependence, and this nonlinear structure cannot be completely explained by nonlinear stochastic models of the ARCH type. A nonlinear deterministic chaos system is a feedback system in which past events influence the present; it has a fractal structure with self-similarity and sensitive dependence on initial conditions. To summarize the chaos analysis of KOSPI returns: the series is persistent, is not IID, has long memory, follows a biased random walk, and is estimated to have a fractal distribution. The correlation dimension, as an approximation of the fractal dimension, converges stably to between 3 and 4, and the maximum Lyapunov exponent is positive. This suggests that a chaotic attractor and sensitive dependence on initial conditions exist in stock returns. These results fit the characteristics of a chaos system, so we conclude that the generating process of stock returns has a nonlinear deterministic structure and follows a chaotic process.
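The correlation dimension estimate mentioned above rests on the Grassberger-Procaccia correlation sum. A minimal sketch on a delay-embedded logistic-map series (a stand-in for returns data, with illustrative r values) looks like this:

```python
import math

def correlation_sum(points, r):
    """Grassberger-Procaccia correlation sum: the fraction of point
    pairs closer than r (maximum norm)."""
    n, count = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if max(abs(a - b) for a, b in zip(points[i], points[j])) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

# delay-embed a chaotic logistic-map series in two dimensions
x, series = 0.3, []
for _ in range(651):
    x = 3.97 * x * (1.0 - x)
    series.append(x)
series = series[50:]                       # drop the transient
pts = [(series[i], series[i + 1]) for i in range(600)]

# slope of log C(r) versus log r estimates the correlation dimension;
# the one-dimensional logistic attractor should give a slope near 1
r1, r2 = 0.05, 0.1
d = (math.log(correlation_sum(pts, r2)) - math.log(correlation_sum(pts, r1))) \
    / math.log(r2 / r1)
print(round(d, 2))
```

A stochastic series of the same length would instead fill the embedding space, so the estimated slope would keep rising with the embedding dimension; saturation of the slope is the signature the paper relies on.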


Chaotic Disaggregation of Daily Rainfall Time Series (카오스를 이용한 일 강우자료의 시간적 분해)

  • Kyoung, Min-Soo;Sivakumar, Bellie;Kim, Hung-Soo;Kim, Byung-Sik
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.9
    • /
    • pp.959-967
    • /
    • 2008
  • Disaggregation techniques are widely used to transform observed daily rainfall values into hourly ones, which serve as important inputs for flood forecasting. However, an important limitation of most existing disaggregation techniques is that they treat the rainfall process as a realization of a stochastic process, raising questions about the lack of connection between the structure of the models and the underlying physics of the rainfall process. The present study introduces a nonlinear deterministic (specifically, chaotic) framework to study the dynamic characteristics of rainfall distributions across different temporal scales (i.e., the weights between scales), and thus the possibility of rainfall disaggregation. Rainfall data from the Seoul station (recorded by the Korea Meteorological Administration) are considered, and only weights between successively doubled resolutions (24-hr to 12-hr, 12-hr to 6-hr, 6-hr to 3-hr) are analyzed. The correlation dimension method is employed to investigate the presence of chaotic behavior in the time series of weights, and a local approximation technique is employed for rainfall disaggregation. The results indicate chaotic behavior in the dynamics of the weights between the successively doubled scales studied. The modeled (disaggregated) rainfall values agree well with the observed ones overall (e.g., high correlation coefficient and low mean square error). While the general trend (rainfall amount and time of occurrence) is clearly captured, the maximum values are underestimated.
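A zeroth-order local approximation predictor, which averages the successors of the nearest delay vectors, can be sketched on a synthetic chaotic series. The logistic map stands in for the weight series here and all parameters are illustrative:

```python
def local_predict(history, query, k=5, m=2):
    """Zeroth-order local approximation: average the observed successors
    of the k delay vectors in `history` closest to `query`."""
    cands = [(history[i:i + m], history[i + m]) for i in range(len(history) - m)]
    cands.sort(key=lambda c: sum((a - b) ** 2 for a, b in zip(c[0], query)))
    return sum(succ for _, succ in cands[:k]) / k

# chaotic logistic-map series as a stand-in for the weight series
x, series = 0.4, []
for _ in range(600):
    x = 3.9 * x * (1.0 - x)
    series.append(x)

# predict the last ten values from the preceding record
history = series[:590]
errors = [abs(local_predict(history, series[i - 2:i]) - series[i])
          for i in range(590, 600)]
err_mean = sum(errors) / len(errors)
print(round(err_mean, 3))
```

Because nearby states of a low-dimensional deterministic system evolve similarly, the neighbors' successors are good one-step forecasts; the same logic, applied to the weight series, is what allows chaotic disaggregation without a stochastic model.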