• Title/Summary/Keyword: Stochastic Optimization Algorithm


Application of Stochastic Optimization Method to (s, S) Inventory System ((s, S) 재고관리 시스템에 대한 확률최적화 기법의 응용)

  • Chimyung Kwon
    • Journal of the Korea Society for Simulation
    • /
    • v.12 no.2
    • /
    • pp.1-11
    • /
    • 2003
  • In this paper, we focus on finding an optimal policy for a class of (s, S) inventory control systems. To this end, we use perturbation analysis and apply a stochastic optimization algorithm to minimize the average cost over a period. We obtain the gradients of the objective function with respect to the ordering amount S and the reorder point s via a combined perturbation method. This method uses infinitesimal perturbation analysis and smoothed perturbation analysis alternately, according to whether an ordering-event change occurs. Our simulation results indicate that the optimal estimates of s and S obtained from the stochastic optimization algorithm are quite accurate. We consider that this may be due to the low-noise gradient estimates obtained from the regenerative system simulation and their effect on the search procedure when the stochastic optimization algorithm is applied. Directions for future study stemming from this research pertain to extensions to more general inventory systems with regard to demand distribution, backlogging policy, lead time, and review period. Another direction involves the efficiency of the stochastic optimization algorithm with respect to the search procedure for an improving point of (s, S).

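A rough sense of this kind of simulation-based search can be given with a short Python sketch. It replaces the paper's IPA/SPA gradient estimator with a simple simultaneous-perturbation (SPSA-style) finite-difference estimate, and the cost model below (periodic review, Poisson demand, zero lead time, fixed ordering cost K, holding cost h, backorder cost b) is an assumption for illustration only, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_cost(s, S, periods=2000, lam=5.0, K=50.0, h=1.0, b=10.0, seed=None):
    """Simulated average cost per period of an (s, S) policy.
    Assumed model: periodic review, Poisson(lam) demand, zero lead time,
    fixed ordering cost K, holding cost h, backorder cost b per unit."""
    r = np.random.default_rng(seed)
    inv, total = S, 0.0
    for _ in range(periods):
        if inv <= s:                          # review point: order up to S
            total += K
            inv = S
        inv -= r.poisson(lam)                 # demand is realized
        total += h * max(inv, 0.0) + b * max(-inv, 0.0)
    return total / periods

# SPSA-style stochastic gradient search over (s, S)
s, S = 2.0, 30.0
for k in range(1, 201):
    a_k, c_k = 1.0 / k**0.602, 2.0 / k**0.101         # standard SPSA gain sequences
    delta = rng.choice([-1.0, 1.0], size=2)            # random perturbation direction
    plus  = avg_cost(s + c_k * delta[0], S + c_k * delta[1], seed=k)
    minus = avg_cost(s - c_k * delta[0], S - c_k * delta[1], seed=k)  # common seed
    g = (plus - minus) / (2.0 * c_k * delta)           # gradient estimate
    s, S = s - a_k * g[0], S - a_k * g[1]
    S = max(S, s + 1.0)                                # keep the order-up-to level above s

print(f"estimated reorder point s = {s:.2f}, order-up-to level S = {S:.2f}")
```

In the paper, the perturbation-analysis estimators would replace the two extra simulation runs used here for each gradient estimate.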

Nonlinear optimization algorithm using monotonically increasing quantization resolution

  • Jinwuk Seok;Jeong-Si Kim
    • ETRI Journal
    • /
    • v.45 no.1
    • /
    • pp.119-130
    • /
    • 2023
  • We propose a quantized gradient search algorithm that can achieve global optimization by monotonically reducing the quantization step over time when quantization composed of integer or fixed-point fractional values is applied to an optimization algorithm. According to the white noise hypothesis, when the quantization step is sufficiently small and the quantization is well defined, the round-off error caused by quantization can be regarded as a random variable with an independent and identical distribution. Thus, we rewrite the search equation based on gradient descent as a stochastic differential equation and obtain the monotonically decreasing rate of the quantization step that enables global optimization, through a stochastic analysis of the objective function. Consequently, when the search equation is quantized by a monotonically decreasing quantization step, which suitably reduces the round-off error, we can derive a search algorithm evolving from the optimization algorithm. Numerical simulations indicate that, owing to the property of quantization-based global optimization, the proposed algorithm shows better optimization performance over the search space at each iteration than the conventional algorithm, with a higher success rate and fewer iterations.

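A minimal sketch of the mechanism described above: ordinary gradient descent whose iterate is rounded onto a quantization grid whose resolution shrinks monotonically with the iteration count, so that early round-off error acts like exploratory noise and late iterations behave like plain descent. The test function (Rastrigin), learning rate, and schedule constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rastrigin(x):
    """Multi-modal test function with many local minima; global minimum 0 at the origin."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def grad_rastrigin(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def quantize(x, q):
    """Round each coordinate onto a fixed-point grid of resolution q."""
    return np.round(x / q) * x / abs(x / q) if False else np.round(x / q) * q

x = np.array([3.3, -2.7])              # start far from the global optimum
lr = 0.01
for t in range(1, 20001):
    q_t = 1.0 / (1.0 + 0.01 * t)       # monotonically decreasing quantization step
    x = quantize(x - lr * grad_rastrigin(x), q_t)

print("solution:", x, "objective value:", rastrigin(x))
```
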
Design Centering by Genetic Algorithm and Coarse Simulation

  • Jinkoo Lee
    • Korean Journal of Computational Design and Engineering
    • /
    • v.2 no.4
    • /
    • pp.215-221
    • /
    • 1997
  • A new approach to solving the design centering problem is presented. Like most stochastic optimization problems, optimal design centering problems have intrinsic difficulties in the multivariate integration of probability density functions. In order to avoid those difficulties, a genetic algorithm and a very coarse Monte Carlo simulation are used in this research. The new algorithm performs robustly while producing improved yields. This result implies that combining robust optimization methods with approximated simulation schemes is a promising way to handle many stochastic optimization problems that are unsuitable for mathematical programming.

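The combination described above can be pictured with the following sketch: the yield of a candidate design center is estimated from a very coarse Monte Carlo sample, and a small genetic algorithm searches for the center that maximizes that noisy estimate. The acceptance region, tolerance spread, and GA settings below are hypothetical and only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def feasible(x):
    """Hypothetical acceptance region for a 2-parameter design (an assumption)."""
    return (x[..., 0]**2 + x[..., 1]**2 <= 4.0) & (x[..., 0] + x[..., 1] >= -1.0)

def yield_estimate(center, sigma=0.5, n=30):
    """Coarse Monte Carlo yield: fraction of perturbed units that remain feasible."""
    samples = center + sigma * rng.standard_normal((n, 2))
    return feasible(samples).mean()

def genetic_design_centering(pop_size=20, generations=40, bounds=(-3.0, 3.0)):
    pop = rng.uniform(*bounds, size=(pop_size, 2))            # initial population
    for _ in range(generations):
        fitness = np.array([yield_estimate(ind) for ind in pop])
        # tournament selection between random pairs
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # arithmetic crossover followed by Gaussian mutation
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += 0.1 * rng.standard_normal(children.shape)
        pop = np.clip(children, *bounds)
    fitness = np.array([yield_estimate(ind, n=500) for ind in pop])  # finer final check
    return pop[fitness.argmax()], fitness.max()

center, est_yield = genetic_design_centering()
print("design center:", center, "estimated yield:", est_yield)
```

The final, finer Monte Carlo pass is only there to report a less noisy yield for the returned center; the search itself uses only the coarse estimates.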

Large-Scale Phase Retrieval via Stochastic Reweighted Amplitude Flow

  • Xiao, Zhuolei;Zhang, Yerong;Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.11
    • /
    • pp.4355-4371
    • /
    • 2020
  • Phase retrieval, recovering a signal from phaseless measurements, is generally considered to be an NP-hard problem. This paper adopts an amplitude-based nonconvex optimization cost function to develop a new stochastic gradient algorithm, named stochastic reweighted phase retrieval (SRPR). SRPR is a stochastic gradient iteration algorithm that runs in two stages: first, a truncated-sample stochastic variance reduction algorithm is used to obtain an initial estimate; the second stage is the gradient refinement stage, which continuously updates the estimate with an amplitude-based stochastic reweighted gradient. Because of the stochastic method, each iteration in both stages of SRPR involves only one equation. Therefore, SRPR is simple, scalable, and fast. Compared with state-of-the-art phase retrieval algorithms, simulation results show that SRPR has a faster convergence speed and requires fewer magnitude-only measurements to reconstruct the signal, in both the real- and complex-valued cases.

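As a rough illustration of the amplitude-flow family this work builds on (not the SRPR algorithm itself), the sketch below uses a plain spectral initialization followed by a stochastic gradient iteration on the amplitude-based loss, processing one randomly chosen measurement per step. The truncation, variance reduction, and reweighting of SRPR are omitted, and the problem sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 64, 640                         # signal dimension and number of measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))        # real Gaussian sensing vectors
y = np.abs(A @ x_true)                 # magnitude-only measurements

# simple spectral initialization: leading eigenvector of (1/m) * sum_i y_i^2 a_i a_i^T
Y = (A.T * y**2) @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(np.mean(y**2))  # scale to the estimated signal norm

# stochastic gradient refinement of the amplitude loss (1/2) * sum_i (|a_i^T z| - y_i)^2
for t in range(100 * m):
    i = rng.integers(m)                            # one measurement per iteration
    ai, yi = A[i], y[i]
    inner = ai @ z
    grad_i = (inner - yi * np.sign(inner)) * ai    # gradient of the i-th amplitude term
    z -= grad_i / (ai @ ai)                        # normalized (Kaczmarz-style) step

# a real signal is recoverable only up to a global sign
err = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true)) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3e}")
```
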
A Development of SDS Algorithm for the Improvement of Convergence Simulation (실시간 계산에서 수령속도 개선을 위한 SDS 알고리즘의 개발)

  • Lee, Young-J.;Jang, Yong-H.;Lee, Kwon-S.
    • Proceedings of the KIEE Conference
    • /
    • 1997.07b
    • /
    • pp.699-701
    • /
    • 1997
  • The simulated annealing (SA) algorithm is a stochastic strategy for searching for the ground state and a powerful tool for optimization, based on the annealing process used for crystallization in physical systems. Its main disadvantage is the long convergence time. Therefore, this paper proposes a stochastic algorithm combined with a conventional deterministic optimization method to reduce the computation time, called the SDS (Stochastic-Deterministic-Stochastic) method.

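The stochastic-deterministic-stochastic structure can be sketched as three stages run in sequence: a broad simulated-annealing pass to locate a promising basin, a deterministic local descent (here a finite-difference gradient descent) to reach its bottom quickly, and a short low-temperature annealing pass to polish the result. The Ackley test function and all schedule constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def ackley(x):
    """Standard multi-modal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.mean(x**2)))
            - np.exp(np.mean(np.cos(2 * np.pi * x))) + 20 + np.e)

def anneal(x, T0, steps, scale):
    """Plain simulated annealing stage (stochastic)."""
    fx = ackley(x)
    for k in range(steps):
        T = T0 * 0.99**k
        cand = x + scale * rng.standard_normal(x.size)
        fc = ackley(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / T):
            x, fx = cand, fc
    return x

def descend(x, lr=0.05, steps=200, h=1e-6):
    """Deterministic stage: gradient descent with a finite-difference gradient."""
    for _ in range(steps):
        g = np.array([(ackley(x + h * e) - ackley(x - h * e)) / (2 * h)
                      for e in np.eye(x.size)])
        x = x - lr * g
    return x

x = rng.uniform(-20, 20, size=2)                 # rough starting point
x = anneal(x, T0=5.0, steps=2000, scale=1.0)     # S: explore broadly
x = descend(x)                                   # D: fast local convergence
x = anneal(x, T0=0.05, steps=500, scale=0.05)    # S: fine stochastic polishing
print("solution:", x, "objective value:", ackley(x))
```
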

Improved Automatic Lipreading by Stochastic Optimization of Hidden Markov Models (은닉 마르코프 모델의 확률적 최적화를 통한 자동 독순의 성능 향상)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.14B no.7
    • /
    • pp.523-530
    • /
    • 2007
  • This paper proposes a new stochastic optimization algorithm for hidden Markov models (HMMs) used as the recognizer in automatic lipreading. The proposed method combines a global stochastic optimization method, the simulated annealing technique, with a local optimization method, which produces fast convergence and good solution quality. We mathematically show that the proposed algorithm converges to the global optimum. Experimental results show that training HMMs with the proposed method yields better lipreading performance than conventional training methods based on local optimization.

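A toy version of the idea, using a 2-state discrete HMM whose log-likelihood is computed by the scaled forward algorithm: simulated annealing proposes global moves in an unconstrained parameterization of the HMM, and each proposal is touched up by a few greedy local tweaks standing in for the paper's local optimization step. The observation sequence and all constants below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax_rows(u):
    e = np.exp(u - u.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def hmm_loglik(theta, obs, k=2, n_symbols=3):
    """Log-likelihood of a discrete HMM via the scaled forward algorithm.
    theta is an unconstrained vector mapped to (A, B, pi) through row softmaxes."""
    A = softmax_rows(theta[:k * k].reshape(k, k))                        # transitions
    B = softmax_rows(theta[k * k:k * k + k * n_symbols].reshape(k, n_symbols))  # emissions
    pi = softmax_rows(theta[-k:])                                        # initial dist.
    alpha = pi * B[:, obs[0]]
    c = alpha.sum(); ll = np.log(c); alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum(); ll += np.log(c); alpha = alpha / c
    return ll

obs = rng.integers(0, 3, size=100)        # toy observation sequence (an assumption)
dim = 2 * 2 + 2 * 3 + 2                   # transition + emission + initial parameters
theta = rng.standard_normal(dim)
f_theta = hmm_loglik(theta, obs)
best_theta, best_ll = theta.copy(), f_theta

T = 1.0
for it in range(1500):
    cand = theta + 0.3 * rng.standard_normal(dim)        # global (annealing) move
    f_cand = hmm_loglik(cand, obs)
    for _ in range(3):                                   # cheap local refinement
        trial = cand.copy()
        trial[rng.integers(dim)] += 0.05 * rng.standard_normal()
        f_trial = hmm_loglik(trial, obs)
        if f_trial > f_cand:
            cand, f_cand = trial, f_trial
    if f_cand > f_theta or rng.random() < np.exp((f_cand - f_theta) / T):
        theta, f_theta = cand, f_cand                    # Metropolis acceptance
        if f_theta > best_ll:
            best_theta, best_ll = theta.copy(), f_theta
    T *= 0.998                                           # cooling schedule

print("best log-likelihood found:", best_ll)
```
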
A Study on the Stochastic Optimization of Binary-response Experimentation (이항 반응 실험의 확률적 전역최적화 기법연구)

  • Donghoon Lee;Kun-Chul Hwang;Sangil Lee;Won Young Yun
    • Journal of the Korea Society for Simulation
    • /
    • v.32 no.1
    • /
    • pp.23-34
    • /
    • 2023
  • The purpose of this paper is to review global stochastic optimization algorithms (GSOAs) for the case where binary-response experimentation is used, and to compare their performances. GSOAs utilize the estimator of the probability of success $\hat{p}$ instead of the population probability of success p, since p is unknown and is only known through its estimator, which has stochastic characteristics. The hill climbing algorithm, simple random search, random search with random restart, random optimization, simulated annealing, and the particle swarm algorithm as a population-based algorithm are considered as global stochastic optimization algorithms. For the purpose of comparing the algorithms, two types of test functions (one simple and uni-modal, the other complex and multi-modal) are proposed, and a Monte Carlo simulation study is carried out to measure the performances of the algorithms. All algorithms show similar performances for the simple test function. Less greedy algorithms, such as random optimization with random restart, simulated annealing, and the population-based particle swarm optimization (PSO), show much better performances for the complex multi-modal function.

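The setting can be made concrete with a small sketch: the success probability p at a design point is observable only through Bernoulli trials, so the search must work with the estimator $\hat{p}$ computed from n trials per evaluation. Below, a simple stochastic hill-climbing routine (one of the less sophisticated GSOAs compared in the paper) maximizes $\hat{p}$ on a made-up logistic response surface.

```python
import numpy as np

rng = np.random.default_rng(5)

def true_p(x):
    """Hypothetical success-probability surface (unknown to the optimizer)."""
    x = np.asarray(x, dtype=float)
    return 1.0 / (1.0 + np.exp(4.0 * (np.sum((x - 1.5)**2) - 1.0)))

def p_hat(x, n=50):
    """Estimator of p from n binary-response trials at design point x."""
    return rng.binomial(n, true_p(x)) / n

def hill_climb(x0, iters=300, step=0.3, n=50):
    x = np.asarray(x0, dtype=float)
    fx = p_hat(x, n)
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        fc = p_hat(cand, n)
        if fc >= fx:                 # accept if the *estimated* success rate improves
            x, fx = cand, fc
    return x, fx

x_best, p_best = hill_climb([0.0, 0.0])
print("design point:", x_best, "estimated success probability:", p_best)
print("true success probability there:", true_p(x_best))
```
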
Tolerance Optimization with Markov Chain Process (마르코프 과정을 이용한 공차 최적화)

  • Lee, Jin-Koo
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.13 no.2
    • /
    • pp.81-87
    • /
    • 2004
  • This paper deals with a new approach to tolerance optimization problems. Optimal tolerance allotment problems can be formulated as stochastic optimization problems. Most schemes for solving stochastic optimization problems have been found to exhibit difficulties in the multivariate integration of the probability density function, and as a typical example of stochastic optimization, the optimal tolerance allotment problem has the same difficulties. In this stochastic model, the manufacturing system is represented by a Gauss-Markov stochastic process, and the manufacturing unit availability is characterized for realistic optimization modeling. The new algorithm performed robustly for a large deviation approximation, and a significant reduction in computation time was observed compared with the results obtained in previous studies.

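The abstract is terse about the algorithm itself, so the following is only a loose toy reformulation of a stochastic tolerance-allotment problem in its spirit: deviations propagate through a chain of stations as a Gauss-Markov (AR(1)) process, yield is estimated by Monte Carlo, and a simple stochastic search minimizes a tolerance-tightening cost under a yield constraint. The cost model, correlation, and specification limit are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def yield_mc(tols, rho=0.8, spec=1.0, n=2000):
    """Monte Carlo yield of a chain of stations whose deviations follow an AR(1)
    (Gauss-Markov) process; station i adds noise with sigma = tol_i / 3."""
    d = np.zeros(n)
    for t in tols:
        d = rho * d + (t / 3.0) * rng.standard_normal(n)
    return np.mean(np.abs(d) <= spec)

def cost(tols):
    """Manufacturing cost grows as tolerances are tightened (toy model)."""
    return np.sum(1.0 / np.asarray(tols))

def optimize(n_stations=4, target_yield=0.99, iters=2000):
    tols = np.full(n_stations, 0.3)
    best, best_obj = tols.copy(), np.inf
    for _ in range(iters):
        cand = np.clip(tols + 0.05 * rng.standard_normal(n_stations), 0.05, 1.0)
        penalty = 1e4 * max(0.0, target_yield - yield_mc(cand))   # yield constraint
        obj = cost(cand) + penalty
        if obj < best_obj:                                        # greedy stochastic search
            tols, best, best_obj = cand, cand.copy(), obj
    return best, yield_mc(best, n=20000)

tols, y = optimize()
print("tolerances:", np.round(tols, 3), "estimated yield:", round(float(y), 4))
```
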
Optimization of Queueing Network by Perturbation Analysis (퍼터베이션 분석을 이용한 대기행렬 네트워크의 최적화)

  • Chimyung Kwon
    • Journal of the Korea Society for Simulation
    • /
    • v.9 no.2
    • /
    • pp.89-102
    • /
    • 2000
  • In this paper, we consider an optimal allocation of constant service efforts in a queueing network to maximize the system throughput. For this purpose, using perturbation analysis, we apply a stochastic optimization algorithm to two types of queueing systems. Our simulation results indicate that the estimates obtained from the stochastic optimization algorithm for a two-stage tandem queueing network are very accurate, while those for a closed-loop manufacturing system deviate slightly from the known optimal allocation. We find that as the simulation time used to obtain a new gradient (of the performance measure with respect to the decision variables) by the perturbation algorithm increases, the estimates tend to become more stable. Thus, we consider it more desirable to obtain a more accurate sensitivity of the performance measure by lengthening the simulation time rather than taking more search steps with a less accurate sensitivity. We realize that more experiments on various types of systems are needed to identify such a relationship with regard to the stopping rule, the size of the moving step, and the updating period for the sensitivity.

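In the spirit of this study, the sketch below allocates a fixed service-effort budget between the two stations of a closed cyclic network so as to maximize simulated throughput. It replaces the perturbation-analysis gradient with a two-run finite-difference estimate (paired runs share a seed for common random numbers), and the network, budget, and gain sequences are illustrative assumptions.

```python
import numpy as np

def throughput(mu1, mu2, n_customers=5, horizon=2000.0, seed=None):
    """Simulate a closed cyclic network of two exponential servers and return the
    long-run throughput (service completions at station 2 per unit time)."""
    r = np.random.default_rng(seed)
    n1, t, done = n_customers, 0.0, 0
    while t < horizon:
        r1 = mu1 if n1 > 0 else 0.0                 # station 1 busy?
        r2 = mu2 if n1 < n_customers else 0.0       # station 2 busy?
        total = r1 + r2
        t += r.exponential(1.0 / total)
        if r.random() < r1 / total:
            n1 -= 1                                 # completion at station 1
        else:
            n1 += 1                                 # completion at station 2
            done += 1
    return done / t

# stochastic approximation: split a fixed service budget C between the two stations
C, mu1 = 4.0, 1.0
for k in range(1, 101):
    a_k, c_k = 0.5 / k, 0.2 / k**(1.0 / 3.0)
    g = (throughput(mu1 + c_k, C - mu1 - c_k, seed=k)
         - throughput(mu1 - c_k, C - mu1 + c_k, seed=k)) / (2.0 * c_k)
    mu1 = float(np.clip(mu1 + a_k * g, 0.2, C - 0.2))   # ascent step on throughput

print(f"allocated service rates: mu1 = {mu1:.2f}, mu2 = {C - mu1:.2f}")
```
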

Optimum Design of the Brushless Motor Considering Parameter Tolerance (설계변수 공차를 고려한 브러시리스 모터 출력밀도 최적설계)

  • Son, Byoung-Ook;Lee, Ju
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.9
    • /
    • pp.1600-1604
    • /
    • 2010
  • This paper presents an optimum design of a brushless motor to maximize the output power per weight, considering the design parameter tolerance. The optimization is carried out by commercial software that adopts the scatter-search algorithm, and the characteristic analysis is conducted by FEM. The stochastic optimum design results are compared with those of the deterministic optimization method. We can verify that the results of the stochastic optimization are superior to those of the deterministic optimization.
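
Since the FEM analysis cannot be reproduced here, the contrast between the deterministic and the tolerance-aware (stochastic) optimum can only be hinted at with a made-up one-variable surrogate for output power per weight; a plain random search stands in for the commercial scatter-search routine. Everything below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def output_density(x):
    """Hypothetical surrogate for output power per weight versus one design variable
    (a stand-in for the FEM analysis, purely for illustration)."""
    return np.exp(-((x - 2.0) / 0.2)**2) + 0.6 * np.exp(-((x - 3.5) / 1.2)**2)

def expected_density(x, tol=0.5, n=500):
    """Tolerance-aware objective: average performance over the tolerance band."""
    return np.mean(output_density(x + rng.uniform(-tol, tol, size=n)))

def random_search(obj, lo=0.0, hi=5.0, iters=3000):
    xs = rng.uniform(lo, hi, size=iters)
    vals = np.array([obj(x) for x in xs])
    return xs[vals.argmax()]

x_det  = random_search(output_density)      # deterministic optimum (nominal value only)
x_stoc = random_search(expected_density)    # stochastic optimum (tolerance-aware)
print(f"deterministic optimum ~ {x_det:.2f}, tolerance-aware optimum ~ {x_stoc:.2f}")
```

With these invented numbers the tolerance-aware search typically prefers the broader, flatter peak over the tall narrow one, which is the qualitative effect the paper's comparison between stochastic and deterministic optimization is about.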