• Title/Summary/Keyword: optimal convergence


Optimal Convergence Rate of Empirical Bayes Tests for Uniform Distributions

  • Liang, Ta-Chen
    • Journal of the Korean Statistical Society
    • /
    • v.31 no.1
    • /
    • pp.33-43
    • /
    • 2002
  • The empirical Bayes linear loss two-action problem is studied. An empirical Bayes test $\delta_n^*$ is proposed. It is shown that $\delta_n^*$ is asymptotically optimal in the sense that its regret converges to zero at a rate $n^{-1}$ over a class of priors, and that $n^{-1}$ is the optimal rate of convergence of empirical Bayes tests.
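In the standard empirical Bayes formulation (standard notation, not quoted from the paper), the regret compares the risk of the data-driven test with the minimum Bayes risk attainable when the prior $G$ is known:

```latex
% r(G, \delta) : Bayes risk of a test \delta under prior G
% \delta_G     : the Bayes test, minimizing r(G, \cdot)
% Regret of the empirical Bayes test \delta_n^* built from n past observations:
R_n(G) \;=\; \mathbb{E}\, r(G, \delta_n^*) \;-\; r(G, \delta_G) \;\ge\; 0,
% asymptotic optimality at the rate n^{-1}, uniformly over a class \mathcal{G} of priors:
\sup_{G \in \mathcal{G}} R_n(G) \;=\; O(n^{-1}).
```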

OPTIMAL PORTFOLIO CHOICE IN A BINOMIAL-TREE AND ITS CONVERGENCE

  • Jeong, Seungwon;Ahn, Sang Jin;Koo, Hyeng Keun;Ahn, Seryoong
    • East Asian mathematical journal
    • /
    • v.38 no.3
    • /
    • pp.277-292
    • /
    • 2022
  • This study investigates the convergence of the optimal consumption and investment policies in a binomial-tree model to those in the continuous-time model of Merton (1969). We provide the convergence in explicit form and show that the convergence rate is of order ∆t, which is the length of time between consecutive time points. We also show by numerical solutions with realistic parameter values that the optimal policies in the binomial-tree model do not differ significantly from those in the continuous-time model for long-term portfolio management with a horizon over 30 years if rebalancing is done every 6 months.
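The convergence described above can be illustrated in a stripped-down one-period CRRA setting (a sketch with assumed parameter values, not the paper's full model): the optimal risky fraction for a single symmetric binomial step approaches the Merton fraction (μ − r)/(γσ²) as ∆t shrinks.

```python
import math

def binomial_optimal_fraction(mu, r, sigma, gamma, dt):
    """Optimal risky-asset fraction for one CRRA(gamma) binomial step.

    Risky gross return: exp((mu - sigma^2/2)*dt +/- sigma*sqrt(dt)), each with
    probability 1/2; riskless gross return exp(r*dt).  The first-order
    condition a*W_u^-gamma + b*W_d^-gamma = 0 is solved in closed form.
    """
    R = math.exp(r * dt)
    up = math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt))
    dn = math.exp((mu - 0.5 * sigma**2) * dt - sigma * math.sqrt(dt))
    a, b = up - R, dn - R              # excess returns (a > 0 > b)
    s = (-b / a) ** (1.0 / gamma)      # ratio W_d / W_u at the optimum
    return R * (s - 1.0) / (b - s * a)

mu, r, sigma, gamma = 0.08, 0.02, 0.20, 2.0
merton = (mu - r) / (gamma * sigma**2)          # continuous-time fraction, 0.75
for dt in (1.0, 0.25, 1.0 / 12.0, 1e-4):
    pi = binomial_optimal_fraction(mu, r, sigma, gamma, dt)
    print(dt, pi, abs(pi - merton))
```

The gap to the Merton fraction shrinks with ∆t, in line with the order-∆t convergence the paper proves for the full multi-period model.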

A Design of Adaptive Equalizer using the Walsh-Block Pulse Functions and the Optimal LMS Algorithms (윌쉬-블록펄스 함수와 최적 LMS알고리즌을 이용한 적응 등화기의 설계)

  • 안두수;김종부
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.41 no.8
    • /
    • pp.914-921
    • /
    • 1992
  • In this paper, we introduce a Walsh network and an LMS algorithm, and show how these can be realized as an adaptive equalizer. The Walsh network is built from a set of Walsh and Block pulse functions. In the LMS algorithm, the convergence factor is an important design parameter because it governs both stability and convergence speed. Conventional adaptation techniques use a fixed time-constant convergence factor chosen by trial and error. In this paper, we propose an optimal method for choosing the convergence factor. The proposed algorithm depends on the received signal and the output of the Walsh network in real time.
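The trade-off the authors describe — a fixed convergence factor tuned by trial and error versus one derived from the received signal — can be sketched with the textbook normalized LMS update (NLMS; a standard technique used here for illustration, not the paper's Walsh-network design), where the effective step is scaled by the instantaneous input power:

```python
import random

def nlms_identify(h_true, n_samples=2000, mu=0.5, eps=1e-6, seed=1):
    """Identify an unknown FIR channel h_true with normalized LMS.

    The effective step mu / (eps + ||x||^2) adapts to the received signal
    power, avoiding trial-and-error tuning of a fixed convergence factor.
    """
    rng = random.Random(seed)
    n_taps = len(h_true)
    w = [0.0] * n_taps
    x = [0.0] * n_taps                                # tap-delay line
    for _ in range(n_samples):
        x = [rng.gauss(0.0, 1.0)] + x[:-1]            # new received sample
        d = sum(h * xi for h, xi in zip(h_true, x))   # desired (channel) output
        y = sum(wi * xi for wi, xi in zip(w, x))      # adaptive filter output
        e = d - y
        power = sum(xi * xi for xi in x)
        g = mu / (eps + power)                        # signal-dependent step
        w = [wi + g * e * xi for wi, xi in zip(w, x)]
    return w

h = [0.5, -0.3, 0.1]   # hypothetical channel taps for the demo
w = nlms_identify(h)
print(w)
```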


ON A GENERAL CLASS OF OPTIMAL FOURTH-ORDER MULTIPLE-ROOT FINDERS

  • Kim, Young Ik
    • Journal of the Chungcheong Mathematical Society
    • /
    • v.26 no.3
    • /
    • pp.657-669
    • /
    • 2013
  • A general class of two-point optimal fourth-order methods is proposed for locating multiple roots of a nonlinear equation. We investigate convergence analysis and computational properties for the family. Special and simple cases are considered for real-life applications. Numerical experiments strongly verify the convergence behavior and the developed theory.

PID Type Iterative Learning Control with Optimal Gains

  • Madady, Ali
    • International Journal of Control, Automation, and Systems
    • /
    • v.6 no.2
    • /
    • pp.194-203
    • /
    • 2008
  • Iterative learning control (ILC) is a simple and effective method for the control of systems that perform the same task repetitively. The ILC algorithm uses the repetitiveness of the task to track the desired trajectory. In this paper, we propose a PID (proportional plus integral and derivative) type ILC update law for controlling discrete-time single-input single-output (SISO) linear time-invariant (LTI) systems performing repetitive tasks. In this approach, the input of the controlled system in the current cycle is modified by applying the PID strategy to the error between the system output and the desired trajectory in the previous iteration. The convergence of the presented scheme is analyzed and its convergence condition is obtained in terms of the PID coefficients. An optimal design method is proposed to determine the PID coefficients. It is also shown that under some given conditions, this optimal iterative learning controller can guarantee monotonic convergence. An illustrative example is given to demonstrate the effectiveness of the proposed technique.
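The update described above can be sketched for a toy first-order discrete-time plant (the plant and the PID gains below are assumptions chosen by hand for the demo, not the paper's optimal design):

```python
import math

def simulate(u, a=0.3, b=0.5, n=50):
    """First-order plant y(t+1) = a*y(t) + b*u(t), zero initial state."""
    y = [0.0] * (n + 1)
    for t in range(n):
        y[t + 1] = a * y[t] + b * u[t]
    return y

def pid_ilc(yd, kp=1.5, ki=0.02, kd=0.1, iters=30, n=50):
    """PID-type ILC: the next trial's input is the current input corrected by
    proportional, summed (integral), and differenced (derivative) errors."""
    u = [0.0] * n
    errs = []
    for _ in range(iters):
        y = simulate(u, n=n)
        e = [yd[t + 1] - y[t + 1] for t in range(n)]   # error on this trial
        errs.append(max(abs(v) for v in e))
        acc = 0.0
        for t in range(n):
            acc += e[t]
            de = e[t] - (e[t - 1] if t > 0 else 0.0)
            u[t] += kp * e[t] + ki * acc + kd * de     # PID-type update law
    return errs

n = 50
yd = [math.sin(2 * math.pi * t / n) for t in range(n + 1)]   # desired trajectory
errs = pid_ilc(yd)
print(errs[0], errs[-1])   # tracking error shrinks across iterations
```

With these gains the contraction factor per trial is roughly |1 − b(kp + ki + kd)| < 1, consistent with the kind of convergence condition on the PID coefficients the paper derives.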

CONVERGENCE OF THE NEWTON'S METHOD FOR AN OPTIMAL CONTROL PROBLEMS FOR NAVIER-STOKES EQUATIONS

  • Choi, Young-Mi;Kim, Sang-Dong;Lee, Hyung-Chun
    • Bulletin of the Korean Mathematical Society
    • /
    • v.48 no.5
    • /
    • pp.1079-1092
    • /
    • 2011
  • We consider Newton's method as a direct solver for the optimal control problem of the Navier-Stokes equations. We show that the finite element solutions of the optimal control problem for the Stokes equations may be chosen as the initial guess for the quadratic convergence of Newton's algorithm applied to the optimal control problem for the Navier-Stokes equations, provided the mesh size h is sufficiently small and the Reynolds number is moderate.
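The idea of seeding Newton's method with the solution of a simpler linear problem can be illustrated on a scalar analogue (a toy stand-in, not the Navier-Stokes discretization): solve u + c·u² = 1, where the quadratic term plays the role of the convection nonlinearity and c plays the role of a Reynolds-number-like parameter, starting from the solution u = 1 of the linear part.

```python
def newton(F, dF, u0, tol=1e-14, max_iter=20):
    """Plain Newton iteration; records the residual history."""
    u, hist = u0, []
    for _ in range(max_iter):
        r = F(u)
        hist.append(abs(r))
        if abs(r) < tol:
            break
        u -= r / dF(u)
    return u, hist

c = 0.5                                # moderate "Reynolds number"
F  = lambda u: u + c * u * u - 1.0     # nonlinear problem
dF = lambda u: 1.0 + 2.0 * c * u
u0 = 1.0                               # initial guess: solution of linear part
u, hist = newton(F, dF, u0)
print(u, hist)                         # residuals roughly square each step
```

The residual history shows the quadratic convergence that, in the paper's setting, is obtained when the Stokes solution is close enough to the Navier-Stokes one.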

A CONVERGENCE OF OPTIMAL INVESTMENT STRATEGIES FOR THE HARA UTILITY FUNCTIONS

  • Kim, Jai Heui
    • East Asian mathematical journal
    • /
    • v.31 no.1
    • /
    • pp.91-101
    • /
    • 2015
  • An explicit expression of the optimal investment strategy corresponding to the HARA utility function under the constant elasticity of variance (CEV) model has been given by Jung and Kim [6]. In this paper we give an explicit expression of the optimal solution for the extended logarithmic utility function, and we prove the a.s. convergence of the HARA solutions to the extended logarithmic one.
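The flavor of such a limit can be seen in the simplest case (a generic illustration, not the paper's CEV-model computation): the normalized CRRA utility (x^{1−γ} − 1)/(1 − γ) tends to log x as γ → 1, and correspondingly the Merton fraction (μ − r)/(γσ²) tends to the log-utility fraction (μ − r)/σ².

```python
import math

def crra(x, gamma):
    """Normalized CRRA utility; its gamma -> 1 limit is log(x)."""
    return (x ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

x = 2.0
for gamma in (0.5, 0.9, 0.99, 0.999):
    print(gamma, crra(x, gamma), math.log(x))   # values approach log(2)
```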

Γ-CONVERGENCE FOR AN OPTIMAL DESIGN PROBLEM WITH VARIABLE EXPONENT

  • HAMDI ZORGATI
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.27 no.4
    • /
    • pp.296-310
    • /
    • 2023
  • In this paper, we derive the Γ-limit of functionals pertaining to some optimal material distribution problems that involve a variable exponent, as the exponent goes to infinity. In addition, we prove a relaxation result for supremal optimal design functionals with respect to the weak-∗ L∞(Ω; [0, 1]) × weak W^{1,p}_0(Ω; ℝ^m) topology.
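The role of an exponent going to infinity is mirrored by the elementary fact that L^p norms approach the supremum norm as p → ∞ — shown numerically below as a standard limit, not the paper's Γ-limit:

```python
def p_norm(values, p, dx):
    """Discrete approximation of the L^p norm of a grid function."""
    return (sum(abs(v) ** p for v in values) * dx) ** (1.0 / p)

n = 1000
dx = 1.0 / n
# Sample f(x) = 4x(1-x) on [0, 1]; its sup norm is 1, attained at x = 1/2.
f = [4.0 * x * (1.0 - x) for x in (i * dx for i in range(n + 1))]
sup = max(abs(v) for v in f)
for p in (2, 10, 100, 1000):
    print(p, p_norm(f, p, dx))   # increases toward the sup norm as p grows
```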

A STUDY OF 2-D RECURSIVE LMS WITH ADAPTIVE CONVERGENCE FACTOR (적응 수렴인자를 갖는 이차원 RLMS에 관한 연구)

  • Chung, Young-Sik
    • Proceedings of the KIEE Conference
    • /
    • 1995.07b
    • /
    • pp.941-943
    • /
    • 1995
  • The convergence of an adaptive algorithm depends mainly on the proper choice of a design parameter called the convergence factor. In this paper, an optimal convergence factor for the TRLMS algorithm, which is used to predict the coefficients of the ARMA predictor in ADPCM, is presented. It is shown that such an optimal value can be generated from system signals, so that the adaptive filter becomes self-optimizing in terms of the convergence factor. The algorithm is applied to real images.


Improving the Performances of the Neural Network for Optimization by Optimal Estimation of Initial States (초기값의 최적 설정에 의한 최적화용 신경회로망의 성능개선)

  • 조동현;최흥문
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.8
    • /
    • pp.54-63
    • /
    • 1993
  • This paper proposes a method for improving the performance of neural networks for optimization by optimal estimation of the initial states. The optimal initial state that leads to the global minimum is estimated by using stochastic approximation. Then the update rule of the Hopfield model, which is a high-speed deterministic algorithm using the steepest descent rule, is applied to speed up the optimization. The proposed method has been applied to traveling salesman problems and optimal task partition problems to evaluate its performance. The simulation results show that the convergence speed of the proposed method is higher than that of the conventional Hopfield model, Abe's method, and the Boltzmann machine with random initial neuron output settings, and that convergence to the global minimum is guaranteed with probability 1. The proposed method gives better results as the problem size increases, where it is more difficult for randomized initial settings to achieve good convergence.
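The benefit of estimating a good initial state before running a fast deterministic descent can be seen on a one-dimensional non-convex toy function (a hedged sketch: cheap random sampling stands in for the paper's stochastic approximation, and plain gradient descent stands in for the Hopfield update):

```python
import random

def f(x):  return x**4 - 3.0 * x**2 + x      # global minimum near x = -1.30,
def df(x): return 4.0 * x**3 - 6.0 * x + 1.0 # local minimum near x = 1.13

def descend(x, lr=0.01, steps=500):
    """Steepest descent with a small fixed step (deterministic, fast)."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

rng = random.Random(0)
# Cheap sampling pass to estimate a good initial state, then fast descent.
candidates = [rng.uniform(-2.0, 2.0) for _ in range(50)]
x0 = min(candidates, key=f)
x_good = descend(x0)          # reaches the global minimum

x_bad = descend(1.5)          # a poor initial state gets stuck locally
print(x_good, f(x_good), x_bad, f(x_bad))
```

From a well-estimated initial state, the deterministic descent finds the global minimum, while the same descent from a poor initial state converges to the shallower local minimum.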
