• Title/Summary/Keyword: gradient-descent method


Direct Gradient Descent Control and Sontag's Formula on Asymptotic Stability of General Nonlinear Control System

  • Naiborhu J.;Nababan S. M.;Saragih R.;Pranoto I.
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.2
    • /
    • pp.244-251
    • /
    • 2005
  • In this paper, we study the problem of stabilizing a general nonlinear control system by means of the gradient descent control method, a dynamic feedback control law. In this method, the general nonlinear control system can be treated as an affine nonlinear control system. Then, using Sontag's formula, we investigate the asymptotic stability of the general nonlinear control system.
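
For reference, the Sontag formula invoked here is the standard universal formula (the paper's own notation may differ): for a single-input affine system $\dot{x} = f(x) + g(x)u$ with control Lyapunov function $V$, writing $a(x) = \nabla V \cdot f(x)$ and $b(x) = \nabla V \cdot g(x)$,

```latex
u(x) =
\begin{cases}
  -\dfrac{a(x) + \sqrt{a(x)^2 + b(x)^4}}{b(x)}, & b(x) \neq 0, \\[2mm]
  0, & b(x) = 0.
\end{cases}
```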

Z. Cao's Fuzzy Reasoning Method using Learning Ability

  • Park, Jin-Hyun;Lee, Tae-Hwan;Choi, Young-Kiu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.9
    • /
    • pp.1591-1598
    • /
    • 2008
  • Z. Cao proposed the new fuzzy reasoning method (NFRM), which performs detailed inference using a relation matrix. Despite having fewer inference rules, it performs better than Mamdani's fuzzy inference method. In this paper, we propose Z. Cao's fuzzy inference method with a learning ability based on the gradient descent method, in order to improve its performance. Determining the relation matrix elements by trial and error is difficult, because that approach requires much time and effort. Simulation results on nonlinear systems show that the proposed inference method with gradient descent learning performs well.
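
As a rough sketch of the learning step described in this abstract (not the authors' exact formulation), relation matrix elements can be tuned by gradient descent on a squared output error; `infer` and `grad_wrt_R` below are hypothetical placeholders for the chosen inference structure:

```python
import numpy as np

def train_relation_matrix(R, inputs, targets, infer, grad_wrt_R,
                          lr=0.01, epochs=100):
    """Tune relation matrix R by gradient descent on squared output error.

    infer(R, x)      -> inferred output for input x (inference model)
    grad_wrt_R(R, x) -> d(output)/dR, same shape as R (model-specific)
    Both callables are placeholders for the paper's inference structure.
    """
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = infer(R, x)
            # Chain rule: dE/dR = (y - t) * d(output)/dR for E = (y - t)^2 / 2
            R -= lr * (y - t) * grad_wrt_R(R, x)
    return R
```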

GLOBAL CONVERGENCE OF AN EFFICIENT HYBRID CONJUGATE GRADIENT METHOD FOR UNCONSTRAINED OPTIMIZATION

  • Liu, Jinkui;Du, Xianglin
    • Bulletin of the Korean Mathematical Society
    • /
    • v.50 no.1
    • /
    • pp.73-81
    • /
    • 2013
  • In this paper, an efficient hybrid nonlinear conjugate gradient method is proposed for general unconstrained optimization problems on the basis of the CD method [2] and the DY method [5]; the resulting method possesses the sufficient descent property without any line search. Under the Wolfe line search conditions, we prove the global convergence of the hybrid method for general nonconvex functions. The numerical results show that the hybrid method is especially efficient for the given test problems and can be widely used in scientific and engineering computation.
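
For context, the two ingredient update parameters are the standard CD (conjugate descent) and DY (Dai-Yuan) choices; with $g_k$ the gradient at iterate $x_k$ and $d_{k-1}$ the previous search direction (the exact hybrid rule combining them is given in the article):

```latex
\beta_k^{CD} = \frac{\|g_k\|^2}{-d_{k-1}^{T} g_{k-1}}, \qquad
\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^{T} (g_k - g_{k-1})}, \qquad
d_k = -g_k + \beta_k d_{k-1}, \quad d_0 = -g_0.
```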

Improving the Training Performance of Neural Networks by using Hybrid Algorithm

  • Kim, Weon-Ook;Cho, Yong-Hyun;Kim, Young-Il;Kang, In-Ku
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.11
    • /
    • pp.2769-2779
    • /
    • 1997
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the conjugate gradient backpropagation algorithm and the dynamic tunneling backpropagation algorithm. The conjugate gradient backpropagation algorithm, a fast gradient algorithm, is applied for high-speed optimization. The dynamic tunneling backpropagation algorithm, a deterministic method based on the tunneling phenomenon, is applied for global optimization. After converging to a local minimum with the conjugate gradient backpropagation algorithm, a new initial point for escaping that minimum is estimated by the dynamic tunneling backpropagation algorithm. The proposed method has been applied to parity checking and pattern classification. The simulation results show that the proposed method outperforms both the gradient descent backpropagation algorithm and a hybrid of gradient descent and dynamic tunneling backpropagation, and that it converges to the global minimum more often than the gradient descent backpropagation algorithm.

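The alternation described in this abstract can be sketched as a simple driver loop; the function names and both inner procedures below are hypothetical stand-ins, not the authors' algorithms:

```python
def hybrid_train(loss, grad, w0, cg_minimize, tunnel,
                 target_loss=1e-4, max_rounds=20):
    """Alternate local CG optimization with a tunneling restart.

    cg_minimize(loss, grad, w) -> local minimum reached from w
        (stand-in for conjugate gradient backpropagation)
    tunnel(loss, w)            -> new start point outside the current basin
        (stand-in for the dynamic tunneling phase)
    """
    w = cg_minimize(loss, grad, w0)           # descend to a local minimum
    for _ in range(max_rounds):
        if loss(w) <= target_loss:            # good enough: stop
            break
        w_new = tunnel(loss, w)               # escape the current basin
        w = cg_minimize(loss, grad, w_new)    # descend again from new start
    return w
```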

A NEW CLASS OF NONLINEAR CONJUGATE GRADIENT METHOD FOR UNCONSTRAINED OPTIMIZATION MODELS AND ITS APPLICATION IN PORTFOLIO SELECTION

  • Malik, Maulana;Sulaiman, Ibrahim Mohammed;Mamat, Mustafa;Abas, Siti Sabariah;Sukono, Sukono
    • Nonlinear Functional Analysis and Applications
    • /
    • v.26 no.4
    • /
    • pp.811-837
    • /
    • 2021
  • In this paper, we propose a new conjugate gradient method for solving unconstrained optimization models. Under both exact and strong Wolfe line searches, the proposed method possesses the sufficient descent condition and global convergence properties. Numerical results show that the proposed method is efficient on the given test functions at small, medium, and large dimensions. In addition, the proposed method was applied to practical problems in portfolio selection.
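
As a concrete skeleton of the family of methods in these entries, here is a generic nonlinear CG loop with a Wolfe line search; it uses the classical PRP β as a placeholder rather than any of the new formulas proposed in the papers:

```python
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic nonlinear conjugate gradient with a (strong) Wolfe line search.

    The classical PRP beta below is a placeholder; the papers above each
    propose their own beta with stronger descent/convergence guarantees.
    """
    x, g = x0.astype(float), grad(x0)
    d = -g                                     # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]  # Wolfe-condition step length
        if alpha is None:                      # line search failed: restart
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)   # PRP formula
        d = -g_new + max(beta, 0.0) * d        # PRP+ nonnegativity safeguard
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function
if __name__ == "__main__":
    f = lambda x: (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
    g = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                            200*(x[1] - x[0]**2)])
    print(nonlinear_cg(f, g, np.array([-1.2, 1.0])))
```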

GLOBAL CONVERGENCE OF A NEW SPECTRAL PRP CONJUGATE GRADIENT METHOD

  • Liu, Jinkui
    • Journal of applied mathematics & informatics
    • /
    • v.29 no.5_6
    • /
    • pp.1303-1309
    • /
    • 2011
  • Based on the PRP method, a new spectral PRP conjugate gradient method is proposed for general unconstrained optimization problems; it produces a sufficient descent search direction at every iteration without any line search. Under the Wolfe line search, we prove the global convergence of the new method for general nonconvex functions. The numerical results show that the new method is efficient for the given test problems.
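
For orientation, the classical PRP parameter and the general spectral CG direction family this method belongs to are shown below; the paper's particular spectral parameter $\theta_k$ is given in the article.

```latex
\beta_k^{PRP} = \frac{g_k^{T}(g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad
d_k = -\theta_k g_k + \beta_k d_{k-1}, \quad d_0 = -g_0.
```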

Parameter Learning of Dynamic Bayesian Networks using Constrained Least Square Estimation and Steepest Descent Algorithm

  • Cho, Hyun-Cheol;Lee, Kwon-Soon;Koo, Kyung-Wan
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.58 no.2
    • /
    • pp.164-171
    • /
    • 2009
  • This paper presents a new learning algorithm for dynamic Bayesian networks (DBN) by means of a constrained least square (LS) estimation algorithm and the gradient descent method. First, we propose constrained LS based parameter estimation for a Markov chain (MC) model given observation data sets. Next, gradient descent optimization is utilized for online estimation of a hidden Markov model (HMM), which is constructed bilinearly by adding an observation variable to an MC model. We carry out numerical simulations to demonstrate the reliability and superiority of the approach, in which a series of nonstationary random signals is applied to each of the DBN models.
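
A minimal sketch of the first step, estimating a Markov chain transition matrix by least squares under the standard stochasticity constraints; this is a generic formulation, not necessarily the authors' exact estimator:

```python
import numpy as np
from scipy.optimize import minimize

def estimate_transition_matrix(states, n):
    """Constrained LS estimate of an n-state Markov transition matrix P.

    states: 1-D array of observed state indices (0 .. n-1).
    Minimizes sum_t || e(s_t)^T P - e(s_{t+1})^T ||^2 subject to
    P >= 0 and each row of P summing to 1 (a generic formulation).
    """
    X = np.eye(n)[states[:-1]]            # one-hot current states, (T-1, n)
    Y = np.eye(n)[states[1:]]             # one-hot next states,    (T-1, n)

    def objective(p):
        P = p.reshape(n, n)
        return np.sum((X @ P - Y) ** 2)   # least-squares residual

    cons = [{"type": "eq", "fun": lambda p, i=i: p.reshape(n, n)[i].sum() - 1}
            for i in range(n)]            # each row is a distribution
    res = minimize(objective, np.full(n * n, 1.0 / n), method="SLSQP",
                   bounds=[(0, 1)] * (n * n), constraints=cons)
    return res.x.reshape(n, n)

# Example: two-state chain
seq = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
print(estimate_transition_matrix(seq, 2))
```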

Fuzzy Modeling based on FCM Clustering Algorithm

  • Yoon, Ki-Chan;Oh, Sung-Kwun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.373-373
    • /
    • 2000
  • In this paper, we propose a fuzzy modeling algorithm which divides the input space more efficiently than conventional methods by taking into consideration correlations between components of the sample data. The proposed fuzzy modeling algorithm consists of two steps: coarse tuning, which approximately determines the consequent parameters using the FCRM clustering method, and fine tuning, which adjusts the premise and consequent parameters more precisely by a gradient descent algorithm. To evaluate the performance of the proposed fuzzy model, we use numerical data from a nonlinear function.

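For reference, the coarse-tuning step here builds on fuzzy clustering; the standard fuzzy c-means (FCM) alternating updates, with fuzzifier $m > 1$, data $x_j$, and centers $c_i$, are shown below (FCRM replaces the point prototypes $c_i$ with regression models while keeping the same membership structure):

```latex
u_{ij} = \left[ \sum_{k=1}^{c}
  \left( \frac{\|x_j - c_i\|}{\|x_j - c_k\|} \right)^{\frac{2}{m-1}} \right]^{-1},
\qquad
c_i = \frac{\sum_{j=1}^{N} u_{ij}^{m}\, x_j}{\sum_{j=1}^{N} u_{ij}^{m}}.
```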

Gradient Descent Approach for Value-Based Weighting

  • Lee, Chang-Hwan;Bae, Joo-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.17B no.5
    • /
    • pp.381-388
    • /
    • 2010
  • Naive Bayesian learning has been widely used in many data mining applications and performs surprisingly well in many of them. However, due to the assumption that all attributes are equally important, the posterior probabilities estimated by naive Bayesian learning are sometimes poor. In this paper, we propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We investigate how the proposed value weighting affects the performance of naive Bayesian learning. We develop new gradient descent based methods for both value weighting and feature weighting in the naive Bayesian context. The performance of the proposed methods has been compared with the attribute weighting method and general naive Bayesian learning, and the value weighting method performed better in most cases.
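
A minimal sketch of the idea: a per-attribute-value weight W[a, v] enters the log-posterior and is tuned by gradient descent. The softmax cross-entropy objective and update below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def value_weighted_scores(W, logprior, loglik, x):
    """Class scores for one instance x under value weighting.

    logprior: (C,) log P(c);  loglik: (C, A, V) log P(v | c) per attribute;
    W: (A, V) one weight per attribute *value* (vs. one per attribute);
    x: (A,) observed value index for each attribute.
    """
    A = np.arange(len(x))
    return logprior + (W[A, x] * loglik[:, A, x]).sum(axis=1)

def train_value_weights(W, logprior, loglik, data, labels, lr=0.05, epochs=50):
    """Gradient descent on softmax cross-entropy over the weighted scores
    (an illustrative objective; the paper defines its own)."""
    for _ in range(epochs):
        for x, y in zip(data, labels):
            s = value_weighted_scores(W, logprior, loglik, x)
            p = np.exp(s - s.max()); p /= p.sum()      # softmax posterior
            err = p.copy(); err[y] -= 1.0              # dCE/ds per class
            A = np.arange(len(x))
            # ds_c/dW[a, x_a] = loglik[c, a, x_a]; sum chain rule over classes
            W[A, x] -= lr * (err[:, None] * loglik[:, A, x]).sum(axis=0)
    return W
```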

A NONLINEAR CONJUGATE GRADIENT METHOD AND ITS GLOBAL CONVERGENCE ANALYSIS

  • CHU, AJIE;SU, YIXIAO;DU, SHOUQIANG
    • Journal of applied mathematics & informatics
    • /
    • v.34 no.1_2
    • /
    • pp.157-165
    • /
    • 2016
  • In this paper, we develop a new hybrid conjugate gradient method for solving the unconstrained optimization problem. Under mild assumptions, we obtain the sufficient descent property of the method. The global convergence of the method is also established under the Wolfe-type line search and the general Wolfe line search. The numerical results show that the method is efficient.
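
For reference, the Wolfe line search conditions that recur throughout these convergence analyses require a step length $\alpha_k$ along a descent direction $d_k$ to satisfy, for constants $0 < \delta < \sigma < 1$:

```latex
% Weak Wolfe conditions; the strong version replaces the second with
% |g(x_k + \alpha_k d_k)^T d_k| \le -\sigma\, g_k^T d_k.
f(x_k + \alpha_k d_k) \le f(x_k) + \delta\, \alpha_k\, g_k^{T} d_k, \qquad
g(x_k + \alpha_k d_k)^{T} d_k \ge \sigma\, g_k^{T} d_k.
```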