• Title, Summary, Keyword: Gradient method (Gradient 법)

Comparison of Gradient Descent for Deep Learning (딥러닝을 위한 경사하강법 비교)

  • Kang, Min-Jae
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.2 / pp.189-194 / 2020
  • This paper analyzes gradient descent, the method most widely used for training neural networks. Learning means updating the parameters so that the loss function, which quantifies the difference between actual and predicted values, reaches its minimum. Gradient descent uses the slope of the loss function to update the parameters so as to minimize the error, and it is what current deep learning libraries provide. However, these algorithms are provided as black boxes, making it difficult to identify the advantages and disadvantages of the various gradient descent methods. This paper analyzes the characteristics of stochastic gradient descent, the momentum method, AdaGrad, and Adadelta, the gradient descent methods in current use. The experiments use the Modified National Institute of Standards and Technology (MNIST) data set, which is widely used to validate neural networks. The network has two hidden layers, with 500 neurons in the first and 300 in the second. The output layer uses the softmax activation function, the rectified linear unit is used for the input and hidden layers, and cross-entropy error is the loss function.
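Since the abstract names the four update rules it compares, a minimal sketch of each on a toy two-dimensional quadratic loss may help; the matrix, vectors, and hyperparameters below are illustrative and are not taken from the paper's MNIST experiments.

```python
import numpy as np

# Toy quadratic loss f(w) = 0.5 w^T A w - b^T w with gradient A w - b;
# its minimizer solves A w = b. All values here are illustrative.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])

def grad(w):
    return A @ w - b

def sgd(w, lr=0.1, steps=200):
    # Plain gradient descent step: w <- w - lr * grad
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def momentum(w, lr=0.1, beta=0.9, steps=200):
    # Momentum accumulates a velocity that smooths successive gradients.
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v - lr * grad(w)
        w = w + v
    return w

def adagrad(w, lr=0.5, eps=1e-8, steps=200):
    # AdaGrad scales each coordinate by its accumulated squared gradients.
    g2 = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        g2 += g * g
        w = w - lr * g / (np.sqrt(g2) + eps)
    return w

def adadelta(w, rho=0.95, eps=1e-6, steps=2000):
    # Adadelta replaces the global learning rate with a ratio of running
    # RMS averages of past updates and past gradients.
    eg2 = np.zeros_like(w)   # running average of squared gradients
    edx2 = np.zeros_like(w)  # running average of squared updates
    for _ in range(steps):
        g = grad(w)
        eg2 = rho * eg2 + (1 - rho) * g * g
        dx = -np.sqrt(edx2 + eps) / np.sqrt(eg2 + eps) * g
        edx2 = rho * edx2 + (1 - rho) * dx * dx
        w = w + dx
    return w

w_star = np.linalg.solve(A, b)  # exact minimizer, for comparison
```

All four drive w toward w_star on this convex problem; their practical differences lie in step-size tuning and adaptation speed, which is what the paper's MNIST experiments probe.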

Fabrication of MEA using gradient catalyst coating method (Gradient catalyst coating 방법을 이용한 MEA 제조)

  • Kim, Kun-Ho;Kim, Hyoung-Juhn;Lee, Sang-Yeop;Lim, Tae-Hoon;Lee, Kwan-Young
    • 한국신재생에너지학회:학술대회논문집 / pp.325-328 / 2006
  • Electrodes for a polymer electrolyte membrane fuel cell were fabricated using the gradient catalyst coating method. Catalyst inks with different Nafion ionomer contents were prepared so that the electrodes had a gradient structure of differing composition ratios. For both the anode and the cathode, two-layer gradient catalyst structures with Nafion content ratios of 9:1, 8:2, 7:3, and 6:4 were fabricated and their performance was measured. Cyclic voltammetry was used to determine the electrochemically active area of the electrodes, and impedance measurements at 0.7 V were used to track changes in polarization resistance, confirming how performance varies with the electrode fabrication method. In particular, MEAs fabricated by the gradient catalyst coating method showed improved performance over conventional MEAs at high current densities (above $1000mA/cm^2$).

Separation of Phospholipids in Step-Gradient Mode (Step-Gradient Mode를 이용한 인지질의 분리)

  • Lee, Ju Weon;Row, Kyung Ho
    • Applied Chemistry for Engineering / v.8 no.4 / pp.694-699 / 1997
  • Normal-phase HPLC was used to separate the useful phospholipids PE, PI, and PC in soybean lecithin. The mobile phases used in these experiments were hexane, isopropanol, and methanol; gradient mode was applied because the three components could not be separated in isocratic mode. To find the optimum separation condition, the concentration profiles of the effluents were calculated from plate theory and the capacity factor in step-gradient mode. From the calculated results, PE was separated with hexane/isopropanol/methanol = 90/5/5 vol.% in isocratic mode, and PI and PC were resolved with a 10-min gradient time and a second mobile phase of hexane/isopropanol/methanol = 50/20/30 vol.% in step-gradient mode. The agreement between the calculated concentration profiles and the experimental data was good, so the methodology developed in this work can be used to obtain the optimum separation condition in gradient mode.

A Development of a Path-Based Traffic Assignment Algorithm using Conjugate Gradient Method (Conjugate Gradient 법을 이용한 경로기반 통행배정 알고리즘의 구축)

  • 강승모;권용석;박창호
    • Journal of Korean Society of Transportation / v.18 no.5 / pp.99-107 / 2000
  • Path-based assignment (PBA) is valuable for dynamic traffic control and routing in an integrated ITS framework. As one of the most widely studied PBA algorithms, the Gradient Projection (GP) algorithm typically yields rapid convergence to a neighborhood of an optimal solution, but once it comes near the solution it tends to slow down. To overcome this problem, we develop a more efficient path-based assignment algorithm by combining the Conjugate Gradient method with the GP algorithm. It determines a more accurate moving direction near the solution in order to gain a significant advantage in speed of convergence. The algorithm was applied to the Sioux Falls network to verify its efficiency, and we demonstrate that this type of method is very useful for improving the speed of convergence in the user equilibrium problem.
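The direction update that distinguishes conjugate gradient from plain steepest descent can be sketched on a small quadratic (illustrative data, not the traffic-assignment objective): each new search direction blends the fresh negative gradient with the previous direction through the Fletcher-Reeves coefficient.

```python
import numpy as np

# Illustrative quadratic objective f(x) = 0.5 x^T A x - b^T x.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def grad(x):
    return A @ x - b

def conjugate_gradient(x, iters=10):
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(iters):
        if g @ g < 1e-12:                   # already converged
            break
        alpha = (g @ g) / (d @ A @ d)       # exact line search on a quadratic
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        d = -g_new + beta * d               # conjugate direction update
        g = g_new
    return x
```

On an n-dimensional quadratic this terminates in at most n iterations, which illustrates why a conjugate direction helps near the solution where steepest-descent-style steps stall.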

Interior Point Methods for Network Problems (An Efficient Conjugate Gradient Method for Interior Point Methods) (네트워크 문제에서 내부점 방법의 활용 (내부점 선형계획법에서 효율적인 공액경사법))

  • 설동렬
    • Journal of the military operations research society of Korea / v.24 no.1 / pp.146-156 / 1998
  • Cholesky factorization is known to be inefficient for problems with dense columns and for network problems in interior point methods. We use the conjugate gradient method instead, with preconditioners to improve its convergence rate. Several preconditioners were applied in LPABO 5.1 and the results were compared with those of CPLEX 3.0. The conjugate gradient method proved more efficient than Cholesky factorization for problems with dense columns and for network problems, and the incomplete Cholesky factorization preconditioner proved the most efficient of the preconditioners tested.
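The template the paper works within can be sketched as follows: generic preconditioned conjugate gradient, here with a simple Jacobi (diagonal) preconditioner standing in for the incomplete Cholesky preconditioner the paper finds best; the matrix is an illustrative SPD example, not an interior-point normal-equations system.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=100):
    # Preconditioned conjugate gradient for SPD A with diagonal M^{-1}.
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv_diag * r            # apply the preconditioner M^{-1}
    d = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ad = A @ d
        alpha = rz / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        d = z + (rz_new / rz) * d # preconditioned direction update
        rz = rz_new
    return x

# Illustrative symmetric positive definite system.
A = np.array([[10.0, 1.0, 0.0],
              [1.0, 8.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b, 1.0 / np.diag(A))
```

Swapping in a stronger preconditioner such as an incomplete Cholesky factor changes only how z is computed from r; the rest of the iteration is unchanged.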

Perceptron-like LVQ : Generalization of LVQ (퍼셉트론 형태의 LVQ : LVQ의 일반화)

  • Song, Geun-Bae;Lee, Haing-Sei
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.1 / pp.1-6 / 2001
  • In this paper we reanalyze Kohonen's learning vector quantization (LVQ) learning rule, which is based on Hebb's learning rule, from the viewpoint of gradient descent. Kohonen's LVQ can be classified into two algorithms according to learning mode: unsupervised LVQ (ULVQ) and supervised LVQ (SLVQ). Both algorithms can be represented as gradient descent methods if the target values of the output neurons are generated properly. As a result, the LVQ learning method is a special case of gradient descent, and LVQ can be represented by a generalized perceptron-like LVQ (PLVQ).
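As a minimal illustration of the rule being reinterpreted (a sketch, not the paper's derivation), the basic LVQ1 update moves the winning prototype toward a correctly labeled input and away from an incorrectly labeled one, which can be read as a signed gradient-style step on a per-prototype squared error; all names and data here are illustrative.

```python
import numpy as np

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    # Find the prototype nearest to input x (the "winner").
    i = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    # Attract the winner if its class matches y, repel it otherwise.
    sign = 1.0 if labels[i] == y else -1.0
    prototypes = prototypes.copy()
    prototypes[i] += sign * lr * (x - prototypes[i])
    return prototypes

protos = np.array([[0.0, 0.0], [1.0, 1.0]])  # one prototype per class
labs = np.array([0, 1])
new = lvq1_step(protos, labs, np.array([0.2, 0.1]), 0)
```

The attraction case is exactly a gradient descent step on 0.5 * ||x - w||^2 for the winning prototype w; the paper's point is that, with properly generated targets, both LVQ variants fit this gradient-descent template.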

Adaptive stochastic gradient method under two mixing heterogeneous models (두 이종 혼합 모형에서의 수정된 경사 하강법)

  • Moon, Sang Jun;Jeon, Jong-June
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1245-1255 / 2017
  • Online learning is the process of obtaining the solution to a given objective function as data accumulate in real time or in batch units. Stochastic gradient descent is one of the most widely used methods for online learning. It is not only easy to implement but also has good solution properties under the assumption that the data-generating model is homogeneous. However, the stochastic gradient method can severely mislead online learning when homogeneity is actually violated. We assume that the observations come from two heterogeneous generating models and propose a new stochastic gradient method that mitigates the resulting problem. We introduce a robust mini-batch optimization method using statistical tests and investigate the convergence radius of the solution of the proposed method. The theoretical results are confirmed by numerical simulations.
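As a heavily simplified illustration of the filtering idea (not the paper's actual test statistic or convergence analysis), consider online estimation of a mean when a minority of mini-batches come from a second, shifted generating model; a z-test against the current estimate decides whether a batch is used. Initializing inside the clean model's basin is an assumption of this sketch, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_sgd_mean(batches, lr=0.05, z_cut=4.0):
    theta = 0.0                   # assumed to start near the clean model
    for batch in batches:
        m = batch.mean()
        se = batch.std(ddof=1) / np.sqrt(len(batch)) + 1e-12
        if abs(m - theta) / se > z_cut:
            continue              # z-test flags the batch as heterogeneous
        theta -= lr * (theta - m) # SGD step on the squared-error objective
    return theta

# 180 clean batches from N(0,1) mixed with 20 batches from a shifted N(10,1).
batches = np.concatenate([rng.normal(0.0, 1.0, (180, 20)),
                          rng.normal(10.0, 1.0, (20, 20))])
rng.shuffle(batches)              # shuffle batch order in place
theta_hat = robust_sgd_mean(batches)
```

Without the test, each contaminated batch would drag the estimate toward 10; with it, flagged batches are simply skipped, which mimics the paper's robust mini-batch idea in the crudest possible form.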

Regularized iterative image restoration using the conjugate gradient method with constraints (구속 조건을 사용한 공액 경사법에 의한 정칙화 반복 복원 처리)

  • 김승묵;홍성용;이태홍
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.9 / pp.1985-1997 / 1997
  • This paper proposes a regularized iterative image restoration method based on the conjugate gradient method. Compared with conventional iterative methods, the conjugate gradient method has the merit of converging to a solution at a superlinear rate. Because of this very property, however, artifacts such as ringing effects and partial amplification of noise arise when restoring images degraded by defocus blur and additive noise. We therefore propose a regularized conjugate gradient method with constraints. By applying a projection constraint and a regularization parameter to the method, the amplification of additive noise can be suppressed. Experimental results show the superior convergence ratio of the proposed method compared with conventional regularized iterative methods.
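To make the roles of the regularization parameter and the projection constraint concrete, here is a hedged sketch of a simpler relative of the method: Tikhonov-regularized least squares solved by projected gradient iterations (the paper itself uses conjugate gradient, and the matrices below are toy stand-ins for a blur operator).

```python
import numpy as np

def restore(H, y, lam=0.01, iters=500):
    # Minimize ||Hx - y||^2 + lam * ||x||^2 subject to x >= 0.
    n = H.shape[1]
    A = H.T @ H + lam * np.eye(n)    # regularized normal-equations matrix
    rhs = H.T @ y
    lr = 1.0 / np.linalg.norm(A, 2)  # safe step from the largest eigenvalue
    x = np.zeros(n)
    for _ in range(iters):
        x = x - lr * (A @ x - rhs)   # gradient step on the regularized loss
        x = np.clip(x, 0.0, None)    # projection constraint: nonnegativity
    return x

# Toy "blur": a small symmetric smoothing matrix applied to a signal.
H = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
y = H @ x_true
x_hat = restore(H, y)
```

The lam term damps the small singular values of H that would otherwise amplify noise, and the projection keeps the iterate in a physically plausible set, which is the same division of labor the paper describes for its constrained conjugate gradient scheme.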

An Efficient Training of Multilayer Neural Networks Using Stochastic Approximation and Conjugate Gradient Method (확률적 근사법과 공액기울기법을 이용한 다층신경망의 효율적인 학습)

  • 조용현
    • Journal of the Korean Institute of Intelligent Systems / v.8 no.5 / pp.98-106 / 1998
  • This paper proposes an efficient learning algorithm for improving the training performance of neural networks. The proposed method improves training performance by applying to the backpropagation algorithm a global optimization method that is a hybrid of stochastic approximation and the conjugate gradient method. The approximate initial point for global optimization is first estimated by stochastic approximation, and then the conjugate gradient method, a fast gradient-based method, is applied for high-speed optimization. The proposed method was applied to parity checking and pattern classification, and the simulation results show that its performance is superior to that of conventional backpropagation and of a backpropagation algorithm that hybridizes stochastic approximation with steepest descent.
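The two-phase idea, a cheap stochastic search to supply an approximate global starting point followed by a fast local gradient method, can be sketched as follows; for simplicity the local phase here is plain gradient descent rather than the conjugate gradient method, and the multimodal test function is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Multimodal test function with a unique global minimum near x = -0.52.
    return np.sin(3 * x) + 0.1 * x ** 2

def df(x):
    return 3 * np.cos(3 * x) + 0.2 * x

def two_phase_minimize(n_samples=200, lr=0.01, steps=500):
    # Phase 1: stochastic exploration picks the best of many random points,
    # standing in for the paper's stochastic-approximation phase.
    xs = rng.uniform(-5.0, 5.0, n_samples)
    x = xs[np.argmin(f(xs))]
    # Phase 2: fast local descent refines the approximate global point.
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_min = two_phase_minimize()
```

Run alone, the local phase would converge to whichever basin it starts in; the stochastic phase makes it very likely that this basin is the global one, which is the division of labor the abstract describes.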

An Analysis of the Optimal Control of Air-Conditioning System with Slab Thermal Storage by the Gradient Method Algorithm (구배법 알고리즘에 의한 슬래브축열의 최적제어 해석)

  • Jung, Jae-Hoon
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.20 no.8 / pp.534-540 / 2008
  • In this paper, the optimal bang-bang control problem of an air-conditioning system with slab thermal storage was formulated by the gradient method. Furthermore, the numerical solution obtained by the gradient method algorithm was compared with the analytic solution obtained from the maximum principle. In the analytic solution, the control variable changes discontinuously at the start time of the thermal storage operation, whereas the numerical solution is continuous there. The numerical solution reproduces the analytic solution when a strict convergence tolerance is applied. The gradient method therefore appears effective for analyzing the optimal bang-bang control of a large-scale system such as an air-conditioning system with slab thermal storage.