• Title/Summary/Keyword: Gradient descent method


Gradient Descent Training Method for Optimizing Data Prediction Models (데이터 예측 모델 최적화를 위한 경사하강법 교육 방법)

  • Hur, Kyeong
    • Journal of Practical Engineering Education / v.14 no.2 / pp.305-312 / 2022
  • In this paper, we focus on training students to build and optimize a basic data prediction model, and we propose a method for teaching gradient descent, the machine learning technique widely used to optimize such models. The method visually shows the entire operation of gradient descent as it optimizes, via differentiation, the parameter values required by a data prediction model, and it teaches the effective use of mathematical differentiation in machine learning. To visualize the whole process, we implemented gradient descent software in a spreadsheet. We first present a two-variable gradient descent training method and verify the accuracy of the resulting two-variable prediction model by comparison with the least-squares method; we then present a three-variable version and verify the accuracy of the three-variable model. Finally, we outline directions for optimization practice with gradient descent and analyze the educational effect of the proposed method through satisfaction surveys of non-major students.
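
A rough illustration of the two-variable case described above: the NumPy sketch below (not the paper's spreadsheet software) fits y = w*x + b by gradient descent on the mean squared error and checks the result against the closed-form least-squares solution. The data and learning rate are invented for the example.

```python
import numpy as np

# Toy data for a two-parameter linear prediction model y = w*x + b.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Gradient descent on the mean squared error E(w, b) = mean((w*x + b - y)^2).
w, b, lr = 0.0, 0.0, 0.01
for _ in range(10_000):
    err = w * x + b - y
    w -= lr * 2.0 * np.mean(err * x)   # dE/dw
    b -= lr * 2.0 * np.mean(err)       # dE/db
print("gradient descent:", w, b)

# Closed-form least squares for comparison, as in the paper's verification step.
A = np.column_stack([x, np.ones_like(x)])
w_ls, b_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print("least squares:  ", w_ls, b_ls)
```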

Comparison with two Gradient Methods through the application to the Vector Linear Predictor (두가지 gradient 방법의 벡터 선형 예측기에 대한 적용 비교)

  • Shin, Kwang-Kyun; Yang, Seung-In
    • Proceedings of the KIEE Conference / 1987.07b / pp.1595-1597 / 1987
  • Two gradient methods, the steepest descent method and the conjugate gradient descent method, are compared through application to vector linear predictors. It is found that the convergence rate of the conjugate gradient descent method is much faster than that of the steepest descent method.
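
The reported convergence gap is easy to reproduce on a small quadratic. The sketch below is a generic example, not the paper's vector linear predictor: it minimizes f(x) = 0.5*x'Ax - b'x (the normal-equation form a linear predictor yields) with both methods. Conjugate gradient reaches the exact minimizer of an n-dimensional quadratic in n steps, while steepest descent zig-zags on ill-conditioned problems.

```python
import numpy as np

# Ill-conditioned symmetric positive definite quadratic: minimizing
# f(x) = 0.5*x'Ax - b'x is equivalent to solving Ax = b.
A = np.array([[10.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])

def steepest_descent(x, iters):
    for _ in range(iters):
        r = b - A @ x                  # negative gradient (residual)
        alpha = (r @ r) / (r @ A @ r)  # exact line search along r
        x = x + alpha * r
    return x

def conjugate_gradient(x, iters):
    r = b - A @ x
    p = r.copy()                       # first search direction
    for _ in range(iters):
        alpha = (r @ r) / (p @ A @ p)
        x = x + alpha * p
        r_new = r - alpha * A @ p
        p = r_new + ((r_new @ r_new) / (r @ r)) * p  # A-conjugate direction
        r = r_new
    return x

x0 = np.zeros(2)
x_star = np.linalg.solve(A, b)
print("SD error after 5 steps:", np.linalg.norm(steepest_descent(x0, 5) - x_star))
print("CG error after 2 steps:", np.linalg.norm(conjugate_gradient(x0, 2) - x_star))
```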


A Study on the Development of Teaching-Learning Materials for Gradient Descent Method in College AI Mathematics Classes (대학수학 경사하강법(gradient descent method) 교수·학습자료 개발)

  • Lee, Sang-Gu; Nam, Yun; Lee, Jae Hwa
    • Communications of Mathematical Education / v.37 no.3 / pp.467-482 / 2023
  • In this paper, we present new teaching and learning materials on the gradient descent method, which is widely used in artificial intelligence, suitable for college mathematics courses. These materials explain the gradient descent method at the level of college calculus, and the accompanying SageMath code helps students solve minimization problems easily. We also show how to solve the least squares problem using the gradient descent method. This study can be helpful to instructors who teach college-level mathematics subjects such as calculus, engineering mathematics, numerical analysis, and applied mathematics.
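
The paper's SageMath code is not reproduced here. As a stand-in at the same calculus level, the Python sketch below uses SymPy to form the gradient symbolically and then follows the negative gradient to minimize a toy two-variable function; the objective and step size are assumptions for illustration.

```python
import sympy as sp

# Symbolic setup at the level of college calculus: differentiate f(x, y)
# and follow the negative gradient numerically.
x, y = sp.symbols('x y')
f = (x - 1)**2 + 2*(y + 3)**2          # toy objective, assumed for illustration

grad = [sp.diff(f, v) for v in (x, y)]  # symbolic partial derivatives
g = sp.lambdify((x, y), grad)           # fast numeric gradient

pt, lr = [0.0, 0.0], 0.1
for _ in range(200):
    gx, gy = g(*pt)
    pt = [pt[0] - lr * gx, pt[1] - lr * gy]
print(pt)  # approaches the minimizer (1, -3)
```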

Comparison of Gradient Descent for Deep Learning (딥러닝을 위한 경사하강법 비교)

  • Kang, Min-Jae
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.2 / pp.189-194 / 2020
  • This paper analyzes the gradient descent method, the method most widely used for training neural networks. Learning means updating parameters so that the loss function, which quantifies the difference between actual and predicted values, reaches its minimum. The gradient descent method uses the slope of the loss function to update the parameters so as to minimize error, and it underlies the libraries that provide the best deep learning algorithms. However, these algorithms are provided as black boxes, making it difficult to identify the advantages and disadvantages of the various gradient descent variants. This paper analyzes the characteristics of the stochastic gradient descent method, the momentum method, the AdaGrad method, and the Adadelta method, the gradient descent methods currently in use. The experiments use the Modified National Institute of Standards and Technology (MNIST) data set, which is widely used to verify neural networks. The network has two hidden layers: the first with 500 neurons and the second with 300. The activation function of the output layer is the softmax function, and the rectified linear unit is used for the remaining input and hidden layers. The loss function is the cross-entropy error.
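
For reference, the per-parameter update rules of the four optimizers compared in the paper can be written compactly as follows. Hyperparameter values are common defaults, not the paper's experimental settings.

```python
import numpy as np

def sgd(w, g, lr=0.01):
    return w - lr * g

def momentum(w, g, v, lr=0.01, mu=0.9):
    v = mu * v - lr * g                # velocity accumulates past gradients
    return w + v, v

def adagrad(w, g, h, lr=0.01, eps=1e-8):
    h = h + g * g                      # per-parameter sum of squared gradients
    return w - lr * g / (np.sqrt(h) + eps), h

def adadelta(w, g, eg2, edw2, rho=0.95, eps=1e-6):
    eg2 = rho * eg2 + (1 - rho) * g * g            # running average of g^2
    dw = -(np.sqrt(edw2 + eps) / np.sqrt(eg2 + eps)) * g
    edw2 = rho * edw2 + (1 - rho) * dw * dw        # running average of dw^2
    return w + dw, eg2, edw2

# One illustrative step with a shared toy gradient.
w, g = np.zeros(3), np.array([0.1, -0.2, 0.3])
w_sgd = sgd(w, g)
w_mom, v = momentum(w, g, np.zeros(3))
w_ada, h = adagrad(w, g, np.zeros(3))
w_add, eg2, edw2 = adadelta(w, g, np.zeros(3), np.zeros(3))
```

The state each rule carries is the essential difference: plain SGD is stateless, momentum keeps a velocity, AdaGrad accumulates squared gradients, and Adadelta keeps running averages of both squared gradients and squared updates.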

Tuning Method of the Membership Function for FLC using a Gradient Descent Algorithm (Gradient Descent 알고리즘을 이용한 퍼지제어기의 멤버십함수 동조 방법)

  • Choi, Hansoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.12 / pp.7277-7282 / 2014
  • In this study, the gradient descent algorithm was used to analyze an FLC and to capture the effects of the nonlinear parameters that alter the antecedent and consequent fuzzy variables of the FLC. The controller parameters are tuned iteratively with the gradient descent algorithm. The FLC consists of 7 membership functions and 49 rules in a two-input, one-output system, adopting the min-max inference method and triangular membership functions with 13 quantization levels.
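
A much-reduced sketch of the tuning mechanism follows: a one-input fuzzy system with triangular membership functions whose centers are adjusted by (finite-difference) gradient descent on squared output error. The paper's controller is two-input/one-output with 7 membership functions and 49 rules; everything here (target curve, widths, learning rate) is assumed for illustration.

```python
import numpy as np

def tri(x, c, width=1.0):
    """Triangular membership value of x for a MF centered at c."""
    return np.maximum(0.0, 1.0 - np.abs(x - c) / width)

def flc_output(x, centers, consequents):
    mu = tri(x, centers)                       # firing strengths
    return (mu * consequents).sum() / (mu.sum() + 1e-12)

def loss(centers, consequents, xs, ys):
    return np.mean([(flc_output(x, centers, consequents) - y) ** 2
                    for x, y in zip(xs, ys)])

centers = np.array([-1.0, 0.0, 1.0])      # antecedent MF centers (tuned below)
consequents = np.array([-1.2, 0.0, 1.2])  # fixed rule outputs for this sketch

xs = np.linspace(-1.5, 1.5, 31)
ys = np.tanh(2 * xs)                      # target behaviour, assumed

lr, h = 0.2, 1e-5
for _ in range(200):
    grad = np.zeros_like(centers)
    for i in range(len(centers)):         # finite-difference gradient
        cp, cm = centers.copy(), centers.copy()
        cp[i] += h; cm[i] -= h
        grad[i] = (loss(cp, consequents, xs, ys) -
                   loss(cm, consequents, xs, ys)) / (2 * h)
    centers -= lr * grad
print(centers, loss(centers, consequents, xs, ys))
```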

A Study on the Tensor-Valued Median Filter Using the Modified Gradient Descent Method in DT-MRI (확산텐서자기공명영상에서 수정된 기울기강하법을 이용한 텐서 중간값 필터에 관한 연구)

  • Kim, Sung-Hee; Kwon, Ki-Woon; Park, In-Sung; Han, Bong-Soo; Kim, Dong-Youn
    • Journal of Biomedical Engineering Research / v.28 no.6 / pp.817-824 / 2007
  • Tractography using Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) determines the architecture of axonal fibers in the central nervous system by computing the direction of the principal eigenvector in the white matter of the brain. However, fiber tracking suffers from the noise in the diffusion tensor images, which affects the determination of the principal eigenvector; as tracking progresses, the accumulated error creates a large deviation between the calculated fiber and the real fiber. Mathematically, DT-MRI tractography is an ill-posed problem, meaning it is very sensitive to perturbation by noise. To reduce the noise in DT-MRI measurements, a tensor-valued median filter, reported to be denoising and structure-preserving in fiber tracking, is applied in the tractography. In this paper, we propose a modified gradient descent method that converges quickly and accurately to the optimal tensor-valued median filter by changing the step size, and we compare its performance with other methods. We used a synthetic image consisting of principal eigenvectors at 45 degrees and the corticospinal tract. For the synthetic image, the proposed method achieved 4.66%, 16.66% and 15.08% less error than the conventional gradient descent method for the error measures AE, AAE and AFA, respectively. For the corticospinal tract, at iteration ten the proposed method achieved 3.78%, 25.71% and 11.54% less error than the conventional gradient descent method for AE, AAE and AFA, respectively.
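
The idea of a step-size-adapting gradient descent for a tensor-valued median can be sketched as follows: the median of a set of tensors minimizes the sum of Frobenius distances to them, and the step size is halved whenever a step fails to decrease the objective. This mimics the spirit of the modification only; it is not the paper's exact scheme, and random matrices stand in for DT-MRI data.

```python
import numpy as np

rng = np.random.default_rng(0)
tensors = rng.normal(size=(9, 3, 3))   # stand-in neighbourhood of 3x3 tensors

def objective(M):
    # Sum of Frobenius distances: minimized by the tensor-valued median.
    return sum(np.linalg.norm(M - T) for T in tensors)

def gradient(M, eps=1e-9):
    # Gradient of the sum of distances: sum of unit directions away from each T.
    return sum((M - T) / (np.linalg.norm(M - T) + eps) for T in tensors)

M = tensors.mean(axis=0)               # initialize at the mean
step = 0.5
for _ in range(200):
    f0 = objective(M)
    M_new = M - step * gradient(M)
    if objective(M_new) < f0:
        M = M_new                      # accept the improving step
    else:
        step *= 0.5                    # adapt: halve the step size
print(objective(M))
```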

Improving Automobile Insurance Repair Claims Prediction Using Gradient Descent and Location-based Association Rules

  • Jeong, Seongsu; Kim, Jong Woo
    • Asia Pacific Journal of Information Systems / v.34 no.2 / pp.565-584 / 2024
  • More than one million automobile insurance repairs occur per year globally, and the related repair costs add up to astronomical amounts; insurance companies and repair shops spend a great deal of money on manpower every year to file reasonable insurance repair claims. Promptly predicting insurance claims for vehicles involved in accidents can therefore help reduce the social costs related to auto insurance. Several recent studies have addressed auto insurance repair prediction using variables such as photos of vehicle damage. We propose a new model that reflects the characteristics of auto insurance repair and predicts repair claims through an association rule method combining gradient descent and location information: the gradient descent method is applied to the results generated by association rules to search for the appropriate number of rules, and the main rules are then extracted with a distance filter reflecting the locations of automobile parts to find items suitable for insurance repair claims. According to our results, predictive performance improved when the rule set extracted by the proposed method was applied, so a model combining the gradient descent method and a location-based association rule method is suitable for predicting auto insurance repair claims.
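
One ingredient of the method, choosing how many top-ranked rules to keep by gradient descent on results generated from the rules, might look like the sketch below. The rule mining and the location-based distance filter are stubbed out entirely; validation_error is a hypothetical placeholder shaped so that an intermediate rule count is best, not the paper's data or code.

```python
# Hedged sketch: descend an approximate gradient of validation error with
# respect to the rule count k. Everything here is a placeholder.

def validation_error(k: int) -> float:
    # Hypothetical: error of a claims predictor using the top-k rules,
    # shaped so that some intermediate k is optimal.
    return (k - 120) ** 2 / 1e4 + 0.3

k, lr = 20.0, 1000.0
for _ in range(100):
    # Central finite difference in the (integer) rule count.
    grad = (validation_error(int(k) + 1) - validation_error(int(k) - 1)) / 2.0
    k -= lr * grad
print(int(k))   # settles near the error-minimizing rule count
```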

An Application of the Clustering Threshold Gradient Descent Regularization Method for Selecting Genes in Predicting the Survival Time of Lung Carcinomas

  • Lee, Seung-Yeoun; Kim, Young-Chul
    • Genomics & Informatics / v.5 no.3 / pp.95-101 / 2007
  • In this paper, we consider variable selection methods in the Cox model when a large number of gene expression levels are involved with survival time. Deciding which genes are associated with survival time is a challenging problem because of the large number of genes and the relatively small sample size (n≪p).
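
The threshold gradient descent regularization (TGDR) family named in the title updates, at each step, only the coefficients whose gradient magnitude is within a threshold fraction of the largest one, which is what produces gene selection. A minimal sketch follows, with squared-error loss standing in for the Cox partial likelihood and without the clustering extension; the data and settings are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 200                       # few samples, many "genes" (n << p)
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta, step, tau = np.zeros(p), 0.01, 0.9
for _ in range(500):
    g = X.T @ (y - X @ beta) / n     # negative gradient of squared-error loss
    mask = np.abs(g) >= tau * np.abs(g).max()   # thresholding step of TGDR
    beta += step * g * mask          # update only the strongest coordinates
print("selected genes:", np.flatnonzero(beta))
```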

A new optimization method for improving the performance of neural networks for optimization (최적화용 신경망의 성능개선을 위한 새로운 최적화 기법)

  • 조영현
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.12 / pp.61-69 / 1997
  • This paper proposes a new method for improving the performance of neural networks for optimization, using a hybrid of the gradient descent method and a dynamic tunneling system. The update rule of the gradient descent method, which has a fast convergence characteristic, is applied for high-speed optimization; the update rule of the dynamic tunneling system, a deterministic method with a tunneling phenomenon, is applied for global optimization. Having converged to a local minimum by gradient descent, the network escapes the local minimum by applying the dynamic tunneling system. The proposed method was applied to traveling salesman problems and optimal task partition problems, and its performance was evaluated against that of the Hopfield model using the gradient descent update rule.
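
A toy version of the hybrid makes the division of labor concrete: gradient descent converges quickly to a nearby local minimum, and a tunneling phase then looks for a lower-valued point from which descent can resume. The tunneling dynamics below are a simple outward scan, not the paper's deterministic tunneling update rule, and the multimodal test function is invented.

```python
import numpy as np

f  = lambda x: x**4 - 4*x**2 + x       # two minima; the global one is near -1.49
df = lambda x: 4*x**3 - 8*x + 1

def descend(x, lr=0.01, iters=500):
    # Fast local phase: plain gradient descent.
    for _ in range(iters):
        x -= lr * df(x)
    return x

x = 2.0                                 # start in the basin of the shallow minimum
for _ in range(3):                      # alternate descent and tunneling
    x = descend(x)
    fx = f(x)
    for r in np.arange(0.1, 5.0, 0.1):  # tunnel: scan outward for f < f(x)
        if f(x + r) < fx: x += r; break
        if f(x - r) < fx: x -= r; break
    else:
        break                           # no lower point found: accept x
print(x, f(x))                          # reaches the global minimum
```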


Perceptron-like LVQ: Generalization of LVQ (퍼셉트론 형태의 LVQ : LVQ의 일반화)

  • Song, Geun-Bae; Lee, Haing-Sei
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.1 / pp.1-6 / 2001
  • In this paper we reanalyze Kohonen's learning vector quantization (LVQ) learning rule, which is based on Hebb's learning rule, from the viewpoint of the gradient descent method. Kohonen's LVQ can be classified into two algorithms according to learning mode: unsupervised LVQ (ULVQ) and supervised LVQ (SLVQ). Both algorithms can be represented as gradient descent methods if the target values of the output neurons are generated properly. As a result, the LVQ learning method is a special case of the gradient descent method, and LVQ is generalized to a perceptron-like LVQ (PLVQ).
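
The gradient descent reading of LVQ can be made explicit in a few lines: with the criterion E = +/-0.5*||x - w||^2 (the sign set by whether the winning prototype's class matches the sample's), the familiar LVQ1 update is exactly a step along -dE/dw. The sketch below is a minimal LVQ1 pass on synthetic data, not the paper's generalized PLVQ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian classes in the plane.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)

protos = np.array([[-1.0, 0.0], [1.0, 0.0]])   # one prototype per class
proto_labels = np.array([0, 1])

lr = 0.05
for x, y in zip(X, labels):
    k = np.argmin(np.linalg.norm(protos - x, axis=1))   # winning prototype
    sign = 1.0 if proto_labels[k] == y else -1.0
    # Step along -dE/dw for E = +/-0.5*||x - w||^2: toward x if the class
    # matches, away from x otherwise.
    protos[k] += sign * lr * (x - protos[k])
print(protos)
```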
