• Title/Summary/Keyword: training parameters

1,021 search results (processing time: 0.034 seconds)

A Modified Extended Kalman Filter Method for Multi-Layered Neural Network Training

  • Kim, Kyungsup;Won, Yoojae
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.22 no.2 / pp.115-123 / 2018
  • This paper discusses an extended Kalman filter method for solving the learning problem of multilayered neural networks. Many learning algorithms for deep layered networks suffer severely from complex computation and slow convergence because of the very large number of free parameters. We consider an efficient learning algorithm for deep neural networks. The extended Kalman filter method is applied to the parameter estimation of a neural network to improve convergence and computational complexity. We discuss how an efficient algorithm for neural network learning can be developed using the extended Kalman filter.
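
The EKF weight-update equations the abstract refers to can be sketched for the smallest possible case, a single tanh neuron; this is only an illustration of the general idea, not the paper's multi-layer formulation, and all hyperparameter values below are made up.

```python
import numpy as np

# Sketch of EKF-based training for one tanh neuron y = tanh(w * x).
# The weight w is the filter state; the network output is the
# (nonlinear) measurement, linearized through its Jacobian.
def ekf_train(xs, ys, w0=0.0, P0=1.0, R=0.01, Q=1e-6):
    w, P = w0, P0
    for x, y in zip(xs, ys):
        h = np.tanh(w * x)         # predicted measurement
        H = (1.0 - h * h) * x      # Jacobian dh/dw
        S = H * P * H + R          # innovation covariance
        K = P * H / S              # Kalman gain
        w += K * (y - h)           # weight (state) update
        P = (1.0 - K * H) * P + Q  # covariance update
    return w

rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, 200)
ys = np.tanh(0.8 * xs)             # data generated with weight 0.8
w_hat = ekf_train(xs, ys)
```

With noise-free data the estimate settles near the generating weight; R and Q play the role of learning-rate hyperparameters here.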

Development of Prediction Model for Root Industry Production Process Using Artificial Neural Network (인공신경망을 이용한 뿌리산업 생산공정 예측 모델 개발)

  • Bak, Chanbeom;Son, Hungsun
    • Journal of the Korean Society for Precision Engineering / v.34 no.1 / pp.23-27 / 2017
  • This paper aims to develop a prediction model for the product quality of a casting process. Prediction of the product quality utilizes an artificial neural network (ANN) in order to renovate the manufacturing technology of the root industry. Various aspects of the prediction algorithm for the casting process using an ANN have been investigated. First, the key process parameters were selected by means of a statistical analysis of the process data. Then, the optimal numbers of layers and neurons in the ANN structure were established. Next, a feed-forward network with back propagation and the Levenberg-Marquardt algorithm were selected for training. Simulation of the predicted product quality shows that the prediction is accurate. Finally, the results show that the ANN can be an effective tool for predicting the outcome of the casting process.
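
As a hedged illustration of Levenberg-Marquardt training of a small feed-forward network (not the paper's casting data, network size, or software), SciPy's least-squares driver can fit a one-hidden-layer network; the target curve below is a stand-in for a process-quality relationship.

```python
import numpy as np
from scipy.optimize import least_squares

# A tiny 1-input, 4-hidden-unit, 1-output network whose parameters are
# packed into one flat vector so Levenberg-Marquardt can optimize them.
def mlp(params, x, n_hidden=4):
    W1 = params[:n_hidden].reshape(n_hidden, 1)
    b1 = params[n_hidden:2 * n_hidden]
    W2 = params[2 * n_hidden:3 * n_hidden]
    b2 = params[-1]
    h = np.tanh(x @ W1.T + b1)         # hidden layer
    return h @ W2 + b2                 # linear output layer

def residuals(params, x, y):
    return mlp(params, x) - y

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, (50, 1))
y = np.sin(np.pi * x[:, 0])            # surrogate "quality" curve
p0 = rng.normal(0.0, 0.5, 3 * 4 + 1)   # initial parameters
fit = least_squares(residuals, p0, args=(x, y), method='lm')
rmse = np.sqrt(np.mean((mlp(fit.x, x) - y) ** 2))
```

The `method='lm'` path requires at least as many residuals as parameters (here 50 vs. 13), which also reflects the abstract's point about sizing the network to the data.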

Bayesian Testing for the Equality of K-Exponential Populations (K개 지수분포의 상등에 관한 베이지안 다중검정)

  • Moon, Kyoung-Ae;Kim, Dal-Ho
    • Journal of the Korean Data and Information Science Society / v.12 no.1 / pp.41-50 / 2001
  • We propose a Bayesian test for the equality of K exponential population means. Specifically, we use the intrinsic Bayes factors suggested by Berger and Pericchi (1996, 1998) based on noninformative priors for the parameters, and we investigate the usefulness of the proposed Bayesian testing procedure via simulations.
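
The mechanics of a Bayes-factor test for equal exponential means can be sketched with proper conjugate Gamma priors, for which the marginal likelihoods are closed-form. This is only the general recipe; it does not reproduce the intrinsic Bayes factors with noninformative priors that the paper uses.

```python
import math

# Bayes factor for H0: two exponential samples share one rate, vs.
# H1: separate rates, under Gamma(a, b) priors on each rate.
def log_marginal(sum_x, n, a=1.0, b=1.0):
    # log marginal likelihood of n exponential observations summing
    # to sum_x, with the rate integrated out against Gamma(a, b)
    return (a * math.log(b) + math.lgamma(a + n)
            - math.lgamma(a) - (a + n) * math.log(b + sum_x))

def log_bf01(x1, x2):
    n1, n2 = len(x1), len(x2)
    s1, s2 = sum(x1), sum(x2)
    # H0 pools both samples under a common rate; H1 treats them apart
    return (log_marginal(s1 + s2, n1 + n2)
            - (log_marginal(s1, n1) + log_marginal(s2, n2)))

same = log_bf01([1.0, 1.1, 0.9, 1.2], [1.0, 0.8, 1.1, 1.0])
diff = log_bf01([1.0, 1.1, 0.9, 1.2], [10.0, 9.0, 11.0, 12.0])
```

Similar samples should push the log Bayes factor toward H0 (positive), clearly different ones toward H1 (negative).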


Bayesian Testing for the Equality of Two Lognormal Populations with the Fractional Bayes Factor (부분 베이즈요인을 이용한 로그정규분포의 상등에 관한 베이지안검정)

  • Moon, Kyoung-Ae;Kim, Dal-Ho
    • Journal of the Korean Data and Information Science Society / v.12 no.1 / pp.51-59 / 2001
  • We propose a Bayesian test for the equality of two lognormal population means. Specifically, we use the fractional Bayes factor suggested by O'Hagan (1995) based on noninformative priors for the parameters. To investigate the usefulness of the proposed Bayesian testing procedure, we compare it with classical tests via both real data analysis and simulations.
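
The fractional-Bayes-factor device (a fraction b of the likelihood "trains" the improper prior) can be shown in the simplest possible setting, a known-variance normal mean test. This is an assumption-laden illustration of O'Hagan's construction, not the paper's two-sample lognormal test.

```python
import numpy as np
from scipy.integrate import quad

# Fractional Bayes factor for H0: mu = 0 vs H1: mu free, with N(mu, 1)
# data and a flat (improper) prior on mu under H1.
def log_lik(mu, x):
    return -0.5 * np.sum((x - mu) ** 2) - 0.5 * len(x) * np.log(2 * np.pi)

def fbf_10(x, b):
    # full and fractional marginal likelihoods under H1
    m1  = quad(lambda mu: np.exp(log_lik(mu, x)), -10, 10)[0]
    m1b = quad(lambda mu: np.exp(b * log_lik(mu, x)), -10, 10)[0]
    # H0 is a point hypothesis, so its "marginals" are likelihood values
    l0  = np.exp(log_lik(0.0, x))
    l0b = np.exp(b * log_lik(0.0, x))
    return (m1 / m1b) / (l0 / l0b)

x_null = np.array([0.1, -0.2, 0.05, 0.1, -0.05])
x_alt = x_null + 2.0                 # same data shifted away from 0
b = 1.0 / len(x_null)                # minimal training fraction
bf_null = fbf_10(x_null, b)
bf_alt = fbf_10(x_alt, b)
```

Data near zero should give a fractional Bayes factor below one (favouring H0), and clearly shifted data a value above one.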


A study on the speech recognition by HMM based on multi-observation sequence (다중 관측열을 토대로한 HMM에 의한 음성 인식에 관한 연구)

  • 정의봉
    • Journal of the Korean Institute of Telematics and Electronics S / v.34S no.4 / pp.57-65 / 1997
  • The purpose of this paper is to propose an HMM (hidden Markov model) based on multi-observation sequences for isolated word recognition. The proposed model generates the MSVQ codebook by dividing each word, and the training data, into several sections. A multi-observation sequence is then obtained for each section by weighting the distance vectors from lower values to higher ones, and the sequence with the highest probability is selected during recognition. 146 DDD area names are selected as the target recognition vocabulary, and 10 LPC cepstrum coefficients are used as the feature parameters. Besides the speech recognition experiments with the proposed model, experiments with DP, MSVQ, and general HMM are made with the same data under the same conditions for comparison. The experimental results show that the HMM based on multi-observation sequences proposed in this paper is superior to the other methods (DP, MSVQ, and general HMM) in both recognition rate and time.
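
The baseline scoring step common to all HMM recognizers in the comparison can be sketched as follows: one discrete HMM per word scores the VQ codebook index sequence via the forward algorithm, and the best-scoring word wins. All model numbers here are invented, and the paper's MSVQ multi-observation weighting is not reproduced.

```python
import numpy as np

# Forward-algorithm log-likelihood of a discrete observation sequence
# under an HMM with initial probs pi, transitions A, emissions B.
def log_forward(obs, pi, A, B):
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = np.log(np.exp(alpha) @ A) + np.log(B[:, o])
    return float(np.log(np.sum(np.exp(alpha))))

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.3, 0.7]])
B_word0 = np.array([[0.8, 0.2],      # word 0 mostly emits symbol 0
                    [0.6, 0.4]])
B_word1 = np.array([[0.2, 0.8],      # word 1 mostly emits symbol 1
                    [0.4, 0.6]])

obs = [0, 0, 0, 0]                   # a sequence of codebook indices
score0 = log_forward(obs, pi, A, B_word0)
score1 = log_forward(obs, pi, A, B_word1)
```

A symbol-0-heavy sequence should score higher under the word-0 model, which is the decision rule used for isolated word recognition.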


Design of Multilayer Perceptrons for Pattern Classifications (패턴인식 문제에 대한 다층퍼셉트론의 설계 방법)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.99-106 / 2010
  • Multilayer perceptrons (MLPs), or feed-forward neural networks, are widely applied to many areas based on their function approximation capabilities. When implementing MLPs for application problems, we must determine various parameters and training methods. In this paper, we discuss the design of MLPs, especially for pattern classification problems. This discussion covers how to decide the number of nodes in each layer, how to initialize the weights of MLPs, how to train MLPs among various error functions, imbalanced data problems, and deep architectures.
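
Two of the design choices the paper surveys, sizing the layers from the problem and scaling initial weights by fan-in so hidden units start in their active range, can be sketched as below. The fan-in heuristic shown is a common default, offered here as an illustration rather than the paper's specific recommendation.

```python
import numpy as np

# Build the weight matrices of a 1-hidden-layer MLP with fan-in-scaled
# uniform initialization, so pre-activations start on the order of 1.
def init_mlp(n_in, n_hidden, n_out, rng):
    layers = []
    for fan_in, fan_out in [(n_in, n_hidden), (n_hidden, n_out)]:
        limit = 1.0 / np.sqrt(fan_in)      # shrink with more inputs
        W = rng.uniform(-limit, limit, (fan_in, fan_out))
        b = np.zeros(fan_out)              # biases start at zero
        layers.append((W, b))
    return layers

rng = np.random.default_rng(0)
# e.g. 8 input features, 16 hidden nodes, 3 classes
net = init_mlp(n_in=8, n_hidden=16, n_out=3, rng=rng)
```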

Structure Minimization using Impact Factor in Neural Networks

  • Seo, Kap-Ho;Song, Jae-Su;Lee, Ju-Jang
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.484-484 / 2000
  • The problem of determining the proper size of a neural network is recognized to be crucial, especially for its practical implications for such important issues as learning and generalization. Unfortunately, it usually is not obvious what size is best: a system that is too small will not be able to learn the data, while one that is just big enough may learn slowly and be very sensitive to initial conditions and learning parameters. One popular technique, commonly known as pruning, consists of training a larger-than-necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed based on penalty-term methods. This method preserves the network's generalization ability and reduces the retraining time after pruning weights/nodes.
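
The penalty-term pruning idea can be sketched on a linear model standing in for a network: a weight-decay penalty shrinks unimportant weights toward zero during training, after which weights below a threshold are removed. The data, penalty strength, and threshold below are invented, and this is not the paper's specific penalty formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, 0.0, -1.5, 0.0, 0.0])   # only 2 useful inputs
y = X @ true_w

# Gradient descent on squared error plus an L2 (weight-decay) penalty;
# the penalty drives weights of useless inputs toward zero.
w = rng.normal(size=5)
lr, decay = 0.05, 0.01
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(X) + decay * w
    w -= lr * grad

# Prune: zero out weights that the penalty has made negligible.
pruned = np.where(np.abs(w) < 0.1, 0.0, w)
n_kept = int(np.count_nonzero(pruned))
```

The surviving weights identify the inputs that actually carry information, which is the structure-minimization effect the abstract describes.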


Modeling of a 5-Bar Linkage Robot Manipulator with Joint Flexibility Using Neural Network (신경 회로망을 이용한 유연한 축을 갖는 5절 링크 로봇 메니퓰레이터의 모델링)

  • 이성범;김상우;오세영;이상훈
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.431-431 / 2000
  • The modeling of 5-bar linkage robot manipulator dynamics by means of a mathematical and neural architecture is presented. Such a model is applicable to the design of a feedforward controller or the adjustment of controller parameters. The inverse model consists of two parts: a mathematical part and a compensation part. In the mathematical part, the subsystems of the 5-bar linkage robot manipulator are constructed by applying Kawato's feedback-error-learning method and trained with the given training data. In the compensation part, the MLP backpropagation algorithm is used to compensate for the unmodeled dynamics. The forward model is realized from the inverse model using the inverse of the inertia matrix, and the compensation torque is decoupled in the input torque of the forward model. This scheme can use the mathematical knowledge of the robot manipulator and capture the robot's characteristics. It is shown that the model is suitable for the design and initial gain tuning of a controller.
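
The feedback-error-learning loop mentioned above can be sketched conceptually: the feedback controller's output serves as the training error for the inverse model. The "plant" below is a trivial static gain, not the paper's 5-bar flexible-joint manipulator, so the ideal inverse gain is simply 2.

```python
import numpy as np

w = 0.0                                  # inverse-model parameter to learn
kp, lr = 1.0, 0.1                        # feedback gain, learning rate
for step in range(300):
    target = np.sin(0.05 * step)         # desired output trajectory
    u_ff = w * target                    # feedforward from inverse model
    y = u_ff / 2.0                       # toy plant: y = u / 2
    u_fb = kp * (target - y)             # feedback controller output
    w += lr * u_fb * target              # feedback error trains the model
```

As the inverse model improves, the feedback term shrinks and feedforward takes over, which is why the scheme suits feedforward controller design.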


A novel visual tracking system with adaptive incremental extreme learning machine

  • Wang, Zhihui;Yoon, Sook;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.451-465 / 2017
  • This paper presents a novel discriminative visual tracking algorithm with an adaptive incremental extreme learning machine. The parameters for an adaptive incremental extreme learning machine are initialized at the first frame with a target that is manually assigned. At each frame, the training samples are collected and random Haar-like features are extracted. The proposed tracker updates the overall output weights for each frame, and the updated tracker is used to estimate the new location of the target in the next frame. The adaptive learning rate for the update of the overall output weights is estimated by using the confidence of the predicted target location at the current frame. Our experimental results indicate that the proposed tracker can manage various difficulties and can achieve better performance than other state-of-the-art trackers.
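
The incremental output-weight update at the heart of such a tracker can be sketched as follows: the random hidden layer stays fixed, and only the output weights are refined recursively (RLS-style) as samples arrive. The Haar-like features, confidence-adaptive learning rate, and tracking loop of the paper are omitted; everything below is an invented toy problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 20
W_in = rng.normal(size=(n_in, n_hidden))   # fixed random projection
b_in = rng.normal(size=n_hidden)

def hidden(x):
    return np.tanh(x @ W_in + b_in)        # ELM hidden-layer output

beta_true = rng.normal(size=n_hidden)      # stands in for the target map
beta = np.zeros(n_hidden)                  # output weights to learn
P = np.eye(n_hidden) * 1e4                 # RLS inverse-covariance

def update(x, y):
    global beta, P
    h = hidden(x)
    k = P @ h / (1.0 + h @ P @ h)          # gain vector
    beta = beta + k * (y - h @ beta)       # sequential weight update
    P = P - np.outer(k, h) @ P

for _ in range(500):
    x = rng.uniform(-1.0, 1.0, n_in)
    update(x, hidden(x) @ beta_true)       # noise-free target output

x_test = np.array([0.3, -0.2])
pred = hidden(x_test) @ beta
```

Only the output layer is ever retrained, which is what makes per-frame updates cheap enough for tracking.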

Spring Flow Prediction affected by Hydro-power Station Discharge using the Dynamic Neuro-Fuzzy Local Modeling System

  • Hong, Timothy Yoon-Seok;White, Paul Albert.
    • Proceedings of the Korea Water Resources Association Conference / 2007.05a / pp.58-66 / 2007
  • This paper introduces a new generic dynamic neuro-fuzzy local modeling system (DNFLMS), based on a dynamic Takagi-Sugeno (TS) type fuzzy inference system, for complex dynamic hydrological modeling tasks. The proposed DNFLMS applies a local generalization principle and a one-pass training procedure, using the evolving clustering method to create and update fuzzy local models dynamically and the extended Kalman filtering learning algorithm to optimize the parameters of the consequence part of the fuzzy local models. The proposed DNFLMS is applied to develop an inference model to forecast the flow of Waikoropupu Springs, located in the Takaka Valley, South Island, New Zealand, and the influence of the operation of the 32 MW Cobb hydropower station on spring flow. It is demonstrated that the proposed DNFLMS is superior in terms of model accuracy, model complexity, and computational efficiency when compared with a multi-layer perceptron trained with the back-propagation learning algorithm and the well-known adaptive neuro-fuzzy inference system, both of which adopt global generalization.
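
The TS building block of such a system can be sketched minimally: the output is a membership-weighted blend of local linear models. Rule centres, widths, and coefficients below are invented, and the evolving clustering and EKF parameter updates of the DNFLMS are not shown.

```python
import numpy as np

centers = np.array([0.0, 5.0])          # rule centres (e.g. low/high flow)
width = 2.0                             # shared Gaussian membership width
coeffs = np.array([[1.0, 0.0],          # rule 1 consequence: y = x
                   [0.2, 4.0]])         # rule 2 consequence: y = 0.2x + 4

def ts_predict(x):
    # Gaussian rule memberships, then a normalized blend of the
    # local linear model outputs (zero-order weights on first-order rules)
    mu = np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))
    local = coeffs[:, 0] * x + coeffs[:, 1]
    return float(np.sum(mu * local) / np.sum(mu))

y_low = ts_predict(0.0)    # dominated by rule 1 (y = x)
y_high = ts_predict(5.0)   # dominated by rule 2 (y = 0.2x + 4)
```

Near each rule centre the prediction follows that rule's local model, which is the "local generalization" the abstract contrasts with global methods.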
