• Title/Abstract/Keyword: training optimization

Search results: 412 (processing time: 0.028 s)

A TSK fuzzy model optimization with meta-heuristic algorithms for seismic response prediction of nonlinear steel moment-resisting frames

  • Ebrahim Asadi;Reza Goli Ejlali;Seyyed Arash Mousavi Ghasemi;Siamak Talatahari
    • Structural Engineering and Mechanics / Vol. 90, No. 2 / pp.189-208 / 2024
  • Artificial intelligence is an efficient approach for simulating nonlinear behavior and predicting the response of building structures. In this work, an adaptive method based on optimization algorithms is used to train a TSK (Takagi-Sugeno-Kang) fuzzy inference model that estimates the seismic behavior of building structures from analytical data. The optimization algorithm determines the parameters of the TSK model by minimizing the prediction error over the training data set. The adaptive training feeds back the results of previous time steps, with three training cases using 2, 5, and 10 previous steps. The training data is collected from nonlinear time history analyses under 100 ground motion records with different seismic properties, and 10 further records are used to test the inference system. The performance of the proposed system is evaluated on 3- and 20-story nonlinear steel moment frame models. The results show that the TSK inference model combined with an optimization method is an efficient computational approach for predicting the response of nonlinear structures; among the algorithms tested, the multi-verse optimization (MVO) algorithm is the most accurate in determining the optimal TSK parameters. The accuracy of the results also increases significantly with the number of previous steps used.
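
The core loop described above, fitting TSK parameters by minimizing training error with a metaheuristic, can be sketched compactly. The toy data, the Gaussian-membership first-order TSK form, and the use of SciPy's differential evolution in place of MVO are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))        # e.g. responses at the 2 previous steps
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # toy "next response" target

N_RULES, N_IN = 3, X.shape[1]

def tsk_predict(theta, X):
    """First-order TSK: Gaussian antecedents, linear consequents."""
    c = theta[:N_RULES * N_IN].reshape(N_RULES, N_IN)            # membership centers
    s = theta[N_RULES * N_IN:2 * N_RULES * N_IN].reshape(N_RULES, N_IN)
    w = theta[2 * N_RULES * N_IN:].reshape(N_RULES, N_IN + 1)    # consequent coefficients
    d = (X[:, None, :] - c[None]) / (np.abs(s[None]) + 1e-6)
    fire = np.exp(-0.5 * (d ** 2).sum(axis=2))                   # rule firing strengths
    fire /= fire.sum(axis=1, keepdims=True) + 1e-12
    rule_out = X @ w[:, :N_IN].T + w[:, N_IN]                    # per-rule linear output
    return (fire * rule_out).sum(axis=1)

def mse(theta):                              # the criterion the metaheuristic minimizes
    return np.mean((tsk_predict(theta, X) - y) ** 2)

dim = 2 * N_RULES * N_IN + N_RULES * (N_IN + 1)
res = differential_evolution(mse, bounds=[(-2, 2)] * dim, maxiter=200, seed=1)
print("training MSE:", res.fun)
```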

Slime mold and four other nature-inspired optimization algorithms in analyzing the concrete compressive strength

  • Yinghao Zhao;Hossein Moayedi;Loke Kok Foong;Quynh T. Thi
    • Smart Structures and Systems / Vol. 33, No. 1 / pp.65-91 / 2024
  • This work examines five optimization techniques for finding the best-fit prediction model of a strength-based concrete mixture: the Slime Mold Algorithm (SMA), Black Hole Algorithm (BHA), Multi-Verse Optimizer (MVO), Vortex Search (VS), and Whale Optimization Algorithm (WOA). An artificial neural network is trained in MATLAB with a hybrid learning strategy combining least-squares estimation and backpropagation. Of the 103 samples, 72 are used for training and 31 for testing. All data are analyzed with a multi-layer perceptron (MLP), and the results are verified by comparison. For the best-fit models of SMA-MLP, BHA-MLP, MVO-MLP, VS-MLP, and WOA-MLP, the coefficients of determination (R²) are 0.9603, 0.9679, 0.9827, 0.9841, and 0.9770 in the training phase and 0.9567, 0.9552, 0.9594, 0.9888, and 0.9695 in the testing phase, respectively. The best-fit structures for SMA, BHA, MVO, VS, and WOA (each combined with the MLP) are achieved with population sizes of 450, 500, 250, 150, and 500, respectively. Among the tested options, VS offers the strongest prediction network for training the MLP.
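
As a rough illustration of this hybrid setup, the sketch below scores candidate MLP weight vectors by R² and updates a population around the current best. The plain elitist random search is only a generic stand-in for SMA/BHA/MVO/VS/WOA, and the data are synthetic; the 72/31 split follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(103, 4))                         # toy mixture features
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * rng.normal(size=103)
Xtr, ytr, Xte, yte = X[:72], y[:72], X[72:], y[72:]    # 72 train / 31 test

H = 8                                                  # hidden neurons
DIM = 4 * H + H + H + 1                                # W1, b1, W2, b2 flattened

def mlp(theta, X):
    W1 = theta[:4 * H].reshape(4, H)
    b1 = theta[4 * H:5 * H]
    W2 = theta[5 * H:6 * H]
    return np.tanh(X @ W1 + b1) @ W2 + theta[-1]

def r2(theta, X, y):                                   # fitness = coefficient of determination
    e = y - mlp(theta, X)
    return 1.0 - (e @ e) / ((y - y.mean()) @ (y - y.mean()))

pop = rng.normal(scale=0.5, size=(250, DIM))           # the "population size" knob
for _ in range(300):
    fit = np.array([r2(p, Xtr, ytr) for p in pop])
    best = pop[fit.argmax()]
    pop = best + rng.normal(scale=0.1, size=pop.shape) # resample around the elite
    pop[0] = best                                      # keep the elite itself
print("train R2: %.4f  test R2: %.4f" % (r2(best, Xtr, ytr), r2(best, Xte, yte)))
```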

A Global Optimization Method of Radial Basis Function Networks for Function Approximation (함수 근사화를 위한 방사 기저함수 네트워크의 전역 최적화 기법)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • The KIPS Transactions: Part B / Vol. 14B, No. 5 / pp.377-382 / 2007
  • This paper proposes a training algorithm for global optimization of the parameters of radial basis function networks. Since conventional training algorithms usually perform only local optimization, the network's performance is limited and the final network depends significantly on the initial parameters. The proposed hybrid simulated annealing algorithm performs global optimization of the network parameters by combining the global search capability of simulated annealing with the local optimization capability of gradient-based algorithms. Through experiments on function approximation problems, we demonstrate that the proposed algorithm finds networks with better training and test performance and reduces the effect of the initial parameters on the final result.
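
The hybrid idea, global simulated-annealing proposals refined by local gradient descent, can be sketched as follows. The toy target function, network size, cooling schedule, and the use of SciPy's L-BFGS-B as the local optimizer are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 120)
y = np.sinc(x)                                    # toy function to approximate
K = 6                                             # number of RBF units

def unpack(theta):
    return theta[:K], np.abs(theta[K:2 * K]) + 1e-3, theta[2 * K:]

def rbf(theta, x):                                # Gaussian RBF network output
    c, s, w = unpack(theta)
    return np.exp(-(x[:, None] - c) ** 2 / (2 * s ** 2)) @ w

def loss(theta):
    return np.mean((rbf(theta, x) - y) ** 2)

theta, T = rng.normal(size=3 * K), 1.0
for _ in range(60):
    cand = theta + rng.normal(scale=T, size=theta.shape)     # SA proposal
    cand = minimize(loss, cand, method="L-BFGS-B",
                    options={"maxiter": 20}).x               # local gradient refinement
    if loss(cand) < loss(theta) or rng.random() < np.exp(
            (loss(theta) - loss(cand)) / T):                 # Metropolis acceptance
        theta = cand
    T *= 0.9                                                 # cooling schedule
print("final MSE:", loss(theta))
```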

Improving the Training Performance of Multilayer Neural Network by Using Stochastic Approximation and Backpropagation Algorithm (확률적 근사법과 역전파 알고리즘을 이용한 다층 신경망의 학습성능 개선)

  • 조용현;최흥문
    • Journal of the Korean Institute of Telematics and Electronics B / Vol. 31B, No. 4 / pp.145-154 / 1994
  • This paper proposes an efficient method for improving the training performance of a neural network using a hybrid of stochastic approximation and the backpropagation algorithm. An approximate initial point for fast global optimization is first estimated by stochastic approximation, and then the backpropagation algorithm, a fast gradient-descent method, is applied for high-speed global optimization. Training is sped up further by adjusting the training parameters of the output and hidden layers adaptively to the standard deviation of each layer's neuron outputs. The proposed method has been applied to parity checking and pattern classification, and simulation results show that its performance is superior to that of backpropagation, Baba's MROM, and Sun's method with randomized initial points. The adaptive adjustment of the training parameters further improves the convergence speed by about 20% in training.
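
A minimal sketch of the two-phase scheme on the parity task mentioned in the abstract: a stochastic-approximation search with decreasing gain supplies the starting weights, then backpropagation refines them with per-layer learning rates tied to the spread of each layer's outputs. The direction and constants of that adaptation are guesses for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)         # 2-bit parity (XOR)
sig = lambda z: 1 / (1 + np.exp(-z))

def forward(W1, W2):
    h = sig(X @ W1)
    return h, sig(h @ W2)

def loss(W1, W2):
    return np.mean((forward(W1, W2)[1] - y) ** 2)

# Phase 1: stochastic approximation with a decreasing gain -- keep random
# perturbations that lower the error, to estimate a good initial point.
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
for k in range(1, 201):
    d1, d2 = rng.normal(size=W1.shape), rng.normal(size=W2.shape)
    a = 1.0 / k
    if loss(W1 + a * d1, W2 + a * d2) < loss(W1, W2):
        W1, W2 = W1 + a * d1, W2 + a * d2

# Phase 2: backpropagation, with each layer's learning rate adapted to the
# standard deviation of that layer's outputs (constants are assumptions).
for _ in range(2000):
    h, o = forward(W1, W2)
    go = (o - y) * o * (1 - o)                    # output-layer delta
    gh = (go @ W2.T) * h * (1 - h)                # hidden-layer delta
    lr_o = np.clip(0.5 / (o.std() + 1e-2), 0.1, 5.0)
    lr_h = np.clip(0.5 / (h.std() + 1e-2), 0.1, 5.0)
    W2 -= lr_o * h.T @ go
    W1 -= lr_h * X.T @ gh
print("final MSE:", loss(W1, W2))
```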

Implementation of CNN in the view of mini-batch DNN training for efficient second order optimization (효과적인 2차 최적화 적용을 위한 Minibatch 단위 DNN 훈련 관점에서의 CNN 구현)

  • Song, Hwa Jeon;Jung, Ho Young;Park, Jeon Gue
    • Phonetics and Speech Sciences / Vol. 8, No. 2 / pp.23-30 / 2016
  • This paper describes implementation schemes for CNNs from the viewpoint of mini-batch DNN training, for efficient second-order optimization. By simply arranging an input image as a sequence of local patches, the parameters of a CNN can be trained with the same update procedure as a DNN, which is effectively equivalent to mini-batch DNN training. Through this conversion, second-order optimization, which provides higher performance, can be applied directly to train the CNN parameters. In both image recognition on the MNIST DB and automatic syllable speech recognition, the proposed CNN implementation outperforms a DNN-based one.
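
The conversion described is essentially im2col: unrolling an image into a "mini-batch" of local patches so that a convolution becomes an ordinary fully connected (DNN-style) matrix product. A minimal sketch with assumed sizes:

```python
import numpy as np

def im2col(img, k):
    """Arrange all k x k patches of a 2-D image as rows of a matrix."""
    H, W = img.shape
    rows = [img[i:i + k, j:j + k].ravel()
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.array(rows)                       # (num_patches, k*k)

rng = np.random.default_rng(0)
img = rng.normal(size=(28, 28))                 # e.g. one MNIST image
K, N_FILT = 5, 8
W = rng.normal(scale=0.1, size=(K * K, N_FILT)) # conv filters as a weight matrix

patches = im2col(img, K)                        # (576, 25): a patch "mini-batch"
feat = patches @ W                              # all 8 feature maps, flattened
print(feat.shape)                               # (576, 8) -> reshape to (24, 24, 8)

# Equivalence check against direct 2-D (valid) convolution for filter 0:
direct = np.array([[img[i:i + K, j:j + K].ravel() @ W[:, 0]
                    for j in range(24)] for i in range(24)])
assert np.allclose(feat[:, 0].reshape(24, 24), direct)
```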

Improved Automatic Lipreading by Stochastic Optimization of Hidden Markov Models (은닉 마르코프 모델의 확률적 최적화를 통한 자동 독순의 성능 향상)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • The KIPS Transactions: Part B / Vol. 14B, No. 7 / pp.523-530 / 2007
  • This paper proposes a new stochastic optimization algorithm for hidden Markov models (HMMs) used as the recognizer in automatic lipreading. The proposed method combines a global stochastic optimization method, the simulated annealing technique, with a local optimization method, which produces fast convergence and good solution quality. We show mathematically that the proposed algorithm converges to the global optimum. Experimental results show that training HMMs with the method yields better lipreading performance than conventional training methods based on local optimization.
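
A minimal sketch of simulated-annealing HMM training: perturb the row-stochastic parameters, score the candidate with the scaled forward algorithm, and accept by the Metropolis rule. The toy 2-state, 3-symbol model (with the initial distribution held fixed) stands in for the lipreading HMMs; all settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.integers(0, 3, size=50)                 # toy observation sequence

def normalize(M):                                 # project onto row-stochastic form
    M = np.abs(M)
    return M / M.sum(axis=-1, keepdims=True)

def loglik(pi, A, B, obs):                        # scaled forward algorithm
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s); alpha /= s
    return ll

pi = normalize(rng.random(2))                     # initial distribution (held fixed)
A, B = normalize(rng.random((2, 2))), normalize(rng.random((2, 3)))
best, T = loglik(pi, A, B, obs), 1.0
for _ in range(500):
    A2 = normalize(A + rng.normal(scale=T, size=A.shape))      # SA proposal
    B2 = normalize(B + rng.normal(scale=T, size=B.shape))
    cur = loglik(pi, A2, B2, obs)
    if cur > best or rng.random() < np.exp((cur - best) / T):  # Metropolis rule
        A, B, best = A2, B2, cur
    T *= 0.99                                                  # cooling
print("log-likelihood:", best)
```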

Supervised Learning Artificial Neural Network Parameter Optimization and Activation Function Basic Training Method using Spreadsheets (스프레드시트를 활용한 지도학습 인공신경망 매개변수 최적화와 활성화함수 기초교육방법)

  • Hur, Kyeong
    • Journal of Practical Engineering Education / Vol. 13, No. 2 / pp.233-242 / 2021
  • This paper proposes a method for teaching supervised-learning artificial neural network parameter optimization and activation-function basics in a liberal arts course for non-majors, as the foundation of an introductory artificial neural network curriculum. To this end, parameter optimization solutions are found in a spreadsheet without any programming. With this approach, learners can concentrate on the basic principles of artificial neural network operation and implementation, and the spreadsheet's visualized data raises the interest and educational effect for non-majors. The proposed content consists of artificial neurons with sigmoid and ReLU activation functions, generation of supervised-learning data, configuration and parameter optimization of a supervised-learning artificial neural network, implementation and performance analysis of the network using spreadsheets, and an analysis of educational satisfaction. Considering the optimization of negative parameters for both the sigmoid and ReLU-neuron networks, we propose a teaching method built around four performance-analysis results on neural network parameter optimization and conduct a satisfaction analysis of the training.
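
The cell-level computation that the spreadsheet exercise builds can be mirrored in a few lines: one neuron with a sigmoid or ReLU activation, with a brute-force grid over weight and bias playing the role of the spreadsheet's trial columns. The AND-gate data is an assumed example:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 0, 0, 1], float)                 # AND-gate targets

sigmoid = lambda z: 1 / (1 + np.exp(-z))
relu = lambda z: np.maximum(0, z)

def error(w1, w2, b, act):
    # one neuron: same arithmetic as one row of spreadsheet cells
    out = act(X[:, 0] * w1 + X[:, 1] * w2 + b)
    return np.sum((out - y) ** 2)

# grid search = what the learner reads off the spreadsheet's error column
grid = np.arange(-5, 5.5, 0.5)
best = min((error(w1, w2, b, sigmoid), w1, w2, b)
           for w1 in grid for w2 in grid for b in grid)
print("best SSE %.4f at w1=%.2f w2=%.2f b=%.2f" % best)
```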

Improved Automatic Lipreading by Multiobjective Optimization of Hidden Markov Models (은닉 마르코프 모델의 다목적함수 최적화를 통한 자동 독순의 성능 향상)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • The KIPS Transactions: Part B / Vol. 15B, No. 1 / pp.53-60 / 2008
  • This paper proposes a new multiobjective optimization method for discriminative training of hidden Markov models (HMMs) used as the recognizer for automatic lipreading. While the conventional Baum-Welch algorithm trains each HMM to maximize the probability of the data of its own class, we define a new training criterion composed of two minimization objectives and develop a global optimization method for the criterion based on simulated annealing. The result of a speaker-dependent recognition experiment shows that the proposed method improves performance by a relative error reduction of about 8% compared to the Baum-Welch algorithm.
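
A minimal sketch of a scalarized two-objective discriminative criterion of this kind: one term raises the likelihood of the model's own class, the other lowers its likelihood on competing data. Simple 1-D Gaussian class models stand in for the HMMs, Nelder-Mead stands in for simulated annealing, and the trade-off weight lam is an assumption:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
own = rng.normal(0.0, 1.0, 100)                  # data of the model's own class
other = rng.normal(2.0, 1.0, 100)                # data of competing classes

def cost(theta, lam=0.5):
    mu, log_sd = theta
    ll_own = norm.logpdf(own, mu, np.exp(log_sd)).sum()
    ll_other = norm.logpdf(other, mu, np.exp(log_sd)).sum()
    return -ll_own + lam * ll_other              # two objectives, scalarized

res = minimize(cost, x0=[1.0, 0.0], method="Nelder-Mead")
print("discriminative fit: mu=%.2f sd=%.2f" % (res.x[0], np.exp(res.x[1])))
```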

Injection Mold Cooling Circuit Optimization by Back-Propagation Algorithm (오류역전파 알고리즘을 이용한 사출성형 금형 냉각회로 최적화)

  • Rhee, B.O.;Tae, J.S.;Choi, J.H.
    • Journal of the Korean Society of Manufacturing Technology Engineers / Vol. 18, No. 4 / pp.430-435 / 2009
  • The cooling stage greatly affects product quality in the injection molding process. A cooling system that minimizes temperature variance over the product surface improves both the quality and the productivity of products. The cooling circuit optimization problem was previously solved by a response surface method with four design variables, but the optimization took too long for an industrial design tool, so reducing the optimization time is desirable. In this research, we therefore applied the backpropagation algorithm of an artificial neural network (BPN) to find an optimum solution for the cooling circuit design. We tried various ways of selecting training points for the BPN, and the same optimum solution was obtained with a reduced number of training points selected by a fractional factorial design.
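
The surrogate workflow can be sketched as: sample the four design variables at the 2^(4-1) fractional-factorial points, train a small backpropagation network on the simulation results, then search the trained network for the best design. The analytic stand-in for the mold-cooling simulation and all sizes are assumptions:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
# 2^(4-1) fractional factorial design: x4 is aliased as the product x1*x2*x3
pts = np.array([(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)], float)
simulate = lambda x: (x ** 2).sum(axis=-1) + 0.3 * x[..., 0] * x[..., 3]  # toy cost
t = simulate(pts)

W1 = rng.normal(scale=0.3, size=(4, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.3, size=6); b2 = 0.0
for _ in range(5000):                                  # plain backpropagation
    h = np.tanh(pts @ W1 + b1)
    o = h @ W2 + b2
    go = (o - t) / len(t)                              # output error term
    W2 -= 0.05 * h.T @ go; b2 -= 0.05 * go.sum()
    gh = np.outer(go, W2) * (1 - h ** 2)               # hidden error term
    W1 -= 0.05 * pts.T @ gh; b1 -= 0.05 * gh.sum(axis=0)

cand = rng.uniform(-1, 1, size=(5000, 4))              # search the trained surrogate
pred = np.tanh(cand @ W1 + b1) @ W2 + b2
print("surrogate optimum at", cand[pred.argmin()].round(2))
```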

Modified GMM Training for Inexact Observation and Its Application to Speaker Identification

  • Kim, Jin-Young;Min, So-Hee;Na, Seung-You;Choi, Hong-Sub;Choi, Seung-Ho
    • Speech Sciences / Vol. 14, No. 1 / pp.163-174 / 2007
  • All observations carry uncertainty due to noise or channel characteristics, and this uncertainty should be accounted for when modeling them. In this paper we propose a modified objective function for GMM training that takes inexact observation into account. The objective function is modified by introducing observation confidence as a weighting factor on the probabilities. The proposed criterion is optimized with the standard EM algorithm. To verify the proposed method we apply it to speaker recognition. Experimental results for text-independent speaker identification on the VidTIMIT DB show that the modified GMM training reduces the error rate from 14.8% to 11.7%.
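
A minimal sketch of confidence-weighted GMM training, here folded into EM by weighting each frame's contribution to the sufficient statistics by its observation confidence. The weighting scheme and toy data are assumptions about the modified criterion, not the authors' exact formulation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])
conf = rng.uniform(0.2, 1.0, size=x.size)         # per-frame confidence in [0, 1]

K = 2
w = np.full(K, 1 / K); mu = np.array([-1.0, 1.0]); sd = np.ones(K)
for _ in range(50):
    # E-step: responsibilities, as in standard EM
    p = w * norm.pdf(x[:, None], mu, sd)          # (N, K)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: every sufficient statistic is confidence-weighted, so noisy
    # (low-confidence) frames contribute less to the updated parameters
    rw = r * conf[:, None]
    nk = rw.sum(axis=0)
    w = nk / nk.sum()
    mu = (rw * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((rw * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
print("means:", mu.round(2), "weights:", w.round(2))
```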