• Title/Summary/Keyword: Neural network optimization


An Artificial Neural Network for the Optimal Path Planning (최적경로탐색문제를 위한 인공신경회로망)

  • Kim, Wook;Park, Young-Moon
    • Proceedings of the KIEE Conference
    • /
    • 1991.07a
    • /
    • pp.333-336
    • /
    • 1991
  • In this paper, a Hopfield & Tank model-like artificial neural network structure is proposed, which can be used for optimal path planning problems such as unit commitment or maintenance scheduling, problems that have traditionally been solved by dynamic programming or the branch-and-bound method. To construct the structure of the neural network, an energy function is defined whose global minimum corresponds to the optimal path of the problem. To avoid falling into one of the local minima during the optimization process, simulated annealing is applied by gradually steepening the slope of the sigmoid transfer functions as the process progresses (see the sketch after this entry). As a result, computer (IBM 386-AT, 34 MHz) simulations solve the optimal unit commitment problem with 10 power units over a 24-hour horizon (1-hour intervals) in 5 minutes. Furthermore, if fully parallel neural network hardware is constructed, the optimization time will be reduced remarkably.

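The abstract gives no equations, so the following is only a minimal sketch of the idea it describes: a continuous Hopfield/Tank-style network relaxed while the sigmoid gain is gradually increased, an annealing-like schedule. The weight matrix W, bias b, and gain schedule below are illustrative placeholders, not the paper's unit-commitment encoding.

```python
import numpy as np

def hopfield_anneal(W, b, n_iter=200, gain_start=0.5, gain_end=20.0, seed=0):
    """Relax a continuous Hopfield/Tank-style network while steepening the
    sigmoid transfer function (a simulated-annealing-like gain schedule)."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.4, 0.6, size=W.shape[0])          # neuron outputs in (0, 1)
    gains = np.geomspace(gain_start, gain_end, n_iter)  # slope schedule
    for g in gains:
        u = W @ v + b                                   # net input to each neuron
        v = 1.0 / (1.0 + np.exp(-g * u))                # sigmoid with gain g
    return v

def energy(W, b, v):
    """Energy function whose global minimum would encode the optimal path."""
    return -0.5 * v @ W @ v - b @ v

# Toy 6-neuron example with a symmetric, zero-diagonal weight matrix.
rng = np.random.default_rng(1)
W = rng.normal(size=(6, 6)); W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)
b = rng.normal(size=6)
v = hopfield_anneal(W, b)
print("final states:", np.round(v, 2), "energy:", round(float(energy(W, b, v)), 3))
```

In the paper's setting, W and b would be derived from the energy function that encodes the path-planning constraints and costs.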

Design of RBF-based Polynomial Neural Network And Optimization (방사형 기저 함수 기반 다항식 뉴럴네트워크 설계 및 최적화)

  • Kim, Ki-Sang;Jin, Yong-Ha;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference
    • /
    • 2009.07a
    • /
    • pp.1863-1864
    • /
    • 2009
  • This study proposes a new Radial Basis Function Polynomial Neural Network (RPNN) that combines two nonlinear modeling methods: the Radial Basis Function Neural Network (RBF NN) and the Polynomial Neural Network (PNN). RBF neural networks are applied to nonlinear system modeling owing to their fast learning, generalization, and simplicity, while the PNN is a nonlinear modeling method that achieves excellent approximation and generalization by selecting, among the generated nodes, those with superior performance. The overall architecture of the proposed RPNN follows the PNN form, with each node implemented as an RBF neural network. FCM clustering is used for the kernel functions of the RBF networks, and the consequent part of each node is expressed as a polynomial. In addition, the number of inputs, the input variables, and the number of clusters are optimized using the Particle Swarm Optimization (PSO) algorithm. The applicability and usefulness of the proposed model are evaluated, and its superiority is demonstrated on nonlinear data (see the sketch after this entry).

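A minimal sketch of one RPNN-style node under simplifying assumptions: k-means stands in for the paper's FCM clustering, the consequent polynomial is first order, and the PSO search over inputs and cluster counts is omitted. The data and hyperparameters are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_poly_node(X, y, n_clusters=4, sigma=0.5):
    """One RPNN-style node: RBF units with clustered centers plus a
    first-order polynomial consequent fitted by least squares."""
    centers = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))                     # RBF activations
    A = np.hstack([Phi, X, np.ones((len(X), 1))])              # RBF + linear + bias terms
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(Xq):
        d2q = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        Aq = np.hstack([np.exp(-d2q / (2.0 * sigma ** 2)), Xq, np.ones((len(Xq), 1))])
        return Aq @ coef
    return predict

# Toy nonlinear data: y = sin(3*x0) + x1^2 with noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=200)
model = rbf_poly_node(X, y)
print("train RMSE:", round(float(np.sqrt(np.mean((model(X) - y) ** 2))), 3))
```

A full RPNN would stack such nodes in PNN layers and keep only the best-performing ones.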

A Study on Optimal Process Design of Hydroforming Process with a Genetic Algorithm and Neural Network (Genetic Algorithm과 Neural Network을 이용한 Tube Hydroforming의 성형공정 최적화에 대한 연구)

  • 양재봉;전병희;오수익
    • Transactions of Materials Processing
    • /
    • v.9 no.6
    • /
    • pp.644-652
    • /
    • 2000
  • Tube hydroforming has recently been drawing the attention of the automotive industry due to its several advantages over conventional methods. It can produce a wide range of products, such as subframes, engine cradles, and exhaust manifolds, at lower production cost by reducing the overall number of processes. A successful tube hydroforming operation depends on a reasonable combination of the internal pressure and the axial load at the tube ends. This paper deals with the optimal process design of the hydroforming process using a genetic algorithm and a neural network. An optimization technique is used to minimize the tube thickness variation by determining the optimal loading path in the tube expansion forming and tube T-shape forming processes (see the sketch after this entry).

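Loosely following the abstract, the sketch below runs a simple genetic algorithm over two loading-path parameters (internal pressure and axial feed, in normalized units) to minimize a thickness-variation objective. The objective function is a hypothetical analytic stand-in for the paper's neural-network surrogate, and the GA operators and settings are generic choices, not the authors'.

```python
import numpy as np

def thickness_variation(path):
    """Hypothetical stand-in for a surrogate mapping a loading path
    (pressure, axial-feed parameters) to thickness variation."""
    p, f = path
    return (p - 0.6) ** 2 + 0.5 * (f - 0.4) ** 2 + 0.1 * np.sin(8 * p * f) ** 2

def genetic_algorithm(fitness, bounds, pop=40, gens=60, mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, len(lo)))        # initial population
    for _ in range(gens):
        cost = np.array([fitness(x) for x in P])
        parents = P[np.argsort(cost)][: pop // 2]       # truncation selection
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(len(lo))
            child = w * a + (1 - w) * b                 # blend crossover
            child += mut * rng.normal(size=len(lo))     # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        P = np.vstack([parents, kids])
    best = min(P, key=fitness)
    return best, fitness(best)

best, cost = genetic_algorithm(thickness_variation,
                               (np.array([0.0, 0.0]), np.array([1.0, 1.0])))
print("best loading-path parameters:", np.round(best, 3), "cost:", round(float(cost), 4))
```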

Optimization of Culture Conditions and Bench-Scale Production of L-Asparaginase by Submerged Fermentation of Aspergillus terreus MTCC 1782

  • Gurunathan, Baskar;Sahadevan, Renganathan
    • Journal of Microbiology and Biotechnology
    • /
    • v.22 no.7
    • /
    • pp.923-929
    • /
    • 2012
  • Optimization of culture conditions for L-asparaginase production by submerged fermentation of Aspergillus terreus MTCC 1782 was studied using a 3-level central composite design of response surface methodology and an artificial neural network linked genetic algorithm. The artificial neural network linked genetic algorithm was found to be more efficient than response surface methodology (see the sketch after this entry). An experimental L-asparaginase activity of 43.29 IU/ml was obtained at the optimum culture conditions identified by the artificial neural network linked genetic algorithm (temperature 35°C, initial pH 6.3, inoculum size 1% (v/v), agitation rate 140 rpm, and incubation time 58.5 h), which was close to the predicted activity of 44.38 IU/ml. Characteristics of L-asparaginase production by A. terreus MTCC 1782 were studied in a 3 L bench-scale bioreactor.
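As a baseline for the comparison the abstract makes, the sketch below fits a second-order response surface, the core model of response surface methodology, to synthetic two-factor data by least squares and reads off the optimum on a coded grid. The data, factors, and coefficients are invented for illustration and are not the paper's measurements; the ANN-linked GA side of the comparison would replace the quadratic surface with a trained network searched by a genetic algorithm.

```python
import numpy as np
from itertools import combinations

def fit_quadratic_rsm(X, y):
    """Fit a second-order response surface (intercept, linear, squared, and
    interaction terms) by ordinary least squares."""
    def design(Z):
        n, k = Z.shape
        cols = ([np.ones(n)] + [Z[:, i] for i in range(k)]
                + [Z[:, i] ** 2 for i in range(k)]
                + [Z[:, i] * Z[:, j] for i, j in combinations(range(k), 2)])
        return np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    return beta, lambda Z: design(Z) @ beta

# Synthetic 2-factor example in coded units (e.g., temperature and pH);
# values are invented for illustration, not the paper's measurements.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 40 + 3 * X[:, 0] - 2 * X[:, 0] ** 2 - 1.5 * X[:, 1] ** 2 + 0.5 * rng.normal(size=30)

beta, predict = fit_quadratic_rsm(X, y)
grid = np.array([[a, b] for a in np.linspace(-1, 1, 51) for b in np.linspace(-1, 1, 51)])
best = grid[np.argmax(predict(grid))]
print("fitted coefficients:", np.round(beta, 2))
print("coded optimum (factor 1, factor 2):", np.round(best, 2))
```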

Ensemble techniques and hybrid intelligence algorithms for shear strength prediction of squat reinforced concrete walls

  • Mohammad Sadegh Barkhordari;Leonardo M. Massone
    • Advances in Computational Design
    • /
    • v.8 no.1
    • /
    • pp.37-59
    • /
    • 2023
  • Squat reinforced concrete (SRC) shear walls are a critical part of the structure of both office/residential buildings and nuclear structures due to their significant role in withstanding seismic loads. Despite this, empirical formulae in current design standards and published studies show considerable disparity in predicting SRC wall shear strength. The goal of this research is to develop and evaluate hybrid and ensemble artificial neural network (ANN) models. State-of-the-art population-based algorithms are used for the hybrid intelligence algorithms. Six models are developed, including Honey Badger Algorithm (HBA) with ANN (HBA-ANN), Hunger Games Search with ANN (HGS-ANN), fitness-distance balance coyote optimization algorithm (FDB-COA) with ANN (FDB-COA-ANN), Averaging Ensemble (AE) neural network, Snapshot Ensemble (SE) neural network, and Stacked Generalization (SG) ensemble neural network (see the sketch after this entry). A total of 434 test results of SRC walls are utilized to train and assess the models. The results reveal that the SG model not only minimizes prediction variance but also produces predictions (with R² = 0.99) that are superior to those of the other models.
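Since the SG ensemble comes out on top in the abstract, here is a generic stacked-generalization sketch: out-of-fold predictions of several small neural networks feed a linear meta-learner. The synthetic regression data stand in for the 434-wall database, and the base-network sizes are arbitrary; none of this reproduces the paper's models.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the wall database (features -> shear strength).
X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def mlp(hidden):
    return make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=0))

# Stacked generalization: out-of-fold predictions of the base networks
# become the inputs of a linear meta-learner.
stack = StackingRegressor(
    estimators=[("nn_small", mlp((16,))), ("nn_mid", mlp((32, 16))), ("nn_deep", mlp((64, 32)))],
    final_estimator=RidgeCV(), cv=5)
stack.fit(X_tr, y_tr)
print("held-out R^2:", round(stack.score(X_te, y_te), 3))
```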

Shape Optimization of LMR Fuel Assembly Using Radial Basis Neural Network Technique (신경회로망 기법을 사용한 액체금속원자로 봉다발의 형상최적화)

  • Raza, Wasim;Kim, Kwang-Yong
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.31 no.8
    • /
    • pp.663-671
    • /
    • 2007
  • In this work, shape optimization of a wire-wrapped fuel assembly in a liquid metal reactor has been carried out by combining a three-dimensional Reynolds-averaged Navier-Stokes analysis with the radial basis neural network method, a well-known surrogate modeling technique for optimization. Sequential Quadratic Programming is used to search for the optimal point on the constructed surrogate (see the sketch after this entry). Two geometric design variables are selected for the optimization, and the design space is sampled using Latin Hypercube Sampling. The optimization problem is defined as maximization of the objective function, which is a linear combination of heat-transfer and friction-loss related terms with a weighting factor. The objective function value is more sensitive to the ratio of the wire spacer diameter to the fuel rod diameter than to the ratio of the wire wrap pitch to the fuel rod diameter. The optimal values of the design variables are obtained by varying the weighting factor.
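The sketch below mirrors the surrogate loop the abstract outlines (Latin Hypercube Sampling of two design ratios, a radial-basis surrogate of the sampled responses, and an SQP/SLSQP search on the surrogate) but replaces the RANS-based objective with a hypothetical analytic function, so the bounds, kernel, and optimum shown are illustrative only.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize
from scipy.stats import qmc

def objective(x):
    """Hypothetical stand-in for the RANS-based objective, negated so that
    maximizing the weighted heat-transfer/friction combination becomes a
    minimization. x = (spacer dia./rod dia., wrap pitch/rod dia.), arbitrary units."""
    r1, r2 = x
    return -(np.exp(-(r1 - 0.3) ** 2 / 0.02) + 0.4 * np.exp(-(r2 - 0.6) ** 2 / 0.05))

bounds = np.array([[0.1, 0.5], [0.3, 0.9]])

# 1) Sample the two-variable design space with Latin Hypercube Sampling.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(30), bounds[:, 0], bounds[:, 1])
y = np.array([objective(x) for x in X])

# 2) Build a radial-basis surrogate of the sampled responses.
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# 3) Search the surrogate with SQP (SLSQP), starting from the best sample.
res = minimize(lambda x: float(surrogate(x[None, :])), X[np.argmin(y)],
               method="SLSQP", bounds=bounds)
print("surrogate optimum:", np.round(res.x, 3), "objective:", round(float(res.fun), 4))
```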

A Study on The Optimization Method of The Initial Weights in Single Layer Perceptron

  • Cho, Yong-Jun;Lee, Yong-Goo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.2
    • /
    • pp.331-337
    • /
    • 2004
  • In the analysis of massive volumes of data, a neural network model is a useful tool. To implement a neural network model, the selection of initial values is important. Since the initial weights are generally chosen at random, the convergence behavior and the prediction rate of the model are not stable. To overcome this drawback, a possible method uses samples randomly selected from the whole data set: coefficients estimated by logistic regression on those samples serve as the initial values (see the sketch after this entry).

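A minimal sketch of that initialization strategy, under the assumption that the single-layer perceptron uses a logistic activation: fit a logistic regression on a random subsample, copy its coefficients into the perceptron as initial weights, and continue training on the full data with plain gradient descent. The dataset, subsample size, and learning rate are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# 1) Fit a logistic regression on a random subsample of the data.
idx = rng.choice(len(X), size=500, replace=False)
lr_fit = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
w = lr_fit.coef_.ravel().copy()      # initial weights from the subsample fit
b = float(lr_fit.intercept_[0])      # initial bias

# 2) Continue training a single-layer (logistic) perceptron on the full data
#    with plain gradient descent, starting from those weights.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr_rate = 0.1
for _ in range(200):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(X)  # gradient of the mean cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr_rate * grad_w
    b -= lr_rate * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print("full-data training accuracy:", round(float(acc), 3))
```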

Neural Network Based On-Line Efficiency Optimization Control of a VVVF-Induction Motor Drive (인공신경망을 이용한 VVVF-유도전동기 시스템의 실시간 운전효율 최적제어)

  • Lee, Seung-Chul;Choy, Ick;Kwon, Soon-Hak;Choi, Ju-Yeop;Song, Joong-Ho
    • The Transactions of the Korean Institute of Power Electronics
    • /
    • v.4 no.2
    • /
    • pp.166-174
    • /
    • 1999
  • On-line efficiency optimization control of an induction motor drive using a neural network is important from the viewpoints of energy saving and of controlling a nonlinear system whose characteristics are not fully known. This paper presents a neural-network-based on-line efficiency optimization control for an induction motor drive, which adopts an optimal slip angular frequency control. In the proposed scheme, a neuro-controller provides the minimal-loss operating point over the whole range of the measured input power (see the sketch after this entry). Both simulation and experimental results show that considerable energy saving is achieved compared with conventional constant V/f ratio operation.

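The sketch below only illustrates the general scheme: tabulate the minimal-loss slip angular frequency over a load range using a loss model, then train a small network to reproduce that mapping as the on-line controller. The copper/iron loss expression is a made-up stand-in for the drive model, and load torque is used as the input where the paper uses measured input power.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def drive_loss(slip_w, torque):
    """Illustrative loss model (copper loss ~ torque^2 / slip_w, iron loss ~ slip_w^2);
    a made-up stand-in for the actual induction-motor drive model."""
    return 0.8 * torque ** 2 / slip_w + 0.05 * slip_w ** 2

# 1) For a range of loads, find the slip angular frequency that minimizes the loss.
slips = np.linspace(0.5, 30.0, 600)
loads = np.linspace(1.0, 20.0, 80)   # load torque here; the paper uses input power
best_slip = np.array([slips[np.argmin(drive_loss(slips, T))] for T in loads])

# 2) Train a small neuro-controller to reproduce the minimal-loss mapping on-line.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(loads.reshape(-1, 1), best_slip)

T_test = 12.5
print("commanded slip frequency at load", T_test, ":",
      round(float(net.predict([[T_test]])[0]), 2))
```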

Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison

  • Devi, Swagatika;Jagadev, Alok Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering
    • /
    • v.13 no.2
    • /
    • pp.123-131
    • /
    • 2015
  • Training neural networks is a complex task of great importance in the field of supervised learning. In the training process, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and the weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of the connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum the search process becomes very slow. In contrast, the gradient descent method can achieve a faster convergence speed around the global optimum, and at the same time the convergence accuracy can be relatively high. Therefore, the proposed hybrid algorithm combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm, also referred to as the DPSO-BP algorithm, to train the weights of an ANN (see the sketch after this entry). In this paper, we intend to show the superiority (time performance and quality of solution) of the proposed hybrid algorithm (DPSO-BP) over other more standard algorithms in neural network training. The algorithms are compared using two different datasets, and the results are simulated.
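A compact sketch of the two-stage idea, with a plain PSO standing in for the paper's dynamic PSO variant: a swarm searches the flattened weight vector of a one-hidden-layer network globally, then backpropagation (gradient descent) fine-tunes from the swarm's best particle. Network size, data, and PSO coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]             # toy regression target

H = 8                                               # hidden units
sizes = [(2, H), (H,), (H, 1), (1,)]                # shapes of W1, b1, W2, b2
dim = sum(int(np.prod(s)) for s in sizes)

def unpack(theta):
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
    return np.mean((pred - y) ** 2)

def grad(theta):
    """Backpropagated gradient of the MSE loss."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    e = 2.0 * ((h @ W2 + b2).ravel() - y) / len(y)  # dL/dpred
    gW2 = h.T @ e[:, None]
    gb2 = np.array([e.sum()])
    dh = (e[:, None] @ W2.T) * (1 - h ** 2)         # back through tanh
    gW1 = X.T @ dh
    gb1 = dh.sum(axis=0)
    return np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

# Stage 1: particle swarm optimization over the flattened weight vector.
P, V = rng.normal(0, 0.5, (30, dim)), np.zeros((30, dim))
pbest, pcost = P.copy(), np.array([mse(p) for p in P])
for _ in range(100):
    gbest = pbest[np.argmin(pcost)]
    V = (0.7 * V + 1.5 * rng.random((30, dim)) * (pbest - P)
               + 1.5 * rng.random((30, dim)) * (gbest - P))
    P = P + V
    cost = np.array([mse(p) for p in P])
    better = cost < pcost
    pbest[better], pcost[better] = P[better], cost[better]

# Stage 2: backpropagation (gradient descent) fine-tuning from the swarm's best.
theta = pbest[np.argmin(pcost)].copy()
for _ in range(500):
    theta -= 0.05 * grad(theta)

print("MSE after PSO:", round(float(np.min(pcost)), 4),
      " after PSO+BP:", round(float(mse(theta)), 4))
```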

Speeding Up Neural Network-Based Face Detection Using Swarm Search

  • Sugisaka, Masanori;Fan, Xinjian
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2004.08a
    • /
    • pp.1334-1337
    • /
    • 2004
  • This paper presents a novel method to speed up neural network (NN) based face detection systems. NN-based face detection can be viewed as a classification and search problem. The proposed method formulates the search problem as an integer nonlinear optimization problem (INLP) and extends basic particle swarm optimization (PSO) to solve it. PSO works with a population of particles, each representing a subwindow in an input image. The subwindows are evaluated by how well they match an NN-based face filter, and a face is indicated when the filter response of the best particle is above a given threshold (see the sketch after this entry). To achieve better performance, the influence of PSO parameter settings on the search performance was investigated. Experiments show that, with fine-tuned parameters, the proposed method achieves a speedup factor of 94 on 320×240 images compared to the traditional exhaustive search method.

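A toy sketch of that formulation: particles are subwindow positions, the fitness of each particle is the filter response at its window, and a detection is declared when the best response exceeds a threshold. The "face filter" below is a placeholder (mean intensity over a synthetic bright patch), not a neural network, and all constants are illustrative.

```python
import numpy as np

H, W, WIN = 240, 320, 20                     # image size and subwindow size
rng = np.random.default_rng(0)
image = rng.random((H, W)) * 0.2
image[100:100 + WIN, 200:200 + WIN] += 1.0   # synthetic "face" region

def filter_response(pos):
    """Placeholder for the NN face filter: mean intensity of the subwindow
    at integer position pos = (row, col)."""
    r, c = np.clip(np.round(pos).astype(int), 0, [H - WIN, W - WIN])
    return image[r:r + WIN, c:c + WIN].mean()

# PSO over subwindow positions (each particle is one candidate window).
n, iters = 25, 60
P = rng.uniform([0, 0], [H - WIN, W - WIN], size=(n, 2))
V = np.zeros((n, 2))
pbest, pscore = P.copy(), np.array([filter_response(p) for p in P])
for _ in range(iters):
    gbest = pbest[np.argmax(pscore)]
    V = (0.7 * V + 1.5 * rng.random((n, 2)) * (pbest - P)
               + 1.5 * rng.random((n, 2)) * (gbest - P))
    P = np.clip(P + V, [0, 0], [H - WIN, W - WIN])
    score = np.array([filter_response(p) for p in P])
    improved = score > pscore
    pbest[improved], pscore[improved] = P[improved], score[improved]

best = np.round(pbest[np.argmax(pscore)]).astype(int)
threshold = 0.8
print("best window (row, col):", best, "face detected:", bool(pscore.max() > threshold))
```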