• Title/Abstract/Keywords: Neural Networks Node Optimization

다항식 뉴럴 네트워크의 최적화: 진화론적 방법 (Optimization of Polynomial Neural Networks: An Evolutionary Approach)

  • 김동원;박귀태
    • 대한전기학회논문지:시스템및제어부문D
    • /
    • 제52권7호
    • /
    • pp.424-433
    • /
    • 2003
  • Evolutionary design related to the optimal design of the Polynomial Neural Network (PNN) structure for model identification of complex and nonlinear systems is studied in this paper. The PNN structure consists of layers and nodes like conventional neural networks, but it is not fixed and can change according to the system environment. Three types of polynomials, namely linear, quadratic, and modified quadratic, are used in each node, which is connected with various kinds of multi-variable inputs. The inputs and the polynomial order of each node are very important elements for the performance of the model. In most cases these factors are decided from background information and by the trial and error of the designer. For high reliability and good performance of the PNN, the factors must be decided in a logical and systematic way. In this paper an evolutionary algorithm is applied to choose the optimal input variables and order. The evolutionary (genetic) algorithm is a random-search optimization technique. The evolved PNN with optimally chosen input variables and order is not fixed in advance but becomes fully optimized automatically during the identification process. The gas furnace and pH neutralization processes used in the conventional PNN version are modeled. The results show that the designed PNN architecture with evolutionary structure optimization can produce a model with higher accuracy than the previous PNN and other works.
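
A minimal sketch of the per-node search this abstract describes, in which each PNN node is defined by a subset of input variables and a polynomial order chosen by a genetic algorithm; the function names, the convention used for the modified quadratic form, and the least-squares fitness proxy are illustrative assumptions rather than the paper's actual implementation:

```python
import itertools
import random
import numpy as np

POLY_ORDERS = ("linear", "quadratic", "modified_quadratic")

def random_node(n_inputs, node_arity=2):
    """A candidate PNN node: which inputs it uses and which polynomial it applies."""
    inputs = tuple(sorted(random.sample(range(n_inputs), node_arity)))
    return inputs, random.choice(POLY_ORDERS)

def design_matrix(X, inputs, order):
    """Regression matrix of the node polynomial over the selected inputs."""
    Z = X[:, list(inputs)]
    cols = [np.ones(len(Z))] + [Z[:, i] for i in range(Z.shape[1])]
    pairs = list(itertools.combinations_with_replacement(range(Z.shape[1]), 2))
    if order == "quadratic":                      # squares and cross terms
        cols += [Z[:, i] * Z[:, j] for i, j in pairs]
    elif order == "modified_quadratic":           # cross terms only (one common convention)
        cols += [Z[:, i] * Z[:, j] for i, j in pairs if i != j]
    return np.column_stack(cols)

def fitness(X, y, node):
    """Fit the node by least squares; negative MSE so larger is better for the GA."""
    inputs, order = node
    A = design_matrix(X, inputs, order)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return -float(np.mean((A @ coef - y) ** 2))
```

A GA over such (inputs, order) pairs, using selection on this fitness together with standard crossover and mutation, would then grow each new layer from the best-scoring nodes, which corresponds to the structural search described above.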

Adaptive learning based on bit-significance optimization of the Hopfield model and its electro-optical implementation for correlated images

  • Lee, Soo-Young
    • 한국광학회:학술대회논문집
    • /
    • 한국광학회 1989년도 제4회 파동 및 레이저 학술발표회 4th Conference on Waves and lasers 논문집 - 한국광학회
    • /
    • pp.85-88
    • /
    • 1989
  • Introducing and optimizing bit-significance in the Hopfield model, ten highly correlated binary images, i.e., the numbers "0" to "9", are successfully stored and retrieved in a 6x8 node system. Unlike many other neural network models, this model has stronger error-correction capability for correlated images such as "6", "8", "3", and "9". The bit-significance optimization is regarded as an adaptive learning process based on the least-mean-square error algorithm, and may be implemented with another neural-net optimizer. A design for an electro-optical implementation including the adaptive optimization networks is also introduced.
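
A rough sketch of one possible reading of the bit-significance idea, where each bit contributes to the Hopfield recall dynamics in proportion to a significance weight; the Hebbian storage rule, the point at which the weighting is applied, and all names are assumptions, and the LMS-based optimization of the significances is not reproduced here:

```python
import numpy as np

def store(patterns):
    """Hebbian storage of bipolar (+1/-1) patterns with zero self-connections."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, significance, sweeps=20, rng=None):
    """Asynchronous recall; bit j's contribution to every local field is scaled by significance[j]."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            h = W[i] @ (significance * s)   # significance-weighted local field
            s[i] = 1.0 if h >= 0 else -1.0
    return s
```

With uniform significances this reduces to ordinary Hopfield recall; the adaptive learning step described in the abstract would adjust the significance vector to improve retrieval of the correlated digit images.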

Evolutionary Design Methodology of Fuzzy Set-based Polynomial Neural Networks with the Information Granule

  • Roh Seok-Beom;Ahn Tae-Chon;Oh Sung-Kwun
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 2005년도 춘계학술대회 학술발표 논문집 제15권 제1호
    • /
    • pp.301-304
    • /
    • 2005
  • In this paper, we propose a new fuzzy set-based polynomial neuron (FSPN) involving the information granule, and new fuzzy-neural networks: Fuzzy Set-based Polynomial Neural Networks (FSPNN). We have developed a design methodology (genetic optimization using Genetic Algorithms) to find the optimal structure for fuzzy-neural networks expanded from the Group Method of Data Handling (GMDH). The number of input variables, the order of the polynomial, the number of membership functions, and the specific subset of input variables are the parameters of the FSPNN fixed with the aid of genetic optimization, which has the search capability to find the optimal solution in the solution space. We have been interested in an architecture of fuzzy rules that mimics the real world, namely the sub-models (nodes) composing the fuzzy-neural networks. We adopt fuzzy set-based fuzzy rules as a substitute for fuzzy relation-based fuzzy rules and apply the concept of Information Granulation to the proposed fuzzy set-based rules.
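
A compact sketch of what a fuzzy set-based polynomial neuron could look like: fuzzy sets defined on individual inputs gate local polynomials, and the node output is the membership-weighted combination of those polynomials; the Gaussian membership form, the linear consequent, and all names are illustrative assumptions rather than the FSPN definition used in the paper:

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Membership of scalar input x in a Gaussian fuzzy set."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def fspn_output(x_all, sets, coeffs):
    """
    Illustrative fuzzy set-based polynomial neuron.
    x_all  : all inputs feeding the node, shape (d,)
    sets   : list of (input_index, center, sigma), one fuzzy set per rule
    coeffs : list of consequent coefficient vectors, one per rule,
             applied to the linear basis [1, x1, ..., xd]
    """
    basis = np.concatenate(([1.0], x_all))                      # linear consequent polynomial
    mu = np.array([gaussian_mf(x_all[i], c, s) for i, c, s in sets])
    local = np.array([np.dot(w, basis) for w in coeffs])        # rule-wise polynomial outputs
    return float(np.dot(mu, local) / (mu.sum() + 1e-12))        # normalized weighted sum
```

Genetic optimization, as described above, would then select for each such node the number of inputs, the polynomial order, the number of membership functions, and the specific input subset.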

최적화문제를 위한 신경회로망의 Global Convergence (Global Convergence of Neural Networks for Optimization)

  • 강민제
    • 한국지능시스템학회논문지
    • /
    • 제11권4호
    • /
    • pp.325-330
    • /
    • 2001
  • When neural networks used for optimization problems are simulated at the circuit level, the results often differ considerably from simulations at the algorithm level. That is, the output values of such networks converge asymptotically over time, but whether the input nodes converge depends on the values of the conductances attached to the inputs, and the performance of the system changes with these values as well. In this paper, the convergence of the system at the input and output nodes is analyzed as a function of the conductances attached to the inputs for system stability, and it is analyzed how the convergence points of the energy function change with the values of these conductances.
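
For context, the circuit-level networks discussed here are usually written in the form of the continuous Hopfield optimization model, in which the input-side conductances appear as leak terms; this standard formulation is given as assumed background rather than as the paper's exact equations:

```latex
% Continuous Hopfield-type optimization network (standard form)
C_i \frac{du_i}{dt} = -\frac{u_i}{R_i} + \sum_j T_{ij} v_j + I_i,
\qquad v_i = g(u_i),
\qquad
E = -\frac{1}{2}\sum_i\sum_j T_{ij} v_i v_j - \sum_i I_i v_i
    + \sum_i \frac{1}{R_i}\int_0^{v_i} g^{-1}(v)\,dv .
```

The leak conductances 1/R_i are the quantities whose values, according to the abstract, determine whether the input-node voltages converge and where the minima of the energy function end up.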

데이터 중심 다항식 확장형 RBF 신경회로망의 설계 및 최적화 (Design of Data-centroid Radial Basis Function Neural Network with Extended Polynomial Type and Its Optimization)

  • 오성권;김영훈;박호성;김정태
    • 전기학회논문지
    • /
    • 제60권3호
    • /
    • pp.639-647
    • /
    • 2011
  • In this paper, we introduce a design methodology for data-centroid Radial Basis Function neural networks with extended polynomial functions. The two underlying design mechanisms of such networks involve the K-means clustering method and Particle Swarm Optimization (PSO). The proposed algorithm is based on the K-means clustering method for efficient processing of data, and the optimization of the model is carried out using PSO. As the connection weights of the RBF neural network, we are able to use four types of polynomials: simplified, linear, quadratic, and modified quadratic. Using K-means clustering, the center values of the Gaussian activation functions are selected. The PSO-based RBF neural network leads to a structurally optimized network and comes with a higher level of flexibility than the one encountered in conventional RBF neural networks. The PSO-based design procedure applied at each node of the RBF neural network leads to the selection of preferred parameters with specific local characteristics (such as the number of input variables, a specific set of input variables, and the distribution constant of the activation function) available within the network. To evaluate the performance of the proposed data-centroid RBF neural network with extended polynomial functions, the model is experimented with nonlinear process data (2-dimensional synthetic data and Mackey-Glass time series data) and Machine Learning datasets (NOx emission process data from a gas turbine plant, Automobile Miles per Gallon (MPG) data, and Boston housing data). For the characteristic analysis of the given dataset with nonlinearity, as well as the efficient construction and evaluation of the dynamic network model, the partition of the entire dataset distinguishes between two cases: Division I (training dataset and testing dataset) and Division II (training dataset, validation dataset, and testing dataset). A comparative analysis shows that the proposed RBF neural network produces a model with higher accuracy and better predictive capability than other intelligent models presented previously.
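
A minimal sketch of the data-centroid construction outlined above: K-means supplies the Gaussian receptive-field centers and an ordinary least-squares fit supplies the polynomial connection weights; the linear polynomial choice, the single shared width, and the use of scikit-learn's KMeans are simplifying assumptions, and the PSO step that tunes widths and structure in the paper is omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_rbf_poly(X, y, n_centers=5, width=1.0):
    """Data-centroid RBF network with linear-polynomial connection weights."""
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
    # Gaussian activation of every sample at every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * width ** 2))
    # Each center carries a linear polynomial in the inputs: [1, x1, ..., xd]
    basis = np.hstack([np.ones((len(X), 1)), X])
    design = np.hstack([phi[:, [k]] * basis for k in range(n_centers)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return centers, width, coef

def predict(X, centers, width, coef):
    """Evaluate the fitted RBF network with polynomial weights on new data."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * width ** 2))
    basis = np.hstack([np.ones((len(X), 1)), X])
    design = np.hstack([phi[:, [k]] * basis for k in range(len(centers))])
    return design @ coef
```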

공간 탐색 최적화 알고리즘을 이용한 K-Means 클러스터링 기반 다항식 방사형 기저 함수 신경회로망: 설계 및 비교 해석 (K-Means-Based Polynomial-Radial Basis Function Neural Network Using Space Search Algorithm: Design and Comparative Studies)

  • 김욱동;오성권
    • 제어로봇시스템학회논문지
    • /
    • 제17권8호
    • /
    • pp.731-738
    • /
    • 2011
  • In this paper, we introduce an advanced architecture of K-means clustering-based polynomial Radial Basis Function Neural Networks (p-RBFNNs) designed with the aid of the SSOA (Space Search Optimization Algorithm) and develop a comprehensive design methodology supporting their construction. In order to design the optimized p-RBFNNs, the center value of each receptive field is determined by running the K-means clustering algorithm, and then the center value and the width of the corresponding receptive field are optimized through the SSOA. The connections (weights) of the proposed p-RBFNNs are of functional character and are realized by considering three types of polynomials. In addition, a WLSE (Weighted Least Squares Estimation) is used to estimate the coefficients of the polynomial (serving as the functional connection of the network) at each node. Therefore, the local learning capability and the interpretability of the proposed model are improved. The proposed model is illustrated with the use of a nonlinear function and the NOx Machine Learning dataset. A comparative analysis reveals that the proposed model exhibits higher accuracy and better predictive capability in comparison with some previous models available in the literature.
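
The WLSE step mentioned above has a standard closed form; writing one node's polynomial as a linear model in its coefficient vector and collecting that node's activation levels over the training samples in a diagonal matrix, the locally weighted coefficients are (stated as assumed background, in the usual weighted least-squares notation):

```latex
\mathbf{a} = \left(\mathbf{X}^{\top}\boldsymbol{\Phi}\,\mathbf{X}\right)^{-1}
             \mathbf{X}^{\top}\boldsymbol{\Phi}\,\mathbf{y},
\qquad
\boldsymbol{\Phi} = \operatorname{diag}\!\big(\phi_k(\mathbf{x}_1),\dots,\phi_k(\mathbf{x}_N)\big)
```

Because each node's coefficients are fitted mainly where its receptive field is active, this is what gives the model the local learning capability and interpretability claimed in the abstract.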

신경망의 노드 가지치기를 위한 유전 알고리즘 (Genetic Algorithm for Node Pruning of Neural Networks)

  • 허기수;오일석
    • 전자공학회논문지CI
    • /
    • 제46권2호
    • /
    • pp.65-74
    • /
    • 2009
  • To optimize the structure of a neural network, one can either prune nodes or connections or grow the structure by adding nodes. This paper uses pruning for structural optimization of neural networks and employs a genetic algorithm to find the optimal node pruning. Previous studies treated the input-layer and hidden-layer nodes as separate optimization targets; we encode the nodes of both layers in a single chromosome and optimize them simultaneously. Offspring inherit the weights of their parents, and the conventional error backpropagation algorithm is used for training. Experiments were performed on various datasets from the UCI Machine Learning Repository. The results show good performance at average node pruning rates of 8~25%. Furthermore, a t-test on cross-validation against other pruning and constructive algorithms shows that the proposed method outperforms them.
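
A small sketch of the encoding described above: a single binary chromosome carries a keep/prune bit for every input node and every hidden node, the pruned network inherits the parent's weights, and fitness is measured after brief retraining; the layer shapes, the keep probability, and the retraining callback are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_chromosome(n_in, n_hidden, keep_prob=0.8):
    """1 = keep the node, 0 = prune it; input and hidden nodes share one chromosome."""
    return (rng.random(n_in + n_hidden) < keep_prob).astype(int)

def apply_mask(W_ih, W_ho, chrom, n_in):
    """Zero out weights attached to pruned nodes (remaining weights are inherited, not reset)."""
    in_mask, hid_mask = chrom[:n_in], chrom[n_in:]
    W_ih = W_ih * in_mask[:, None] * hid_mask[None, :]   # input-to-hidden weights
    W_ho = W_ho * hid_mask[:, None]                      # hidden-to-output weights
    return W_ih, W_ho

def fitness(chrom, W_ih, W_ho, retrain_and_score, n_in):
    """Fitness = validation score of the pruned, briefly retrained network (callback supplied by the user)."""
    masked = apply_mask(W_ih, W_ho, chrom, n_in)
    return retrain_and_score(*masked)   # e.g. a few epochs of backpropagation, then accuracy
```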

안정성을 고려한 동적 신경망의 최적화와 비선형 시스템 제어기 설계 (Optimization of Dynamic Neural Networks Considering Stability and Design of Controller for Nonlinear Systems)

  • 유동완;전순용;서보혁
    • 제어로봇시스템학회논문지
    • /
    • 제5권2호
    • /
    • pp.189-199
    • /
    • 1999
  • This paper presents an optimization algorithm for a stable Self-Dynamic Neural Network (SDNN) using a genetic algorithm. The optimized SDNN is applied to the problem of controlling nonlinear dynamical systems. The SDNN is a dynamic mapping and is better suited to dynamical systems than a static feedforward neural network. Real-time implementation is very important, and thus the neuro-controller also needs to be designed such that it converges within a relatively small number of training cycles. The SDNN has considerably fewer weights than a DNN, since there are no interlinks among the hidden-layer nodes. The objective of the proposed algorithm is to simultaneously optimize the number of self-dynamic neuron nodes and the slopes of the activation functions with genetic algorithms. To guarantee convergence, an analytic method based on the Lyapunov function is used to find a stable learning scheme for the SDNN. The ability and effectiveness of identifying and controlling a nonlinear dynamic system using the proposed optimized SDNN, with stability taken into account, is demonstrated by case studies.
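
A brief sketch of what a self-dynamic hidden neuron can look like when each unit feeds back only onto itself (no interlinks among hidden nodes) and the activation slope is a tunable parameter; the discrete-time form and the names are assumptions, and the genetic and Lyapunov-based parts of the paper are not reproduced:

```python
import numpy as np

def sdnn_step(x_prev, u, w_self, W_in, slope):
    """
    One step of a self-dynamic hidden layer.
    x_prev : previous hidden states, shape (H,)
    u      : current external input, shape (M,)
    w_self : diagonal self-feedback weights, shape (H,)  (no hidden-to-hidden coupling)
    W_in   : input weights, shape (H, M)
    slope  : activation slope, one of the GA-optimized parameters
    """
    s = w_self * x_prev + W_in @ u
    return np.tanh(slope * s)
```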

입자 군집 최적화 알고리즘 기반 다항식 신경회로망의 설계 (Design of Particle Swarm Optimization-based Polynomial Neural Networks)

  • 박호성;김기상;오성권
    • 전기학회논문지
    • /
    • 제60권2호
    • /
    • pp.398-406
    • /
    • 2011
  • In this paper, we introduce a new architecture of PSO-based Polynomial Neural Networks (PNN) and discuss its comprehensive design methodology. The conventional PNN is based on an extended Group Method of Data Handling (GMDH) method and uses a polynomial order (viz. linear, quadratic, and modified quadratic) as well as a number of node inputs fixed (selected in advance by the designer) at the Polynomial Neurons located in each layer through a growth process of the network. Moreover, it is not guaranteed that the conventional PNN generated through learning results in the optimal network architecture. The PSO-based PNN leads to a structurally optimized network and comes with a higher level of flexibility than the one encountered in the conventional PNN. The PSO-based design procedure applied at each layer of the PNN leads to the selection of preferred PNs with specific local characteristics (such as the number of input variables, the input variables themselves, and the order of the polynomial) available within the PNN. In the sequel, two general optimization mechanisms of the PSO-based PNN are explored: the structural optimization is realized via PSO, whereas for the parametric optimization we proceed with standard least-squares-based learning. To evaluate the performance of the PSO-based PNN, the model is experimented with gas furnace process data and pH neutralization process data. For the characteristic analysis of the given data with nonlinearity and the construction of an efficient model, the entire system data is partitioned into two cases: Division I (training dataset and testing dataset) and Division II (training dataset, validation dataset, and testing dataset). A comparative analysis shows that the proposed PSO-based PNN is a model with higher accuracy as well as better predictive capability than other intelligent models presented previously.
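
For reference, the structural search described above relies on the standard particle swarm update, with each particle encoding a candidate structure (number of input variables, selected inputs, and polynomial order per neuron); the inertia and acceleration constants below are common defaults rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(positions, velocities, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5):
    """One particle-swarm update; each row of `positions` encodes a candidate PNN structure."""
    r1 = rng.random(positions.shape)
    r2 = rng.random(positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (personal_best - positions)
                  + c2 * r2 * (global_best - positions))
    return positions + velocities, velocities
```

Continuous particle positions are typically rounded or otherwise decoded into these discrete structural choices before each fitness evaluation.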