• Title/Summary/Keyword: Rule-based Neurofuzzy Networks

Genetically Optimized Neurofuzzy Networks: Analysis and Design (진화론적 최적 뉴로퍼지 네트워크: 해석과 설계)

  • Park, Byoung-Jun;Kim, Hyun-Ki;Oh, Sung-Kwun
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.8
    • /
    • pp.561-570
    • /
    • 2004
  • In this paper, new architectures and a comprehensive design methodology for Genetic Algorithm (GA)-based Genetically optimized Neurofuzzy Networks (GoNFN) are introduced, and a series of numeric experiments is carried out. The proposed GoNFN is based on rule-based Neurofuzzy Networks (NFN) with an extended structure of the premise and consequence parts of the fuzzy rules formed within the networks. The premise part of the fuzzy rules is designed by space partitioning in terms of fuzzy sets defined on the individual variables. In the consequence part of the fuzzy rules, three forms of regression polynomial, namely constant, linear, and quadratic, are considered. The structure and parameters of the proposed GoNFN are optimized by GAs. As a global optimization technique, a GA can determine optimal parameters in a vast search space, but because it searches the entire given space it cannot avoid a large amount of time-consuming iteration. To alleviate this problem, a dynamic search-based GA is introduced that leads to rapid convergence over a limited region or boundary condition. In a nutshell, the objective of this study is to develop a general design methodology for GA-based GoNFN modeling, to come up with a logic-based structure of such a model, and to propose a comprehensive evolutionary development environment in which the optimization of the model can be carried out efficiently at both the structural and parametric levels, using separate or consecutive tuning. To evaluate the performance of the proposed GoNFN, the models are experimented with using several representative numerical examples.
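
As a rough illustration of the dynamic search-based GA idea described in this abstract, the sketch below shows a plain real-coded GA whose search bounds contract around the best individual found so far. The toy objective, the contraction factor, and all other settings are assumptions made for illustration, not the authors' implementation.

```python
# Minimal, illustrative sketch of a dynamic search-based real-coded GA:
# a simple GA whose search bounds contract around the best individual.
# All names and settings are illustrative assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy fitness surrogate (smaller is better); stands in for model error.
    return float(np.sum((x - 1.7) ** 2))

def ga_generation(pop, fit, lo, hi, mut_rate=0.1):
    """One generation: tournament selection, arithmetic crossover, mutation."""
    n, dim = pop.shape
    children = np.empty_like(pop)
    for i in range(n):
        a, b = rng.integers(n, size=2)
        p1 = pop[a] if fit[a] < fit[b] else pop[b]       # tournament selection
        c, d = rng.integers(n, size=2)
        p2 = pop[c] if fit[c] < fit[d] else pop[d]
        w = rng.random(dim)
        child = w * p1 + (1.0 - w) * p2                  # arithmetic crossover
        mask = rng.random(dim) < mut_rate
        child[mask] = rng.uniform(lo[mask], hi[mask])    # uniform mutation
        children[i] = np.clip(child, lo, hi)
    return children

def dynamic_search_ga(dim=3, n_pop=30, generations=50, shrink=0.8):
    lo, hi = -10.0 * np.ones(dim), 10.0 * np.ones(dim)   # initial search space
    pop = rng.uniform(lo, hi, size=(n_pop, dim))
    best_x, best_f = None, np.inf
    for _ in range(generations):
        fit = np.array([objective(ind) for ind in pop])
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best_x, best_f = pop[i].copy(), fit[i]
        # "Dynamic search": contract the bounds around the best-so-far point,
        # so later generations evolve inside a progressively smaller region.
        span = (hi - lo) * shrink / 2.0
        lo, hi = best_x - span, best_x + span
        pop = np.clip(ga_generation(pop, fit, lo, hi), lo, hi)
    return best_x, best_f

print(dynamic_search_ga())
```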

Advanced Self-organizing Neural Networks with Fuzzy Polynomial Neurons : Analysis and Design

  • Oh, Sung-Kwun;Lee, Dong-Yoon
    • KIEE International Transaction on Systems and Control
    • /
    • v.12D no.1
    • /
    • pp.12-17
    • /
    • 2002
  • We propose a new category of neurofuzzy networks, Self-organizing Neural Networks (SONN) with Fuzzy Polynomial Neurons (FPNs), and discuss a comprehensive design methodology supporting their development. Two kinds of SONN architectures, namely a basic SONN and a modified SONN architecture, are discussed. Each of them comes in two types, the generic and the advanced. SONN dwells on the ideas of fuzzy rule-based computing and neural networks. Simulation involves a series of synthetic as well as experimental data used across various neurofuzzy systems. A comparative analysis is included as well.
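
As a hedged illustration of the fuzzy polynomial neuron idea mentioned in this abstract (a small set of fuzzy rules whose consequents are local polynomials, blended by normalized membership grades), the following minimal sketch may help; the Gaussian membership functions, the two-input quadratic consequents, and all parameter values are assumptions, not the paper's design.

```python
# Minimal sketch of a fuzzy polynomial neuron (FPN): fuzzy rules whose
# consequents are local polynomials, blended by normalized memberships.
# Membership shapes, rule count, and coefficients are illustrative assumptions.
import numpy as np

def gaussian(x, center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def fpn_output(x1, x2, centers, coeffs, sigma=1.0):
    """One FPN with one rule per center pair; quadratic consequent per rule."""
    grades, local_outputs = [], []
    for (c1, c2), a in zip(centers, coeffs):
        grades.append(gaussian(x1, c1, sigma) * gaussian(x2, c2, sigma))
        # quadratic local model: a0 + a1*x1 + a2*x2 + a3*x1^2 + a4*x2^2 + a5*x1*x2
        z = np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])
        local_outputs.append(float(a @ z))
    grades = np.array(grades)
    weights = grades / (grades.sum() + 1e-12)   # normalized activation levels
    return float(weights @ np.array(local_outputs))

# Two rules located in different regions of the (x1, x2) space.
centers = [(-1.0, -1.0), (1.0, 1.0)]
coeffs = [np.array([0.5, 1.0, -0.3, 0.1, 0.0, 0.2]),
          np.array([-0.2, 0.4, 0.8, 0.0, 0.1, -0.1])]

print(fpn_output(0.3, -0.5, centers, coeffs))
```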

The Analysis and Design of Advanced Neurofuzzy Polynomial Networks (고급 뉴로퍼지 다항식 네트워크의 해석과 설계)

  • Park, Byeong-Jun;O, Seong-Gwon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.39 no.3
    • /
    • pp.18-31
    • /
    • 2002
  • In this study, we introduce the concept of Advanced Neurofuzzy Polynomial Networks (ANFPN), a hybrid modeling architecture combining Neurofuzzy Networks (NFN) and Polynomial Neural Networks (PNN). These networks are highly nonlinear rule-based models. The development of the ANFPN draws on the technologies of Computational Intelligence (CI), namely fuzzy sets, neural networks, and genetic algorithms. The NFN contributes to the formation of the premise part of the rule-based structure of the ANFPN, while the consequence part of the ANFPN is designed using the PNN. At the premise part, the NFN uses both simplified fuzzy inference and the error back-propagation learning rule. The parameters of the membership functions, the learning rates, and the momentum coefficients are adjusted with the use of genetic optimization. As the consequence structure of the ANFPN, the PNN is a flexible network architecture whose structure (topology) is developed through learning. In particular, the number of layers and nodes of the PNN is not fixed in advance but is generated in a dynamic way. We introduce two kinds of ANFPN architectures, namely the basic and the modified one; they differ in the number of input variables and the order of the polynomial used in each layer of the PNN structure. Owing to the specific features of the two combined architectures, it is possible to capture the nonlinear characteristics of a process system and to obtain better output performance with superb predictive ability. The availability and feasibility of the ANFPN are discussed and illustrated with the aid of two representative numerical examples. The results show that the proposed ANFPN can produce models with higher accuracy and predictive ability than the other methods presented previously.
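
The premise-side mechanism described in this abstract, simplified fuzzy inference trained with error back-propagation (with learning rate and momentum that the paper tunes genetically), can be sketched roughly as below. The triangular membership functions, the toy sine target, and the fixed learning rate and momentum values are illustrative assumptions only.

```python
# Minimal sketch of the premise-side idea: simplified fuzzy inference
# (singleton consequents) trained by back-propagation with momentum.
# Membership shapes, the 1-D toy target, and all rates are assumptions.
import numpy as np

rng = np.random.default_rng(1)

centers = np.linspace(-1.0, 1.0, 5)      # fixed triangular MF centers
width = 0.5
w = rng.normal(scale=0.1, size=5)        # singleton consequents (trainable)
velocity = np.zeros_like(w)

def memberships(x):
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)   # triangular MFs
    return mu / (mu.sum() + 1e-12)                            # normalized grades

def predict(x):
    return float(memberships(x) @ w)      # simplified (height) inference

# Toy data: approximate y = sin(pi * x) on [-1, 1].
X = rng.uniform(-1.0, 1.0, 200)
Y = np.sin(np.pi * X)

lr, momentum = 0.1, 0.8                   # in the paper these are GA-tuned
for _ in range(300):
    for x, y in zip(X, Y):
        mu = memberships(x)
        err = predict(x) - y
        grad = err * mu                   # d(0.5 * err**2) / dw for this sample
        velocity = momentum * velocity - lr * grad
        w += velocity

print("trained consequents:", np.round(w, 3))
```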

Design of Hybrid Architecture of Neurofuzzy Polynomial Networks (뉴로퍼지 다항식 네트워크의 하이브리드 구조 설계)

  • Park, Byoung-Jun;Park, Ho-Sung;Oh, Sung-Kwun;Jang, Sung-Whan
    • Proceedings of the KIEE Conference
    • /
    • 2001.11c
    • /
    • pp.424-427
    • /
    • 2001
  • In this study, we introduce the concept of Neurofuzzy Polynomial Networks (NFPN), a hybrid modeling architecture combining Neurofuzzy Networks (NFN) and Polynomial Neural Networks (PNN). The NFN contributes to the formation of the premise part of the rule-based structure of the NFPN, while the consequence part of the NFPN is designed using the PNN. The parameters of the membership functions, the learning rates, and the momentum coefficients are adjusted with the use of genetic optimization. We introduce two kinds of NFPN architectures, namely the basic and the modified one. Owing to the specific features of the two combined architectures, it is possible to capture the nonlinear characteristics of a process system and to obtain better output performance with superb predictive ability.
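
One element of this abstract, genetic tuning of the learning rate and momentum coefficient, can be illustrated very roughly as follows; the toy quadratic training loop, population settings, and gene ranges are assumptions for illustration, not the authors' procedure.

```python
# Minimal sketch: choosing the learning rate and momentum coefficient by a
# simple genetic search. The toy loss and all settings are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def training_error(lr, momentum, steps=50):
    """Tiny gradient-descent-with-momentum run on a toy quadratic loss
    0.5 * (w - 3)^2; the final error stands in for model training error."""
    w, v = 0.0, 0.0
    for _ in range(steps):
        grad = w - 3.0
        v = momentum * v - lr * grad
        w += v
    return abs(w - 3.0)

def genetic_tune(pop_size=20, generations=30):
    # genes: (learning rate in (0, 1), momentum in (0, 1))
    pop = rng.random((pop_size, 2))
    for _ in range(generations):
        fit = np.array([training_error(lr, m) for lr, m in pop])
        order = np.argsort(fit)
        parents = pop[order[: pop_size // 2]]          # truncation selection
        kids = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        kids = np.clip(kids + rng.normal(scale=0.05, size=kids.shape), 0.0, 0.99)
        pop = np.vstack([parents, kids])               # elitist replacement
    best = pop[np.argmin([training_error(lr, m) for lr, m in pop])]
    return best

lr, momentum = genetic_tune()
print("selected learning rate / momentum:", round(lr, 3), round(momentum, 3))
```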

Genetically Optimized Hybrid Fuzzy Neural Networks Based on Linear Fuzzy Inference Rules

  • Oh Sung-Kwun;Park Byoung-Jun;Kim Hyun-Ki
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.2
    • /
    • pp.183-194
    • /
    • 2005
  • In this study, we introduce an advanced architecture of genetically optimized Hybrid Fuzzy Neural Networks (gHFNN) and develop a comprehensive design methodology supporting their construction. A series of numeric experiments is included to illustrate the performance of the networks. The construction of the gHFNN exploits the fundamental technologies of Computational Intelligence (CI), namely fuzzy sets, neural networks, and genetic algorithms (GAs). The architecture of the gHFNN results from a synergistic usage of a genetic optimization-driven hybrid system generated by combining Fuzzy Neural Networks (FNN) with Polynomial Neural Networks (PNN). In this tandem, an FNN supports the formation of the premise part of the rule-based structure of the gHFNN, while the consequence part of the gHFNN is designed using PNNs. We distinguish between two types of linear fuzzy inference rule-based FNN structures, showing how this taxonomy depends on the type of fuzzy partition of the input variables. As to the consequence part of the gHFNN, the development of the PNN relies on two general optimization mechanisms: structural optimization is realized via GAs, whereas for parametric optimization we proceed with standard least squares-based learning. To evaluate the performance of the gHFNN, the models are experimented with a representative numerical example. A comparative analysis demonstrates that the proposed gHFNN comes with higher accuracy as well as superb predictive capabilities when compared with other neurofuzzy models.
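
The parametric step mentioned in this abstract, fitting the coefficients of a polynomial (PNN-style) node by standard least squares, might look roughly like the sketch below; the two-input quadratic form and the toy data are assumptions chosen only to make the example self-contained.

```python
# Minimal sketch: estimating the coefficients of one polynomial node by
# ordinary least squares. The quadratic form and toy data are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def design_matrix(x1, x2):
    # Quadratic two-input node: y = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Toy data generated from a known quadratic plus noise.
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 0.5 + 1.2 * x1 - 0.7 * x2 + 0.3 * x1 * x2 + rng.normal(scale=0.02, size=200)

A = design_matrix(x1, x2)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # standard least-squares fit
print("estimated node coefficients:", np.round(coeffs, 3))
```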

Fuzzy and Polynomial Neuron Based Novel Dynamic Perceptron Architecture (퍼지 및 다항식 뉴론에 기반한 새로운 동적퍼셉트론 구조)

  • Kim, Dong-Won;Park, Ho-Sung;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference
    • /
    • 2001.07d
    • /
    • pp.2762-2764
    • /
    • 2001
  • In this study, we introduce and investigate a class of dynamic perceptron architectures, discuss a comprehensive design methodology, and carry out a series of numeric experiments. The proposed dynamic perceptron architectures are called Polynomial Neural Networks (PNN). A PNN is a flexible neural architecture whose topology is developed through learning. In particular, the number of layers of the PNN is not fixed in advance but is generated on the fly; in this sense, the PNN is a self-organizing network. The PNN comes in two kinds of networks, Polynomial Neuron (PN)-based and Fuzzy Polynomial Neuron (FPN)-based, according to the type of polynomial neuron used. The essence of the design procedure of PN-based Self-organizing Polynomial Neural Networks (SOPNN) dwells on the Group Method of Data Handling (GMDH) [1]. Each node of the SOPNN exhibits a high level of flexibility and realizes a polynomial type of mapping (linear, quadratic, or cubic) between input and output variables. The FPN-based SOPNN dwells on the ideas of fuzzy rule-based computing and neural networks. Simulations involve a series of synthetic as well as experimental data used across various neurofuzzy systems. A detailed comparative analysis is included as well.
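
The GMDH-style self-organizing growth described in this abstract (layers of candidate polynomial nodes built from input pairs, with only the best nodes retained) can be sketched roughly as below; the node width, stopping rule, and toy data are illustrative assumptions, not the GMDH variant used in the paper.

```python
# Minimal sketch of GMDH-style self-organizing growth: each layer forms
# candidate quadratic nodes from pairs of inputs, fits them by least squares,
# and keeps only the best ones. Settings and data are illustrative assumptions.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

def fit_node(u, v, y):
    A = np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c, A @ c

def gmdh_layer(inputs, y, keep=4):
    """Build one layer: evaluate every input pair, keep the lowest-error nodes."""
    candidates = []
    for i, j in combinations(range(inputs.shape[1]), 2):
        _, pred = fit_node(inputs[:, i], inputs[:, j], y)
        candidates.append((np.mean((pred - y) ** 2), pred))
    candidates.sort(key=lambda t: t[0])
    best = candidates[:keep]
    return np.column_stack([p for _, p in best]), best[0][0]

# Toy data: 4 inputs, nonlinear target.
X = rng.uniform(-1, 1, (300, 4))
y = np.sin(X[:, 0] * X[:, 1]) + 0.5 * X[:, 2] ** 2

layer_inputs, prev_err = X, np.inf
for depth in range(1, 6):                      # layers added "on the fly"
    layer_inputs, err = gmdh_layer(layer_inputs, y)
    print(f"layer {depth}: best node MSE = {err:.5f}")
    if err >= prev_err:                        # stop when a layer no longer helps
        break
    prev_err = err
```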

A Study on the Adaptive Polynomial Neuro-Fuzzy Networks Architecture (적응 다항식 뉴로-퍼지 네트워크 구조에 관한 연구)

  • Oh, Sung-Kwun;Kim, Dong-Won
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.50 no.9
    • /
    • pp.430-438
    • /
    • 2001
  • In this study, we introduce the adaptive Polynomial Neuro-Fuzzy Network (PNFN) architecture, generated from the fusion of a fuzzy inference system and the PNN algorithm. The PNFN dwells on the ideas of fuzzy rule-based computing and neural networks. A fuzzy inference system is applied in the 1st layer of the PNFN and the PNN algorithm is employed in the 2nd layer and higher; from these the multilayer structure of the PNFN is constructed. In other words, in the Fuzzy Inference System (FIS) used in the nodes of the 1st layer of the PNFN, either the simplified or the regression polynomial inference method is utilized, and for the premise part of the rules both triangular and Gaussian-like membership functions are studied. In the 2nd layer and higher, the PNN based on GMDH and regression polynomials is generated in a dynamic way, unlike the popular multilayer perceptron structure. That is, the PNN is an analytic technique for identifying nonlinear relationships between a system's inputs and outputs, and is a flexible network structure constructed through the successive generation of layers from nodes that represent partial descriptions of the I/O relations of the data. The experimental part of the study involves representative time series such as the Box-Jenkins gas furnace data used across various neurofuzzy systems, and a comparative analysis is included as well.
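
The 1st-layer FIS choices described in this abstract, triangular versus Gaussian-like membership functions and simplified versus regression polynomial inference, can be illustrated loosely with the sketch below; the centers, widths, and rule coefficients are assumptions for illustration only.

```python
# Minimal sketch of the 1st-layer FIS choices: triangular vs Gaussian-like
# membership functions, and simplified (constant) vs regression polynomial
# (linear) consequents. Centers, widths, and coefficients are assumptions.
import numpy as np

centers = np.array([-1.0, 0.0, 1.0])

def triangular(x, c, width=1.0):
    return np.maximum(0.0, 1.0 - np.abs(x - c) / width)

def gaussian(x, c, sigma=0.5):
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def fis_output(x, mf, consequent="simplified"):
    mu = mf(x, centers)
    w = mu / (mu.sum() + 1e-12)                 # normalized firing strengths
    if consequent == "simplified":
        rules = np.array([-0.8, 0.1, 0.9])      # one constant per rule
    else:                                       # "regression": linear per rule
        a = np.array([[-0.8, 0.2], [0.1, 0.5], [0.9, -0.3]])
        rules = a[:, 0] + a[:, 1] * x
    return float(w @ rules)

x = 0.4
for mf in (triangular, gaussian):
    for mode in ("simplified", "regression"):
        print(mf.__name__, mode, round(fis_output(x, mf, mode), 4))
```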

Multiobjective Space Search Optimization and Information Granulation in the Design of Fuzzy Radial Basis Function Neural Networks

  • Huang, Wei;Oh, Sung-Kwun;Zhang, Honghao
    • Journal of Electrical Engineering and Technology
    • /
    • v.7 no.4
    • /
    • pp.636-645
    • /
    • 2012
  • This study introduces information granule-based fuzzy radial basis function neural networks (FRBFNN) based on multiobjective optimization and weighted least squares (WLS). An improved multiobjective space search algorithm (IMSSA) is proposed to optimize the FRBFNN. In the design of the FRBFNN, the premise part of the rules is constructed with the aid of Fuzzy C-Means (FCM) clustering, while the consequent part of the fuzzy rules is developed using four types of polynomials, namely constant, linear, quadratic, and modified quadratic. Information granulation realized with FCM clustering helps determine the initial values of the apex parameters of the membership functions of the fuzzy neural network. To enhance the flexibility of the neural network, we use WLS learning to estimate the coefficients of the polynomials. In comparison with the ordinary least squares commonly used in the design of fuzzy radial basis function neural networks, WLS can yield a different local model in each rule of the FRBFNN. Since the performance of the FRBFNN model is directly affected by parameters such as the fuzzification coefficient used in FCM, the number of rules, and the orders of the polynomials present in the consequent parts of the rules, we carry out both structural and parametric optimization of the network. The proposed IMSSA, which aims at the simultaneous minimization of complexity and maximization of accuracy, is exploited here to optimize the parameters of the model. Experimental results illustrate that the proposed neural network leads to better performance in comparison with some existing neurofuzzy models encountered in the literature.
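
Two ingredients of this abstract, FCM memberships for the premise part and weighted least squares fitting a separate polynomial consequent per rule, can be sketched roughly as below; the cluster count, fuzzifier, linear consequent form, and toy data are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch: Fuzzy C-Means (FCM) memberships for the premise part, and
# weighted least squares (WLS) fitting a linear consequent per rule, weighted
# by those memberships. All settings and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

def fcm(X, n_clusters=3, m=2.0, iters=50):
    """Plain FCM; returns membership matrix U (n_samples x n_clusters) and centers."""
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = U ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

def wls_fit(Phi, y, weights):
    """Weighted least squares: minimize sum_i w_i * (y_i - Phi_i c)^2."""
    W = np.sqrt(weights)[:, None]
    c, *_ = np.linalg.lstsq(Phi * W, y * W[:, 0], rcond=None)
    return c

# Toy data: 2 inputs, nonlinear target.
X = rng.uniform(-1, 1, (300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

U, centers = fcm(X)
Phi = np.column_stack([np.ones(len(X)), X])          # linear consequent: 1, x1, x2
local_models = [wls_fit(Phi, y, U[:, k]) for k in range(U.shape[1])]

# Model output: membership-weighted sum of the per-rule local models.
y_hat = sum(U[:, k] * (Phi @ c) for k, c in enumerate(local_models))
print("training RMSE:", round(float(np.sqrt(np.mean((y_hat - y) ** 2))), 4))
```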