• Title/Summary/Keyword: Neural-Networks

Circumstance Adaptability of Competitive Learning Neural Networks (경쟁학습 신경망의 환경 적응성)

  • Choi, Doo-Il;Park, Yang-Su
    • Proceedings of the KIEE Conference
    • /
    • 1997.11a
    • /
    • pp.591-593
    • /
    • 1997
  • When the input circumstance changes abruptly, many nodes of a competitive learning neural network that lie far from the new input vectors may never win, and therefore never learn. Various techniques to prevent this phenomenon have been reported. We propose a new technique based on Self Creating and Organizing Neural Networks and compare it with the Self Organizing Feature Map and Frequency Sensitive Neural Networks.
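
The dead-node problem and its frequency-sensitive remedy can be sketched as follows. This is a generic illustration, not the paper's algorithm: each node's distance to the input is scaled by its win count, so nodes that rarely win eventually become competitive again.

```python
import random

def train_fscl(data, n_nodes=4, lr=0.1, epochs=20, seed=0):
    """Frequency-sensitive competitive learning on 2-D data (toy sketch)."""
    rng = random.Random(seed)
    nodes = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(n_nodes)]
    wins = [1] * n_nodes  # win counts used to scale distances
    for _ in range(epochs):
        for x in data:
            # fairness-scaled distance: wins[i] * ||x - w_i||^2
            d = [wins[i] * sum((x[k] - nodes[i][k]) ** 2 for k in range(2))
                 for i in range(n_nodes)]
            j = d.index(min(d))   # winner under the scaled distance
            wins[j] += 1
            for k in range(2):    # move only the winner toward the input
                nodes[j][k] += lr * (x[k] - nodes[j][k])
    return nodes, wins

data = [[0.1, 0.1], [0.9, 0.9], [0.1, 0.9], [0.9, 0.1]]
nodes, wins = train_fscl(data)
```

Because updates are convex combinations of points in the unit square, the nodes stay inside it; exactly one win count is incremented per presented sample.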
DELAY-DEPENDENT GLOBAL ASYMPTOTIC STABILITY ANALYSIS OF DELAYED CELLULAR NEURAL NETWORKS

  • Yang, Yitao;Zhang, Yuejin
    • Journal of applied mathematics & informatics
    • /
    • v.28 no.3_4
    • /
    • pp.583-596
    • /
    • 2010
  • In this paper, the problem of delay-dependent stability analysis for cellular neural network systems with time-varying delays is considered. By using a new Lyapunov-Krasovskii functional, delay-dependent stability conditions for the delayed cellular neural network systems are derived in terms of linear matrix inequalities (LMIs). Examples are provided to demonstrate the reduced conservatism of the proposed stability results.
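
For context, delayed cellular neural networks of the kind analyzed here are commonly written in the following form; the notation is a generic convention, not taken from the paper:

```latex
% delayed cellular neural network with time-varying delay \tau(t)
\dot{x}(t) = -C\,x(t) + A f(x(t)) + B f\big(x(t-\tau(t))\big) + u,
% a typical Lyapunov-Krasovskii functional for such systems
V(x_t) = x^{T}(t) P x(t) + \int_{t-\tau(t)}^{t} x^{T}(s)\, Q\, x(s)\, ds,
\qquad P \succ 0,\; Q \succ 0
```

Requiring $\dot{V}(x_t) < 0$ along trajectories is then recast as an LMI feasibility problem that convex solvers can check; the delay bound enters the LMI, which is what makes the condition delay-dependent.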

Genetically Optimized Hybrid Fuzzy Neural Networks Based on Linear Fuzzy Inference Rules

  • Oh Sung-Kwun;Park Byoung-Jun;Kim Hyun-Ki
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.2
    • /
    • pp.183-194
    • /
    • 2005
  • In this study, we introduce an advanced architecture of genetically optimized Hybrid Fuzzy Neural Networks (gHFNN) and develop a comprehensive design methodology supporting their construction. A series of numeric experiments is included to illustrate the performance of the networks. The construction of gHFNN exploits fundamental technologies of Computational Intelligence (CI), namely fuzzy sets, neural networks, and genetic algorithms (GAs). The architecture of the gHFNN results from a synergistic usage of the genetic optimization-driven hybrid system generated by combining Fuzzy Neural Networks (FNN) with Polynomial Neural Networks (PNN). In this tandem, an FNN supports the formation of the premise part of the rule-based structure of the gHFNN, while the consequence part is designed using PNNs. We distinguish between two types of linear fuzzy inference rule-based FNN structures, showing how this taxonomy depends upon the type of fuzzy partition of the input variables. As to the consequence part of the gHFNN, the development of the PNN relies on two general optimization mechanisms: the structural optimization is realized via GAs, whereas the parametric optimization proceeds with standard least squares method-based learning. To evaluate the performance of the gHFNN, the models are experimented with on a representative numerical example. A comparative analysis demonstrates that the proposed gHFNN comes with higher accuracy as well as superb predictive capabilities when compared with other neurofuzzy models.
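
The parametric step of the consequence part relies on standard least squares; as a minimal illustration, for a single linear term y ≈ a + b·x the closed-form fit is (the data here are made up, not from the paper):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x in closed form."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope from centered cross- and auto-covariance sums
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx   # intercept from the fitted slope
    return a, b

a, b = fit_linear([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])  # y = 1 + 2x
```

In the full gHFNN design, each polynomial node's coefficients are fitted this way while the GA decides which nodes and inputs the structure keeps.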

Automatic Expansion of ConceptNet by Using Neural Tensor Networks (신경 텐서망을 이용한 컨셉넷 자동 확장)

  • Choi, Yong Seok;Lee, Gyoung Ho;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.549-554
    • /
    • 2016
  • ConceptNet is a common sense knowledge base formed as a semantic graph whose nodes represent concepts and whose edges show relationships between concepts. As it is difficult to guarantee the integrity of a knowledge base, such bases often suffer from an incompleteness problem, and the quality of reasoning performed over them is sometimes unreliable. This work presents neural tensor networks that can alleviate the incompleteness problem by inferring new assertions and adding them to ConceptNet. The neural tensor networks are trained with a collection of assertions extracted from ConceptNet. The input of the networks is a pair of concepts, and the output is a confidence score indicating how plausible the connection between the two concepts is under a specified relationship. The neural tensor networks can expand the usefulness of ConceptNet by increasing the degree of its nodes. The accuracy of the neural tensor networks is 87.7% on the test data set. The networks can also predict a new assertion that does not exist in ConceptNet with an accuracy of 85.01%.
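
A minimal sketch of a neural tensor network's scoring function, following the commonly used formulation score(e1, R, e2) = uᵀ tanh(e1ᵀ W e2 + V[e1; e2] + b); the dimensions and parameter values below are illustrative, not the paper's trained model:

```python
import math

def ntn_score(e1, e2, W, V, b, u):
    """Score how plausible the relation between embeddings e1 and e2 is."""
    k = len(u)  # number of tensor slices / hidden units
    h = []
    for s in range(k):
        # bilinear tensor term: e1' * W[s] * e2 for slice s
        bilinear = sum(e1[i] * W[s][i][j] * e2[j]
                       for i in range(len(e1)) for j in range(len(e2)))
        # standard linear term on the concatenated embeddings
        linear = sum(V[s][m] * v for m, v in enumerate(e1 + e2))
        h.append(math.tanh(bilinear + linear + b[s]))
    return sum(u[s] * h[s] for s in range(k))

# tiny deterministic check: identity tensor slice, zero linear part
d = 2
W = [[[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]]
V = [[0.0] * (2 * d)]
b = [0.0]
u = [1.0]
score = ntn_score([1.0, 0.0], [1.0, 0.0], W, V, b, u)
```

One set of parameters (W, V, b, u) is learned per relationship type, so the same concept embeddings can be scored under different relations.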

Isolated Digit Recognition Combined with Recurrent Neural Prediction Models and Chaotic Neural Networks (회귀예측 신경모델과 카오스 신경회로망을 결합한 고립 숫자음 인식)

  • Kim, Seok-Hyun;Ryeo, Ji-Hwan
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.6
    • /
    • pp.129-135
    • /
    • 1998
  • In this paper, the recognition rate of isolated digits has been improved using multiple neural networks combining chaotic recurrent neural networks and an MLP. In general, the recognition rate increased by 1.2% to 2.5%. The experiments show that the recognition rate improves because the MLP and the CRNN (chaotic recurrent neural network) compensate for each other, and the chaotic dynamic properties further help speech recognition. The best recognition rate was obtained with the algorithm combining an MLP and a chaotic multiple recurrent neural network. However, with respect to simplicity and reliability, the multiple neural networks combining an MLP and chaotic single recurrent neural networks have better properties. By and large, the MLP shows very good recognition rates on the Korean digits "il" and "oh", while the chaotic recurrent neural network performs best on "young", "sam", and "chil".
A Study on the Symmetric Neural Networks and Their Applications (대칭 신경회로망과 그 응용에 관한 연구)

  • 나희승;박영진
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.16 no.7
    • /
    • pp.1322-1331
    • /
    • 1992
  • Conventional neural networks are built without considering the underlying structure of the problems. Hence, they usually contain redundant weights and require excessive training time. A novel neural network structure is proposed for symmetric problems, which alleviates some of the aforementioned drawbacks of conventional neural networks. This concept is expanded to that of the constrained neural network, which may be applied to general structured problems. Because these neural networks cannot be trained by conventional training algorithms, which destroy the weight structure of the networks, a proper training algorithm is suggested. Illustrative examples are shown to demonstrate the applicability of the proposed idea.
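
A toy sketch of constraint-preserving training, under the assumption that tied (symmetric) weight positions share their summed gradient so every update leaves the symmetric structure intact; the data and network are illustrative, not the paper's:

```python
def train_symmetric(x, y, lr=0.01, steps=50):
    """Train one linear neuron whose weight vector must stay symmetric."""
    w = [0.0, 0.0, 0.0, 0.0]
    pairs = [(0, 3), (1, 2)]   # tied (mirror-symmetric) weight positions
    for _ in range(steps):
        pred = sum(wi * xi for wi, xi in zip(w, x))
        e = pred - y
        for i, j in pairs:
            # shared gradient: sum of the tied positions' gradients,
            # applied identically so w[i] == w[j] is preserved exactly
            g = 2 * e * x[i] + 2 * e * x[j]
            w[i] -= lr * g
            w[j] -= lr * g
    return w

x = [1.0, 2.0, 2.0, 1.0]          # symmetric input pattern
w = train_symmetric(x, 6.0)
pred = sum(wi * xi for wi, xi in zip(w, x))
```

A naive per-weight update would break the symmetry after one step; the shared-gradient update is what keeps the constrained structure valid throughout training.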

Training Artificial Neural Networks and Convolutional Neural Networks using WFSO Algorithm (WFSO 알고리즘을 이용한 인공 신경망과 합성곱 신경망의 학습)

  • Jang, Hyun-Woo;Jung, Sung Hoon
    • Journal of Digital Contents Society
    • /
    • v.18 no.5
    • /
    • pp.969-976
    • /
    • 2017
  • This paper proposes a method for training an artificial neural network and a convolutional neural network using the WFSO algorithm, which was developed as an optimization algorithm. Since the optimization algorithm searches over a number of candidate solutions, it is generally slow, but it rarely falls into a local optimum and is easy to parallelize. In addition, it can train artificial neural networks with non-differentiable activation functions and can optimize the structure and weights at the same time. In this paper, we describe how to apply the WFSO algorithm to artificial neural network learning and compare its performance with that of the error back-propagation algorithm on multilayer artificial neural networks and convolutional neural networks.
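
The WFSO algorithm itself is not reproduced here; the following stand-in uses a generic elitist population search to train a neuron with a non-differentiable step activation, illustrating why such optimizers can handle cases back-propagation cannot:

```python
import random

def step(z):
    """Non-differentiable activation: gradient methods cannot train it."""
    return 1 if z > 0 else 0

def loss(w, data):
    return sum((step(w[0]*x1 + w[1]*x2 + w[2]) - y) ** 2
               for (x1, x2), y in data)

def optimize(data, iters=200, pop=10, sigma=0.5, seed=1):
    """Elitist random-mutation search over the weights (WFSO stand-in)."""
    rng = random.Random(seed)
    best = [0.0, 0.0, 0.0]
    best_loss = loss(best, data)
    for _ in range(iters):
        for _ in range(pop):           # a population of candidate solutions
            cand = [w + rng.gauss(0, sigma) for w in best]
            l = loss(cand, data)
            if l < best_loss:          # keep the elite: loss never worsens
                best, best_loss = cand, l
    return best, best_loss

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, final = optimize(and_gate)
```

Because only the loss value is needed, the same loop works for any architecture or activation, at the cost of many more function evaluations than gradient descent.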

Evaluation of existing bridges using neural networks

  • Molina, Augusto V.;Chou, Karen C.
    • Structural Engineering and Mechanics
    • /
    • v.13 no.2
    • /
    • pp.187-209
    • /
    • 2002
  • The infrastructure system in the United States has been aging faster than the resources available to restore it. Therefore, decisions for allocating the resources are based in part on the condition of the structural system. This paper proposes to use a neural network to predict the overall rating of the structural system because of the successful applications of neural networks to other fields which require a "symptom-diagnostic" type relationship. The goal of this paper is to illustrate the potential of using neural networks in civil engineering applications and, particularly, in bridge evaluations. Data collected by the Tennessee Department of Transportation were used as a "test bed" for the study. Multi-layer feed-forward networks were developed using the Levenberg-Marquardt training algorithm. All the neural networks consisted of at least one hidden layer of neurons. Hyperbolic tangent transfer functions were used in the first hidden layer, and log-sigmoid transfer functions were used in the subsequent hidden and output layers. The best-performing neural network consisted of three hidden layers, with three neurons in the first hidden layer, two in the second, and one in the third. The neural network performed well against a target error of 10%. The results of this study indicate that the potential for using neural networks for the evaluation of infrastructure systems is very good.
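
The best-performing architecture described above can be sketched as a forward pass (hidden layers of 3, 2 and 1 neurons; tanh in the first hidden layer, log-sigmoid afterwards). The weights below are random placeholders, not the trained Levenberg-Marquardt values, and the five inputs are assumed condition features:

```python
import math, random

def logsig(z):
    """Log-sigmoid transfer function, output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    # tanh in the first hidden layer, log-sigmoid in the rest
    acts = [math.tanh, logsig, logsig, logsig]
    for (W, b), act in zip(layers, acts):
        x = [act(sum(wi * xi for wi, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

rng = random.Random(0)
sizes = [5, 3, 2, 1, 1]   # inputs -> 3 -> 2 -> 1 hidden -> rating output
layers = [([[rng.uniform(-1, 1) for _ in range(m)] for _ in range(n)],
           [0.0] * n) for m, n in zip(sizes, sizes[1:])]
rating = forward([0.2, 0.4, 0.6, 0.8, 1.0], layers)[0]
```

The log-sigmoid output layer naturally bounds the predicted rating to (0, 1), which suits a normalized condition score.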

Self-organized Distributed Networks for Precise Modelling of a System (시스템의 정밀 모델링을 위한 자율분산 신경망)

  • Kim, Hyong-Suk;Choi, Jong-Soo;Kim, Sung-Joong
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.11
    • /
    • pp.151-162
    • /
    • 1994
  • A new neural network structure called Self-organized Distributed Networks (SODN) is proposed for developing neural network-based multidimensional system models. Learning with the proposed networks is fast and precise; these properties result from the local learning mechanism. The structure of the networks is a combination of two networks: self-organized networks and multilayered local networks. Each local network learns only the data in its sub-region. The large memory requirement and low generalization capability for untrained regions, which are drawbacks of conventional local network learning, are overcome in the proposed networks. Simulation results show that the proposed networks perform better than standard multilayer neural networks and Radial Basis Function (RBF) networks.
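
The local-learning idea can be illustrated with a toy one-dimensional version: a self-organized layer assigns each input to its nearest reference vector, and a small local model is fitted only on that sub-region's data. The reference vectors, local models (here just constants), and data are made up for illustration:

```python
def nearest(x, centers):
    """Self-organized layer: index of the nearest reference vector."""
    return min(range(len(centers)), key=lambda i: abs(x - centers[i]))

def fit_local_means(data, centers):
    """Fit one local model (a constant) per sub-region from its data only."""
    sums = [0.0] * len(centers)
    counts = [0] * len(centers)
    for x, y in data:
        i = nearest(x, centers)
        sums[i] += y
        counts[i] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# piecewise target: y = 0 on the left region, y = 1 on the right
data = ([(x / 10, 0.0) for x in range(5)] +
        [(x / 10, 1.0) for x in range(5, 10)])
centers = [0.2, 0.7]
models = fit_local_means(data, centers)
pred = models[nearest(0.8, centers)]   # route the query to its local model
```

Each local fit touches only a fraction of the data, which is why local learning is fast; the SODN's multilayered local networks play the role of the constants here.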
Design of Incremental FCM-based Recursive RBF Neural Networks Pattern Classifier for Big Data Processing (빅 데이터 처리를 위한 증분형 FCM 기반 순환 RBF Neural Networks 패턴 분류기 설계)

  • Lee, Seung-Cheol;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.6
    • /
    • pp.1070-1079
    • /
    • 2016
  • In this paper, the design of recursive radial basis function neural networks based on incremental fuzzy c-means is introduced for processing big data. Radial basis function neural networks consist of condition, conclusion and inference phases. A Gaussian function is generally used as the activation function of the condition phase, but in this study, incremental fuzzy clustering is considered for the activation function of the radial basis function neural networks, which can process big data effectively. In the conclusion phase, the connection weights of the networks are given as linear functions and are calculated by recursive least squares estimation. In the inference phase, a final output is obtained by a fuzzy inference method. Machine learning datasets are employed to demonstrate the superiority of the proposed classifier, and the results are described from the viewpoint of algorithm complexity and performance index.
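
The recursive least squares update used for the conclusion-phase weights can be sketched as follows; the two-dimensional "activations" stand in for the membership-based hidden outputs of the actual classifier, and the data are illustrative:

```python
def rls_update(w, P, phi, y, lam=1.0):
    """One recursive least squares step for linear weights w given
    activation vector phi and target y (lam is the forgetting factor)."""
    # gain k = P*phi / (lam + phi' * P * phi)
    Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
            P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
    k = [Pphi[0]/denom, Pphi[1]/denom]
    # correct the weights by the gain-scaled prediction error
    err = y - (w[0]*phi[0] + w[1]*phi[1])
    w = [w[0] + k[0]*err, w[1] + k[1]*err]
    # propagate the inverse-correlation matrix: P = (P - k*phi'*P) / lam
    phiP = [phi[0]*P[0][0] + phi[1]*P[1][0],
            phi[0]*P[0][1] + phi[1]*P[1][1]]
    P = [[(P[i][j] - k[i]*phiP[j]) / lam for j in range(2)]
         for i in range(2)]
    return w, P

# stream samples of y = 2*phi1 + 3*phi2 through the recursion
w, P = [0.0, 0.0], [[1e6, 0.0], [0.0, 1e6]]
for phi, y in [((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0), ((1.0, 1.0), 5.0)]:
    w, P = rls_update(w, P, phi, y)
```

Because each sample is processed once and then discarded, the weights can be learned in a single pass over a stream, which is what makes the recursive formulation attractive for big data.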