• Title/Summary/Keyword: Dynamic Neuron


Identification and Control of Nonlinear System Using Dynamic Neural Model with State Parameter Representation (상태변수 표현을 가진 동적 신경망을 이용한 비선형 시스템의 식별과 제어)

  • Park, Seong-Wook;Seo, Bo-Hyeok
    • Proceedings of the KIEE Conference / 1995.11a / pp.157-160 / 1995
  • Neural networks potentially offer a general framework for modeling and control of nonlinear systems. Conventional neural network models are a parody of biological neural structures and learn very slowly. In order to emulate some dynamic functions, such as learning and adaptation, and to better reflect the dynamics of biological neurons, M. M. Gupta and D. H. Rao developed the 'dynamic neural unit' (DNU). The proposed neural unit model introduces dynamics into the neuron transfer function, so that the neuron activity depends on internal states. Numerical examples are presented for a model system. These case studies show that the proposed DNU is useful in a practical sense. (A hedged implementation sketch follows this entry.)

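To make the DNU idea concrete, here is a minimal numpy sketch of a second-order dynamic neural unit in the spirit of Gupta and Rao's model: an IIR (infinite impulse response) stage gives the neuron internal state, and a static nonlinearity follows. All coefficient values and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def squash(x, gain=1.0):
    """Static nonlinearity applied after the dynamic part of the unit."""
    return np.tanh(gain * x)

class DynamicNeuralUnit:
    """Second-order dynamic neuron: an IIR filter followed by a squashing
    nonlinearity, so the activity depends on internal states u(k-1), v(k-1), ...
    Coefficients (a, b, gain) are illustrative, not from the paper."""

    def __init__(self, a=(0.5, 0.3, 0.1), b=(0.2, 0.1), gain=1.0):
        self.a = np.asarray(a)      # feedforward coefficients a0, a1, a2
        self.b = np.asarray(b)      # feedback coefficients b1, b2
        self.gain = gain
        self.u_hist = np.zeros(3)   # u(k), u(k-1), u(k-2)
        self.v_hist = np.zeros(2)   # v(k-1), v(k-2)

    def step(self, u_k):
        """One discrete-time update of the unit's internal state."""
        self.u_hist = np.roll(self.u_hist, 1)
        self.u_hist[0] = u_k
        v_k = self.a @ self.u_hist - self.b @ self.v_hist
        self.v_hist = np.roll(self.v_hist, 1)
        self.v_hist[0] = v_k
        return squash(v_k, self.gain)

# Drive the unit with a constant step input: the output evolves over time,
# unlike a static neuron whose output would be constant.
dnu = DynamicNeuralUnit()
print([round(dnu.step(1.0), 4) for _ in range(8)])
```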

Supervised Competitive Learning Neural Network with Flexible Output Layer

  • Cho, Seong-won
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.7 / pp.675-679 / 2001
  • In this paper, we present a new competitive learning algorithm called Dynamic Competitive Learning (DCL). DCL is a supervised learning method that dynamically generates output neurons and automatically initializes their weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether an output neuron is created. If the class of at least one of the LOG nearest output neurons matches the class of the current training pattern, DCL adjusts the weight vector of that output neuron to learn the pattern. If the classes of all the nearest output neurons differ from the class of the training pattern, a new output neuron is created and the training pattern initializes its weight vector. The proposed method differs significantly from previous competitive learning algorithms in that the neuron selected for learning is not limited to the winner, and output neurons are generated dynamically during learning. In addition, the algorithm has a small number of parameters, which are easy to determine and apply to real-world problems. Experimental results for pattern recognition of remote-sensing data and handwritten numeral data indicate the superiority of DCL over conventional competitive learning methods. (A minimal sketch of the update rule follows this entry.)

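The abstract describes the DCL decision rule completely enough to sketch. Below is a hedged Python toy: the `log` and `lr` values and the data handling are assumed details; only the create-or-adjust decision over the LOG nearest neurons follows the text.

```python
import numpy as np

class DCL:
    """Sketch of Dynamic Competitive Learning as described in the abstract:
    output neurons are created on demand; the LOG nearest neurons are checked
    for a class match before deciding to learn or to grow. The learning rate
    and initialization here are assumptions, not the paper's exact settings."""

    def __init__(self, log=3, lr=0.1):
        self.log = log            # LOG: how many nearest neurons to inspect
        self.lr = lr
        self.weights = []         # one weight vector per output neuron
        self.classes = []         # class label attached to each output neuron

    def train_one(self, x, y):
        if not self.weights:                      # first pattern bootstraps the net
            self.weights.append(x.copy()); self.classes.append(y); return
        d = [np.linalg.norm(x - w) for w in self.weights]
        nearest = np.argsort(d)[: self.log]       # indices of the LOG nearest neurons
        same = [i for i in nearest if self.classes[i] == y]
        if same:                                  # class match: pull that neuron toward x
            i = min(same, key=lambda i: d[i])
            self.weights[i] += self.lr * (x - self.weights[i])
        else:                                     # no match: grow a new output neuron
            self.weights.append(x.copy()); self.classes.append(y)

# Toy usage with two 2-D classes
rng = np.random.default_rng(0)
net = DCL(log=2)
for _ in range(100):
    y = int(rng.integers(2))
    x = rng.normal(loc=3.0 * y, scale=0.5, size=2)
    net.train_one(x, y)
print(len(net.weights), "output neurons generated")
```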

Competitive Learning Neural Network with Dynamic Output Neuron Generation (동적으로 출력 뉴런을 생성하는 경쟁 학습 신경회로망)

  • 김종완;안제성;김종상;이흥호;조성원
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.9 / pp.133-141 / 1994
  • Conventional competitive learning algorithms compute the Euclidean distance to determine the winner among all predetermined output neurons. In such cases, the performance of the learning algorithm depends on the initial reference (weight) vectors. In this paper, we propose a new competitive learning algorithm that dynamically generates output neurons. The proposed method generates output neurons by dynamically changing the class thresholds for all output neurons. We compute the similarity between the input vector and the reference vector of each generated output neuron. If the two are similar, the reference vector is adjusted to make it still more like the input vector; otherwise, the input vector is designated as the reference vector of a new output neuron. Since the reference vectors of output neurons are assigned dynamically according to the input pattern distribution, the proposed method avoids premature termination of learning caused by redundant output neurons. Experiments using speech data show the proposed method to be superior to existing methods. (A hedged sketch of the threshold rule follows this entry.)

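A hedged sketch of the create-or-refine rule described above. The paper varies class thresholds dynamically; the fixed global `threshold` here is a simplification for illustration.

```python
import numpy as np

def train_dynamic_cl(patterns, threshold=2.0, lr=0.1):
    """Sketch of competitive learning with dynamic output-neuron generation:
    if the nearest reference vector is within `threshold` of the input, it is
    adjusted toward the input; otherwise the input becomes the reference
    vector of a new output neuron."""
    refs = [patterns[0].copy()]                  # first input seeds the first neuron
    for x in patterns[1:]:
        d = [np.linalg.norm(x - r) for r in refs]
        i = int(np.argmin(d))
        if d[i] <= threshold:                    # similar enough: refine the winner
            refs[i] += lr * (x - refs[i])
        else:                                    # dissimilar: create a new neuron
            refs.append(x.copy())
    return refs

# Three synthetic clusters along the diagonal of the plane
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2)) + rng.integers(0, 3, size=(200, 1)) * 4.0
print(len(train_dynamic_cl(list(data))), "reference vectors generated")
```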

EEG model by statistical mechanics of neocortical interaction

  • Park, J.M.;Whang, M.C.;Bae, B.H.;Kim, S.Y.;Kim, C.J.
    • Journal of the Ergonomics Society of Korea / v.16 no.2 / pp.15-27 / 1997
  • Brain potential is described using the mesocolumnar activity defined by the averaged firings of excitatory and inhibitory neurons of the neocortex. A Lagrangian is constructed based on SMNI (Statistical Mechanics of Neocortical Interaction), and the Euler-Lagrange equation is then obtained. Excitatory neuron firing is assumed to be amplitude-modulated dominantly by the sum of two modes of frequency $\omega$ and $2\omega$. The time series of this neuron firing is calculated numerically from the Euler-Lagrange equation. $I_{\omega L}$, related to the low-frequency part of the power spectrum, $I_{\omega H}$, related to the high-frequency part, and Sd (standard deviation) were introduced for effective extraction of the dynamic properties of the simulated brain potential. The relative behavior of $I_{\omega L}$, $I_{\omega H}$, and Sd was characterized by the parameters $\epsilon$ and $\gamma$, related to nonlinearity and harmonics respectively. Experimental $I_{\omega L}$, $I_{\omega H}$, and Sd were obtained from the EEG of humans in a resting state and of a canine in a deep-sleep state, and were compared with the theoretical ones. (A hedged numerical sketch of these indices follows this entry.)

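A rough numerical illustration of the quantities named in the abstract: a firing series dominated by modes at $\omega$ and $2\omega$, from which a low-band power index, a high-band power index, and the standard deviation are extracted. Band edges and parameter values are assumptions for illustration only.

```python
import numpy as np

fs, omega = 200.0, 10.0                    # sample rate (Hz) and base mode (Hz); assumed
epsilon, gamma = 0.3, 0.5                  # assumed stand-ins for the nonlinearity/harmonic parameters
t = np.arange(0, 10, 1 / fs)
firing = np.sin(2 * np.pi * omega * t) + gamma * np.sin(2 * np.pi * 2 * omega * t)
firing += epsilon * np.random.default_rng(0).normal(size=t.size)

spec = np.abs(np.fft.rfft(firing)) ** 2    # power spectrum of the simulated series
freqs = np.fft.rfftfreq(t.size, 1 / fs)
I_low = spec[(freqs > 0) & (freqs <= omega * 1.5)].sum()   # power near omega
I_high = spec[freqs > omega * 1.5].sum()                   # power near 2*omega and above
print(f"I_wL={I_low:.1f}  I_wH={I_high:.1f}  Sd={firing.std():.3f}")
```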

Developing Artificial Neurons Using Carbon Nanotubes Smart Composites (탄소나노튜브 스마트 복합소재를 이용한 인공뉴런 개발 연구)

  • Kang, In-Pil;Baek, Woon-Kyung;Choi, Gyeong-Rak;Jung, Joo-Young
    • Proceedings of the KSME Conference / 2007.05a / pp.136-141 / 2007
  • This paper introduces an artificial neuron that is a nano-composite continuous sensor. The continuous nano sensor is fabricated as a thin, narrow polymer-film sensor made of carbon nanotube composites with a PMMA or silicone matrix. The sensor can be embedded in a structure, like a neuron in a human body, and it can detect deterioration of the structure. The electrochemical impedance and dynamic strain response of the neuron change with the deterioration of the structure where the sensor is located. A network of the long nano sensors can form a structural neural system that provides large-area coverage and assurance of the operational health of a structure, without the need for actuators and the complex wave-propagation analyses used by other methods. The artificial neuron is expected to effectively detect damage in large, complex structures, including composite helicopter blades and composite aircraft and vehicles.

  • PDF

A Study on Human-Robot Interface based on Imitative Learning using Computational Model of Mirror Neuron System (Mirror Neuron System 계산 모델을 이용한 모방학습 기반 인간-로봇 인터페이스에 관한 연구)

  • Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.565-570 / 2013
  • The mirror neuron regions distributed across the cortical area handle intention recognition on the basis of imitative learning of an observed, goal-directed action acquired from visual information. In this paper, an automated intention recognition system is proposed by applying a computational model of the mirror neuron system to a human-robot interaction system. The computational model is designed using dynamic neural networks: the model input is a sequential feature-vector set derived from the behaviors of the target object and the actor, and the model output is motor data that can be used to perform the corresponding intentional action, obtained through the imitative learning and estimation procedures of the proposed model. The intention recognition framework takes its model input from a KINECT sensor and computes the corresponding motor data within a virtual robot simulation environment, based on intention-related scenarios in a limited experimental space with a specified target object. (A heavily hedged sketch of such a sequence-to-motor mapping follows this entry.)
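A heavily hedged stand-in for the paper's dynamic-neural-network model: a bare recurrent mapping from a sequence of feature vectors to a motor-data vector. The weight shapes, the tanh recurrence, and all dimensions are assumptions, not the paper's architecture.

```python
import numpy as np

def rnn_motor_output(feature_seq, Wx, Wh, Wo):
    """Minimal recurrent mapping from a sequence of feature vectors (e.g.
    skeleton features from a KINECT sensor) to a motor-data vector."""
    h = np.zeros(Wh.shape[0])
    for x in feature_seq:                # accumulate the observed action over time
        h = np.tanh(Wx @ x + Wh @ h)
    return Wo @ h                        # motor command estimated from the final state

rng = np.random.default_rng(0)
Wx, Wh, Wo = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(3, 8))
seq = rng.normal(size=(20, 4))           # 20 frames of 4-D features (untrained toy weights)
print(rnn_motor_output(seq, Wx, Wh, Wo))
```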

Analysis of Dynamical State Transition of Cyclic Connection Neural Networks with Binary Synaptic Weights (이진화된 결합하중을 갖는 순환결합형 신경회로망의 동적 상태천이 해석)

  • 박철영
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.5 / pp.76-85 / 1999
  • The intuitive understanding of dynamic pattern generation in asymmetric networks may be useful for developing models of dynamic information processing. In this paper, the dynamic behavior of the cyclic connection neural network, in which each neuron is connected only to its nearest neurons with binary synaptic weights of $\pm 1$, has been investigated. Simulation results show that the dynamic behavior of the network can be classified into only three categories: fixed points, limit cycles with a basin, and limit cycles with no basin. Furthermore, the number and type of limit cycles generated by the networks have been derived analytically. Sufficient conditions for a state vector of an $n$-neuron network to produce a limit cycle of period $n$ or $2n$ are also given. The results show that the estimated number of limit cycles is an exponential function of $n$. On the basis of this study, the cyclic connection neural network may be capable of storing a large amount of dynamic information. (A small enumeration sketch follows this entry.)

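A small enumeration sketch of the kind of analysis described: every state of a $\pm 1$-weight ring network is iterated until it falls into a fixed point or limit cycle. The single-predecessor update rule is one simple reading of "connected only to its nearest neurons", not necessarily the paper's exact topology.

```python
from itertools import product

def classify_dynamics(weights):
    """Enumerate the state space of a ring network in which neuron i is driven
    only by its predecessor through a +/-1 weight, and collect the period of
    every attractor (fixed point = period 1, limit cycle = period > 1)."""
    n = len(weights)

    def step(s):
        # s[i-1] uses Python's negative indexing, which closes the ring at i=0
        return tuple(1 if weights[i] * s[i - 1] > 0 else -1 for i in range(n))

    cycles = set()
    for s in product((-1, 1), repeat=n):
        seen = []
        while s not in seen:             # iterate until the trajectory revisits a state
            seen.append(s)
            s = step(s)
        cycle = seen[seen.index(s):]     # the periodic part of the trajectory
        canon = min(tuple(cycle[i:] + cycle[:i]) for i in range(len(cycle)))
        cycles.add((len(cycle), canon))  # canonical form avoids counting rotations twice
    return sorted(c[0] for c in cycles)

# All +1 weights on a 5-neuron ring: periods of the distinct attractors found
print(classify_dynamics([1, 1, 1, 1, 1]))
```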

Dynamical Properties of Ring Connection Neural Networks and Its Application (환상결합 신경회로망의 동적 성질과 응용)

  • 박철영
    • Journal of Korea Society of Industrial Information Systems / v.4 no.1 / pp.68-76 / 1999
  • The intuitive understanding of dynamic pattern generation in asymmetric networks may be useful for developing models of dynamic information processing. In this paper, the dynamic behavior of the ring connection neural network, in which each neuron is connected only to its nearest neurons with binary synaptic weights of $\pm 1$, has been investigated. Simulation results show that the dynamic behavior of the network can be classified into only three categories: fixed points, limit cycles with a basin, and limit cycles with no basin. Furthermore, the number and type of limit cycles generated by the networks have been derived analytically. Sufficient conditions for a state vector of an $n$-neuron network to produce a limit cycle of period $n$ or $2n$ are also given. The results show that the estimated number of limit cycles is an exponential function of $n$. On the basis of this study, the ring connection neural network may be capable of storing a large amount of dynamic information.


Dynamic Extension of Genetic Tree Maps (유전 목 지도의 동적 확장)

  • Ha, Seong-Wook;Kwon, Kee-Hang;Kang, Dae-Seong
    • Journal of KIISE: Software and Applications / v.29 no.6 / pp.386-395 / 2002
  • In this paper, we propose dynamic genetic tree-maps (DGTM), which use optimal features for data recognition. DGTM applies a genetic algorithm to the importance of features, which is rarely considered in conventional neural networks, and builds on GTM (genetic tree-maps), which use a tree structure ordered by the priority of features. The extended formulation, DGTM (dynamic GTM), can dynamically split and merge the neurons of the neural network according to the similarity of features. (One possible reading of the merge step is sketched below.)
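The abstract gives little algorithmic detail, so the following is only one plausible reading of DGTM's merge step, with an assumed cosine-similarity criterion; the paper itself drives splitting and merging via genetic-algorithm feature priorities.

```python
import numpy as np

def merge_similar_neurons(weights, threshold=0.95):
    """Hypothetical merge step: two neurons whose weight vectors are nearly
    parallel (cosine similarity above `threshold`) are fused into their mean.
    The criterion and threshold are assumptions, not taken from the paper."""
    merged = []
    for w in weights:
        for i, m in enumerate(merged):
            cos = w @ m / (np.linalg.norm(w) * np.linalg.norm(m))
            if cos > threshold:
                merged[i] = (m + w) / 2          # fuse near-duplicate neurons
                break
        else:
            merged.append(w.copy())
    return merged

rng = np.random.default_rng(2)
ws = [rng.normal(size=3) for _ in range(10)] + [np.array([1.0, 0, 0]), np.array([1.0, 0.01, 0])]
print(len(merge_similar_neurons(ws)), "neurons after merging")
```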

Optimization of Dynamic Neural Networks Considering Stability and Design of Controller for Nonlinear Systems (안정성을 고려한 동적 신경망의 최적화와 비선형 시스템 제어기 설계)

  • 유동완;전순용;서보혁
    • Journal of Institute of Control, Robotics and Systems / v.5 no.2 / pp.189-199 / 1999
  • This paper presents an optimization algorithm for a stable Self Dynamic Neural Network (SDNN) using a genetic algorithm. The optimized SDNN is applied to the problem of controlling nonlinear dynamical systems. The SDNN is a dynamic mapping and is better suited to dynamical systems than a static feedforward neural network. Real-time implementation is very important, so the neuro-controller must also be designed to converge within a relatively small number of training cycles. The SDNN has considerably fewer weights than a DNN, since there are no interconnections within the hidden layer. The aim of the proposed algorithm is to optimize the number of self-dynamic neuron nodes and the slopes of the activation functions simultaneously by genetic algorithms. To guarantee convergence, an analytic method based on a Lyapunov function is used to find a stable learning rule for the SDNN. The ability and effectiveness of the proposed optimized SDNN in identifying and controlling a nonlinear dynamic system, with stability taken into account, are demonstrated by case studies. (A toy GA sketch of this genome follows this entry.)

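A toy illustration of the genome the paper optimizes: the number of dynamic neuron nodes and the activation slope. Each genome is scored here by a cheap random-feature fit to a toy nonlinear plant rather than by actually training an SDNN, which is an assumption for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)   # toy nonlinear plant

def fitness(genome):
    """Score one candidate. `genome` encodes (number of hidden neurons,
    activation slope), the two quantities the paper optimizes by GA."""
    n_hidden, slope = max(1, int(genome[0])), abs(genome[1]) + 1e-3
    W = rng.normal(size=(1, n_hidden)); b = rng.normal(size=n_hidden)
    H = np.tanh(slope * (X @ W + b))                    # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)        # linear readout
    return -np.mean((H @ beta - y) ** 2)                # higher is better

# Minimal generational GA: truncation selection plus Gaussian mutation
pop = [np.array([rng.integers(1, 20), rng.uniform(0.1, 3)]) for _ in range(20)]
for gen in range(15):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]
    pop = [p + rng.normal(scale=[1.0, 0.1]) for p in parents for _ in range(4)]
best = max(pop, key=fitness)
print(f"best genome: {max(1, int(best[0]))} neurons, slope {abs(best[1]):.2f}")
```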