• Title/Summary/Keyword: neuron network


Design of a Neural Chip for Classifying Iris Flowers based on CMOS Analog Neurons

  • Choi, Yoon-Jin;Lee, Eun-Min;Jeong, Hang-Geun
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.5
    • /
    • pp.284-288
    • /
    • 2019
  • A calibration-free analog neuron circuit is proposed as a viable alternative to the power-hungry digital neuron for implementing a deep neural network. The conventional analog neuron requires calibration because a voltage-mode link is used between the soma and the synapse, which results in significant uncertainty in the current mapping. In this work, a current-mode link is used to establish a connection between the soma and the synapse that is robust against variations in the process and interconnection impedances. The hardware overhead incurred by adopting the current-mode link is estimated to be manageable because the number of neurons in each layer of the neural network is typically bounded. To demonstrate the utility of the proposed analog neuron, a simple neural network with a 4×7×3 architecture has been designed for classifying iris flowers. The chip is now under fabrication in 0.35 μm CMOS technology. Thus, the proposed true current-mode analog neuron can be a practical option for realizing power-efficient neural networks for edge computing.
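
As a point of reference for the 4×7×3 topology above, the following is a minimal software sketch of the same architecture on the iris data, using numpy and scikit-learn; it only mirrors the layer sizes and is not the authors' analog circuit (the weights are random and untrained).

```python
# Illustrative 4x7x3 feed-forward pass for iris classification.
# This is NOT the paper's analog circuit; it only mirrors the stated topology.
import numpy as np
from sklearn.datasets import load_iris  # assumes scikit-learn is available

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)            # 4 input features, 3 classes

W1 = rng.normal(size=(4, 7))                 # input layer -> 7 hidden neurons
W2 = rng.normal(size=(7, 3))                 # hidden layer -> 3 output neurons

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden = sigmoid(X @ W1)                     # soma outputs of the hidden layer
logits = hidden @ W2                         # summed synaptic inputs at the output
pred = logits.argmax(axis=1)                 # predicted iris class per sample

print("untrained accuracy:", (pred == y).mean())
```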

Charted Depth Interpolation: Neuron Network Approaches

  • Shi, Chaojian
    • Journal of Navigation and Port Research
    • /
    • v.28 no.7
    • /
    • pp.629-634
    • /
    • 2004
  • Continuous depth data are often required in applications of both onboard systems and maritime simulation, but the data available are usually discrete and irregularly distributed. Based on the neuron network technique, methods of interpolating the charted depth are suggested in this paper. Two algorithms, based respectively on Levenberg-Marquardt back-propagation and radial-basis function networks, are investigated. A dynamic neuron network system is developed which satisfies both real-time and mass-processing applications. Using a hyperbolic paraboloid and a typical chart area, the effectiveness of the algorithms is tested and an error analysis is presented. Special processing steps in practical applications, such as the partitioning of larger areas, normalization, and the selection of depth contour data, are also illustrated.
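
The radial-basis-function approach mentioned above can be illustrated with a short numpy sketch that interpolates scattered depth soundings; the point positions, synthetic depths, and Gaussian kernel width are assumptions for illustration, not the paper's data or settings.

```python
# Minimal radial-basis-function interpolation of scattered depth soundings.
# Illustrative only; point coordinates, depths and the kernel width are assumed.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(50, 2))               # known sounding positions (x, y)
depth = np.sin(pts[:, 0]) + 0.1 * pts[:, 1]          # synthetic charted depths

def gaussian_rbf(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)

# Solve for RBF weights so the network reproduces the known depths exactly.
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
weights = np.linalg.solve(gaussian_rbf(dist), depth)

def interpolate(query):
    """Estimate depth at arbitrary (x, y) positions on the chart."""
    r = np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=-1)
    return gaussian_rbf(r) @ weights

grid = np.array([[2.5, 3.0], [7.0, 8.5]])            # arbitrary query points
print(interpolate(grid))
```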

Charted Depth Interpolation: Neuron Network Approaches

  • Chaojian, Shi
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2004.08a
    • /
    • pp.37-44
    • /
    • 2004
  • Continuous depth data are often required in applications of both onboard systems and maritime simulation, but the data available are usually discrete and irregularly distributed. Based on the neuron network technique, methods of interpolating the charted depth are suggested in this paper. Two algorithms, based respectively on Levenberg-Marquardt back-propagation and radial-basis function networks, are investigated. A dynamic neuron network system is developed which satisfies both real-time and mass-processing applications. Using a hyperbolic paraboloid and a typical chart area, the effectiveness of the algorithms is tested and an error analysis is presented. Special processing steps in practical applications, such as the partitioning of larger areas, normalization, and the selection of depth contour data, are also illustrated.


Division of Working Area using Hopfield Network (Hopfield Network을 이용한 작업영역 분할)

  • 차영엽;최범식
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.160-160
    • /
    • 2000
  • An optimization approach is used to solve the problem of dividing the working area, and a cost function is defined to represent the constraints on the solution, which is then mapped onto a Hopfield neural network for minimization. Each neuron in the network represents a possible combination among many components. Division is achieved by initializing each neuron that represents a possible combination and then allowing the network to settle into a stable state. The network uses the initialized inputs and the compatibility measures among components in order to divide the working area.
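
A minimal sketch of the settling process described above, assuming arbitrary symmetric weights in place of the paper's compatibility measures: the Hopfield network is initialized and updated asynchronously until it reaches a stable, low-energy state.

```python
# Minimal discrete Hopfield update loop: the network settles into a stable
# (low-energy) state from an initial assignment. Weights here are arbitrary
# placeholders, not the paper's compatibility measures.
import numpy as np

rng = np.random.default_rng(2)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                     # symmetric weights -> energy never increases
np.fill_diagonal(W, 0.0)              # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)   # initial neuron states (the "initialization")
for _ in range(100):                  # asynchronous updates until stable
    i = rng.integers(n)
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0

print("final state:", s, "energy:", energy(s))
```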


Evolving Neural Network Controller for Stabilization of Inverted Pendulum System (도립 진자 시스템의 안정화를 위한 진화형 신경회로망 제어기)

  • Sim, Yeong-Jin;Lee, Jun-Tak
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.49 no.3
    • /
    • pp.157-163
    • /
    • 2000
  • In this paper, an Evolving Neural Network Controller (ENNC), whose structure and connection weights are optimized simultaneously by a Real-Variable Elitist Genetic Algorithm (RVEGA), is presented for the stabilization of a nonlinear Inverted Pendulum (IP) system. The proposed ENNC is described by a simple genetic chromosome, and the deletion of neurons, the designation of input or output neurons, and the activation function types are determined according to various flag types. Therefore, the connection weights, the structure, and the neuron types of the given ENNC can all be optimized by the proposed evolution strategy. Simulations show that the finally acquired optimal ENNC is successfully applied to the stabilization control of the IP system.
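
A rough sketch of the evolutionary idea, under stated assumptions: a real-valued elitist GA evolves the weights of a tiny controller network. The structure and activation flags of the paper's chromosome are omitted, and the fitness function is a placeholder rather than an inverted-pendulum simulation.

```python
# Minimal real-valued elitist GA evolving the weights of a tiny controller net.
# The fitness below is a stand-in (fit a reference mapping); in the paper it
# would come from simulating the inverted-pendulum dynamics.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden = 4, 5                       # pendulum state -> hidden -> 1 torque
n_genes = n_in * n_hidden + n_hidden        # flattened chromosome length

def controller(chrom, state):
    W1 = chrom[: n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = chrom[n_in * n_hidden:]
    return np.tanh(state @ W1) @ w2         # control output for each state

states = rng.normal(size=(32, n_in))        # sampled pendulum states
target = -states[:, 0] - 0.5 * states[:, 1] # placeholder "good" control law

def fitness(chrom):
    return -np.mean((controller(chrom, states) - target) ** 2)

pop = rng.normal(size=(40, n_genes))
for gen in range(200):
    scores = np.array([fitness(c) for c in pop])
    elite = pop[scores.argsort()[::-1][:10]]            # elitist selection
    parents = elite[rng.integers(10, size=(40, 2))]
    children = parents.mean(axis=1)                     # arithmetic crossover
    children += 0.1 * rng.normal(size=children.shape)   # Gaussian mutation
    pop = np.vstack([elite, children[: 40 - 10]])       # elites pass on unchanged

print("best fitness:", fitness(pop[0]))
```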


Using Higher Order Neuron on the Supervised Learning Machine of Kohonen Feature Map (고차 뉴런을 이용한 교사 학습기의 Kohonen Feature Map)

  • Jung, Jong-Soo;Hagiwara, Masafumi
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.5
    • /
    • pp.277-282
    • /
    • 2003
  • In this paper, we propose the use of higher-order neurons in a supervised learning machine based on the Kohonen feature map. The architecture of the proposed model adopts higher-order neurons in the input layer of the Kohonen feature map used as a supervised learning machine. Thanks to the higher-order neurons, the model is able to estimate boundaries in the input pattern space. However, higher-order neurons in the input layer increase the number of neuron weights; we alleviate this problem by restricting the higher-order neurons to second order. The higher-order neurons allow similar inputs to be mapped close together on the Kohonen feature map, so the network retains its topological mapping property. We have evaluated the recognition rate of the proposed model on the XOR problem, the discrimination of 20 alphabet patterns, the mirror-symmetry problem, and a numeral pattern recognition problem.
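
A minimal sketch of the central idea, assuming a 5×5 map and illustrative learning schedules: each input pattern is augmented with its second-order (pairwise product) terms before standard Kohonen-style best-matching-unit updates; the supervised labeling stage of the paper's model is not shown.

```python
# Sketch: second-order input expansion feeding a small Kohonen-style map.
# The supervised labeling step of the paper's model is omitted; map size,
# learning rate and neighborhood width are assumed values.
import numpy as np

rng = np.random.default_rng(4)

def second_order(x):
    """Augment a pattern with its pairwise products (second-order terms)."""
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate([x, quad])

X = rng.choice([0.0, 1.0], size=(200, 2))            # e.g. XOR-style binary inputs
Z = np.array([second_order(x) for x in X])           # expanded input patterns

map_w = rng.normal(size=(5, 5, Z.shape[1]))          # 5x5 Kohonen map weights
coords = np.stack(np.meshgrid(np.arange(5), np.arange(5), indexing="ij"), axis=-1)

for t, z in enumerate(Z):
    d = np.linalg.norm(map_w - z, axis=-1)
    bmu = np.unravel_index(d.argmin(), d.shape)       # best-matching unit
    sigma, lr = 2.0 * np.exp(-t / 100), 0.5 * np.exp(-t / 100)
    h = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
    map_w += lr * h[..., None] * (z - map_w)          # neighborhood update

print("trained map shape:", map_w.shape)
```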

A neuron computer model embedded Lukasiewicz' implication

  • Kobata, Kenji;Zhu, Hanxi;Aoyama, Tomoo;Yoshihara, Ikuo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.449-449
    • /
    • 2000
  • Many researchers have studied architectures for non-von Neumann computers in order to escape the von Neumann bottleneck. To avoid the bottleneck, a neuron-based computer has been developed. The computer consists only of neurons and their connections, which are constructed by learning, yet it still has information-processing facilities and at the same time acts like a simplified brain that makes inferences; it is called a "neuron-computer". Usually, no instructions are considered in a neural network; however, to complete complex processing on restricted computing resources, the processing must be reduced to primitive actions. Therefore, we introduce instructions into the neuron-computer, the most important of which is implication. Implication can be represented by binary operators, but general implications for multi-valued or fuzzy logics cannot; therefore, we need to use at least the Lukasiewicz operator. We investigated a neuron-computer having instructions for general implications. Using this computer, effective inference based on multi-valued logic can be executed rapidly in a small logical unit.
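
The Lukasiewicz implication the abstract relies on has a simple closed form over truth values in [0, 1], shown below; it coincides with classical implication on {0, 1} and extends it to multi-valued and fuzzy logic.

```python
# The Lukasiewicz implication for truth values in [0, 1]:
#   I(a, b) = min(1, 1 - a + b)
# It reduces to classical implication on {0, 1} and extends to fuzzy logic.
def lukasiewicz_implication(a: float, b: float) -> float:
    return min(1.0, 1.0 - a + b)

assert lukasiewicz_implication(1.0, 0.0) == 0.0   # true -> false is false
assert lukasiewicz_implication(0.0, 1.0) == 1.0   # false -> true is true
print(lukasiewicz_implication(0.8, 0.3))          # graded truth value 0.5
```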


SYNCHRONIZATION OF UNIDIRECTIONAL RING STRUCTURED IDENTICAL FITZHUGH-NAGUMO NETWORK UNDER IONIC AND EXTERNAL ELECTRICAL STIMULATIONS

  • Ibrahim, Malik Muhammad;Jung, Il Hyo
    • East Asian mathematical journal
    • /
    • v.36 no.5
    • /
    • pp.547-554
    • /
    • 2020
  • The synchronization of unidirectional identical FitzHugh-Nagumo systems coupled in a ring structure under ionic and external electrical stimulations is investigated. In this network, each neuron is connected only to the next neuron and transmits signals to it through synaptic strengths called gap junctions. Adaptive control theory and Lyapunov stability theory are used to propose a control scheme with necessary and sufficient conditions that guarantee the synchronization of the neuronal network. Finally, the effectiveness of the proposed scheme is shown through numerical simulations.
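
For intuition, a forward-Euler sketch of a unidirectional ring of FitzHugh-Nagumo neurons with gap-junction coupling is given below; the parameter values and coupling strength are illustrative assumptions, and the sketch does not include the paper's adaptive control law.

```python
# Sketch of a unidirectional ring of FitzHugh-Nagumo neurons with gap-junction
# coupling, integrated with forward Euler. Parameter values (a, b, c, g, I) are
# illustrative assumptions, not those of the paper.
import numpy as np

N, T, dt = 5, 20000, 0.01
a, b, c = 0.7, 0.8, 12.5
g, I_ext = 0.5, 0.5                       # coupling strength and external current

rng = np.random.default_rng(5)
v = rng.normal(size=N)                    # membrane potentials
w = np.zeros(N)                           # recovery variables

for _ in range(T):
    coupling = g * (np.roll(v, 1) - v)    # each neuron driven by its predecessor
    dv = v - v**3 / 3 - w + I_ext + coupling
    dw = (v + a - b * w) / c
    v, w = v + dt * dv, w + dt * dw

print("spread of membrane potentials:", v.max() - v.min())
```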

Conversion Tools of Spiking Deep Neural Network based on ONNX (ONNX기반 스파이킹 심층 신경망 변환 도구)

  • Park, Sangmin;Heo, Junyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.2
    • /
    • pp.165-170
    • /
    • 2020
  • The spiking neural network operates by a different mechanism than the existing neural network. The existing neural network transfers its output value to the next neuron through an activation function that does not take the biological mechanism of the neuron into account. In addition, deep structures such as VGGNet, ResNet, SSD, and YOLO have produced good results. Spiking neural networks, on the other hand, operate more like the biological mechanism of real neurons than the existing activation functions do, but studies of deep structures using spiking neurons have not been conducted as actively as those of deep neural networks using conventional neurons. This paper proposes a method of loading a deep neural network model built from conventional neurons into a conversion tool and converting it into a spiking deep neural network by replacing each conventional neuron with a spiking neuron.
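
The core rate-coding idea behind such conversions can be sketched in a few lines: over a time window, the firing rate of an integrate-and-fire neuron approximates the ReLU of its input. This is a conceptual illustration only, not the proposed ONNX-based tool.

```python
# Sketch of the core idea behind ReLU-to-spiking conversion: over a time window,
# the firing rate of an integrate-and-fire neuron approximates a ReLU of its
# input current. Conceptual illustration only, not the paper's ONNX tool.
import numpy as np

def if_rate(current, steps=1000, threshold=1.0, dt=0.01):
    """Spike rate of a simple integrate-and-fire neuron with reset-by-subtraction."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += current * dt                 # integrate the constant input current
        if v >= threshold:
            v -= threshold                # reset by subtraction
            spikes += 1
    return spikes / (steps * dt)

for x in (-0.5, 0.0, 0.7, 1.5):
    print(f"input {x:+.1f}: ReLU = {max(x, 0.0):.2f}, IF rate = {if_rate(x):.2f}")
```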

DESIGN OF CONTROLLER FOR NONLINEAR SYSTEM USING DYNAMIC NEURAL NETWORKS

  • Park, Seong-Wook;Seo, Bo-Hyeok
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1995.10a
    • /
    • pp.60-64
    • /
    • 1995
  • Conventional neural network models are only a crude imitation of biological neural structures and exhibit very slow learning. In order to emulate some dynamic functions, such as learning and adaptation, and to better reflect the dynamics of biological neurons, M.M. Gupta and D.H. Rao developed the dynamic neural unit (DNU). The proposed neural unit model introduces dynamics into the neuron transfer function, such that the neuron activity depends on internal states. Integrating a dynamic elementary processor within the neuron allows the neuron to produce a dynamic response. Numerical examples are presented for a model system, and these case studies show that the proposed DNU is useful in a practical sense.
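
A minimal sketch of a neuron with internal dynamics, as a simplified stand-in for the dynamic neural unit: a first-order internal state filters the input before the nonlinearity, so the output depends on past inputs as well as the current one. The coefficients below are assumed.

```python
# Minimal sketch of a neuron with internal dynamics: a first-order internal
# state filters the input before the nonlinearity, so the output depends on
# past inputs as well as the current one. Coefficients are assumed, and this
# is only a simplified stand-in for Gupta and Rao's dynamic neural unit.
import numpy as np

def dynamic_neuron(u, a=0.8, b=0.2, gain=2.0):
    """Run one dynamic neuron over an input sequence u[t]."""
    s, y = 0.0, []
    for u_t in u:
        s = a * s + b * u_t          # internal state: discrete first-order dynamics
        y.append(np.tanh(gain * s))  # static nonlinearity applied to the state
    return np.array(y)

u = np.concatenate([np.zeros(10), np.ones(30)])   # step input
print(np.round(dynamic_neuron(u), 3))             # output rises gradually, not instantly
```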
