• Title/Summary/Keyword: Multi Neural Network

Search results: 1,196

A Study on the Digital Implementation of Multi-layered Neural Networks for Pattern Recognition (패턴인식을 위한 다층 신경망의 디지털 구현에 관한 연구)

  • 박영석
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.2
    • /
    • pp.111-118
    • /
    • 2001
  • In this paper, in order to implement a multi-layered perceptron neural network using a pure digital logic circuit model, we propose a new logic neuron structure, a digital canonical multi-layered logic neural network structure, and a multi-stage multi-layered logic neural network structure for pattern recognition applications. We also show that the proposed approach provides an incremental additive learning algorithm that is very simple and effective.

  • PDF
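The "incremental additive learning" named in this abstract can be illustrated with a perceptron-style rule on binary inputs, where a wrong output only ever adds or subtracts whole increments from the weights, a property that suits digital-logic hardware. This is a minimal sketch under that assumption, not the paper's exact logic-neuron model:

```python
def step(z):
    return 1 if z >= 0 else 0

def train_additive(samples, epochs=10):
    """Perceptron-style incremental additive learning on binary inputs:
    when the output is wrong, weights are only incremented or decremented,
    never rescaled.  (Illustrative sketch; the paper's logic-neuron model
    differs in detail.)"""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for x, target in samples:
            y = step(w[0] * x[0] + w[1] * x[1] + b)
            if y != target:
                delta = target - y  # +1 or -1: a purely additive update
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
    return w, b

# Learn logical AND from its truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_additive(data)
print([step(w[0] * x[0] + w[1] * x[1] + b) for x, _ in data])  # [0, 0, 0, 1]
```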

A multi-modal neural network using Chebyschev polynomials

  • Ikuo Yoshihara;Tomoyuki Nakagawa;Moritoshi Yasunaga;Abe, Ken-ichi
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.250-253
    • /
    • 1998
  • This paper presents a multi-modal neural network composed of a preprocessing module and a multi-layer neural network module, designed to enhance the nonlinear characteristics of the neural network. The former module is based on a spectral method using Chebyschev polynomials and transforms input data into spectra. The latter module identifies the system using the spectra generated by the preprocessing module. Extensive numerical experiments show that the method is applicable to many nonlinear dynamic systems in the real world, and that preprocessing with Chebyschev polynomials reduces the number of neurons required in the multi-layer neural network.

  • PDF
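The preprocessing idea in this entry, expanding each input into Chebyshev polynomial terms before feeding the network, can be sketched with the standard three-term recurrence. This is a minimal illustration with a function name of my own, not the paper's implementation:

```python
def chebyshev_features(x, degree):
    """Expand a scalar x in [-1, 1] into Chebyshev terms T_0(x)..T_degree(x)
    using the recurrence T_{n+1}(x) = 2*x*T_n(x) - T_{n-1}(x)."""
    terms = [1.0, x]
    for _ in range(2, degree + 1):
        terms.append(2.0 * x * terms[-1] - terms[-2])
    return terms[:degree + 1]

# Each raw input becomes a small "spectrum" of polynomial features, which
# the downstream multi-layer network consumes instead of x itself.
features = chebyshev_features(0.5, 3)
print(features)  # [1.0, 0.5, -0.5, -1.0]
```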

Tuning the Architecture of Neural Networks for Multi-Class Classification (다집단 분류 인공신경망 모형의 아키텍쳐 튜닝)

  • Jeong, Chulwoo;Min, Jae H.
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.38 no.1
    • /
    • pp.139-152
    • /
    • 2013
  • The purpose of this study is to demonstrate the validity of tuning the architecture of neural network models for multi-class classification. A neural network model for multi-class classification is basically constructed by building a series of neural network models for binary classification. When building a neural network model, we must set parameter values such as the number of hidden nodes and the weight decay parameter in advance; this deserves special attention, as model performance can differ considerably depending on these values. For better performance, it is necessary to tune the parameters each time a neural network model is built. Nonetheless, previous studies have neither mentioned the necessity of the tuning process nor proved its validity. In this study, we argue that the parameters should be tuned every time a neural network model for multi-class classification is built. Through empirical analysis using wine data, we show that the performance of the model with tuned parameters is superior to that of untuned models.
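The tuning process this abstract argues for, searching over the number of hidden nodes and the weight-decay parameter before fixing each binary model, follows the usual grid-search pattern. A schematic sketch in which the evaluation function is a stand-in, not the study's wine-data experiment:

```python
import itertools

def tune(param_grid, evaluate):
    """Return the parameter combination with the highest validation score."""
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        # In the study this would be, e.g., cross-validated accuracy of one
        # binary neural network trained with these parameters.
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

grid = {"hidden_nodes": [4, 8, 16], "weight_decay": [0.0, 0.01, 0.1]}
# Dummy scorer standing in for training and validating a network:
best, score = tune(grid, lambda p: -abs(p["hidden_nodes"] - 8) - p["weight_decay"])
print(best)  # {'hidden_nodes': 8, 'weight_decay': 0.0}
```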

Channel Allocation Using Gradual Neural Network For Multi-User OFDM Systems (다중 사용자 OFDM시스템에서 Gradual Neural Network를 이용한 채널 할당)

  • Moon, Eun-Jin;Lee, Chang-Wook;Jeon, Gi-J.
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.240-242
    • /
    • 2004
  • A channel allocation algorithm for multi-user OFDM (orthogonal frequency division multiplexing) systems is presented. The proposed algorithm reduces system complexity by using a GNN (gradual neural network) with a gradual expansion scheme, and it attempts to allocate channels with good channel gain to each user. The method has lower computational complexity and requires fewer iterations than other algorithms.

  • PDF
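The allocation objective here, giving each user channels with good gain, can be illustrated with a simple greedy baseline. This is not the paper's GNN, just a sketch of the assignment problem it solves, with names of my own:

```python
def allocate_channels(gains, per_user):
    """Greedy baseline: visit (user, channel) pairs in order of decreasing
    channel gain; assign each channel to at most one user, with each user
    receiving at most `per_user` channels.  gains[u][c] is user u's gain
    on channel c."""
    pairs = sorted(
        ((g, u, c) for u, row in enumerate(gains) for c, g in enumerate(row)),
        reverse=True,
    )
    assignment, counts = {}, [0] * len(gains)
    for g, u, c in pairs:
        if c not in assignment and counts[u] < per_user:
            assignment[c] = u
            counts[u] += 1
    return assignment  # maps channel -> user

gains = [[0.9, 0.2, 0.4],   # user 0's gain on channels 0..2
         [0.3, 0.8, 0.7]]   # user 1's gain on channels 0..2
print(allocate_channels(gains, per_user=2))
```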

A multi-layered neural network learning procedure and generating architecture method for improving neural network learning capability (다층신경망의 학습능력 향상을 위한 학습과정 및 구조설계)

  • 이대식;이종태
    • Korean Management Science Review
    • /
    • v.18 no.2
    • /
    • pp.25-38
    • /
    • 2001
  • The well-known back-propagation algorithm for multi-layered neural networks has been applied successfully to pattern classification problems with remarkable flexibility. Recently, the multi-layered neural network has been used as a powerful data mining tool. Nevertheless, in many cases with complex classification boundaries, successful learning is not guaranteed, and the problems of long learning time and local minimum attraction restrict field application. In this paper, an improved learning procedure for multi-layered neural networks is proposed. The procedure is based on the generalized delta rule, but it is distinctive in that the architecture of the network is not fixed but enlarged during learning. That is, the number of hidden nodes or hidden layers is increased to help find the classification boundary, and this procedure is controlled by entropy evaluation. The learning speed and pattern classification performance are analyzed and compared with the back-propagation algorithm.

  • PDF
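The abstract's "entropy evaluation" for deciding when to enlarge the network can be illustrated with the Shannon entropy of a class distribution. The growth decision below is my own illustrative rule, not the paper's exact criterion:

```python
import math

def class_entropy(counts):
    """Shannon entropy (bits) of a class-frequency distribution, e.g. the
    mix of classes among patterns the current network still misclassifies."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return sum(-p * math.log2(p) for p in probs)

# A perfectly mixed two-class region has entropy 1 bit; a pure one, 0 bits.
print(class_entropy([50, 50]), class_entropy([100, 0]))  # 1.0 0.0

def should_grow(misclassified_counts, threshold=0.5):
    """Illustrative growth decision: add a hidden node (or layer) while the
    misclassified patterns remain strongly mixed across classes."""
    return class_entropy(misclassified_counts) > threshold
```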

Modular Neural Network Using Recurrent Neural Network (궤환 신경회로망을 사용한 모듈라 네트워크)

  • 최우경;김성주;서재용;전흥태
    • Proceedings of the IEEK Conference
    • /
    • 2003.07d
    • /
    • pp.1565-1568
    • /
    • 2003
  • In this paper, we propose a modular network to solve difficult and complex problems that are seldom solved with a multi-layer neural network. The modular neural network structure studied by Jacobs and Jordan is adopted in this paper. The modular network consists of several expert networks and a gating network, which is composed of a single-layer or multi-layer neural network. We propose a modular network structure using a recurrent neural network, since the state of the whole network at a particular time depends on an aggregate of previous states as well as on the current input. Finally, we show the superiority of the proposed network compared with the conventional modular network.

  • PDF
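The Jacobs/Jordan combination step named above, a gating network blending expert outputs with softmax weights, can be sketched in a few lines. This is a minimal scalar-output illustration, not the paper's recurrent variant:

```python
import math

def softmax(zs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def modular_forward(expert_outputs, gate_logits):
    """Jacobs/Jordan-style mixture: the gating network's softmax weights
    blend the expert networks' outputs into one prediction."""
    weights = softmax(gate_logits)
    return sum(w * y for w, y in zip(weights, expert_outputs))

# Two experts disagree; the gate trusts the second one more, so the
# combined output lands close to that expert's answer.
y = modular_forward([0.0, 1.0], [0.0, 2.0])
print(round(y, 3))
```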

Speech Recognition of Multi-Syllable Words Using Soft Computing Techniques (소프트컴퓨팅 기법을 이용한 다음절 단어의 음성인식)

  • Lee, Jong-Soo;Yoon, Ji-Won
    • Transactions of the Society of Information Storage Systems
    • /
    • v.6 no.1
    • /
    • pp.18-24
    • /
    • 2010
  • The performance of speech recognition depends mainly on uncertain factors such as the speaker's condition and environmental effects. The present study deals with the recognition of a number of multi-syllable isolated Korean words using soft computing techniques such as a back-propagation neural network, a fuzzy inference system, and a fuzzy neural network. Feature patterns for speech recognition are analyzed as thirty frames of 12th-order coefficients, normalized by linear predictive coding and cepstrum analysis. Using four speech recognizer models, experiments are conducted for both single speakers and multiple speakers. The study shows that the recognizer combining fuzzy logic with a back-propagation neural network, and the fuzzy neural network, achieve better recognition performance.
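The linear predictive coding features mentioned in this abstract are conventionally computed per frame via the autocorrelation method and the Levinson-Durbin recursion. A textbook sketch of that step, assuming nothing about the study's exact pipeline:

```python
def lpc(signal, order):
    """Estimate LPC coefficients via autocorrelation and the
    Levinson-Durbin recursion: x[t] is approximated by
    sum_k a[k] * x[t-k-1]."""
    n = len(signal)
    r = [sum(signal[i] * signal[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = []
    err = r[0]
    for k in range(order):
        acc = r[k + 1] - sum(a[j] * r[k - j] for j in range(k))
        refl = acc / err                        # reflection coefficient
        a = [a[j] - refl * a[k - 1 - j] for j in range(k)] + [refl]
        err *= (1.0 - refl * refl)              # remaining prediction error
    return a

# A pure decaying exponential x[t] = 0.9**t is first-order predictable,
# so a 1st-order LPC fit should recover a coefficient close to 0.9.
coeffs = lpc([0.9 ** t for t in range(200)], order=1)
print(coeffs)
```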

In-Process Monitoring of Chatter Vibration using Multiple Neural Network(II) (복합 신경회로망을 이용한 채터진동의 인프로세스 감시(II))

  • Kim, Jeong-Suk;Kang, Myeong-Chang;Park, Cheol
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.12 no.12
    • /
    • pp.100-108
    • /
    • 1995
  • In-process monitoring of chatter vibration is required for automatic manufacturing systems. In this study, we constructed a multi-sensing system using a tool dynamometer, an accelerometer, and an AE (Acoustic Emission) sensor for more reliable detection of chatter vibration. A new approach using a multiple neural network to extract the features of the multi-sensor signals for the recognition of chatter vibration is proposed. With back-propagation training, the neural network memorizes and classifies the features of the multi-sensor signals. As a result, it is shown that the multiple neural network can monitor chatter vibration accurately and can be widely used in practical unmanned systems.

  • PDF
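The multi-sensor front end described above typically reduces each sensor trace (force, acceleration, acoustic emission) to a few features that are then concatenated into one input vector for the network. A hedged sketch with RMS energy as a stand-in feature; the function names and toy traces are mine, not the study's:

```python
import math

def rms(samples):
    """Root-mean-square energy, a common per-sensor vibration feature."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def fuse_features(sensor_signals):
    """Concatenate one feature per sensor (e.g. dynamometer, accelerometer,
    AE) into a single input vector for a neural-network classifier."""
    return [rms(sig) for sig in sensor_signals]

force = [0.0, 1.0, 0.0, -1.0] * 25   # toy dynamometer trace
accel = [0.5] * 100                  # toy accelerometer trace
ae    = [0.0] * 100                  # toy acoustic-emission trace
features = fuse_features([force, accel, ae])
print(features)
```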

LSTM Network with Tracking Association for Multi-Object Tracking

  • Farhodov, Xurshedjon;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.10
    • /
    • pp.1236-1249
    • /
    • 2020
  • In recent object tracking research, strategies based on Convolutional Neural Networks and Recurrent Neural Networks have become relevant for resolving notable challenges such as occlusion, motion, object and camera viewpoint variations, changing numbers of targets, and lighting variations. In this paper, an LSTM network-based tracking association method is proposed that is capable of real-time multi-object tracking: an LSTM network associated with the tracking process supports long-term tracking while addressing these challenges. The LSTM network is defined in Keras as a sequence of layers, with the Sequential class serving as a container for them; the proposed network structure integrates tracking association on top of the Keras neural network library. The tracking process is associated with the LSTM network's learned features and achieves strong real-time detection and tracking performance. The main focus of this work is learning the locations, appearance, and motion details of trackable objects, then predicting the locations of object bounding boxes from their initial positions. The performance of the joint object tracking system shows that the LSTM network is powerful and capable of real-time multi-object tracking.
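The LSTM building block behind this tracker can be illustrated with a single-unit cell step in plain Python (scalar input and state). The gate weights below are illustrative values, not the paper's trained network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell.  `w` maps each gate name to
    (input weight, recurrent weight, bias)."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    c = f * c_prev + i * g        # cell state carries long-term memory
    h = o * math.tanh(c)          # hidden state is this step's output
    return h, c

w = {k: (1.0, 0.5, 0.0) for k in "ifog"}  # illustrative shared weights
h, c = 0.0, 0.0
for x in [0.2, -0.1, 0.4]:    # a short input sequence, e.g. motion features
    h, c = lstm_step(x, h, c, w)
print(h, c)
```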

Recurrent Based Modular Neural Network

  • Yon, Jung-Heum;Park, Woo-Kyung;Kim, Yong-Min;Jeon, Hong-Tae
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.694-697
    • /
    • 2003
  • In this paper, we propose a modular network to solve difficult and complex problems that are seldom solved with a Multi-Layer Neural Network (MLNN). The Modular Neural Network (MNN) structure studied by Jacobs and Jordan is adopted in this paper. The modular network consists of several Expert Networks (EN) and a Gating Network (GN), which is composed of a single-layer neural network (SLNN) or a multi-layer neural network. We propose a modular network structure using a Recurrent Neural Network (RNN), since the state of the whole network at a particular time depends on an aggregate of previous states as well as on the current input. Finally, we show the superiority of the proposed network compared with the conventional modular network.

  • PDF