• Title/Summary/Keyword: 신경망 회로 (neural network circuits)

Search Result 1,013, Processing Time 0.031 seconds

Neuro-Fuzzy Networks (뉴로-퍼지 회로망)

  • 이민호;박철훈;이수영
    • ICROS
    • /
    • v.1 no.3
    • /
    • pp.83-91
    • /
    • 1995
  • This article briefly reviews neuro-fuzzy fusion techniques, which exploit the strengths of neural networks and of fuzzy logic while compensating for their respective weaknesses, together with current research trends. A new neuro-fuzzy network is introduced that can process not only unstructured but also structured information within the neural network framework. The introduced neuro-fuzzy network not only compensates well for the errors caused by fuzzification and defuzzification, but also finds the optimal center points and shapes of the input-output fuzzy membership functions. Moreover, for an arbitrary nonlinear dynamic system whose characteristics are unknown but from which input-output data can be obtained, the fuzzy rules that model the system can be expressed both linguistically and numerically, and simulation results on a simple example are presented. The introduced neuro-fuzzy network can be used to construct a neuro-fuzzy controller, and also to find the inverse fuzzy rules of a system. Future work should develop neuro-fuzzy networks with better generalization performance, along with methods for obtaining sufficient input-output data.


Isolated Digit Recognition Combined with Recurrent Neural Prediction Models and Chaotic Neural Networks (회귀예측 신경모델과 카오스 신경회로망을 결합한 고립 숫자음 인식)

  • Kim, Seok-Hyun;Ryeo, Ji-Hwan
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.6
    • /
    • pp.129-135
    • /
    • 1998
  • In this paper, the recognition rate for isolated digits is improved using multiple neural networks that combine chaotic recurrent neural networks and an MLP. Overall, the recognition rate increased by 1.2% to 2.5%. The experiments show that the recognition rate improves because the MLP and the CRNN (chaotic recurrent neural network) compensate for each other. In addition, the chaotic dynamic properties further aid speech recognition. The best recognition rate was obtained when the MLP was combined with multiple chaotic recurrent neural networks. However, in terms of algorithmic simplicity and reliability, combining the MLP with a single chaotic recurrent neural network has better properties. Broadly, the MLP recognizes the Korean digits "il" and "oh" very well, while the chaotic recurrent neural network performs best on "young", "sam", and "chil".

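The complementary-model idea in the abstract above can be sketched as a simple score-level ensemble. This is an illustrative reconstruction, not the paper's implementation: the function names, the score-vector representation, and the equal blending weight are all assumptions.

```python
import numpy as np

def combine_scores(mlp_scores, crnn_scores, weight=0.5):
    """Blend the per-digit score vectors of the MLP and the chaotic
    recurrent neural network (CRNN) so the two models can compensate
    for each other's errors. `weight` is the MLP's share of the blend."""
    mlp_scores = np.asarray(mlp_scores, dtype=float)
    crnn_scores = np.asarray(crnn_scores, dtype=float)
    return weight * mlp_scores + (1.0 - weight) * crnn_scores

def recognize(mlp_scores, crnn_scores):
    """Return the digit class with the highest combined score."""
    return int(np.argmax(combine_scores(mlp_scores, crnn_scores)))
```

With equal weights, a digit that either model scores confidently and correctly can dominate the blend even when the other model is unsure, which is one simple way two recognizers with different per-digit strengths (e.g. "il"/"oh" vs. "young"/"sam"/"chil") can complement each other.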

Robust Speed Control of AC Permanent Magnet Synchronous Motor using RBF Neural Network (RBF 신경회로망을 이용한 교류 동기 모터의 강인 속도 제어)

  • 김은태;이성열
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.40 no.4
    • /
    • pp.243-250
    • /
    • 2003
  • In this paper, a speed controller for a permanent-magnet synchronous motor (PMSM) using an RBF neural network (NN) disturbance observer is proposed. The suggested controller is designed using the input-output feedback linearization technique for the nominal model of the PMSM and incorporates the RBF NN disturbance observer to compensate for the system uncertainties. Because the RBF NN disturbance observer, which estimates the variation of the system parameters and the load torque, is employed, the proposed algorithm is robust against the uncertainties of the system. Finally, a computer simulation is carried out to verify the effectiveness of the proposed method.

Optimal ATM Traffic Shaping Method Using the Backpropagation Neural Network (신경회로망을 이용한 최적의 ATM 트래픽 형태 제어 방법)

  • 한성일;이배호
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1996.10a
    • /
    • pp.215-218
    • /
    • 1996
  • Because ATM networks use statistical multiplexing, which allocates more than the actually available bandwidth, failing to properly manage the traffic flow through the network causes congestion, cell loss, and degradation of network performance. To prevent these situations, reduce the burstiness of cell arrival times, improve cell-loss characteristics, and thereby increase network performance, a traffic shaping method is proposed. The traffic shaping parameter values are predicted by applying a backpropagation neural network, and shaping is performed using these predicted values. The performance of the proposed shaping scheme is obtained by computer simulation with Poisson traffic input, and is evaluated by measuring the maximum buffer size at the multiplexer.


The Capacity of Core-Net : Multi-Level 2-Layer Neural Networks (2층 다단 신경망회로 코어넷의 처리용량에 관한 연구)

  • Park, Jong-Jun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.8
    • /
    • pp.2098-2115
    • /
    • 1999
  • One of the unsolved problems in neural networks is the interpretation of hidden layers. This paper defines the Core-Net, a two-layer basic circuit of a neural network with a p-level input and a q-level output. We have suggested an equation, $a_{p,q} = \frac{q^2}{2}p(p-1) - \frac{q}{2}(3p^2 - 7p + 2) + p^2 - 3p + 2$, which gives the capacity of the Core-Net, and have proved it by mathematical induction. It has also been shown that some of the problems with hidden layers can be solved by using the Core-Net, as demonstrated by the simulation of an example.

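The capacity formula stated in the abstract above can be evaluated directly. The following is a transcription of that equation into exact integer arithmetic (both halved terms have even numerators, so the division is exact); it is not code from the paper.

```python
def core_net_capacity(p: int, q: int) -> int:
    """Capacity a_{p,q} of a Core-Net with a p-level input and a
    q-level output:

        a_{p,q} = (q^2/2) p(p-1) - (q/2)(3p^2 - 7p + 2) + p^2 - 3p + 2

    p(p-1) is always even, and q(3p^2 - 7p + 2) = q(3p-1)(p-2) is even
    for both even and odd p, so the halved part is an integer.
    """
    halved = q * q * p * (p - 1) - q * (3 * p * p - 7 * p + 2)
    return halved // 2 + p * p - 3 * p + 2
```

For instance, `core_net_capacity(2, 2)` evaluates to 4 and `core_net_capacity(3, 2)` to 6, following directly from the formula.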

A Study on Compression of Connections in Deep Artificial Neural Networks (인공신경망의 연결압축에 대한 연구)

  • Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.22 no.5
    • /
    • pp.17-24
    • /
    • 2017
  • Recently, deep learning technologies using large or deep artificial neural networks have shown remarkable performance, and the increasing size of the network contributes to its performance improvement. However, the increase in the size of the neural network leads to an increase in the amount of computation, which causes problems such as circuit complexity, price, heat generation, and real-time constraints. In this paper, we propose and test a method to reduce the number of network connections by effectively pruning the redundancy in the connections while keeping the difference from the original neural network's performance within a desired range. In particular, we propose a simple method to improve performance by re-learning, and to guarantee the desired performance by allocating an error rate per layer in order to account for the differences between layers. Experiments were performed on typical neural network structures such as FCN (fully connected network) and CNN (convolutional neural network), and confirmed that performance similar to that of the original neural network can be obtained with only about 1/10 of the connections.
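The connection-pruning idea above can be sketched in its simplest magnitude-based form: keep only the largest weights and zero out the rest. This is a minimal illustration under that assumption; the paper's per-layer error-rate allocation and re-learning steps are omitted, and the function name and keep ratio are illustrative.

```python
import numpy as np

def prune_weights(weights, keep_ratio=0.1):
    """Zero out all but the largest-magnitude fraction of connections.

    Returns the pruned weight array and a boolean mask of survivors,
    so the mask can later be reused to keep pruned weights at zero
    during any re-learning pass."""
    w = np.asarray(weights, dtype=float)
    k = max(1, int(round(w.size * keep_ratio)))    # connections to keep
    threshold = np.sort(np.abs(w), axis=None)[-k]  # k-th largest magnitude
    mask = np.abs(w) >= threshold                  # surviving connections
    return w * mask, mask
```

With `keep_ratio=0.1` this retains roughly 1/10 of the connections, matching the compression level reported in the abstract.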

New application of Neural Network for DC motor speed control (직류전동기의 속도제어를 위한 신경회로망의 새로운 적용)

  • 박왈서
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.18 no.2
    • /
    • pp.63-67
    • /
    • 2004
  • Neural networks are used in many control fields. When a neural network is used as a controller, it needs to be trained on input-output patterns. But in many control applications, input-output patterns for the neural network controller cannot be obtained. As a method of solving this problem, this paper tries a new control method in which the output node of the neural network drives the controlled object directly. By applying this new control method, the problem of obtaining input-output data for the neural network controller is resolved. The effectiveness of the proposed control algorithm is verified by simulation results for a DC servo motor.

Design Method for an MLP Neural Network Which Minimizes the Effect by the Quantization of the Weights and the Neuron Outputs (가중치 뉴런 출력의 양자화 영향을 최소화하는 다층퍼셉트론 신경망 설계 방법)

  • Gwon, O-Jun;Bang, Seung-Yang
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.12
    • /
    • pp.1383-1392
    • /
    • 1999
  • When we implement a trained multilayer perceptron with digital VLSI technology, we generally have to quantize the weights and the neuron outputs. These quantizations eventually cause distortion in the output of the network for a given input. In this paper, we first make a statistical analysis of the effect of the quantization on the output of the network. The analysis reveals that the sum of the squared input components and the magnitudes of the weights are the major factors contributing to the quantization effect. Using this result, we present a design method for an MLP that minimizes the quantization effect when the precision of the quantization is given. To show the effectiveness of the proposed method, we developed a network by our method and compared it with one developed by regular backpropagation. The experiments confirm that the network developed by our method performs better even at low quantization precision.
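The distortion being analyzed above can be illustrated with uniform fixed-point quantization of a linear neuron's weights. This is a minimal sketch under simple assumptions (symmetric range, round-to-nearest); it shows the quantization effect itself, not the paper's design method.

```python
import numpy as np

def quantize(values, bits, value_range=1.0):
    """Uniformly quantize values in [-value_range, value_range]
    to 2**bits levels (round-to-nearest, then clip to the range)."""
    step = 2.0 * value_range / (2 ** bits)  # quantization step size
    return np.clip(np.round(values / step) * step, -value_range, value_range)

def output_distortion(x, w, bits):
    """Distortion of a linear neuron's output x.w when only the
    weights are quantized: (x . Q(w)) - (x . w)."""
    return float(np.dot(x, quantize(w, bits)) - np.dot(x, w))
```

Each quantized weight is off by at most half a step, so the output distortion is bounded by the input magnitudes times step/2, consistent with the analysis that large inputs and large weights dominate the quantization effect.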

Hybrid Word-Character Neural Network Model for the Improvement of Document Classification (문서 분류의 개선을 위한 단어-문자 혼합 신경망 모델)

  • Hong, Daeyoung;Shim, Kyuseok
    • Journal of KIISE
    • /
    • v.44 no.12
    • /
    • pp.1290-1295
    • /
    • 2017
  • Document classification, the task of assigning a category to each document based on its text, is one of the fundamental areas of natural language processing. It may be used in various fields such as topic classification and sentiment classification. Neural network models for document classification can be divided into two categories: word-level models and character-level models, which treat words and characters as basic units, respectively. In this study, we propose a neural network model that combines character-level and word-level models to improve the performance of document classification. The proposed model extracts the feature vector of each word by combining information obtained from a word embedding matrix with information encoded by a character-level neural network. Based on the feature vectors of the words, the model classifies documents with a hierarchical structure in which recurrent neural networks with attention mechanisms are used at both the word and sentence levels. Experiments on real-life datasets demonstrate the effectiveness of the proposed model.