• Title/Summary/Keyword: Neural Network Learning

An Immune-Fuzzy Neural Network For Dynamic System

  • Kim, Dong-Hwa;Cho, Jae-Hoon
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.10a
    • /
    • pp.303-308
    • /
    • 2004
  • Fuzzy logic, neural networks, and fuzzy-neural networks play an important role as key technologies of linguistic modeling for intelligent control and decision making in complex systems. Fuzzy-neural network (FNN) learning represents one of the most effective algorithms for building such linguistic models. This paper proposes a learning approach for fuzzy-neural networks based on an immune algorithm. The proposed learning model is presented as an immune-based fuzzy-neural network (FNN) that can handle linguistic knowledge through the immune algorithm. The learning algorithm of the immune-based FNN is composed of two phases. The first phase is used to find the initial membership functions of the fuzzy neural network model. In the second phase, a new immune-algorithm-based optimization is proposed for tuning the membership functions and the structure of the proposed model.

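The second phase described above can be sketched as a clonal-selection-style immune algorithm: a population of candidate parameter vectors (antibodies) is evaluated, the fittest are cloned and mutated, and the best survivors replace the worst. The toy target membership function, population sizes, and mutation steps below are illustrative assumptions, not the paper's settings.

```python
import random, math

def membership(x, c, s):
    # Gaussian membership function with center c and width s
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def affinity(params, samples):
    # Higher affinity = lower squared error against a toy target membership
    c, s = params
    err = sum((membership(x, c, s) - membership(x, 0.5, 0.2)) ** 2 for x in samples)
    return -err

def immune_tune(pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    samples = [i / 20 for i in range(21)]
    pop = [(rng.uniform(0, 1), rng.uniform(0.05, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: affinity(p, samples), reverse=True)
        clones = []
        for rank, (c, s) in enumerate(pop[:5]):   # clone the best antibodies
            step = 0.3 * (rank + 1) / 5           # worse rank -> stronger mutation
            for _ in range(3):
                clones.append((c + rng.gauss(0, step),
                               max(0.01, s + rng.gauss(0, step))))
        # survivors: best of parents plus mutated clones
        pop = sorted(pop + clones, key=lambda p: affinity(p, samples),
                     reverse=True)[:pop_size]
    return pop[0]

best = immune_tune()
```

In a full FNN the parameter vector would hold every membership function's center and width (and structural choices), but the select-clone-mutate loop is the same.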
Fuzzy Neural Network Model Using Asymmetric Fuzzy Learning Rates (비대칭 퍼지 학습률을 이용한 퍼지 신경회로망 모델)

  • Kim Yong-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.7
    • /
    • pp.800-804
    • /
    • 2005
  • This paper presents a fuzzy learning rule which is a fuzzified version of LVQ (Learning Vector Quantization). This fuzzy learning rule uses fuzzy learning rates instead of the traditional learning rates. LVQ uses the same learning rate regardless of the correctness of classification, but the new fuzzy learning rule uses different learning rates depending on whether the classification is correct or not. The new fuzzy learning rule is integrated into the improved IAFC (Integrated Adaptive Fuzzy Clustering) neural network. The improved IAFC neural network is both stable and plastic. The iris data set is used to compare the performance of the supervised IAFC neural network 3 with that of the backpropagation neural network. The results show that the supervised IAFC neural network 3 performs better than the backpropagation neural network.
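The asymmetric-rate idea in the abstract can be sketched in a minimal one-dimensional LVQ loop: the winning prototype is attracted toward a correctly classified sample with one rate and repelled from a misclassified sample with a different rate. The rates and data here are plain constants for illustration, not the paper's fuzzy learning rates.

```python
def train_lvq(samples, prototypes, rate_correct=0.3, rate_wrong=0.1, epochs=30):
    for _ in range(epochs):
        for x, label in samples:
            # find the winning (nearest) prototype
            j = min(range(len(prototypes)),
                    key=lambda k: abs(prototypes[k][0] - x))
            w, plabel = prototypes[j]
            if plabel == label:
                w += rate_correct * (x - w)   # attract on correct classification
            else:
                w -= rate_wrong * (x - w)     # repel on misclassification
            prototypes[j] = (w, plabel)
    return prototypes

samples = [(0.1, 'a'), (0.2, 'a'), (0.8, 'b'), (0.9, 'b')]
protos = train_lvq(samples, [(0.4, 'a'), (0.6, 'b')])
```

After training, each prototype settles near the center of its class; the asymmetry simply makes correct hits pull harder than errors push.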

A multi-layered neural network learning procedure and architecture generation method for improving neural network learning capability (다층신경망의 학습능력 향상을 위한 학습과정 및 구조설계)

  • 이대식;이종태
    • Korean Management Science Review
    • /
    • v.18 no.2
    • /
    • pp.25-38
    • /
    • 2001
  • The well-known back-propagation algorithm for multi-layered neural networks has been successfully applied to pattern classification problems with remarkable flexibility. Recently, the multi-layered neural network has been used as a powerful data mining tool. Nevertheless, in many cases with a complex classification boundary, successful learning is not guaranteed, and the problems of long learning time and attraction to local minima restrict field application. In this paper, an improved learning procedure for multi-layered neural networks is proposed. The procedure is based on the generalized delta rule, but it is distinctive in that the architecture of the network is not fixed but enlarged during learning. That is, the number of hidden nodes or hidden layers is increased to help find the classification boundary, and this procedure is controlled by entropy evaluation. The learning speed and pattern classification performance are analyzed and compared with the back-propagation algorithm.

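The entropy-controlled growth decision in the abstract can be sketched as follows. The trainer itself is omitted; only the entropy measure and the grow-or-keep decision are shown, and the threshold is an assumed value, not the paper's.

```python
import math

def entropy(class_counts):
    # Shannon entropy (bits) of a class distribution, e.g. of misclassified samples
    total = sum(class_counts)
    if total == 0:
        return 0.0
    ps = [c / total for c in class_counts if c > 0]
    return -sum(p * math.log2(p) for p in ps)

def maybe_grow(hidden_nodes, misclassified_per_class, threshold=0.9):
    # High entropy over the misclassified samples suggests the boundary is
    # still unresolved everywhere, so add capacity; low entropy means the
    # remaining errors are concentrated and more training may suffice.
    if entropy(misclassified_per_class) > threshold:
        return hidden_nodes + 1
    return hidden_nodes

h = 4
h = maybe_grow(h, [10, 9])   # errors spread evenly -> entropy near 1 bit -> grow
h = maybe_grow(h, [1, 0])    # errors concentrated in one class -> keep
```

In the paper's procedure this check would run between training sweeps of the generalized delta rule, enlarging the network only when learning stalls.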
A Robust Nonlinear Control Using the Neural Network Model on System Uncertainty (시스템의 불확실성에 대한 신경망 모델을 통한 강인한 비선형 제어)

  • 이수영;정명진
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.43 no.5
    • /
    • pp.838-847
    • /
    • 1994
  • Although there is an analytical proof of the modeling capability of neural networks, convergence error in nonlinearity modeling is inevitable, since practical learning algorithms based on steepest descent do not guarantee convergence of the modeling error. Therefore, it is difficult to apply a neural network to a control system in critical environments under an on-line learning scheme. Although convergence of a neural network's modeling error is not guaranteed by practical learning algorithms, convergence, or boundedness, of the control system's tracking error can be achieved if a proper feedback control law is combined with the neural network model to handle the modeling error. In this paper, a neural network is introduced to compensate system uncertainty in controlling a nonlinear dynamic system. To suppress the inevitable modeling error of the neural network, an iterative neural network learning control algorithm is proposed as a virtual on-line realization of the Adaptive Variable Structure Controller. The efficiency of the proposed control scheme is verified by computer simulation of the dynamics control of a 2-link robot manipulator.

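The central claim above — feedback bounds the tracking error while a learned model compensates the uncertainty — can be illustrated on a scalar toy plant. The "network" here is a one-parameter stand-in adapted by gradient descent; the plant, gains, and uncertainty are assumptions made for the sketch, not the paper's robot dynamics.

```python
def simulate(steps=200, dt=0.01, k=5.0, lr=2.0):
    x, w = 1.0, 0.0           # state, and model weight estimating d(x) = 2*x
    for _ in range(steps):
        d = 2.0 * x           # true uncertainty (unknown to the controller)
        u = -k * x - w * x    # feedback law plus learned compensation
        xdot = x + u + d      # plant: xdot = x + u + d(x)
        # adapt the model weight on the current modeling error (d - w*x)
        w += lr * (d - w * x) * x * dt
        x += xdot * dt        # Euler integration step
    return x, w

x_final, w_final = simulate()
```

Even though the weight never converges exactly to the true value 2, the feedback term keeps the closed loop stable throughout, which is the boundedness argument the abstract makes.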
Development and Speed Comparison of Convolutional Neural Network Using CUDA (CUDA를 이용한 Convolutional Neural Network의 구현 및 속도 비교)

  • Ki, Cheol-min;Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.335-338
    • /
    • 2017
  • Artificial Intelligence and Deep Learning are currently social issues, and these technologies are applied to various fields. One effective method among the various algorithms in Artificial Intelligence is the Convolutional Neural Network. A Convolutional Neural Network adds convolution layers, which extract features by convolution operations, to a general neural network. If you use a Convolutional Neural Network with a small amount of data, or if the layer structure is not complicated, you don't have to pay attention to speed. But learning time grows long as the size of the learning data increases and the layer structure becomes complicated, so GPU-based parallel processing is widely used. In this paper, we implemented a Convolutional Neural Network using CUDA, and its learning speed is faster and more efficient than the CPU-based method.

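The reason convolution benefits so much from a GPU is that it can be recast as one large matrix multiply. As a CPU-side illustration of that recasting (not the paper's CUDA code), compare a naive loop convolution with an im2col-style vectorized one; on a GPU the same patch-gathering is what turns the layer into a single parallel matrix product.

```python
import numpy as np

def conv2d_naive(img, ker):
    # direct definition: slide the kernel and sum elementwise products
    H, W = img.shape
    kh, kw = ker.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def conv2d_im2col(img, ker):
    # gather every kh*kw patch into a row, then do one matrix-vector product
    H, W = img.shape
    kh, kw = ker.shape
    cols = np.array([img[i:i + kh, j:j + kw].ravel()
                     for i in range(H - kh + 1)
                     for j in range(W - kw + 1)])
    return (cols @ ker.ravel()).reshape(H - kh + 1, W - kw + 1)

rng = np.random.default_rng(0)
img, ker = rng.standard_normal((16, 16)), rng.standard_normal((3, 3))
same = np.allclose(conv2d_naive(img, ker), conv2d_im2col(img, ker))
```

Both functions produce identical outputs; only the arithmetic layout changes, which is exactly the degree of freedom a CUDA implementation exploits.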
Introduction to convolutional neural network using Keras; an understanding from a statistician

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.6
    • /
    • pp.591-610
    • /
    • 2019
  • Deep Learning is one of the machine learning methods for finding features in huge data using non-linear transformations. It is now commonly used for supervised learning in many fields. In particular, the Convolutional Neural Network (CNN) has been the best technique for image classification since 2012. For users who consider deep learning models for real-world applications, Keras is a popular API for neural networks written in Python that can also be used in R. We examine the parameter estimation procedures of Deep Neural Networks and the structures of CNN models from basics to advanced techniques. We also try to figure out some crucial steps in CNN that can improve image classification performance on the CIFAR10 dataset using Keras. We found that several stacks of convolutional layers and batch normalization could improve prediction performance. We also compared image classification performance with other machine learning methods, including K-Nearest Neighbors (K-NN), Random Forest, and XGBoost, on both the MNIST and CIFAR10 datasets.
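The batch normalization step the authors found helpful is a small transform worth seeing outside Keras. A minimal NumPy sketch of the training-time forward pass (gamma and beta are the learnable scale and shift; the sample data is invented for illustration):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # normalize each feature (column) over the batch (rows), then rescale
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta

x = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])
y = batch_norm(x)
```

In Keras the same operation (plus running statistics for inference) is a `BatchNormalization` layer inserted between a convolution and its activation.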

Supervised Learning Artificial Neural Network Parameter Optimization and Activation Function Basic Training Method using Spreadsheets (스프레드시트를 활용한 지도학습 인공신경망 매개변수 최적화와 활성화함수 기초교육방법)

  • Hur, Kyeong
    • Journal of Practical Engineering Education
    • /
    • v.13 no.2
    • /
    • pp.233-242
    • /
    • 2021
  • In this paper, as a liberal arts course for non-majors, we propose a supervised learning artificial neural network parameter optimization method and a basic education method for activation functions, in order to design a basic artificial neural network curriculum. For this, a method of finding a parameter optimization solution in a spreadsheet without programming was applied. Through this training method, students can focus on the basic principles of artificial neural network operation and implementation, and the visualized spreadsheet data can increase the interest and educational effect for non-majors. The proposed contents consist of artificial neurons with sigmoid and ReLU activation functions, supervised learning data generation, supervised learning artificial neural network configuration and parameter optimization, supervised learning artificial neural network implementation and performance analysis using spreadsheets, and education satisfaction analysis. Considering the optimization of negative parameters for the sigmoid and ReLU neuron artificial neural networks, we propose a training method covering the four performance analysis results on parameter optimization, and conduct a training satisfaction analysis.
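The spreadsheet exercise boils down to optimizing the weight and bias of a single artificial neuron. A sketch of the same computation in code — plain gradient descent on squared error, with either a sigmoid or ReLU activation; the data, learning rate, and epoch count are illustrative assumptions, not the course's worksheet values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, activation='sigmoid', lr=0.5, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, t in data:
            z = w * x + b
            if activation == 'sigmoid':
                y = sigmoid(z)
                dy_dz = y * (1 - y)
            else:                      # ReLU
                y = max(0.0, z)
                dy_dz = 1.0 if z > 0 else 0.0
            grad = (y - t) * dy_dz     # d(squared error)/dz, up to a factor of 2
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# learn a step-like mapping: small inputs -> 0, large inputs -> 1
data = [(0.0, 0.0), (0.2, 0.0), (0.8, 1.0), (1.0, 1.0)]
w, b = train_neuron(data, 'sigmoid')
pred = sigmoid(w * 1.0 + b)
```

Each loop iteration corresponds to one recalculation of the spreadsheet's error and parameter cells, which is what makes the method teachable without programming.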

Realization of a neural network controller by using iterative learning control (반복학습 제어를 사용한 신경회로망 제어기의 구현)

  • 최종호;장태정;백석찬
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1992.10a
    • /
    • pp.230-235
    • /
    • 1992
  • We propose a method of generating data to train a neural network controller. The data can be prepared directly by an iterative learning technique which repeatedly adjusts the control input to improve tracking of the desired trajectory. Instead of storing the control input data in memory as in iterative learning control, the neural network stores the mapping between the control input and the desired output. We apply this concept to the trajectory control of a two-link robot manipulator with a feedforward neural network controller and a feedback linear controller. Simulation results show good generalization by the neural network controller.

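The data-generation step can be sketched with a P-type iterative learning update: each trial replays the same trajectory and nudges the control input with the previous trial's tracking error, and the converged (desired output, input) pairs are what would train the feedforward network. The first-order plant, gain, and horizon below are toy assumptions, not the paper's robot model.

```python
def run_plant(u, dt=0.5):
    # discrete first-order plant: y(t+1) = y(t) + dt * (-y(t) + u(t)), y(0) = 0
    y = [0.0]
    for ut in u:
        y.append(y[-1] + dt * (-y[-1] + ut))
    return y

def ilc(desired, trials=25, gain=2.0):
    u = [0.0] * (len(desired) - 1)
    for _ in range(trials):
        y = run_plant(u)
        # P-type ILC: correct u(t) with the error it caused at time t+1
        u = [ut + gain * (desired[t + 1] - y[t + 1]) for t, ut in enumerate(u)]
    return u, run_plant(u)

desired = [0.0] + [1.0] * 19          # step reference, consistent with y(0) = 0
u, y = ilc(desired)
max_err = max(abs(d - yi) for d, yi in zip(desired, y))
```

After the trials converge, pairs like (desired trajectory point, converged input) form the training set, so the network generalizes the mapping instead of the controller replaying a stored input sequence.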
Interworking technology of neural network and data among deep learning frameworks

  • Park, Jaebok;Yoo, Seungmok;Yoon, Seokjin;Lee, Kyunghee;Cho, Changsik
    • ETRI Journal
    • /
    • v.41 no.6
    • /
    • pp.760-770
    • /
    • 2019
  • Based on the growing demand for neural network technologies, various neural network inference engines are being developed. However, each inference engine has its own neural network storage format, and there is a growing demand for standardization to solve this problem. This study presents interworking techniques for ensuring the compatibility of neural networks and data among various deep learning frameworks. The proposed technique standardizes the graph expression grammar and learning data storage format using the Neural Network Exchange Format (NNEF) of Khronos. The proposed converter includes a lexical analyzer, a syntax analyzer, and a parser. This NNEF parser converts neural network information into a parse tree and quantizes the data. To validate the proposed system, we verified that MNIST is immediately executed by importing AlexNet's neural network and learned data. Therefore, this study contributes an efficient design technique for a converter that can execute a neural network and learned data in various frameworks regardless of each framework's storage format.
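The lex-then-parse pipeline the abstract describes can be illustrated on a deliberately tiny graph grammar. The line format below ("name = op(arg, arg)") is a simplification invented here for the sketch; the real grammar is defined by the Khronos NNEF specification, not by this code.

```python
import re

TOKEN = re.compile(r"\w+|[=(),]")   # lexical stage: split into words and punctuation

def parse_graph(text):
    graph = {}
    for line in text.strip().splitlines():
        toks = TOKEN.findall(line)
        # expected token shape: name = op ( arg , arg ... )
        name, op = toks[0], toks[2]
        args = [t for t in toks[4:-1] if t != ","]
        graph[name] = (op, args)     # parse-tree node: operator plus its inputs
    return graph

src = """
conv1 = conv(input, weights1)
relu1 = relu(conv1)
out = softmax(relu1)
"""
g = parse_graph(src)
```

A real converter would walk this tree to emit each target framework's storage format, with the quantized learning data attached to the leaf tensors.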

Sensorless Speed Control of Direct Current Motor by Neural Network (신경회로망을 이용한 직류전동기의 센서리스 속도제어)

  • 강성주;오세진;김종수
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.28 no.1
    • /
    • pp.90-97
    • /
    • 2004
  • A DC motor requires a rotor speed sensor for accurate speed control. Speed sensors such as resolvers and encoders are used as speed detectors, but they increase the cost and size of the motor and restrict industrial drive applications. So, these days, many papers have reported on the sensorless operation of DC motors (3)-(5). This paper presents a new sensorless strategy using neural networks (6)-(8). The neural network structure has three layers: an input layer, a hidden layer, and an output layer. The optimal neural network structure was tracked down by trial and error, and it was found that a 4-16-1 neural network gives suitable results for the instantaneous rotor speed. The learning method is also very important in a neural network. Supervised learning methods (8) are typically used to train the neural network to learn the input/output patterns presented. The back-propagation technique adjusts the neural network weights during training. The rotor speed is obtained from the weights and the four inputs to the neural network. The experimental results were found satisfactory in both the independence from machine parameters and the insensitivity to load conditions.
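The 4-16-1 shape the abstract settles on is small enough to write out directly. The sketch below shows only the forward pass that maps four measured inputs to one speed estimate; the weights here are random placeholders, whereas the paper trains them by back-propagation, and the input values are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 4)), np.zeros(16)   # input layer -> 16 hidden units
W2, b2 = rng.standard_normal((1, 16)), np.zeros(1)    # hidden layer -> 1 output unit

def estimate_speed(x):
    h = sigmoid(W1 @ x + b1)      # 16 hidden sigmoid activations
    return float(W2 @ h + b2)     # linear output: the estimated rotor speed

speed = estimate_speed(np.array([0.5, 0.1, -0.3, 0.9]))
```

Once trained, evaluating this function at each sampling instant replaces the resolver or encoder, which is the sensorless idea of the paper.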