• Title/Summary/Keyword: neural network.

Search Result 11,766

Intelligent Control by Immune Network Algorithm Based Auto-Weight Function Tuning

  • Kim, Dong-Hwa
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2002.10a / pp.120.2-120 / 2002
  • In this paper, an auto-tuning scheme for the weight function of a neural network is proposed, based on an immune algorithm, for nonlinear processes. A number of neural network structures are considered as learning methods for control systems. A general view is provided that they are special cases of either the membership functions or modifications of the network structure. On the other hand, since the immune network system possesses self-organizing and distributed memory, it is adaptive to its external environment and allows a PDP (parallel distributed processing) network to complete patterns against the environmental situation. Also, it can provi.. (A hedged sketch of immune-style tuning follows this entry.)

  • PDF
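The abstract above does not spell out the immune algorithm, so the following is only a hedged sketch of the general clonal-selection idea it alludes to: a population of candidate weight-function gains is cloned, mutated, and reselected by closed-loop cost. The gain `w`, the `control_cost` function, and all population sizes are hypothetical stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def control_cost(w):
    # Hypothetical stand-in for the closed-loop performance of the process.
    return (w - 1.7) ** 2 + 0.1 * np.sin(5 * w)

pop = rng.uniform(0.0, 5.0, size=20)              # antibody population
for generation in range(50):
    order = np.argsort([control_cost(w) for w in pop])
    elite = pop[order[:5]]                         # highest-affinity antibodies
    # Clone the best antibodies; mutate clones less for higher affinity.
    clones = np.concatenate([np.repeat(e, 3) for e in elite])
    scale = np.repeat(np.linspace(0.2, 1.0, 5), 3) # elite rank -> mutation size
    clones = clones + scale * rng.normal(scale=0.2, size=clones.size)
    # Keep elites and clones, inject a few random antibodies for diversity.
    pop = np.concatenate([elite, clones, rng.uniform(0.0, 5.0, size=3)])
    pop = pop[np.argsort([control_cost(w) for w in pop])][:20]

print(f"tuned weight-function gain: {pop[0]:.3f}")
```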

Identification of fuzzy rule and implementation of fuzzy controller using neural network (신경회로망을 이용한 퍼지 제어규칙의 추정 및 퍼지 제어기의 구현)

  • 전용성;박상배;이균경
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1991.10a / pp.856-860 / 1991
  • This paper proposes a modified fuzzy controller using a neural network. The controller can automatically identify an expert's control rules and tune membership functions using the expert's control data. The identification capability of the fuzzy controller is examined on simple numerical data. The results show that the proposed network can identify nonlinear systems more precisely than a conventional fuzzy controller based on a neural network. (A minimal tuning sketch follows this entry.)

  • PDF
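As a companion to the entry above, here is a minimal sketch of how a fuzzy controller's parameters can be tuned from expert control data by gradient descent. It uses a zero-order Sugeno model with Gaussian memberships; the expert data, rule count, and learning rate are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=200)           # expert's observed states (toy)
u = np.tanh(1.5 * x)                       # expert's control actions (toy)

centers = np.linspace(-2, 2, 5)            # membership-function centers
sigma = 0.8                                # shared membership width
w = np.zeros(5)                            # one consequent per rule

def firing(x):
    # Gaussian membership degrees, normalized into rule firing strengths.
    mu = np.exp(-((x[:, None] - centers) ** 2) / (2 * sigma ** 2))
    return mu / mu.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(1000):
    phi = firing(x)
    err = phi @ w - u                      # controller output vs. expert
    w -= lr * (phi.T @ err) / len(x)       # gradient step on consequents

print("identified rule consequents:", np.round(w, 2))
```

The membership centers and widths could be tuned by the same chain rule; only the linear consequents are updated here to keep the sketch short.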

Nonlinear System Modeling Based on Multi-Backpropagation Neural Network (다중 역전파 신경회로망을 이용한 비선형 시스템의 모델링)

  • Baeg, Jae-Huyk;Lee, Jung-Moon
    • Journal of Industrial Technology / v.16 / pp.197-205 / 1996
  • In this paper, we propose a new neural architecture, synthesized from a combination of structures known as MRCCN (Multi-resolution Radial-basis Competitive and Cooperative Network) and BPN (Backpropagation Network). The proposed neural network improves the learning speed of MRCCN and the mapping capability of BPN. Its ability and effectiveness in identifying a nonlinear dynamic system are demonstrated by computer simulation. (A generic sketch of the hybrid idea follows this entry.)

  • PDF
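MRCCN's internal structure is not given in the abstract, so the sketch below shows only the general hybrid idea: a radial-basis hidden layer (local, fast-learning units) feeding a gradient-trained output layer, identifying a toy nonlinear dynamic system. The plant, center placement, and training schedule are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(y, u):
    return 0.6 * np.sin(y) + 0.3 * u        # hypothetical plant dynamics

u = rng.uniform(-1, 1, 500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = f(y[k], u[k])                # simulate y[k+1] = f(y[k], u[k])

X = np.stack([y[:-1], u], axis=1)           # network inputs (y[k], u[k])
t = y[1:]                                   # identification target y[k+1]

centers = rng.uniform(-1, 1, size=(20, 2))  # fixed radial-basis centers
beta = 4.0

def hidden(X):
    d2 = ((X[:, None, :] - centers) ** 2).sum(axis=2)
    return np.exp(-beta * d2)               # radial-basis activations

H = hidden(X)
w = np.zeros(20)
for _ in range(2000):                       # backprop on the output layer
    err = H @ w - t
    w -= 0.1 * (H.T @ err) / len(t)

print("identification RMSE:", np.sqrt(np.mean((H @ w - t) ** 2)))
```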

Classification of remotely sensed images using fuzzy neural network (퍼지 신경회로망을 이용한 원격감지 영상의 분류)

  • 이준재;황석윤;김효성;이재욱;서용수
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.3 / pp.150-158 / 1998
  • This paper describes the classification of remotely sensed image data using a fuzzy neural network, whose algorithm is obtained by replacing the real numbers used for inputs and outputs in the standard backpropagation algorithm with fuzzy numbers. In the proposed method, fuzzy patterns, generated from the histogram of each category in the training data, are fed into the fuzzy neural network along with real numbers. The results show that generalization and approximation are better than those of a conventional network in determining complex pattern boundaries. (A small sketch of fuzzy-number propagation follows this entry.)

  • PDF
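To make the fuzzy-number idea concrete, here is a small sketch of one layer's forward pass on triangular fuzzy numbers, using interval arithmetic for the weighted sum. The full fuzzy backpropagation training rule from the paper is omitted, and all numeric values are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuzzy_linear(L, M, U, W, b):
    """Propagate triangular fuzzy inputs (lower L, mode M, upper U) through
    weights W. Interval rule: a negative weight swaps lower/upper endpoints.
    The sigmoid is monotone, so it maps the three points directly."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    lo = L @ Wp + U @ Wn + b
    hi = U @ Wp + L @ Wn + b
    md = M @ W + b
    return sigmoid(lo), sigmoid(md), sigmoid(hi)

# Toy fuzzy pattern, e.g. built from a category histogram (hypothetical).
L = np.array([[0.1, 0.3]])
M = np.array([[0.2, 0.5]])
U = np.array([[0.4, 0.6]])
W = np.array([[0.8, -1.2], [0.5, 0.9]])
b = np.array([0.0, 0.1])
print(fuzzy_linear(L, M, U, W, b))
```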

Force Controller of the Redundant Manipulator using Neural Network (Redundant 매니퓰레이터의 force 제어를 위한 신경 회로망 제어기)

  • 이기응;조현찬;전홍태;이홍기
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1990.10a / pp.13-17 / 1990
  • In this paper, we propose a force controller for a redundant manipulator using a neural network. The Jacobian transpose matrix of the redundant manipulator, constructed by a neural network, is trained using the feedback torque as an error signal. If the neural network is sufficiently trained, the kinematic inaccuracy of the manipulator is automatically compensated. The effectiveness of the proposed controller is demonstrated by computer simulation using a three-link planar robot. (A hedged sketch of the control law follows this entry.)

  • PDF
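The following hedged sketch illustrates the Jacobian-transpose idea on a 2-link planar arm (the paper uses a three-link robot): a small network learns to map joint angles and a Cartesian force command to joint torques, with the torque from the analytic Jacobian standing in as the feedback error signal. Link lengths, network sizes, and the training loop are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
l1 = l2 = 1.0

def jacobian_T_force(q, f):
    # Joint torque tau = J(q)^T f for a 2-link planar arm.
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                  [ l1 * c1 + l2 * c12,  l2 * c12]])
    return J.T @ f

# One-hidden-layer network: input (q, f) -> joint torque.
W1 = rng.normal(scale=0.5, size=(4, 30)); b1 = np.zeros(30)
W2 = rng.normal(scale=0.5, size=(30, 2)); b2 = np.zeros(2)
lr = 0.01

for step in range(5000):
    q = rng.uniform(-np.pi, np.pi, 2)
    f = rng.uniform(-1, 1, 2)
    x = np.concatenate([q, f])
    h = np.tanh(x @ W1 + b1)
    tau_hat = h @ W2 + b2
    err = tau_hat - jacobian_T_force(q, f)   # feedback torque as error signal
    # Backpropagate the torque error through both layers.
    gW2 = np.outer(h, err); gb2 = err
    gh = (W2 @ err) * (1 - h ** 2)
    gW1 = np.outer(x, gh); gb1 = gh
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```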

Control of Nonlinear System using WAVENET (WAVENET을 이용한 비선형 시스템의 제어)

  • Park, Doo-Hwan;Kim, Kyung-Yup;Lee, Joon-Tark
    • Proceedings of the Korean Society of Marine Engineers Conference / 2005.06a / pp.257-261 / 2005
  • The helicopter system is nonlinear and complex. Furthermore, in the absence of an accurate mathematical model, it is difficult to control its attitude accurately. We therefore propose a WAVENET control technique to efficiently control its elevation and azimuth angles. A wavelet neural network (WAVENET) can systematically construct an initial neural network by applying wavelet theory to a feedforward network. Computer simulation shows that WAVENET has better approximation capability than existing neural networks. Simulation results using MATLAB are presented. (A minimal wavelet-network sketch follows this entry.)

  • PDF
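A minimal wavelet-network sketch, assuming Mexican-hat wavelons placed on a dyadic grid of dilations and translations (one reading of the "systematic construction" the abstract mentions), with only the output weights fitted. The target function is a toy stand-in, not the helicopter dynamics.

```python
import numpy as np

def mexican_hat(t):
    # Mexican-hat mother wavelet (second derivative of a Gaussian).
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

x = np.linspace(-4, 4, 400)
y = np.sin(2 * x) * np.exp(-0.1 * x ** 2)        # toy target function

# Dyadic grid of dilations a = 2^j and unit-spaced translations b.
units = [(2.0 ** j, b) for j in range(-1, 3) for b in np.arange(-4, 4.1, 1.0)]
Phi = np.stack([mexican_hat((x - b) / a) for a, b in units], axis=1)

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # fit output weights
print("approximation RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```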

FMS scheduling through artificial neural network (인공 뉴럴 네트워크에 의한 FMS 일정관리)

  • 양정문;문기주;김정자
    • Journal of Korean Society of Industrial and Systems Engineering / v.18 no.34 / pp.99-106 / 1995
  • Recently, neural networks have been recognized as a new approach to job-shop scheduling problems in manufacturing systems. Scheduling is in general a difficult, combinatorially explosive problem with domain-dependent variations. In addition, the need to achieve good performance in a flexible manufacturing system increases the dimensions of decision complexity. Mathematical approaches to realistic problems can therefore fail to find optimal or near-optimal solutions. In this paper, a technique is presented that uses a neural network to group jobs by job attributes and a Gaussian machine network to generate a near-optimal sequence. (A stand-in sketch of the annealing idea follows this entry.)

  • PDF
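The paper's Gaussian machine network is a stochastic Hopfield-type net; as a plainly labeled stand-in, the sketch below uses ordinary simulated annealing over job permutations to show the same anneal-to-a-near-optimal-sequence idea. The job data, the weighted-tardiness cost, and the cooling schedule are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
proc = rng.integers(1, 10, size=12)        # job processing times
due = rng.integers(5, 60, size=12)         # job due dates
wgt = rng.uniform(1, 3, size=12)           # job weights (attributes)

def cost(seq):
    # Total weighted tardiness of a job sequence.
    t = np.cumsum(proc[seq])
    return float(np.sum(wgt[seq] * np.maximum(0, t - due[seq])))

seq = rng.permutation(12)
T = 10.0
for step in range(5000):
    i, j = rng.integers(0, 12, size=2)
    cand = seq.copy()
    cand[i], cand[j] = cand[j], cand[i]    # propose a pairwise swap
    # Accept improvements always, worse moves with annealed probability.
    if cost(cand) < cost(seq) or rng.random() < np.exp((cost(seq) - cost(cand)) / T):
        seq = cand
    T *= 0.999                             # cooling schedule

print("near-optimal sequence:", seq, "tardiness:", round(cost(seq), 1))
```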

Improved Deep Learning Algorithm

  • Kim, Byung Joo
    • Journal of Advanced Information Technology and Convergence / v.8 no.2 / pp.119-127 / 2018
  • Training a very large deep neural network can be painfully slow and prone to overfitting. Much research has been done to overcome these problems. In this paper, a deep neural network combining early stopping and ADAM is presented. This form of deep network is useful for handling big data because it automatically stops training before overfitting occurs. Its generalization ability is also better than that of a pure deep neural network model. (A minimal sketch of both ingredients follows.)
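Both ingredients of the entry above are easy to show in a few lines: the ADAM update rule (first- and second-moment estimates with bias correction) and early stopping with a patience counter on validation loss. The toy regression problem and all hyperparameters are assumptions, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(512, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=512)
Xtr, ytr, Xva, yva = X[:400], y[:400], X[400:], y[400:]

w = np.zeros(10)
m = np.zeros(10); v = np.zeros(10)           # ADAM moment estimates
lr, b1, b2, eps = 0.01, 0.9, 0.999, 1e-8
best, best_w, patience, bad = np.inf, w, 10, 0

for t in range(1, 5001):
    g = Xtr.T @ (Xtr @ w - ytr) / len(ytr)   # gradient of MSE/2
    m = b1 * m + (1 - b1) * g                # first moment
    v = b2 * v + (1 - b2) * g ** 2           # second moment
    m_hat = m / (1 - b1 ** t)                # bias correction
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    if t % 50 == 0:                          # early-stopping check
        val = np.mean((Xva @ w - yva) ** 2)
        if val < best:
            best, best_w, bad = val, w.copy(), 0
        else:
            bad += 1
            if bad >= patience:
                break                        # stop before overfitting

w = best_w                                   # restore the best weights seen
print(f"stopped at step {t}, best validation MSE {best:.4f}")
```

Deep learning frameworks provide both pieces as built-ins; they are written out here only to make the mechanics visible.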

Arabic Text Recognition with Harakat Using Deep Learning

  • Ashwag, Maghraby;Esraa, Samkari
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.41-46 / 2023
  • Because of the significant role that harakat play in Arabic text, this paper uses deep learning to extract Arabic text with its harakat from an image. Convolutional neural network and recurrent neural network algorithms were applied to the dataset, which contained 110 images, each representing one word. The results showed the ability to extract some letters with their harakat. (A generic architecture sketch follows.)
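The abstract does not give the architecture, so the following is a generic CRNN-style sketch of the CNN-plus-RNN combination it describes: convolutional layers turn a word image into a sequence of column features, a bidirectional LSTM reads the sequence, and a linear head scores letter-plus-harakat classes. All layer sizes and the class count are hypothetical.

```python
import torch
import torch.nn as nn

class WordReader(nn.Module):
    def __init__(self, num_classes=120):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(input_size=64 * 8, hidden_size=128,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(256, num_classes)

    def forward(self, img):                    # img: (B, 1, 32, W)
        f = self.conv(img)                     # (B, 64, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)   # (B, W/4, 64*8) column steps
        out, _ = self.rnn(f)                   # read columns in both directions
        return self.head(out)                  # per-step class scores

scores = WordReader()(torch.randn(2, 1, 32, 128))
print(scores.shape)                            # torch.Size([2, 32, 120])
```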

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years they have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train; this, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; what the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, that is, vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not just through layers but through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas. (A minimal sketch of the convolutional and recurrent building blocks follows.)
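The three convolutional ideas named in this abstract (local receptive fields, shared weights, pooling) and the recurrent layer it describes can be illustrated in a few lines; the sketch below uses PyTorch with arbitrary sizes chosen only for the demo.

```python
import torch
import torch.nn as nn

images = torch.randn(8, 1, 28, 28)              # batch of grayscale images

conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5)
# Each output neuron sees only a 5x5 local receptive field, and the same
# 16 filters (shared weights) are slid across every location, so each
# filter detects one feature at all positions in the image.
pool = nn.MaxPool2d(2)                          # pooling simplifies the map
features = pool(torch.relu(conv(images)))       # (8, 16, 12, 12)

# An LSTM keeps internal memory across an input sequence, which is what
# lets it model temporal behavior while avoiding the worst of the
# unstable-gradient problem of plain RNNs.
seq = torch.randn(8, 50, 32)                    # batch of 50-step sequences
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
out, (h, c) = lstm(seq)                         # out: (8, 50, 64)
print(features.shape, out.shape)
```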