• Title/Summary/Keyword: recurrent neural networks

Neural Predictive Coding for Text Compression Using GPGPU (GPGPU를 활용한 인공신경망 예측기반 텍스트 압축기법)

  • Kim, Jaeju; Han, Hwansoo
    • KIISE Transactions on Computing Practices, v.22 no.3, pp.127-132, 2016
  • Several methods have been proposed in the past to apply artificial neural networks to text compression, but both the networks and the compression targets were limited in size by the hardware of the time. Modern GPUs now offer computational capability an order of magnitude beyond that of CPUs, even as CPUs themselves have become faster, making it possible to train larger and more complex neural networks in a shorter time. This paper proposes a method that transforms the distribution of the original data with a probabilistic neural predictor. Experiments were performed on a feedforward neural network and a recurrent neural network with gated recurrent units. The recurrent model outperformed the feedforward network in both compression rate and prediction accuracy.
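
The entry above describes a predictive-coding compressor in which a neural predictor supplies next-symbol probabilities to an entropy coder. Below is a minimal sketch of such a predictor, assuming PyTorch; the class name, layer sizes, and byte-level vocabulary are illustrative assumptions, not details from the paper.

```python
# Sketch (not the paper's model): a GRU that predicts the next byte of a
# text stream; its softmax output could drive an arithmetic coder.
import torch
import torch.nn as nn

class CharPredictor(nn.Module):
    def __init__(self, vocab_size=256, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, h=None):
        z, h = self.gru(self.embed(x), h)
        return self.out(z), h  # logits over the next symbol, per position

model = CharPredictor()
logits, h = model(torch.randint(0, 256, (1, 16)))  # (batch, seq) of bytes
probs = logits.softmax(dim=-1)  # probabilities an entropy coder would use
```

The better the predictor, the shorter the arithmetic code: expected code length is bounded by the cross-entropy of the predicted distribution, which is why the abstract reports compression rate alongside prediction accuracy.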

Complexity Control Method of Chaos Dynamics in Recurrent Neural Networks

  • Sakai, Masao; Homma, Noriyasu; Abe, Kenichi
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2000.10a, pp.494-494, 2000
  • This paper demonstrates that the largest Lyapunov exponent $\lambda$ of recurrent neural networks can be controlled by a gradient method. The method minimizes the squared error $e_{\lambda}=(\lambda-\lambda^{obj})^2$, where $\lambda^{obj}$ is the desired exponent. The exponent $\lambda$ can be expressed as a function of the network parameters $P$, such as connection weights and thresholds of the neurons' activation. Changes of parameters that minimize the error are then obtained by calculating the gradients $\partial\lambda/\partial P$. In a previous paper, we derived a control method for $\lambda$ via a direct calculation of $\partial\lambda/\partial P$ with a gradient collection through time. That method, however, is computationally expensive for large-scale recurrent networks, and the control is unstable for recurrent networks with chaotic dynamics. The new method proposed in this paper is based on a stochastic relation between the complexity $\lambda$ and the parameters $P$ of the network configuration under a restriction. The new method allows us to approximate the gradient collection without time evolution. This approximation requires only $O(N^2)$ run time, whereas our previous method needs $O(N^5 T)$ run time for networks with $N$ neurons and $T$ evolution steps. Simulation results show that the new method realizes stable control for large-scale networks with chaotic dynamics.
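
For reference, the steepest-descent update described above can be written compactly; this is a sketch in which the step size $\eta$ is an assumed hyperparameter, not a value from the paper:

$$e_{\lambda}=(\lambda-\lambda^{obj})^2, \qquad \Delta P = -\eta\,\frac{\partial e_{\lambda}}{\partial P} = -2\eta\,(\lambda-\lambda^{obj})\,\frac{\partial\lambda}{\partial P},$$

so the expensive object is exactly the gradient collection $\partial\lambda/\partial P$, which the stochastic method of this paper approximates without evolving the network through time.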


Robot Trajectory Control using Prefilter Type Chaotic Neural Networks Compensator (Prefilter 형태의 카오틱 신경망을 이용한 로봇 경로 제어)

  • 강원기; 최운하; 김상희
    • Proceedings of the IEEK Conference, 1998.06a, pp.263-266, 1998
  • This paper proposes a prefilter-type inverse control algorithm using chaotic neural networks. Since chaotic neural networks show robust characteristics in approximation and adaptive learning for nonlinear dynamic systems, they are well suited to controlling robotic manipulators. The proposed prefilter-type controller compensates the velocity term of the PD controller. To evaluate the proposed controller, we applied it to Cartesian-space control of a three-axis PUMA robot and compared the results with a recurrent neural network (RNN) controller.
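
The controller structure described above places a learned compensator in front of a PD loop. A minimal sketch of that wiring follows; the gains and the `compensator` callable are placeholder assumptions (the paper uses a chaotic neural network in that role).

```python
# Sketch of a prefilter-type compensation loop (illustrative, not the
# paper's exact design): the network reshapes the velocity reference
# before a conventional PD law computes the joint torque.
def pd(q_ref, qd_ref, q, qd, kp=50.0, kd=5.0):
    return kp * (q_ref - q) + kd * (qd_ref - qd)

def control_step(compensator, q_ref, qd_ref, q, qd):
    qd_comp = compensator(q_ref, qd_ref)  # prefilter compensates velocity
    return pd(q_ref, qd_comp, q, qd)      # PD acts on the filtered reference
```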


Complexity Control Method of Chaos Dynamics in Recurrent Neural Networks

  • Sakai, Masao; Homma, Noriyasu; Abe, Kenichi
    • Transactions on Control, Automation and Systems Engineering, v.4 no.2, pp.124-129, 2002
  • This paper demonstrates that the largest Lyapunov exponent $\lambda$ of recurrent neural networks can be controlled efficiently by a stochastic gradient method. The essential core of the proposed method is a novel stochastic approximate formulation of the Lyapunov exponent $\lambda$ as a function of the network parameters, such as connection weights and thresholds of the neural activation functions. With a gradient method, a direct calculation to minimize the squared error $(\lambda - \lambda^{obj})^2$, where $\lambda^{obj}$ is the desired exponent value, needs a collection of gradients through time, which is given by a recursive calculation from past to present values. This collection is computationally expensive and causes unstable control of the exponent for networks with chaotic dynamics because of chaotic instability. The stochastic formulation derived in this paper yields an approximation of the gradient collection without the recursive calculation. The approximation realizes not only a faster calculation of the gradient but also stable control of chaotic dynamics. Owing to the non-recursive calculation, independent of the time evolution, the running time of this approximation grows only as $O(N^2)$, compared with $O(N^5 T)$ for the direct calculation method. Simulation studies also show that the approximation is robust with respect to network size and that the proposed method can control the chaotic dynamics of recurrent neural networks efficiently.

A Controlled Neural Networks of Nonlinear Modeling with Adaptive Construction in Various Conditions (다변 환경 적응형 비선형 모델링 제어 신경망)

  • Kim, Jong-Man; Sin, Dong-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference, 2004.07b, pp.1234-1238, 2004
  • Controlled neural networks are proposed for measuring nonlinear environments adaptively and in real time. The structure is similar to a recurrent neural network: a delayed output serves as the input, and a delayed error between the output of the plant and that of the neural network serves as a bias input. In addition, the desired values of the hidden layer are computed by an optimal method instead of being transferred by backpropagation, and the weights are updated by RLS (Recursive Least Squares). Consequently, the network is not sensitive to initial weights or to the learning rate, and it converges faster than conventional neural networks. This new network is called the Error Estimated Neural Network. With the proposed network we can estimate and control nonlinear models in real time. Various experiments demonstrate its performance, and the controller proves effective in the environments of various systems.
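
The weight update the abstract relies on is standard Recursive Least Squares. A minimal sketch for one linear output with a scalar target follows; the forgetting factor `lam` and the initialization of the inverse-correlation matrix `P` are assumed defaults, not values from the paper.

```python
# Sketch of one RLS update step for weights w given input x and target d.
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    e = d - w @ x                      # a-priori prediction error
    w = w + k * e                      # weight update
    P = (P - np.outer(k, Px)) / lam    # inverse-correlation update
    return w, P

n = 4
w, P = np.zeros(n), np.eye(n) * 1e3    # common RLS initialization
w, P = rls_step(w, P, np.ones(n), 1.0)
```

Because each step solves the least-squares problem recursively, convergence does not depend on a hand-tuned learning rate, which matches the abstract's claim of insensitivity to the learning rate and initial weights.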


Nonlinear Neural Networks for Vehicle Modeling Control Algorithm based on 7-Depth Sensor Measurements (7자유도 센서차량모델 제어를 위한 비선형신경망)

  • Kim, Jong-Man; Kim, Won-Sop; Sin, Dong-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference, 2008.06a, pp.525-526, 2008
  • For measuring a nonlinear vehicle model based on 7-DOF sensor measurements, neural networks are proposed that operate adaptively and in real time. The structure is similar to a recurrent neural network: a delayed output serves as the input, and a delayed error between the output of the plant and that of the neural network serves as a bias input. In addition, the desired values of the hidden layer are computed by an optimal method instead of being transferred by backpropagation, and the weights are updated by RLS (Recursive Least Squares). Consequently, the network is not sensitive to initial weights or to the learning rate, and it converges faster than conventional neural networks. This new network is called the Error Estimated Neural Network. With the proposed network we can estimate and control nonlinear models in real time.


Load Frequency Control using Parameter Self-Tuning fuzzy Controller (파라미터 자기조정 퍼지제어기를 이용한 부하주파수제어)

  • 탁한호; 추연규
    • Journal of the Korean Institute of Intelligent Systems, v.8 no.2, pp.50-59, 1998
  • This paper presents stabilization and adaptive control of a flexible single-link robot manipulator system by self-recurrent neural networks, a class of neural networks effective in nonlinear control. The architecture is a modified self-recurrent structure with a hidden layer. The self-recurrent neural network can approximate any continuous function to any desired degree of accuracy, and its weights are updated by the feedback-error learning algorithm. When a flexible manipulator is rotated by a motor through its fixed end, transverse vibration may occur. The motor torque should be controlled so that the motor rotates by a specified angle while simultaneously stabilizing the vibration of the flexible manipulator, so that the vibration is arrested as soon as possible at the end of the rotation. Accurate vibration control of a lightweight manipulator during the large changes in configuration common to robotic tasks requires dynamic models that describe both the rigid-body motions and the flexural vibrations. Therefore, a dynamic model of a flexible single-link robot manipulator is derived, and a comparative analysis against a linear controller is made through simulation and experiment. The results illustrate the advantages and improved performance of the proposed adaptive control over the conventional linear controller.
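
The feedback-error learning scheme mentioned above uses the output of the conventional feedback controller as the training error for the neural feedforward controller. A minimal sketch with a linear-in-features stand-in for the self-recurrent network follows; the feature vector, gains, and learning rate are illustrative assumptions.

```python
# Sketch of feedback-error learning: the PD feedback torque u_fb serves as
# the error signal for the network weights, so the network gradually takes
# over the control effort (illustrative, not the paper's exact network).
import numpy as np

def fel_step(w, x, q_ref, qd_ref, q, qd, lr=1e-3, kp=20.0, kd=2.0):
    u_fb = kp * (q_ref - q) + kd * (qd_ref - qd)  # conventional feedback
    u_nn = w @ x                # feedforward torque from the "network"
    w = w + lr * u_fb * x       # feedback torque used as the error signal
    return w, u_nn + u_fb       # total motor torque

w = np.zeros(3)
x = np.array([1.0, 0.5, 0.0])   # e.g. reference position/velocity features
w, u = fel_step(w, x, q_ref=1.0, qd_ref=0.0, q=0.8, qd=0.1)
```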


Speech Recognition Using Recurrent Neural Prediction Models (회귀신경예측 모델을 이용한 음성인식)

  • 류제관; 나경민; 임재열; 성경모; 안성길
    • Journal of the Korean Institute of Telematics and Electronics B, v.32B no.11, pp.1489-1495, 1995
  • In this paper, we propose recurrent neural prediction models (RNPM), recurrent neural networks trained as nonlinear predictors of speech, as a new connectionist model for speech recognition. RNPM modulates its mapping effectively through its internal representation and requires no time-alignment algorithm. Therefore, the computational load at the recognition stage is reduced substantially compared with the well-known predictive neural networks (PNN), and the required memory is much smaller. Moreover, RNPM does not suffer from the problem of deciding the time-varying target function. In speaker-dependent and speaker-independent speech recognition experiments under various conditions, the proposed model was comparable in recognition performance to PNN while retaining the above merits that PNN does not have.
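
Recognition with predictive models like RNPM typically scores an utterance by its accumulated prediction error under each class model and picks the minimum; no time alignment is needed because the predictor runs frame by frame. The sketch below assumes a hypothetical predictor interface (`init_state`, `step`); it is not the paper's implementation.

```python
# Sketch of minimum-prediction-error classification with per-class
# recurrent predictors (the model interface is assumed, not from the paper).
import numpy as np

def recognize(frames, models):
    """frames: (T, d) feature sequence; models: dict label -> predictor."""
    scores = {}
    for label, model in models.items():
        h = model.init_state()
        err = 0.0
        for t in range(len(frames) - 1):
            pred, h = model.step(frames[t], h)        # predict frame t+1
            err += np.sum((frames[t + 1] - pred) ** 2)
        scores[label] = err
    return min(scores, key=scores.get)  # class with the smallest error
```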


Development of Basic Practice Cases for Recurrent Neural Networks (순환신경망 기초 실습 사례 개발)

  • Hur, Kyeong
    • Journal of Practical Engineering Education, v.14 no.3, pp.491-498, 2022
  • In this paper, a recurrent neural network SW practice case was developed for a liberal-arts course for non-major students; such a case is essential for designing a basic recurrent neural network curriculum. The developed practice case focuses on understanding the operating principle of the recurrent neural network and uses a spreadsheet so that the entire operation process can be checked in visualized form. The case consists of creating supervised text-completion training data, implementing the input layer, hidden layer, state layer (context node), and output layer in sequence, and testing the performance of the recurrent neural network on text data. The practice case developed in this paper automatically completes words with various numbers of characters. Using the proposed case, artificial intelligence SW practice cases for automatic completion can be created by extending the maximum number of characters constituting Korean or English words in various ways. The developed case is therefore widely applicable.
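
The layer sequence the practice case implements (input, hidden fed by a context node, output) corresponds to a minimal Elman-style forward pass. A sketch in Python follows; the weights are random placeholders, so the printed prediction is arbitrary until the network is trained as in the practice case.

```python
# Sketch of the practice case's forward pass: one-hot input layer, hidden
# layer fed by the state (context) layer, softmax output over letters.
import numpy as np

rng = np.random.default_rng(0)
V, H = 26, 8                       # letters a-z, hidden size (assumed)
Wxh = rng.normal(0, 0.1, (H, V))   # input -> hidden
Whh = rng.normal(0, 0.1, (H, H))   # state (context) -> hidden
Why = rng.normal(0, 0.1, (V, H))   # hidden -> output

def step(char_idx, h):
    x = np.eye(V)[char_idx]            # one-hot input layer
    h = np.tanh(Wxh @ x + Whh @ h)     # hidden layer uses the context node
    p = np.exp(Why @ h); p /= p.sum()  # softmax over the next letter
    return p, h

h = np.zeros(H)
for c in "ca":                         # feed a prefix, letter by letter
    p, h = step(ord(c) - ord("a"), h)
print(chr(int(p.argmax()) + ord("a")))  # untrained, so output is arbitrary
```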

Neural Network Design for Spatio-temporal Pattern Recognition (시공간패턴인식 신경회로망의 설계)

  • Lim, Chung-Soo; Lee, Chong-Ho
    • The Transactions of the Korean Institute of Electrical Engineers A, v.48 no.11, pp.1464-1471, 1999
  • This paper introduces a complex-valued competitive-learning neural network for spatio-temporal pattern recognition. Quite a few neural networks have been proposed for spatio-temporal pattern recognition; among them, the recurrent neural network, TDNN, and the avalanche model are acknowledged as standard paradigms. The recurrent neural network has complicated learning rules and does not guarantee convergence to a global minimum. TDNN requires too many neurons and cannot be regarded as fundamentally handling spatio-temporal patterns. Grossberg's avalanche model cannot distinguish long patterns and must be told which layer to use in learning. To remedy the drawbacks of these networks, unsupervised competitive learning using complex numbers is proposed. The suggested network also features simultaneous recognition, time-shift-invariant recognition, stable categorization, and learning-rate modulation. The network is evaluated by computer simulation with randomly generated patterns.
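
One common way to realize the complex-valued, time-shift-invariant idea in the abstract is to encode event timing in the phase of each input component and score units by the magnitude of a complex inner product, which is unchanged when every phase shifts uniformly. The sketch below illustrates that idea only; it is not the paper's exact network.

```python
# Sketch: competitive learning on complex codes where phase encodes time.
# Scoring by |w^H x| is invariant to a uniform phase (time) shift.
import numpy as np

def encode(times, period):
    """One event time per input channel, encoded as a unit phasor."""
    return np.exp(1j * 2 * np.pi * np.asarray(times, float) / period)

def competitive_step(W, x, lr=0.1):
    scores = np.abs(W.conj() @ x)   # magnitude of complex inner products
    win = int(scores.argmax())      # winner-take-all
    W[win] += lr * (x - W[win])     # move the winner toward the input
    return W, win

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
W, winner = competitive_step(W, encode(np.arange(8), period=8.0))
```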
