• Title/Summary/Keyword: recursive neural network

Search Results: 70

Estimation of Aerodynamic Coefficients for a Skid-to-Turn Missile using Neural Network and Recursive Least Square (신경회로망과 순환최소자승법을 이용한 Skid-to-Turn 미사일의 공력 파라미터 추정)

  • Kim, Yun-Hwan;Park, Kyun-Bub;Song, Yong-Kyu;Hwang, Ick-Ho;Choi, Dong-Kyun
    • Journal of the Korean Society for Aviation and Aeronautics / v.20 no.4 / pp.7-13 / 2012
  • This paper estimates the aerodynamic coefficients needed for controller design and stability analysis of a Skid-to-Turn missile from simulation data. The coefficients are determined by applying both a Neural Network and Recursive Least Squares, and the results of the two methods are compared. To reflect the analysis of actual flight test data, sensor noise was added to the simulation data, and the estimation performance and reliability of both methods, neither of which requires initial values, were evaluated on the noisy data. Both methods produced excellent estimates on the noise-free data; with sensor noise added, the Neural Network method gave the better estimates.
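The recursive least squares estimator used here for comparison can be sketched in a few lines. As a toy stand-in for the aerodynamic model, assume a linear-in-parameters model y = φᵀθ with additive sensor noise; the regressors, coefficient values, and noise level below are illustrative, not the paper's:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step: update the estimate theta and
    covariance P with regressor phi and measurement y (lam = forgetting factor)."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    theta = theta + (k * (y - phi.T @ theta)).ravel()
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Toy problem: recover theta_true from noisy linear measurements.
rng = np.random.default_rng(0)
theta_true = np.array([0.8, -1.5, 0.3])
theta = np.zeros(3)                              # no initial values needed
P = 1e3 * np.eye(3)                              # large P = uninformative start
for _ in range(500):
    phi = rng.normal(size=3)
    y = phi @ theta_true + 0.01 * rng.normal()   # sensor noise, as in the paper
    theta, P = rls_update(theta, P, phi, y)
print(np.round(theta, 2))
```

The estimate converges to the true coefficients without any tuned initialization, which is the property the paper highlights for both methods.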

Identification of suspension systems using error self recurrent neural network and development of sliding mode controller (오차 자기 순환 신경회로망을 이용한 현가시스템 인식과 슬라이딩 모드 제어기 개발)

  • 송광현;이창구;김성중
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1997.10a / pp.625-628 / 1997
  • In this paper, a new neural network and a sliding mode suspension controller are proposed. The neural network is an error self-recurrent neural network, trained with the recursive least squares method for fast on-line learning. The new network converges considerably faster than the backpropagation algorithm and is less affected by poor initial weights and learning rates. The suspension controller is then designed with the sliding mode technique based on the proposed network.


Single Image Super-resolution using Recursive Residual Architecture Via Dense Skip Connections (고밀도 스킵 연결을 통한 재귀 잔차 구조를 이용한 단일 이미지 초해상도 기법)

  • Chen, Jian;Jeong, Jechang
    • Journal of Broadcast Engineering / v.24 no.4 / pp.633-642 / 2019
  • Recently, convolutional neural network (CNN) models have been very successful at single image super-resolution (SISR), and residual learning improves training stability and network performance in CNNs. In this paper, we propose a SISR method that uses a recursive residual network architecture with dense skip connections to learn the nonlinear mapping from a low-resolution input image to a high-resolution target image. The recursive residual learning mitigates the difficulty of training a deep network, and the concise, compact recursive structure connected by dense skips removes unnecessary modules, making the CNN layers easier to optimize. The proposed method not only alleviates the vanishing-gradient problem of very deep networks but also achieves outstanding performance with low network complexity, improving the SISR results.
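As a rough structural sketch (not the authors' network), a recursive residual block reuses one set of weights across several unrolled steps, with a skip connection adding the block input back at each step; dense skip connections would additionally expose every intermediate output. A minimal NumPy version, with a fully connected layer standing in for the convolution:

```python
import numpy as np

def recursive_residual_block(x, W, steps=3):
    """Apply the *same* weights W recursively; each step adds the block
    input x back (residual skip connection), so the parameter count stays
    constant no matter how deep the unrolled network is."""
    h = x
    outputs = []                         # intermediates, as dense skips would expose
    for _ in range(steps):
        h = np.maximum(0.0, W @ h) + x   # ReLU(W h) + skip from block input
        outputs.append(h)
    return h, outputs

rng = np.random.default_rng(1)
W = 0.1 * rng.normal(size=(8, 8))   # one shared weight matrix
x = rng.normal(size=8)              # stand-in for a low-resolution feature vector
y, skips = recursive_residual_block(x, W, steps=5)
print(y.shape, len(skips))          # (8,) 5
```

Unrolling more steps deepens the effective network without adding parameters, which is how the recursive structure keeps the model compact.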

Single Image Super Resolution Reconstruction Based on Recursive Residual Convolutional Neural Network

  • Cao, Shuyi;Wee, Seungwoo;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.98-101 / 2019
  • At present, deep convolutional neural networks make a very important contribution to single-image super-resolution: through learning, the features of the input image are transformed and combined to establish a nonlinear mapping from low-resolution images to high-resolution images. Some previous methods are difficult to train and take up a lot of memory. In this paper, we propose a simple and compact deep recursive residual network that learns the features for single image super-resolution. Global and local residual learning are used to ease the training of deep neural networks, and the recursive structure controls the number of parameters to save memory. Experimental results show that the proposed method improves on the image quality of previous methods.


Pole-Zero Assignment Self-Tuning Controller Using Neural Network (신경회로망 기법을 이용한 극-영점 배치 자기 동조 제어기)

  • 구영모;이윤섭;장석호;우광방
    • The Transactions of the Korean Institute of Electrical Engineers / v.40 no.2 / pp.183-191 / 1991
  • This paper develops a pole-zero assignment self-tuning regulator that uses a neural network for plant parameter estimation. An approach to parameter estimation with a Hopfield neural network model is proposed, and the control characteristics of the plant are evaluated by simulation on a second-order linear time-invariant plant. The results are also compared with those of the Exponentially Weighted Recursive Least Squares (EWRLS) method.
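The EWRLS baseline differs from plain recursive least squares only in a forgetting factor 0 < λ < 1 that exponentially down-weights old data, letting the estimator track time-varying plant parameters. A toy scalar sketch (a slowly drifting gain, not the paper's second-order plant):

```python
import numpy as np

def ewrls(phis, ys, lam=0.95, p0=100.0):
    """Exponentially weighted RLS for a scalar parameter: each step the old
    information is discounted by lam, so the estimate tracks slow drift."""
    theta, p = 0.0, p0
    history = []
    for phi, y in zip(phis, ys):
        k = p * phi / (lam + phi * p * phi)   # scalar gain
        theta += k * (y - phi * theta)
        p = (p - k * phi * p) / lam
        history.append(theta)
    return np.array(history)

rng = np.random.default_rng(2)
t = np.arange(400)
theta_t = 1.0 + 0.002 * t                     # slowly drifting true parameter
phi = rng.normal(size=400)
y = phi * theta_t + 0.01 * rng.normal(size=400)
est = ewrls(phi, y, lam=0.95)
print(round(est[-1], 2))                      # should be near 1.0 + 0.002*399 ≈ 1.8
```

With λ = 1 this reduces to ordinary RLS, which would average the drift away instead of following it.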

Self-tuning optimal control of an active suspension using a neural network

  • Lee, Byung-Yun;Kim, Wan-Il;Won, Sangchul
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10b / pp.295-298 / 1996
  • In this paper, a self-tuning optimal control algorithm is proposed to retain the optimal performance of an active suspension system when the vehicle has time-varying parameters and parameter uncertainties. We consider a 2-DOF time-varying quarter-car model with variations in the sprung mass, suspension spring constant, and suspension damping constant. Instead of solving the algebraic Riccati equation on line, we propose a neural network approach as an alternative: the optimal feedback gains computed off line for the various parameter values are used as training data for the neural network. While the active suspension operates, the parameters are identified by the recursive least squares method, and the trained neural network supplies the corresponding optimal feedback gains. Simulation results are presented and discussed.
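The scheme amounts to gain scheduling: gains are computed off line over a grid of parameter values, a function approximator is fitted to that table, and the on-line identified parameter selects the gain. With simple interpolation standing in for the trained neural network (the grid values are illustrative, not from the paper's Riccati solutions):

```python
import numpy as np

# Off-line stage: feedback gains precomputed for a grid of sprung-mass
# values (hypothetical numbers, not an actual Riccati solution).
mass_grid = np.array([250.0, 300.0, 350.0, 400.0])
gain_grid = np.array([1200.0, 1450.0, 1680.0, 1890.0])

def scheduled_gain(identified_mass):
    """On-line stage: map the identified parameter to a feedback gain.
    np.interp stands in for the trained neural network mapping."""
    return float(np.interp(identified_mass, mass_grid, gain_grid))

# Suppose recursive least squares identified a sprung mass of 325 kg:
print(scheduled_gain(325.0))   # halfway between 1450 and 1680 -> 1565.0
```

A neural network generalizes this lookup to several identified parameters at once, which a one-dimensional table cannot.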


A Study on Volumetric Shrinkage of Injection Molded Part by Neural Network (신경회로망을 이용한 사출성형품의 체적수축률에 관한 연구)

  • Min, Byeong-Hyeon
    • Journal of the Korean Society for Precision Engineering / v.16 no.11 / pp.224-233 / 1999
  • The quality of injection molded parts is affected by variables such as the material, the design of the part and mold, the molding machine, and the processing conditions, and it is difficult to consider all of these variables at once when predicting quality. In this paper, a neural network was applied to analyze the relationship between processing conditions and the volumetric shrinkage of a part. An engineering plastic gear was used for the study, and the training data were extracted with simulation software such as Moldflow. The neural network results agreed well with the simulation results. A nonlinear regression model was then formulated from 3,125 test data points obtained from the neural network, and optimal processing conditions minimizing the volumetric shrinkage of the molded part were calculated by applying the RQP (Recursive Quadratic Programming) algorithm.


Control of Chaos Dynamics in Jordan Recurrent Neural Networks

  • Jin, Sang-Ho;Kenichi, Abe
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.43.1-43 / 2001
  • We propose two methods of controlling the Lyapunov exponents of Jordan-type recurrent neural networks, both formulated as gradient-based learning. The first method is derived strictly from the definition of the Lyapunov exponents, represented by the state transitions of the recurrent network. It can control the complete set of exponents, called the Lyapunov spectrum, but it is computationally expensive because of the inherently recursive calculation of the changes in the network parameters. This recursive calculation also makes the control unstable when at least one exponent is positive, such as the largest Lyapunov exponent of a recurrent network with chaotic dynamics. To improve stability in the chaotic case, we propose a non-recursive formulation by approximating ...
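For context, the largest Lyapunov exponent of a recurrent map can be estimated by averaging the log growth rate of perturbations through the Jacobian at each step; this is the state-transition-based, recursive calculation the first method builds on. A sketch for the one-neuron recurrent map x(t+1) = tanh(w·x(t) + b) (a toy system, not the paper's Jordan network):

```python
import numpy as np

def largest_lyapunov(w, b=0.0, x0=0.1, steps=5000, burn=500):
    """Estimate the largest Lyapunov exponent of x -> tanh(w*x + b) as the
    time average of log|f'(x_t)|, with f'(x) = w * (1 - tanh(w*x + b)^2)."""
    x = x0
    total = 0.0
    for t in range(steps):
        d = abs(w) * (1.0 - np.tanh(w * x + b) ** 2)  # |Jacobian| at x_t
        x = np.tanh(w * x + b)                        # state transition
        if t >= burn:                                 # discard transient
            total += np.log(d + 1e-300)               # guard against log(0)
    return total / (steps - burn)

# |w| < 1: the fixed point x = 0 is stable, so the exponent is negative.
print(largest_lyapunov(0.5) < 0)    # True
```

For a full network the scalar derivative becomes a Jacobian matrix and the products must be re-orthogonalized, which is where the expensive recursion in the paper's first method arises.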


Complexity Control Method of Chaos Dynamics in Recurrent Neural Networks

  • Sakai, Masao;Homma, Noriyasu;Abe, Kenichi
    • Transactions on Control, Automation and Systems Engineering / v.4 no.2 / pp.124-129 / 2002
  • This paper demonstrates that the largest Lyapunov exponent λ of recurrent neural networks can be controlled efficiently by a stochastic gradient method. The core of the proposed method is a novel stochastic approximate formulation of the Lyapunov exponent λ as a function of network parameters such as the connection weights and the thresholds of the neural activation functions. With a gradient method, directly minimizing the squared error (λ - λ^obj)^2, where λ^obj is the desired exponent value, requires collecting gradients through time by a recursive calculation from past to present values. This collection is computationally expensive and, because of chaotic instability, causes unstable control of the exponent for networks with chaotic dynamics. The stochastic formulation derived in this paper approximates the gradient collection without the recursive calculation, which makes the gradient computation not only faster but also stable for chaotic dynamics. Because the calculation is non-recursive with respect to the time evolution, its running time grows only as about N^2, compared with N^5·T for the direct calculation. Simulation studies also show that the approximation is robust with respect to network size and that the proposed method can control the chaotic dynamics of recurrent neural networks efficiently.

Parallel Type Neural Network for Direct Control Method of Nonlinear System (비선형 시스템의 직접제어방식을 위한 병렬형 신경회로망)

  • 김주웅;정성부;서원호;엄기환
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2000.05a / pp.406-409 / 2000
  • We propose a modified neural network, configured in parallel, for controlling nonlinear systems. The proposed method is a direct control method that uses an inverse model of the plant. The nonlinear system is divided into a linear part and a nonlinear part, which are controlled by the RLS method and a recursive multi-layer neural network, respectively. Simulations verify the performance of the proposed method in comparison with a conventional direct neural network control method, and the proposed method improves on the control performance of the conventional approach.
