• Title/Abstract/Keyword: Dynamic Learning Control


신경회로를 이용한 6축 로보트의 역동력학적 토크제어 (Inverse Dynamic Torque Control of a Six-Jointed Robot Arm Using Neural Networks)

  • 오세영;조문정;문영주
• The Transactions of the Korean Institute of Electrical Engineers
    • /
• Vol. 40, No. 8
    • /
    • pp.816-824
    • /
    • 1991
  • It is well known that dynamic control is needed for fast and accurate robot control. Neural networks are ideal for representing the strongly nonlinear relationships in the dynamic equations, including complex unmodeled effects, and thus offer many advantages over conventional methods, such as simple, fast, and accurate control through the network's inherent learning ability and massive parallelism. In this paper, dynamic control of the full six degrees of freedom of an industrial robot arm is presented using neural networks. Moreover, the usefulness of neurocontrol is demonstrated through application to a real robot. Backpropagation and feedback-error learning are used to train the neurocontroller. Simulated control of a PUMA 560 arm demonstrates that it moves at high speed with good accuracy, generalizes over untrained trajectories, and adapts to unforeseen load changes and sensor noise.
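
The feedback-error learning idea the abstract mentions can be sketched in a few lines. This is a minimal illustration on an assumed 1-DOF toy plant (not the paper's PUMA 560 model): a feedforward model, here simply linear in the desired-trajectory features, learns the inverse dynamics while a PD loop stabilizes the plant, and the feedback torque itself serves as the training error.

```python
import numpy as np

m, dt = 2.0, 0.01                  # true mass (unknown to the learner), time step
W = np.zeros(3)                    # feedforward model: tau_ff = W @ [qdd_d, qd_d, 1]
Kp, Kv, lr = 100.0, 20.0, 0.05     # PD gains and learning rate

def desired(t):                    # smooth reference trajectory
    return np.sin(t), np.cos(t), -np.sin(t)   # q_d, qd_d, qdd_d

for trial in range(50):            # repeated trials over the same trajectory
    q, qd = 0.0, 1.0               # start on the trajectory
    for k in range(628):           # one full period (628 * 0.01 = 2*pi)
        t = k * dt
        q_d, qd_d, qdd_d = desired(t)
        x = np.array([qdd_d, qd_d, 1.0])
        tau_fb = Kp * (q_d - q) + Kv * (qd_d - qd)
        tau = W @ x + tau_fb       # feedforward + feedback torque
        W += lr * tau_fb * x * dt  # feedback-error learning update
        qdd = tau / m              # plant: m * qdd = tau
        qd += qdd * dt
        q += qd * dt

print(f"learned inertia-like weight: {W[0]:.2f} (true mass is {m})")
```

As the feedforward weights converge toward the plant's inverse dynamics, the feedback torque shrinks toward zero, which is the hallmark of this training scheme.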

퍼지 반복 학습제어기를 이용한 동적 플랜트 제어 (Fuzzy iterative learning controller for dynamic plants)

  • 유학모;이연정
• Institute of Control, Robotics and Systems: Conference Proceedings
    • /
• Proceedings of the 1996 Korea Automatic Control Conference (Domestic Edition), Institute of Control, Robotics and Systems; POSTECH, Pohang; 24-26 Oct. 1996
    • /
    • pp.499-502
    • /
    • 1996
  • In this paper, we propose a fuzzy iterative learning controller (FILC) that can control fully unknown dynamic plants through iterative learning. In designing learning controllers based on the steepest-descent method, one of the difficult problems is identifying the change of plant output with respect to the change of control input (∂e/∂u). To solve this problem, we propose the following method: first, calculate ∂e/∂u using a similarity measure and information from consecutive time steps, then adjust the fuzzy logic controller (FLC) using the sign of ∂e/∂u. As the learning process is iterated, the estimate of ∂e/∂u is reinforced. The proposed FILC has a simpler architecture than previous controllers. Computer simulations of an inverted pendulum system were conducted to verify the performance of the proposed FILC.
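
The sign-based learning step can be sketched as follows. This is a simplified illustration of the idea only: a plain stored control sequence stands in for the paper's fuzzy logic controller, and the first-order plant below is an assumption. The plant is treated as a black box, and the sign of ∂y/∂u is estimated from differences between consecutive learning trials.

```python
import numpy as np

def plant(u):
    """First-order nonlinear plant, unknown to the learner (assumed model)."""
    y, out = 0.0, np.empty_like(u)
    for t in range(len(u)):
        y = 0.3 * y + 0.4 * np.tanh(u[t])
        out[t] = y
    return out

T, gamma = 50, 1.0
y_d = 0.5 * np.sin(np.linspace(0, np.pi, T))   # desired output trajectory
u = np.zeros(T)
sgn = np.ones(T)                               # running estimate of sign(dy/du)
y = plant(u)

for k in range(100):                           # learning iterations
    u_new = u + gamma * sgn * (y_d - y)        # sign-corrected learning step
    y_new = plant(u_new)
    du, dy = u_new - u, y_new - y
    upd = (np.abs(du) > 1e-9) & (np.abs(dy) > 1e-9)
    sgn[upd] = np.sign(du[upd] * dy[upd])      # refresh the sign estimate
    u, y = u_new, y_new

print("final RMS tracking error:", float(np.sqrt(np.mean((y_d - y) ** 2))))
```

Using only the sign of the estimated gradient, rather than its magnitude, is what lets the scheme work without a plant model; the gain γ alone sets the step size.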


A Navigation System for Mobile Robot

  • 장원량;정길도
• The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
• Proceedings of the 2009 Information and Control Symposium, The Institute of Electronics Engineers of Korea
    • /
    • pp.118-120
    • /
    • 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control based on multi-agent technology. The structure is composed of six phase agents and one intersection agent, and a wireless communication network provides the possibility of cooperation between the agents. Q-learning, a kind of reinforcement learning, is adopted as the algorithm of the control mechanism, since it can acquire optimal control strategies from delayed rewards; furthermore, we adopt a dynamic learning method instead of a static one, which is more practical. Simulation results indicate that the method is more effective than a traditional signal system.
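
The Q-learning update the abstract relies on can be shown on a toy problem. The multi-agent, six-phase intersection structure is not reproduced here; the two-approach queue model below is an assumption. A single agent picks which approach gets the green, and the reward is the negative total queue length.

```python
import random

random.seed(1)
ACTIONS = (0, 1)                    # green for approach 0 or approach 1
CAP = 5                             # queues are clipped to keep the table small
Q = {}                              # Q[(state, action)] -> value
alpha, gamma, eps = 0.2, 0.9, 0.1   # learning rate, discount, exploration

def step(state, a):
    q = list(state)
    q[a] = max(0, q[a] - 2)         # the green approach serves up to 2 cars
    for i in (0, 1):                # stochastic arrivals on both approaches
        if random.random() < 0.7:
            q[i] = min(CAP, q[i] + 1)
    s2 = tuple(q)
    return s2, -(s2[0] + s2[1])     # reward: negative total queue

def greedy(s):
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))

state = (0, 0)
for t in range(20000):              # epsilon-greedy training
    a = random.choice(ACTIONS) if random.random() < eps else greedy(state)
    nxt, r = step(state, a)
    best = max(Q.get((nxt, b), 0.0) for b in ACTIONS)
    q_old = Q.get((state, a), 0.0)
    Q[(state, a)] = q_old + alpha * (r + gamma * best - q_old)  # Q-learning update
    state = nxt

print("greedy action at state (4, 1):", greedy((4, 1)))
```

Because the reward is delayed through the queue dynamics, the discounted bootstrap term `gamma * best` is what propagates credit back to earlier signal decisions.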


안정성을 고려한 동적 신경망의 최적화와 비선형 시스템 제어기 설계 (Optimization of Dynamic Neural Networks Considering Stability and Design of Controller for Nonlinear Systems)

  • 유동완;전순용;서보혁
• Journal of Institute of Control, Robotics and Systems
    • /
• Vol. 5, No. 2
    • /
    • pp.189-199
    • /
    • 1999
  • This paper presents an optimization algorithm for a stable Self Dynamic Neural Network (SDNN) using a genetic algorithm. The optimized SDNN is applied to the problem of controlling nonlinear dynamical systems. The SDNN is a dynamic mapping and is better suited to dynamical systems than a static feedforward neural network. Real-time implementation is very important, so the neurocontroller also needs to be designed such that it converges within a relatively small number of training cycles. The SDNN has considerably fewer weights than a conventional dynamic neural network, since there are no interlinks within the hidden layer. The aim of the proposed algorithm is to optimize the number of self dynamic neuron nodes and the gradients of the activation functions simultaneously with genetic algorithms. To guarantee convergence, an analytic method based on a Lyapunov function is used to find a stable learning rate for the SDNN. The ability and effectiveness of the proposed optimized SDNN in identifying and controlling a nonlinear dynamic system, with stability taken into account, are demonstrated by case studies.
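
The joint search over node count and activation slope can be sketched with a small genetic algorithm. This is a hedged stand-in: the "network" below is a toy tanh-feature model fitted by least squares on an assumed identification task, and the SDNN architecture and Lyapunov-based stable learning rate are not reproduced; only the genome layout (number of nodes, activation gradient) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)[:, None]
y = np.sin(2 * x) + 0.05 * rng.standard_normal(x.shape)   # data to identify

def fitness(n_nodes, slope):
    g = np.random.default_rng(42)              # fixed features per evaluation
    centers = g.uniform(-2, 2, n_nodes)
    H = np.tanh(slope * (x - centers))          # hidden-layer outputs
    w, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit output weights
    mse = float(np.mean((H @ w - y) ** 2))
    return -(mse + 1e-3 * n_nodes)              # penalize network size too

# genome: (number of nodes, activation slope)
pop = [(int(rng.integers(2, 20)), float(rng.uniform(0.1, 5.0))) for _ in range(20)]
for gen in range(30):
    pop.sort(key=lambda p: fitness(*p), reverse=True)
    elite = pop[:10]                            # truncation selection
    children = []
    for _ in range(10):                         # crossover + mutation
        i, j = rng.choice(len(elite), size=2, replace=False)
        n = max(1, (elite[i][0] + elite[j][0]) // 2 + int(rng.integers(-1, 2)))
        s = max(0.05, 0.5 * (elite[i][1] + elite[j][1]) * float(rng.uniform(0.8, 1.25)))
        children.append((n, s))
    pop = elite + children

best = max(pop, key=lambda p: fitness(*p))
print("best (n_nodes, slope):", best)
```

The size penalty in the fitness plays the role of the paper's preference for fewer neurons, trading accuracy against a smaller network.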


대규모 선형 시스템에서의 비집중 반복 학습제어 (Decentralized Iterative Learning Control in Large Scale Linear Dynamic Systems)

  • 황중환;;오상록
• The Transactions of the Korean Institute of Electrical Engineers
    • /
• Vol. 39, No. 10
    • /
    • pp.1098-1107
    • /
    • 1990
  • Decentralized iterative learning control methods are presented for a class of large-scale interconnected linear dynamic systems, in which the iterative learning controller in each subsystem operates exclusively on its local subsystem, with no exchange of information between subsystems. Sufficient conditions for convergence of the algorithms are given, and numerical examples are presented to show their validity. In particular, the algorithms are useful for systems with large uncertainty in the interconnection terms.
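
The decentralized structure can be illustrated on an assumed toy system (not the paper's system class): two interconnected linear subsystems, each updated by its own P-type iterative learning law that uses only the local tracking error, with nothing exchanged between subsystems.

```python
import numpy as np

T = 50
A = np.array([[0.80, 0.10],            # off-diagonal terms are the interconnection
              [0.05, 0.70]])
y_d = np.stack([np.sin(np.linspace(0, np.pi, T)),   # one reference per subsystem
                np.linspace(0.0, 1.0, T)])

def run(u):                            # simulate x(t) = A x(t-1) + u(t), y = x
    x, y = np.zeros(2), np.zeros((2, T))
    for t in range(T):
        x = A @ x + u[:, t]
        y[:, t] = x
    return y

u, gain = np.zeros((2, T)), 0.8
for k in range(200):                   # learning iterations
    e = y_d - run(u)
    for i in (0, 1):                   # purely local update in each subsystem
        u[i] += gain * e[i]

print("max tracking error after learning:", float(np.abs(y_d - run(u)).max()))
```

Each local law contracts its own error along the iteration axis; the interconnection only enters as a disturbance that the learning progressively compensates, which is why no communication between the subsystem controllers is needed.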


A general dynamic iterative learning control scheme with high-gain feedback

  • Kuc, Tae-Yong;Nam, Kwanghee
• Institute of Control, Robotics and Systems: Conference Proceedings
    • /
• Proceedings of the 1989 Korea Automatic Control Conference, Institute of Control, Robotics and Systems; Seoul, Korea; 27-28 Oct. 1989
    • /
    • pp.1140-1145
    • /
    • 1989
  • A general dynamic iterative learning control scheme is proposed for a class of nonlinear systems. By relying on a stabilizing high-gain feedback loop, it is possible to show that the feedforward control input errors form a Cauchy sequence over the iterations, which results in uniform convergence of the system state trajectory to the desired one.


피드백 오차 학습법을 이용한 궤적추종제어 (Trajectory Tracking Control Using a Feedback Error Learning Method)

  • 성형수;이호걸
• Korean Society for Precision Engineering: Conference Proceedings
    • /
• Proceedings of the 1994 Autumn Conference, Korean Society for Precision Engineering
    • /
    • pp.466-471
    • /
    • 1994
  • To make a dynamic system follow a given desired motion trajectory, a new feedback error learning scheme is proposed that is based on the repeatability of the dynamic system's motion. The method is composed of feedforward and feedback control laws. A benefit of this control scheme is that the input pattern generating the desired motion can be formed without estimating the physical parameters of the system dynamics. Numerical simulations show the good performance of the proposed scheme.


DNP 제어기에 의한 비선형 동적 매니퓰레이터 제어 (Nonlinear Dynamic Manipulator Control Using DNP Controller)

  • 조현섭;김희숙;유인호;장성환
• Korean Institute of Electrical Engineers: Conference Proceedings
    • /
• Proceedings of the 1999 Summer Conference B, Korean Institute of Electrical Engineers
    • /
    • pp.764-767
    • /
    • 1999
  • In this paper, a neural network controller called a dynamic neural processor (DNP) is designed to achieve robust and accurate control of automated equipment systems subject to disturbances, system parameter variations, uncertainty, and so forth. The architecture and learning algorithm of the proposed dynamic neural network, the DNP, are described, and computer simulations are provided to demonstrate the effectiveness of the proposed learning method using the DNP.


Path Planning for a Robot Manipulator based on Probabilistic Roadmap and Reinforcement Learning

  • Park, Jung-Jun;Kim, Ji-Hun;Song, Jae-Bok
    • International Journal of Control, Automation, and Systems
    • /
• Vol. 5, No. 6
    • /
    • pp.674-680
    • /
    • 2007
  • The probabilistic roadmap (PRM) method, a popular path planning scheme for manipulators, can find a collision-free path by connecting the start and goal poses through a roadmap constructed by drawing random nodes in the free configuration space. PRM exhibits robust performance in static environments, but its performance is poor in dynamic environments. On the other hand, reinforcement learning, a behavior-based control technique, can deal with uncertainties in the environment. A reinforcement learning agent can establish a policy that maximizes the sum of rewards by selecting the optimal actions in any state through iterative interactions with the environment. In this paper, we propose efficient real-time path planning that combines PRM and reinforcement learning to deal with uncertain dynamic environments and with environments similar to previously learned ones. A series of experiments demonstrates that the proposed hybrid path planner can generate a collision-free path even in dynamic environments in which objects block the pre-planned global path. It is also shown that the hybrid planner can adapt to similar, previously learned environments without significant additional learning.
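
The PRM construction the abstract describes can be sketched in a 2-D point configuration space (the paper's manipulator C-space and the reinforcement-learning layer are not included): sample random collision-free nodes, connect nearby pairs with a straight-line local planner, then search the roadmap. The obstacle layout below is an arbitrary assumption.

```python
import numpy as np
from heapq import heappush, heappop

rng = np.random.default_rng(3)
obstacles = [((0.5, 0.5), 0.2)]                 # (center, radius) of circular obstacles

def free(p):                                    # point collision check
    return all(np.hypot(p[0] - c[0], p[1] - c[1]) > r for c, r in obstacles)

def segment_free(p, q, steps=20):               # straight-line local planner
    return all(free(p + (q - p) * s) for s in np.linspace(0, 1, steps))

start, goal = np.array([0.05, 0.05]), np.array([0.95, 0.95])
nodes = [start, goal] + [p for p in rng.uniform(0, 1, (150, 2)) if free(p)]

# connect pairs within a radius if the connecting segment is collision-free
edges = {i: [] for i in range(len(nodes))}
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        d = float(np.linalg.norm(nodes[i] - nodes[j]))
        if d < 0.25 and segment_free(nodes[i], nodes[j]):
            edges[i].append((j, d)); edges[j].append((i, d))

# Dijkstra over the roadmap from start (node 0) to goal (node 1)
dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
while pq:
    d, i = heappop(pq)
    if i == 1: break
    if d > dist.get(i, np.inf): continue
    for j, w in edges[i]:
        if d + w < dist.get(j, np.inf):
            dist[j] = d + w; prev[j] = i
            heappush(pq, (d + w, j))

path = [1]
while path[-1] != 0:
    path.append(prev[path[-1]])
print("roadmap path length:", round(dist[1], 3), "via", len(path), "nodes")
```

In the hybrid scheme, a planner like this supplies the global path; the learning component is what handles objects that later block roadmap edges.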

DNP을 이용한 로봇 매니퓰레이터의 출력 궤환 적응제어기 설계 (Design of an Adaptive Output Feedback Controller for Robot Manipulators Using DNP)

  • 조현섭
• Korea Academia-Industrial Cooperation Society: Conference Proceedings
    • /
• Proceedings of the 2008 Autumn Conference, Korea Academia-Industrial Cooperation Society
    • /
    • pp.191-196
    • /
    • 2008
  • The intent of this paper is to describe a neural network structure called the dynamic neural processor (DNP) and to examine how it can be used to develop a learning scheme for computing robot inverse kinematic transformations. The architecture and learning algorithm of the proposed dynamic neural network structure, the DNP, are described, and computer simulations are provided to demonstrate the effectiveness of the proposed learning scheme using the DNP.
