• Title/Summary/Keyword: cart inverted pendulum


Analysis of Effects of Time-Delay in an Inverted Pendulum System Using the Controller Area Network

  • Cho, Sung-Min;Hong, Suk-Kyo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1474-1479
    • /
    • 2004
  • In this paper, the design of a networked system using the CAN and an analysis of the effects of time delay in the system are presented. A conventional implementation technique causes many problems because of the amount and complexity of wiring and the resulting maintenance burden. A network system reduces these problems, but it introduces another one: time delay. A time delay within one sampling period does not significantly affect the system, but a delay exceeding the sampling period changes the control frequency and ultimately makes the system unstable. It is verified that the time delays between the individual parts have different effects on the entire system. The results of this paper provide a basis for studying algorithms that reduce the effects of time delay in a system using the CAN.

  • PDF
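
The delay effect the abstract describes can be illustrated with a minimal simulation, a sketch under assumed values rather than the paper's CAN setup: a linearized pendulum θ'' = (g/l)·θ − u is stabilized by a discrete PD law, but the controller only sees measurements that are `delay_steps` samples old. With no delay the loop settles; once the delay grows well past the sampling period, the same gains destabilize it. The gains, pendulum length, and sampling time are all illustrative assumptions.

```python
def simulate(delay_steps, dt=0.01, steps=2000):
    """Linearized pendulum theta'' = (g/l)*theta - u under discrete PD
    feedback computed from measurements `delay_steps` samples old.
    Returns the peak |theta| reached during the run."""
    g_over_l = 9.81            # g/l for a 1 m pendulum (illustrative)
    kp, kd = 40.0, 10.0        # PD gains tuned for the delay-free loop
    theta, omega = 0.05, 0.0   # small initial tilt (rad)
    buf = [(theta, omega)] * (delay_steps + 1)   # measurement delay line
    peak = abs(theta)
    for _ in range(steps):
        th_d, om_d = buf[0]                   # stale measurement
        u = kp * th_d + kd * om_d             # PD control on delayed data
        omega += (g_over_l * theta - u) * dt  # explicit Euler step
        theta += omega * dt
        buf = buf[1:] + [(theta, omega)]
        peak = max(peak, abs(theta))
        if peak > 10.0:                       # clearly diverged; stop early
            break
    return peak
```

With these assumed gains, `simulate(0)` never exceeds the initial 0.05 rad tilt, while `simulate(30)` (a 0.3 s delay, thirty sampling periods) diverges, matching the abstract's point that delay exceeding the sampling time destabilizes the system.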

Learning Control of Inverted Pendulum Using Neural Networks (신경회로망을 이용한 도립전자의 학습제어)

  • Lee, Jea-Kang;Kim, Il-Hwan
    • Journal of Industrial Technology
    • /
    • v.24 no.A
    • /
    • pp.99-107
    • /
    • 2004
  • This paper considers reinforcement learning control with a self-organizing map. Reinforcement learning uses the observable states of the objective system and signals from the interaction of the system and its environment as input data. For fast learning in neural network training, it is necessary to reduce the learning data. In this paper, we use the self-organizing map to partition the observable states. Partitioning the states reduces the amount of learning data used for training the neural networks, and the neural dynamic programming design method is used for the controller. To evaluate the designed reinforcement learning controller, an inverted pendulum on a cart is simulated. The designed controller is composed of a serial connection of a self-organizing map and two Multi-layer Feed-Forward Neural Networks.

  • PDF
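
The state-partitioning step can be sketched with a tiny one-dimensional self-organizing map that quantizes continuous (angle, angular velocity) pairs into a handful of discrete codes. This is an illustrative toy, not the paper's network; the map size, learning-rate schedule, and state ranges are assumptions.

```python
import math
import random

random.seed(0)

K = 10   # number of SOM nodes (assumed map size)
nodes = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(K)]

def winner(x):
    """Index of the node closest to state x (squared Euclidean distance)."""
    return min(range(K), key=lambda i: (nodes[i][0] - x[0]) ** 2
                                       + (nodes[i][1] - x[1]) ** 2)

def train(samples, epochs=20):
    """Classic SOM updates: pull the winner and its grid neighbours toward x."""
    for ep in range(epochs):
        lr = 0.5 * (1 - ep / epochs)                 # decaying learning rate
        radius = max(1.0, (K / 2) * (1 - ep / epochs))  # shrinking neighbourhood
        for x in samples:
            w = winner(x)
            for i in range(K):
                h = math.exp(-((i - w) ** 2) / (2 * radius ** 2))
                nodes[i][0] += lr * h * (x[0] - nodes[i][0])
                nodes[i][1] += lr * h * (x[1] - nodes[i][1])

# Continuous (angle, angular velocity) states drawn from an assumed control range.
states = [(random.uniform(-0.5, 0.5), random.uniform(-2.0, 2.0)) for _ in range(400)]
train(states)
codes = [winner(s) for s in states]   # each continuous state -> a discrete code
```

Downstream, a reinforcement learning controller would see only the winning node index, so the neural networks train on at most K distinct inputs instead of raw continuous states.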

A Reinforcement Learning with CMAC

  • Kwon, Sung-Gyu
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.6 no.4
    • /
    • pp.271-276
    • /
    • 2006
  • To implement generalization of value functions in Adaptive Search Element (ASE) reinforcement learning, a CMAC (Cerebellar Model Articulation Controller) is integrated into the ASE controller. The ASE-reinforcement learning scheme is briefly studied to discuss how the CMAC is integrated into the ASE controller. Neighbourhood Sequential Training for the CMAC is utilized to establish the look-up table and to produce discrete control outputs. In computer simulation, an ASE controller and a pair of ASE-CMAC neural networks are trained to balance the inverted pendulum on a cart. The number of trials until the controllers are established and the learning performance of the controllers are evaluated, showing that the generalization ability of the CMAC speeds up ASE-reinforcement learning enough to realize the cart-pole control system.
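
CMAC's generalization comes from overlapping tilings: every input activates one cell per tiling, and nearby inputs share most of their active cells, so an update at one point also improves its neighbours. A minimal one-dimensional sketch (tile width, tiling count, learning rate, and the sine target are all assumptions; the paper's Neighbourhood Sequential Training is not reproduced here):

```python
import math
from collections import defaultdict

N_TILINGS = 8
TILE_WIDTH = 0.25
weights = defaultdict(float)   # sparse look-up table keyed by (tiling, tile)

def active_cells(x):
    """One active cell per tiling; tilings are offset copies of each other."""
    return [(t, math.floor((x + t * TILE_WIDTH / N_TILINGS) / TILE_WIDTH))
            for t in range(N_TILINGS)]

def predict(x):
    """CMAC output: sum of the weights of the active cells."""
    return sum(weights[c] for c in active_cells(x))

def train(samples, epochs=30, lr=0.3):
    """LMS updates spread the prediction error evenly over the active cells."""
    for _ in range(epochs):
        for x, y in samples:
            err = y - predict(x)
            for c in active_cells(x):
                weights[c] += lr * err / N_TILINGS

# Learn sin(x) from coarse samples; overlap fills in between them.
data = [(i * 0.1, math.sin(i * 0.1)) for i in range(63)]
train(data)
```

`predict(1.03)` lands close to `sin(1.03)` even though 1.03 was never trained on, because its active cells overlap those of the trained points around 1.0.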

Stabilized Control of an Inverted Pendulum Cart System Using the Optimal Regulator (최적 Regulator를 이용한 도립진자 시스템의 안정화 제어)

  • 박영식;최부귀
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.15 no.4
    • /
    • pp.315-323
    • /
    • 1990
  • A design technique for a dynamic stabilization controller for the intrinsically unstable inverted pendulum system is introduced. Mathematical modelling that accounts for the complex nonlinearity, together with the stabilized control theory presented by C. D. Johnson, is adapted to this system using the state-space approach. A stabilization controller of the designed optimal-regulator type, which tracks quickly and accurately counteracts all effects of constant disturbances and parametric variations, is simulated and successfully implemented on a microcomputer.

  • PDF
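
For the linearized pendulum, an optimal-regulator design of this kind reduces to solving a discrete Riccati equation. A compact sketch for a two-state model x = (θ, θ'), with the 2×2 recursion written out by hand (the paper treats the full cart-pendulum system; the model, weights, and sampling time below are simplified assumptions):

```python
def lqr_gain(dt=0.01, g_over_l=9.81, q1=1.0, q2=1.0, r=0.1, iters=3000):
    """Discrete LQR gain for x = (theta, theta_dot), x+ = A x + B u with
    A = [[1, dt], [g/l*dt, 1]] and B = [[0], [dt]].  Iterates the Riccati
    recursion P <- Q + A'PA - A'PB (r + B'PB)^-1 B'PA with the 2x2 algebra
    expanded, then returns (k1, k2) so that u = -(k1*theta + k2*theta_dot)."""
    a = g_over_l * dt
    p11, p12, p22 = q1, 0.0, q2              # start the iterate at Q
    for _ in range(iters):
        s = r + dt * dt * p22                # r + B'PB (scalar)
        m1 = p12 + a * p22                   # B'PA = dt * [m1, m2]
        m2 = p12 * dt + p22
        n11 = p11 + 2 * a * p12 + a * a * p22   # entries of A'PA
        n12 = dt * (p11 + a * p12) + m1
        n22 = p11 * dt * dt + 2 * p12 * dt + p22
        p11 = q1 + n11 - dt * dt * m1 * m1 / s
        p12 = n12 - dt * dt * m1 * m2 / s
        p22 = q2 + n22 - dt * dt * m2 * m2 / s
    s = r + dt * dt * p22
    return dt * (p12 + a * p22) / s, dt * (p12 * dt + p22) / s

def regulate(k1, k2, dt=0.01, g_over_l=9.81, steps=3000):
    """Roll out the closed loop from a 0.1 rad tilt; returns the final |theta|."""
    theta, omega = 0.1, 0.0
    for _ in range(steps):
        u = -(k1 * theta + k2 * omega)
        theta, omega = (theta + dt * omega,
                        omega + dt * (g_over_l * theta + u))
    return abs(theta)
```

LQR theory guarantees the converged gain stabilizes the discretized model, which `regulate` confirms by driving the tilt back to zero.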

Asymptotic Output Tracking of Non-minimum Phase Nonlinear Systems through Learning Based Inversion (학습제어를 이용한 비최소 위상 비선형 시스템의 점근적 추종)

  • Kim, Nam Guk
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.21 no.8
    • /
    • pp.32-42
    • /
    • 2022
  • Asymptotic tracking of a non-minimum phase nonlinear system has been a popular topic in control theory and applications. In this paper, we propose a new control scheme that achieves asymptotic output tracking of periodic trajectories in a non-minimum phase nonlinear system through iterative learning control with stable inversion. The proposed design method is robust to parameter uncertainties and periodic external disturbances since it is based on iterative learning. The performance of the proposed algorithm was demonstrated through simulation results using a typical non-minimum phase nonlinear system, an inverted pendulum on a cart.
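
The iterative-learning idea behind the scheme: repeat the same finite-horizon trajectory, keep the previous trial's input, and correct it with the previous trial's error. A minimal P-type ILC sketch on a simple stable first-order plant (the paper's contribution, stable inversion for non-minimum phase plants, is deliberately omitted here; the plant and learning gain are assumptions):

```python
import math

T = 100                                       # samples per trial (one period)
ref = [math.sin(2 * math.pi * t / T) for t in range(1, T + 1)]

def run_trial(u):
    """Stable plant y[t+1] = 0.2*y[t] + u[t]; the output is recorded from t=1."""
    y, out = 0.0, []
    for t in range(T):
        y = 0.2 * y + u[t]
        out.append(y)
    return out

def ilc(iterations=10, gamma=0.5):
    """P-type ILC: u_{k+1}[t] = u_k[t] + gamma * e_k[t], with the error
    aligned one step ahead since u[t] first affects the output at t+1.
    Returns the peak tracking error of each iteration."""
    u = [0.0] * T
    peaks = []
    for _ in range(iterations):
        e = [r - y for r, y in zip(ref, run_trial(u))]
        peaks.append(max(abs(v) for v in e))
        u = [ui + gamma * ei for ui, ei in zip(u, e)]
    return peaks
```

Since |1 − γ·CB| = 0.5 < 1 for this plant and gain, the peak tracking error contracts from one trial to the next.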

Human-like Balancing Motion Generation based on Double Inverted Pendulum Model (더블 역 진자 모델을 이용한 사람과 같은 균형 유지 동작 생성 기술)

  • Hwang, Jaepyung;Suh, Il Hong
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.2
    • /
    • pp.239-247
    • /
    • 2017
  • The purpose of this study is to develop a motion generation technique based on a double inverted pendulum model (DIPM) that learns and reproduces humanoid robot (or virtual human) motions while keeping its balance in a pattern similar to a human. The DIPM consists of a cart and two inverted pendulums connected in series. Although the structure resembles the human upper and lower body, the balancing motion of the DIPM differs from that of a human. We therefore use motion capture data to obtain a reference motion that keeps the balance in the presence of external force. The control parameters of the proposed method are learned in advance by an optimization technique that minimizes the difference between the motion of the DIPM and the reference motion. The learned control parameters are then reused as the input of a linear quadratic regulator that generates the control signal of the DIPM, producing a motion pattern similar to the reference. To verify this, experiments with a virtual human were conducted to generate naturally balanced motion.

A Design Technique for Stabilization of Inverted Pendulum Cart System on the Inclined Rail (경사 레일상에 있는 도립진자 장치의 안정화 설계기법)

  • 박영식;최부귀;윤병도
    • The Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.3 no.4
    • /
    • pp.62-69
    • /
    • 1989
  • A design technique for the dynamic stabilization controller of the intrinsically unstable inverted pendulum system, which is widely applied in electrical installations such as portable electric saws, training machines, automobile interlock devices, various chemical analysis instruments, and industrial robot systems, is introduced. Using mathematical modelling that accounts for the complex nonlinear dynamics and the disturbance-accommodating control theory presented by C. D. Johnson, an optimal-regulator-type stabilization controller was designed, and the computer simulation and experimental results were satisfactory.

  • PDF

H∞ Fuzzy Dynamic Output Feedback Controller Design with Pole Placement Constraints

  • Kim, Jongcheol;Sangchul Won
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.176.5-176
    • /
    • 2001
  • This paper presents a fuzzy dynamic output feedback controller design method for a Parallel Distributed Compensation (PDC)-type Takagi-Sugeno (T-S) model-based fuzzy dynamic system with H∞ performance and additional constraints on the closed-loop pole placement. Design conditions for this controller are obtained in terms of linear matrix inequalities (LMIs). The proposed fuzzy controller satisfies the disturbance rejection performance and the desired transient response. The proposed design method is verified on an inverted pendulum with a cart.

  • PDF

Stabilization Control of the Nonlinear System using an RVEGA-based Optimal Fuzzy Controller (RVEGA 최적 퍼지 제어기를 이용한 비선형 시스템의 안정화 제어에 관한 연구)

  • 이준탁;정동일
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.21 no.4
    • /
    • pp.393-403
    • /
    • 1997
  • In this paper, we propose an optimal identification method that identifies the membership functions and the fuzzy rules of a stabilization controller for a nonlinear system by an RVEGA (Real Variable Elitist Genetic Algorithm). Although fuzzy logic controllers have been successfully applied to industrial plants, most of them rely heavily on an expert's empirical knowledge, so it is very difficult to determine the linguistic state-space partitions and the parameters of the membership functions and to extract the control rules. Most conventional approaches also have the drastic defect of being trapped in a local minimum. The proposed RVEGA, which resembles the processes of natural evolution, can simultaneously optimize the fuzzy rules and the parameters of the membership functions. The validity of the RVEGA-based fuzzy controller was proved through applications to the stabilization problems of an inverted pendulum system with highly nonlinear dynamics. The proposed RVEGA-based fuzzy controller has a swing-up control mode (swing-up controller) and a stabilization mode (stabilization controller), and moves the pendulum from an initial stable equilibrium point, with the cart in an arbitrary position, to the unstable equilibrium point at the center of the rail. The stabilization controller is composed of a hierarchical fuzzy inference structure: the lower-level inference determines the virtual equilibrium point, and the higher-level inference performs position control of the cart according to the previously inferred virtual equilibrium point. The experimental apparatus was implemented with a DT-2801 board with A/D and D/A converters and a PC-586 microprocessor.

  • PDF
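
The evolutionary tuning loop can be sketched with a small real-coded elitist GA. For brevity this toy evolves two PD gains for a linearized pendulum instead of fuzzy membership parameters and rules, but the loop shape (elitist survival, arithmetic crossover, Gaussian mutation, simulation-based fitness) is the same; every constant here is an illustrative assumption:

```python
import random

random.seed(1)

def cost(kp, kd, dt=0.01, steps=400):
    """Quadratic cost for theta'' = 9.81*theta - u under PD gains (kp, kd);
    a run where the pendulum falls past 10 rad is heavily penalized."""
    theta, omega, j = 0.1, 0.0, 0.0
    for _ in range(steps):
        u = kp * theta + kd * omega
        omega += (9.81 * theta - u) * dt
        theta += omega * dt
        j += theta * theta + 0.001 * u * u
        if abs(theta) > 10.0:
            return 1e9
    return j

def evolve(pop_size=30, generations=40):
    """Real-coded elitist GA: keep the best third, refill the population
    with arithmetic-crossover children perturbed by Gaussian mutation."""
    pop = [[random.uniform(0.0, 50.0), random.uniform(0.0, 20.0)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: cost(*g))
        elite = pop[:pop_size // 3]          # elitism: best third survives
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            w = random.random()              # arithmetic crossover weight
            children.append([w * a[i] + (1 - w) * b[i] + random.gauss(0.0, 1.0)
                             for i in range(2)])
        pop = elite + children
    return min(pop, key=lambda g: cost(*g))
```

Gain pairs that let the pendulum fall receive a huge penalty cost, so elitist selection quickly concentrates the population in the stabilizing region.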

Reinforcement Learning Control using Self-Organizing Map and Multi-layer Feed-Forward Neural Network

  • Lee, Jae-Kang;Kim, Il-Hwan
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.142-145
    • /
    • 2003
  • Many control applications using neural networks need a priori information about the objective system, but it is impossible to obtain exact information about the objective system in the real world. To solve this problem, several control methods have been proposed; reinforcement learning control using a neural network is one of them. Basically, reinforcement learning control does not need a priori information about the objective system. This method uses the reinforcement signal from the interaction of the objective system and its environment, together with the observable states of the objective system, as input data. However, many methods take too much time to learn for real-world application, so we focus on faster learning. Two data types are used for reinforcement learning. One is the reinforcement signal, which has only two fixed scalar values assigned to the success and failure states. The other is the observable state data. A real-world system has infinitely many states, so the number of observable state data is also infinite, which requires too much learning time for real-world application. We therefore reduce the number of observable states by classifying them with a Self-Organizing Map, and we use neural dynamic programming for the controller design. An inverted pendulum on a cart is simulated. The failure signal is used as the reinforcement signal; it occurs when the pendulum angle or the cart position deviates from the defined control range. The control objective is to keep the pole balanced and the cart centered. Four states, namely the position and velocity of the cart and the angle and angular velocity of the pole, are used as the state signal. The learning controller is composed of a serial connection of a Self-Organizing Map and two Multi-layer Feed-Forward Neural Networks.

  • PDF