• Title/Summary/Keyword: learning with a robot


Implementation of a Learning Controller for Repetitive Gate Control of Biped Walking Robot (이족 보행 로봇의 반복 걸음새 제어를 위한 학습제어기의 구현)

  • Lim, Dong-Cheol; Oh, Sung-Nam; Kuc, Tae-Yong
    • Proceedings of the KIEE Conference / 2005.10b / pp.594-596 / 2005
  • This paper presents a learning controller for repetitive gait control of a biped robot. The learning control scheme consists of a feedforward learning rule and a linear feedback control input that stabilizes the learning system. The feasibility of learning control for biped robotic motion is shown via dynamic simulation and experimental results with a 24-DOF biped robot.
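
A minimal sketch of the learning scheme this abstract describes: the feedforward command is refined across gait cycles from the trajectory error, while a linear feedback term stabilizes each execution. The gains L and Kp are illustrative placeholders, not values from the paper.

```python
import numpy as np

def ilc_step(u_ff, q_des, q_meas, L=0.5, Kp=2.0):
    """One iteration of a feedforward learning rule with linear feedback.

    u_ff   : feedforward command from the previous gait cycle (array over time)
    q_des  : desired joint trajectory for the cycle
    q_meas : measured joint trajectory from the last execution
    L, Kp  : illustrative learning and feedback gains (assumptions)
    """
    e = q_des - q_meas        # trajectory-following error
    u_ff_next = u_ff + L * e  # feedforward learning update
    u_fb = Kp * e             # linear feedback for stabilization
    return u_ff_next, u_fb    # total command would be u_ff_next + u_fb
```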


Intelligent Walking Modeling of Humanoid Robot Using Learning Based Neuro-Fuzzy System (학습기반 뉴로-퍼지 시스템을 이용한 휴머노이드 로봇의 지능보행 모델링)

  • Park, Gwi-Tae; Kim, Dong-Won
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.358-364 / 2007
  • Intelligent walking modeling of a humanoid robot using a learning-based neuro-fuzzy system is presented in this paper. The walking pattern, i.e., the trajectory of the zero moment point (ZMP), is an important criterion for the balance of walking robots, but its complex dynamics makes robot control difficult. In addition, it is difficult to generate stable and natural walking motion for a robot. To handle these difficulties and capture the empirical laws of the humanoid robot, we model a practical humanoid robot using a neuro-fuzzy system based on two types of natural motions: walking trajectories on a flat floor and on an ascent. The learning-based neuro-fuzzy system employed has good learning capability and computational performance. The results from the neuro-fuzzy system are compared with a previous approach.
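
The abstract does not give the model's exact structure; the following is a generic Takagi-Sugeno (ANFIS-style) sketch of a learning-based neuro-fuzzy regressor, where the input x might be a gait-phase or joint-state vector and the target a ZMP coordinate. All sizes and the training rule are illustrative assumptions.

```python
import numpy as np

class TinyNeuroFuzzy:
    """ANFIS-style sketch: Gaussian rule firing, linear consequents,
    consequents trained by gradient descent on ZMP trajectory data."""

    def __init__(self, n_rules, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.normal(size=(n_rules, n_inputs))      # membership centers
        self.widths = np.ones((n_rules, n_inputs))               # membership widths
        self.coeffs = rng.normal(size=(n_rules, n_inputs + 1))   # linear consequents

    def forward(self, x):
        # rule firing strengths from Gaussian membership functions
        w = np.exp(-np.sum(((x - self.centers) / self.widths) ** 2, axis=1))
        w = w / (w.sum() + 1e-9)                                 # normalize
        y_rule = self.coeffs[:, :-1] @ x + self.coeffs[:, -1]    # per-rule output
        return w @ y_rule, w

    def train_step(self, x, target, lr=0.01):
        y, w = self.forward(x)
        err = y - target
        # gradient of the squared error w.r.t. consequent parameters
        self.coeffs[:, :-1] -= lr * err * np.outer(w, x)
        self.coeffs[:, -1] -= lr * err * w
        return err ** 2
```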

Co-Operative Strategy for an Interactive Robot Soccer System by Reinforcement Learning Method

  • Kim, Hyoung-Rock; Hwang, Jung-Hoon; Kwon, Dong-Soo
    • International Journal of Control, Automation, and Systems / v.1 no.2 / pp.236-242 / 2003
  • This paper presents a cooperation strategy between a human operator and autonomous robots for an interactive robot soccer game. The interactive robot soccer game has been developed to allow humans to join the game dynamically and to reinforce its entertainment characteristics. To make these games more interesting, a cooperation strategy between the humans and autonomous robots on a team is very important. Strategies can be pre-programmed or learned by the robots themselves with learning or evolving algorithms. Since the robot soccer system is hard to model and its environment changes dynamically, it is very difficult to pre-program cooperation strategies between robot agents. Q-learning, one of the most representative reinforcement learning methods, is known to be effective for solving problems dynamically without explicit knowledge of the system. Therefore, a Q-learning-based method has been utilized in our research. Prior to applying Q-learning, state variables describing the game situation and the action sets of the robots were defined. After the learning process, the human operator could play the game more easily. To evaluate the usefulness of the proposed strategy, simulations and games were carried out.
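
For reference, a minimal tabular sketch of the Q-learning update the paper relies on; the state and action encodings for the soccer game are left abstract here, since the paper defines its own.

```python
from collections import defaultdict

# Q-table over (state, action); the state/action discretization for the
# soccer game is whatever the designer chooses (illustrative only).
Q = defaultdict(float)

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard Q-learning update: no model of the system is required."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def greedy_action(s, actions):
    """Pick the action with the highest learned value in state s."""
    return max(actions, key=lambda a: Q[(s, a)])
```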

Cooperative Robot for Table Balancing Using Q-learning (테이블 균형맞춤 작업이 가능한 Q-학습 기반 협력로봇 개발)

  • Kim, Yewon; Kang, Bo-Yeong
    • The Journal of Korea Robotics Society / v.15 no.4 / pp.404-412 / 2020
  • Everyday tasks typically involve at least two people moving objects such as tables and beds, and the balance of such an object changes based on each person's actions. However, many previous studies performed such tasks with robots alone, without factoring in human cooperation. Therefore, in this paper we propose a cooperative robot for table balancing using Q-learning that enables cooperative work between a human and a robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action, without requiring high-performance equipment. The classification of human actions uses deep learning, specifically AlexNet, and achieves an accuracy of 96.9% under 10-fold cross-validation. The Q-learning experiment was carried out over 2,000 episodes with 200 trials, and the overall results show that the Q function converged stably within this number of episodes. This stable convergence determined the Q-learning policies for the robot's actions. A video of the robot cooperating with a human on the table-balancing task using the proposed Q-learning can be found at http://ibot.knu.ac.kr/videocooperation.html.
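
The abstract names AlexNet for human-action classification. A hedged sketch of how such a classifier could be set up with torchvision follows; the number of action classes and the training details are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical setup: fine-tune AlexNet to classify the human's action
# from an image of the table's state; the class count is illustrative.
NUM_ACTIONS = 4

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, NUM_ACTIONS)  # replace the final layer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_batch(images, labels):
    """images: (N, 3, 224, 224) tensor; labels: (N,) action-class indices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```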

Development of Humanoid Robot HUMIC and Reinforcement Learning-based Robot Behavior Intelligence using Gazebo Simulator (휴머노이드 로봇 HUMIC 개발 및 Gazebo 시뮬레이터를 이용한 강화학습 기반 로봇 행동 지능 연구)

  • Kim, Young-Gi; Han, Ji-Hyeong
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.260-269 / 2021
  • Verifying performance or conducting experiments with actual robots incurs significant costs, such as robot hardware, experimental space, and time; a simulation environment is therefore an essential tool in robotics research. In this paper, we develop the HUMIC simulator using ROS and Gazebo. HUMIC is a humanoid robot developed by HCIR Lab. for human-robot interaction; its upper body is similar to a human's, with a head, body, waist, arms, and hands. Gazebo is an open-source three-dimensional robot simulator that can simulate robots accurately and efficiently in simulated indoor and outdoor environments. We develop a GUI so that users can easily run and manipulate the HUMIC simulator, and we release both the simulator and the GUI for other robotics researchers to use. We successfully test the developed HUMIC simulator on object detection and reinforcement learning-based navigation tasks. As further study, we plan to develop robot behavior intelligence based on reinforcement learning algorithms using the developed simulator and then apply it to the real robot.
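
A speculative sketch of commanding one simulated HUMIC joint through ROS; the topic name is hypothetical, since the actual interface is defined by the authors' released package.

```python
import rospy
from std_msgs.msg import Float64

# Hypothetical topic for one HUMIC arm joint position controller; the real
# topic names come from the authors' simulator configuration.
rospy.init_node('humic_demo')
pub = rospy.Publisher('/humic/right_shoulder_position_controller/command',
                      Float64, queue_size=10)

rate = rospy.Rate(10)  # 10 Hz command loop
angle = 0.0
while not rospy.is_shutdown():
    pub.publish(Float64(data=angle))  # command the joint angle (rad)
    angle = min(angle + 0.01, 1.0)    # slowly raise the arm, then hold
    rate.sleep()
```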

Obstacle Avoidance of Mobile Robot Using Reinforcement Learning in Virtual Environment (가상 환경에서의 강화학습을 활용한 모바일 로봇의 장애물 회피)

  • Lee, Jong-lark
    • Journal of Internet of Things and Convergence / v.7 no.4 / pp.29-34 / 2021
  • To apply reinforcement learning to a robot in a real environment, simulation in a virtual environment is necessary because numerous learning iterations are required. In addition, it is difficult to run a computation-heavy learning algorithm on a robot with low-spec hardware. In this study, ML-Agents, a reinforcement learning framework provided by Unity, was used as the virtual simulation environment to apply reinforcement learning to the obstacle-avoidance problem of mobile robots with low-spec hardware. A DQN supported by ML-Agents was adopted as the reinforcement learning algorithm, and the results on a real robot show that collisions occurred less than twice per minute.
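
ML-Agents ships its own trainers, so the following is only a generic PyTorch sketch of the DQN loss the paper names, with illustrative network sizes and a batch assumed as (obs, actions, rewards, next_obs, done) tensors.

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Small Q-network: ray-cast style observations in, Q-value per action out."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One-step TD target with a frozen target network (standard DQN)."""
    obs, actions, rewards, next_obs, done = batch  # done: float tensor in {0, 1}
    q = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_obs).max(dim=1).values
        target = rewards + gamma * (1 - done) * q_next
    return nn.functional.mse_loss(q, target)
```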

Implementation of an Intelligent Controller for Biped Walking Robot using Genetic Algorithm and Learning Control (유전자 알고리즘과 학습제어를 이용한 이족보행 로봇의 지능 제어기 구현)

  • Kho, Jaw-Won; Lim, Dong-Cheol
    • The Transactions of the Korean Institute of Electrical Engineers P / v.55 no.2 / pp.83-88 / 2006
  • This paper proposes a method that minimizes the consumed energy by searching for the optimal locations of the mass centers of the biped robot's links using a genetic algorithm. It also presents a learning controller for repetitive gait control of the biped robot. The learning control scheme consists of a feedforward learning rule and a linear feedback control input that stabilizes the learning system. The feasibility of learning control for biped robotic motion is shown via computer simulation and experimental results with a 24-DOF biped walking robot.
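
A minimal genetic-algorithm sketch of the mass-center search described above. The fitness here is a stub: the paper evaluates the energy consumed by the gait via the robot's dynamics, which is not reproduced here, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LINKS, POP, GENS = 6, 40, 100  # illustrative problem and population sizes

def consumed_energy(mass_centers):
    """Placeholder fitness: the paper computes gait energy from the robot
    dynamics; this stand-in cost only shows the interface."""
    return float(np.sum(mass_centers ** 2))

pop = rng.uniform(-0.1, 0.1, size=(POP, N_LINKS))  # candidate link CoM offsets
for _ in range(GENS):
    fitness = np.array([consumed_energy(p) for p in pop])
    order = np.argsort(fitness)                  # lower energy is better
    parents = pop[order[:POP // 2]]              # truncation selection
    idx = rng.integers(0, len(parents), size=(POP, 2))
    pop = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2  # averaging crossover
    pop += rng.normal(scale=0.01, size=pop.shape)        # Gaussian mutation

best = pop[np.argmin([consumed_energy(p) for p in pop])]
```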

Comparative Analysis of Machine Learning Algorithms for Healthy Management of Collaborative Robots (협동로봇의 건전성 관리를 위한 머신러닝 알고리즘의 비교 분석)

  • Kim, Jae-Eun; Jang, Gil-Sang; Lim, KuK-Hwa
    • Journal of the Korea Safety Management & Science / v.23 no.4 / pp.93-104 / 2021
  • In this paper, we propose a method for diagnosing overload and working load of collaborative robots through a performance analysis of machine learning algorithms. To this end, an experiment was conducted in which a collaborative robot with a payload capacity of 10 kg performed a pick-and-place operation while the payload weight was varied. Motor torque, position, and speed data generated by the robot controller were collected, and t-tests and F-tests revealed different characteristics for each weight relative to the 10 kg payload. To predict overload and working load from the collected data, machine learning algorithms including neural network, decision tree, random forest, and gradient boosting models were tested. The neural network, with an explanatory power of more than 99.6%, showed the best prediction and classification performance. The practical contribution of this study is that it suggests a way to collect the data required for analysis from the robot without attaching additional sensors, and demonstrates the usefulness of machine learning algorithms for diagnosing robot overload and working load.
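
A hedged scikit-learn sketch of the four-model comparison the abstract describes; the feature matrix X (torque, position, speed) and load labels y are assumed to be prepared elsewhere, and hyperparameters are defaults rather than the paper's settings.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# The four model families named in the abstract, with default settings.
models = {
    "Neural Network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(),
}

def compare(X, y):
    """Report cross-validated accuracy for each candidate model."""
    for name, clf in models.items():
        scores = cross_val_score(clf, X, y, cv=5)  # 5-fold CV accuracy
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```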

Robot learning control with fast convergence (빠른 수렴성을 갖는 로보트 학습제어)

  • 양원영; 홍호선
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1988.10a / pp.67-71 / 1988
  • We present an algorithm that uses trajectory-following errors to iteratively improve a feedforward command to a robot. It has been shown that when the manipulator handles an unknown object, the P-type learning algorithm can make the trajectory converge to the desired path, and that the proposed learning control algorithm performs better than other types of learning control algorithms. A numerical simulation of a three-degree-of-freedom manipulator such as the PUMA-560 robot illustrates the effectiveness of the proposed learning algorithm.
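
For concreteness, a toy numerical sketch of a P-type learning update on a stable first-order plant; the plant is an assumption standing in for the 3-DOF manipulator. The feedforward command is corrected by the shifted trajectory error each trial, and the error norm shrinks across iterations.

```python
import numpy as np

T, ITERS, GAMMA = 100, 10, 0.5            # horizon, trials, P-type learning gain
q_des = np.sin(np.linspace(0, np.pi, T))  # desired trajectory
u = np.zeros(T)                           # feedforward command, refined per trial

def plant(u):
    """Toy stable first-order plant standing in for the manipulator dynamics."""
    q = np.zeros(T)
    for t in range(1, T):
        q[t] = 0.2 * q[t - 1] + u[t - 1]
    return q

for k in range(ITERS):
    e = q_des - plant(u)                  # trajectory-following error
    u[:-1] += GAMMA * e[1:]               # P-type update (one-step error shift)
    print(f"iteration {k}: error norm = {np.linalg.norm(e):.4f}")
```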


Robust feedback error learning neural networks control of robot systems with guaranteed stability

  • Kim, Sung-Woo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1996.10a / pp.197-200 / 1996
  • This paper considers feedback error learning neural networks for robot manipulator control. Feedback error learning, proposed by Kawato [2,3,5], is a useful learning control scheme if the nonlinear subsystems (or basis functions) constituting the robot dynamic equation are known exactly. In practice, however, unmodeled uncertainties and disturbances deteriorate the control performance. Hence, we present a robust feedback error learning scheme that adds a robustifying control signal to overcome such effects. After the learning rule is derived, stability is analyzed using the Lyapunov method.
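
A minimal sketch of feedback error learning as the abstract describes it, with a linear-in-parameters network over assumed basis functions: the feedback torque serves as the teaching signal for the feedforward network. The gains are illustrative, and the paper's added robustifying signal is omitted here.

```python
import numpy as np

Kp, Kd, LR = 20.0, 5.0, 1e-3  # feedback gains and learning rate (illustrative)

def basis(q_des, qd_des, qdd_des):
    """Assumed basis functions of the desired motion, standing in for the
    nonlinear subsystems of the robot dynamic equation."""
    return np.array([qdd_des, qd_des, np.sin(q_des), 1.0])

w = np.zeros(4)  # learned weights of the linear-in-parameters network

def control_step(q, qd, q_des, qd_des, qdd_des):
    """One control step of feedback error learning for a single joint."""
    global w
    phi = basis(q_des, qd_des, qdd_des)
    u_ff = w @ phi                                  # network feedforward torque
    u_fb = Kp * (q_des - q) + Kd * (qd_des - qd)    # feedback torque
    w += LR * u_fb * phi                            # feedback error as teacher
    return u_ff + u_fb                              # total command to the robot
```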
