• Title/Summary/Keyword: Robot Learning

UTAUT Model of Pre-service Teachers for Telepresence Robot-Assisted Learning (원격연결형 로봇보조학습에 대한 예비교사의 통합기술수용모델)

  • Han, Jeong-Hye
    • Journal of Creative Information Culture
    • /
    • v.4 no.2
    • /
    • pp.95-101
    • /
    • 2018
  • With the introduction of robot-assisted learning that uses social or telepresence robots in language learning and special education, research on technology acceptance models for robot-assisted learning is also being conducted. The unified theory of acceptance and use of technology (UTAUT) has been studied for intelligent robots, but research on tele-operated robots remains insufficient. The purpose of this paper is to estimate a UTAUT model from pre-service teachers who experienced telepresence robot-assisted learning of the kind that could be used in future schools. The estimated UTAUT model consists of more concise factors than that for social robots, and the importance of perceived enjoyment is higher. In other words, the pre-service teachers showed significant acceptance of tele-operated robots, with enjoyment enhanced by the robot's mobility, communication, and the touchable appearance of its face and body.

Behavior Learning and Evolution of Individual Robot for Cooperative Behavior of Swarm Robot System (군집 로봇의 협조 행동을 위한 로봇 개체의 행동학습과 진화)

  • Sim, Kwee-Bo;Lee, Dong-Wook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.131-137
    • /
    • 2006
  • In swarm robot systems, each robot must behave on its own according to its state and environment and, if necessary, cooperate with other robots to carry out a given task. It is therefore essential that each robot have both learning and evolution abilities to adapt to dynamic environments. In this paper, a new learning and evolution method based on reinforcement learning with delayed reward and a distributed genetic algorithm is proposed for behavior learning and evolution of collective autonomous mobile robots. Reinforcement learning with delayed reward remains useful even when there is no immediate reward, and through the distributed genetic algorithm, in which robots exchange chromosomes acquired under different environments by communication, each robot can improve its behavior ability. In particular, to improve the performance of evolution, selective crossover that exploits the characteristics of reinforcement learning is adopted. We verify the effectiveness of the proposed method by applying it to a cooperative search problem.
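
The entry above combines delayed-reward reinforcement learning with a distributed genetic algorithm whose crossover is biased by reinforcement values. A minimal sketch, assuming a discretized state space, an ε-greedy Q-learning rule, and a selective crossover that favors the parent with the higher accumulated reward; the chromosome encoding and the exact crossover rule are illustrative, not taken from the paper.

```python
import random

import numpy as np

N_STATES, N_ACTIONS, CHROM_LEN = 16, 4, 8
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1


class SwarmRobot:
    """One robot: Q-learning for behavior, a chromosome for evolution."""

    def __init__(self):
        self.q = np.zeros((N_STATES, N_ACTIONS))
        self.chromosome = np.random.rand(CHROM_LEN)   # hypothetical behavior parameters
        self.fitness = 0.0                            # accumulated (possibly delayed) reward

    def act(self, state):
        if random.random() < EPS:
            return random.randrange(N_ACTIONS)
        return int(np.argmax(self.q[state]))

    def learn(self, s, a, r, s_next):
        # Standard Q-learning update; a delayed reward simply arrives as r = 0
        # on intermediate steps and a nonzero value when the task completes.
        td_target = r + GAMMA * np.max(self.q[s_next])
        self.q[s, a] += ALPHA * (td_target - self.q[s, a])
        self.fitness += r


def selective_crossover(parent_a, parent_b):
    """Bias each gene toward the parent with the higher accumulated reward."""
    bias = 0.75 if parent_a.fitness >= parent_b.fitness else 0.25
    mask = np.random.rand(CHROM_LEN) < bias
    child = SwarmRobot()
    child.chromosome = np.where(mask, parent_a.chromosome, parent_b.chromosome)
    return child
```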

Deep Reinforcement Learning in ROS-based autonomous robot navigation

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.47-49
    • /
    • 2022
  • Robot navigation has seen major improvement since the rediscovery of the potential of Artificial Intelligence (AI) and the attention it has garnered in research circles. A notable achievement in the area was the application of Deep Learning (DL) in computer vision, with outstanding everyday applications such as face recognition, object detection, and more. However, robotics in general still depends on human input in areas such as localization and navigation. In this paper, we propose a case study of robot navigation based on deep reinforcement learning (DRL). We examine the benefits of switching from traditional ROS-based navigation algorithms to machine learning approaches and methods. We describe the state of the art by introducing the concepts of Reinforcement Learning (RL), Deep Learning (DL), and DRL before focusing on visual navigation based on DRL. The case study is a prelude to real-life deployment in which a mobile navigation agent learns to navigate unknown areas.

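The entry above argues for moving ROS navigation from classical planners to DRL. As an illustration of the kind of agent involved, here is a minimal sketch of a DQN-style value network and its temporal-difference loss, assuming a flattened observation (for example a downsampled scan or image embedding) and a three-command action set (forward, turn left, turn right); the observation size, action set, and network shape are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS, GAMMA = 64, 3, 0.99   # illustrative: scan features; forward/left/right


class QNetwork(nn.Module):
    """Maps an observation vector to one Q-value per discrete motion command."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, obs):
        return self.net(obs)


def dqn_loss(q_net, target_net, batch):
    """Temporal-difference loss over a replay batch (obs, action, reward, next_obs, done)."""
    obs, action, reward, next_obs, done = batch
    q = q_net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_obs).max(dim=1).values
        target = reward + GAMMA * (1.0 - done) * q_next
    return nn.functional.mse_loss(q, target)
```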

Reward Shaping for a Reinforcement Learning Method-Based Navigation Framework

  • Roland, Cubahiro;Choi, Donggyu;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.9-11
    • /
    • 2022
  • Applying Reinforcement Learning in everyday applications and varied environments has demonstrated the potential of the field and revealed pitfalls along the way. In robotics, a learning agent gradually takes over control of a robot by abstracting its navigation model through its inputs and outputs, thus reducing human intervention. The challenge for the agent is implementing a feedback function that facilitates learning of the MDP problem in an environment while reducing the method's convergence time. In this paper we implement a reward shaping scheme in a ROS environment that avoids sparse rewards, which provide little data to the learning agent. Reward shaping prioritizes behaviours that bring the robot closer to the goal by giving intermediate rewards, helping the algorithm converge quickly. We use a pseudocode implementation to illustrate the method.

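The entry above adds intermediate rewards so the agent is not left with only sparse feedback. A common way to realize this is potential-based shaping, r' = r + γ·φ(s') − φ(s); the sketch below assumes the potential is the negative Euclidean distance to the goal, which rewards any step that moves the robot closer (the paper's exact shaping terms and ROS integration are not reproduced here).

```python
import math

GAMMA = 0.99


def potential(position, goal):
    """Potential function: negative straight-line distance to the goal."""
    return -math.dist(position, goal)


def shaped_reward(sparse_reward, position, next_position, goal):
    """Potential-based shaping: r' = r + gamma * phi(s') - phi(s).

    The shaping term is positive when the step brings the robot closer to the
    goal, giving the agent dense intermediate feedback without changing the
    optimal policy.
    """
    return sparse_reward + GAMMA * potential(next_position, goal) - potential(position, goal)


# Example: a step that halves the distance to the goal earns a positive bonus
# even though the sparse task reward is still zero.
print(shaped_reward(0.0, (0.0, 0.0), (1.0, 0.0), (2.0, 0.0)))
```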

Use of learning method to generate of motion pattern for robot (학습기법을 이용한 로봇의 모션패턴 생성 연구)

  • Kim, Dong-won
    • Journal of Platform Technology
    • /
    • v.6 no.3
    • /
    • pp.23-30
    • /
    • 2018
  • Motion pattern generation is the process of calculating a stable motion trajectory for carrying out a given motion, and motion control keeps the robot's posture stable by rejecting disturbances that occur while the robot operates with a pre-generated motion pattern. In this paper, a general method of motion pattern generation for a biped walking robot using a universal approximator, a learning neural network, is proposed. Existing techniques are numerical methods that use recursive computation and approximation methods that generate an approximate motion pattern by simplifying the robot's upper-body structure. In future work, other approaches to motion pattern generation will be applied and compared.
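
Since the entry above replaces recursive numerical pattern generation with a learned universal approximator, a minimal sketch of the idea is a small network fitted to reference trajectory samples. The sinusoidal "joint angle" data, network size, and use of scikit-learn's MLPRegressor below are placeholders for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: gait phase in [0, 1) -> two joint angles.
# In practice these samples would come from an existing (e.g., recursive) pattern generator.
phase = np.linspace(0.0, 1.0, 200, endpoint=False).reshape(-1, 1)
joint_angles = np.column_stack([
    0.3 * np.sin(2 * np.pi * phase[:, 0]),          # hip angle (placeholder)
    0.2 * np.sin(2 * np.pi * phase[:, 0] + 0.5),    # knee angle (placeholder)
])

# Small MLP acting as the universal approximator for the motion pattern.
pattern_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
pattern_net.fit(phase, joint_angles)

# The trained network can then be queried at any phase to generate the pattern online.
print(pattern_net.predict([[0.25]]))
```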

A Simple Learning Variable Structure Control Law for Rigid Robot Manipulators

  • Choi, Han-Ho;Kuc, Tae-Yong;Lee, Dong-Hun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.354-359
    • /
    • 2003
  • In this paper, we consider the problem of designing a simple learning variable structure system for repeatable tracking control of robot manipulators. We combine a variable structure control law as the robust part for stabilization and a feedforward learning law as the intelligent part for nonlinearity compensation. We show that the tracking error asymptotically converges to zero. Finally, we give computer simulation results in order to show the effectiveness of our method.

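The entry above pairs a variable structure (sliding-mode style) feedback term for robust stabilization with a feedforward term learned over repetitions of the same trajectory. A minimal single-joint sketch, assuming a sliding surface s = ė + λe with e = q_desired − q, a saturated switching term, and a simple repetition-to-repetition feedforward update; the gains and the update law are illustrative, not the paper's exact law.

```python
import numpy as np

LAMBDA, K_SW, PHI = 5.0, 2.0, 0.05   # sliding-surface slope, switching gain, boundary layer
L_GAIN = 0.5                          # learning gain for the feedforward update
N_STEPS = 1000                        # samples per repetition of the trajectory

# One learned feedforward value per sample of the repeated trajectory.
u_ff = np.zeros(N_STEPS)


def control(k, e, e_dot):
    """Control at sample k, with tracking error e = q_desired - q and its derivative.

    The variable structure part stabilizes the sliding surface s = e_dot + LAMBDA*e
    (saturated to limit chattering); the feedforward part compensates the repeating
    nonlinearity learned from previous repetitions.
    """
    s = e_dot + LAMBDA * e
    u_vs = K_SW * np.clip(s / PHI, -1.0, 1.0)
    return u_vs + u_ff[k]


def learn_after_repetition(errors, error_rates):
    """Repetition-to-repetition update: push the feedforward toward the residual error."""
    s = error_rates + LAMBDA * errors      # arrays of length N_STEPS from the last repetition
    u_ff[:] = u_ff + L_GAIN * s
```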

A study on Indirect Adaptive Decentralized Learning Control of the Vertical Multiple Dynamic System

  • Lee, Soo-Cheol;Park, Seok-Sun;Lee, Jeh-Won
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.7 no.1
    • /
    • pp.62-66
    • /
    • 2006
  • Learning control develops controllers that learn to improve their performance at executing a given task from experience performing that specific task. In previous work, the authors presented the iterative precision of linear decentralized learning control based on a p-integrated learning method for vertical dynamic multiple systems. This paper develops an indirect decentralized learning control based on an adaptive control method. The original motivation of the learning control field was learning in robots performing repetitive tasks such as assembly-line work. The paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual decentralized robot controllers, treating each link as if it were independent of any coupling with the other links. The techniques are demonstrated in numerical simulation of a vertical dynamic robot, and the learning methods are shown to improve the iterative precision of each link.
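
The entry above treats each link as if it were decoupled and improves a local controller from one repetition to the next, in an indirect adaptive form (estimate a local model, then use it to update the input). A rough per-link sketch under strong simplifications: each link is reduced to a single estimated input gain, and both the estimation and the input update below are illustrative, not the paper's formulation.

```python
import numpy as np

N_LINKS, N_STEPS = 3, 500
MU = 0.05                              # estimation step size for the local model gains

u = np.zeros((N_LINKS, N_STEPS))       # learned input correction, one row per link
b_hat = np.ones(N_LINKS)               # estimated local input gain for each link


def update_link(i, error, prev_error, delta_u):
    """One link's indirect adaptive learning update after a repetition.

    error, prev_error: tracking error of link i over this and the previous repetition
    delta_u:           the input change applied between the two repetitions
    Each link is treated independently; coupling with other links is ignored,
    which is the decentralized assumption.
    """
    # Indirect step: refine the local gain estimate from observed cause and effect.
    response = prev_error - error                      # how much the error shrank
    b_hat[i] += MU * np.mean((response - b_hat[i] * delta_u) * delta_u)
    # Control step: use the estimated gain to aim the next correction at the residual error.
    u[i] += error / max(b_hat[i], 1e-3)
```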

A Study on the Development of Robust control Algorithm for Stable Robot Locomotion (안정된 로봇걸음걸이를 위한 견실한 제어알고리즘 개발에 관한 연구)

  • Hwang, Won-Jun;Yoon, Dae-Sik;Koo, Young-Mok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.18 no.4
    • /
    • pp.259-266
    • /
    • 2015
  • This study presents a new scheme for generating various walking patterns of a biped robot in limited environments. We show that the neural network is a significantly more attractive intelligent controller design than previous traditional forms of control systems. A multilayer backpropagation neural network identification is simulated to obtain a learning control solution for the biped robot. Once this network has learned, another neural network controller is designed for various trajectory tracking tasks with the same learning base. The main advantage of our scheme is that it requires no knowledge of the system dynamics or nonlinear characteristics, so the robot can be treated as a black box. It is also shown that the neural network is a powerful control approach for various trajectory tracking tasks of the biped robot with the same learning base; that is, we do not change the control parameters for different trajectory tracking tasks. Simulation and experimental results show that the neural network is practically feasible and realizable for iterative learning control of a biped robot.
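
The entry above first identifies the biped's unknown dynamics with a multilayer backpropagation network (treating the robot as a black box) and then reuses what was learned for trajectory tracking. A minimal sketch of the identification half, assuming logged (state, action, next state) samples and illustrative dimensions; the second, controller network is omitted.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 6   # illustrative: joint angles/velocities and joint torques


class DynamicsIdentifier(nn.Module):
    """Black-box model: predict the next state from the current state and input."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, STATE_DIM),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def identification_step(model, optimizer, state, action, next_state):
    """One backpropagation step on a batch of logged (state, action, next_state) data."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(state, action), next_state)
    loss.backward()
    optimizer.step()
    return loss.item()
```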

Behavior Learning and Evolution of Swarm Robot based on Harmony Search Algorithm (Harmony Search 알고리즘 기반 군집로봇의 행동학습 및 진화)

  • Kim, Min-Kyung;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.3
    • /
    • pp.441-446
    • /
    • 2010
  • In a swarm robot system, each robot decides and behaves on its own according to its surrounding circumstances, and assigned tasks have to be carried out through cooperation with other robots. Each robot should therefore have the ability to learn and evolve in order to adapt to a changing environment. In this paper, we propose behavior learning based on the Q-learning algorithm and evolution based on the Harmony Search algorithm, aiming to improve accuracy by using Harmony Search instead of the Genetic Algorithm. We verify that the swarm robot's ability to perform the task is improved.
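
The entry above evolves the robots' behavior with Harmony Search rather than a genetic algorithm, alongside Q-learning. The Harmony Search loop itself is compact, as sketched below for a generic real-valued parameter vector; the placeholder fitness function stands in for the swarm's task performance, which the abstract does not spell out.

```python
import random

import numpy as np

HMS, HMCR, PAR, BW = 10, 0.9, 0.3, 0.05   # memory size, memory/pitch-adjust rates, bandwidth
DIM, N_ITER = 6, 2000                      # parameter-vector length, improvisations


def fitness(x):
    """Placeholder fitness: in the paper this would score the swarm's task performance."""
    return -np.sum((x - 0.5) ** 2)


# Harmony memory: HMS candidate parameter vectors in [0, 1]^DIM.
memory = np.random.rand(HMS, DIM)
scores = np.array([fitness(h) for h in memory])

for _ in range(N_ITER):
    new = np.empty(DIM)
    for d in range(DIM):
        if random.random() < HMCR:                     # take the value from memory...
            new[d] = memory[random.randrange(HMS), d]
            if random.random() < PAR:                  # ...and maybe adjust its pitch
                new[d] = np.clip(new[d] + random.uniform(-BW, BW), 0.0, 1.0)
        else:                                          # otherwise improvise a fresh value
            new[d] = random.random()
    new_score = fitness(new)
    worst = int(np.argmin(scores))
    if new_score > scores[worst]:                      # replace the worst harmony
        memory[worst], scores[worst] = new, new_score

print("best parameters:", memory[int(np.argmax(scores))])
```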

Implementation and Performance Evaluation of RTOS-Based Dynamic Controller for Robot Manipulator (Real-Time OS 기반의 로봇 매니퓰레이터 동력학 제어기의 구현 및 성능평가)

  • Kho, Jaw-Won;Lim, Dong-Cheal
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.57 no.2
    • /
    • pp.109-114
    • /
    • 2008
  • In this paper, a dynamic learning controller for a robot manipulator is implemented using a real-time operating system with capabilities such as multitasking, intertask communication and synchronization, event-driven and priority-driven scheduling, and real-time clock control. A controller hardware system with a VME bus and related devices is developed and used to implement a dynamic learning control scheme for the robot manipulator. The real-time performance of the proposed dynamic learning controller is tested and evaluated for tracking of a desired trajectory and compared with a conventional servo controller.