• Title/Abstract/Keyword: Robot Learning

Search results: 856 items

이족 휴머노이드 로봇의 유연한 보행을 위한 학습기반 뉴로-퍼지시스템의 응용 (Use of Learning Based Neuro-fuzzy System for Flexible Walking of Biped Humanoid Robot)

  • 김동원;강태구;황상현;박귀태
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2006년 학술대회 논문집 정보 및 제어부문
    • /
    • pp.539-541
    • /
    • 2006
  • Biped locomotion is a popular research area in robotics due to the high adaptability of a walking robot in an unstructured environment. When attempting to automate the motion planning process for a biped walking robot, one of the main issues is assurance of dynamic stability of motion. This can be categorized into three general groups: body stability, body path stability, and gait stability. A zero moment point (ZMP), a point where the total forces and moments acting on the robot are zero, is usually employed as a basic criterion for dynamically stable motion. In this paper, learning-based neuro-fuzzy systems have been developed and applied to model the ZMP trajectory of a biped walking robot. As a result, we can provide improved insight into physical walking mechanisms.
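
For reference, the ZMP mentioned in the abstract above is commonly approximated with a point-mass formula. The sketch below is a textbook simplification (point masses, angular-momentum terms neglected), not the neuro-fuzzy ZMP model developed in the paper; the example mass values are made up.

```python
import numpy as np

# Simplified multi-mass ZMP along the x-axis: point masses, no angular-momentum terms.
def zmp_x(m, x, z, x_ddot, z_ddot, g=9.81):
    """m, x, z, x_ddot, z_ddot: arrays over the robot's mass points."""
    m, x, z, x_ddot, z_ddot = map(np.asarray, (m, x, z, x_ddot, z_ddot))
    num = np.sum(m * (z_ddot + g) * x) - np.sum(m * x_ddot * z)
    den = np.sum(m * (z_ddot + g))
    return num / den

# Example: two mass points of a walking robot at one time instant (made-up numbers).
print(zmp_x(m=[3.0, 1.0], x=[0.02, 0.10], z=[0.6, 0.3],
            x_ddot=[0.5, 1.2], z_ddot=[0.0, -0.2]))
```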


An Overview of Learning Control in Robot Applications

  • Ryu, Yeong-Soon
    • 한국공작기계학회:학술대회논문집
    • /
    • 한국공작기계학회 1996년도 추계학술대회 논문
    • /
    • pp.6-10
    • /
    • 1996
  • This paper presents an overview of research results obtained by the authors in a series of publications. Methods are developed for time-varying and time-invariant, linear and nonlinear, time-domain and frequency-domain, and discrete-time and continuous-time systems. Among the topics presented are: 1. Learning control based on integral control concepts applied in the repetition domain. 2. New algorithms that give improved transient response, based on indirect adaptive control ideas. 3. Direct model reference learning control. 4. Learning control formulated in the frequency domain. 5. Use of neural networks in learning control. 6. Decentralized learning controllers. These learning algorithms apply to robot control, and the decentralized learning control laws are particularly important in such applications because of the decentralized controller structure commonly used in robots.
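
The first topic listed above, integral control applied in the repetition domain, is commonly written as the P-type learning update below; the notation is generic and not taken from this particular paper.

```latex
% P-type (integral-in-repetition) learning control update, generic notation:
% u_j(t) is the input at sample t of repetition j, y_d the desired output,
% and e_j(t) = y_d(t) - y_j(t) the tracking error of repetition j.
\[
  u_{j+1}(t) \;=\; u_j(t) \;+\; \Gamma \, e_j(t+1)
\]
% i.e., integral action accumulated across repetitions j rather than across time t.
```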


모바일 로봇을 위한 학습 기반 관성-바퀴 오도메트리 (Learning-based Inertial-wheel Odometry for a Mobile Robot)

  • 김명수;장근우;박재흥
    • 로봇학회논문지
    • /
    • Vol. 18, No. 4
    • /
    • pp.427-435
    • /
    • 2023
  • This paper proposes a method of estimating the pose of a mobile robot by using a learning model. When estimating the pose of a mobile robot, wheel encoder and inertial measurement unit (IMU) data are generally utilized. However, depending on the condition of the ground surface, slip occurs due to the interaction between the wheel and the floor. In this case, it is hard to predict the pose accurately using only the encoder and IMU. Thus, in order to reduce pose error even in such conditions, this paper introduces a pose estimation method based on a learning model that uses wheel encoder and IMU data. A long short-term memory (LSTM) network is adopted as the learning model. The inputs to the LSTM are velocity and acceleration data from the wheel encoder and IMU, and the outputs are corrected linear and angular velocities. The estimated pose is calculated by numerically integrating the output velocities. The dataset used as ground truth for the learning model is collected under various ground conditions. Experimental results demonstrate that the proposed learning model achieves higher pose estimation accuracy than an extended Kalman filter (EKF) and other learning models using the same data under various ground conditions.
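
A minimal sketch of this kind of pipeline is shown below, assuming made-up feature dimensions, network sizes, and time step; the training loop against ground-truth poses and the paper's actual sensor preprocessing are omitted.

```python
import torch
import torch.nn as nn

# Sketch: an LSTM maps per-step wheel-encoder/IMU features to corrected (v, w),
# which are then numerically integrated into a 2-D pose. Sizes are assumptions.
class InertialWheelOdometry(nn.Module):
    def __init__(self, input_dim=6, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)      # [linear velocity v, angular velocity w]

    def forward(self, features):                  # features: (batch, time, input_dim)
        out, _ = self.lstm(features)
        return self.head(out)                     # (batch, time, 2)

def integrate_pose(vel, dt=0.01):
    """Numerically integrate corrected (v, w) into (x, y, yaw) for one sequence."""
    x = y = yaw = 0.0
    poses = []
    for v, w in vel:
        yaw = yaw + w * dt
        x = x + v * dt * torch.cos(yaw)
        y = y + v * dt * torch.sin(yaw)
        poses.append((float(x), float(y), float(yaw)))
    return poses

model = InertialWheelOdometry()
dummy = torch.randn(1, 100, 6)                    # 100 time steps of encoder + IMU features
vel = model(dummy)[0]                             # (100, 2) corrected velocities
print(integrate_pose(vel.detach())[-1])           # final estimated pose
```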

백스터 로봇의 시각기반 로봇 팔 조작 딥러닝을 위한 강화학습 알고리즘 구현 (Implementation of End-to-End Training of Deep Visuomotor Policies for Manipulation of a Robotic Arm of Baxter Research Robot)

  • 김성운;김솔아;하파엘 리마;최재식
    • 로봇학회논문지
    • /
    • Vol. 14, No. 1
    • /
    • pp.40-49
    • /
    • 2019
  • Reinforcement learning has been applied to various problems in robotics. However, it is still hard to train complex robotic manipulation tasks, since there are few models applicable to general tasks, and such general models require a large number of training episodes. For these reasons, deep neural networks, which have been shown to be good function approximators, have not been actively used for robot manipulation tasks. Recently, some of these challenges have been addressed by methods such as Guided Policy Search, which guide or limit search directions while training a deep neural network based policy model. These frameworks have already been applied to a humanoid robot, PR2. However, in robotics it is not trivial to adapt existing algorithms designed for one robot to another robot. In this paper, we present our implementation of Guided Policy Search for the robotic arms of the Baxter Research Robot. To meet the goals and needs of the project, we build a Baxter Agent class for the existing Guided Policy Search code using Baxter's built-in Python interface. This work is expected to play an important role in popularizing robot manipulation reinforcement learning methods on cost-effective robot platforms.
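
To illustrate only the overall structure of guided policy search (local controllers generate samples that a global policy is then fit to by supervised learning), here is a toy numpy sketch on a 1-D point mass. Everything in it is a stand-in, not the authors' Baxter implementation: the point-mass dynamics, the hand-tuned PD "local controllers" in place of trajectory optimization, and a linear global policy in place of a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, n_conditions = 0.1, 50, 4
targets = rng.uniform(-1.0, 1.0, size=n_conditions)   # one local controller per target

def rollout(K, k, target):
    """Roll out a time-invariant linear controller u = K @ [pos - target, vel] + k."""
    states, actions = [], []
    pos, vel = 0.0, 0.0
    for _ in range(T):
        x = np.array([pos - target, vel])
        u = float(K @ x + k + 0.01 * rng.standard_normal())
        states.append([pos, vel, target])
        actions.append(u)
        vel += dt * u
        pos += dt * vel
    return np.array(states), np.array(actions)

# "Local controllers": hand-tuned PD gains stand in for trajectory optimization.
K_local, k_local = np.array([-4.0, -2.5]), 0.0

X, U = [], []
for tgt in targets:
    s, a = rollout(K_local, k_local, tgt)
    X.append(s)
    U.append(a)
X, U = np.vstack(X), np.concatenate(U)

# Supervised step: fit one global policy u = w @ [pos, vel, target] + b to the
# guided samples (least squares here instead of SGD on a deep network).
A = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.lstsq(A, U, rcond=None)[0]
print("global policy weights:", w)
```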

2족 보행로봇의 실시간 작업동작 생성을 위한 지능제어에 관한 연구 (A Study on Intelligent Control of Real-Time Working Motion Generation of Biped Robot)

  • 김민성;조상영;구영목;정양근;한성현
    • 한국산업융합학회 논문집
    • /
    • Vol. 19, No. 1
    • /
    • pp.1-9
    • /
    • 2016
  • In this paper, we propose a new neural network based learning control scheme for various walking motion control of a biped robot using the same learning base. We show that a learning control algorithm based on a neural network yields a significantly more attractive intelligent controller design than previous traditional forms of control systems. A multilayer backpropagation neural network identification is simulated to obtain a dynamic model of the biped robot. Once the neural network has been trained, another neural network controller is designed for various trajectory tracking tasks with the same learning base. Biped robots have received increased attention due to several properties such as their human-like mobility and high-order dynamic equations. These properties enable biped robots to perform dangerous work in place of human beings. Thus, stable walking control of biped robots is a fundamental issue that has been studied by many researchers. However, because of their legged locomotion, biped robots are difficult to control. Besides, unlike a robot manipulator, a biped robot has an uncontrollable degree of freedom in its dynamics that plays a dominant role in the stability of its locomotion. The simulations and experiments illustrate the reliability of the iterative learning control.
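
As an illustration of the identification step described above, the sketch below trains a small multilayer network by backpropagation to predict the next state of a toy pendulum from the current state and torque. The pendulum and all hyperparameters are stand-ins; the paper's biped model is not reproduced here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dt, g, l = 0.02, 9.81, 1.0

def pendulum_step(theta, omega, u):
    """Ground-truth toy dynamics used only to generate training data."""
    omega_next = omega + dt * (-(g / l) * torch.sin(theta) + u)
    theta_next = theta + dt * omega_next
    return theta_next, omega_next

# Random (state, action) -> next-state samples.
theta = (torch.rand(4096) - 0.5) * 2.0
omega = (torch.rand(4096) - 0.5) * 4.0
u = (torch.rand(4096) - 0.5) * 4.0
theta_n, omega_n = pendulum_step(theta, omega, u)
inputs = torch.stack([theta, omega, u], dim=1)
targets = torch.stack([theta_n, omega_n], dim=1)

# Multilayer network trained by backpropagation to identify the dynamics.
model = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    opt.step()
print(f"identification MSE: {loss.item():.6f}")
```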

교육용로봇을 이용한 프로그래밍 학습 모형 - 재량활동 및 특기적성 시간에 레고 마인드스톰의 Labview 언어 중심으로 - (A Programming Language Learning Model Using Educational Robot)

  • 문외식
    • 정보교육학회논문지
    • /
    • Vol. 11, No. 2
    • /
    • pp.231-241
    • /
    • 2007
  • The purpose of this study is to propose a robot-based programming learning method as an algorithm learning tool for improving creative problem-solving ability. To this end, a 30-session robot programming curriculum and textbook were developed, and sixth-grade elementary school students were taught the 30 sessions and then evaluated. Evaluation of achievement levels, based on the learning outcomes produced in each session, showed that the learners understood most of the curriculum content. These results suggest that the developed curriculum and textbook are organized so that elementary school students can fully relate to them and put them into practice. Through the implementation experience in this study, we confirmed that robot programming learning can succeed as a creative algorithm learning tool in elementary schools.


Linear decentralized learning control for the robot moving on the horizontal plane

  • Lee, Soo-Cheol
    • 한국경영과학회:학술대회논문집
    • /
    • 대한산업공학회/한국경영과학회 1995년도 춘계공동학술대회논문집; 전남대학교; 28-29 Apr. 1995
    • /
    • pp.869-879
    • /
    • 1995
  • The new field of learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing this task. The simplest forms of learning control are based on the same concept as integral control, but operating in the domain of the repetitions of the task. In a previous paper, I studied the use of such controllers in a decentralized system, such as a robot with the controller for each link acting independently. The basic result of that paper is to show that stability of the learning controllers for all subsystems, when the coupling between subsystems is turned off, assures stability of the decentralized learning in the coupled system, provided that the sample time in the digital learning controller is sufficiently short. In this paper, we present two examples. The first illustrates the effect of coupling between subsystems in the system dynamics, and the second studies the application of decentralized learning control to robot problems. The latter example illustrates the application of decentralized learning control to nonlinear systems, and also studies the effect of the coupling between subsystems introduced in the input matrix by the discretization of the system equations. The conclusion is that for a sufficiently small learning gain and a sufficiently small sample time, the simple learning control law based on integral control applied to each robot axis will produce zero tracking error in spite of the dynamic coupling in the robot equations. Of course, the results of this paper have much more general application than just the robotics tracking problem. Convergence in decentralized systems is seen to depend only on the input and output matrices, provided the sample time is sufficiently small.
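
A minimal numerical sketch of this idea follows: a per-axis integral-type learning update applied to a weakly coupled two-axis discrete linear plant, where each axis updates its input from its own tracking error only. The plant matrices, learning gain, and reference trajectories are illustrative stand-ins, not the robot model used in the paper.

```python
import numpy as np

T = 50                                  # samples per repetition
A = np.array([[0.95, 0.05],             # weak coupling between the two axes
              [0.04, 0.92]])
B = np.eye(2)
y_des = np.stack([np.sin(np.linspace(0, np.pi, T + 1)),
                  np.linspace(0.0, 1.0, T + 1)], axis=1)   # desired output per axis

gamma = 0.3                             # small learning gain, same for each axis
u = np.zeros((T, 2))                    # control history, refined over repetitions

for rep in range(100):
    x = y_des[0].copy()                 # every repetition starts from the same state
    y = [x.copy()]
    for t in range(T):
        x = A @ x + B @ u[t]
        y.append(x.copy())
    e = y_des[1:] - np.array(y)[1:]     # tracking error at each step
    u += gamma * e                      # each axis updates from its own error only
print("final max tracking error:", np.abs(e).max())
```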


동적 환경에서 강화학습을 이용한 다중이동로봇의 제어 (Reinforcement learning for multi mobile robot control in the dynamic environments)

  • 김도윤;정명진
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1996년도 한국자동제어학술회의논문집(국내학술편); 포항공과대학교, 포항; 24-26 Oct. 1996
    • /
    • pp.944-947
    • /
    • 1996
  • Realization of autonomous agents that organize their own internal structure in order to behave adequately with respect to their goals and the world is the ultimate goal of AI and robotics. Reinforcement learning has recently been receiving increased attention as a method for robot learning with little or no a priori knowledge and a higher capability of reactive and adaptive behaviors. In this paper, we present a reinforcement learning method by which multiple robots learn to move to a goal. The results of computer simulations are given.
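
For readers unfamiliar with the underlying mechanism, the sketch below shows plain tabular Q-learning for a single robot reaching a goal on a small grid. It is a simplification for illustration only; the paper's setting involves multiple robots and dynamic environments, which this sketch does not model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, goal = 5, (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right
Q = np.zeros((N, N, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, a):
    r, c = state
    dr, dc = actions[a]
    nr, nc = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    reward = 1.0 if (nr, nc) == goal else -0.01        # small step cost, goal reward
    return (nr, nc), reward, (nr, nc) == goal

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        target = reward + (0.0 if done else gamma * Q[nxt].max())
        Q[state + (a,)] += alpha * (target - Q[state + (a,)])
        state = nxt

print("greedy action from start:", int(np.argmax(Q[0, 0])))
```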


멀티모달 상호작용 중심의 로봇기반교육 콘텐츠를 활용한 r-러닝 시스템 사용의도 분석 (A Study on the Intention to Use a Robot-based Learning System with Multi-Modal Interaction)

  • 오준석;조혜경
    • 제어로봇시스템학회논문지
    • /
    • Vol. 20, No. 6
    • /
    • pp.619-624
    • /
    • 2014
  • This paper introduces a robot-based learning system designed to teach multiplication to children. In addition to a small humanoid and a smart device delivering educational content, we employ a type of mixed-initiative operation that provides enhanced multi-modal cognition to the r-learning system through human intervention. To investigate the major factors that influence people's intention to use the r-learning system, and to see how the multi-modality affects these connections, we performed a user study based on TAM (Technology Acceptance Model). The results indicate that the quality of the system and natural interaction are key factors for the r-learning system to be used, and they also reveal interesting implications related to human behavior.

강화학습과 분산유전알고리즘을 이용한 자율이동로봇군의 행동학습 및 진화 (Behavior learning and evolution of collective autonomous mobile robots using reinforcement learning and distributed genetic algorithms)

  • 이동욱;심귀보
    • 전자공학회논문지S
    • /
    • Vol. 34S, No. 8
    • /
    • pp.56-64
    • /
    • 1997
  • In distributed autonomous robotic systems, each robot must behave by itself according to its states and environment and, if necessary, must cooperate with other robots in order to carry out a given task. Therefore it is essential that each robot has both learning and evolution abilities to adapt to dynamic environments. In this paper, a new learning and evolution method based on reinforcement learning with delayed reward and distributed genetic algorithms is proposed for behavior learning and evolution of collective autonomous mobile robots. Reinforcement learning with delayed reward is still useful even when there is no immediate reward. And through a distributed genetic algorithm that exchanges, by communication, the chromosomes acquired under different environments, each robot can improve its behavior ability. In particular, in order to improve the performance of evolution, selective crossover using the characteristics of reinforcement learning is adopted. We verify the effectiveness of the proposed method by applying it to a cooperative search problem.
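
The sketch below illustrates only the distributed-GA part of this idea: each robot carries a chromosome, exchanges it with peers, and performs a crossover biased toward the fitter parent. The bit-string chromosomes, the trivial end-of-episode fitness standing in for delayed reward, and the crossover probabilities are all made-up stand-ins for the paper's cooperative search task and its RL-informed selective crossover.

```python
import numpy as np

rng = np.random.default_rng(0)
n_robots, n_genes, n_generations = 8, 32, 60
pop = rng.integers(0, 2, size=(n_robots, n_genes))       # one chromosome per robot

def fitness(chrom):
    return chrom.sum()                                    # delayed-reward proxy

for gen in range(n_generations):
    fit = np.array([fitness(c) for c in pop])
    for i in range(n_robots):
        j = rng.integers(n_robots)                        # exchange with a random peer
        partner = pop[j]
        # Selective crossover: bias each gene toward the fitter parent.
        p_take_partner = 0.7 if fit[j] > fit[i] else 0.3
        mask = rng.random(n_genes) < p_take_partner
        child = np.where(mask, partner, pop[i])
        mutate = rng.random(n_genes) < 0.01               # small mutation rate
        child = np.where(mutate, 1 - child, child)
        if fitness(child) >= fit[i]:                      # keep the child only if it helps
            pop[i] = child
print("best fitness:", max(fitness(c) for c in pop), "of", n_genes)
```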
