• Title/Summary/Keyword: Robot-Agent


Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions: Part B
    • /
    • v.9B no.2
    • /
    • pp.139-146
    • /
    • 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to effectively resolving the problem of the large state space, AMMQL shows higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as the learning method for dynamic positioning of the robot soccer agent, and implement a robot soccer agent system called Cogitoniks.
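
The adaptive weighting at the heart of AMMQL can be illustrated with a short sketch. This is only an illustration of the idea described in the abstract, not the authors' implementation; the module interface, the credit rule, and all parameter values are assumptions.

```python
import random
from collections import defaultdict

class QModule:
    """One learning module with its own small state abstraction and Q-table."""
    def __init__(self, abstract, actions, alpha=0.1, gamma=0.9):
        self.abstract = abstract      # maps the full game state to this module's local state
        self.actions = actions
        self.q = defaultdict(float)   # (local_state, action) -> Q-value
        self.alpha, self.gamma = alpha, gamma

    def update(self, state, action, reward, next_state):
        s, ns = self.abstract(state), self.abstract(next_state)
        best_next = max(self.q[(ns, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(s, action)]
        self.q[(s, action)] += self.alpha * td_error

class AMMQLAgent:
    """Mediates between modules with adaptive weights instead of a fixed combination rule."""
    def __init__(self, modules, actions, eta=0.05, epsilon=0.1):
        self.modules, self.actions = modules, actions
        self.weights = [1.0 / len(modules)] * len(modules)
        self.eta, self.epsilon = eta, epsilon

    def select_action(self, state):
        if random.random() < self.epsilon:            # occasional exploration
            return random.choice(self.actions)
        def merged_q(a):                              # weighted sum of the modules' Q-values
            return sum(w * m.q[(m.abstract(state), a)]
                       for w, m in zip(self.weights, self.modules))
        return max(self.actions, key=merged_q)

    def learn(self, state, action, reward, next_state):
        for i, m in enumerate(self.modules):
            m.update(state, action, reward, next_state)
            # hypothetical credit rule: modules whose preferred action matched
            # the executed (and rewarded) action gain weight
            preferred = max(self.actions, key=lambda a: m.q[(m.abstract(state), a)])
            if preferred == action:
                self.weights[i] += self.eta * reward
        total = sum(max(w, 1e-6) for w in self.weights)
        self.weights = [max(w, 1e-6) / total for w in self.weights]
```

Each module sees only a small slice of the full game state (for example, the distance to the ball or to the nearest opponent), which is where the reduction of the state space comes from; the adaptive weights let the mediator lean on whichever module is currently contributing most to the reward.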

On a Multi-Agent System for Assisting Human Intention

  • Tawaki, Hajime;Tan, Joo Kooi;Kim, Hyoung-Seop;Ishikawa, Seiji
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1126-1129
    • /
    • 2003
  • In this paper, we propose a multi-agent system for assisting people who need help reaching objects around them. One may imagine such a situation when a person lying in bed wishes to take an object on a distant table that cannot be reached just by stretching out a hand. The proposed multi-agent system is composed of three main independent agents: a vision agent, a robot agent, and a pass agent. Once a human expresses his/her intention by pointing to a particular object with a hand and finger, these agents cooperatively bring the object to him/her. Natural communication between the human and the multi-agent system is realized in this way. Performance of the proposed system is demonstrated in an experiment in which a human intends to take one of four objects on the floor and the three agents successfully cooperate to find the object and bring it to the human.
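
A minimal sketch of how three independent agents (vision, robot, and pass) might cooperate to serve a pointing gesture. The agent interfaces, the scene representation, and the gesture interpretation below are assumptions for illustration, not the authors' design.

```python
from dataclasses import dataclass

@dataclass
class Intention:
    target: str                    # the object the human pointed at

class VisionAgent:
    """Interprets the pointing gesture and locates the intended object."""
    def __init__(self, scene):
        self.scene = scene         # object name -> (x, y) position on the floor

    def recognize(self, pointed_x):
        # hypothetical rule: pick the object whose x-position best matches the pointing ray
        name = min(self.scene, key=lambda o: abs(self.scene[o][0] - pointed_x))
        return Intention(target=name)

class RobotAgent:
    """Drives to the object and picks it up."""
    def fetch(self, name, position):
        print(f"robot agent: moving to {position} and grasping '{name}'")
        return name

class PassAgent:
    """Hands the fetched object over to the human."""
    def deliver(self, name):
        print(f"pass agent: delivering '{name}' to the user")

# cooperative run: vision -> robot -> pass
scene = {"cup": (0.9, 2.0), "book": (2.1, 1.5), "remote": (3.0, 0.4), "phone": (4.2, 1.1)}
vision, robot, passer = VisionAgent(scene), RobotAgent(), PassAgent()
intent = vision.recognize(pointed_x=2.0)
passer.deliver(robot.fetch(intent.target, scene[intent.target]))
```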

  • PDF

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 2001.01a
    • /
    • pp.59-64
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to effectively resolving the problem of the large state space, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to dynamic positioning of robot soccer agents.
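
For reference, the model-free Q-learning update mentioned in the abstract, in tabular form. The grid-style state encoding is only a placeholder to make the size of a monolithic soccer state concrete.

```python
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right", "stay"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)            # (state, action) -> value; the table grows with the state space

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# a monolithic state would encode every player and the ball at once,
# which is exactly the state-space blow-up that motivates the modular approach
state, next_state = ("me_3_4", "ball_7_2"), ("me_3_5", "ball_7_2")
action = choose_action(state)
q_update(state, action, reward=1.0, next_state=next_state)
```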

  • PDF

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 2001.10a
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to effectively resolving the problem of the large state space, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to dynamic positioning of robot soccer agents.

  • PDF

Development of vision-based soccer robots for multi-agent cooperative systems (다개체 협력 시스템을 위한 비젼 기반 축구 로봇 시스템의 개발)

  • Shim, Hyun-Sik;Jung, Myung-Jin;Choi, In-Hwan;Kim, Jong-Hwan
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.608-611
    • /
    • 1997
  • The soccer robot system consists of multiple agents whose operation and movements are highly coordinated so as to fulfill specific objectives, even under adverse situations. Coordinating the multiple agents requires a good deal of supplementary work in advance. The associated issues are position correction, prevention of communication congestion, and local information sensing, in addition to the need to imitate human-like decision making. A control structure for a soccer robot is designed, and several behaviors and actions for a soccer robot are proposed. Variable zone defense as a basic strategy and several special strategies for fouls are applied to the SOTY2 team.
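
A minimal sketch of the "variable zone defense" idea: the defensive zones shift with the ball, and each robot is assigned the zone center nearest to it. The zone geometry and the assignment rule are assumptions for illustration.

```python
import math

def assign_zones(robot_positions, ball_position, field_length=180.0):
    """Shift three defensive zone centers toward the ball side of the field,
    then greedily give each zone to the nearest still-unassigned robot."""
    shift = 0.2 * (ball_position[0] - field_length / 2)
    zone_centers = [(0.2 * field_length + shift, y) for y in (30.0, 65.0, 100.0)]
    assignment, free_robots = {}, dict(robot_positions)
    for zone in zone_centers:
        nearest = min(free_robots, key=lambda r: math.dist(free_robots[r], zone))
        assignment[nearest] = zone
        free_robots.pop(nearest)
    return assignment

robots = {"R1": (30.0, 20.0), "R2": (25.0, 70.0), "R3": (40.0, 110.0)}
print(assign_zones(robots, ball_position=(120.0, 50.0)))
```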

  • PDF

Implementation and Performance Analysis of Web Robot for URL Analysis (URL 분석을 위한 웹 로봇 구현 및 성능분석)

  • Kim, Weon;Kim, Hie-Cheol;Chin, Yong-Ohk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.3C
    • /
    • pp.226-233
    • /
    • 2002
  • This paper proposes a multi-agent-based web robot in which the functions for collecting web pages are divided among agents so that their mutual dependency is minimized. Through a performance analysis of the implemented system, it lays a foundation for producing useful statistics on domestic web pages and on the composition ratio of text and multimedia files. A web robot that collects web pages by sequential processing quickly reaches its performance limit in the same resource environment. In particular, web pages contain "dead-link" URLs produced by temporary host failures and unstable network resources; when there are many dead-link URLs, the web robot spends a great deal of time collecting HTML. The approach proposed in this paper maximizes the improvement in page collection by handling "dead-link" URLs in an Inactive URL scanner agent.
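
A minimal sketch of an "Inactive URL scanner" pass that filters dead links before the collector fetches full pages. The use of HEAD requests and the timeout value are assumptions about the approach, not the paper's implementation.

```python
import urllib.request
import urllib.error

def is_alive(url, timeout=5):
    """Cheaply probe a URL with a HEAD request; treat any network failure as a dead link."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, TimeoutError, ValueError):
        return False

def filter_dead_links(urls):
    """Split a URL list so the page collector spends its time only on live pages."""
    live = [u for u in urls if is_alive(u)]
    dead = [u for u in urls if u not in live]
    return live, dead

live, dead = filter_dead_links(["https://example.com/", "http://no-such-host.invalid/"])
print("live:", live)
print("dead:", dead)
```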

Cognitive and Emotional Structure of a Robotic Game Player in Turn-based Interaction

  • Yang, Jeong-Yean
    • International Journal of Advanced Smart Convergence
    • /
    • v.4 no.2
    • /
    • pp.154-162
    • /
    • 2015
  • This paper focuses on how cognitive and emotional structures affect humans during long-term interaction. We design an interaction around a turn-based game, the chopstick game, in which two agents play with numbers using their fingers. While the human and the robot agent alternate turns, the human user applies herself to playing the game and to learning new winning skills from the robot agent. The conventional valence-arousal space is applied to design the emotional interaction. For the robotic system, we implement finger gesture recognition and emotional behaviors designed for a three-dimensional virtual robot. In the experimental tests, the appropriateness of the proposed schemes is verified and the effect of the emotional interaction is discussed.
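
A minimal sketch of the turn-based structure of the finger-counting (chopstick) game. The abstract does not state the exact rules used, so the attack and dead-hand rules below follow the common playground variant and are only illustrative.

```python
import random

class ChopstickGame:
    """Two players, two hands each; a hand that reaches five or more fingers is out."""
    def __init__(self):
        self.hands = {"human": [1, 1], "robot": [1, 1]}

    def attack(self, attacker, a_hand, defender, d_hand):
        self.hands[defender][d_hand] += self.hands[attacker][a_hand]
        if self.hands[defender][d_hand] >= 5:
            self.hands[defender][d_hand] = 0       # dead hand

    def loser(self):
        for player, hands in self.hands.items():
            if hands == [0, 0]:
                return player
        return None

game, players, turn = ChopstickGame(), ["human", "robot"], 0
while game.loser() is None and turn < 50:
    attacker, defender = players[turn % 2], players[(turn + 1) % 2]
    a_hand = random.choice([i for i in (0, 1) if game.hands[attacker][i] > 0])
    d_hand = random.choice([i for i in (0, 1) if game.hands[defender][i] > 0])
    game.attack(attacker, a_hand, defender, d_hand)   # the two agents alternate turns
    turn += 1
print("loser:", game.loser())
```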

Cooperative Action Controller of Multi-Agent System (다 개체 시스템의 협동 행동제어기)

  • Kim, Young-Back;Jang, Hong-Min;Kim, Dae-Jun;Choi, Young-Kiu;Kim, Sung-Shin
    • Proceedings of the KIEE Conference
    • /
    • 1999.07g
    • /
    • pp.3024-3026
    • /
    • 1999
  • This paper presents a cooperative action controller for a multi-agent system. To achieve an objective, i.e. to win a game, each robot must take on its own roles and actions and work with the others. The presented cooperative action controller consists of role selection, action selection, and execution layers. In the first layer, a fuzzy logic controller is used. In the second layer, each robot selects its own action and makes its own path trajectory. In the third layer, each robot performs its own action based on the velocity information sent from the main computer. Finally, simulation shows that each robot selects proper roles and coordinates its actions using the proposed controller.
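
A minimal sketch of the three-layer structure described in the abstract (role selection, action selection, execution). The crisp distance rules stand in for the fuzzy logic controller of the first layer; the thresholds and target points are assumptions.

```python
import math

def select_role(robot_pos, ball_pos, own_goal_x=0.0):
    """Layer 1 (stand-in for the fuzzy controller): choose a role from simple distances."""
    if math.dist(robot_pos, ball_pos) < 20.0:
        return "attacker"
    if abs(robot_pos[0] - own_goal_x) < 30.0:
        return "defender"
    return "supporter"

def select_action(role, robot_pos, ball_pos):
    """Layer 2: choose a target point, standing in for path-trajectory generation."""
    if role == "attacker":
        return ball_pos
    if role == "defender":
        return (15.0, ball_pos[1])                    # stay between the goal and the ball
    return ((robot_pos[0] + ball_pos[0]) / 2, ball_pos[1])

def execute(robot_pos, target, max_speed=1.0):
    """Layer 3: turn the target into a bounded velocity command (as sent by the main computer)."""
    dx, dy = target[0] - robot_pos[0], target[1] - robot_pos[1]
    norm = math.hypot(dx, dy) or 1.0
    return (max_speed * dx / norm, max_speed * dy / norm)

robot, ball = (50.0, 40.0), (60.0, 45.0)
role = select_role(robot, ball)
print(role, execute(robot, select_action(role, robot, ball)))
```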

  • PDF

Self-Reconfiguration of Service-Oriented Application using Agent and ESB in Intelligent Robot (지능로봇에서 에이전트와 ESB를 사용한 서비스 지향 애플리케이션의 자가 재구성)

  • Lee, Jae-Jeong;Kim, Jin-Han;Lee, Chang-Ho;Lee, Byung-Jeong
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.14 no.8
    • /
    • pp.813-817
    • /
    • 2008
  • Intelligent robots (IR) get data about the current situation from sensors and perform knowledgeable services. Self-reconfiguration of an IR is an important capability for changing itself without stopping, while keeping up with changes in environment and technology. In this paper, we propose an agent-based self-reconfiguration framework for IR using an ESB (Enterprise Service Bus). The framework focuses on dynamic discovery and reconfiguration of service-oriented applications using a multi-agent system in intelligent robots. When an IR meets an irresolvable situation, it downloads a necessary service agent from an external service repository, executes the agent, and resolves the situation. Agent technology provides an intelligent approach for the collaboration of IRs. A prototype has also been implemented to show the validity of our study.
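
A minimal sketch of the self-reconfiguration loop: when the robot meets a situation it cannot resolve locally, it fetches a matching service agent from an external repository and runs it without stopping. The repository here is a local dictionary and the agent interface is hypothetical; the paper's ESB-based discovery is not modeled.

```python
class ServiceAgent:
    """Minimal service-agent interface: declares the situation it handles and does the work."""
    def __init__(self, name, handles, action):
        self.name, self.handles, self.action = name, handles, action

    def run(self, situation):
        return self.action(situation)

# stand-in for the external service repository reached over the ESB
REPOSITORY = {
    "low_battery": ServiceAgent("charger", "low_battery", lambda s: "navigating to the dock"),
    "unknown_object": ServiceAgent("labeler", "unknown_object", lambda s: "asking the user for a label"),
}

class IntelligentRobot:
    def __init__(self):
        self.local_agents = {}                     # services already installed on the robot

    def handle(self, situation):
        agent = self.local_agents.get(situation)
        if agent is None:                          # irresolvable locally: reconfigure
            agent = REPOSITORY.get(situation)
            if agent is None:
                return "situation unresolved"
            self.local_agents[situation] = agent   # install the downloaded agent without stopping
        return agent.run(situation)

robot = IntelligentRobot()
print(robot.handle("low_battery"))
print(robot.handle("unknown_object"))
```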

The design of controllers for soccer robots (축구 로봇을 위한 제어기 설계)

  • Kim, Kwang-Choon;Kim, Dong-Han;Kim, Jong-Hwan
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.612-616
    • /
    • 1997
  • In this paper, two kinds of controllers are proposed for a soccer robot system: one for the supervisor and defense modes, and the other for the attack mode. A robot soccer game has very dynamic characteristics, and there is competition between the agents. The soccer-playing robot should take an appropriate action according to its surroundings. First, an attack-mode controller using a vector field concept is designed; then a supervisor and a defense-mode controller are designed with a Petri net. The efficiency and applicability of the proposed controllers are demonstrated through a real robot soccer game (MiroSot 97).
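
A minimal sketch of the vector-field idea for the attack mode: far from the ball, the commanded heading points to a spot behind the ball on the ball-goal line, and near the ball it points through the ball toward the goal. The field construction and distances are assumptions for illustration.

```python
import math

def attack_heading(robot, ball, goal, approach_offset=10.0, push_radius=12.0):
    """Desired heading (radians) from a simple vector field for the attack mode."""
    gx, gy = goal[0] - ball[0], goal[1] - ball[1]
    norm = math.hypot(gx, gy) or 1.0
    # a point behind the ball, on the line from the goal through the ball
    behind = (ball[0] - approach_offset * gx / norm, ball[1] - approach_offset * gy / norm)
    target = goal if math.dist(robot, ball) < push_radius else behind
    return math.atan2(target[1] - robot[1], target[0] - robot[0])

heading = attack_heading(robot=(30.0, 30.0), ball=(65.0, 45.0), goal=(130.0, 45.0))
print(f"commanded heading: {math.degrees(heading):.1f} degrees")
```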

  • PDF