• Title/Summary/Keyword: Robot Soccer Simulation


Cooperative Control of the Multi-Agent System for Teleoperation (원격조종 다개체 로봇의 협동제어)

  • 황정훈;권동수
• Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.154-154
    • /
    • 2000
  • A cooperative strategy for a teleoperated multi-agent system is presented and applied to a newly proposed teleoperated robot soccer system. For this system, we designed mapping functions to control a two-wheeled mobile robot using a 2-DoF stick controller. Simulation with a real stick controller was used to evaluate the performance of the proposed mapping function. A basic cooperation strategy was then tested between a teleoperated robot and an autonomous robot. It is shown that the multi-agent system for teleoperation can perform well on a task such as scoring a goal.

  • PDF
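The abstract above does not give the paper's actual mapping functions, so the sketch below is only an illustrative guess: the stick's y-axis commands forward speed, the x-axis commands turn rate, and standard differential-drive kinematics convert these into wheel speeds. All gains and axis conventions are assumptions.

```python
# Illustrative mapping from a 2-DoF stick controller to a two-wheeled
# (differential-drive) mobile robot. Gains, wheel base, and axis
# conventions are assumed, not taken from the paper.

WHEEL_BASE = 0.08   # m, assumed wheel separation of a small soccer robot
V_MAX = 1.0         # m/s, assumed maximum linear speed
W_MAX = 6.0         # rad/s, assumed maximum turn rate

def stick_to_wheels(stick_x, stick_y):
    """Map stick deflections in [-1, 1] to (left, right) wheel speeds."""
    v = V_MAX * stick_y            # forward/backward from the y-axis
    w = W_MAX * stick_x            # turning from the x-axis
    left = v - w * WHEEL_BASE / 2  # differential-drive kinematics
    right = v + w * WHEEL_BASE / 2
    return left, right
```

Pushing the stick straight forward yields equal wheel speeds, while a pure sideways deflection spins the robot in place.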

Path Planning of Soccer Robot using Bezier Curve (Bezier 곡선을 이용한 축구로봇의 경로 계획)

  • 조규상;이종운
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2002.06a
    • /
    • pp.161-165
    • /
    • 2002
  • This paper describes a trajectory generation method for a soccer robot using a cubic Bezier curve. A method to determine the locations of the control points is proposed: the control points are determined by the distance and velocity parameters of the start and target positions. Simulation results show the traceability of the resulting trajectory for a mobile robot.

  • PDF
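A cubic Bezier path of this kind can be sketched in a few lines. The abstract only loosely summarizes how the inner control points are placed, so the Hermite-style placement below (offsetting them along the start and target velocity directions) is an assumption, not the paper's exact rule.

```python
# Minimal cubic-Bezier trajectory sketch. The placement of the inner
# control points from boundary velocities is an assumed convention.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return x, y

def control_points(start, start_vel, target, target_vel, k=1.0 / 3.0):
    """Place the inner control points along the boundary velocities."""
    p1 = (start[0] + k * start_vel[0], start[1] + k * start_vel[1])
    p2 = (target[0] - k * target_vel[0], target[1] - k * target_vel[1])
    return start, p1, p2, target

# Sample a path from (0, 0) to (3, 0) with curved boundary headings.
p0, p1, p2, p3 = control_points((0, 0), (1, 2), (3, 0), (1, -2))
path = [cubic_bezier(p0, p1, p2, p3, i / 10) for i in range(11)]
```

The curve interpolates the endpoints exactly, and its start/end tangents follow the given velocity directions, which is why this parameterization suits smooth robot paths.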

Cell-based motion control of mobile robots for soccer game

  • Baek, Seung-Min;Han, Woong-Gie;Kuc, Tae-Yong
• Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.819-824
    • /
    • 1997
  • This paper presents a cell-based motion control strategy for soccer-playing mobile robots. In the central robot motion planner, the planar ground is divided into rectangular cells of variable size, each carrying a motion index indicating the direction in which a mobile robot should move. Each time the multiple objects (the goal gate, the ball, and the robots) are detected, integer motion indices are assigned to the cells occupied by the mobile robots. Once the indices are calculated, the most desirable state-action pair is chosen from the state and action sets to realize a successful soccer game strategy. The proposed strategy is computationally simple enough to be used in a fast robotic soccer system.

  • PDF
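The core lookup idea can be illustrated with a toy grid: divide the field into cells, store in each cell an integer motion index (here, a quantized heading toward the ball), and let a robot simply read the index of the cell it occupies. The field dimensions, grid resolution, and 8-direction index encoding below are all assumptions; the paper uses variable cell sizes and a richer state-action mapping.

```python
# Toy cell-based motion map: each cell stores one of 8 heading indices.
import math

FIELD_W, FIELD_H = 2.2, 1.8   # field size in meters (assumed)
COLS, ROWS = 11, 9            # fixed grid resolution (assumed)

def cell_of(x, y):
    """Return the (col, row) cell containing field position (x, y)."""
    col = min(COLS - 1, max(0, int(x / FIELD_W * COLS)))
    row = min(ROWS - 1, max(0, int(y / FIELD_H * ROWS)))
    return col, row

def motion_index_map(ball):
    """Assign each cell one of 8 heading indices pointing at the ball."""
    indices = {}
    for col in range(COLS):
        for row in range(ROWS):
            cx = (col + 0.5) * FIELD_W / COLS   # cell center
            cy = (row + 0.5) * FIELD_H / ROWS
            angle = math.atan2(ball[1] - cy, ball[0] - cx)
            indices[(col, row)] = int(round(angle / (math.pi / 4))) % 8
    return indices

indices = motion_index_map(ball=(1.1, 0.9))   # ball at field center
heading = indices[cell_of(0.1, 0.9)]          # robot on the left edge
```

Because the map is recomputed only when the detected objects move and each robot's query is a constant-time lookup, the scheme stays cheap enough for a fast control loop, which matches the abstract's computational-simplicity claim.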

A reinforcement learning-based method for the cooperative control of mobile robots (강화 학습에 의한 소형 자율 이동 로봇의 협동 알고리즘 구현)

  • 김재희;조재승;권인소
• Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.648-651
    • /
    • 1997
  • This paper proposes methods for the cooperative control of multiple mobile robots and constructs a robotic soccer system in which cooperation is implemented as a pass play between two robots. To play a soccer game, elementary actions such as shooting and moving were designed, and Q-learning, one of the most popular reinforcement learning methods, is used to determine which actions to take. In simulation, learning succeeds given deliberate initial arrangements of the ball and robots, so that cooperative work can be accomplished.

  • PDF
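To make the Q-learning ingredient concrete, here is a minimal tabular sketch in the spirit of the abstract: the state is the robot's distance to the ball on a 1-D strip, the actions are "move" (closer) and "shoot", and shooting scores only from next to the ball. The toy dynamics, rewards, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Minimal tabular Q-learning on a toy "approach then shoot" task.
import random

random.seed(0)
N = 5                       # distances 0..4; 0 means "at the ball"
ACTIONS = ("move", "shoot")
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    """Toy dynamics: return (next_state, reward, done)."""
    if a == "shoot":
        return s, (1.0 if s == 0 else -0.1), True   # goal only from s == 0
    return max(0, s - 1), 0.0, False                # move one cell closer

for _ in range(500):                                # training episodes
    s = random.randrange(N)
    done = False
    while not done:
        if random.random() < eps:                   # epsilon-greedy choice
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # Q-learning update
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N)}
```

After training, the greedy policy moves toward the ball from any distance and shoots only when adjacent, which is the single-robot analogue of the elementary actions the paper composes into a pass play.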

LPD(Linear Parameter Dependent) System Modeling and Control of Mobile Soccer Robot

  • Kang, Jin-Shik;Rhim, Chul-Woo
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.2
    • /
    • pp.243-251
    • /
    • 2003
  • In this paper, a new model of a mobile soccer robot, a type of linear system, is presented. A controller consisting of two loops is suggested: the inner loop is a state feedback loop designed for stability and to keep the plant well conditioned, and the outer loop is a well-known PI controller designed for tracking the reference input. Because the plant, the soccer robot, is parameter dependent, the controller must be insensitive to parameter variation. To achieve this, pole sensitivity is defined as the pole variation with respect to the parameter variation, and design algorithms for state feedback controllers are suggested, built from two matrices, one for general pole placement and the other for parameter insensitivity. This paper shows that the PI controller is equivalent to state feedback and that the cost function for reference tracking is equivalent to the LQ cost. Using these properties, we suggest a tuning procedure for the PI controller. We show by simulation that the control algorithm, based on linear system theory, works well, and that LPD system modeling and control offer an easier treatment of the soccer robot.
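The two-loop structure described in the abstract can be sketched on a scalar toy plant: an inner state-feedback term stabilizes an open-loop-unstable plant, and an outer discrete PI loop drives the tracking error to zero. The plant and all gains below are illustrative assumptions, not the paper's LPD model or tuning.

```python
# Toy discrete-time simulation of inner state feedback + outer PI loop.

def simulate(a=1.05, b=0.5, k=0.9, kp=0.8, ki=0.3, r=1.0, steps=200):
    """Track reference r with the two-loop controller; return final state."""
    x, integ = 0.0, 0.0
    for _ in range(steps):
        e = r - x                # tracking error seen by the outer loop
        integ += e               # PI integrator state
        v = kp * e + ki * integ  # outer PI control
        u = -k * x + v           # inner stabilizing state feedback
        x = a * x + b * u        # open-loop unstable plant (a > 1)
    return x

final = simulate()
```

With these gains the inner loop moves the plant pole from 1.05 to 0.6, and the outer PI loop places the closed-loop poles at 0.8 and 0.25, so the state settles on the reference; the integrator removes any steady-state offset, mirroring the reference-tracking role the abstract assigns to the PI loop.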

Behavior strategies of Soccer Robot using Classifier System (분류자 시스템을 이용한 축구 로봇의 행동 전략)

  • Sim, Kwee-Bo;Kim, Ji-Youn
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.12 no.4
    • /
    • pp.289-293
    • /
    • 2002
  • A Learning Classifier System (LCS) finds a new rule set using a genetic algorithm (GA). In this paper, the Zeroth Level Classifier System (ZCS) is applied, as GBML (Genetic-Based Machine Learning), to evolve the strategy of a robot soccer simulation game (SimuroSot), a state-varying dynamical system that changes over time, and we show the effectiveness of the proposed scheme through robot soccer simulation.

Adaptive Modular Q-Learning for Agents' Dynamic Positioning in Robot Soccer Simulation

  • Kwon, Ki-Duk;Kim, In-Cheol
• Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.149.5-149
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless ...

  • PDF

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 2001.10a
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem with monolithic reinforcement learning is that straightforward applications do not scale up to more complex environments, owing to the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to the dynamic positioning of robot soccer agents.

  • PDF
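The mediation idea can be sketched in a few lines: each learning module reports its Q-values for the candidate actions, and a mediator mixes them with per-module weights instead of a fixed rule. The weight update below (nudging weight toward modules that valued the rewarded action) is a simplified assumption, not the paper's exact AMMQL rule, and the module names and Q-values are hypothetical.

```python
# Sketch of weighted mediation across learning modules (AMMQL-style).
ACTIONS = ("hold", "advance", "retreat")

def mediate(module_q, weights):
    """Pick the action maximizing the weighted sum of module Q-values."""
    def score(a):
        return sum(w * q[a] for w, q in zip(weights, module_q))
    return max(ACTIONS, key=score)

def update_weights(weights, module_q, action, reward, lr=0.1):
    """Shift weight toward modules that valued the rewarded action."""
    new = [w + lr * reward * q[action] for w, q in zip(weights, module_q)]
    total = sum(new)
    return [w / total for w in new]   # renormalize so weights sum to 1

# Two hypothetical modules: one watches the ball, one the opponent goal.
ball_q = {"hold": 0.1, "advance": 0.9, "retreat": 0.0}
goal_q = {"hold": 0.6, "advance": 0.2, "retreat": 0.2}
weights = [0.5, 0.5]

action = mediate([ball_q, goal_q], weights)
weights = update_weights(weights, [ball_q, goal_q], action, reward=1.0)
```

With fixed equal weights this reduces to plain MQL's combination; letting the weights drift toward reward-contributing modules is what gives the adaptive behavior the abstract describes.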

Development of Attack Intention Extractor for Soccer Robot system (축구 로봇의 공격 의도 추출기 설계)

  • 박해리;정진우;변증남
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.40 no.4
    • /
    • pp.193-205
    • /
    • 2003
  • There have been many research activities on robot soccer systems across fields such as intelligent control, communication, computer technology, sensor technology, image processing, and mechatronics. In the area of strategy, researchers have concentrated on attacking strategies and developed intelligent ones, so soccer robots cannot defend completely and efficiently with a simple defense strategy; extracting the attacker's intention is therefore needed for an efficient defense. In this paper, an intention extractor for soccer robots is designed and developed based on FMMNN (Fuzzy Min-Max Neural Networks). First, intention for the soccer robot system is defined, and intention extraction for the soccer robot system is explained. Next, an FMMNN-based intention extractor for the soccer robot system is constructed. FMMNN is a pattern classification method with several advantages, namely online adaptation, short training time, and soft decisions, and is therefore well suited to the dynamic environment of a soccer robot system. An observer extracts the attack intention of opponents using this intention extractor, which can also be used to analyze the opponent team's strategy. The capability of the developed intention extractor is verified by simulation with a 3-vs-3 robot soccer simulator. It is confirmed that the rate of intention extraction increases in each experiment.
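The FMMNN building block can be sketched as hyperbox membership: each class is a set of axis-aligned hyperboxes, and a pattern's membership falls off with its distance outside a box, giving the soft decisions the abstract mentions. The membership form below follows the common Simpson-style formulation in simplified shape, and the feature names and box bounds are hypothetical; the abstract does not give the paper's actual details.

```python
# Simplified fuzzy min-max hyperbox membership and nearest-class lookup.

def membership(x, vmin, wmax, gamma=4.0):
    """Membership of point x in hyperbox [vmin, wmax]; 1.0 inside."""
    def ramp(d):                       # penalty grows with overshoot d
        return max(0.0, min(1.0, gamma * d))
    terms = []
    for xi, vi, wi in zip(x, vmin, wmax):
        terms.append(1.0 - ramp(xi - wi) - ramp(vi - xi))
    return max(0.0, sum(terms) / len(terms))

def classify(x, boxes):
    """Return the label of the hyperbox with the highest membership."""
    return max(boxes, key=lambda label: membership(x, *boxes[label]))

# Hypothetical intention classes over (ball_dist, goal_angle) features.
boxes = {
    "shoot_intent": ((0.0, 0.0), (0.2, 0.3)),
    "pass_intent":  ((0.3, 0.0), (0.8, 0.5)),
}
label = classify((0.1, 0.1), boxes)
```

Training such a network amounts to expanding or adding hyperboxes as labeled patterns arrive, which is why FMMNN offers the online adaptation and short training time cited as advantages.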

Prediction of Ball Trajectory in Robot Soccer Using Kalman Filter (로봇축구에서의 칼만필터를 이용한 공의 경로 추정)

  • Lee, Jin-Hee;Park, Tae-Hyun;Kang, Geun-Taek;Lee, Won-Chang
    • Proceedings of the KIEE Conference
    • /
    • 1999.07g
    • /
    • pp.2998-3000
    • /
    • 1999
  • Robot soccer is a challenging research area in which multiple robots collaborate in an adversarial environment to achieve specific objectives. We designed and built robotic agents for robot soccer, especially MIROSOT. We have been developing appropriate vision algorithms, algorithms for ball tracking and prediction, and algorithms for collaboration between the robots in an uncertain, dynamic environment. In this work we focus on the development of a ball tracking and prediction algorithm using a Kalman filter. The robustness and feasibility of the proposed algorithm are demonstrated by simulation.

  • PDF
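A constant-velocity Kalman tracker of the kind the abstract describes can be written per axis, so the 2x2 matrix algebra fits in plain Python. The frame rate, noise levels, and initial covariance below are assumptions; the paper's actual tuning is not stated in the abstract.

```python
# Per-axis constant-velocity Kalman filter for ball position tracking.

def kalman_track(measurements, dt=1.0 / 30, q=1e-4, r=1e-4):
    """Filter noisy 1-D positions; return (position, velocity) estimates."""
    x = [measurements[0], 0.0]           # state: [position, velocity]
    P = [[r, 0.0], [0.0, 100.0]]         # initial covariance (assumed)
    out = []
    for z in measurements:
        # Predict with x' = F x, F = [[1, dt], [0, 1]], diagonal Q.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with a position-only measurement z (H = [1, 0]).
        s = P[0][0] + r                  # innovation variance
        k = [P[0][0] / s, P[1][0] / s]   # Kalman gain
        y = z - x[0]                     # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append((x[0], x[1]))
    return out

# A ball rolling at 0.6 m/s, observed with small alternating noise.
zs = [0.6 * (i / 30) + (0.01 if i % 2 else -0.01) for i in range(60)]
estimates = kalman_track(zs)
```

Running the predict step forward without updates then gives the ball-path prediction used for interception; the velocity estimate is what makes that extrapolation possible from position-only vision measurements.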