• Title/Summary/Keyword: Soccer game


The design of controllers for soccer robots (축구 로봇을 위한 제어기 설계)

  • 김광춘;김동한;김종환
    • 제어로봇시스템학회:학술대회논문집 / 1997.10a / pp.612-616 / 1997
  • In this paper, two kinds of controllers are proposed for a soccer robot system: one for the supervisor and defense modes, and the other for the attack mode. A robot soccer game has very dynamic characteristics, and there is competition between agents, so a soccer-playing robot should take an appropriate action according to its surroundings. First, an attack-mode controller based on a vector field concept is designed; then a supervisor and a defense-mode controller are designed with a Petri net. The efficiency and applicability of the proposed controllers are demonstrated through a real robot soccer game (MiroSot 97).
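As a rough illustration of the vector-field idea mentioned in the abstract above, the sketch below computes an attack heading by steering toward a point just behind the ball on the ball-to-goal line. The standoff distance, coordinates, and function names are illustrative assumptions, not the paper's controller.

```python
import math

def attack_heading(robot_xy, ball_xy, goal_xy, standoff=0.1):
    """Vector-field style attack heading (illustrative sketch): aim at a point
    'standoff' metres behind the ball on the ball-to-goal line, so the robot
    approaches from the side that pushes the ball toward the goal."""
    bx, by = ball_xy
    gx, gy = goal_xy
    dist = math.hypot(gx - bx, gy - by) or 1e-9
    ux, uy = (gx - bx) / dist, (gy - by) / dist      # unit vector ball -> goal
    tx, ty = bx - standoff * ux, by - standoff * uy  # target point behind the ball
    rx, ry = robot_xy
    return math.atan2(ty - ry, tx - rx)              # desired heading in radians

# Example: robot at the origin, ball at (1.0, 0.5), goal centre at (2.2, 0.0).
print(attack_heading((0.0, 0.0), (1.0, 0.5), (2.2, 0.0)))
```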

Co-Operative Strategy for an Interactive Robot Soccer System by Reinforcement Learning Method

  • Kim, Hyoung-Rock;Hwang, Jung-Hoon;Kwon, Dong-Soo
    • International Journal of Control, Automation, and Systems / v.1 no.2 / pp.236-242 / 2003
  • This paper presents a cooperation strategy between a human operator and autonomous robots for an interactive robot soccer game. The interactive robot soccer game has been developed to allow humans to join the game dynamically and to reinforce its entertainment characteristics. To make such games more interesting, a cooperation strategy between the humans and the autonomous robots on a team is very important. Strategies can be pre-programmed or learned by the robots themselves with learning or evolving algorithms. Since the robot soccer system is hard to model and its environment changes dynamically, it is very difficult to pre-program cooperation strategies between robot agents. Q-learning, one of the most representative reinforcement learning methods, is known to be effective for solving problems dynamically without explicit knowledge of the system, so a Q-learning based method has been utilized in this research. Prior to applying Q-learning, state variables describing the game situation and the action sets of the robots were defined. After the learning process, the human operator could play the game more easily. To evaluate the usefulness of the proposed strategy, simulations and games were carried out.
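The abstract describes tabular Q-learning over discretized game states and robot action sets. A minimal sketch of that update rule follows; the action names, learning parameters, and state encoding are assumptions for illustration, not the paper's definitions.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1            # assumed learning parameters
ACTIONS = ["pass", "shoot", "block", "support"]  # hypothetical robot action set

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the hypothetical action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One Q-learning backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical transition: a pass from "own_half" that led to "attack_third".
q_update("own_half", "pass", reward=0.1, next_state="attack_third")
```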

Network Based Robot Soccer System (네트워크기반 로봇 축구 시스템)

  • Cho, Dong Kwon;Chung, Sang Bong;Sung, Young Whee
    • IEMEK Journal of Embedded Systems and Applications / v.4 no.1 / pp.9-15 / 2009
  • In this paper, a network based robot soccer system is proposed. The system consists of robots, an image processing sub-system, a game server, and client systems. Embedded techniques are applied to the hardware and software for controlling the robots and for image processing. In this robot soccer system, a gamer can see and control robots at a remote site through the Internet. During the game, the game server provides geometrical information on the robots, such as positions and orientations. We demonstrated the game in public and obtained promising results, even though some technical problems, such as communication delay and precise control of the robots, remain to be improved.
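The paper does not spell out its network protocol here; as a loose sketch of the described architecture, a game server could stream robot poses (positions and orientations) to a remote client as newline-delimited JSON over TCP. The port, message fields, and function names below are assumptions.

```python
import json
import socket

def serve_poses(pose_source, host="0.0.0.0", port=9000):
    """Toy game server: accept one client and stream robot poses over TCP.

    'pose_source' is any iterable of dicts such as
    {"robot_id": 3, "x": 0.42, "y": 1.10, "theta": 1.57}  (assumed format).
    """
    with socket.create_server((host, port)) as srv:
        conn, _addr = srv.accept()
        with conn:
            for pose in pose_source:
                conn.sendall((json.dumps(pose) + "\n").encode("utf-8"))
```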

NPC Control Model for Defense in Soccer Game Applying the Decision Tree Learning Algorithm (결정트리 학습 알고리즘을 활용한 축구 게임 수비 NPC 제어 방법)

  • Cho, Dal-Ho;Lee, Yong-Ho;Kim, Jin-Hyung;Park, So-Young;Rhee, Dae-Woong
    • Journal of Korea Game Society / v.11 no.6 / pp.61-70 / 2011
  • In this paper, we propose a defense NPC control model for the soccer game by applying the Decision Tree learning algorithm. The proposed model extracts the direction patterns and the action patterns generated by many soccer game users and applies these patterns to the Decision Tree learning algorithm. The model then decides the direction and the action according to the learned Decision Tree. Experimental results show that, while learning the Decision Tree takes some time, deciding the direction and the action from the learned tree takes only 0.001-0.003 milliseconds, so the proposed model can control NPCs in the soccer game system in real time. The proposed model also achieves higher accuracy than a previous model (Letia98) because it can utilize current state information, its analyzed information, and previous state information.
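A minimal sketch of the kind of pipeline the abstract describes, using scikit-learn's DecisionTreeClassifier. The feature layout (current state, analyzed information, previous state) and the action labels are assumptions based on the abstract, not the authors' data.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: hypothetical features from the current state, its analyzed form,
# and the previous state (e.g. ball dx, ball dy, previous direction code).
X = [[ 1.0,  0.2, 0],
     [-0.5,  0.8, 1],
     [ 0.3, -0.9, 2]]
y = ["move_left", "move_up", "tackle"]   # assumed defender action labels

clf = DecisionTreeClassifier(max_depth=5).fit(X, y)

# At run time the learned tree picks the defender NPC's next action quickly,
# which is why per-decision latency stays in the sub-millisecond range.
print(clf.predict([[0.9, 0.1, 0]]))
```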

A Tool for the Analysis of Robot Soccer Game

  • Matko, Drago;Klancar, Gregor;Lepetic, Marko
    • International Journal of Control, Automation, and Systems / v.1 no.2 / pp.222-228 / 2003
  • A tool which can be used for the analysis of a robot soccer game is presented. The tool enables automatic filtering and selection of game sequences which are suitable for the analysis of the game. Fuzzy logic is used since the data gathered by a camera is highly noisy. The data used in the paper was recorded during the game Germany - Slovenia in Hagen, on November 11, 2001. The dynamic parameters of our robots are estimated using the least squares technique. Meandering parameters are estimated and an attempt is made to identify the strategy of the opposing team with the method of introspection.
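As an illustration of the least-squares identification step mentioned in the abstract, the sketch below fits a first-order velocity model to logged data; the model form and the numbers are assumptions, not the authors' robot dynamics.

```python
import numpy as np

# Assumed first-order model: v[k+1] = a * v[k] + b * u[k]
v = np.array([0.00, 0.08, 0.15, 0.21, 0.26, 0.30])  # measured speeds (m/s)
u = np.array([1.0, 1.0, 1.0, 1.0, 1.0])             # commands between samples

A = np.column_stack([v[:-1], u])                    # regressors [v_k, u_k]
theta, *_ = np.linalg.lstsq(A, v[1:], rcond=None)   # least-squares estimate
a_hat, b_hat = theta
print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}")
```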

Evolvable Cooperation Strategy for the Interactive Robot Soccer with Genetic Programming

  • Kim, Hyoung-Rock;Hwang, Jung-Hoon;Kwon, Dong-Soo
    • 제어로봇시스템학회:학술대회논문집 / 2001.10a / pp.59.2-59 / 2001
  • This paper presents an evolvable cooperation strategy based on genetic programming for the interactive robot soccer game. The interactive robot soccer game has been developed to allow a person to join the game dynamically and to reinforce its entertainment characteristics. In this game, a cooperation strategy between humans and autonomous robots is very important in order to make the game more enjoyable. First of all, the action sets necessary for the cooperation strategy and its strategy structure are presented. In the first stage, a blocking action, in which an autonomous robot cuts off an opponent robot to keep it from obstructing the human-controlled robot, has been considered. The success probability of the blocking action has been obtained in ...
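As a toy, mutation-only sketch of the genetic-programming idea behind the abstract (real genetic programming would also use crossover, and the paper's action sets and blocking-success fitness are not reproduced here):

```python
import random

# Hypothetical terminal actions and a toy fitness measure.
ACTIONS = ["block", "pass", "shoot", "support"]

def random_strategy(depth=2):
    """Grow a tiny strategy tree: a leaf action, or ('if_opponent_near', then, else)."""
    if depth <= 0 or random.random() < 0.3:
        return random.choice(ACTIONS)
    return ("if_opponent_near", random_strategy(depth - 1), random_strategy(depth - 1))

def mutate(strategy, depth=2):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if isinstance(strategy, str) or random.random() < 0.5:
        return random_strategy(depth)
    cond, then_b, else_b = strategy
    if random.random() < 0.5:
        return (cond, mutate(then_b, depth - 1), else_b)
    return (cond, then_b, mutate(else_b, depth - 1))

def act(strategy, opponent_near):
    """Evaluate the strategy tree for the current (boolean) game situation."""
    if isinstance(strategy, str):
        return strategy
    _cond, then_b, else_b = strategy
    return act(then_b if opponent_near else else_b, opponent_near)

def fitness(strategy):
    """Toy fitness: block when an opponent is near, otherwise shoot."""
    return sum(act(strategy, near) == ("block" if near else "shoot")
               for near in (True, False))

# Simple generational loop: keep the better half, refill with mutants.
population = [random_strategy() for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(p) for p in population[:10]]
print(max(population, key=fitness))
```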

A Soccer Image Sequence Mosaicking and Analysis Method Using Line and Advertisement Board Detection

  • Yoon, Ho-Sub;Bae, Young-Lae J.;Yang, Young-Kyu
    • ETRI Journal / v.24 no.6 / pp.443-454 / 2002
  • This paper introduces a system for mosaicking sequences of soccer images into a panoramic view for soccer game analysis. The continuous mosaic images of the soccer ground field allow the user to view a wide picture of the players' actions. The initial component of our algorithm automatically detects and traces the players and some lines. The next component finds the parameters of the captured image coordinates and transforms them into ground model coordinates for automatic soccer game analysis. The results of our experiments indicate that the proposed system offers a promising method for segmenting, mosaicking, and analyzing soccer image sequences.
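As an illustration of the image-to-ground transformation step described in the abstract, the sketch below maps image pixels to field coordinates through a planar homography using OpenCV; the landmark pixels are invented, and a 105 m by 68 m field is assumed.

```python
import cv2
import numpy as np

# Four detected field landmarks in image pixels and their positions on the
# ground model (metres); pixel values here are made up for illustration.
img_pts = np.float32([[102, 340], [615, 332], [580, 90], [140, 95]])
ground_pts = np.float32([[0, 0], [105, 0], [105, 68], [0, 68]])

H, _mask = cv2.findHomography(img_pts, ground_pts)

# Map a tracked player position from image coordinates to field coordinates.
player_img = np.float32([[[300, 220]]])
player_field = cv2.perspectiveTransform(player_img, H)
print(player_field)
```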

Embodiment of Effective Multi-Robot Control Algorithm Using Petri-Net (Petri-Net을 이용한 효과적인 다중로봇 제어알고리즘의 구현)

  • 선승원;국태용
    • Journal of Institute of Control, Robotics and Systems / v.9 no.11 / pp.906-916 / 2003
  • A multi-robot control algorithm using a Petri net is proposed for 5vs5 robot soccer. The dynamic environment of robot soccer is modeled by defining the places and transitions of each robot and converting them into a Petri-net diagram. Once all the places and transitions of the robots are represented in the Petri-net model, their actions can be chosen according to the roles of the robots (e.g., offensive, defensive, and goalie robots) and the position of the ball in the soccer game. The proposed modeling method is implemented for a soccer robot system. The efficiency and applicability of the proposed multi-robot control algorithm using Petri nets are demonstrated through a 5vs5 Middle League SimuroSot soccer game.
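A minimal sketch of the Petri-net mechanics the abstract builds on: places hold tokens, and a transition fires when every input place is marked, moving tokens to its output places. The place and transition names below are hypothetical, not the paper's 5vs5 model.

```python
# Marking: tokens per place (hypothetical places for one field robot).
marking = {"has_ball": 1, "near_goal": 1, "shooting": 0, "defending": 0}

# Each transition consumes a token from every input place and puts one on
# every output place.
transitions = {
    "start_shot": {"in": ["has_ball", "near_goal"], "out": ["shooting"]},
    "fall_back":  {"in": ["shooting"],              "out": ["defending"]},
}

def enabled(name):
    return all(marking[p] > 0 for p in transitions[name]["in"])

def fire(name):
    """Fire an enabled transition and update the marking."""
    assert enabled(name), f"{name} is not enabled"
    for p in transitions[name]["in"]:
        marking[p] -= 1
    for p in transitions[name]["out"]:
        marking[p] += 1

fire("start_shot")
print(marking)  # {'has_ball': 0, 'near_goal': 0, 'shooting': 1, 'defending': 0}
```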

Generating Cooperative Behavior by Multi-Agent Profit Sharing on the Soccer Game

  • Miyazaki, Kazuteru;Terada, Takashi;Kobayashi, Hiroaki
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.166-169 / 2003
  • Reinforcement learning is a kind of machine learning. It aims to adapt an agent to a given environment using rewards and penalties as clues. Q-learning [8], a representative reinforcement learning system, treats rewards and penalties at the same time, and there is the problem of how to decide appropriate reward and penalty values. The Penalty Avoiding Rational Policy Making algorithm (PARP) [4] and Penalty Avoiding Profit Sharing (PAPS) [2] are reinforcement learning systems that treat rewards and penalties independently. Though PAPS is a descendant algorithm of PARP, both PARP and PAPS tend to learn a locally optimal policy. To overcome this, in this paper we propose the Multi Best method (MB), which is PAPS combined with the multi-start method [5]. MB selects the best policy among several policies learned by PAPS agents. By applying PS, PAPS, and MB to a soccer game environment based on SoccerBots [9], we show that MB is the best solution for the soccer game environment.
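As a rough sketch of the basic Profit Sharing credit assignment the abstract starts from (a geometrically decreasing credit function is assumed; the penalty-avoiding extensions PARP/PAPS and the multi-start selection in MB are not reproduced):

```python
from collections import defaultdict

weights = defaultdict(float)  # weight of each (state, action) rule

def profit_sharing_update(episode, reward, decay=0.5):
    """Distribute the episode's reward backwards over the fired rules with a
    geometrically decreasing credit, as in basic Profit Sharing."""
    credit = reward
    for state, action in reversed(episode):
        weights[(state, action)] += credit
        credit *= decay

# One hypothetical episode that ended with a goal (reward = 1.0).
profit_sharing_update([("midfield", "dribble"), ("box", "shoot")], reward=1.0)
print(dict(weights))
```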

The development of a micro robot system for robot soccer game (로봇 축구 대회를 위한 마이크로 로봇 시스템의 개발)

  • 이수호;김경훈;김주곤;조형석
    • 제어로봇시스템학회:학술대회논문집 / 1996.10b / pp.507-510 / 1996
  • In this paper we present the multi-agent robot system developed for participating in a micro robot soccer tournament. The multi-agent robot system consists of micro robots, a vision system, a host computer, and a communication module. Each micro robot is equipped with two mini DC motors with encoders and gearboxes, an R/F receiver, a CPU, and infrared sensors for obstacle detection. The vision system, composed of a color CCD camera and a vision processing unit, is used to recognize the position of the ball and the opponent robots as well as the positions and orientations of our robots. The host computer is a Pentium PC; it receives information from the vision system, generates commands for each robot using a robot management algorithm, and transmits the commands to the robots through the R/F communication module. In order to achieve a given mission in the micro robot soccer game, cooperative behaviors among the robots are essential, and cooperation between individual agents is achieved through the commands of the host computer.
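A rough sketch of the host-side cycle described in the abstract: read the vision data, run the robot management algorithm, and transmit commands over the R/F link. All objects and method names here are placeholders, not the authors' implementation.

```python
import time

def control_loop(vision, strategy, rf_link, period=1 / 30):
    """Host-computer cycle, once per camera frame (assumed interfaces):

    vision.read()            -> ball position and each robot's pose
    strategy.decide(frame)   -> dict mapping robot id to wheel commands
    rf_link.send(commands)   -> broadcast the command packet over R/F
    """
    while True:
        frame = vision.read()              # positions/orientations from the camera
        commands = strategy.decide(frame)  # robot management algorithm
        rf_link.send(commands)             # R/F transmission to the robots
        time.sleep(period)
```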
