• Title/Summary/Keyword: Prey agent


A research on non-interactive multi agents by ACS & Direction vector algorithm (ACS & 방향벡터 알고리즘을 이용한 비 대화형 멀티에이전트 전략에 관한 연구)

  • Kim, Hyun;Yoon, Seok-Hyun;Chung, Tae-Choong
    • Journal of the Korea Society of Computer and Information / v.15 no.12 / pp.11-18 / 2010
  • In this paper, we suggest new strategies for non-interactive agents applied to the prey pursuit problem of multi-agent research. The prey pursuit problem is set in a grid space with four agents and one prey: the allied agents must surround and capture the single prey. The problem has long been studied with both interactive and non-interactive multi-agent methods. We seek a solution with a non-interactive agent method in an environment that differs from the original one (a circular environment). We use ACS combined with a direction vector to learn and decide each agent's direction of movement. In place of the previously presented exchange of information between agents (interactive agents), we apply a new method in which agents exchange no information (non-interactive agents), and show that the problem can still be solved. This is quite distinct from existing multi-agent studies in that a solution is found with independent agents rather than interactive ones.
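
The abstract above describes choosing a movement direction with ACS biased by a direction vector toward the prey. The sketch below shows, under stated assumptions, how an ACS-style pseudo-random-proportional rule might weigh pheromone levels by a direction-vector alignment heuristic; the pheromone table, parameters, and alignment function are illustrative, not the paper's implementation.

```python
import math
import random

# Moves available on the grid (4-neighborhood).
MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}

def alignment(move, direction):
    """Heuristic: 1 + cosine similarity between a grid move and the
    agent-to-prey direction vector, mapped into [0, 2] for weighting."""
    dx, dy = move
    vx, vy = direction
    norm = math.hypot(vx, vy) or 1.0
    return 1.0 + (dx * vx + dy * vy) / norm

def choose_move(pheromone, direction, q0=0.9, beta=2.0):
    """ACS pseudo-random-proportional rule: with probability q0 exploit
    the best pheromone * heuristic**beta move, otherwise sample."""
    scores = {m: pheromone[m] * alignment(MOVES[m], direction) ** beta
              for m in MOVES}
    if random.random() < q0:
        return max(scores, key=scores.get)
    r, acc = random.random() * sum(scores.values()), 0.0
    for m, s in scores.items():
        acc += s
        if acc >= r:
            return m
    return m  # numerical fallback

def local_update(pheromone, move, rho=0.1, tau0=0.1):
    """ACS local pheromone update applied after each move."""
    pheromone[move] = (1 - rho) * pheromone[move] + rho * tau0

pheromone = {m: 0.1 for m in MOVES}
move = choose_move(pheromone, direction=(3, -2))  # prey up and to the right
local_update(pheromone, move)
print("chosen move:", move)
```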

Avoidance Behavior of Small Mobile Robots based on the Successive Q-Learning

  • Kim, Min-Soo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.164.1-164 / 2001
  • Q-learning is a reinforcement learning algorithm that needs no model of the environment, which makes it a suitable approach for learning the behaviors of autonomous agents. But when it is applied to multi-agent learning with many I/O states, it is usually too complex and slow. To overcome this problem in multi-agent learning systems, we propose the successive Q-learning algorithm. Successive Q-learning divides the state-action pairs that agents can have into several Q-functions, which reduces complexity and the amount of computation. The algorithm is well suited to multi-agent learning in a dynamically changing environment. The proposed successive Q-learning algorithm is applied to the prey-predator problem with one prey and two predators, and its effectiveness is verified by the efficient avoidance ability of the prey agent.

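A minimal sketch of the decomposition idea in the abstract above: rather than one Q-table over the full joint state, keep a small Q-function per state component and sum them when selecting actions. The component split, parameters, and update rule below are one plausible reading, not the paper's exact formulation.

```python
import random
from collections import defaultdict

ACTIONS = ["N", "S", "E", "W", "stay"]

class SuccessiveQ:
    """Several component Q-functions whose sum approximates Q(s, a)."""

    def __init__(self, n_components, alpha=0.1, gamma=0.9, eps=0.1):
        # one table per state component -> far fewer entries than a
        # single table over the joint state
        self.q = [defaultdict(float) for _ in range(n_components)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def value(self, state, action):
        return sum(q[(s, action)] for q, s in zip(self.q, state))

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value(state, a))

    def update(self, state, action, reward, next_state):
        best_next = max(self.value(next_state, a) for a in ACTIONS)
        td = reward + self.gamma * best_next - self.value(state, action)
        for q, s in zip(self.q, state):
            # each component table absorbs an equal share of the TD error
            q[(s, action)] += self.alpha * td / len(self.q)

# state components: discretized relative positions of the two predators
learner = SuccessiveQ(n_components=2)
s = ((1, 0), (-2, 3))            # (dx, dy) to predator 1 and predator 2
a = learner.act(s)
learner.update(s, a, reward=1.0, next_state=s)
print("action:", a, "new value:", learner.value(s, a))
```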

Multi-Agent Control Strategy using Reinforcement Learning (강화학습을 이용한 다중 에이전트 제어 전략)

  • Lee, Hyong-Ill (이형일)
    • Journal of Korea Multimedia Society / v.6 no.5 / pp.937-944 / 2003
  • The most important problems in a multi-agent system are to accomplish a goal through the efficient coordination of several agents and to prevent collisions with other agents. In this paper, we propose a new control strategy for achieving the goal of the prey pursuit problem efficiently. Our control method uses reinforcement learning to control the multi-agent system and considers the distance as well as the spatial relationship among the agents in the state space of the prey pursuit problem.

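The state space described above combines distance to the prey with the spatial relationship among agents. The sketch below shows one hedged way such a state could be discretized; the bucket boundaries and the eight-sector encoding are assumptions for illustration, not the paper's actual design.

```python
import math

def sector(dx, dy):
    """Discretize a relative position into one of 8 compass sectors."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle // (math.pi / 4))  # 0..7

def distance_bucket(dx, dy, edges=(1, 2, 4, 8)):
    """Coarse range bands so far-away detail does not blow up the table."""
    d = math.hypot(dx, dy)
    for i, e in enumerate(edges):
        if d <= e:
            return i
    return len(edges)

def encode_state(agent, prey, teammates):
    """State = (distance bucket to prey, sector of prey,
    sorted sectors of teammates): compact yet relational."""
    ax, ay = agent
    px, py = prey
    prey_part = (distance_bucket(px - ax, py - ay), sector(px - ax, py - ay))
    mate_part = tuple(sorted(sector(tx - ax, ty - ay) for tx, ty in teammates))
    return prey_part + mate_part

print(encode_state(agent=(0, 0), prey=(3, 1), teammates=[(1, 2), (-1, 0)]))
```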

The Application of Direction Vector Function for Multi Agents Strategy and The Route Recommendation System Research in A Dynamic Environment (멀티에이전트 전략을 위한 방향벡터 함수 활용과 동적 환경에 적응하는 경로 추천시스템에 관한 연구)

  • Kim, Hyun;Chung, Tae-Choong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.2 / pp.78-85 / 2011
  • In this paper, research on multi-agent systems is carried out in order to develop a system that can provide drivers with real-time route recommendations by reflecting Dynamic Environment Information (DEI), which covers driver traits, road conditions, and the route recommendation system itself. DEI corresponds to n multi-agents and is the set of environment variables used by the route recommendation system to compute optimal routes for drivers. A route recommendation system that reflects DEI can be considered a new topic in multi-agent research. The representative multi-agent problem, the prey pursuit problem, was used to generate a fresh solution. This paper addresses a shortcoming of the prey pursuit problem, namely its neglect of practicality: alongside the original experiment, a practical experiment applying a new Ant-Q method was provided, together with a comparison against the established direction-vector strategies. With these methods combined, an increase in efficiency was demonstrated.
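
The abstract above builds on Ant-Q, whose value update blends Q-learning's discounted lookahead with ant-style delayed reinforcement (Gambardella & Dorigo, 1995). A minimal sketch of that update follows; the tiny road graph and all parameter values are illustrative assumptions, not the paper's configuration.

```python
ALPHA, GAMMA = 0.1, 0.3

def antq_update(AQ, r, s, delayed_reward, next_moves):
    """Ant-Q rule: AQ(r,s) <- (1-a)*AQ(r,s) + a*(dAQ + g*max_z AQ(s,z))."""
    best_next = max((AQ.get((s, z), 0.0) for z in next_moves), default=0.0)
    AQ[(r, s)] = ((1 - ALPHA) * AQ.get((r, s), 0.0)
                  + ALPHA * (delayed_reward + GAMMA * best_next))

AQ = {}
# local step on a hypothetical road segment A->B whose exits are C and D:
# no delayed reward yet, only the discounted lookahead term
antq_update(AQ, "A", "B", delayed_reward=0.0, next_moves=["C", "D"])
# after a full route is evaluated, its edges receive delayed reinforcement
# inversely proportional to the route cost (42.0 here is a placeholder)
antq_update(AQ, "A", "B", delayed_reward=1.0 / 42.0, next_moves=["C", "D"])
print(AQ)
```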

Univector Field Method based Multi-Agent Navigation for Pursuit Problem

  • Viet, Hoang Huu;An, Sang-Hyeok;Chung, Tae-Choong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.1 / pp.86-93 / 2012
  • This paper presents a new approach to solve the pursuit problem based on a univector field method. In our proposed method, a set of eight agents works together instantaneously to find suitable moving directions and follow the univector field to pursue and capture a prey agent by surrounding it from eight directions in an infinite grid-world. In addition, a set of strategies is proposed to make the pursuit problem more realistic in the real world environment. This is a general approach, and it can be extended for an environment that contains static or moving obstacles. Experimental results show that our proposed algorithm is effective for the pursuit problem.
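
A hedged sketch of the univector-field navigation the abstract describes: every position maps to a unit heading vector, and each of the eight agents follows its own field toward a surround slot one cell from the prey in one of the eight compass directions. The purely attractive field shape below is an illustrative simplification; practical univector fields also shape the approach angle.

```python
import math

# The eight surround offsets around the prey, one per agent.
EIGHT = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
         if (dx, dy) != (0, 0)]

def univector(pos, target):
    """Field value at `pos`: unit vector pointing at `target`."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    norm = math.hypot(dx, dy)
    return (0.0, 0.0) if norm == 0 else (dx / norm, dy / norm)

def step(agents, prey, speed=1.0):
    """Advance every agent one step along its own field."""
    moved = []
    for agent, (ox, oy) in zip(agents, EIGHT):
        target = (prey[0] + ox, prey[1] + oy)
        vx, vy = univector(agent, target)
        moved.append((agent[0] + speed * vx, agent[1] + speed * vy))
    return moved

agents = [(10.0 * math.cos(k), 10.0 * math.sin(k)) for k in range(8)]
for _ in range(12):
    agents = step(agents, prey=(0.0, 0.0))
print([tuple(round(c, 1) for c in a) for a in agents])
```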

Analysis of Behaviour of Prey to avoid Pursuit using Quick Rotation (급회전을 이용한 희생자의 추격 피하기 행동 분석)

  • Lee, Jae Moon
    • Journal of Korea Game Society / v.13 no.6 / pp.27-34 / 2013
  • This paper analyzes the behaviour of a prey avoiding the pursuit of a predator, a predator-prey relationship that appears in the collective behavior of animals. One way to avoid a pursuing predator is to rotate quickly when the predator comes near. At that moment, a critical distance and a rotation angle are very important for the prey to survive the pursuit, where the critical distance is the distance between the predator and the prey just before rotation. In order to analyze the critical distance and the rotation angle, this paper introduces an energy for the predator, which it holds at the starting point of the chase and consumes during the chase. Through simulations, we find that the rotation angle a prey needs to survive the pursuit increases as the critical distance becomes shorter and as the ratio of the predator's mass to the prey's mass decreases. The simulation results match phenomena observed in nature, which indicates that the analysis method of this paper is sound.
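
To make the setup concrete, here is a toy version of the chase the abstract models: the predator starts with a fixed energy budget and spends it while running, and the prey turns sharply once the predator closes to the critical distance. All speeds, budgets, and geometry below are illustrative assumptions, not the paper's model.

```python
import math

def chase(critical_distance, turn_angle_deg, energy=40.0,
          pred_speed=1.5, prey_speed=1.0, catch_radius=0.5):
    pred, prey = [0.0, 0.0], [12.0, 0.0]
    heading = 0.0          # prey initially flees along +x
    turned = False
    while energy > 0:
        if math.dist(pred, prey) <= catch_radius:
            return "caught"
        if not turned and math.dist(pred, prey) <= critical_distance:
            heading += math.radians(turn_angle_deg)   # the quick rotation
            turned = True
        prey[0] += prey_speed * math.cos(heading)
        prey[1] += prey_speed * math.sin(heading)
        # predator re-aims at the prey's current position every step
        d = math.dist(pred, prey) or 1.0
        pred[0] += pred_speed * (prey[0] - pred[0]) / d
        pred[1] += pred_speed * (prey[1] - pred[1]) / d
        energy -= pred_speed  # energy spent scales with distance run
    return "escaped"

for angle in (30, 90, 150):
    print(f"turn {angle:>3} deg:",
          chase(critical_distance=2.0, turn_angle_deg=angle))
```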

Multi-Agent Control Strategy Using Reinforcement Learning (강화학습을 이용한 다중 에이전트 제어 전략)

  • Lee, Hyong-Ill;Kim, Byung-Cheon
    • The KIPS Transactions: Part B / v.10B no.3 / pp.249-256 / 2003
  • The most important problems in a multi-agent system are to accomplish a goal through the efficient coordination of several agents and to prevent collisions with other agents. In this paper, we propose a new control strategy for achieving the goal of the prey pursuit problem efficiently. Our control method uses reinforcement learning to control the multi-agent system and considers the distance as well as the spatial relationship between the agents in the state space of the prey pursuit problem.

Studies on the Biology and Predatory Behaviour of Eocanthecona furcellata (Wolff.) Predating on Spilarctia obliqua (Walk.) in Mulberry Plantation

  • Kumar, Vineet;Morrison, M.N.;Rajadurai, S.;Babu, A.M.;Thiagarajan, V.;Datta, R.K.
    • International Journal of Industrial Entomology and Biomaterials / v.2 no.2 / pp.173-180 / 2001
  • The stink bug, Eocanthecona furcellata (Wolff.) is a natural and potential biocontrol agent of Spilarctia obliqua (Walk.). The present investigation reveals the biology, predatory efficiency and reproductive parameters of the predator, which feeds on S. obliqua caterpillars in mulberry plantations. In order to find out the role of prey size in the biology of the predator, the predatory insects were separately fed with small and large caterpillars of S. obliqua. The incubation period of the eggs of E. furcellata was 8.37 ± 0.44 days, while the nymphal duration varied with prey size. Predators supplied with small larvae of the prey consumed 61.1 larvae and completed the nymphal stage in 19.9 days, while those fed with larger prey consumed 36.1 larvae and completed their nymphal stage in 21.55 days. Prey size also influences the reproductive parameters of the predator. The adult female predator is a more voracious feeder than the adult male and consumed 41.9 ± 0.64 small larvae and 42.2 ± 0.87 large larvae during its life span. The longevity of the male and female was observed to be 20.7 and 29.4 days respectively. Vision of the predator as well as the movement of the prey increases predatory efficiency. Scanning electron microscopic studies of the feeding parts explain their support of effective predation. Field observations indicated a drastic fall in the incidence of the mulberry pest S. obliqua with the increased population of E. furcellata in the mulberry plantation.


Multi-agent Coordination Strategy Using Reinforcement Learning (강화 학습을 이용한 다중 에이전트 조정 전략)

  • Kim, Su-Hyun;Kim, Byung-Cheon;Yoon, Byung-Joo
    • Proceedings of the Korea Information Processing Society Conference / 2000.10a / pp.285-288 / 2000
  • In this paper, reinforcement learning is used to efficiently coordinate the behavior of agents in a multi-agent environment. The proposed method uses each agent's distance relationship to the goal and its spatial relationship to adjacent agents. Each agent can therefore select the optimal next state without collisions occurring with other agents. In addition, since the reinforcement value received from the state space lies between 0 and 1, it can express how good each agent's chosen (state, action) pair is. Applying the proposed method to the prey pursuit problem, it coordinated the behavior of the agents more efficiently than methods using local control or distributed control strategies, and captured the prey very quickly.

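The translated abstract above keys the reinforcement value to the interval [0, 1], combining distance to the goal with the spatial relationship to neighbors. Below is a hedged sketch of such a bounded reward; the weights, the distance scale, and the two-cell spacing threshold are illustrative assumptions.

```python
import math

def reward(agent, prey, neighbors, w_goal=0.7, w_space=0.3, scale=10.0):
    """Reward in [0, 1]: higher when closer to the prey, lower when
    crowding a neighboring agent (collision pressure)."""
    goal_term = max(0.0, 1.0 - math.dist(agent, prey) / scale)
    d_near = min((math.dist(agent, n) for n in neighbors), default=scale)
    space_term = min(1.0, d_near / 2.0)   # >= 2 cells apart scores fully
    # both terms lie in [0, 1] and the weights sum to 1, so the result
    # stays in [0, 1]
    return w_goal * goal_term + w_space * space_term

print(reward(agent=(3, 4), prey=(0, 0), neighbors=[(3, 5), (9, 9)]))
```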

The multi agent control heuristic using direction vector (방향 벡터를 이용한 다중에이전트 휴리스틱)

  • Kim Hyun;Lee SeungGwan;Chung TaeChoong
    • Proceedings of the Korea Information Processing Society Conference / 2004.11a / pp.525-528 / 2004
  • The prey pursuit problem is to capture a prey with multiple agents in a virtual grid space. To capture the prey, the agents apply previously proposed strategies such as local control, distributed control, and distributed control with reinforcement learning in a 30×30 grid space. Because a bounded grid space is far too limited to represent the real world, this paper instead represents an unbounded environment similar to the real world. The environment model is a new experimental space, a circular grid space, and the strategy suited to this new space models the pursuit relationship between the agents and the prey with direction vectors. In this environment, which differs from previous experiments, we obtained the results that agents can learn through the heuristic, capture the prey efficiently, and resolve collisions.

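On the circular grid space described above, the displacement to the prey should be measured the shorter way around each axis. A minimal sketch of such a wrap-around direction vector follows; the 30×30 size matches the abstract, while the example coordinates are illustrative.

```python
SIZE = 30  # the 30x30 circular (wrap-around) grid from the abstract

def wrap_delta(a, b, size=SIZE):
    """Signed shortest displacement from a to b on a circular axis."""
    d = (b - a) % size
    return d - size if d > size // 2 else d

def direction_vector(agent, prey):
    """Per-axis step (-1, 0, or +1) toward the prey the short way around."""
    dx = wrap_delta(agent[0], prey[0])
    dy = wrap_delta(agent[1], prey[1])
    sign = lambda v: (v > 0) - (v < 0)
    return sign(dx), sign(dy)

# Agent near the right edge, prey near the left edge: the shortest path
# wraps around, so the agent steps further right (+1), not left.
print(direction_vector(agent=(28, 5), prey=(1, 5)))  # -> (1, 0)
```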