• Title/Summary/Keyword: Game Agent


Measuring gameplay similarity between human and reinforcement learning artificial intelligence (사람과 강화학습 인공지능의 게임플레이 유사도 측정)

  • Heo, Min-Gu;Park, Chang-Hoon
    • Journal of Korea Game Society / v.20 no.6 / pp.63-74 / 2020
  • Recently, research on automating game testing with artificial intelligence agents instead of human testers has been attracting attention. As a preliminary study toward automated game balancing, this paper collects play data from humans and artificial intelligence agents and analyzes their similarity. Constraints were added at the learning stage so that the artificial intelligence would play in a human-like way. Play data were obtained from 14 people and 60 artificial intelligence agents, each playing the Flappy Bird game 10 times. The collected data were compared and analyzed for movement trajectory, action position, and death position using cosine similarity. The analysis found artificial intelligence agents whose similarity to humans was 0.9 or higher.
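
A minimal sketch of this kind of comparison, assuming each play record has been resampled to a fixed-length sequence of positions; the data, helper names, and resampling length are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two flattened play-data vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def resample(trajectory, n=100):
    """Resample a variable-length (x, y) trajectory to n points so that
    human and agent plays become comparable fixed-length vectors."""
    t = np.asarray(trajectory, dtype=float)
    idx = np.linspace(0, len(t) - 1, n)
    xs = np.interp(idx, np.arange(len(t)), t[:, 0])
    ys = np.interp(idx, np.arange(len(t)), t[:, 1])
    return np.concatenate([xs, ys])

# Hypothetical play records: lists of (x, y) positions per playthrough.
human_play = [(0, 5), (1, 6), (2, 5), (3, 4), (4, 5)]
agent_play = [(0, 5), (1, 5), (2, 5), (3, 5), (4, 6)]

sim = cosine_similarity(resample(human_play), resample(agent_play))
print(f"trajectory similarity: {sim:.3f}")   # values near 1.0 indicate similar play
```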

AGENT-BASED SIMULATION OF ORGANIZATIONAL DYNAMICS IN CONSTRUCTION PROJECT TEAMS

  • JeongWook Son;Eddy M. Rojas
    • International conference on construction engineering and project management / 2011.02a / pp.439-444 / 2011
  • As construction projects have been getting larger and more complex, a single individual or organization cannot have complete knowledge or the ability to handle all matters. Collaborative practices among heterogeneous individuals, who are temporarily brought together to carry out a project, are required in order to accomplish project objectives. The organizational knowledge creation processes of project teams should be understood from the active, dynamic viewpoint of how they create information and knowledge rather than from a passive, static input-process-output sequence. To this end, agent-based modeling and simulation, which is built from a ground-up perspective, provides an appropriate way to investigate them systematically. In this paper, agent-based modeling and simulation is introduced as a research method and a medium for representing theory. To illustrate, an agent-based simulation of the evolution of collaboration in large-scale project teams, from a game theory and social network perspective, is presented.
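
As a rough illustration of the ground-up approach described above (and not the authors' model), the sketch below puts agents on a random project-team network, has them play an iterated prisoner's dilemma, and lets each imitate its best-performing neighbour so that collaboration can emerge or collapse over time; the payoffs and network parameters are arbitrary assumptions:

```python
import random
import networkx as nx

# Prisoner's dilemma payoffs: my payoff for (my move, neighbour's move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def simulate(n_agents=30, rounds=50, seed=1):
    random.seed(seed)
    team = nx.erdos_renyi_graph(n_agents, 0.2, seed=seed)     # who works with whom
    strategy = {a: random.choice(["C", "D"]) for a in team}   # C = collaborate
    for _ in range(rounds):
        score = {a: 0 for a in team}
        for a, b in team.edges:
            score[a] += PAYOFF[(strategy[a], strategy[b])]
            score[b] += PAYOFF[(strategy[b], strategy[a])]
        # Each agent imitates its best-performing neighbour (bounded rationality).
        new_strategy = dict(strategy)
        for a in team:
            neighbours = list(team.neighbors(a))
            if neighbours:
                best = max(neighbours + [a], key=lambda x: score[x])
                new_strategy[a] = strategy[best]
        strategy = new_strategy
    return sum(s == "C" for s in strategy.values()) / n_agents

print(f"share of collaborators after simulation: {simulate():.2f}")
```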

  • PDF

3D Affordance Field based Crowd Agent Behavior Simulation (3D 행동 유도장 기반 대규모 에이전트 행동 시뮬레이션)

  • Ok, Sooyol;Han, MyungWoo;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.5 / pp.629-641 / 2021
  • Crowd behavior simulations have been accelerated and refined through parallelism by encoding the attraction and repulsion forces between agents into an image field. However, it was difficult to handle rapidly changing environments, such as building fires, because the texture images must be generated before the simulation starts and the simulation can only run in 2D space. In this paper, we propose a crowd agent behavior simulation method based on a 3D affordance field that allows flexible agent behavior in varying terrain environments in 3D space. The proposed method generates 3D affordance fields for agents and sensors, extending the image-based induction field to 3D, and defines agent behavior in 3D space for the crowd behavior simulation. Experimental results verified that our method enables large-scale crowd behavior simulations that adapt flexibly to various fire evacuation situations in 3D virtual spaces.
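
A minimal numpy sketch of the attraction/repulsion idea carried into 3D, with made-up force coefficients and a single goal point standing in for the paper's affordance field:

```python
import numpy as np

def step_crowd(positions, goal, dt=0.1, k_attract=1.0, k_repulse=0.5, radius=1.0):
    """One Euler step of a force-based crowd update in 3D.
    positions: (N, 3) agent positions; goal: (3,) attraction point."""
    to_goal = goal - positions
    dist_goal = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    force = k_attract * to_goal / dist_goal                   # unit-length attraction
    # Pairwise repulsion between agents that are closer than `radius`.
    diff = positions[:, None, :] - positions[None, :, :]      # (N, N, 3)
    dist = np.linalg.norm(diff, axis=2, keepdims=True) + 1e-9
    mask = (dist < radius) & (dist > 1e-6)                    # exclude self-interaction
    force += k_repulse * np.sum(np.where(mask, diff / dist**2, 0.0), axis=1)
    return positions + dt * force

agents = np.random.rand(100, 3) * 10.0        # 100 agents scattered in a 10 m cube
exit_point = np.array([10.0, 0.0, 5.0])       # e.g. an evacuation exit
for _ in range(200):
    agents = step_crowd(agents, exit_point)
```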

Game System for Agent applied Artificial Intelligence based on Augmented Reality (증강현실 기반의 인공지능이 적용된 에이전트를 위한 게임 시스템)

  • Jang, yu-na;Park, sung-jun
    • Proceedings of the Korea Contents Association Conference / 2010.05a / pp.49-51 / 2010
  • With the spread of smartphones, augmented reality has become widely known and has drawn public attention, and thanks to its portability, augmented reality research on mobile devices has become an established trend. Existing studies combining augmented reality with artificial intelligence are concentrated mainly in robotics, and research applying the combination to games is scarce. In addition, the data needed to move agents driven by artificial intelligence are still entered manually by the user or recognized with markers. In this paper, we propose a game system for agents with artificial intelligence based on augmented reality, in which data generated by markerless tracking are used by the artificial intelligence component. The system was implemented on the iPhone and validated by measuring recognition rate and accuracy.
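
The paper's system targets the iPhone; the sketch below only illustrates the general idea of markerless tracking feeding an agent, using OpenCV feature matching to locate a reference object in a camera frame and a toy chase behaviour. The thresholds and the agent logic are assumptions for illustration, not the authors' implementation:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate_target(reference_gray, frame_gray):
    """Estimate the reference object's centre in the frame via feature matching
    and a RANSAC homography (a markerless-tracking stand-in)."""
    kp1, des1 = orb.detectAndCompute(reference_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    if len(matches) < 10:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = reference_gray.shape
    centre = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
    return centre[0, 0]     # (x, y) of the tracked object in the frame

def agent_step(agent_xy, target_xy, speed=5.0):
    """Toy agent behaviour: move toward the tracked position."""
    direction = np.asarray(target_xy, dtype=float) - np.asarray(agent_xy, dtype=float)
    norm = np.linalg.norm(direction)
    return agent_xy if norm < 1e-6 else agent_xy + speed * direction / norm
```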

Design of Improved Intellectual MOB Agent for Online Game (온라인 게임을 위한 향상된 지능형 MOB 에이전트 설계)

  • Kim, Jin-Soo;Bang, Yong-Chan
    • Proceedings of the Korea Information Processing Society Conference / 2005.11a / pp.413-416 / 2005
  • In a previous paper [1], we designed an intelligent MOB (Mobile Character) agent that adds an 'avoid' state to the passive MOBs implemented in existing online games, expresses the behavior pattern of each of the three state transitions with a behavior characteristic curve, applies 'attack' and 'approach' stimuli to a stress model to explain how the MOB agent's behavior pattern changes with stress, and enables cooperation with surrounding agents. In this paper, to improve the model of [1], we make the behavior patterns more concrete and add equations, and we also add a stress counter to design a more realistic model. A sketch of this stress-driven state model is given after the next entry below.

Design of Intellectual MOB Agent for Multi-player Online Game (다사용자 온라인 게임을 위한 지능형 MOB 에이전트 설계)

  • Kim, Jin-Soo;Bang, Yong-Chan
    • Proceedings of the Korea Information Processing Society Conference / 2005.05a / pp.325-328 / 2005
  • The MOBs (Mobile Characters) implemented in existing multi-player online games are passive agents with only two states, 'idle' and 'attack', designed to react only to the user's 'attack' event. In this paper, we add an 'avoid' state to the existing 'idle' and 'attack' states, express the behavior pattern of each of the three state transitions with a behavior characteristic curve, apply 'attack' and 'approach' stimuli to a stress model to explain how the MOB agent's behavior pattern changes with stress, and design an intelligent NPC agent capable of cooperating with surrounding agents.
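
A minimal sketch of the stress-driven three-state MOB described in this paper and its follow-up above (idle / attack / avoid), including the stress counter added in the follow-up; the thresholds and decay rate are illustrative guesses, not the authors' equations or behavior characteristic curves:

```python
class MobAgent:
    """Toy MOB with 'idle', 'attack', 'avoid' states driven by a stress model."""
    ATTACK_STIMULUS = 3.0    # stress added when the MOB is attacked
    APPROACH_STIMULUS = 1.0  # stress added when a player approaches
    DECAY = 0.5              # stress released each tick
    AVOID_THRESHOLD = 8.0    # above this, the MOB flees instead of fighting

    def __init__(self):
        self.state = "idle"
        self.stress = 0.0
        self.stress_counter = 0   # counts stress events (the follow-up paper's addition)

    def perceive(self, attacked=False, approached=False):
        if attacked:
            self.stress += self.ATTACK_STIMULUS
            self.stress_counter += 1
        if approached:
            self.stress += self.APPROACH_STIMULUS
        self.stress = max(0.0, self.stress - self.DECAY)
        # State transitions: the behavior pattern changes with accumulated stress.
        if self.stress >= self.AVOID_THRESHOLD:
            self.state = "avoid"
        elif attacked or approached:
            self.state = "attack"
        else:
            self.state = "idle"

mob = MobAgent()
for hit in [False, True, True, True, True, False]:
    mob.perceive(attacked=hit)
    print(mob.state, round(mob.stress, 1))   # shifts from idle to attack to avoid
```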

Implementation of Target Object Tracking Method using Unity ML-Agent Toolkit (Unity ML-Agents Toolkit을 활용한 대상 객체 추적 머신러닝 구현)

  • Han, Seok Ho;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.110-113 / 2022
  • Non-playable characters (NPCs) play an important role in improving the player's concentration and interest in a game, and recently the implementation of NPCs with reinforcement learning has been in the spotlight. In this paper, we present an AI target-tracking method based on reinforcement learning and implement an AI agent that tracks a specific target object while avoiding traps, using the Unity ML-Agents Toolkit. The implementation is built in the Unity game engine, and simulations are conducted through a number of experiments. The experimental results show that the agent tracks the target while avoiding traps with good performance.
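
For context, the Unity ML-Agents Toolkit exposes a low-level Python API that can drive an exported Unity build such as the one described above. The sketch below runs random actions against a hypothetical build path as a stand-in for the trained tracking policy (API names as in recent ML-Agents releases; the build path and episode loop are assumptions):

```python
import numpy as np
from mlagents_envs.environment import UnityEnvironment

# Hypothetical path to an exported Unity build containing the tracking scene.
env = UnityEnvironment(file_name="builds/TargetTracking")
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for episode in range(3):
    env.reset()
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    total_reward = 0.0
    while len(terminal_steps) == 0:
        # Random actions as a placeholder for the trained tracking policy.
        action = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, action)
        env.step()
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        total_reward += float(np.sum(decision_steps.reward))
    print(f"episode {episode}: reward {total_reward:.2f}")

env.close()
```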

MODELING POLITICAL AND ECONOMIC RELATIONS BETWEEN NORWAY AND RUSSIA: A BEHAVIORAL GAME THEORY APPROACH

  • Babaei, Samereh;Gordji, Madjid Eshaghi
    • The Pure and Applied Mathematics / v.29 no.2 / pp.141-160 / 2022
  • From the past until now, political and economic relations among countries have been one of the most important issues for analysts, and numerous studies have tried to analyze these relations from different theoretical perspectives. The dynamic system of games has introduced a new modeling method into game theory. In this study, we use behavioral models (level-k) along with the dynamic system of games to model rational agent behavior. As an application, we study Russia-Norway economic and political relations (1970-2019). The dynamic system of games, together with behavioral game theory, can be used to predict the players' future behavior.
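
A minimal level-k sketch on an arbitrary symmetric 2x2 game (the payoffs are placeholders, not the Norway-Russia model): a level-0 player randomizes uniformly, and each level-k player best-responds to a level-(k-1) opponent.

```python
import numpy as np

# Payoff matrix for a toy symmetric 2x2 game, indexed [my_action, opponent_action];
# action 0 = "cooperate", action 1 = "compete". Values are placeholders.
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

def level_k_strategy(k, payoff=PAYOFF):
    """Mixed strategy of a level-k player: level-0 randomizes, level-k best-responds
    to a level-(k-1) opponent (symmetric game, so both sides share the payoff matrix)."""
    if k == 0:
        return np.array([0.5, 0.5])
    opponent = level_k_strategy(k - 1, payoff)
    expected = payoff @ opponent          # expected payoff of each of my actions
    strategy = np.zeros_like(expected)
    strategy[np.argmax(expected)] = 1.0   # pure best response
    return strategy

for k in range(4):
    print(f"level-{k} strategy:", level_k_strategy(k))
```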

A Study on Tools for Agent System Development (3차원 미니 회피 게임개발)

  • Lee, Yong-Un;Kim, Soo Kyun;An, Syung-Og
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.459-460 / 2011
  • This paper proposes a method of building a first-person mini evasion game with 3D graphics that moves away from the conventional approach. Instead of the existing third-person view with movement restricted to four directions (up, down, left, right), the viewpoint is switched to first person with an FPS-style view and movement scheme. Furthermore, instead of the AABB (Axis Aligned Bounding Box) collision method with fixed axes used in existing 2D games, the game uses the OBB (Oriented Bounding Box) method, whose axes change continuously, so that collision detection in 3D graphics can be as precise as in 2D graphics.
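
A hedged numpy sketch of the OBB test mentioned above, using the standard separating-axis approach (up to 15 candidate axes for two 3D boxes); this is a generic formulation, not the paper's implementation:

```python
import math
import numpy as np

def obb_overlap(c1, axes1, half1, c2, axes2, half2, eps=1e-8):
    """Separating Axis Theorem test for two oriented bounding boxes in 3D.
    c: centre (3,), axes: 3x3 matrix of unit axes (one per row), half: half-extents (3,)."""
    t = c2 - c1
    # Candidate separating axes: the 3 axes of each box plus the 9 pairwise cross products.
    candidates = list(axes1) + list(axes2)
    for a in axes1:
        for b in axes2:
            candidates.append(np.cross(a, b))
    for axis in candidates:
        n = np.linalg.norm(axis)
        if n < eps:                # near-parallel edge pair gives a degenerate axis; skip it
            continue
        axis = axis / n
        r1 = np.sum(half1 * np.abs(axes1 @ axis))   # projection radius of box 1
        r2 = np.sum(half2 * np.abs(axes2 @ axis))   # projection radius of box 2
        if abs(np.dot(t, axis)) > r1 + r2:
            return False           # a separating axis exists, so the boxes do not collide
    return True                    # no separating axis found, so the boxes overlap

# Example: a unit box at the origin and another rotated 45 degrees about Z, offset along X.
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
axes_a = np.eye(3)
axes_b = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(obb_overlap(np.zeros(3), axes_a, np.full(3, 0.5),
                  np.array([1.2, 0.0, 0.0]), axes_b, np.full(3, 0.5)))   # True: they just overlap
```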

A Naive Bayesian-based Model of the Opponent's Policy for Efficient Multiagent Reinforcement Learning (효율적인 멀티 에이전트 강화 학습을 위한 나이브 베이지만 기반 상대 정책 모델)

  • Kwon, Ki-Duk
    • Journal of Internet Computing and Services / v.9 no.6 / pp.165-177 / 2008
  • An important issue in multiagent reinforcement learning is how an agent should learn its optimal policy in a dynamic environment where other agents can influence its performance. Most previous work on multiagent reinforcement learning tends to apply single-agent reinforcement learning techniques without any extension, or requires unrealistic assumptions even when explicit models of other agents are used. In this paper, a Naive Bayesian policy model of the opponent agent is introduced, and a multiagent reinforcement learning method using this model is explained. Unlike previous work, the proposed method uses the Naive Bayesian policy model rather than a Q-function model of the opponent agent. Moreover, it can improve learning efficiency by using a simpler model than richer but more time-consuming policy models such as finite state machines (FSMs) and Markov chains. The Cat and Mouse game is introduced as an adversarial multiagent environment, and the effectiveness of the proposed Naive Bayesian policy model is analyzed through experiments using this game as a test-bed.
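
A minimal sketch of a count-based Naive Bayesian opponent-policy model of the kind described above, with Laplace smoothing and an expected-value helper for a joint-action learner; the feature encoding and the way the prediction feeds a Q-update are assumptions, not the paper's exact formulation:

```python
from collections import defaultdict

class NaiveBayesOpponentModel:
    """Counts opponent actions conditioned on discrete state features and predicts
    P(action | features) under the naive independence assumption, with Laplace smoothing."""

    def __init__(self, actions, alpha=1.0):
        self.actions = list(actions)
        self.alpha = alpha
        self.action_counts = defaultdict(float)    # action -> count
        self.feature_counts = defaultdict(float)   # (feature index, value, action) -> count
        self.feature_values = defaultdict(set)     # feature index -> observed values

    def update(self, features, opponent_action):
        """Record one observed (state features, opponent action) pair."""
        self.action_counts[opponent_action] += 1.0
        for i, v in enumerate(features):
            self.feature_counts[(i, v, opponent_action)] += 1.0
            self.feature_values[i].add(v)

    def predict(self, features):
        """Return a smoothed distribution over the opponent's next action."""
        total = sum(self.action_counts.values()) + self.alpha * len(self.actions)
        scores = {}
        for a in self.actions:
            p = (self.action_counts[a] + self.alpha) / total   # smoothed prior P(a)
            for i, v in enumerate(features):
                n_values = max(1, len(self.feature_values[i]))
                p *= ((self.feature_counts[(i, v, a)] + self.alpha) /
                      (self.action_counts[a] + self.alpha * n_values))
            scores[a] = p
        z = sum(scores.values())
        return {a: s / z for a, s in scores.items()}

def expected_q(q_table, state, my_action, opponent_dist):
    """Expected joint-action value under the predicted opponent policy."""
    return sum(p * q_table.get((state, my_action, a_opp), 0.0)
               for a_opp, p in opponent_dist.items())
```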
