• Title/Summary/Keyword: Game Agent

Stealthy Behavior Simulations Based on Cognitive Data (인지 데이터 기반의 스텔스 행동 시뮬레이션)

  • Choi, Taeyeong;Na, Hyeon-Suk
    • Journal of Korea Game Society / v.16 no.2 / pp.27-40 / 2016
  • Predicting stealthy behaviors plays an important role in designing stealth games. It is, however, difficult to automate this task because human players interact with dynamic environments in real time. In this paper, we present a reinforcement learning (RL) method for simulating stealthy movements in dynamic environments, in which an integrated model of Q-learning with Artificial Neural Networks (ANN) is exploited as an action classifier. Experimental results show that our simulation agent responds sensitively to dynamic situations and is thus useful for game level designers in determining various game parameters.
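A minimal sketch of the kind of Q-learning-with-ANN action scorer the abstract describes, assuming a hypothetical discrete action set and a hand-made feature vector (e.g. guard distances, visibility flags); the network, features, and reward are illustrative, not the authors' implementation.

```python
# Illustrative only: Q-learning with a small neural network scoring actions,
# in the spirit of the integrated Q-learning + ANN agent described above.
import numpy as np

ACTIONS = ["move_north", "move_south", "move_east", "move_west", "hide"]

class QNetwork:
    """Tiny MLP approximating Q(state_features, .) for all actions."""
    def __init__(self, n_features, n_actions, hidden=16, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_features, hidden))
        self.W2 = rng.normal(0, 0.1, (hidden, n_actions))
        self.lr = lr

    def forward(self, x):
        h = np.tanh(x @ self.W1)
        return h, h @ self.W2               # hidden activations, Q-values

    def update(self, x, action, target):
        """One gradient step pushing Q(x, action) toward the TD target."""
        h, q = self.forward(x)
        err = q[action] - target
        grad_W2 = np.outer(h, np.eye(len(q))[action]) * err
        grad_h = self.W2[:, action] * err
        grad_W1 = np.outer(x, (1 - h ** 2) * grad_h)
        self.W1 -= self.lr * grad_W1
        self.W2 -= self.lr * grad_W2

def choose_action(net, features, epsilon=0.1):
    """Epsilon-greedy selection over the network's Q-values."""
    if np.random.random() < epsilon:
        return np.random.randint(len(ACTIONS))
    _, q = net.forward(features)
    return int(np.argmax(q))

def td_update(net, s, a, reward, s_next, gamma=0.95):
    """Standard Q-learning backup: r + gamma * max_a' Q(s', a')."""
    _, q_next = net.forward(s_next)
    net.update(s, a, reward + gamma * np.max(q_next))

# Example: score a hypothetical 6-dim feature vector and take one TD step.
net = QNetwork(n_features=6, n_actions=len(ACTIONS))
s = np.array([0.3, 0.8, 1.0, 0.0, 0.2, 0.5])   # e.g. guard distance, visibility
a = choose_action(net, s)
td_update(net, s, a, reward=-1.0, s_next=s)     # penalized for being spotted
```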

Analysis on the Bargaining Game Using Artificial Agents (인공에이전트를 이용한 교섭게임에 관한 연구)

  • Chang, Seok-cheol;Soak, Sang-moon;Yun, Joung-il;Yoon, Jung-won;Ahn, Byung-ha
    • Journal of Korean Institute of Industrial Engineers / v.32 no.3 / pp.172-179 / 2006
  • Over the past few years, a considerable number of studies have modeled the bargaining game using artificial agents, but they have focused on within-model interaction; very few attempts have addressed between-model interaction. This paper investigates the interaction and co-evolutionary process among heterogeneous artificial agents in the bargaining game. We present two kinds of artificial agents, whose strategies are based on a genetic algorithm (GA) and on reinforcement learning (RL) respectively, and let them play a series of bargaining games. We compare the performance of the two agent types under various conditions, varying the parameters of the artificial agents and the maximal number of rounds in the bargaining game. Finally, we discuss which agents perform better and why those results arise.
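A toy sketch of such a between-model setup: a GA-style population whose genomes are fixed demands co-evolving against a simple value-estimating RL agent in a one-shot divide-the-pie game. The payoff rule, learning rules, and all numbers are illustrative assumptions, not the paper's model.

```python
# Illustrative between-model bargaining sketch: GA population vs. a simple
# reward-averaging RL agent. Game, payoffs, and update rules are hypothetical.
import random

DEMANDS = [i / 10 for i in range(1, 10)]    # share of the pie an agent asks for

def play(demand_a, demand_b):
    """One-shot bargaining: the deal succeeds only if demands fit the pie."""
    if demand_a + demand_b <= 1.0:
        return demand_a, demand_b
    return 0.0, 0.0                          # disagreement: both get nothing

def rl_choose(values, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(DEMANDS)
    return max(DEMANDS, key=lambda d: values[d])

def run(generations=200, pop_size=20):
    ga_pop = [random.choice(DEMANDS) for _ in range(pop_size)]   # genomes
    rl_values = {d: 0.0 for d in DEMANDS}                        # average payoff
    rl_counts = {d: 0 for d in DEMANDS}
    for _ in range(generations):
        fitness = []
        for genome in ga_pop:
            rl_demand = rl_choose(rl_values)
            ga_pay, rl_pay = play(genome, rl_demand)
            fitness.append(ga_pay)
            rl_counts[rl_demand] += 1                            # incremental mean
            rl_values[rl_demand] += (rl_pay - rl_values[rl_demand]) / rl_counts[rl_demand]
        # Truncation selection plus mutation for the GA side.
        ranked = [g for _, g in sorted(zip(fitness, ga_pop), reverse=True)]
        parents = ranked[: pop_size // 2]
        ga_pop = [random.choice(DEMANDS) if random.random() < 0.05 else random.choice(parents)
                  for _ in range(pop_size)]
    return ga_pop, rl_values

if __name__ == "__main__":
    pop, values = run()
    print("GA demands:", sorted(pop))
    print("RL best demand:", max(values, key=values.get))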

Server Performance Improvement with Predicted Range of Agent Movement (이동 범위 예측을 통한 온라인 서버 성능 향상 기법)

  • Kim, Yong-O;Shin, Seung-Ho;Kang, Shin-Jin
    • Journal of Korea Game Society / v.11 no.1 / pp.101-109 / 2011
  • As the online game market expands, server performance has become an important issue. This paper proposes a method that improves performance by reducing the number of synchronization packets exchanged for each entity's information in the game. Our method adaptively reconstructs the spatial subdivision to reduce the load caused by movement across region boundaries, using predictions of each entity's movement range and of disabled regions the entity cannot move into. Experiments show that the proposed method outperforms the existing method in terms of the number of packets processed.
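A minimal sketch of the packet-filtering idea on a hypothetical grid of server regions: the cells an entity can reach in one tick are predicted from its speed, impassable cells are excluded, and a boundary-crossing synchronization is issued only when the predicted range actually overlaps a neighboring region. Cell sizes, tick length, and the interfaces are illustrative assumptions.

```python
# Illustrative sketch: predict an entity's reachable cells (minus blocked ones)
# and only synchronize with neighbor regions the prediction can actually reach.
from dataclasses import dataclass

CELL = 10.0          # width of one grid cell (world units), hypothetical
TICK = 0.1           # seconds per server tick, hypothetical

@dataclass
class Entity:
    x: float
    y: float
    speed: float     # world units per second

def reachable_cells(e, blocked):
    """Cells the entity could occupy next tick, excluding impassable ones."""
    radius_cells = int(e.speed * TICK // CELL) + 1
    cx, cy = int(e.x // CELL), int(e.y // CELL)
    cells = {
        (cx + dx, cy + dy)
        for dx in range(-radius_cells, radius_cells + 1)
        for dy in range(-radius_cells, radius_cells + 1)
    }
    return cells - blocked

def regions_to_sync(e, region_of, home_region, blocked):
    """Neighbor regions that need this entity's state, given the prediction."""
    needed = {region_of(c) for c in reachable_cells(e, blocked)}
    needed.discard(home_region)      # the owning server already has the state
    return needed

# Example: a 2-region split at cell x == 5, with a short wall of blocked cells.
region_of = lambda cell: 0 if cell[0] < 5 else 1
wall = {(5, y) for y in range(0, 3)}
runner = Entity(x=48.0, y=45.0, speed=30.0)
print(regions_to_sync(runner, region_of, home_region=0, blocked=wall))
```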

Implementation and Performance Evaluation of Migration Agent for Seamless Virtual Environment System in Grid Computing Network (그리드 컴퓨팅 네트워크에서 Seamless 가상 환경 시스템 구축을 위한 마이그레이션 에이전트 구현 및 성능 평가)

  • Won, Dong Hyun;An, Dong-Un
    • KIPS Transactions on Computer and Communication Systems / v.7 no.11 / pp.269-274 / 2018
  • An MMORPG is a role-playing game that tens of thousands of people access online at the same time. Users connect to a server through the game client and play with their own characters. If a user moves beyond the area managed by the current server into an area managed by another server, the user's information must be transmitted to the destination server, and in an actual game the established and transferred information must be kept synchronized. In this paper, we propose a migration agent server for such virtual systems. We implement a seamless virtual server using the grid method to experiment with a seamless server architecture, and propose a method to minimize delay and equalize load when a user moves to another server region in the virtual environment. The migration agent acts as a cache server to reduce response time; in the case of 70,000 concurrent users, response time was reduced by 50%.
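A minimal sketch of a migration-agent cache sitting between game servers, in the spirit of the architecture described above: when a player crosses into another server's region, the agent serves a cached copy of the character state immediately and completes the authoritative transfer afterwards. The class names, state fields, and handoff steps are hypothetical.

```python
# Illustrative sketch of a migration agent acting as a cache during server
# handoff. Servers, state fields, and the protocol below are hypothetical.
import time

class MigrationAgent:
    def __init__(self):
        self.cache = {}                      # player_id -> (state, timestamp)

    def on_player_update(self, player_id, state):
        """Keep a warm copy of every player's latest state."""
        self.cache[player_id] = (state, time.time())

    def migrate(self, player_id, source, target):
        """Handoff: answer from cache first, then complete the transfer."""
        state, _ = self.cache.get(player_id, (None, None))
        if state is None:                    # cache miss: fall back to source
            state = source.load_state(player_id)
        target.accept_preliminary(player_id, state)   # player can act now
        full_state = source.load_state(player_id)     # authoritative copy
        target.commit(player_id, full_state)
        source.release(player_id)

class GameServer:
    def __init__(self, name):
        self.name, self.players = name, {}
    def load_state(self, pid): return self.players.get(pid, {})
    def accept_preliminary(self, pid, state): self.players[pid] = dict(state)
    def commit(self, pid, state): self.players[pid] = dict(state)
    def release(self, pid): self.players.pop(pid, None)

# Usage: a player crossing from server A's region into server B's.
a, b, agent = GameServer("A"), GameServer("B"), MigrationAgent()
a.players["p1"] = {"hp": 90, "x": 999.0}
agent.on_player_update("p1", a.players["p1"])
agent.migrate("p1", source=a, target=b)
print(b.players["p1"], "p1" in a.players)
```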

Learning soccer robot using genetic programming

  • Wang, Xiaoshu;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회: 학술대회논문집) / 1999.10a / pp.292-297 / 1999
  • Evolving an artificial agent is an extremely difficult but challenging task. At present, studies have mainly centered on the single-agent learning problem. In our case, we use simulated soccer to investigate multi-agent cooperative learning. Considering fundamental differences in the learning mechanism, existing reinforcement learning algorithms can be roughly classified into two types: those based on evaluation functions and those that search the policy space directly. Genetic Programming, developed from Genetic Algorithms, is one of the best-known approaches of the latter type. In this paper, we first give a detailed description of the algorithm and the data constructions necessary for learning single-agent strategies. We then extend the developed methods to the multi-robot soccer game domain. We investigate and contrast two different methods, simple team learning and sub-group learning, and conclude the paper with some experimental results.
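A compact sketch of the policy-space-search idea mentioned above: a mutation-only genetic-programming loop that evolves small expression trees mapping hand-picked state features (e.g. ball direction) to an action score. The primitives, features, and fitness function are illustrative simplifications, not the paper's system.

```python
# Illustrative mutation-only GP sketch: evolve expression trees that score how
# strongly a simulated player should move toward the ball.
import random, operator

FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
TERMS = ["ball_dx", "ball_dy", 1.0, -1.0, 0.5]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    fn, arity = random.choice(FUNCS)
    return (fn, [random_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, features):
    if isinstance(tree, str):
        return features[tree]
    if isinstance(tree, float):
        return tree
    fn, children = tree
    return fn(*(evaluate(c, features) for c in children))

def mutate(tree, depth=2):
    # With small probability replace the individual by a fresh random tree.
    return random_tree(depth) if random.random() < 0.2 else tree

def fitness(tree, trials=50):
    """Reward trees whose output sign agrees with 'move toward the ball'."""
    score = 0
    for _ in range(trials):
        feats = {"ball_dx": random.uniform(-1, 1), "ball_dy": random.uniform(-1, 1)}
        score += 1 if (evaluate(tree, feats) > 0) == (feats["ball_dx"] > 0) else 0
    return score / trials

def evolve(pop_size=30, generations=20):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 3]
        pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```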

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • Proceedings of the Korea Intelligent Information Systems Society Conference / 2001.01a / pp.59-64 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the area of machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. It therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment, yet they can learn the optimal policy provided the agent can visit every state-action pair infinitely often. The biggest problem with monolithic reinforcement learning, however, is that its straightforward application does not scale up to more complex environments because of the intractably large state space. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement on the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to the reward. Therefore, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.
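A minimal sketch of the adaptive mediation idea: several Q-learning modules, each seeing only its own sub-state, contribute Q-values to the joint action choice, and each module's weight is adjusted by its apparent contribution to recent reward. The weighting rule, state factoring, and interfaces are illustrative assumptions, not the exact AMMQL formulation.

```python
# Illustrative sketch of adaptive mediation over modular Q-learning: each
# module keeps its own Q-table over a sub-state; action values are combined
# with per-module weights that track recent reward contribution.
from collections import defaultdict

class QModule:
    def __init__(self, n_actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.alpha, self.gamma = alpha, gamma

    def values(self, sub_state):
        return self.q[sub_state]

    def learn(self, s, a, reward, s_next):
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (reward + self.gamma * best_next - self.q[s][a])

class AdaptiveMediator:
    def __init__(self, modules, n_actions, weight_lr=0.05):
        self.modules = modules
        self.weights = [1.0 / len(modules)] * len(modules)
        self.n_actions = n_actions
        self.weight_lr = weight_lr

    def select_action(self, sub_states):
        """Weighted sum of each module's Q-values; pick the argmax action."""
        combined = [0.0] * self.n_actions
        for w, m, s in zip(self.weights, self.modules, sub_states):
            for a, v in enumerate(m.values(s)):
                combined[a] += w * v
        return max(range(self.n_actions), key=lambda a: combined[a])

    def learn(self, sub_states, action, reward, next_sub_states):
        for i, (m, s, s_next) in enumerate(zip(self.modules, sub_states, next_sub_states)):
            m.learn(s, action, reward, s_next)
            # Credit assignment (hypothetical rule): modules that rated the
            # chosen action above their average gain weight when reward is
            # positive and lose weight when it is negative.
            preference = m.values(s)[action] - sum(m.values(s)) / self.n_actions
            self.weights[i] += self.weight_lr * reward * preference
        total = sum(max(w, 1e-6) for w in self.weights)
        self.weights = [max(w, 1e-6) / total for w in self.weights]
```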

Architecture and Path-Finding Behavior of An Intelligent Agent Deploying within 3D Virtual Environment (3차원 가상환경에서 동작하는 지능형 에이전트의 구조와 경로 찾기 행위)

  • Kim, In-Cheol;Lee, Jae-Ho
    • The KIPS Transactions: Part B / v.10B no.1 / pp.1-12 / 2003
  • In this paper, we introduce the Unreal Tournament (UT) game and the Gamebots system. The former is a well-known 3D first-person action game, and the latter is an intelligent agent research testbed based on UT. We then explain the design and implementation of KGBot, an intelligent non-player character deploying effectively within the 3D virtual environment provided by UT and the Gamebots system. KGBot is a bot client within the Gamebots system; its task is to find and dominate several domination points pre-located on the complex surface map of the 3D virtual environment. KGBot adopts UM-PRS, a general BDI agent architecture, as its control engine, and contains a hierarchical knowledge base representing its complex behaviors in multiple layers. In this paper, we explain the details of KGBot's intelligent behaviors, such as locating the hidden domination points by effectively exploring the unknown world, constructing a path graph by collecting the waypoints and paths distributed over the world, and finding an optimal path to a given destination based on this path graph. Finally, we analyze the performance of KGBot's exploring strategy and control engine through experiments on different 3D maps.
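A small sketch of the path-graph step described above: waypoints collected during exploration are stored as a weighted graph, and a standard A* search with straight-line distance as the heuristic returns a route to a target point. The waypoint ids, coordinates, and edges are made up for illustration; the paper's KGBot additionally layers this under a BDI control engine.

```python
# Illustrative sketch: A* over a waypoint graph of the kind KGBot builds while
# exploring the map.
import heapq, math

def a_star(graph, coords, start, goal):
    """graph: {node: {neighbor: cost}}, coords: {node: (x, y, z)}."""
    def h(n):  # straight-line distance heuristic
        return math.dist(coords[n], coords[goal])
    frontier = [(h(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nbr, step in graph.get(node, {}).items():
            new_cost = cost + step
            if new_cost < best_cost.get(nbr, float("inf")):
                best_cost[nbr] = new_cost
                heapq.heappush(frontier, (new_cost + h(nbr), new_cost, nbr, path + [nbr]))
    return None, float("inf")

# Hypothetical waypoints on a small map with one domination point ("dom1").
coords = {"spawn": (0, 0, 0), "hall": (10, 0, 0), "ramp": (10, 8, 2), "dom1": (18, 8, 2)}
graph = {
    "spawn": {"hall": 10.0},
    "hall": {"spawn": 10.0, "ramp": 8.2},
    "ramp": {"hall": 8.2, "dom1": 8.0},
}
print(a_star(graph, coords, "spawn", "dom1"))
```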

A Study on the Agency Theory and Accounting (에이전시이론과 회계감사에 관한 연구)

  • 공해영
    • Journal of Korean Society of Industrial and Systems Engineering / v.12 no.20 / pp.123-138 / 1989
  • The primary objective of agency research in game theory lies in maintaining the Pareto-optimal condition for the optimal incentive contract. The basic concepts related to this objective are reviewed in connection with the general assumptions used to model it, the moral hazard and adverse selection that arise from information asymmetry, and finally the problem of risk distribution. The demand for auditing and the role of the auditor have been addressed by ASOBAC. This paper addresses issues in which an auditor is explicitly introduced into a principal-agent framework. These issues must be confronted in order to contract appropriately with the auditor and to achieve an adequate understanding of the optimal contracting arrangement with the auditor. The first step in introducing an auditor into this analysis is to examine the game-theoretic foundation of such an expanded agency model. The mathematical program formulated may not yield solutions that are reasonable; this arises because the program may call for the auditor and manager to play dominated Nash equilibria in some subgame. The nontrivial nature of the subgame implies that randomized strategies by the auditor and manager may be of crucial importance. Two possibilities for overcoming the randomized-strategy problem are suggested: change the rules of the game, or impose a convexity condition. The former seems unjustifiable in an auditing context, and the latter is promising but difficult to achieve. The discussion ends with an extension of the revelation principle to the owner-manager-auditor game, assuming strategies. An examination of the restrictions on, and directions for improving, the basic concepts of agency theory is addressed in the later part of this paper. Many important aspects of auditor incentives are inherently multiple-agent, multiple-period, multiple-objective phenomena and require further analysis and research.
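For readers unfamiliar with the underlying setup, a textbook statement of the principal-agent program that such an owner-manager-auditor analysis extends is sketched below; the notation is generic and not taken from this paper.

```latex
% Generic principal-agent program (textbook form, not the paper's notation):
% the owner chooses a pay schedule s(x) and desired action a to maximize the
% expected net outcome, subject to the manager's participation (IR) and
% incentive (IC) constraints.
\begin{align*}
\max_{s(\cdot),\,a} \quad & \mathbb{E}\big[\, x - s(x) \,\big|\, a \,\big] \\
\text{s.t.} \quad & \mathbb{E}\big[\, u(s(x)) - c(a) \,\big|\, a \,\big] \ge \underline{U}
  && \text{(individual rationality)} \\
& a \in \arg\max_{a'} \; \mathbb{E}\big[\, u(s(x)) - c(a') \,\big|\, a' \,\big]
  && \text{(incentive compatibility)}
\end{align*}
% Introducing an auditor adds an audit fee and an audit report to the contract
% and turns the program into a game among owner, manager, and auditor, which is
% where the dominated-equilibrium and randomization issues noted above arise.
```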

Affective interaction to emotion expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society / v.22 no.5 / pp.37-47 / 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, when subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest some of their money or credit in one of several projects, and their partners (virtual agents) also invest in one of the projects. Subjects interact with different kinds of virtual agents that behave in reciprocating or non-reciprocating ways while expressing socially affective facial expressions, and the total money or credit a subject earns is contingent on the partner's choice. From this study, I observed that subjects' appraisals of interaction with cooperative/uncooperative (or friendly/unfriendly) virtual agents in the investment game result in increased autonomic and somatic responses, and that these responses can be observed in real time through physiological signals and facial expressions. To assess user feedback, a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor were used while the subject's frontal facial image was captured with a web camera. After all trials, subjects were asked to answer questions evaluating how much the interactions with the virtual agents affected their appraisals.
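A toy sketch of the kind of investment round described above, in which the subject's earnings depend on whether the virtual agent reciprocates; the endowment, multiplier, and reciprocation rule are illustrative and not the study's actual parameters.

```python
# Illustrative sketch of one round of an investment (trust-style) game with a
# virtual agent partner. All parameters and agent policies are hypothetical.
import random

ENDOWMENT = 10.0      # credit the subject starts each round with
MULTIPLIER = 3.0      # invested credit grows before the partner splits it

def play_round(invested, agent_style):
    """Return (subject_payoff, agent_payoff) for one round."""
    kept = ENDOWMENT - invested
    pot = invested * MULTIPLIER
    if agent_style == "reciprocating":
        returned = 0.5 * pot                        # fair split back to subject
    elif agent_style == "unreciprocating":
        returned = random.uniform(0.0, 0.2) * pot   # keeps almost everything
    else:
        raise ValueError(f"unknown agent style: {agent_style}")
    return kept + returned, pot - returned

if __name__ == "__main__":
    for style in ("reciprocating", "unreciprocating"):
        subject, agent = play_round(invested=6.0, agent_style=style)
        print(f"{style:16s} subject={subject:5.2f} agent={agent:5.2f}")
```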

Motivation-based Hierarchical Behavior Planning

  • Song, Wei;Cho, Kyung-Eun;Um, Ky-Hyun
    • Journal of Korea Game Society / v.8 no.1 / pp.79-90 / 2008
  • This paper describes a motivation-based hierarchical behavior planning framework that allows autonomous agents to select adaptive actions in simulation game environments. The combined behavior planning system is formed by four levels of specification: motivation extraction, goal list generation, action list determination, and optimization. Our model enriches virtual-human behavior planning by adding motivations with sudden and cumulative attributes. Selecting motivations by probability distribution allows an NPC to make different decisions in the same situation, which helps embody realistic virtual humans. The hierarchical goal tree enhances effective reactivity, and optimizing over potential actions provides the NPC with safe and satisfying actions for adapting to the virtual environment. A restaurant simulation game is used to elucidate the mechanism of the framework.
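A minimal sketch of the four-level pipeline named above (motivation extraction, goal list generation, action list determination, optimization), using a hypothetical restaurant-NPC domain; the motivations, goals, and scoring below are invented for illustration.

```python
# Illustrative four-level pipeline: motivations -> goals -> candidate actions
# -> optimized choice. The domain model (a restaurant NPC) is hypothetical.
import random

# 1. Motivation extraction: cumulative drives grow over time, sudden ones spike.
def extract_motivations(state):
    m = {"hunger": state["hunger"], "duty": state["pending_orders"] * 0.4}
    if state["fire_alarm"]:
        m["safety"] = 10.0                      # sudden motivation dominates
    return m

# 2. Goal list generation: sample a motivation by its weight (probabilistic
#    selection lets the NPC behave differently in identical situations).
GOALS = {"hunger": ["eat_meal"], "duty": ["serve_table", "clean_table"],
         "safety": ["evacuate"]}
def generate_goals(motivations):
    total = sum(motivations.values())
    r, acc = random.uniform(0, total), 0.0
    for name, weight in motivations.items():
        acc += weight
        if r <= acc:
            return GOALS[name]
    return GOALS[max(motivations, key=motivations.get)]

# 3. Action list determination: expand each goal into concrete actions.
ACTIONS = {"eat_meal": ["go_kitchen", "eat"],
           "serve_table": ["take_order", "deliver_food"],
           "clean_table": ["fetch_cloth", "wipe_table"],
           "evacuate": ["run_to_exit"]}
def determine_actions(goals):
    return [a for g in goals for a in ACTIONS[g]]

# 4. Optimization: score candidates for safety/usefulness and pick the best.
def optimize(actions, state):
    def score(a):
        return (2.0 if state["fire_alarm"] and a == "run_to_exit" else 0.0) + random.random()
    return max(actions, key=score)

state = {"hunger": 2.0, "pending_orders": 3, "fire_alarm": False}
goals = generate_goals(extract_motivations(state))
print(goals, "->", optimize(determine_actions(goals), state))
```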
