• Title/Summary/Keyword: Game Agent (게임 에이전트)

151 search results

An Optimization Strategy of Task Allocation using Coordination Agent (조정 에이전트를 이용한 작업 할당 최적화 기법)

  • Park, Jae-Hyun;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society / v.7 no.4 / pp.93-104 / 2007
  • In a complex real-time multi-agent system such as a game environment, dynamic task allocations are performed repeatedly to achieve a goal efficiently. In this research, we present a task allocation scheme suitable for real-time multi-agent environments. The scheme optimizes task allocation by complementing an existing coordination agent with the $A^*$ algorithm. The coordination agent creates a status graph whose nodes represent combinations of tasks and agents, and refines the graph by removing nodes for non-executable tasks and agents. For real-time re-allocation, the coordination agent selectively applies the $A^*$ method and the greedy method, then finds minimum-cost paths as optimized results using $A^*$. Our experiments show that the coordination agent with the $A^*$ algorithm improves task allocation efficiency by about 25% over a coordination agent using only the greedy algorithm.

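
The abstract above contrasts $A^*$ search with a greedy baseline for task allocation. A minimal sketch of that comparison, assuming a simple task-agent cost matrix (the authors' status-graph construction and refinement are not reproduced here):

```python
import heapq

def astar_assign(cost):
    """A*-style search over partial task-agent assignments (an illustrative
    sketch). A state is the tuple of agents already assigned to tasks 0..k-1."""
    n = len(cost)

    def heuristic(assigned):
        # Admissible: for each remaining task, take the cheapest still-free agent.
        free = [a for a in range(n) if a not in assigned]
        return sum(min(cost[t][a] for a in free) if free else 0
                   for t in range(len(assigned), n))

    pq = [(heuristic(()), 0, ())]          # (g + h, g, assignment)
    while pq:
        f, g, assigned = heapq.heappop(pq)
        if len(assigned) == n:
            return g, assigned             # minimum-cost complete allocation
        t = len(assigned)
        for a in range(n):
            if a in assigned:
                continue
            g2 = g + cost[t][a]
            state = assigned + (a,)
            heapq.heappush(pq, (g2 + heuristic(state), g2, state))

def greedy_assign(cost):
    """Greedy baseline: each task grabs the cheapest remaining agent."""
    n, used, total, result = len(cost), set(), 0, []
    for t in range(n):
        a = min((a for a in range(n) if a not in used), key=lambda a: cost[t][a])
        used.add(a); total += cost[t][a]; result.append(a)
    return total, tuple(result)
```

Because the heuristic is admissible, the $A^*$ variant returns the minimum-cost allocation, while the greedy baseline can lock a cheap agent into the wrong task early.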

A Dynamic state transition based on Augmented Reality using the 3-axis accelerometer sensor (3축 가속도 센서를 이용한 증강현실 기반의 동적 상태변환 알고리즘)

  • Jang, Yu-Na;Park, Sung-Jun
    • Proceedings of the Korean Society of Computer Information Conference / 2010.07a / pp.99-102 / 2010
  • As augmented reality became widely known with the introduction of smartphones, public attention has focused on it, and because of their portability, augmented reality research on mobile devices has become an established trend. Although many applied AR technologies have been studied, research combining AR with the artificial intelligence used in actual games has not been carried out. In this paper, we propose an artificial intelligence algorithm that dynamically changes the state of a 3D agent in an augmented reality environment using the 3-axis accelerometer sensor of a smartphone. Traditionally, the state of an AI-controlled agent was set by direct user input or recognized by means of markers. In this paper, we use markerless tracking to implement augmented reality and change the agent's state dynamically with the 3-axis accelerometer sensor.

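
A minimal sketch of the idea above, driving an agent's state from the accelerometer magnitude; the state names and thresholds below are hypothetical, as the abstract does not give concrete values:

```python
import math

# Hypothetical thresholds (in g) mapping acceleration magnitude to a state.
STATES = [(1.2, "idle"), (2.0, "walk"), (float("inf"), "run")]

def agent_state(ax, ay, az):
    """Map a 3-axis accelerometer reading to a 3D agent state by the
    magnitude of the acceleration vector."""
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    for limit, state in STATES:
        if mag < limit:
            return state
```

Shaking the device harder raises the magnitude and transitions the agent into a more active state, with no marker or manual input required.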

Design of PPO-based Reinforcement Learning Agents for Match-3 Game Stage Configuration (Match-3 Game 스테이지 구성을 위한 PPO 기반 강화학습 에이전트 설계)

  • Hong, Jamin;Chung, Jaehwa
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.648-651 / 2022
  • In Match-3 games, stage configuration and difficulty tuning are important, but because of the many balance factors involved, setting the difficulty, a key element of stage configuration, takes a great deal of time. It is especially important to set the difficulty at a level that players find enjoyable. To automate this, auto-play agents that play at a human-like level have been developed using real players' play data; however, since play data is hard to obtain, research is expanding toward reinforcement learning, which needs no play data. For setting difficulty, it suffices to determine the relative difficulty differences between stages. To this end, a reinforcement learning agent that had learned the rules of the game played stages of varying difficulty, generated by changing balance factors, 50 times each; from the average score obtained, the difficulty of each stage needed for stage configuration could be determined.
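
The evaluation protocol described above (50 plays per stage, ranked by average score) can be sketched as follows; `agent_play` stands in for the trained PPO agent, which is not reproduced here:

```python
import random

def stage_difficulty(agent_play, stages, episodes=50, seed=0):
    """Rank stages by the agent's average score over `episodes` plays;
    a lower average score is read as a higher relative difficulty.
    This sketches the evaluation protocol only, not the PPO training."""
    rng = random.Random(seed)
    averages = {}
    for name, stage in stages.items():
        scores = [agent_play(stage, rng) for _ in range(episodes)]
        averages[name] = sum(scores) / len(scores)
    # Hardest first: sort by ascending average score.
    return sorted(averages, key=averages.get), averages
```

A stage on which the same agent scores less, on average, is taken to be relatively harder, which is all that is needed to order stages for configuration.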

Generation of AI Agent in Imperfect Information Card Games Using MCTS Algorithm: Focused on Hearthstone (MCTS 기법을 활용한 불완전 정보 카드 게임에서의 인공지능 에이전트 생성 : 하스스톤을 중심으로)

  • Oh, Pyeong;Kim, Ji-Min;Kim, Sun-Jeong;Hong, Seokmin
    • Journal of Korea Game Society / v.16 no.6 / pp.79-90 / 2016
  • Recently, many researchers have paid attention to improved generation of AI agents in the game industry. Monte-Carlo Tree Search (MCTS) is an algorithm that searches for an optimal solution through random sampling under perfect information, and it is suitable for approximating solutions that cannot be expressed explicitly. Games in the Trading Card Game (TCG) genre, such as Hearthstone, involve imperfect information because the opponent's cards and plays are not predictable. In this study, MCTS is applied to imperfect information card games to generate AI agents. In addition, the practicality of the MCTS algorithm is verified by applying it to the currently played game Hearthstone.
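
One common way to apply Monte-Carlo search under imperfect information is determinization: sample a guess of the opponent's hidden cards before each simulation. A minimal sketch, assuming hypothetical `simulate` and `determinize` callbacks and a flat UCB1 bandit rather than the paper's full tree:

```python
import math, random

def mcts_best_move(moves, simulate, determinize, iters=1000, c=1.4, seed=0):
    """Determinized Monte-Carlo search sketch. `determinize(rng)` samples a
    possible hidden state (e.g. the opponent's unseen cards);
    `simulate(move, state, rng)` returns a payoff in [0, 1]."""
    rng = random.Random(seed)
    wins = {m: 0.0 for m in moves}
    visits = {m: 0 for m in moves}
    for t in range(1, iters + 1):
        state = determinize(rng)            # guess the unseen information
        def ucb(m):
            if visits[m] == 0:
                return float("inf")         # try every move at least once
            return wins[m] / visits[m] + c * math.sqrt(math.log(t) / visits[m])
        m = max(moves, key=ucb)
        wins[m] += simulate(m, state, rng)
        visits[m] += 1
    return max(moves, key=lambda m: visits[m])   # most-visited move
```

Averaging payoffs over many sampled determinizations lets the agent act on expected value despite never seeing the opponent's actual hand.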

Design of Improved Intellectual MOB Agent for Online Game (온라인 게임을 위한 향상된 지능형 MOB 에이전트 설계)

  • Kim, Jin-Soo;Bang, Yong-Chan
    • Proceedings of the Korea Information Processing Society Conference / 2005.11a / pp.413-416 / 2005
  • In [1], we designed an intelligent MOB agent that adds an 'evade' state to the passive MOBs (Mobile Characters) implemented in existing online games, expresses the behavior patterns of the three resulting state transitions as behavior characteristic curves, applies 'attack' and 'approach' stimuli to a stress model to explain how the MOB agent's behavior pattern changes with stress, and can cooperate with other nearby agents. In this paper, to improve the model of [1], we concretize the behavior patterns, add equations, and introduce a stress counter to design a more realistic model.

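
A minimal sketch of a stress-driven MOB state model as described above; the stimulus weights, decay rate, and evade threshold are hypothetical stand-ins for the paper's equations:

```python
class MobAgent:
    """Sketch: 'attack'/'approach' stimuli feed a stress counter that drives
    transitions between the approach, attack, and evade states."""
    def __init__(self, evade_threshold=10, decay=1):
        self.stress = 0
        self.evade_threshold = evade_threshold
        self.decay = decay
        self.state = "approach"

    def stimulus(self, kind, intensity):
        # Attacks stress the MOB more than mere approach (illustrative weights).
        weight = {"attack": 2, "approach": 1}[kind]
        self.stress += weight * intensity

    def update(self):
        # The stress counter decays each tick, then selects the state.
        self.stress = max(0, self.stress - self.decay)
        if self.stress >= self.evade_threshold:
            self.state = "evade"
        elif self.stress > 0:
            self.state = "attack"
        else:
            self.state = "approach"
        return self.state
```

With the counter, a heavily attacked MOB flees instead of fighting to the death, and calms back down to approach behavior as stress decays.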

Affective interaction to emotion expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society / v.22 no.5 / pp.37-47 / 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, while subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest money or credit in one of several projects, and their partners (virtual agents) also invest in one of the projects. Subjects interact with different kinds of virtual agents that behave reciprocally or unreciprocally while showing socially affective facial expressions. The total money or credit a subject earns is contingent on the partner's choice. I observed that subjects' appraisals of interacting with cooperative/uncooperative (or friendly/unfriendly) virtual agents in the investment game result in increased autonomic and somatic responses, and that these responses were captured in real time through physiological signals and facial expressions. To assess user feedback, a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor were used while the subject's frontal facial image was captured with a web camera. After all trials, subjects were asked to answer questions evaluating how much these interactions with virtual agents affected their appraisals.

Observation of Bargaining Game using Co-evolution between Particle Swarm Optimization and Differential Evolution (입자군집최적화와 차분진화알고리즘 간의 공진화를 활용한 교섭게임 관찰)

  • Lee, Sangwook
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.549-557 / 2014
  • Recently, analysis of bargaining games using evolutionary computation has become an essential issue in game theory. In this paper, we observe a bargaining game using co-evolution between two heterogeneous artificial agents. To model the two artificial agents, we use particle swarm optimization and differential evolution. We investigate algorithm parameters for the best performance and observe which strategy is better in the bargaining game under co-evolution between the two heterogeneous artificial agents. Experimental simulation results show that particle swarm optimization outperforms differential evolution in the bargaining game.
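
A toy sketch of the co-evolution setup, assuming a one-shot split-the-pie encoding: proposer offers evolved by a PSO-style velocity update against responder thresholds evolved by a DE-style mutation. All parameters and the game encoding are illustrative, not the paper's exact setup:

```python
import random

def payoff(offer, threshold):
    """One-shot bargaining: the deal closes only if the offer meets the
    responder's acceptance threshold; otherwise both get nothing."""
    return (1 - offer, offer) if offer >= threshold else (0.0, 0.0)

def coevolve(gens=100, n=10, seed=0):
    """Co-evolve a PSO proposer population against a DE responder population."""
    rng = random.Random(seed)
    offers = [rng.random() for _ in range(n)]      # PSO particles
    vel = [0.0] * n
    best = offers[:]                               # personal bests
    thresholds = [rng.random() for _ in range(n)]  # DE individuals
    for _ in range(gens):
        resp = rng.choice(thresholds)              # opponent sampled each gen
        scores = [payoff(o, resp)[0] for o in offers]
        g = offers[max(range(n), key=lambda i: scores[i])]   # global best
        for i in range(n):
            if scores[i] > payoff(best[i], resp)[0]:
                best[i] = offers[i]
            vel[i] = (0.7 * vel[i]
                      + 1.4 * rng.random() * (best[i] - offers[i])
                      + 1.4 * rng.random() * (g - offers[i]))
            offers[i] = min(1.0, max(0.0, offers[i] + vel[i]))
        prop = rng.choice(offers)
        for i in range(n):                         # DE/rand/1-style mutation
            a, b, c = rng.sample(range(n), 3)
            trial = min(1.0, max(0.0,
                        thresholds[a] + 0.5 * (thresholds[b] - thresholds[c])))
            if payoff(prop, trial)[1] >= payoff(prop, thresholds[i])[1]:
                thresholds[i] = trial
    return offers, thresholds
```

Each population's fitness depends on the other population's current strategies, which is the co-evolutionary dynamic the paper observes.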

Observation of Bargaining Game by Considering Bargaining Cost and Co-evolution (교섭비용과 공진화를 고려한 교섭게임 관찰)

  • Lee, Sangwook
    • Proceedings of the Korea Contents Association Conference / 2018.05a / pp.17-18 / 2018
  • Recently in the field of game theory, bargaining game phenomena have been observed using co-evolution between artificial agents. In this paper, to make co-evolutionary observation of real-world bargaining games more realistic, we consider a cost at each stage of the bargaining game. If negotiation breaks down at a stage, an additional cost is incurred in moving to the next stage, reducing every participant's share. Simulation results confirm that as the per-stage cost increases, agreement is reached at an earlier stage.

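
The per-stage cost above can be sketched as a shrinking pie; the linear cost model below is an assumption for illustration, not the paper's exact formulation:

```python
def stage_payoff(share, stage, cost):
    """Payoffs when a split `share` / (1 - share) is agreed after `stage`
    breakdowns: each failed round shrinks the pie by `cost` for both sides."""
    pie = max(0.0, 1.0 - cost * stage)
    return share * pie, (1 - share) * pie
```

Since both payoffs shrink with every failed stage, a larger `cost` makes delay more expensive and pushes the agents toward agreeing at an earlier stage, matching the reported simulation result.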

A Dynamic Pricing Negotiation Model in the Online Ticket Resale Market (온라인 티켓 재판매 시장에서의 Dynamic Pricing 협상모델)

  • Cho, Jae-Hyung
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.133-148 / 2009
  • This study suggests a new model that can effectively redistribute tickets in the online ticket resale market, along with a new allocation mechanism based on agent negotiation. To this end, this study analyzes an auction in the online ticket resale market through game theory. As a result of the new agent mechanism, it is shown that the price stability of the ticket resale market increases. Agent negotiation helps stabilize ticket prices, which tend to rise at auction, benefiting all participants in the negotiations and consequently yielding a Pareto solution. In particular, a framework for the negotiation process is suggested, and domain and process ontologies are designed in an interrelated way. With this modeling, the possibility of ontology-based agent negotiation is demonstrated.


A Naive Bayesian-based Model of the Opponent's Policy for Efficient Multiagent Reinforcement Learning (효율적인 멀티 에이전트 강화 학습을 위한 나이브 베이지안 기반 상대 정책 모델)

  • Kwon, Ki-Duk
    • Journal of Internet Computing and Services / v.9 no.6 / pp.165-177 / 2008
  • An important issue in multiagent reinforcement learning is how an agent should learn its optimal policy in a dynamic environment where other agents can influence its performance. Most previous works on multiagent reinforcement learning tend to apply single-agent techniques without any extensions, or require unrealistic assumptions even when they use explicit models of other agents. In this paper, a Naive Bayesian policy model of the opponent agent is introduced, and the multiagent reinforcement learning method using this model is explained. Unlike previous works, the proposed method models the opponent's policy with Naive Bayes rather than modeling the opponent's Q function. Moreover, it can improve learning efficiency by using a simpler model than richer but time-consuming policy models such as Finite State Machines (FSM) and Markov chains. The Cat and Mouse game is introduced as an adversarial multiagent environment, and the effectiveness of the proposed Naive Bayesian policy model is analyzed through experiments using this game as a test-bed.

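
A minimal sketch of a Naive Bayesian opponent policy model as described above: count per-feature statistics of observed (state, action) pairs and predict the opponent's next action from an unnormalized posterior. The Laplace-smoothing scheme is an assumption for illustration, not the paper's exact estimator:

```python
from collections import defaultdict

class NaiveBayesOpponentModel:
    """Estimate P(opponent action | state features) assuming conditionally
    independent features, which is far cheaper than FSM or Markov-chain models."""
    def __init__(self, actions, alpha=1.0):
        self.actions = actions
        self.alpha = alpha                        # Laplace smoothing constant
        self.action_counts = defaultdict(float)
        self.feature_counts = defaultdict(float)  # (action, index, value) -> n

    def observe(self, features, action):
        """Record one observed (state features, opponent action) pair."""
        self.action_counts[action] += 1
        for i, v in enumerate(features):
            self.feature_counts[(action, i, v)] += 1

    def predict(self, features):
        """Return the most likely opponent action for these state features."""
        total = sum(self.action_counts.values()) or 1.0
        scores = {}
        for a in self.actions:
            p = (self.action_counts[a] + self.alpha) / \
                (total + self.alpha * len(self.actions))
            for i, v in enumerate(features):
                # Crude smoothing; assumes roughly two values per feature.
                p *= (self.feature_counts[(a, i, v)] + self.alpha) / \
                     (self.action_counts[a] + self.alpha * 2)
            scores[a] = p
        return max(scores, key=scores.get)
```

The learner can then condition its own action values on the predicted opponent action instead of maintaining a full model of the opponent's Q function.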