• Title/Summary/Keyword: game optimization

Developing a Subset Sum Problem based Puzzle Game for Learning Mathematical Programming (수리계획법 학습을 위한 부분집합총합문제 기반 퍼즐 게임 개발)

  • Kim, Jun-Woo;Im, Kwang-Hyuk
    • The Journal of the Korea Contents Association, v.13 no.12, pp.680-689, 2013
  • Recently, much attention has been paid to educational serious games that provide both fun and learning effects. However, most educational games target infants and children, and such games remain hard to use in higher education. In contrast, this paper aims to develop an educational game for teaching mathematical programming to undergraduates. It is well known that most puzzle games can be transformed into an associated optimization problem and vice versa, and this paper proposes a simple educational game based on the subset sum problem. The game enables users to play the puzzle and construct their own mathematical programming model for solving it. Moreover, users are given appropriate modeling instructions, and their models are evaluated on automatically generated data. The educational game in this paper is expected to be helpful for teaching basic programming models to students in industrial engineering or management science.
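
The puzzle here is the classical subset sum problem, which also admits a direct binary integer programming formulation: choose x_i in {0, 1} such that sum_i w_i·x_i = T. A minimal sketch of a solver for that model (the weights and target below are illustrative, not from the paper):

```python
def subset_sum(weights, target):
    """Find a subset of `weights` summing to `target`, or return None.

    Binary-IP view of the same model: choose x_i in {0, 1} such that
    sum(weights[i] * x_i) == target.
    """
    parent = {0: None}             # reachable sum -> (previous sum, item index)
    for i, w in enumerate(weights):
        for s in list(parent):     # snapshot, so each item is used at most once
            if s + w not in parent:
                parent[s + w] = (s, i)
    if target not in parent:
        return None
    subset, s = [], target
    while parent[s] is not None:   # walk back through the parent pointers
        s, i = parent[s]
        subset.append(weights[i])
    return subset

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # -> [5, 4]
```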

A Study on a MMORPG Battle Simulator with Input of Rules (룰 입력을 통한 MMORPG전투 시뮬레이터에 관한 연구)

  • Kim, Jung-Hyun;Kim, Kyung-Sik
    • Journal of Korea Game Society, v.7 no.3, pp.97-103, 2007
  • Battles between players and monsters are a core element of RPG games, and they affect players most when judging whether a particular game is fun. Even though MMORPGs have freed RPGs from repetitiveness, offering virtually unlimited expandability and modifiable game play, the essential play process, the basic battle between player and monsters, remains important. In this research, we built a simulator that estimates battle outcomes by pre-testing battles, setting up the experience values of players and monsters with the formulae required by the battle systems at the beginning of game development. It can be utilized as a tester to find a better configuration of monsters on the maps, optimizing the time and efficiency of battles to satisfy the game design.
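
The abstract does not give the battle formulae, so the following is only a toy illustration of such a pre-testing simulator: hypothetical HP/attack/defense stats, a made-up damage rule, and a Monte Carlo estimate of the win rate.

```python
import random

def damage(attack, defense):
    """Hypothetical damage rule: attack minus defense, with 20% variance."""
    base = max(1, attack - defense)
    return max(1, round(base * random.uniform(0.8, 1.2)))

def simulate_battle(player, monster):
    """Alternate attacks until one side drops; return True if the player wins."""
    p_hp, m_hp = player["hp"], monster["hp"]
    while True:
        m_hp -= damage(player["atk"], monster["def"])
        if m_hp <= 0:
            return True
        p_hp -= damage(monster["atk"], player["def"])
        if p_hp <= 0:
            return False

# Invented stats; a real tool would read these from the game's design sheets.
player = {"hp": 120, "atk": 15, "def": 5}
monster = {"hp": 80, "atk": 12, "def": 3}
wins = sum(simulate_battle(player, monster) for _ in range(10_000))
print(f"estimated player win rate: {wins / 10_000:.1%}")
```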

Interference Management Algorithm Based on Coalitional Game for Energy-Harvesting Small Cells

  • Chen, Jiamin;Zhu, Qi;Zhao, Su
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.9, pp.4220-4241, 2017
  • For the downlink energy-harvesting small cell network, this paper proposes an interference management algorithm based on a distributed coalitional game. The cooperative interference management problem of the energy-harvesting small cells is modeled as a coalitional game with transferable utility. Based on the energy-harvesting strategy of the small cells, the time-sharing mode of the small cells in the same coalition is determined, and an optimization model is constructed to maximize the total system rate of the energy-harvesting small cells. Using the distributed coalition formation algorithm proposed in this paper, the stable coalition structure, optimal time-sharing strategy, and optimal power distribution are found to maximize the total utility of the small cell system. Finally, the performance of the proposed algorithm is discussed and analyzed, and it is proved that the algorithm converges to a stable coalition structure with reasonable complexity. The simulations show that the total system rate of the proposed algorithm is superior to that of the non-cooperative algorithm in the case of dense deployment of small cells, and that the proposed algorithm converges quickly.
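
The paper's rate model and constraints are not reproduced in the abstract, but distributed coalition formation is commonly sketched with a merge rule: two coalitions join whenever the union's transferable utility exceeds the sum of their separate utilities. A schematic sketch under that assumption, with a placeholder utility function standing in for the paper's rate objective:

```python
def coalition_value(coalition):
    """Placeholder transferable utility; the paper instead maximizes total
    rate under energy-harvesting and time-sharing constraints."""
    n = len(coalition)
    return n + 0.5 * n * (n - 1) / (n + 1)   # cooperation gain that saturates

def merge_until_stable(coalitions):
    """Greedy merge-based formation: repeat any profitable merge until none
    remains, yielding a (merge-)stable coalition structure."""
    changed = True
    while changed:
        changed = False
        for i in range(len(coalitions)):
            for j in range(i + 1, len(coalitions)):
                a, b = coalitions[i], coalitions[j]
                if coalition_value(a | b) > coalition_value(a) + coalition_value(b):
                    coalitions[i] = a | b    # merge j into i
                    del coalitions[j]
                    changed = True
                    break
            if changed:
                break
    return coalitions

# Each small cell starts alone; profitable merges form the final structure.
print(merge_until_stable([{c} for c in "ABCDE"]))
```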

Mean Field Game based Reinforcement Learning for Weapon-Target Assignment (평균 필드 게임 기반의 강화학습을 통한 무기-표적 할당)

  • Shin, Min Kyu;Park, Soon-Seo;Lee, Daniel;Choi, Han-Lim
    • Journal of the Korea Institute of Military Science and Technology, v.23 no.4, pp.337-345, 2020
  • The Weapon-Target Assignment (WTA) problem can be formulated as an optimization problem that minimizes the threat of targets. Existing methods trade off optimality against execution time to meet various mission objectives. We propose a multi-agent reinforcement learning algorithm for WTA based on mean field games to solve the problem in real time with nearly optimal accuracy. The mean field game is a recent method introduced to relieve the curse of dimensionality in multi-agent learning algorithms. In addition, previous reinforcement learning models for WTA generally do not consider weapon interference, which may be critical in real-world operations. Therefore, we modify the reward function to discourage the crossing of weapon trajectories. The feasibility of the proposed method was verified through real-time simulation of a WTA problem with multiple targets, and the proposed algorithm can assign the weapons to all targets without crossing weapon trajectories.
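
The trajectory-crossing penalty can be illustrated with straight-line paths in 2-D, a simplification of whatever weapon dynamics the paper actually uses. A sketch of such a reward-shaping term built on a standard segment-intersection test (all names and coordinates are invented):

```python
def ccw(a, b, c):
    """True if points a, b, c make a counter-clockwise turn."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """Standard proper-intersection test for segments p1-p2 and q1-q2."""
    return (ccw(p1, q1, q2) != ccw(p2, q1, q2)
            and ccw(p1, p2, q1) != ccw(p1, p2, q2))

def crossing_penalty(weapons, targets, assignment, weight=1.0):
    """Reward-shaping term: subtract `weight` per pair of crossing paths."""
    paths = [(weapons[w], targets[t]) for w, t in assignment.items()]
    crossings = sum(
        segments_cross(*paths[i], *paths[j])
        for i in range(len(paths)) for j in range(i + 1, len(paths))
    )
    return -weight * crossings

weapons = {0: (0.0, 0.0), 1: (1.0, 0.0)}
targets = {0: (0.0, 5.0), 1: (1.0, 5.0)}
print(crossing_penalty(weapons, targets, {0: 1, 1: 0}))  # crossed paths: -1.0
print(crossing_penalty(weapons, targets, {0: 0, 1: 1}))  # parallel paths: -0.0
```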

Comparing between particle swarm optimization and differential evolution in bargaining game (교섭게임에서 입자군집최적화와 차분진화알고리즘 비교)

  • Lee, Sangwook
    • Proceedings of the Korea Contents Association Conference, 2015.05a, pp.55-56, 2015
  • Recently, analysis using evolutionary computation techniques has become an important issue in game theory. In this paper, we observe the coevolution process between particle swarm optimization and differential evolution in a bargaining game and analyze the performance of the two algorithms by comparing the payoffs each obtains in mutual competition. Experimental results confirm that particle swarm optimization outperforms differential evolution in the bargaining game.
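
For reference, the two update rules being compared are standard. A generic sketch of one PSO velocity/position update and one DE/rand/1 mutation, not the paper's coevolutionary setup:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One particle swarm update: inertia plus pulls toward the personal
    best (pbest) and the global best (gbest)."""
    r1, r2 = random.random(), random.random()
    v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
         for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v

def de_mutant(pop, i, f=0.8):
    """DE/rand/1 mutation: base vector plus a scaled difference of two others."""
    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
    return [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]

# Toy 2-D demo with made-up positions.
x, v = pso_step([0.0, 0.0], [0.1, -0.1], pbest=[1.0, 1.0], gbest=[2.0, 2.0])
print(x, de_mutant([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0], [2.0, 2.0]], i=0))
```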

Optimal Controller Design for Single-Phase PFC Rectifiers Using SPEA Multi-Objective Optimization

  • Amirahmadi, Ahmadreza;Dastfan, Ali;Rafiei, Mohammadreza
    • Journal of Power Electronics, v.12 no.1, pp.104-112, 2012
  • In this paper, a new method for the design of a simple PI controller is presented and applied to the control of a boost-based PFC rectifier. The Strength Pareto Evolutionary Algorithm, which is based on the Pareto optimality concept used in the game theory literature, is implemented as a multi-objective optimization approach to obtain a good transient response and a high-quality input current. In the proposed method, the input current harmonics and the dynamic response are taken as objective functions, while the PI controller gains of the PFC rectifier (Kpi, Tpi) are the design variables. The proposed algorithm generates a set of optimal gains, called a Pareto set, corresponding to a Pareto front, which is a set of optimal results for the objective functions. All of the Pareto front points are optimal, and any one of them can be selected according to the priority among the objective functions. Simulation and experimental results are presented to prove the superiority of the proposed design methodology over other methods.
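
At the core of SPEA is Pareto dominance: a candidate is kept only if no other candidate is at least as good on every objective and strictly better on at least one. A minimal sketch for a two-objective minimization, with made-up (harmonics, response-time) values standing in for evaluated (Kpi, Tpi) candidates:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep exactly the non-dominated points, i.e. the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (input-current harmonics, settling time) per candidate gain pair -- made up.
candidates = [(3.0, 0.12), (2.5, 0.20), (4.0, 0.10), (2.5, 0.15), (3.5, 0.25)]
print(pareto_front(candidates))  # -> [(3.0, 0.12), (4.0, 0.10), (2.5, 0.15)]
```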

Some Recent Results of Approximation Algorithms for Markov Games and their Applications

  • 장형수
    • Proceedings of the Korean Society of Computational and Applied Mathematics Conference, 2003.09a, pp.15-15, 2003
  • We provide some recent results on approximation algorithms for solving Markov games and discuss their applications to problems that arise in computer science. We consider a receding horizon approach as an approximate solution to two-person zero-sum Markov games with an infinite-horizon discounted cost criterion. We present error bounds from the optimal equilibrium value of the game when both players take “correlated” receding horizon policies that are based on exact or approximate solutions of receding finite-horizon subgames. Motivated by the worst-case optimal control of queueing systems by Altman, we then analyze error bounds when the minimizer plays the (approximate) receding horizon control and the maximizer plays the worst-case policy. We give two heuristic examples of the approximate receding horizon control. We extend “parallel rollout” and “hindsight optimization” into the Markov game setting within the framework of the approximate receding horizon approach and analyze their performance. In the parallel rollout approach, the minimizing player seeks to dynamically combine multiple heuristic policies in a set to improve the performance of all of the heuristic policies simultaneously, under the guess that the maximizing player has chosen a fixed worst-case policy. Given $\varepsilon > 0$, we give the value of the receding horizon which guarantees that the parallel rollout policy with that horizon played by the minimizer “dominates” any heuristic policy in the set by $\varepsilon$. In the hindsight optimization approach, the minimizing player makes a decision based on his expected optimal hindsight performance over a finite horizon. We finally discuss practical implementations of the receding horizon approaches via simulation and applications.
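
The receding horizon idea can be shown on a toy zero-sum Markov game: at each stage, solve only an H-step subgame and commit to the first minimizer action. The sketch below restricts both players to pure strategies for brevity (exact zero-sum matrix games generally need mixed strategies via an LP), and all game data are placeholders:

```python
# Tiny two-state, two-action zero-sum Markov game; all numbers are made up.
MIN_ACTS, MAX_ACTS, GAMMA = (0, 1), (0, 1), 0.9

def cost(s, u, w):
    """Stage cost paid by the minimizer (action u) to the maximizer (w)."""
    return [[1.0, 3.0], [2.0, 0.5]][u][w] + s

def step(s, u, w):
    """Deterministic transition between the two states."""
    return (s + u + w) % 2

def value(s, h):
    """h-step minimax value over pure strategies only."""
    if h == 0:
        return 0.0
    return min(max(cost(s, u, w) + GAMMA * value(step(s, u, w), h - 1)
                   for w in MAX_ACTS)
               for u in MIN_ACTS)

def receding_horizon_action(s, h):
    """Solve only the h-step subgame, commit to the first minimizer action,
    then re-solve at the next stage (the receding horizon scheme)."""
    return min(MIN_ACTS,
               key=lambda u: max(cost(s, u, w) + GAMMA * value(step(s, u, w), h - 1)
                                 for w in MAX_ACTS))

print(receding_horizon_action(0, h=4), value(0, h=4))
```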

Paradox in collective history-dependent Parrondo games (집단 과거 의존 파론도 게임의 역설)

  • Lee, Ji-Yeon
    • Journal of the Korean Data and Information Science Society, v.22 no.4, pp.631-641, 2011
  • We consider a history-dependent Parrondo game in which the winning probability of the present trial depends on the results of the last two trials. When a fraction of an infinite number of players is allowed to choose between two fair Parrondo games at each turn, we compare a blind strategy, such as a random sequence of choices, with a short-range optimization strategy. In this paper, we show that the random sequence of choices yields a steady increase in average profit. Surprisingly, however, if we choose the game that gives the higher expected profit at each turn, we do not obtain a long-run positive profit for some parameter values.
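
A comparison like this is easy to reproduce in simulation. The sketch below plays a single-player analogue (the paper's collective setting averages over a fraction of infinitely many players) using the standard history-dependent Parrondo parameterization, win probabilities 9/10, 1/4, 1/4, 7/10 minus a small bias, which may differ from the paper's parameters:

```python
import random

EPS = 0.003
B_TABLE = {(0, 0): 0.9, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.7}

def play(n_trials, pick_game):
    """Track one player's capital; `pick_game` maps the last-two-trial
    history to 'A' (plain biased coin) or 'B' (history-dependent game)."""
    capital, hist = 0, (1, 0)               # arbitrary starting history
    for _ in range(n_trials):
        p_win = (0.5 if pick_game(hist) == "A" else B_TABLE[hist]) - EPS
        won = random.random() < p_win
        capital += 1 if won else -1
        hist = (hist[1], int(won))
    return capital

random_mix = lambda hist: random.choice("AB")              # blind strategy
greedy = lambda hist: "B" if B_TABLE[hist] > 0.5 else "A"  # short-range optimization

print(play(1_000_000, random_mix))  # typically positive: the paradox
print(play(1_000_000, greedy))      # can do worse for some parameter values
```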

Comparison of Learning Performance by Reinforcement Learning Agent Visibility Information Difference (강화학습 에이전트 시야 정보 차이에 의한 학습 성능 비교)

  • Kim, Chan Sub;Jang, Si-Hwan;Yang, Seong-Il;Kang, Shin Jin
    • Journal of Korea Game Society, v.21 no.5, pp.17-28, 2021
  • Reinforcement learning, in which artificial intelligence trains itself to find the best solution to a problem, is a technology of great value in many fields. The game field in particular has the advantage of providing a virtual problem-solving environment for reinforcement learning, where agents solve problems by gathering observations about their situation and environment. In this experiment, we built a simplified instant-dungeon environment from an RPG game and gave the agent various observation variables related to its field of view. The experiment shows how much each of these variables affects the learning speed, and the results can serve as a reference for research on reinforcement learning in RPG games.
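
Varying the field of view amounts to changing the observation window the environment hands the agent. A toy illustration of a view-radius observation on a grid map (the layout and padding convention are invented):

```python
import numpy as np

def observe(grid, pos, radius):
    """Return the (2*radius+1)^2 window around `pos`; cells outside the map
    are padded with -1 so the observation shape stays fixed."""
    padded = np.pad(grid, radius, constant_values=-1)
    r, c = pos[0] + radius, pos[1] + radius
    return padded[r - radius:r + radius + 1, c - radius:c + radius + 1]

dungeon = np.zeros((8, 8), dtype=int)
dungeon[2, 3] = 2            # a monster
agent_pos = (1, 1)
print(observe(dungeon, agent_pos, radius=1))  # 3x3 view: monster not visible
print(observe(dungeon, agent_pos, radius=2))  # 5x5 view: monster at the edge
```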

Development of Intelligent Multi-Agent in the Game Environment (게임 환경에서의 지능형 다중 에이전트 개발)

  • Kim, DongMin;Choi, JinWoo;Woo, ChongWoo
    • Journal of Internet Computing and Services, v.16 no.6, pp.69-78, 2015
  • Recently, research on multi-agent systems has been actively conducted in various fields, especially the control of complex systems and optimization. In this study, we develop a multi-agent system for NPC simulation in a game environment. The aim is to support quick and precise decisions by inferring the situation of a dynamic discrete domain, and to support an optimization process for the agent system. Our approach employs a Petri net as the basic agent model to simplify the structure of the system, and uses a fuzzy inference engine to support decision making in various situations. Our experiment simulates a virtual battlefield between NPCs divided into two groups: fuzzy-rule-based agents and automata-based agents. We calculated the winning percentage and survival rate over several simulations, and the results show that the fuzzy-rule-based agents outperformed the automata-based agents.
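
The fuzzy side of such a decision can be sketched with triangular membership functions and a couple of rules; the rule set and membership shapes below are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def decide(hp, enemy_hp):
    """Two toy rules: low own HP -> flee; otherwise, a weak enemy -> attack.
    The crisp action is the rule with the strongest firing strength."""
    low_hp = tri(hp, -1, 0, 60)            # "my health is low"
    weak_enemy = tri(enemy_hp, -1, 0, 80)  # "the enemy is weak"
    strengths = {"flee": low_hp, "attack": max(weak_enemy, 1 - low_hp)}
    return max(strengths, key=strengths.get), strengths

print(decide(hp=25, enemy_hp=70))   # low health dominates -> flee
print(decide(hp=90, enemy_hp=30))   # healthy, weak enemy -> attack
```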