Some Recent Results of Approximation Algorithms for Markov Games and their Applications

  • Published: 2003.09.01

Abstract

We present some recent results on approximation algorithms for solving Markov games and discuss their applications to problems that arise in computer science. We consider a receding horizon approach as an approximate solution to two-person zero-sum Markov games with an infinite horizon discounted cost criterion. We present error bounds from the optimal equilibrium value of the game when both players take “correlated” receding horizon policies that are based on exact or approximate solutions of receding finite horizon subgames. Motivated by the worst-case optimal control of queueing systems by Altman, we then analyze error bounds when the minimizer plays the (approximate) receding horizon control and the maximizer plays the worst-case policy. We give two heuristic examples of the approximate receding horizon control: we extend “parallel rollout” and “hindsight optimization” to the Markov game setting within the framework of the approximate receding horizon approach and analyze their performance. In the parallel rollout approach, the minimizing player dynamically combines multiple heuristic policies from a given set so as to improve the performance of all of the heuristic policies simultaneously, under the assumption that the maximizing player has chosen a fixed worst-case policy. Given $\varepsilon > 0$, we give the value of the receding horizon that guarantees that the parallel rollout policy with that horizon, played by the minimizer, “dominates” every heuristic policy in the set by $\varepsilon$. In the hindsight optimization approach, the minimizing player makes a decision based on the expected optimal hindsight performance over a finite horizon. We finally discuss practical implementations of the receding horizon approaches via simulation, along with applications.
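As a rough illustration of the receding horizon idea described above, the following Python sketch solves the $H$-step subgame from the current state by backward induction, computing the minimizer's optimal mixed strategy of each stage's matrix game by linear programming. Everything here is an assumption of ours for concreteness (finite state and action spaces, a known transition tensor `P` and cost tensor `C`, and all function names); it is a sketch, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Solve the zero-sum matrix game min_p max_b p^T M[:, b] by LP.
    Returns (game value, minimizer's optimal mixed strategy p)."""
    m, n = M.shape
    # Decision variables: p_1..p_m (mixed strategy) and v (game value).
    c = np.zeros(m + 1); c[-1] = 1.0             # minimize v
    A_ub = np.hstack([M.T, -np.ones((n, 1))])    # p^T M[:, b] - v <= 0 for all b
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]    # p >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

def receding_horizon_strategy(x, H, P, C, gamma):
    """H-step backward induction (H >= 1) from state x; returns the
    minimizer's mixed strategy for the first stage of the finite subgame.
    P[s, a, b] is a distribution over next states; C[s, a, b] is the cost."""
    S = C.shape[0]
    V = np.zeros(S)                              # terminal value V_0 = 0
    for _ in range(H):
        V_new = np.empty(S)
        strategies = [None] * S
        for s in range(S):
            # Stage game matrix: immediate cost plus discounted continuation.
            M = C[s] + gamma * P[s] @ V          # shape (A, B)
            V_new[s], strategies[s] = matrix_game_value(M)
        V = V_new
    return strategies[x]                         # first-stage strategy at x
```

At each decision epoch the controller would re-solve the $H$-step subgame from its current state and commit only to this first-stage strategy, which is what makes the horizon "receding".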
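The parallel rollout idea can likewise be sketched with Monte Carlo simulation. The sketch below assumes a simulator `step`, a stage cost `cost`, a set of heuristic minimizer policies, and a fixed adversary policy guessed by the minimizer; all of these names and the one-sample lookahead scheme are hypothetical choices of ours, not the paper's algorithm.

```python
import numpy as np

def parallel_rollout_action(x, heuristics, adversary, step, cost,
                            gamma, H, n_sims, n_actions, rng):
    """Pick the minimizer's action by one-step lookahead on the best of
    several heuristic continuations, against an assumed fixed adversary.
    step(s, a, b, rng) -> next state; cost(s, a, b) -> stage cost."""
    def rollout_value(x0, pi):
        # Monte Carlo estimate of pi's H-step discounted cost vs. adversary.
        total = 0.0
        for _ in range(n_sims):
            s, disc = x0, 1.0
            for _ in range(H):
                a, b = pi(s), adversary(s)
                total += disc * cost(s, a, b)
                s = step(s, a, b, rng)
                disc *= gamma
        return total / n_sims

    q = np.empty(n_actions)
    for a in range(n_actions):
        b = adversary(x)
        x_next = step(x, a, b, rng)   # one sampled transition (could average more)
        # Continuation value: best heuristic policy from the sampled next state.
        q[a] = cost(x, a, b) + gamma * min(rollout_value(x_next, pi)
                                           for pi in heuristics)
    return int(np.argmin(q))
```

Because the chosen action is at least as good (in the estimated $H$-step cost) as following any single heuristic from the start, a policy built this way can plausibly improve on every heuristic in the set simultaneously, which is the property the abstract attributes to parallel rollout.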

Keywords