Game Relations and Metrics∗

Luca de Alfaro
Computer Engineering Department, University of California, Santa Cruz, USA

Rupak Majumdar
Department of Computer Science, University of California, Los Angeles, USA

Vishwanath Raman
Computer Science Department, University of California, Santa Cruz, USA, and Synopsys Inc, Mountain View, USA

Mariëlle Stoelinga
FMT Group, University of Twente, the Netherlands

Abstract

We consider two-player games played over finite state spaces for an infinite number of rounds. At each state, the players simultaneously choose moves; the moves determine a successor state. It is often advantageous for players to choose probability distributions over moves, rather than single moves. Given a goal (e.g., "reach a target state"), the question of winning is thus a probabilistic one: "what is the maximal probability of winning from a given state?".

On these game structures, two fundamental notions are those of equivalences and metrics. Given a set of winning conditions, two states are equivalent if the players can win the same games with the same probability from both states. Metrics provide a bound on the difference in the probabilities of winning across states, capturing a quantitative notion of state "similarity".

We introduce equivalences and metrics for two-player game structures, and we show that they characterize the difference in probability of winning games whose goals are expressed in the quantitative µ-calculus. The quantitative µ-calculus can express a large set of goals, including reachability, safety, and ω-regular properties. Thus, we claim that our relations and metrics provide the canonical extensions to games of the classical notion of bisimulation for transition systems. We develop our results both for equivalences and metrics, which generalize bisimulation, and for asymmetrical versions, which generalize simulation.
∗ This research was sponsored in part by the grants NSF-CCF-0427202, NSF-CCF-0546170, and NSF-CCR-0132780.

1. Introduction

We consider two-player games played for an infinite number of rounds over finite state spaces. At each round, the players simultaneously and independently select moves; the moves then determine a probability distribution over successor states. These games, known variously as stochastic games [24] or concurrent games [3, 1, 5], generalize many common structures in computer science, from transition systems, to Markov chains [12] and Markov decision processes [6]. The games are turn-based if, at each state, at most one of the players has a choice of moves, and deterministic if the successor state is uniquely determined by the current state and by the moves chosen by the players.

It is well known that in such games with simultaneous moves it is often advantageous for the players to randomize their moves, so that at each round they play not a single "pure" move, but rather a probability distribution over the available moves. These probability distributions over moves, called mixed moves [20], lead to various notions of equilibria [29, 20], such as the equilibrium result expressed by the minimax theorem [29]. Intuitively, the benefit of playing mixed, rather than pure, moves lies in preventing the adversary from tailoring a response to the individual move played. Even for simple reachability games, the use of mixed moves may allow players to win, with probability 1, games that they would lose (i.e., win with probability 0) if restricted to playing pure moves [3]. With mixed moves, the question of winning a game with respect to a goal is thus a probabilistic one: what is the maximal probability a player can be guaranteed of winning, regardless of how the other player plays? This probability is known, in brief, as the winning probability.

In LICS 07: Proceedings of the 22nd Annual IEEE Symposium on Logic in Computer Science, 2007.
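As an illustrative sketch (not taken from the paper), the advantage of mixed moves can be seen in a matching-pennies-style game: player 1 receives payoff 1 if both players choose the same side, and 0 otherwise. With pure moves the adversary can always mismatch, so player 1 guarantees only 0; the uniform mixed move guarantees 1/2, the minimax value. Repeating the round until a match occurs gives a toy reachability game in which uniform mixing reaches the target with probability 1, while pure moves reach it with probability 0. The function names below are our own, hypothetical choices:

```python
# Payoff matrix for player 1 in one round: 1 if the moves match, else 0.
PAYOFF = [[1, 0],
          [0, 1]]

def pure_value():
    """Best payoff player 1 can guarantee with a pure move:
    the adversary sees the row and best-responds with a column."""
    return max(min(row) for row in PAYOFF)

def mixed_value(p):
    """Payoff guaranteed by the mixed move (p, 1-p) over the rows;
    against a fixed mix, the adversary has a pure best response."""
    return min(p * PAYOFF[0][j] + (1 - p) * PAYOFF[1][j] for j in range(2))

def reach_prob(n):
    """Probability that the uniformly-mixing player matches the adversary
    at least once within n rounds; the adversary's choices are irrelevant
    against the uniform mix, so each round matches with probability 1/2."""
    return 1 - 0.5 ** n

assert pure_value() == 0        # pure moves guarantee nothing
assert mixed_value(0.5) == 0.5  # uniform mixing guarantees the value 1/2
assert reach_prob(20) > 0.999   # repeated mixing reaches the goal a.s.
```

The closed form `1 - 0.5 ** n` tends to 1 as n grows, which is the sense in which the mixed strategy wins the reachability game with probability 1 while every pure strategy can be mismatched forever.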