POLARIS - Performance analysis and Optimization of LARge Infrastructure and Systems
Preprint / Working paper, Year: 2023

Learning in quantum games

Abstract

In this paper, we introduce a class of learning dynamics for general quantum games, which we call "follow the quantum regularized leader" (FTQL), in reference to the classical "follow the regularized leader" (FTRL) template for learning in finite games. We show that the induced quantum state dynamics decompose into (i) a classical, commutative component which governs the dynamics of the system's eigenvalues in a way analogous to the evolution of mixed strategies under FTRL; and (ii) a non-commutative component for the system's eigenvectors which has no classical counterpart. Despite the complications that this non-classical component entails, we find that the FTQL dynamics incur no more than constant regret in all quantum games. Moreover, adjusting classical notions of stability to account for the nonlinear geometry of the state space of quantum games, we show that only pure quantum equilibria can be stable and attracting under FTQL while, as a partial converse, pure equilibria that satisfy a certain "variational stability" condition are always attracting. Finally, we show that the FTQL dynamics are Poincaré recurrent in quantum min-max games, extending in this way a very recent result for the quantum replicator dynamics.
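As an illustration of the template described in the abstract (not code from the paper): when FTQL is run with the von Neumann entropy as regularizer, the induced update is the matrix exponential weights rule ρ_{t+1} ∝ exp(η Σ_{s≤t} V_s), where V_s is the player's individual payoff gradient at stage s. The sketch below runs this entropic instance on a hypothetical 2×2 zero-sum quantum game; the payoff operator W, the step size eta, and the helper ftql_entropy are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

def ftql_entropy(score, eta=0.1):
    """Entropic FTQL step (illustrative): map the accumulated payoff matrix
    to a density matrix via the matrix exponential weights update."""
    Y = eta * score
    Y = Y - np.max(np.linalg.eigvalsh(Y)) * np.eye(len(Y))  # shift for numerical stability
    rho = expm(Y)
    return rho / np.trace(rho).real

# Hypothetical 2x2 zero-sum quantum game: u1(rho, sigma) = tr[(rho ⊗ sigma) W],
# u2 = -u1, with W a random Hermitian payoff operator on the joint space.
d = 2
rng = np.random.default_rng(0)
A = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
W = (A + A.conj().T) / 2
W4 = W.reshape(d, d, d, d)        # W4[k, l, i, j] = W[(k, l), (i, j)]

score1 = np.zeros((d, d), dtype=complex)
score2 = np.zeros((d, d), dtype=complex)
rho = sigma = np.eye(d) / d       # start from the maximally mixed state

for t in range(500):
    # Individual payoff gradients: V1 = Tr_2[W (I ⊗ sigma)], V2 = Tr_1[W (rho ⊗ I)]
    V1 = np.einsum('klij,jl->ki', W4, sigma)
    V2 = np.einsum('klij,ik->lj', W4, rho)
    score1 += V1                  # player 1 maximizes u1
    score2 -= V2                  # player 2 minimizes u1 (zero-sum)
    rho, sigma = ftql_entropy(score1), ftql_entropy(score2)

print("eigenvalues of rho:", np.round(np.linalg.eigvalsh(rho), 3))
```

Under this regularizer, the eigenvalues of ρ_t evolve like classical exponential weights on the spectrum of the score matrix, while its eigenvectors rotate with those of the accumulated scores, mirroring the commutative / non-commutative decomposition described in the abstract.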
Main file: Main.pdf (943.87 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04629302, version 1 (29-06-2024)

Identifiers

HAL Id: hal-04629302

Cite

Kyriakos Lotidis, Panayotis Mertikopoulos, Nicholas Bambos. Learning in quantum games. 2023. ⟨hal-04629302⟩
