Journal article in Queueing Systems, 2024

Learning Optimal Admission Control in Partially Observable Queueing Networks

Abstract

We present an efficient reinforcement learning algorithm that learns the optimal admission control policy in a partially observable queueing network. Specifically, only the arrival and departure times from the network are observable, and optimality refers to the infinite-horizon average holding/rejection cost. While reinforcement learning in Partially Observable Markov Decision Processes (POMDPs) is prohibitively expensive in general, we show that our algorithm has a regret that depends only sub-linearly on the maximal number of jobs in the network, S. In particular, in contrast with existing regret analyses, our regret bound does not depend on the diameter of the underlying Markov Decision Process (MDP), which in most queueing systems is at least exponential in S. The novelty of our approach is to leverage Norton's equivalent theorem for closed product-form queueing networks and an efficient reinforcement learning algorithm for MDPs with the structure of birth-and-death processes.
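The structural insight behind the approach is that Norton's equivalent theorem lets one replace the (unobserved) product-form network by a single station with state-dependent service rates, so the admission-control problem reduces to a birth-and-death MDP on the number of jobs in the network. As a minimal illustration of that reduced control problem (not of the paper's learning algorithm; all rates, costs, and the bound S below are invented for the example), the following Python sketch computes an optimal average-cost admission policy by relative value iteration after uniformization:

```python
import numpy as np

# --- Hypothetical parameters, not taken from the paper ---
S = 10          # maximal number of jobs in the network
lam = 1.0       # arrival rate
# State-dependent service rates mu[s]: by Norton's theorem, the closed
# product-form network behaves like a single station whose rate depends
# on the number of jobs present (here: 3 exponential servers of rate 1.5).
mu = np.array([0.0] + [1.5 * min(s, 3) for s in range(1, S + 1)])
h = 1.0         # holding cost per job per unit time
R = 5.0         # rejection cost per rejected job

# Uniformization: embed the CTMC into a DTMC with event rate Lam.
Lam = lam + mu.max()
p_arr = lam / Lam          # probability the next event is an arrival
p_dep = mu / Lam           # probability the next event is a departure

# Relative value iteration for the average-cost criterion.
v = np.zeros(S + 1)
gain = 0.0
for _ in range(100_000):
    v_new = np.empty_like(v)
    for s in range(S + 1):
        hold = h * s / Lam                       # holding cost per step
        # Admission decision at an arrival (rejection is forced at s = S).
        admit = v[s + 1] if s < S else v[s] + R
        reject = v[s] + R
        arrival_value = min(admit, reject)
        depart = v[s - 1] if s > 0 else v[s]
        v_new[s] = hold + p_arr * arrival_value + p_dep[s] * depart \
                   + (1.0 - p_arr - p_dep[s]) * v[s]
    gain = v_new[0]                              # reference state 0
    if np.abs((v_new - gain) - v).max() < 1e-9:
        v = v_new - gain
        break
    v = v_new - gain

policy = ["admit" if s < S and v[s + 1] <= v[s] + R else "reject"
          for s in range(S + 1)]
print("average cost per unit time:", gain * Lam)
print("policy by state:", policy)
```

In the paper's setting the state-dependent rates are unknown and only arrivals and departures are observed, so the equivalent birth-and-death dynamics must be learned online; the sketch above corresponds to the planning problem an oracle with known rates would solve.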

Dates and versions

hal-04170992, version 1 (25-07-2023)
hal-04170992, version 2 (22-02-2024)
hal-04170992, version 3 (02-07-2024)


Cite

Jonatha Anselmi, Bruno Gaujal, Louis-Sébastien Rebuffi. Learning Optimal Admission Control in Partially Observable Queueing Networks. Queueing Systems, 2024, pp. 1-48. ⟨10.1007/s11134-024-09917-y⟩. ⟨hal-04170992v3⟩