Online Learning of Weakly Coupled MDP Policies for Load Balancing and Auto Scaling
Abstract
Load balancing and auto scaling are at the core of scalable, contemporary systems, addressing dynamic resource allocation and service rate adjustments in response to workload changes. This paper introduces a novel model and algorithms for tuning load balancers coupled with auto scalers, considering bursty traffic arriving at finite queues. We begin by formulating the problem as a weakly coupled Markov Decision Process (MDP), solvable via a linear program (LP). However, as the number of control variables of such an LP grows combinatorially, we introduce a more tractable relaxed LP formulation, and extend it to tackle the problem of online parameter learning and policy optimization using a two-timescale algorithm based on the LP Lagrangian. Our numerical experiments shed light on properties of the optimal policy. In particular, we identify a phase transition in the probability of job acceptance as a function of the job dropping costs. The experiments also indicate the efficacy of the proposed online learning method, which learns parameters together with the optimal policy, in converging to the optimal solution of the relaxed LP. In summary, the contributions of this work encompass an analytical model and its LP-based solution approach, together with an online learning algorithm, offering insights into the effective management of distributed systems.
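The paper's own two-timescale algorithm is not reproduced here; the following is only a minimal sketch of the general primal-dual pattern the abstract names, where a policy is updated on a fast timescale against the LP Lagrangian and a Lagrange multiplier for a coupling constraint is updated on a slow timescale. The toy costs, resource-usage matrix, softmax policy parameterization, and uniform occupancy measure are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: K queues, each a small per-queue MDP, coupled
# only through a shared resource budget (the "weak coupling").
K, S, A = 3, 4, 2          # queues, states per queue, actions per queue
budget = 1.0               # coupling constraint: expected total usage <= budget
cost = rng.uniform(size=(K, S, A))    # stand-in per-queue costs (assumed)
usage = rng.uniform(size=(K, S, A))   # stand-in per-queue resource usage (assumed)

theta = np.zeros((K, S, A))  # softmax policy logits per (queue, state)
lam = 0.0                    # Lagrange multiplier for the coupling constraint

def policy(th):
    """Softmax over actions for each (queue, state)."""
    e = np.exp(th - th.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

for t in range(1, 5001):
    a_fast = 0.5 / t**0.6   # fast step size: policy update
    a_slow = 0.5 / t**0.9   # slow step size: multiplier update

    pi = policy(theta)
    # Lagrangian of the relaxed problem: cost + lam * usage, evaluated under
    # a uniform state distribution as a stand-in occupancy measure.
    q = (cost + lam * usage) / S
    baseline = (pi * q).sum(axis=-1, keepdims=True)
    grad = pi * (q - baseline)          # gradient of E_pi[q] w.r.t. logits
    theta -= a_fast * grad              # fast timescale: minimize Lagrangian

    # Slow timescale: projected dual ascent on the coupling constraint.
    used = (pi * usage).sum(axis=-1).mean(axis=-1).sum()
    lam = max(0.0, lam + a_slow * (used - budget))

print("multiplier:", round(lam, 3), "expected usage:", round(used, 3))
```

In this pattern, the separation of step sizes (here decaying as t^-0.6 versus t^-0.9) is what makes the scheme two-timescale: the policy effectively tracks the current multiplier, while the multiplier drifts slowly toward dual feasibility.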