Conference paper, Year: 2022

OpReg-Boost: Learning to Accelerate Online Algorithms with Operator Regression

Abstract

This paper presents a new regularization approach -- termed OpReg-Boost -- to boost the convergence and lessen the asymptotic error of online optimization and learning algorithms. In particular, the paper considers online algorithms for optimization problems with a time-varying (weakly) convex composite cost. For a given online algorithm, OpReg-Boost learns the closest algorithmic map that yields linear convergence; to this end, the learning procedure hinges on the concept of operator regression. We show how to formalize the operator regression problem and propose a computationally efficient Peaceman-Rachford solver that exploits a closed-form solution of simple quadratically-constrained quadratic programs (QCQPs). Simulation results showcase the superior properties of OpReg-Boost with respect to the more classical forward-backward algorithm, FISTA, and Anderson acceleration.
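The abstract does not spell out the operator regression problem, but its shape can be sketched. A minimal sketch, assuming the algorithmic map \(\mathcal{T}\) is sampled at points \(x_1,\dots,x_N\) and fitted by the nearest \(\kappa\)-contractive map (the fitted values \(\hat y_i\), the contraction modulus \(\kappa\), and the pairwise constraints below are illustrative assumptions, not taken verbatim from the paper):

\[
\min_{\hat y_1,\dots,\hat y_N}\; \sum_{i=1}^{N} \bigl\| \hat y_i - \mathcal{T} x_i \bigr\|^2
\quad \text{s.t.} \quad
\bigl\| \hat y_i - \hat y_j \bigr\|^2 \le \kappa^2 \bigl\| x_i - x_j \bigr\|^2, \quad \forall\, i < j,
\]

with \(\kappa \in (0,1)\). Under this formalization each pairwise constraint is a simple quadratic constraint, so the subproblems arising in a Peaceman-Rachford splitting reduce to small QCQPs admitting closed-form solutions, consistent with the solver described in the abstract; iterating a \(\kappa\)-contractive map then converges linearly to its fixed point by the Banach fixed-point theorem, which is the source of the claimed linear convergence.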

Dates and versions

hal-03664795, version 1 (11-05-2022)

Identifiers

Cite

Nicola Bastianello, Andrea Simonetto, Emiliano Dall'Anese. OpReg-Boost: Learning to Accelerate Online Algorithms with Operator Regression. Learning for Dynamics & Control Conference, Jun 2022, Stanford, United States. ⟨hal-03664795⟩