A Stochastic Gradient Type Algorithm for Closed Loop Problems
Abstract
We focus on solving closed-loop stochastic optimization problems, and propose a perturbed gradient algorithm for this purpose. The main hurdle in such problems is that the control variables are infinite dimensional, and hence have to be represented in a finite way in order to solve the problem numerically. Likewise, the gradient of the criterion is itself an infinite-dimensional object. Our algorithm replaces this exact (and unknown) gradient by a perturbed one, namely the product of the true gradient evaluated at a random point and a kernel function which spreads this gradient over a neighbourhood of that point. Proceeding this way, we explore the whole space, iteration after iteration, through random points. Since each kernel function is fully determined by a finite (and small) number of parameters, say N, the control at iteration k is fully determined, as an infinite-dimensional object, by at most N × k parameters. The main strength of this method is that it avoids any discretization of the underlying space, provided that we can draw as many points as needed in this space. It thus offers a new way to handle the measurability constraints of the problem. Moreover, the randomization of the algorithm implies that the most probable parts of the space are the most explored ones, which is a priori an interesting feature. In this paper, we first prove a convergence result for this algorithm in the general case, and then present a few numerical examples illustrating the usefulness of this method for solving practical stochastic optimization problems.
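To make the mechanism concrete, here is a minimal Python sketch of one such kernel-based perturbed gradient iteration. Everything specific in it is an assumption, not taken from the paper: a scalar control, a Gaussian kernel, a toy quadratic tracking criterion, and simple step-size and bandwidth schedules.

```python
import numpy as np

def gaussian_kernel(x, center, eps):
    """Kernel bump used to spread a pointwise gradient around `center`."""
    return np.exp(-0.5 * ((x - center) / eps) ** 2)

def grad_J(u, x):
    """Toy pointwise gradient of the criterion at x (illustrative only).
    Here J(u) = E[(u(X) - sin(X))^2] / 2, so the gradient at x is u(x) - sin(x)."""
    return u(x) - np.sin(x)

def solve(n_iter=2000, rng=np.random.default_rng(0)):
    # The control is stored as a sum of kernel terms (center, weight, bandwidth):
    # at iteration k it is fully determined by 3 * k scalar parameters.
    terms = []

    def u(x):
        return sum(w * gaussian_kernel(x, c, e) for c, w, e in terms)

    for k in range(1, n_iter + 1):
        x_k = rng.normal()          # random point drawn from the noise law
        rho_k = 1.0 / k ** 0.75     # step size (assumed schedule)
        eps_k = 1.0 / k ** 0.25     # kernel bandwidth (assumed schedule)
        g_k = grad_J(u, x_k)        # true gradient evaluated at the random point
        # Perturbed gradient step: descend by -rho_k * g_k, spread around x_k.
        terms.append((x_k, -rho_k * g_k, eps_k))
    return u
```

Note how no grid over the underlying space is ever built: the control remains a genuine function, evaluable anywhere, and the sampling distribution of x_k concentrates the updates on the most probable regions of the space.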