Temporal difference learning with kernels for pricing American-style options
Abstract
In this paper, we study the problem of estimating the cost-to-go function of an infinite-horizon discounted Markov chain with possibly continuous state space. For implementation purposes, the state space is typically discretized; as soon as its dimension becomes large, the computation is no longer tractable, a phenomenon referred to as the curse of dimensionality. Approximation methods for dynamic programming are therefore of major importance. A powerful such method, often referred to as neuro-dynamic programming, consists in representing the Bellman function as a linear combination of a priori chosen functions, called neurons. In this article, we propose an alternative approach, very similar to temporal differences, based on functional gradient descent and using an infinite kernel basis. Although aimed at infinite-dimensional problems, our algorithm is implementable in practice. We prove its convergence and present applications to, e.g., Bermudan option pricing.
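To convey the flavour of such a method, here is a minimal sketch (under our own notational assumptions, not the exact algorithm of the paper) of a kernel-based temporal-difference step written as a functional gradient update in a reproducing kernel Hilbert space. Here $k$ denotes a positive-definite kernel, $\alpha$ the discount factor, $\gamma_n$ a step size, $(x_n, x_{n+1})$ an observed transition with reward $r_n$, and $J_n$ the current estimate of the cost-to-go function:

\[
d_n = r_n + \alpha\, J_n(x_{n+1}) - J_n(x_n),
\qquad
J_{n+1} = J_n + \gamma_n\, d_n\, k(x_n, \cdot).
\]

With updates of this form, the estimate $J_{n+1}$ remains a finite linear combination of kernel functions centred at the visited states, which is what makes an infinite kernel basis usable in practice.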