Conference paper, 2018

Continual State Representation Learning for Reinforcement Learning using Generative Replay

Abstract

We consider the problem of building a state representation model in a continual fashion. As the environment changes, the aim is to efficiently compress the sensory state's information without losing past knowledge. The learned features are then fed to a Reinforcement Learning algorithm to learn a policy. We propose to use Variational Auto-Encoders for state representation, and Generative Replay, i.e. the use of generated samples, to maintain past knowledge. We also provide a general and statistically sound method for automatic environment change detection. Our method provides efficient state representation as well as forward transfer, and avoids catastrophic forgetting. The resulting model is capable of incrementally learning information without using past data and with a bounded system size.
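To make the mechanism in the abstract concrete, below is a minimal sketch of generative replay with a Variational Auto-Encoder, written in PyTorch. It is not the authors' released implementation: the network sizes, the optimizer, the equal mix of real and replayed states, and all function names are illustrative assumptions.

    # Minimal sketch (assumed details, not the authors' code): train a VAE on
    # states from the current environment while replaying samples generated by
    # a frozen copy trained on previous environments, so no past data is stored.

    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, obs_dim=64, latent_dim=8):
            super().__init__()
            self.enc = nn.Linear(obs_dim, 128)
            self.mu = nn.Linear(128, latent_dim)
            self.logvar = nn.Linear(128, latent_dim)
            self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, obs_dim))

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
            return self.dec(z), mu, logvar

    def vae_loss(recon, x, mu, logvar):
        # Reconstruction term plus KL divergence to the standard normal prior.
        rec = F.mse_loss(recon, x, reduction="sum")
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kld

    def train_with_generative_replay(vae, old_vae, new_states, steps=1000):
        """Fit `vae` on states from the new environment; if `old_vae` exists,
        mix in samples it generates so past environments are not forgotten."""
        opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
        for _ in range(steps):
            batch = new_states[torch.randint(len(new_states), (64,))]
            if old_vae is not None:
                with torch.no_grad():
                    z = torch.randn(64, old_vae.mu.out_features)
                    replay = old_vae.dec(z)  # pseudo-data standing in for past environments
                batch = torch.cat([batch, replay])
            recon, mu, logvar = vae(batch)
            loss = vae_loss(recon, batch, mu, logvar)
            opt.zero_grad(); loss.backward(); opt.step()
        return copy.deepcopy(vae).eval()  # frozen generator for the next environment

Because only the single frozen generator is carried forward, the system size stays bounded regardless of how many environments have been seen, which is the property claimed in the abstract.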
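The abstract also mentions a general, statistically sound method for automatic environment change detection, but does not spell it out here. A hedged sketch follows, assuming the detector monitors the VAE's reconstruction error and compares a recent window of errors against a reference window with Welch's t-test; the choice of statistic and test is our assumption.

    # Hedged sketch of change detection (assumed design, not confirmed by the
    # abstract): flag an environment change when recent reconstruction errors
    # differ significantly from those on reference states.

    from scipy import stats
    import torch

    @torch.no_grad()
    def reconstruction_errors(vae, states):
        recon, _, _ = vae(states)
        return ((recon - states) ** 2).mean(dim=1).numpy()

    def environment_changed(vae, reference_states, recent_states, alpha=0.01):
        ref = reconstruction_errors(vae, reference_states)
        new = reconstruction_errors(vae, recent_states)
        _, p_value = stats.ttest_ind(ref, new, equal_var=False)  # Welch's t-test
        return p_value < alpha

When a change is flagged, the current model can be frozen and used as the replay generator for the next environment, tying the detector to the training loop sketched above.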

Dates and versions

hal-01951399, version 1 (11-12-2018)

Identifiers

  • HAL Id: hal-01951399, version 1

Cite

Hugo Caselles-Dupré, Michael Garcia-Ortiz, David Filliat. Continual State Representation Learning for Reinforcement Learning using Generative Replay. Workshop on Continual Learning, NeurIPS 2018 - Thirty-second Conference on Neural Information Processing Systems, Dec 2018, Montréal, Canada. ⟨hal-01951399⟩