Conference paper, Year: 2019

Interactive Robot Learning for Multimodal Emotion Recognition

Abstract

Interaction plays a critical role in learning the skills needed for natural communication. In human-robot interaction (HRI), robots can receive feedback during an interaction and use it to improve their social abilities. In this context, we propose an interactive robot learning framework that uses multimodal data from thermal facial images and human gait data for online emotion recognition. We also propose a new decision-level fusion method for the multimodal classification using a Random Forest (RF) model. Our hybrid online emotion recognition model focuses on the detection of four human emotions (i.e., neutral, happiness, anger, and sadness). After offline training and testing of the hybrid model, the accuracy of the online emotion recognition system is found to be more than 10% lower than that of the offline one. To improve the system, human verbal feedback is injected into the interactive robot learning loop. With this feedback, the new online emotion recognition system achieves a 12.5% accuracy increase over the online system without interactive robot learning.
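The fusion and feedback mechanism described above can be sketched in code. The following is a minimal illustration only, assuming one RF per modality whose class probabilities are concatenated and passed to a third fusion RF, and assuming verbal feedback supplies corrected labels for refitting; the feature dimensions, the fused_features and incorporate_feedback helpers, and the probability-stacking scheme are illustrative assumptions, not the authors' exact method.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["neutral", "happiness", "anger", "sadness"]
rng = np.random.default_rng(0)

# Placeholder data standing in for extracted thermal-face and gait features.
X_thermal = rng.normal(size=(200, 64))
X_gait = rng.normal(size=(200, 32))
y = rng.integers(0, len(EMOTIONS), size=200)

# One unimodal RF classifier per modality.
rf_thermal = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_thermal, y)
rf_gait = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_gait, y)

def fused_features(xt, xg):
    """Decision-level fusion input: the two unimodal class-probability
    vectors stacked side by side."""
    return np.hstack([rf_thermal.predict_proba(np.atleast_2d(xt)),
                      rf_gait.predict_proba(np.atleast_2d(xg))])

# A third RF learns the final decision from the stacked probabilities.
P_train = fused_features(X_thermal, X_gait)
rf_fusion = RandomForestClassifier(n_estimators=100, random_state=0).fit(P_train, y)

# Interactive learning: a human verbal correction supplies the true label
# and the fusion RF is refit on the growing buffer (scikit-learn RFs have
# no incremental-update API).
feedback_X, feedback_y = [P_train], [y]

def incorporate_feedback(xt, xg, true_label):
    feedback_X.append(fused_features(xt, xg))
    feedback_y.append(np.array([true_label]))
    rf_fusion.fit(np.vstack(feedback_X), np.concatenate(feedback_y))

print(EMOTIONS[int(rf_fusion.predict(fused_features(X_thermal[0], X_gait[0]))[0])])

In this sketch the fusion RF sees only the unimodal probability outputs, which is what makes the fusion decision-level rather than feature-level; a feature-level variant would instead concatenate the raw thermal and gait feature vectors before any classification.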
Main file: ICSR2019_final_1.pdf (1.25 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02371856, version 1 (20-11-2019)

Identifiers

  • HAL Id: hal-02371856, version 1

Cite

Chuang Yu, Adriana Tapus. Interactive Robot Learning for Multimodal Emotion Recognition. The Eleventh International Conference on Social Robotics, Nov 2019, Madrid, Spain. ⟨hal-02371856⟩
