A multi-modal perception based assistive robotic system for the elderly - Université Toulouse III - Paul Sabatier - Toulouse INP
Journal article in Computer Vision and Image Understanding, 2016

A multi-modal perception based assistive robotic system for the elderly

Abstract

In this paper, we present a multi-modal perception based framework to realize a non-intrusive domestic assistive robotic system. It is non-intrusive in that it only starts interacting with a user when it detects the user's intention to do so. All of the robot's actions rely on multi-modal perception, which includes user detection based on RGB-D data, detection of the user's intention for interaction from RGB-D and audio data, and communication via user-distance-mediated speech recognition. The use of multi-modal cues throughout the robotic activity paves the way to successful robotic runs (94% success rate). Each perceptual component is systematically evaluated using appropriate datasets and evaluation metrics. Finally, the complete system is fully integrated on the PR2 robotic platform and validated through system sanity-check runs and user studies with 17 volunteer elderly participants.
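The non-intrusive behavior the abstract describes, detect the user, confirm an intention-for-interaction cue, then adapt speech recognition to the measured user distance, could be sketched as follows. This is a minimal illustrative sketch only: the function name, thresholds, and mode labels are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a non-intrusive, distance-mediated interaction
# trigger. The robot engages only when a user is detected AND an
# intention-for-interaction score clears a threshold; the speech-recognition
# configuration is then chosen from the user's distance. All constants and
# labels here are illustrative, not from the paper.

def interaction_decision(user_detected, intention_score, distance_m,
                         intention_threshold=0.5):
    """Return (engage, asr_mode): whether to start interaction and which
    speech-recognition setup to use, mediated by user distance."""
    if not user_detected or intention_score < intention_threshold:
        return False, None  # stay non-intrusive: no clear cue, no interaction
    # Distance-mediated choice of ASR configuration (illustrative bands).
    if distance_m < 1.5:
        return True, "close-talk"
    elif distance_m < 3.0:
        return True, "distant-speech"
    return False, None  # user too far away to interact reliably
```

Used this way, the robot's perception pipeline feeds detection, intention, and distance estimates into a single gate, so interaction never starts from a unimodal cue alone.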
Main file: cviu_revised_submission_Mollaret.pdf (2.29 MB)
Origin: files produced by the author(s)

Dates and versions

hal-01300463, version 1 (11-04-2016)

Identifiers

Cite

Christophe Mollaret, Alhayat Ali Mekonnen, Frédéric Lerasle, Isabelle Ferrané, Julien Pinquier, et al.. A multi-modal perception based assistive robotic system for the elderly. Computer Vision and Image Understanding, 2016, Special issue on Assistive Computer Vision and Robotics : Assistive Solutions for Mobility, Communication and HMI, 149, pp.78-97. ⟨10.1016/j.cviu.2016.03.003⟩. ⟨hal-01300463⟩
328 views
297 downloads
