ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition - INRIA - Institut National de Recherche en Informatique et en Automatique
Conference Paper, Year: 2016

ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition

Abstract

In this paper, we present two large multi-modal video datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which contains a total of more than 50,000 gestures for the "one-shot-learning" competition. To increase the potential of the old dataset, we designed new, well-curated datasets covering 249 gesture labels and including 47,933 gestures with manually labeled begin and end frames in the sequences. Using these datasets, we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for "user independent" gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures, while the second is designed for gesture classification from segmented data. A baseline method based on the bag of visual words (BoVW) model is also presented.
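The abstract names a bag of visual words (BoVW) baseline. As a rough illustration of that general technique (not the paper's actual feature pipeline), a BoVW classifier clusters local descriptors into a codebook, encodes each video as a histogram of codeword counts, and trains a classifier on the histograms. The sketch below uses random placeholder descriptors and scikit-learn; all names and parameters are illustrative assumptions.

```python
# Minimal BoVW sketch with placeholder data -- NOT the descriptors or
# settings used in the ChaLearn baseline; purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder "videos": 20 clips, each a set of 30 local descriptors
# (8-D), drawn from two synthetic gesture classes.
videos = [rng.normal(loc=c, size=(30, 8)) for c in (0, 1) for _ in range(10)]
labels = np.array([0] * 10 + [1] * 10)

# 1) Build a visual vocabulary by clustering all descriptors.
k = 16
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(videos))

# 2) Encode each video as a normalized histogram of codeword counts.
def bovw_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(v) for v in videos])

# 3) Train a linear classifier on the BoVW histograms.
clf = LinearSVC().fit(X, labels)
train_acc = clf.score(X, labels)
```

In a real pipeline the placeholder descriptors would be replaced by spatio-temporal features extracted from the RGB-D streams, and accuracy would be measured on a held-out "user independent" split rather than the training set.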
No file deposited

Dates and versions

hal-01381151, version 1 (14-10-2016)

Identifiers

  • HAL Id: hal-01381151, version 1

Cite

Jun Wan, Yibing Zhao, Shuai Zhou, Isabelle Guyon, Sergio Escalera, et al.. ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition. CVPR 2016 - IEEE Conference on Computer Vision and Pattern Recognition - Workshops, 2016, Las Vegas, United States. ⟨hal-01381151⟩
3140 views
0 downloads
