Abstract
In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which contains more than 50000 gestures and was used for the 'one-shot-learning' competition. To increase the potential of the older dataset, we designed new, well-curated datasets covering 249 gesture labels and including 47933 gestures with manually annotated begin and end frames in each sequence. Using these datasets, we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for 'user independent' gesture recognition. The first challenge targets gesture spotting and recognition in continuous sequences of gestures, while the second targets gesture classification from segmented data. A baseline method based on the bag of visual words model is also presented.
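The abstract mentions a bag of visual words (BoVW) baseline. As a minimal sketch of the general BoVW pipeline (not the paper's exact implementation: the codebook size, descriptor type, and classifier here are illustrative assumptions), local descriptors are clustered into a codebook with k-means, and each video is represented by a normalized histogram of codeword assignments:

```python
import numpy as np

def build_codebook(descriptors, k=8, iters=10, seed=0):
    """Toy k-means codebook over local descriptors (Lloyd's algorithm).

    descriptors: (n, d) array of local features extracted from videos.
    Returns a (k, d) array of cluster centers (the 'visual words').
    """
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center.
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned descriptors.
        for j in range(k):
            pts = descriptors[assign == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bovw_histogram(descriptors, codebook):
    """Quantize descriptors to nearest codeword; return a normalized histogram.

    This histogram is the fixed-length video representation that a
    classifier (e.g. an SVM) would then be trained on.
    """
    dists = np.linalg.norm(descriptors[:, None] - codebook[None], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Illustrative usage with synthetic descriptors standing in for real features.
rng = np.random.default_rng(1)
video_descriptors = rng.normal(size=(200, 16))
codebook = build_codebook(video_descriptors, k=8)
feature = bovw_histogram(video_descriptors, codebook)
```

The resulting `feature` vector has one bin per visual word and sums to 1, so videos with different numbers of descriptors become directly comparable.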
Original language | English |
---|---|
Title of host publication | Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2016 |
Number of pages | 9 |
Publisher | IEEE Computer Society Press |
Publication date | 16 Dec 2016 |
Pages | 761-769 |
Article number | 7789590 |
ISBN (Electronic) | 9781467388504 |
Publication status | Published - 16 Dec 2016 |
Externally published | Yes |
Event | 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2016 - Las Vegas, United States |
Duration | 26 Jun 2016 → 1 Jul 2016 |
Conference
Conference | 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2016 |
---|---|
Country/Territory | United States |
City | Las Vegas |
Period | 26/06/2016 → 01/07/2016 |
Series | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops |
---|---|
ISSN | 2160-7508 |
Bibliographical note
Publisher Copyright: © 2016 IEEE.