TY - GEN
T1 - ChaLearn multi-modal gesture recognition 2013
T2 - 2013 15th ACM International Conference on Multimodal Interaction, ICMI 2013
AU - Escalera, Sergio
AU - Gonzàlez, Jordi
AU - Baró, Xavier
AU - Reyes, Miguel
AU - Guyon, Isabelle
AU - Athitsos, Vassilis
AU - Escalante, Hugo
AU - Sigal, Leonid
AU - Argyros, Antonis
AU - Sminchisescu, Cristian
AU - Bowden, Richard
AU - Sclaroff, Stan
PY - 2013
Y1 - 2013
N2 - We organized a Grand Challenge and Workshop on Multi-Modal Gesture Recognition. The MMGR Grand Challenge focused on the recognition of continuous natural gestures from multi-modal data (including RGB, depth, user mask, skeletal model, and audio). We made available a large labeled video database of 13,858 gestures from a lexicon of 20 Italian gesture categories recorded with a Kinect™ camera. More than 54 teams participated in the challenge, and the winner of the competition achieved a final error rate of 12%. Winners of the competition published their work in the workshop of the challenge. The MMGR Workshop was held at the ICMI 2013 conference in Sydney. A total of 9 papers on multi-modal gesture recognition were accepted for presentation. These cover multi-modal descriptors, multi-class learning strategies for segmentation and classification in temporal data, as well as relevant applications in the field, including multi-modal social signal processing and multi-modal human-computer interfaces. Five invited speakers participated in the workshop: Leonid Sigal from Disney Research, Antonis Argyros from FORTH, Institute of Computer Science, Cristian Sminchisescu from Lund University, Richard Bowden from University of Surrey, and Stan Sclaroff from Boston University. They summarized their research in the field and discussed past, current, and future challenges in multi-modal gesture recognition.
AB - We organized a Grand Challenge and Workshop on Multi-Modal Gesture Recognition. The MMGR Grand Challenge focused on the recognition of continuous natural gestures from multi-modal data (including RGB, depth, user mask, skeletal model, and audio). We made available a large labeled video database of 13,858 gestures from a lexicon of 20 Italian gesture categories recorded with a Kinect™ camera. More than 54 teams participated in the challenge, and the winner of the competition achieved a final error rate of 12%. Winners of the competition published their work in the workshop of the challenge. The MMGR Workshop was held at the ICMI 2013 conference in Sydney. A total of 9 papers on multi-modal gesture recognition were accepted for presentation. These cover multi-modal descriptors, multi-class learning strategies for segmentation and classification in temporal data, as well as relevant applications in the field, including multi-modal social signal processing and multi-modal human-computer interfaces. Five invited speakers participated in the workshop: Leonid Sigal from Disney Research, Antonis Argyros from FORTH, Institute of Computer Science, Cristian Sminchisescu from Lund University, Richard Bowden from University of Surrey, and Stan Sclaroff from Boston University. They summarized their research in the field and discussed past, current, and future challenges in multi-modal gesture recognition.
KW - computer vision
KW - gesture recognition
KW - multi-modal data analysis
UR - http://www.scopus.com/inward/record.url?scp=84892567660&partnerID=8YFLogxK
U2 - 10.1145/2522848.2532597
DO - 10.1145/2522848.2532597
M3 - Article in proceedings
AN - SCOPUS:84892567660
SN - 9781450321297
T3 - ICMI 2013 - Proceedings of the 2013 ACM International Conference on Multimodal Interaction
SP - 365
EP - 370
BT - ICMI 2013 - Proceedings of the 2013 ACM International Conference on Multimodal Interaction
Y2 - 9 December 2013 through 13 December 2013
ER -