Action Recognition in Semi-synthetic Images using Motion Primitives

Publication: Working paper (Research)


Abstract

This technical report describes an action recognition approach based on motion primitives. A few characteristic time instances are found in a sequence containing an action, and the action is classified from these instances. The characteristic instances are defined solely by the human motion, hence the name motion primitives. The motion primitives are extracted using double difference images and represented by four features. In each frame, the primitive, if any, that best explains the observed data is identified. This leads to a discrete recognition problem, since a video sequence is converted into a string of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The method is evaluated on five one-arm gestures. A test with semi-synthetic input data achieves a recognition rate of 96.5%.
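To make the extraction step concrete, here is a minimal sketch of how a double difference image and a per-frame primitive symbol could be computed. This is an illustration only, not the report's implementation: the threshold values, the particular choice of four features (motion area, centroid x/y, bounding-box aspect ratio), and the prototype-matching rule are assumptions made for the example.

```python
import numpy as np


def double_difference(prev_frame, frame, next_frame, thresh=25):
    """Double difference image: keep only pixels that changed in both
    consecutive frame pairs (grayscale uint8 frames assumed)."""
    d1 = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh
    d2 = np.abs(next_frame.astype(np.int16) - frame.astype(np.int16)) > thresh
    return np.logical_and(d1, d2)


def motion_features(motion_mask):
    """Four crude features of the motion region (hypothetical stand-ins
    for the report's features): relative area, normalized centroid x/y,
    and bounding-box aspect ratio."""
    ys, xs = np.nonzero(motion_mask)
    if xs.size == 0:
        return None  # no motion observed in this frame
    area = xs.size / motion_mask.size
    cx = xs.mean() / motion_mask.shape[1]
    cy = ys.mean() / motion_mask.shape[0]
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return np.array([area, cx, cy, width / height])


def best_primitive(features, primitive_prototypes, accept=0.5):
    """Pick the primitive symbol whose prototype feature vector best
    explains the observation; return None if nothing is close enough.
    The acceptance threshold is an arbitrary placeholder."""
    if features is None:
        return None
    sym, dist = min(
        ((s, np.linalg.norm(features - p)) for s, p in primitive_prototypes.items()),
        key=lambda kv: kv[1],
    )
    return sym if dist < accept else None
```

Running `best_primitive` over every frame, discarding the `None` entries, and collapsing runs of identical symbols would yield the symbol string that is subsequently pruned and classified.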

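The final step compares the observed symbol string against a reference string for each gesture. The report applies a probabilistic Edit Distance classifier; the sketch below substitutes the plain (unweighted) Levenshtein distance and made-up reference strings, purely to illustrate the string-matching idea.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two symbol strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


# Hypothetical reference strings, one per one-arm gesture.
REFERENCES = {
    "point": "abcd",
    "wave": "efgf",
    "raise": "ahij",
    "lower": "jiha",
    "stop": "akl",
}


def classify(pruned_string: str) -> str:
    """Return the gesture whose reference string is closest in edit
    distance to the observed primitive string."""
    return min(REFERENCES, key=lambda g: edit_distance(pruned_string, REFERENCES[g]))
```
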
Original language: English
Place of publication: Aalborg
Publisher: Computer Vision and Media Technology Laboratory (CVMT), Aalborg University
Number of pages: 7
Status: Published - 2006
Keywords: Gesture recognition, Motion primitives


Cite this

Fihl, P., Holte, M. B., Moeslund, T. B., & Reng, L. (2006). Action Recognition in Semi-synthetic Images using Motion Primitives. Aalborg: Computer Vision and Media Technology Laboratory (CVMT), Aalborg University.