Recognition and Prediction of Human Actions for Safe Human-Robot Collaboration

Research output: Contribution to conference without publisher/journal › Conference abstract for conference › Research

Abstract

Collaborative industrial robots are creating new opportunities for collaboration between humans and robots in shared workspaces. For such collaboration to be efficient, robots, as well as humans, need to understand the other's intentions and the action currently in progress. In this work, we propose a method for learning, classifying, and predicting actions taken by a human. Our proposed method is based on the human skeleton model from a Kinect sensor. To demonstrate our approach, we chose a typical pick-and-place scenario; therefore, only the arms and upper body are considered, 15 joints in total. Based on the trajectories of these joints, different classes of motion are separated using partitioning around medoids (PAM). Subsequently, a support vector machine (SVM) is trained on these classes to form a library of human motions. The approach allows run-time detection of when a new motion has been initiated, early prediction of the motion class based on the first fraction of the movement, and classification of the motion when it has ended. Additionally, the approach makes it possible to expand the library of human motions at run-time by detecting previously unknown motion types.
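The abstract describes a two-stage pipeline: unsupervised separation of motion classes with PAM, followed by a supervised SVM trained on the discovered classes. The sketch below illustrates that pipeline only in outline; it is not the authors' implementation. The trajectory data is synthetic, the motion names ("reach", "place") are hypothetical, and a minimal PAM is implemented directly so the example stays self-contained (using scikit-learn only for the SVM).

```python
import numpy as np
from sklearn.svm import SVC

def pam(dist, k, n_iter=100):
    """Basic partitioning around medoids (PAM) on a precomputed distance matrix."""
    # deterministic farthest-point initialisation of the k medoids
    medoids = [0]
    while len(medoids) < k:
        medoids.append(int(dist[medoids].min(axis=0).argmax()))
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)  # nearest-medoid assignment
        new = []
        for c in range(k):
            members = np.where(labels == c)[0]
            # the new medoid minimises the total distance within its own cluster
            new.append(int(members[dist[np.ix_(members, members)].sum(axis=1).argmin()]))
        if new == medoids:
            break
        medoids = new
    return np.argmin(dist[:, medoids], axis=1)

# Synthetic stand-ins for joint trajectories: each row is one motion sample,
# flattened to a fixed-length feature vector (real data would cover 15 joints over time).
rng = np.random.default_rng(0)
reach = rng.normal(0.0, 0.1, size=(20, 30))  # hypothetical "reach" motions
place = rng.normal(1.0, 0.1, size=(20, 30))  # hypothetical "place" motions
X = np.vstack([reach, place])

# pairwise Euclidean distances between trajectory feature vectors
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
labels = pam(D, k=2)

# an SVM trained on the discovered classes acts as the motion library
clf = SVC(kernel="rbf").fit(X, labels)
```

A newly observed motion would then be classified with `clf.predict`; early prediction, as described in the abstract, would amount to running the same classifier on features extracted from only the first fraction of the movement.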
Original language: English
Publication date: 2017
Publication status: Published - 2017
Event: 6th Aalborg U Robotics Workshop - Aalborg, Denmark
Duration: 27 Nov 2017 – 27 Nov 2017

Conference

Conference: 6th Aalborg U Robotics Workshop
Country: Denmark
City: Aalborg
Period: 27/11/2017 – 27/11/2017


Keywords

  • Robot vision
  • Machine Learning

Cite this

Andersen, R. S., Bøgh, S., & Ceballos, I. (2017). Recognition and Prediction of Human Actions for Safe Human-Robot Collaboration. Abstract from 6th Aalborg U Robotics Workshop, Aalborg, Denmark.