Depth Value Pre-Processing for Accurate Transfer Learning Based RGB-D Object Recognition

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

3 Citations (Scopus)
511 Downloads (Pure)

Abstract

Object recognition is one of the important tasks in computer vision and has found numerous applications. The depth modality has been proven to provide supplementary information to the common RGB modality for object recognition. In this paper, we propose methods to improve the recognition performance of an existing deep learning based RGB-D object recognition model, namely the FusionNet proposed by Eitel et al. First, we show that encoding the depth values as colorized surface normals is beneficial when the model is initialized with weights learned from training on ImageNet data. Additionally, we show that the RGB stream of the FusionNet model can benefit from using a deeper network architecture, namely the 16-layered VGGNet, in exchange for the 8-layered CaffeNet. In combination, these changes improve the recognition performance by 2.2% in comparison to the original FusionNet when evaluating on the Washington RGB-D Object Dataset.
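As an aside for readers of this record, the depth pre-processing idea mentioned in the abstract can be sketched in a few lines: a raw depth map is converted into a three-channel colorized surface-normal image so that a CNN pre-trained on RGB ImageNet data can consume the depth modality. The sketch below is a minimal illustration under stated assumptions, not the authors' exact pipeline; the function name depth_to_colorized_normals, the gradient-based normal estimation, and the [-1, 1] to [0, 255] channel mapping are all illustrative choices.

# Minimal sketch (assumed details, not the paper's exact method): encode a
# depth map as a colorized surface-normal image with three channels.
import numpy as np

def depth_to_colorized_normals(depth: np.ndarray) -> np.ndarray:
    """Convert an HxW depth map to an HxWx3 uint8 image whose channels hold
    the x/y/z components of estimated surface normals, rescaled to [0, 255]."""
    # Depth gradients approximate the surface slope in image coordinates.
    dz_dy, dz_dx = np.gradient(depth.astype(np.float32))

    # Un-normalized normal at each pixel: (-dz/dx, -dz/dy, 1).
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=np.float32)))

    # Normalize each normal to unit length, guarding against division by zero.
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    normals /= np.maximum(norm, 1e-8)

    # Map components from [-1, 1] to [0, 255] so the result resembles an RGB image.
    return ((normals + 1.0) * 0.5 * 255.0).astype(np.uint8)

if __name__ == "__main__":
    # Synthetic tilted-plane depth map, only to exercise the function.
    y, x = np.mgrid[0:240, 0:320]
    depth = 1.0 + 0.002 * x + 0.001 * y
    colorized = depth_to_colorized_normals(depth)
    print(colorized.shape, colorized.dtype)  # (240, 320, 3) uint8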
Original language: English
Title of host publication: International Joint Conference on Computational Intelligence
Publisher: SCITEPRESS Digital Library
Publication date: 2017
Pages: 121-128
ISBN (Print): 978-989-758-274-5
DOI: 10.5220/0006511501210128
Publication status: Published - 2017
Event: International Joint Conference on Computational Intelligence - Funchal, Portugal
Duration: 1 Nov 2017 - 3 Nov 2017
Conference number: 9
http://www.ijcci.org/

Conference

Conference: International Joint Conference on Computational Intelligence
Number: 9
Country: Portugal
City: Funchal
Period: 01/11/2017 - 03/11/2017
Internet address: http://www.ijcci.org/

Keywords

  • Deep Learning
  • Surface Normals
  • Computer Vision
  • Artificial Vision
  • RGB-D
  • Convolutional Neural Networks
  • Transfer Learning

Cite this

Aakerberg, A., Nasrollahi, K., Rasmussen, C. B., & Moeslund, T. B. (2017). Depth Value Pre-Processing for Accurate Transfer Learning Based RGB-D Object Recognition. In International Joint Conference on Computational Intelligence (pp. 121-128). SCITEPRESS Digital Library. https://doi.org/10.5220/0006511501210128