Abstract

We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at the pixel level, and apply it to the task of segmenting organs in eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11 % is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28 % using only basic 2D image features.
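The abstract itself contains no code; the following is a minimal, hypothetical sketch of the pipeline it describes (per-pixel features → random forest class probabilities → CRF refinement), using scikit-learn's RandomForestClassifier and pydensecrf's fully connected CRF as assumed stand-ins for the paper's own feature extraction, classifier and conditional random field. The `extract_features` helper is a placeholder, not part of the published method.

```python
# Hypothetical sketch: per-pixel features -> random forest probabilities -> CRF refinement.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax


def extract_features(rgbd_image):
    """Placeholder: return an (H*W, n_features) array of per-pixel features
    (e.g. colour, depth-derived and CNN activation-map features)."""
    raise NotImplementedError


def segment(rgbd_image, rgb_uint8, forest, n_classes, height, width):
    # Per-pixel class probabilities from the trained random forest.
    X = extract_features(rgbd_image)                      # (H*W, n_features)
    probs = forest.predict_proba(X)                       # (H*W, n_classes)
    probs = probs.T.reshape((n_classes, height, width))   # (C, H, W)

    # Contextual refinement with a fully connected CRF
    # (pydensecrf stand-in for the CRF used in the paper).
    crf = dcrf.DenseCRF2D(width, height, n_classes)
    crf.setUnaryEnergy(unary_from_softmax(probs))
    crf.addPairwiseGaussian(sxy=3, compat=3)              # spatial smoothness term
    crf.addPairwiseBilateral(sxy=80, srgb=13,             # appearance-sensitive term
                             rgbim=np.ascontiguousarray(rgb_uint8), compat=10)
    Q = crf.inference(5)
    return np.argmax(Q, axis=0).reshape((height, width))  # per-pixel label map


# Training on labelled pixels (illustrative):
#   forest = RandomForestClassifier(n_estimators=100)
#   forest.fit(X_train, y_train)
```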

Original language: English
Article number: 117
Journal: Sensors
Volume: 18
Issue number: 1
Number of pages: 15
ISSN: 1424-8220
Publication status: Published - 3 Jan 2018

Keywords

  • RGB-D
  • 3D
  • 2D
  • CNN
  • Conditional random field
  • Semantic segmentation
  • Random forest
