Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring

Research output: Contribution to journal › Journal article › Research › peer-review

3 Citations (Scopus)
187 Downloads (Pure)

Abstract

To enable robust 24-hour monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive quantitative and qualitative tests on existing and novel video datasets, benchmarked against competing approaches to multi-modal fusion.
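This record does not include implementation details, so the following is only a rough illustration of the general idea described in the abstract: two parallel segmentation pipelines (RGB and thermal) combined with weights derived from contextual quality scores. The function name fuse_segmentations, the scalar quality scores, the linear weighting, and the 0.5 threshold are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only (not the authors' method): context-weighted fusion
# of two per-pixel foreground probability maps into one segmentation mask.
import numpy as np


def fuse_segmentations(prob_rgb: np.ndarray,
                       prob_thermal: np.ndarray,
                       quality_rgb: float,
                       quality_thermal: float,
                       threshold: float = 0.5) -> np.ndarray:
    """Fuse two foreground probability maps (values in [0, 1]).

    quality_rgb / quality_thermal are scalar, context-derived scores
    (e.g. a low RGB score at night, a low thermal score when object and
    background temperatures are similar). Names and weighting are assumed.
    """
    weights = np.array([quality_rgb, quality_thermal], dtype=float)
    weights = weights / weights.sum()            # normalise weights to sum to 1
    fused = weights[0] * prob_rgb + weights[1] * prob_thermal
    return fused > threshold                     # boolean foreground mask


if __name__ == "__main__":
    # Example: at night the RGB stream is down-weighted relative to thermal.
    rng = np.random.default_rng(0)
    p_rgb = rng.random((240, 320))               # stand-in for RGB pipeline output
    p_thermal = rng.random((240, 320))           # stand-in for thermal pipeline output
    mask = fuse_segmentations(p_rgb, p_thermal,
                              quality_rgb=0.2, quality_thermal=0.9)
    print(mask.shape, mask.dtype, mask.mean())
```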
Original language: English
Journal: Sensors
Volume: 16
Issue number: 11
ISSN: 1424-8220
DOI: 10.3390/s16111947
Publication status: Published - 18 Nov 2016

Cite this

@article{3df77fa0547f42b89145db8eb979e9ef,
title = "Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring",
abstract = "In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets and benchmarked against competing approaches to multi-modal fusion.",
keywords = "Context-aware fusion, Traffic surveillance, Segmentation",
author = "Thiemo Alldieck and Bahnsen, {Chris Holmberg} and Moeslund, {Thomas B.}",
year = "2016",
month = "11",
day = "18",
doi = "10.3390/s16111947",
language = "Dansk",
volume = "16",
journal = "Sensors",
issn = "1424-8220",
publisher = "M D P I AG",
number = "11",

}

Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring. / Alldieck, Thiemo; Bahnsen, Chris Holmberg; Moeslund, Thomas B.

In: Sensors, Vol. 16, No. 11, 18.11.2016.


TY - JOUR

T1 - Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring

AU - Alldieck, Thiemo

AU - Bahnsen, Chris Holmberg

AU - Moeslund, Thomas B.

PY - 2016/11/18

Y1 - 2016/11/18

N2 - In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets and benchmarked against competing approaches to multi-modal fusion.

AB - In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets and benchmarked against competing approaches to multi-modal fusion.

KW - Context-aware fusion

KW - Traffic surveillance

KW - Segmentation

U2 - 10.3390/s16111947

DO - 10.3390/s16111947

M3 - Journal article

VL - 16

JO - Sensors

JF - Sensors

SN - 1424-8220

IS - 11

ER -