Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data

Chris Holmberg Bahnsen, David Vázquez, Antonio M. López, Thomas B. Moeslund

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

1 Citation (Scopus)
229 Downloads (Pure)

Abstract

Rainfall is a problem in automated traffic surveillance. Rain streaks occlude the road users and degrade the overall visibility, which in turn decreases object detection performance. One way of alleviating this is to artificially remove the rain from the images. This requires knowledge of corresponding rainy and rain-free images. Such images are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to account for the fact that rain falls in the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it to traffic surveillance images from the SYNTHIA and AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rain images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.
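The abstract's closing observation contrasts detection accuracy with PSNR and SSIM scores. As background, here is a minimal sketch of the standard PSNR definition in Python; this is not the paper's code, just the textbook formula the metric refers to:

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in decibels between two images.

    Higher values mean the estimate is numerically closer to the
    reference, but (as the paper shows) not necessarily better for
    downstream object detection.
    """
    # Mean squared error between the two images.
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(estimate, dtype=np.float64)) ** 2)
    # Standard PSNR definition: 20*log10(MAX) - 10*log10(MSE).
    return 20 * np.log10(data_range) - 10 * np.log10(mse)

# Toy example: a "rain-free" reference and a uniformly perturbed
# "rain-removed" estimate, in place of real surveillance frames.
reference = np.zeros((4, 4))
estimate = np.full((4, 4), 0.1)
print(psnr(reference, estimate))  # MSE = 0.01 -> 20.0 dB
```

Note that PSNR is purely a pixel-wise fidelity measure; it has no notion of whether road users remain detectable, which is why the paper evaluates with YOLOv2 as well.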

Original language: English
Title of host publication: Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Editors: Andreas Kerren, Christophe Hurter, Jose Braz
Number of pages: 8
Volume: 4
Publisher: SCITEPRESS Digital Library
Publication date: 2019
Pages: 123-130
ISBN (Electronic): 9789897583544
DOIs: 10.5220/0007361301230130
Publication status: Published - 2019
Event: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Visigrapp 2019) - Prague, Czech Republic
Duration: 25 Feb 2019 - 27 Feb 2019

Conference

Conference: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Visigrapp 2019)
Country: Czech Republic
City: Prague
Period: 25/02/2019 - 27/02/2019

Keywords

  • Image Denoising
  • Rain Removal
  • Traffic Surveillance

Cite this

Bahnsen, C. H., Vázquez, D., M. López, A., & Moeslund, T. B. (2019). Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data. In A. Kerren, C. Hurter, & J. Braz (Eds.), Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 4, pp. 123-130). SCITEPRESS Digital Library. https://doi.org/10.5220/0007361301230130
Bahnsen, Chris Holmberg ; Vázquez, David ; M. López, Antonio ; Moeslund, Thomas B. / Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data. Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. editor / Andreas Kerren ; Christophe Hurter ; Jose Braz. Vol. 4 SCITEPRESS Digital Library, 2019. pp. 123-130
@inproceedings{7dabfc21092f4eb19033c8679c0b2d38,
title = "Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data",
abstract = "Rainfall is a problem in automated traffic surveillance. Rain streaks occlude the road users and degrade the overall visibility, which in turn decreases object detection performance. One way of alleviating this is to artificially remove the rain from the images. This requires knowledge of corresponding rainy and rain-free images. Such images are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to account for the fact that rain falls in the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it to traffic surveillance images from the SYNTHIA and AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8{\%} increase in detection accuracy compared to the original rain images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.",
keywords = "Image Denoising, Rain Removal, Traffic Surveillance",
author = "Bahnsen, {Chris Holmberg} and David V{\'a}zquez and {M. L{\'o}pez}, Antonio and Moeslund, {Thomas B.}",
year = "2019",
doi = "10.5220/0007361301230130",
language = "English",
volume = "4",
pages = "123--130",
editor = "Andreas Kerren and Christophe Hurter and Jose Braz",
booktitle = "Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications",
publisher = "SCITEPRESS Digital Library",

}

Bahnsen, CH, Vázquez, D, M. López, A & Moeslund, TB 2019, Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data. in A Kerren, C Hurter & J Braz (eds), Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. vol. 4, SCITEPRESS Digital Library, pp. 123-130, 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Visigrapp 2019), Prague, Czech Republic, 25/02/2019. https://doi.org/10.5220/0007361301230130

Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data. / Bahnsen, Chris Holmberg; Vázquez, David; M. López, Antonio; Moeslund, Thomas B.

Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. ed. / Andreas Kerren; Christophe Hurter; Jose Braz. Vol. 4 SCITEPRESS Digital Library, 2019. p. 123-130.


TY - GEN

T1 - Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data

AU - Bahnsen, Chris Holmberg

AU - Vázquez, David

AU - M. López, Antonio

AU - Moeslund, Thomas B.

PY - 2019

Y1 - 2019

N2 - Rainfall is a problem in automated traffic surveillance. Rain streaks occlude the road users and degrade the overall visibility, which in turn decreases object detection performance. One way of alleviating this is to artificially remove the rain from the images. This requires knowledge of corresponding rainy and rain-free images. Such images are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to account for the fact that rain falls in the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it to traffic surveillance images from the SYNTHIA and AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rain images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.

AB - Rainfall is a problem in automated traffic surveillance. Rain streaks occlude the road users and degrade the overall visibility, which in turn decreases object detection performance. One way of alleviating this is to artificially remove the rain from the images. This requires knowledge of corresponding rainy and rain-free images. Such images are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to account for the fact that rain falls in the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it to traffic surveillance images from the SYNTHIA and AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rain images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.

KW - Image Denoising

KW - Rain Removal

KW - Traffic Surveillance

UR - http://www.scopus.com/inward/record.url?scp=85068235741&partnerID=8YFLogxK

U2 - 10.5220/0007361301230130

DO - 10.5220/0007361301230130

M3 - Article in proceeding

VL - 4

SP - 123

EP - 130

BT - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

A2 - Kerren, Andreas

A2 - Hurter, Christophe

A2 - Braz, Jose

PB - SCITEPRESS Digital Library

ER -

Bahnsen CH, Vázquez D, M. López A, Moeslund TB. Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data. In Kerren A, Hurter C, Braz J, editors, Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Vol. 4. SCITEPRESS Digital Library. 2019. p. 123-130 https://doi.org/10.5220/0007361301230130