Learning to Detect Traffic Signs: Comparative Evaluation of Synthetic and Real-World Datasets

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review


Abstract

This study compares the performance of traffic sign detection trained on synthetic data with the performance of detection trained on real-world images. Viola-Jones detectors are trained for four different traffic signs, using both synthetic and real data and varying numbers of training samples, and the resulting detectors are tested and compared. The result is that while others have successfully used synthetic training data in a classification context, it does not appear to be a good solution for detection: even when the synthetic data covers a large part of the parameter space, it still performs significantly worse than real-world data.
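For readers unfamiliar with the detector family evaluated here, the following is a minimal sketch of applying a trained Viola-Jones cascade to an image with OpenCV. The cascade file name, input image, and detection parameters are illustrative assumptions for this sketch, not values taken from the paper.

```python
# Minimal sketch: running a trained Viola-Jones cascade detector with OpenCV.
# File names and parameter values below are hypothetical, not from the paper.
import cv2

# Load a cascade trained (e.g. with opencv_traincascade) on either synthetic
# or real-world positive samples of one traffic sign class.
cascade = cv2.CascadeClassifier("stop_sign_cascade.xml")  # hypothetical file

image = cv2.imread("street_scene.jpg")                    # hypothetical file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides the cascade over an image pyramid and returns
# candidate bounding boxes as (x, y, w, h) tuples.
detections = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,   # scale step between pyramid levels
    minNeighbors=5,    # overlapping detections required to accept a box
    minSize=(24, 24),  # smallest sign size to consider, in pixels
)

for (x, y, w, h) in detections:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", image)
```

In the comparison described in the abstract, the same detection step would be run with cascades trained on synthetic versus real-world positives, and the detection results compared.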
Original language: English
Title of host publication: 21st International Conference on Pattern Recognition
Publisher: IEEE
Publication date: 11 Nov 2012
Pages: 3452-3455
ISBN (Print): 978-1-4673-2216-4
Publication status: Published - 11 Nov 2012
Event: International Conference on Pattern Recognition - Tsukuba International Congress Center, Tsukuba Science City, Japan
Duration: 11 Nov 2012 - 15 Nov 2012
Conference number: 21

Conference

Conference: International Conference on Pattern Recognition
Number: 21
Location: Tsukuba International Congress Center
Country/Territory: Japan
City: Tsukuba Science City
Period: 11/11/2012 - 15/11/2012
