Abstract
Driver activity classification is crucial for ensuring road safety, with applications ranging from driver assistance systems to autonomous vehicle control transitions. In this paper, we present a novel approach leveraging generalizable representations from vision-language models for driver activity classification. Our method employs a Semantic Representation Late Fusion Neural Network (SRLF-Net) to process synchronized video frames from multiple perspectives. Each frame is encoded using a pretrained vision-language encoder, and the resulting embeddings are fused to generate class probability predictions. By building on contrastively learned vision-language representations, our approach achieves robust performance across diverse driver activities. We evaluate our method on the Naturalistic Driving Action Recognition Dataset, demonstrating strong accuracy across many classes. Our results suggest that vision-language representations offer a promising avenue for driver monitoring systems, providing both accuracy and interpretability through natural language descriptors.
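To make the late-fusion idea concrete, here is a minimal sketch, assuming CLIP ViT-B/32 (via Hugging Face `transformers`) as the pretrained vision-language encoder, mean pooling across camera views as the fusion step, and a linear classification head. The encoder choice, fusion operator, head, and class count are illustrative assumptions, not the paper's exact SRLF-Net definition.

```python
# Sketch: late fusion of per-view vision-language embeddings for activity
# classification. Assumptions (not from the paper): frozen CLIP ViT-B/32,
# mean-pooling fusion, linear head, and a hypothetical class count of 16.
import torch
import torch.nn as nn
from transformers import CLIPModel

class LateFusionClassifier(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int = 512):
        super().__init__()
        # Pretrained vision-language encoder, kept frozen so only the
        # lightweight head is trained.
        self.encoder = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        for p in self.encoder.parameters():
            p.requires_grad = False
        # Maps the fused embedding to class logits.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, 3, 224, 224), synchronized frames
        # from multiple camera perspectives.
        b, v = views.shape[:2]
        # Encode every view independently with the shared encoder.
        feats = self.encoder.get_image_features(pixel_values=views.flatten(0, 1))
        feats = feats.reshape(b, v, -1)   # (batch, num_views, embed_dim)
        fused = feats.mean(dim=1)         # late fusion: average across views
        return self.head(fused)           # (batch, num_classes) logits

model = LateFusionClassifier(num_classes=16)
logits = model(torch.randn(2, 3, 3, 224, 224))  # 2 clips, 3 camera views each
probs = logits.softmax(dim=-1)                  # class probability predictions
```

Freezing the encoder reflects the appeal of generalizable vision-language features: the heavy representation learning is already done, and only the small fusion head needs task-specific training.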
| Original language | English |
| --- | --- |
| Title | Vision and Language for Autonomous Driving and Robotics Workshop, CVPR |
| Publication date | 2024 |
| Status | Published - 2024 |
| Event | Vision and Language for Autonomous Driving and Robotics Workshop 2024, Seattle, United States. Duration: 18 Jun 2024 → 18 Jun 2024. https://vision-language-adr.github.io/#papers |
Workshop

| Workshop | Vision and Language for Autonomous Driving and Robotics Workshop 2024 |
| --- | --- |
| Country/Territory | United States |
| City | Seattle |
| Period | 18/06/2024 → 18/06/2024 |
| Internet address | https://vision-language-adr.github.io/#papers |