Analyzing visual imagery for emergency drone landing on unknown environments

O. M. Bektash*, Jacob Naundrup Pedersen, Anders la Cour-Harbo

*Corresponding author

Publication: Contribution to journal › Journal article › Research › peer review

6 Citations (Scopus)
77 Downloads (Pure)

Abstract

Autonomous landing is a fundamental aspect of drone operations and a growing focus of the industry, with ever-increasing demands on safety. As drones are likely to become indispensable vehicles in the near future, they are expected to automatically recognize a landing spot among nearby candidate points, maneuver toward it, and ultimately perform a safe landing. Accordingly, this paper investigates vision-based location detection on the ground for an automated emergency response system that can continuously monitor the environment and spot safe places when needed. A convolutional neural network that learns image-based feature representations at multiple scales is introduced. The model takes ground images, assigns significance to various aspects within them, and recognizes the landing spots. The results support the model, showing accurate classification of ground images according to their visual content. They also demonstrate the feasibility of a computationally inexpensive implementation of the model on a small computer that can easily be embedded on a drone.
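To illustrate the kind of model the abstract describes, the following is a minimal sketch of a lightweight multi-scale CNN classifier in Keras. The paper's exact architecture, layer sizes, and class labels are not given here; the parallel kernel sizes, filter counts, and the two-class ("safe"/"unsafe") output are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a small multi-scale CNN that classifies ground-image
# patches as candidate landing spots. Layer sizes and class count are
# assumptions for illustration, not the architecture from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_landing_classifier(input_shape=(128, 128, 3), num_classes=2):
    inputs = tf.keras.Input(shape=input_shape)

    # Parallel branches with different kernel sizes capture features at
    # multiple spatial scales (fine texture vs. coarser terrain structure).
    branches = []
    for k in (3, 5, 7):
        x = layers.Conv2D(16, k, padding="same", activation="relu")(inputs)
        x = layers.MaxPooling2D(4)(x)
        x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
        x = layers.GlobalAveragePooling2D()(x)
        branches.append(x)

    # Fuse the multi-scale features and classify the patch.
    x = layers.Concatenate()(branches)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_landing_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # small parameter count keeps inference cheap on an embedded computer
```

Keeping the branches shallow and fusing them with global pooling is one way to hold the parameter count down, which is consistent with the paper's emphasis on running the model on a small drone-mounted computer.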

Original language: English
Journal: International Journal of Micro Air Vehicles
Volume: 14
Pages (from-to): 1-18
ISSN: 1756-8293
DOI
Status: Published - 1 Jul 2022
