In recent years we have seen a tremendous increase in the adoption of mobile devices: according to eMarketer estimates, more than one quarter of the global population will use smartphones in 2015.
These everyday devices, however, cannot always cope with the demands for processing power, memory, and storage made by modern applications in robotics, vision, security, gaming, and other fields. As a result, most such applications are implemented only on high-end servers, essentially waiting for a technological breakthrough that will allow them to run on smaller-scale devices.
In this project we consider the case of face recognition on mobile smartphones. We aim to enable smartphones to extract important information, such as gender and age, from a picture of a person. Furthermore, we want the smartphone to match the given person against a large database of already known faces. Applications of this kind are very popular and are used intensively, for example by governments to detect suspicious or dangerous people attending public events. We want to show that running this augmented reality application entirely on the mobile device has several limitations: the smartphone cannot compare the reference picture against a large number of faces in real time, mainly due to battery constraints but also because of its limited processing capability.
The proposed project aims to solve this problem by taking advantage of high-performance cloud infrastructures and high-bandwidth networks. In our approach, compute- or storage-intensive tasks are seamlessly offloaded from small-scale, low-power devices to powerful virtual accelerators running on high-end servers in the cloud.
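The offloading decision described above can be sketched as a simple cost comparison: offload when the estimated remote cost (data transfer plus server compute) is lower than running the task locally. The function below is a minimal illustrative sketch, not ThinkAir's actual policy; the parameters and example numbers are hypothetical.

```python
# Hedged sketch of an offload decision: all parameters and example
# numbers below are hypothetical, not taken from ThinkAir.
def should_offload(input_bytes, local_seconds, bandwidth_bps, server_speedup):
    """Return True if remote execution is estimated to be faster than local."""
    transfer_time = input_bytes * 8 / bandwidth_bps       # upload the input
    remote_time = transfer_time + local_seconds / server_speedup
    return remote_time < local_seconds

# Example: a 100 KB face crop, 10 s of local matching,
# a 10 Mbit/s link, and a server 20x faster than the phone.
print(should_offload(100_000, 10.0, 10_000_000, 20))  # True: offloading wins
```

In practice a framework would also weigh energy cost and network availability, but the same estimate-and-compare structure applies.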
We aim to implement an augmented reality application that uses the OpenCV library for facial feature extraction and the ThinkAir framework for offloading computation to accelerators in the cloud.