Mobile feature-cloud panorama construction for image recognition applications

Miguel Bordallo Lopez, Jari Hannuksela, Olli Silvén

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

Camera-based context retrieval applications are services that provide information about the user's environment based on a geo-located picture. Traditionally, the server has been in charge of computing the features of the captured picture and matching them against those stored in a database. Our design replaces still image capture with real-time video frame analysis that computes the features required for image matching on the fly. In our approach, each video frame shown in the viewfinder is analyzed to extract the relevant features. Only the new features found in a frame are sent to the server, along with the registration parameters that compose a feature-cloud panorama. To increase the robustness of the system, only high-quality frames are selected for the feature mosaic composition, and a moving-object detection stage discards undesired features. The experiments show that moving most of the computation to the device reduces bandwidth considerably, while the large number of accurate features detectable in a video sequence improves the chances of a successful match. The overall approach, based on feature extraction, registration and object detection on video frames, is achievable at a high frame rate.
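
The sketch below is not taken from the paper; it is a minimal Python/OpenCV illustration of the kind of on-device pipeline the abstract describes: selecting sharp frames, extracting local features, registering each frame against the growing feature cloud, and keeping only the features that are new so that just those descriptors and the registration parameters would be transmitted. The choice of ORB features, the Laplacian blur measure, and all thresholds are assumptions for illustration only.

# Hypothetical sketch (not the authors' code) of a device-side feature-cloud builder.
import cv2
import numpy as np

BLUR_THRESHOLD = 100.0   # variance of Laplacian below this -> frame rejected (assumed value)
MIN_MATCHES = 12         # minimum matches needed to trust the homography (assumed value)
NOVELTY_RADIUS = 8.0     # px: features closer than this to the cloud count as already known

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cloud_points = np.empty((0, 2), dtype=np.float32)  # feature positions in panorama coordinates
cloud_descs = np.empty((0, 32), dtype=np.uint8)    # their ORB descriptors

def process_frame(frame_bgr):
    """Return (new_descriptors, homography) to transmit, or None if the frame is rejected."""
    global cloud_points, cloud_descs
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Frame quality selection: discard blurred frames.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD:
        return None

    keypoints, descs = orb.detectAndCompute(gray, None)
    if descs is None:
        return None
    pts = np.float32([kp.pt for kp in keypoints])

    if len(cloud_descs) == 0:
        # The first accepted frame defines the panorama coordinate system.
        H = np.eye(3)
    else:
        # Register the frame against the accumulated feature cloud.
        matches = matcher.match(descs, cloud_descs)
        if len(matches) < MIN_MATCHES:
            return None
        src = np.float32([pts[m.queryIdx] for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([cloud_points[m.trainIdx] for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None

    # Warp this frame's features into panorama coordinates.
    warped = cv2.perspectiveTransform(pts.reshape(-1, 1, 2), H).reshape(-1, 2)

    # Keep only features that are not already covered by the cloud.
    if len(cloud_points) > 0:
        dists = np.linalg.norm(warped[:, None, :] - cloud_points[None, :, :], axis=2)
        is_new = dists.min(axis=1) > NOVELTY_RADIUS
    else:
        is_new = np.ones(len(warped), dtype=bool)

    cloud_points = np.vstack([cloud_points, warped[is_new]])
    cloud_descs = np.vstack([cloud_descs, descs[is_new]])

    # Only the new descriptors and the registration parameters leave the device.
    return descs[is_new], H

In a real application a moving-object detection stage, as mentioned in the abstract, would additionally mask out features on independently moving regions before they are added to the cloud; that step is omitted here.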
Original language: English
Title of host publication: Proceedings of the 2011 International Workshop on Applications, Systems, and Services for Camera Phone Sensing (MobiPhoto 2011), Penghu, Taiwan, 26.6.2011
Number of pages: 5
Publication status: Published - 2011
MoE publication type: Not Eligible
