An alternative to scale-space representation for extracting local features in image recognition

Hans Jørgen Andersen, Phuong Giang Nguyen

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

1 Citation (Scopus)
474 Downloads (Pure)

Abstract

In image recognition, the common approach for extracting local features using a scale-space representation usually has three main steps: first, interest points are extracted at different scales; next, for a patch around each interest point the dominant orientation is calculated and compensated for; and finally, a descriptor is computed for the derived patch (i.e., the feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method in which no scale-space is required. This is done by dividing the given image into a number of triangles whose sizes depend on the content of the image at the location of each triangle. In this paper, we demonstrate that by rotating the interest regions at the triangles it is possible, in grey-scale images, to achieve a recognition precision comparable with that of MOPS. The proposed method is tested on two data sets of buildings.
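The conventional three-step pipeline that the abstract contrasts itself against can be sketched as follows. This is a minimal single-scale toy (a Harris-style corner response instead of a full scale-space pyramid, nearest-neighbour patch rotation, and a normalised-patch descriptor loosely in the spirit of MOPS); all function names are illustrative and not taken from the paper:

```python
import numpy as np

def detect_keypoint(img):
    """Step 1 (toy): strongest Harris-style corner response.
    A real scale-space method repeats this over a pyramid of
    progressively smoothed images; here we use a single scale."""
    gy, gx = np.gradient(img)
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy
    def box(a):  # 3x3 box sum of a structure-tensor entry
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    r = Sxx * Syy - Sxy ** 2 - 0.04 * (Sxx + Syy) ** 2
    return np.unravel_index(np.argmax(r), r.shape)

def dominant_orientation(img, y, x, radius=4):
    """Step 2: dominant gradient orientation of the local patch."""
    patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    gy, gx = np.gradient(patch)
    return np.arctan2(gy.sum(), gx.sum())

def descriptor(img, y, x, theta, radius=4):
    """Step 3: sample the patch rotated by -theta (nearest neighbour)
    and normalise it -- a toy stand-in for a MOPS-style descriptor."""
    c, s = np.cos(-theta), np.sin(-theta)
    rng = np.arange(-radius, radius + 1)
    dy, dx = np.meshgrid(rng, rng, indexing="ij")
    sy = np.clip(np.round(y + c * dy - s * dx).astype(int), 0, img.shape[0] - 1)
    sx = np.clip(np.round(x + s * dy + c * dx).astype(int), 0, img.shape[1] - 1)
    p = img[sy, sx].astype(float)
    p -= p.mean()
    n = np.linalg.norm(p)
    return (p / n).ravel() if n > 0 else p.ravel()

# Usage: a synthetic image containing a bright square with sharp corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
y, x = detect_keypoint(img)
theta = dominant_orientation(img, y, x)
d = descriptor(img, y, x, theta)
print((y, x), d.shape)
```

The paper's proposal replaces step 1 entirely: instead of a scale-space pyramid, the image is partitioned into content-dependent triangles, and the interest regions are taken at those triangles.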
Original language: English
Title of host publication: International Conference on Computer Vision Theory and Applications
Editors: Gabriela Csurka, Jose Braz
Number of pages: 5
Volume: 1
Publisher: Institute for Systems and Technologies of Information, Control and Communication
Publication date: 24 Feb 2012
Pages: 341-345
ISBN (Print): 978-989-8565-03-7
Publication status: Published - 24 Feb 2012
Event: International Conference on Computer Vision Theory and Applications - Rome, Italy
Duration: 24 Feb 2012 - 26 Feb 2012

Conference

Conference: International Conference on Computer Vision Theory and Applications
Country: Italy
City: Rome
Period: 24/02/2012 - 26/02/2012

