A Spatial Self-Similarity Based Feature Learning Method for Face Recognition under Varying Poses

Xiaodong Duan, Zheng-Hua Tan

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

In this paper, we propose a low-complexity method for learning pose-invariant features for face recognition that requires no pose information. In contrast to the commonly used approach of recovering frontal face images from profile views, the proposed method extracts the subject-related part of a local feature by removing its pose-related part. First, the method generates a self-similarity feature by computing the distances between local feature descriptors of different non-overlapping blocks in a face image. Second, it subtracts from the local feature a linear transformation of the self-similarity feature; the transformation matrix is learned by minimizing the feature distance between face images of the same person under different poses while retaining the discriminative information across different persons. To evaluate our method, extensive experiments on face recognition across poses are conducted on FERET and Multi-PIE; in addition, experiments on face recognition under unconstrained conditions are conducted on LFW-a. Results on these three public databases show that the proposed method significantly improves recognition performance compared with using the original local features, and it outperforms or is comparable to related state-of-the-art pose-invariant face recognition approaches.
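As a rough illustration of the two steps summarized in the abstract, the sketch below assumes per-block local descriptors are available as NumPy arrays. The function names, the Euclidean distance, the ridge term `reg`, and the omission of the discriminative (inter-person) term are assumptions made for readability here, not the authors' exact formulation.

    import numpy as np

    def self_similarity_feature(block_descriptors):
        # Step 1: stack the pairwise distances between local descriptors of
        # different non-overlapping blocks into one self-similarity vector.
        n = len(block_descriptors)
        dists = [np.linalg.norm(block_descriptors[i] - block_descriptors[j])
                 for i in range(n) for j in range(i + 1, n)]
        return np.asarray(dists)

    def learn_transformation(delta_x, delta_s, reg=1e-3):
        # Step 2 (training): choose W to minimize ||delta_x - W @ delta_s||_F^2,
        # where each column of delta_x (resp. delta_s) is the difference of the
        # local (resp. self-similarity) features of a same-person, cross-pose
        # image pair; a small ridge term keeps the system well conditioned.
        # The paper additionally retains inter-person discriminative
        # information, which this plain least-squares sketch omits.
        k = delta_s.shape[0]
        return delta_x @ delta_s.T @ np.linalg.inv(
            delta_s @ delta_s.T + reg * np.eye(k))

    def pose_invariant_feature(local_feature, ss_feature, W):
        # Step 2 (testing): remove the pose-related part by subtracting a
        # linear transformation of the self-similarity feature.
        return local_feature - W @ ss_feature

Matching can then proceed with any standard distance (e.g. cosine) between the resulting pose-invariant features.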

Original language: English
Journal: Pattern Recognition Letters
Volume: 111
Pages (from-to): 109-116
Number of pages: 8
ISSN: 0167-8655
DOIs
Publication status: Published - 2018

Keywords

  • Face recognition
  • Feature learning
  • Pose invariant
  • Self-similarity
