A Spatial Self-Similarity Based Feature Learning Method for Face Recognition under Varying Poses

Xiaodong Duan, Zheng-Hua Tan

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

In this paper, we propose a low-complexity method to learn pose-invariant features for face recognition with no need for pose information. In contrast to the commonly used approach of recovering frontal face images from profile views, the proposed method extracts the subject-related part of a local feature by removing its pose-related part. First, the method generates a self-similarity feature by computing the distances between local feature descriptors of different non-overlapping blocks in a face image. Second, it subtracts from the local feature a linear transformation of the self-similarity feature; the transformation matrix is learned by minimizing the feature distance between face images of the same person under different poses while retaining the discriminative information across different persons. To evaluate our method, extensive experiments on face recognition across poses are conducted on FERET and Multi-PIE; in addition, experiments on face recognition under unconstrained conditions are conducted on LFW-a. Results on these three public databases show that the proposed method significantly improves recognition performance compared with using the original local features, and it outperforms or is comparable to related state-of-the-art pose-invariant face recognition approaches.
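The two steps in the abstract can be sketched as follows. This is a rough illustration, not the authors' implementation: the block descriptors, the Euclidean distance, and the matrix `W` are placeholders, and the paper's actual descriptor choice and learning objective for `W` are not reproduced here.

```python
import numpy as np


def self_similarity_feature(block_descriptors):
    """Step 1 (sketch): pairwise distances between local descriptors
    of non-overlapping blocks of a face image."""
    n = len(block_descriptors)
    dists = [np.linalg.norm(block_descriptors[i] - block_descriptors[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.asarray(dists)


def remove_pose_part(local_feature, ss_feature, W):
    """Step 2 (sketch): subtract a linear transformation of the
    self-similarity feature from the local feature. W would be learned
    from image pairs of the same person under different poses."""
    return local_feature - W @ ss_feature


# Toy usage with random stand-ins for descriptors and a learned W.
rng = np.random.default_rng(0)
blocks = [rng.standard_normal(8) for _ in range(4)]   # 4 block descriptors
s = self_similarity_feature(blocks)                   # 4*3/2 = 6 distances
f = rng.standard_normal(32)                           # a local feature
W = 0.01 * rng.standard_normal((32, 6))               # placeholder for the learned matrix
f_pose_invariant = remove_pose_part(f, s, W)
```

For 4 blocks the self-similarity feature has 6 entries (one per unordered block pair), and the output keeps the dimensionality of the original local feature.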

Original language: English
Journal: Pattern Recognition Letters
Volume: 111
Pages (from-to): 109-116
Number of pages: 8
ISSN: 0167-8655
DOIs: 10.1016/j.patrec.2018.05.007
Publication status: Published - 2018

Cite this

@article{d2f992901f9d4dbca5ba5352908e7619,
title = "A Spatial Self-Similarity Based Feature Learning Method for Face Recognition under Varying Poses",
abstract = "In this paper, we propose a low-complexity method to learn pose-invariant features for face recognition with no need for pose information. In contrast to the commonly used approaches of recovering frontal face images from profile views, the proposed method extracts the subject related part from a local feature by removing its pose related part. First, the method generates a self-similarity feature by computing the distances between local feature descriptors of different non-overlapping blocks in a face image. Secondly, it subtracts from the local feature a linear transformation of the self-similarity feature and the transformation matrix is learned through minimizing the feature distance between face images from the same person but under different poses while retaining the discriminative information across different persons. In order to evaluate our method, extensive experiments on face recognition across poses are conducted using FERET and Multi-PIE, in addition, experiments on face recognition under unconstrained situations are conducted using LFW-a. Results on these three public databases show that the proposed method is able to significantly improve the recognition performance as compared with using the original local features and outperforms or is comparable to related, state-of-the-art pose-invariant face recognition approaches.",
author = "Xiaodong Duan and Zheng-Hua Tan",
year = "2018",
doi = "10.1016/j.patrec.2018.05.007",
language = "English",
volume = "111",
pages = "109--116",
journal = "Pattern Recognition Letters",
issn = "0167-8655",
publisher = "Elsevier",
}

A Spatial Self-Similarity Based Feature Learning Method for Face Recognition under Varying Poses. / Duan, Xiaodong; Tan, Zheng-Hua.

In: Pattern Recognition Letters, Vol. 111, 2018, p. 109-116.

Research output: Contribution to journal › Journal article › Research › peer-review

TY - JOUR

T1 - A Spatial Self-Similarity Based Feature Learning Method for Face Recognition under Varying Poses

AU - Duan, Xiaodong

AU - Tan, Zheng-Hua

PY - 2018

Y1 - 2018

N2 - In this paper, we propose a low-complexity method to learn pose-invariant features for face recognition with no need for pose information. In contrast to the commonly used approaches of recovering frontal face images from profile views, the proposed method extracts the subject related part from a local feature by removing its pose related part. First, the method generates a self-similarity feature by computing the distances between local feature descriptors of different non-overlapping blocks in a face image. Secondly, it subtracts from the local feature a linear transformation of the self-similarity feature and the transformation matrix is learned through minimizing the feature distance between face images from the same person but under different poses while retaining the discriminative information across different persons. In order to evaluate our method, extensive experiments on face recognition across poses are conducted using FERET and Multi-PIE, in addition, experiments on face recognition under unconstrained situations are conducted using LFW-a. Results on these three public databases show that the proposed method is able to significantly improve the recognition performance as compared with using the original local features and outperforms or is comparable to related, state-of-the-art pose-invariant face recognition approaches.

AB - In this paper, we propose a low-complexity method to learn pose-invariant features for face recognition with no need for pose information. In contrast to the commonly used approaches of recovering frontal face images from profile views, the proposed method extracts the subject related part from a local feature by removing its pose related part. First, the method generates a self-similarity feature by computing the distances between local feature descriptors of different non-overlapping blocks in a face image. Secondly, it subtracts from the local feature a linear transformation of the self-similarity feature and the transformation matrix is learned through minimizing the feature distance between face images from the same person but under different poses while retaining the discriminative information across different persons. In order to evaluate our method, extensive experiments on face recognition across poses are conducted using FERET and Multi-PIE, in addition, experiments on face recognition under unconstrained situations are conducted using LFW-a. Results on these three public databases show that the proposed method is able to significantly improve the recognition performance as compared with using the original local features and outperforms or is comparable to related, state-of-the-art pose-invariant face recognition approaches.

U2 - 10.1016/j.patrec.2018.05.007

DO - 10.1016/j.patrec.2018.05.007

M3 - Journal article

VL - 111

SP - 109

EP - 116

JO - Pattern Recognition Letters

JF - Pattern Recognition Letters

SN - 0167-8655

ER -