Different face views project different facial topologies in 2D images. Face images spanning a small range of view angles differ less in topology and are easier to process in a unified way, whereas a wide range of view angles makes unified processing harder. Many studies therefore divide the entire face pattern space formed by multiview face images into subspaces, each covering a small range of view angles. However, a large number of subspaces is computationally demanding, and different face processing algorithms adopt different strategies to handle view changes. A principled division of the face pattern space is therefore needed to ensure good performance. Unlike previous work, this paper derives an optimal view-angle-range criterion for dividing the face pattern space, based on a careful analysis of the structural differences among multiview faces and of their influence on face processing algorithms. A face pattern space division method is then proposed. Finally, the proposed criterion and method are used to divide the face pattern space for face detection, and the result is compared with other divisions. The results show that the proposed criterion and method achieve the required processing performance with the minimum number of subspaces. This study can also help other research that needs to divide the pattern space of other objects according to view.
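As a rough illustration of the kind of division discussed above, the sketch below partitions a yaw-angle range into equal-width subspaces and assigns a face sample to one of them. The angle bounds, subspace width, and function names are hypothetical and are not taken from the paper; the paper's criterion determines the subspace width analytically rather than by fixed equal binning.

```python
from typing import List, Tuple

def divide_view_range(min_yaw: float, max_yaw: float, width: float) -> List[Tuple[float, float]]:
    """Split [min_yaw, max_yaw] into contiguous view-angle subspaces of the given width."""
    bounds = []
    lo = min_yaw
    while lo < max_yaw:
        hi = min(lo + width, max_yaw)
        bounds.append((lo, hi))
        lo = hi
    return bounds

def assign_subspace(yaw: float, subspaces: List[Tuple[float, float]]) -> int:
    """Return the index of the subspace whose angle range contains the given yaw."""
    for i, (lo, hi) in enumerate(subspaces):
        if lo <= yaw < hi or (i == len(subspaces) - 1 and yaw == hi):
            return i
    raise ValueError(f"yaw {yaw} lies outside the modeled view range")

# Example: a profile-to-profile range split into 30-degree subspaces (6 subspaces).
subspaces = divide_view_range(-90.0, 90.0, 30.0)
print(subspaces)
print(assign_subspace(42.0, subspaces))  # index of the [30, 60) subspace
```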
Target detection in multimodal (multisensor) images is a difficult problem, especially under differing viewpoints and complex backgrounds. In this paper, we propose a target detection method based on ground region matching and spatial constraints to address it. First, the extrinsic camera parameters are used to transform the images and reduce the impact of viewpoint differences. Stable ground object regions are then extracted with MSER and used to build a graph model that describes the reference image with spatial constraints, reducing the impact of the multimodal imagery and the complex backgrounds. Finally, ground region matching and registration of the model to the sensed images are used to locate the target. With this method we overcome these difficulties and obtain satisfactory experimental results: the detection rate is 94.34% on our data set of visible top-view reference images and infrared side-view sensed images.
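As a minimal sketch of the region-extraction step described above, the snippet below uses OpenCV's MSER detector to pull stable regions from a grayscale reference image. The file path and the idea of building a graph over region centroids are placeholders; the paper's graph construction, spatial constraints, and matching to sensed images are not reproduced here.

```python
import cv2

# Load the reference image in grayscale; "reference.png" is a placeholder path.
image = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

# MSER extracts maximally stable extremal regions, i.e. regions that persist over a
# range of intensity thresholds, which makes them candidates for stable ground objects.
# Default parameters are used here; in practice they would be tuned to the imagery.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(image)

# Region centroids could serve as graph nodes for spatial constraints (not shown here).
centroids = [pts.mean(axis=0) for pts in regions]
print(f"{len(regions)} stable regions extracted")
```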