Iris recognition has long been considered a secure and reliable biometric technology. However, iris images are prone to being off-angle or partially occluded when captured with less user cooperation. As a consequence, iris recognition, especially iris segmentation, suffers a serious performance drop. To solve this problem, we propose a multitask deep active contour model for off-angle iris image segmentation. Specifically, the proposed approach combines coarse and fine localization results. The coarse localization detects the approximate position of the iris region and initializes the iris contours through a series of robust preprocessing operations. The iris contours are then represented by 40 ordered, isometrically sampled polar points, and the corresponding offset vectors are regressed by a convolutional neural network over multiple iterations to obtain precise inner and outer iris boundaries. Next, the predicted iris boundaries are used as a constraint to limit the segmentation range of the noise-free iris mask. In addition, an efficient channel attention module is introduced into the mask prediction so that the network focuses on the valid iris region, and a differentiable, fast, and efficient SoftPool operation replaces traditional pooling to preserve more detail for more accurate pixel classification. Finally, the proposed segmentation approach is combined with off-the-shelf iris feature extraction models, including the traditional OM and the deep learning-based FeatNet, for iris recognition. Experimental results on two NIR datasets (CASIA-Iris-off-angle and CASIA-Iris-Africa) and a VIS dataset (SBVPI) show that the proposed approach achieves a significant improvement in both segmentation and recognition for regular as well as off-angle iris images.
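As a rough illustration of the contour representation described above, the sketch below samples 40 isometric polar points and refines them with network-predicted offsets; `offset_net`, the initialization values, and the iteration count are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of iterative contour refinement with polar sampling points.
# `offset_net` is a hypothetical regressor assumed to return one 2D offset
# per contour point; the number of points and iterations are illustrative.
import numpy as np

def sample_polar_points(center, radius, n_points=40):
    """Sample n_points equally spaced (isometric) points on a circle."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = center[0] + radius * np.cos(angles)
    ys = center[1] + radius * np.sin(angles)
    return np.stack([xs, ys], axis=1)            # shape: (n_points, 2)

def refine_contour(points, offset_net, image, n_iters=3):
    """Iteratively move contour points by network-predicted offset vectors."""
    for _ in range(n_iters):
        offsets = offset_net(image, points)      # (n_points, 2) offsets
        points = points + offsets
    return points

# Usage: initialize from a coarse localization, then refine.
init = sample_polar_points(center=(120.0, 130.0), radius=45.0)
# refined = refine_contour(init, offset_net, iris_image)
```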
Counting people remains an important task in social security applications, and several methods based on video surveillance have been proposed in recent years. In this paper, we design a novel optical sensing system that directly acquires the depth map of the scene from a single light-field camera. The light-field sensing system counts the number of people crossing a passageway, recording the direction and intensity of light rays in one snapshot without any auxiliary lighting devices. Depth maps are extracted from the raw light-ray sensing data. Our smart sensing system is equipped with a passive imaging sensor that naturally discerns the depth difference between the head and the shoulders of each person, from which a human model is built. By detecting this human model in light-field images, the number of people passing through the scene can be counted rapidly. We verify the feasibility and accuracy of the sensing system by capturing real-world scenes with single and multiple people passing under natural illumination.
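The head-shoulder depth cue can be illustrated with a minimal counting sketch, assuming an overhead view and a metric depth map; the floor depth, height threshold, and blob-size limit below are illustrative placeholders, not values from the paper.

```python
# Sketch: count heads in an overhead depth map by thresholding between
# typical shoulder and head height, so adjacent people's heads form
# separate blobs. All numeric parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, label

def count_people(depth_map, floor_depth, head_level=1.4, min_head_area=300):
    """Count connected blobs whose height above the floor exceeds head level."""
    # Overhead camera: smaller depth means closer, i.e., taller.
    height = gaussian_filter(floor_depth - depth_map, sigma=3)
    head_mask = height > head_level          # keep heads, drop shoulders
    labels, n = label(head_mask)
    sizes = np.bincount(labels.ravel())[1:]  # blob sizes, background dropped
    return int(np.sum(sizes >= min_head_area))
```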
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; on textureless or weakly textured regions their depth maps contain numerous holes and large ambiguities. In this paper, we apply light field imaging to 3D face modeling. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated with an epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness of the approach on face images captured by a light field camera under different poses.
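One common way to realize EPI-based depth estimation is via the local structure tensor of an EPI, whose dominant orientation gives the line slope (disparity). The sketch below follows that idea, with the caveat that axis and sign conventions depend on the light-field parameterization, and that this is not necessarily the exact estimator used in the paper.

```python
# Structure-tensor disparity estimation on a single EPI (a standard
# technique; a sketch, not the paper's exact method).
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def epi_disparity(epi, sigma=1.5):
    """Estimate per-pixel disparity from line slopes in one EPI.

    epi: 2D array of shape (n_views, width); rows index views (s),
    columns index spatial position (u). A scene point traces a line
    whose slope du/ds is its disparity, inversely related to depth.
    """
    gs = sobel(epi, axis=0)   # derivative along the view axis s
    gu = sobel(epi, axis=1)   # derivative along the spatial axis u
    # Structure-tensor components, smoothed over a local window.
    j_ss = gaussian_filter(gs * gs, sigma)
    j_uu = gaussian_filter(gu * gu, sigma)
    j_su = gaussian_filter(gs * gu, sigma)
    # Dominant local orientation yields the disparity estimate; the sign
    # convention depends on the camera's view ordering.
    return np.tan(0.5 * np.arctan2(2.0 * j_su, j_uu - j_ss))
```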
The increase in twin births has created a requirement for biometric systems to accurately determine the identity of a person who has an identical twin. The discriminability of some identical-twin biometric traits, such as fingerprints, iris, and palmprints, is supported by the anatomy and formation process of the biometric characteristic, which indicate that these traits differ even in identical twins due to a number of random factors during the gestation period. For the first time, we collected multiple biometric traits (fingerprint, face, and iris) of 66 families of twins, and we performed unimodal and multimodal matching experiments to assess the ability of biometric systems to distinguish identical twins. Our experiments show that unimodal fingerprint biometric systems can distinguish two different persons who are not identical twins better than they can distinguish identical twins; this difference is much larger for the face biometric system and is not significant for the iris biometric system. Multimodal biometric systems that combine different units of the same biometric modality (e.g., multiple fingerprints, or the left and right irises) show the best performance among all the unimodal and multimodal biometric systems, achieving an almost perfect separation between the genuine and impostor distributions.
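As a hedged illustration of how scores from several units of one modality might be combined, the sketch below applies min-max normalization followed by a simple sum rule; the normalization bounds and equal weights are common defaults, not the fusion scheme reported here.

```python
# Illustrative sum-rule score fusion over multiple biometric units
# (e.g., two irises or several fingerprints). Bounds and weights are
# illustrative defaults, not values from the paper.
import numpy as np

def min_max_normalize(scores, lo, hi):
    """Map raw matcher scores into [0, 1] using fixed bounds from training."""
    return np.clip((np.asarray(scores, dtype=float) - lo) / (hi - lo), 0.0, 1.0)

def fuse_scores(score_lists, bounds):
    """Average normalized scores from several matchers (simple sum rule)."""
    normalized = [min_max_normalize(s, lo, hi)
                  for s, (lo, hi) in zip(score_lists, bounds)]
    return np.mean(normalized, axis=0)

# Usage: fuse left- and right-iris match scores for a batch of comparisons.
left_iris = [0.31, 0.78, 0.12]
right_iris = [0.28, 0.81, 0.22]
fused = fuse_scores([left_iris, right_iris], bounds=[(0, 1), (0, 1)])
```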
Iris image acquisition is the fundamental step of iris recognition, but capturing high-resolution iris images in real time is very difficult. Most existing systems have a small capture volume and require users to cooperate fully with the machine, which has become a bottleneck for practical iris recognition. In this paper, we aim to build an active iris image acquisition system that adapts itself to users. Two low-resolution cameras are co-located on a pan-tilt unit (PTU), for face and iris image acquisition respectively. Once the face camera detects a face region in the real-time video, the system steers the PTU towards the eye region and automatically zooms until the iris camera captures an iris image clear enough for recognition. Compared with similar works, our contribution is the use of low-resolution cameras, which transmit image data much faster and are much cheaper than high-resolution cameras. In the system, we use cascaded Haar-like features to detect faces and eyes, a linear transformation to predict the iris camera's position, and a simple heuristic PTU control method to track the eyes. A prototype device has been built, and experiments show that our system can automatically capture high-quality iris images within a capture volume of 0.6 m × 0.4 m × 0.4 m in 3 to 5 seconds on average.
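The detection-and-steering loop can be sketched with OpenCV's stock Haar cascades standing in for the paper's detectors; the calibration matrix `A` and all numeric values below are placeholders for the actual hardware calibration and PTU interface.

```python
# Sketch of the detect-then-steer loop: Haar cascades find the eye in the
# face camera, and a linear (affine) calibration predicts PTU pan/tilt.
# The matrix A is made up for illustration; it would be fit offline.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def locate_eye(frame_gray):
    """Return the center of the first detected eye, or None."""
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(frame_gray, 1.2, 5):
        roi = frame_gray[fy:fy + fh, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
        if len(eyes) > 0:
            ex, ey, ew, eh = eyes[0]
            return np.array([fx + ex + ew / 2.0, fy + ey + eh / 2.0, 1.0])
    return None

# Linear map from face-camera pixel coordinates (homogeneous) to pan/tilt
# angles in degrees; placeholder values standing in for real calibration.
A = np.array([[0.05, 0.0, -16.0],
              [0.0, 0.05, -12.0]])

def ptu_command(eye_pos_homogeneous):
    """Predict the pan/tilt angles that center the iris camera on the eye."""
    return A @ eye_pos_homogeneous
```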
With the development of today's networked society, personal identification based on biometrics has received more and more attention. Iris recognition offers satisfying performance due to its high reliability and non-invasiveness. In an iris recognition system, preprocessing, and especially iris localization, plays a very important role: the speed and accuracy of the whole system are limited to a great extent by the quality of iris localization. Iris localization includes finding the iris boundaries (inner and outer) and the eyelids (lower and upper). In this paper, we propose an iris localization algorithm based on texture segmentation. First, we use the low-frequency information of the wavelet transform of the iris image for pupil segmentation, and localize the iris with an integro-differential operator. Then the upper eyelid edge is detected after the eyelashes are segmented. Finally, the lower eyelid is localized using parabolic curve fitting based on gray-value segmentation. Extensive experimental results show that the algorithm has satisfying performance and good robustness.
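For reference, a compact version of an integro-differential boundary search (in the spirit of Daugman's operator) is sketched below, with a fixed center for brevity; the sampling density, radius range, and blur width are illustrative assumptions rather than this paper's settings.

```python
# Simplified integro-differential boundary search: maximize the blurred
# radial derivative of the mean intensity along concentric circles.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circle_mean(image, cx, cy, r, n_samples=64):
    """Mean intensity along a circle, with nearest-pixel sampling."""
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs].mean()

def integro_differential(image, cx, cy, r_min, r_max, sigma=2.0):
    """Find the radius with the sharpest circular intensity transition."""
    radii = np.arange(r_min, r_max)
    means = np.array([circle_mean(image, cx, cy, r) for r in radii])
    response = gaussian_filter1d(np.gradient(means), sigma)  # blurred d/dr
    best = np.argmax(np.abs(response))
    return radii[best], response[best]
```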