This PDF file contains the front matter associated with SPIE Proceedings Volume 6499, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Transreal arithmetic is a total arithmetic that contains real arithmetic, but which has no arithmetical exceptions. It allows
the specification of the Universal Perspex Machine, which unifies geometry with the Turing Machine. Here we axiomatise
the algebraic structure of transreal arithmetic so that it provides a total arithmetic on any appropriate set of numbers. This
opens up the possibility of specifying a version of floating-point arithmetic that does not have any arithmetical exceptions
and in which every number is a first-class citizen. We find that the literal numbers in the axioms are distinct; in other words, the axiomatisation does not require special axioms to force non-triviality. It follows that transreal arithmetic must be defined on a set of numbers that contains {-∞, -1, 0, 1, ∞, Φ} as a proper subset. We note that the axioms have been shown to be consistent by machine proof.
The contemporary artist David Hockney has hypothesized that some early Renaissance painters secretly projected optical
images onto their supports (canvas, paper, oak panel, ...), directly traced these projections, and then filled in the tracings with
paint [1]. Hockney has presented somewhat impressionistic image evidence for this claim, but he and thin-film physicist Charles
Falco also point to perspective anomalies, to the fidelity of passages in certain paintings, and to historical documents in
search of support for this direct tracing claim [2].
Key visual evidence adduced in support of this tracing claim is a pair of portraits by Jan van Eyck of Cardinal Niccolo
Albergati: a small informal silverpoint study of 1431 and a slightly larger formal work in oil on panel of 1432. The
contours in these two works bear a striking resemblance in shape (after appropriate scaling), and there are at least two
"relative shifts", passages that co-align well after a spatial shift of one of the images [2]. This evidence has led the
theory's proponents to claim that van Eyck copied the silverpoint by means of an optical projector, or epidiascope, with the
relative shifts attributed to his accidentally bumping the setup during the copying.
Previous tests of the tracing theory for these works considered four candidate methods van Eyck might have used to
copy and enlarge the image in the silverpoint study: unaided ("by eye"), mechanical, grid, and the optical projection
method itself [3]. Based on the full evidence, including the recent discovery of tiny pinprick holes in the silverpoint, re-enactments,
material culture, and optical knowledge in the early 15th century, the mechanical method was judged the most
plausible and the optical method the least plausible [3].
However, this earlier work did not adequately test whether a trained artist could "re-enact" the copying by mechanical
methods: "Although we have not explicitly verified that high fidelities can be achieved through the use of a Reductionszirkel (or compass, protractor and ruler), there are no significant challenges in this regard" [3]. Our work here seeks to complete the
test of the direct tracing claim. As we shall see, a talented realist artist can indeed achieve fidelity comparable to that found in
these works, a result that re-affirms the earlier conclusion that when copying and enlarging the silverpoint image, it is more
likely that van Eyck used a well-known, simple, mechanical method than a then unknown, secret and complicated optical
method.
For stereo imaging, it is general practice to use two cameras of the same focal length, with their viewing axes
normal to the line joining the camera centres. This paper analyses the effect of differences in the orientations and
focal lengths of two arbitrary perspective viewing cameras by deriving the epipolar lines and their corresponding
equations. This enables one to find the correspondence search space in terms of focal length accuracies as well
as camera orientation parameters. Relevant numerically simulated results are also given.
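As a concrete instance of an epipolar-line equation, consider the simplest special case the paper generalises away from: identical cameras displaced horizontally. The fundamental matrix below is the standard one for that configuration, not the paper's derivation.

```python
# Epipolar line of a pixel x in the second image: l' = F x, in
# homogeneous coordinates. This F encodes a pure horizontal translation
# between two identical (rectified) cameras.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

x = [120.0, 45.0, 1.0]        # pixel (u, v) = (120, 45) in image 1
a, b, c = matvec(F, x)        # line a*u' + b*v' + c = 0 in image 2
# Here the line is -v' + 45 = 0, i.e. the same scanline v' = 45,
# as expected for rectified cameras.
```

Orientation or focal-length errors perturb F, widening this one-line search space into the band the paper characterises.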
The present work addresses the design of an image acquisition front end for target detection and tracking over a wide range of distances. Inspired by the vision of raptors, a novel design for a visual sensor is proposed. The sensor consists of two parts, each originating from studies of the biological vision systems of different species. The front end comprises a set of video cameras imitating a falconiform eye, in particular its optics and retina [1]. The back end is a software remapper that uses a log-polar model of the retino-cortical projection in primates [2], [3], [4], a model popular in machine vision. The output of this sensor is a composite log-polar image incorporating both near and far visual fields into a single homogeneous image space. In such a space it is easier to perform target detection and tracking for applications that deal with targets moving along the camera axis. The target object preserves its shape and size as it is handed seamlessly between cameras, regardless of its distance to the composite sensor. A prototype of the proposed composite sensor has been built and is used as the front end of an experimental mobile vehicle detection and tracking system. It has been tested in a driving simulator, and results are presented.
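The log-polar remapping at the heart of the back end can be sketched as follows; the foveal clamp radius is an illustrative parameter.

```python
import math

# Log-polar remapping: a pixel at (x, y) relative to the fovea centre
# maps to (u, v) = (log r, theta). Scaling the scene by s only shifts u
# by log s, which is the size-invariance that lets a target be handed
# between cameras without changing apparent size.
def log_polar(x, y, r_min=1.0):
    r = math.hypot(x, y)
    if r < r_min:             # fovea: clamp to the innermost ring
        r = r_min
    return math.log(r), math.atan2(y, x)

u, v = log_polar(8.0, 0.0)    # a point on the positive x-axis
```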
Scattered data is defined as a collection of data that have little specified connectivity among data points. Trivariate
scattered data interpolation from R^3 to R consists of constructing a function f(x, y, z) such
that f(x_i, y_i, z_i) = F_i, i = 1, ..., N, where V = {v_i = (x_i, y_i, z_i) ∈ R^3, i = 1, ..., N} is a set of distinct and non-coplanar
data points and F = (F_1, ..., F_N) is a real data vector. The weighted alpha shapes method is defined for a finite set
of weighted points. Let S ⊆ R^d × R be such a set. A weighted point is denoted p = (p′, ω), with p′ ∈ R^d its location and
ω ∈ R its weight. For a weighted point p and a real α, define p^{+α} = (p′, ω + α), so that p and p^{+α} share
the same location and their weights differ by α. The weighted alpha shape is then a polytope uniquely determined by the points, their
weights, and a parameter α ∈ R that controls the desired level of detail.
How to assign the weight of each point is therefore one of the main tasks in achieving the desired volumetric scattered
data interpolation; that is, we need to investigate how to achieve different levels of detail in a single shape
by assigning weights to the data points. In this paper, a modified Shepard's method is applied in a least-squares
manner.
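As background for the weighting step, here is the basic inverse-distance form of Shepard's method; the paper applies a modified, least-squares variant, so this shows only the underlying idea, with illustrative data.

```python
# Basic Shepard interpolation: the estimate at a query point q is a
# weighted average of the data values, with weights 1/d^p for distance d
# from q to each data point.
def shepard(q, points, values, p=2):
    num = den = 0.0
    for v, f in zip(points, values):
        d2 = sum((a - b) ** 2 for a, b in zip(q, v))
        if d2 == 0.0:
            return f              # query coincides with a data point
        w = 1.0 / d2 ** (p / 2)   # weight 1/d^p
        num += w * f
        den += w
    return num / den

pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
vals = [0.0, 1.0, 2.0, 3.0]
estimate = shepard((0.25, 0.25, 0.25), pts, vals)
```

The estimate interpolates the data exactly and stays within the range of the data values, which is why Shepard-style weights are a natural starting point for assigning alpha-shape weights.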
Subdivision of triangular meshes is a common technique for refining a given surface representation for various
purposes in computer vision, computer graphics, and finite element methods. In particular, in the processing
of reconstructed surfaces based on sensed data, subdivision can be used to add surface points at locations
where the sensed data was sparse, and so to increase the density of various computed surface properties at such
locations. Standard subdivision techniques are normally applied to the complete mesh and so add vertices and
faces throughout the mesh. In modifying global adaptive subdivision schemes to perform local subdivision, it
is necessary to guarantee smooth transition between subdivided regions and regions left at the original level
so as to prevent the formation of surface artifacts at the boundaries between such regions. Moreover, the
produced surface mesh needs to be suitable for successive local subdivision steps. We propose a novel approach
for incremental adaptive subdivision of triangle meshes which may be applied to multiple global subdivision
schemes and which may be repeated iteratively without forming artifacts in the subdivided mesh. The decision
of where to subdivide in each iteration is determined based on an error measure which is minimized through
subdivision. Smoothness between various subdivision levels is obtained through the postponement of local atomic
operations. The proposed scheme is evaluated and compared to known techniques using quantitative measures.
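The artifact problem the scheme addresses can be seen in a naive sketch: splitting only selected triangles leaves T-junctions on edges shared with unsplit neighbours. The error measure below (face area) is a stand-in for the paper's measure, and the meshes are illustrative.

```python
# Naive error-driven 1-to-4 midpoint split of selected triangles.
# Splitting a face without touching its neighbour leaves a T-junction on
# the shared edge; handling that transition smoothly is exactly what an
# incremental adaptive scheme must add.
def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def split_selected(faces, error, threshold):
    out = []
    for tri in faces:
        if error(tri) <= threshold:
            out.append(tri)
            continue
        a, b, c = tri
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

def area(tri):  # stand-in error measure: refine large faces
    (ax, ay), (bx, by), (cx, cy) = tri
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0

faces = [((0, 0), (2, 0), (0, 2)),       # area 2: will be split
         ((2, 0), (1, 2), (0, 2))]       # area 1: left intact
refined = split_selected(faces, area, 1.5)
# The new vertex (1, 1) lies on the unsplit neighbour's edge (2,0)-(0,2):
# a T-junction, i.e. the boundary artifact discussed above.
```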
Three-dimensional (3D) modeling of objects from two-dimensional (2D) images is important for many fields, including
computer graphics, image understanding, and medical imaging. Shape-from-Shading is a popular method for 3D shape
estimation from a single image, and Delaunay triangulation is effective for 3D mesh generation. However, estimating
depths for all pixels in an image by Shape-from-Shading and then applying 3D Delaunay triangulation to the depth data is
inefficient. We propose an efficient method to generate a 3D near-equilateral triangular mesh directly from a 2D image. In the
proposed method, the projection of the 3D mesh onto the 2D image is first generated by the ellipsoidal bubble mesh method. Then,
the positions of the mesh vertices in 3D space are estimated from the Lambertian model and the directions of the axes of the
ellipsoidal bubbles. We evaluated the proposed method by generating 3D meshes for a simulation image
and a picture taken by a digital camera. The results showed good estimation of the shapes of the objects.
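The Lambertian model used to lift vertices into 3D relates image intensity to the angle between the surface normal and the light direction; the albedo and light direction below are illustrative parameters.

```python
import math

# Lambertian reflectance: intensity = albedo * max(0, n . l) for unit
# surface normal n and unit light direction l. Inverting this relation
# constrains the surface normal, and hence vertex depth, from shading.
def lambertian(normal, light, albedo=1.0):
    n = math.sqrt(sum(c * c for c in normal))
    l = math.sqrt(sum(c * c for c in light))
    cos_angle = sum(a * b for a, b in zip(normal, light)) / (n * l)
    return albedo * max(0.0, cos_angle)

i = lambertian((0, 0, 1), (0, 0, 1))   # surface facing the light
```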
The increase in affordable computing power has fostered the creation of 3D models, and the field is growing with the
advent of high-speed Internet. Large repositories of 3D models are being created and made public on the Internet, and
searching these repositories is an obvious requirement. In this paper, we present a total shear invariant
feature vector (SIFV) for searching for a 3D model. This feature vector is robust to affine transformations of the original
model. The proposed feature vector is the Fourier transform of the histogram of the number of points in the concentric
spheres partitioning the point cloud data of a 3D model.
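The feature pipeline described can be sketched as: centre the point cloud, histogram point counts over concentric spherical shells, and take the magnitudes of the Fourier transform of that histogram. The shell count is an assumed parameter, and the plain DFT below stands in for whatever transform the paper uses.

```python
import cmath
import math

# Histogram of point counts over concentric spherical shells around the
# cloud's centroid, followed by DFT magnitudes as a rotation-insensitive
# descriptor.
def shell_histogram(points, n_shells=8):
    cx = [sum(p[i] for p in points) / len(points) for i in range(3)]
    radii = [math.dist(p, cx) for p in points]
    r_max = max(radii) or 1.0
    hist = [0] * n_shells
    for r in radii:
        k = min(int(n_shells * r / r_max), n_shells - 1)
        hist[k] += 1
    return hist

def dft_magnitudes(h):
    n = len(h)
    return [abs(sum(h[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

cloud = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
feature = dft_magnitudes(shell_histogram(cloud))
```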
We present a new algorithm for reconstructing 3D shapes. The algorithm takes one 2D image of a 3D shape and
reconstructs the 3D shape by applying a priori constraints: symmetry, planarity and compactness. The shape is
reconstructed without using information about the surfaces, such as shading, texture, binocular disparity or motion.
Performance of the algorithm is illustrated on symmetric polyhedra, but the algorithm can be applied to a very wide
range of shapes. Psychophysical plausibility of the algorithm is discussed.
In this paper we examine two fundamental problems related to object recognition for point features under
full perspective projection. The first problem involves the geometric constraints (object-image equations) that
must hold between a set of object feature points (object configuration) and any image of those points under
a full perspective projection, which is just a pinhole camera model for image formation. These constraints are
formulated in an invariant way, so that object pose, image orientation, or the choice of coordinates used to
express the feature point locations either on the object or in the image are irrelevant. These constraints turn out
to be expressions in the shape coordinates calculated from the feature point coordinates. The second problem
concerns the notion of shape and a description of the resulting shape spaces. These spaces acquire certain natural
metrics, but the metrics are often hard to compute. We will discuss certain cases where the computations are
manageable, but will leave the general case to a future paper.
Taken together, the results in this paper provide a way to understand the relationship that exists between
3D geometry and its "residual" in a 2D image. This relationship is completely characterized (for a particular
combination of features) by the above set of fundamental equations in the 3D and 2D shape coordinates. The
equations can be used to test for the geometric consistency between an object and an image. For example, by
fixing point features on a known object, we get constraints on the 2D shape coordinates of possible images of
those features. Conversely, if we have specific 2D features in an image, we will get constraints on the 3D shape
coordinates of objects with feature points capable of producing that image. This yields a test for which object is
being viewed. The object-image equations are thus a fundamental tool for attacking identification/recognition
problems in computer vision and automatic target recognition applications.
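The pinhole model underlying the object-image equations maps a world point (X, Y, Z) to (fX/Z, fY/Z) after a rigid motion. The pose and focal length below are illustrative, and a real consistency test would be phrased in the invariant shape coordinates rather than raw pixel coordinates.

```python
# Pinhole (full perspective) projection with an illustrative pose:
# identity rotation and a translation that places the object in front of
# the camera. Consistency between an object and an image means some such
# pose reproduces the observed feature points.
def project(X, f=1.0, t=(0.0, 0.0, 5.0)):
    x, y, z = X[0] + t[0], X[1] + t[1], X[2] + t[2]
    return (f * x / z, f * y / z)

object_pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)]
image_pts = [project(X) for X in object_pts]   # consistent by construction
```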
We propose a novel method to improve the performance of existing three-dimensional (3D) human face recognition algorithms that employ Euclidean distances between facial fiducial points as features. We further investigate a novel 3D face recognition algorithm that employs geodesic and Euclidean distances between facial fiducial points, and we demonstrate that this algorithm is robust to changes in facial expression. Geodesic and Euclidean distances were calculated between pairs of 25 facial fiducial points. For the proposed algorithm, geodesic distances and 'global curvature' characteristics, defined as the ratio of geodesic to Euclidean distance between a pair of points, were employed as features. The most discriminatory features were selected using stepwise linear discriminant analysis (LDA). These were projected onto 11 LDA directions, and face models were matched using the Euclidean distance metric. With a gallery set containing one image each of 105 subjects and a probe set containing 663 images of the same subjects, the algorithm produced an equal error rate (EER) of 1.4% and a rank-1 recognition rate of 98.64%. It performed significantly better than existing algorithms based on principal component analysis and LDA applied to face range images. Its verification performance on expressive faces was also significantly better than that of an algorithm employing Euclidean distances between facial fiducial points as features.
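The 'global curvature' feature has a simple closed form on idealised surfaces, which illustrates why it separates flat from bent regions; the semicircular profile below is our illustration, not data from the study.

```python
import math

# Global curvature feature: ratio of geodesic to Euclidean distance
# between two surface points. On a flat patch the two distances agree
# (ratio 1); over a bulge the geodesic is longer, so the ratio grows.
def global_curvature(geodesic, euclidean):
    return geodesic / euclidean

# Along a semicircular profile of radius r, the geodesic between the two
# endpoints is pi*r while the Euclidean chord is 2r, giving pi/2 ~ 1.57.
r = 3.0
ratio = global_curvature(math.pi * r, 2 * r)
```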
3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends
on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained
so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in
the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an
average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces.
Going one step further, we propose that using a couple of well-selected AFMs can trade off computation time
against accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average
face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect
thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions
on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse
initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than
a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in
the face shape space and compare these with gender and morphology based groupings. We report our results on
the FRGC 3D face database.
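One ICP iteration can be sketched in 2D with closed-form rotation and translation estimates; the real system registers 3D facial surfaces (solving the rotation, e.g., via SVD), so this shows only the shape of the loop, on illustrative points.

```python
import math

# One ICP iteration in 2D: match each source point to its nearest model
# point, then solve the best-fitting rotation angle and translation in
# closed form from the matched pairs.
def icp_step(src, dst):
    pairs = [(p, min(dst, key=lambda q: math.dist(p, q))) for p in src]
    cs = [sum(p[i] for p, _ in pairs) / len(pairs) for i in range(2)]
    cd = [sum(q[i] for _, q in pairs) / len(pairs) for i in range(2)]
    sxx = sum((p[0] - cs[0]) * (q[0] - cd[0]) +
              (p[1] - cs[1]) * (q[1] - cd[1]) for p, q in pairs)
    sxy = sum((p[0] - cs[0]) * (q[1] - cd[1]) -
              (p[1] - cs[1]) * (q[0] - cd[0]) for p, q in pairs)
    angle = math.atan2(sxy, sxx)                 # optimal rotation
    tx = cd[0] - (cs[0] * math.cos(angle) - cs[1] * math.sin(angle))
    ty = cd[1] - (cs[0] * math.sin(angle) + cs[1] * math.cos(angle))
    return angle, (tx, ty)

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(0.2, 0.0), (1.2, 0.0), (0.2, 1.0)]      # src translated by (0.2, 0)
angle, t = icp_step(src, dst)                   # recovers the translation
```

Registering to a single AFM replaces the expensive one-to-all version of this loop with a single alignment per probe.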
In the present paper, we investigate the discretization of curves based on polynomials in 2-dimensional space.
Under some assumptions, we propose an arithmetic characterization of thin and connected discrete approximations
of such curves. In fact, we recover the usual discretization models, that is, GIQ, OBQ, and BBQ, but with a
generic arithmetic definition.
Topological image feature extraction is very important for many high-level tasks in image processing and for topological analysis and modeling of image data. In this work, we use cubical homology theory to extract topological features, as well as their geometric representations, from raw image data. We present two algorithms that make this extraction task straightforward. The first uses the elementary cubical representation to check adjacency between cubes in order to localize the connected components in the image data. The second algorithm extracts cycles: the first step finds cubical generators of the first homology classes, which give rough locations of the holes in the image data; the second step localizes the optimal cycles among the ordinary ones, and these optimal cycles represent the boundaries of the holes. A number of experiments are presented to validate these algorithms on synthetic and real binary images.
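The effect of the first algorithm can be sketched on a binary image: foreground pixels (elementary 2-cubes) are grouped into connected components via edge adjacency. This breadth-first grouping is our illustration, not the paper's cubical-homology implementation.

```python
from collections import deque

# Group foreground pixels into connected components using 4-adjacency
# (shared edges between elementary 2-cubes).
def components(pixels):
    todo, comps = set(pixels), []
    while todo:
        seed = todo.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            x, y = queue.popleft()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in todo:
                    todo.remove(n)
                    comp.add(n)
                    queue.append(n)
        comps.append(comp)
    return comps

img = {(0, 0), (1, 0), (5, 5)}   # two separate foreground blobs
```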
Object recognition using the shape of object boundaries and surface reconstruction using slice contours rely
on the identification of correct and complete boundary information for the segmented objects in the scene.
Geometric deformable models (GDMs) using the level set method provide a very efficient framework for image
segmentation. However, the segmentation results provided by these models depend on the contour initialization,
and when there are textured objects in the scene, incorrect boundaries are usually detected. In
most cases where the strategy is to detect the correct boundary of every object in the scene, the
segmentation will provide only incomplete and/or inaccurate object boundaries. In this work, we propose
a new method to detect the correct boundary information of segmented objects, in particular textured objects.
We use the average squared gradient to determine appropriate initialization positions, and by varying the size
of the test regions we create multiple images, which we call layers, to determine the appropriate boundaries.
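The average squared gradient over a test region can be computed with simple forward differences; the discretization and the toy image below are illustrative choices, not the paper's exact formulation.

```python
# Average squared gradient over a w x h test region with top-left corner
# (x0, y0), using forward differences. Regions containing edges score
# high, which is what guides the contour initialization.
def avg_squared_gradient(img, x0, y0, w, h):
    total = 0.0
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            total += gx * gx + gy * gy
    return total / (w * h)

img = [[0, 0, 0, 1, 1],      # vertical edge between columns 2 and 3
       [0, 0, 0, 1, 1],
       [0, 0, 0, 1, 1],
       [0, 0, 0, 1, 1]]
flat = avg_squared_gradient(img, 0, 0, 2, 2)   # homogeneous region
edge = avg_squared_gradient(img, 2, 0, 2, 2)   # region across the edge
```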
This research uses an object-extraction technique to extract the thumb, index, middle, ring, and small fingers from
hand images. The algorithm developed in this research can find the precise locations of the fingertips and the finger-to-finger
valleys. The extracted fingers contain many useful geometric features, which can be used for person
identification. A geometry descriptor is used to transform the geometric features of these finger images into another feature
domain for image comparison. The image is scaled, and the inverse wavelet transform is applied to the finger image to
make its features more salient. Image subtraction is used to examine the difference between two images. This
research uses the finger image and the palm image as the features to distinguish different people. In total,
1,890 comparisons were conducted: 270 were self-comparisons, and the remaining 1,620 compared finger
images from two different persons. The false accept rate is 0%, the false reject
rate is 1.9%, and the total error rate is 1.9%.
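The image-subtraction comparison can be sketched as a mean-absolute-difference test between aligned images; the threshold and pixel values below are illustrative, not the study's.

```python
# Image-subtraction comparison: two aligned finger images are declared a
# match when their mean absolute pixel difference is small.
def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def same_person(a, b, threshold=10.0):
    return mean_abs_diff(a, b) < threshold

img1 = [10, 12, 200, 198]   # flattened pixel rows of a finger image
img2 = [11, 12, 199, 200]   # near-duplicate: same person
img3 = [200, 180, 10, 30]   # a different hand: rejected
```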
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential
and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal
powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We
also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We
then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is
well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and
every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing
the perspex machine, a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the
standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the
proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or
else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the
standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not
consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether
division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible
"straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values,
but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked
singularities in all equations.
The Perspex Machine arose from the unification of computation with geometry. We now report significant redevelopment
of both a partial C compiler that generates perspex programs and of a Graphical User Interface (GUI). The compiler is
constructed with standard compiler-generator tools and produces both an explicit parse tree for C and an Abstract Syntax
Tree (AST) that is better suited to code generation. The GUI uses a hash table and a simpler software architecture to
achieve an order of magnitude speed up in processing and, consequently, an order of magnitude increase in the number of
perspexes that can be manipulated in real time (now 6,000). Two perspex-machine simulators are provided, one using
trans-floating-point arithmetic and the other using transrational arithmetic. All of the software described here is available
on the world wide web.
The compiler generates code in the neural model of the perspex. At each branch point it uses a jumper to return control to
the main fibre. This has the effect of pruning out an exponentially increasing number of branching fibres, thereby greatly
increasing the efficiency of perspex programs as measured by the number of neurons required to implement an algorithm.
The jumpers are placed at unit distance from the main fibre and form a geometrical structure analogous to a myelin sheath
in a biological neuron. Both the perspex jumper-sheath and the biological myelin-sheath share the computational function
of preventing cross-over of signals to neurons that lie close to an axon. This is an example of convergence driven by
similar geometrical and computational constraints in perspex and biological neurons.
We use the Expectation-Maximization (EM) algorithm to classify 3D aerial lidar scattered height data into four
categories: road, grass, buildings, and trees. To do so we use five features: height, height variation, normal
variation, lidar return intensity, and image intensity. We also use only lidar-derived features to organize the data
into three classes (the road and grass classes are merged). We apply and test our results using ten regions taken
from lidar data collected over an area of approximately eight square miles, obtaining higher than 94% accuracy.
We also apply our classifier to our entire dataset, and present visual classification results both with and without
uncertainty. We use several approaches to evaluate the parameter and model choices possible when applying EM
to our data. We observe that our classification results are stable and robust over the various subregions of our
data which we tested. We also compare our results here with previous classification efforts using this data.
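The EM loop at the core of the classifier can be sketched for a toy one-dimensional, two-component Gaussian mixture (the real system runs EM on five features and four classes); the initialization and data below are illustrative.

```python
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# EM for a two-component 1D Gaussian mixture: E-step computes each
# component's responsibility for each point; M-step re-estimates the
# mixture weights, means and variances from those responsibilities.
def em_1d(xs, mu, iters=50):
    pi, var = [0.5, 0.5], [1.0, 1.0]
    for _ in range(iters):
        resp = []
        for x in xs:                       # E-step
            p = [pi[k] * gauss(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        for k in range(2):                 # M-step
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return pi, mu, var

heights = [0.1, -0.2, 0.3, 0.0, 5.1, 4.8, 5.2, 4.9]  # two height clusters
pi, mu, var = em_1d(heights, mu=[0.0, 4.0])
```

Classifying a point then amounts to picking the component with the highest responsibility, and the responsibility itself serves as the classification uncertainty visualized in the text.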