This PDF file contains the front matter associated with SPIE Proceedings Volume 9404 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
The plenoptic function was originally defined as a complete record of the 3D structure of radiance in a scene and its dependence on a number of different parameters including position, angle, wavelength, polarization, etc. Recently developed plenoptic cameras typically capture only the geometric aspects of the plenoptic function. Using this information, computational photography can render images with a wide variety of characteristics, such as focus, depth of field, and parallax. Less attention has been paid to other, nonspatial, parameters of the plenoptic function that could also be captured. In this paper, we develop the microlens-based image sensor (also known as the Lippmann sensor) as a generalized plenoptic capture device, able to capture additional information based on filters or modifiers placed on different microlenses. Multimodal capture can comprise many different parameters, such as high dynamic range, multispectral imaging, and so on. For this paper we explore two particular examples in detail: polarization capture based on interleaved polarization filters, and capture with extended depth of field based on microlenses with different focal lengths.
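As an illustration of the interleaved-filter idea (not the authors' implementation), the sketch below demultiplexes a raw Lippmann-sensor image into two polarization channels, assuming a hypothetical layout in which microlens tiles in alternating columns carry 0° and 90° polarizers; the tile size and layout are illustrative assumptions.

```python
import numpy as np

def demux_polarization(raw, tile=16):
    """Split a Lippmann-sensor image into two polarization channels.

    Assumes a hypothetical layout in which microlens tiles in even columns
    carry a 0-degree polarizer and tiles in odd columns a 90-degree
    polarizer; one value per tile (the tile mean) is kept.
    """
    h, w = raw.shape
    ny, nx = h // tile, w // tile
    tiles = raw[:ny * tile, :nx * tile].reshape(ny, tile, nx, tile).mean(axis=(1, 3))
    i0 = tiles[:, 0::2]                     # 0-degree filtered microlenses
    i90 = tiles[:, 1::2]                    # 90-degree filtered microlenses
    n = min(i0.shape[1], i90.shape[1])
    i0, i90 = i0[:, :n], i90[:, :n]
    intensity = i0 + i90                    # total intensity (S0 proxy)
    dolp = np.abs(i0 - i90) / np.maximum(intensity, 1e-6)  # degree-of-polarization proxy
    return intensity, dolp

# Toy usage on synthetic data:
raw = np.random.rand(512, 512).astype(np.float32)
s0, dolp = demux_polarization(raw)
```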
The development of an image processing pipeline for each new camera design can be time-consuming. To speed
camera development, we developed a method named L3 (Local, Linear, Learned) that automatically creates an
image processing pipeline for any design. In this paper, we describe how we used the L3 method to design and
implement an image processing pipeline for a prototype camera with five color channels. The process includes
calibrating and simulating the prototype, learning local linear transforms and accelerating the pipeline using
graphics processing units (GPUs).
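A minimal sketch of the L3 rendering step described above, under simplifying assumptions: pixels are classified here by local mean response (a stand-in for the actual L3 classification), and the class-dependent linear transforms are assumed to have been learned offline from simulated training data.

```python
import numpy as np

def l3_render(sensor, transforms, n_classes=20, patch=5):
    """Apply class-dependent local linear transforms (L3-style rendering).

    `transforms` is assumed to be an array of shape (n_classes, patch*patch, 3)
    learned offline; here a pixel's class is its local mean response level,
    a simplified stand-in for the classification used in the L3 method.
    """
    pad = patch // 2
    padded = np.pad(sensor, pad, mode='reflect')
    h, w = sensor.shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            nbhd = padded[y:y + patch, x:x + patch].ravel()
            cls = min(int(nbhd.mean() * n_classes), n_classes - 1)  # response-level class
            out[y, x] = nbhd @ transforms[cls]                      # local linear transform
    return out

# Toy usage with random (untrained) transforms:
sensor = np.random.rand(64, 64).astype(np.float32)
T = np.random.rand(20, 25, 3).astype(np.float32)
xyz = l3_render(sensor, T)
```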
To speed the development of novel camera architectures, we proposed a method, L3 (Local, Linear and Learned), that automatically creates an optimized image processing pipeline. The L3 method assigns each sensor pixel to one of 400 classes and applies class-dependent local linear transforms that map the sensor data from a pixel and its neighbors into the target output (e.g., CIE XYZ rendered under a D65 illuminant). The transforms are precomputed from training data and stored in a table used for image rendering. The training data are generated by camera simulation and consist of sensor responses and rendered CIE XYZ outputs. The sensor and rendering illuminant can be equal (same-illuminant table) or different (cross-illuminant table). In the original implementation, illuminant correction is achieved with cross-illuminant tables, and one table is required for each illuminant. We find, however, that a single same-illuminant table (D65) effectively converts sensor data for many different same-illuminant conditions. Hence, we propose to render the data by applying the same-illuminant D65 table to the sensor data, followed by a linear illuminant correction transform. The mean color reproduction error using the same-illuminant table is on the order of 4 ΔE units, which is only slightly larger than the cross-illuminant table error. This approach reduces table storage requirements significantly without substantially degrading color reproduction accuracy.
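The proposed rendering path, a D65 same-illuminant table followed by a linear illuminant correction, can be sketched as follows. The 3x3 correction transform here is simply fit by least squares from training pairs, which is an assumption about how such a transform could be obtained, not the paper's exact procedure.

```python
import numpy as np

def fit_illuminant_correction(xyz_rendered, xyz_target):
    """Fit a 3x3 linear illuminant correction transform by least squares.

    xyz_rendered: N x 3 CIE XYZ values produced by the same-illuminant (D65)
                  table applied to sensor data captured under another light.
    xyz_target:   N x 3 desired CIE XYZ values rendered under D65.
    """
    M, _, _, _ = np.linalg.lstsq(xyz_rendered, xyz_target, rcond=None)
    return M  # apply as xyz_corrected = xyz_rendered @ M

# Toy usage with synthetic training pairs:
rng = np.random.default_rng(0)
rendered = rng.random((500, 3))
target = rendered @ np.array([[1.1, 0.0, 0.0],
                              [0.0, 0.9, 0.1],
                              [0.0, 0.0, 1.2]])
M = fit_illuminant_correction(rendered, target)
corrected = rendered @ M
```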
When photographs are taken through glass or any other semi-reflecting transparent surface, for example in museums, shops, and aquariums, we encounter undesired reflections. Reflection removal is an ill-posed problem: the captured image is a superposition of two layers, the scene in front of the camera and the scene behind the camera that is reflected by the semi-reflective surface. Modern handheld smart devices (smartphones, tablets, phablets, etc.) are typically used for capturing such scenes, as they are equipped with good camera sensors and processing capabilities, and image quality close to that of a professional camera can be expected. In this direction, we propose a novel method to reduce reflections in images, an extension of the Independent Component Analysis (ICA) approach, by making use of the two cameras present: the back camera (capturing the actual scene) and the front-facing camera. Compared to the original ICA implementation, our method gives an average improvement of 10% in the peak signal-to-noise ratio of the image.
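The two-camera ICA extension itself is not reproduced here; as a hedged illustration of the underlying separation step, the sketch below applies scikit-learn's FastICA to two mixtures of the same scene (how the second mixture is formed with the help of the front-camera image is specific to the paper and left abstract).

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_layers(mix1, mix2):
    """Separate two superimposed layers from two mixtures with FastICA.

    mix1, mix2: grayscale images of identical shape. In the two-camera
    setting, mix1 could be the back-camera image (scene + reflection) and
    mix2 a second mixture derived with the help of the front camera.
    """
    h, w = mix1.shape
    X = np.stack([mix1.ravel(), mix2.ravel()], axis=1)   # pixels as samples, mixtures as features
    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X)                             # estimated independent sources
    return S[:, 0].reshape(h, w), S[:, 1].reshape(h, w)

# Toy usage with synthetic mixtures of two "layers":
rng = np.random.default_rng(1)
scene, reflection = rng.random((2, 120, 160))
layer_a, layer_b = separate_layers(0.8 * scene + 0.2 * reflection,
                                   0.4 * scene + 0.6 * reflection)
```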
Stray light is the part of an image that is formed by misdirected light. An ideal optic would map a point in the scene onto a point in the image; with real optics, however, some of the light is misdirected. This is due to effects such as scattering at edges, Fresnel reflections at optical surfaces, scattering at parts of the housing, scattering from dust and imperfections on and inside the lenses, and other causes. These effects lead to errors in colour measurements using spectral radiometers and other systems such as scanners. Stray light also limits the dynamic range that can be achieved with high-dynamic-range (HDR) technologies and can lead to the rejection of cameras on quality grounds. It is therefore of interest to measure, quantify and correct these effects. Our work aims at measuring the stray light point spread function (stray light PSF) of a system composed of a lens and an imaging sensor. In this paper we present a framework for the evaluation of PSF models that can be used for the correction of stray light. We investigate if and how our evaluation framework can point out errors in these models and how these errors influence stray light correction.
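The paper evaluates PSF models rather than prescribing a particular correction scheme; as a rough illustration of how a stray light PSF could be used for correction, the following sketch removes a first-order stray-light contribution predicted by convolving the image with a given (assumed) stray-light PSF.

```python
import numpy as np
from scipy.signal import fftconvolve

def correct_stray_light(image, stray_psf, iterations=3):
    """First-order stray light correction using a given stray-light PSF.

    Models the measurement as ideal + ideal * stray_psf (convolution) and
    iteratively removes the predicted stray-light contribution. A generic
    correction sketch, not the evaluation framework of the paper.
    """
    estimate = image.copy()
    for _ in range(iterations):
        stray = fftconvolve(estimate, stray_psf, mode='same')
        estimate = image - stray
    return np.clip(estimate, 0.0, None)

# Toy usage: a broad Gaussian stray-light PSF carrying ~2% of the energy.
y, x = np.mgrid[-64:65, -64:65]
psf = np.exp(-(x**2 + y**2) / (2 * 30.0**2))
psf *= 0.02 / psf.sum()
img = np.random.rand(256, 256)
corrected = correct_stray_light(img, psf)
```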
Optical blur can display significant spatial variation across the image plane, even for constant camera settings and object depth. Existing solutions for representing this spatially varying blur require a dense sampling of blur kernels across the image, where each kernel is defined independently of the neighboring kernels. This approach requires a large amount of data collection, and the estimation of the kernels is less robust than it could be if knowledge of the relationship between adjacent kernels were incorporated.
A novel parameterized model is presented which relates the blur kernels at different locations across the image plane. The model is motivated by well-established optical models, including the Seidel aberration model. It is demonstrated that the proposed model can unify a set of hundreds of blur kernel observations across the image plane under a single 10-parameter model, and the accuracy of the model is demonstrated with simulations and measurement data collected by two separate research groups.
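The 10-parameter model itself is not reproduced here. The following sketch only illustrates the general idea of parameterizing blur kernels as a smooth function of field position, using an elliptical Gaussian whose radial and tangential widths grow with field height; the three parameters used are illustrative assumptions, not the paper's model.

```python
import numpy as np

def kernel_at(xf, yf, params, size=21):
    """Generate a blur kernel at normalized field position (xf, yf).

    Simplified stand-in for a parametric spatially varying blur model: an
    elliptical Gaussian whose radial/tangential widths grow with field
    height, loosely mimicking Seidel-type behaviour.
    """
    s0, sr, st = params
    r = np.hypot(xf, yf)                       # field height
    theta = np.arctan2(yf, xf)                 # orientation of the radial axis
    sig_r = s0 + sr * r**2                     # radial width grows off-axis
    sig_t = s0 + st * r**2                     # tangential width grows off-axis
    u, v = np.meshgrid(np.arange(size) - size // 2, np.arange(size) - size // 2)
    ur = u * np.cos(theta) + v * np.sin(theta)   # rotate into radial/tangential frame
    vt = -u * np.sin(theta) + v * np.cos(theta)
    k = np.exp(-0.5 * ((ur / sig_r) ** 2 + (vt / sig_t) ** 2))
    return k / k.sum()

# Kernels for a grid of field positions all share the same three parameters:
kernels = [kernel_at(xf, yf, params=(1.0, 2.0, 1.0))
           for xf in np.linspace(-1, 1, 5) for yf in np.linspace(-1, 1, 5)]
```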
Single image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. This task is severely ill-posed, and typical approaches involve heuristic or other steps without a clear mathematical explanation to arrive at an acceptable solution. We show that a straightforward maximum a posteriori estimation, incorporating sparse priors and a mechanism to deal with boundary artifacts, combined with an efficient numerical method, can produce results that compete with or outperform much more complicated state-of-the-art methods. Our method extends naturally to deal with overexposure in low-light photography, where the linear blurring model is violated.
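As a hedged sketch of the MAP idea (non-blind case only, with an assumed known kernel), the following minimizes a quadratic data term plus a smoothed-L1 gradient prior by plain gradient descent; the paper's blind kernel estimation, boundary-artifact handling and overexposure extension are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def map_deblur(blurred, kernel, lam=0.01, step=0.1, iters=300, eps=1e-2):
    """Non-blind MAP deblurring with a sparse (smoothed-L1) gradient prior.

    Minimizes ||k * x - y||^2 + lam * sum(sqrt(|grad x|^2 + eps^2)) by
    gradient descent; `kernel` is assumed normalized to unit sum.
    """
    x = blurred.astype(np.float64).copy()
    k_flip = kernel[::-1, ::-1]
    for _ in range(iters):
        resid = fftconvolve(x, kernel, mode='same') - blurred
        grad_data = 2.0 * fftconvolve(resid, k_flip, mode='same')
        gx = np.diff(x, axis=1, append=x[:, -1:])        # forward differences
        gy = np.diff(x, axis=0, append=x[-1:, :])
        nx = gx / np.sqrt(gx * gx + eps * eps)           # derivative of sqrt(g^2 + eps^2)
        ny = gy / np.sqrt(gy * gy + eps * eps)
        div = (np.diff(nx, axis=1, prepend=nx[:, :1]) +  # divergence (adjoint of forward diff)
               np.diff(ny, axis=0, prepend=ny[:, :1]))
        x -= step * (grad_data - lam * div)
    return x
```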
In this paper, we develop a regularization framework for image deblurring based on a new definition of the
normalized graph Laplacian. We apply a fast scaling algorithm to the kernel similarity matrix to derive the
symmetric, doubly stochastic filtering matrix from which the normalized Laplacian matrix is built. We use this
new definition of the Laplacian to construct a cost function consisting of data fidelity and regularization terms
to solve the ill-posed motion deblurring problem. The final estimate is obtained by minimizing the resulting cost
function in an iterative manner. Furthermore, the spectral properties of the Laplacian matrix equip us with the
required tools for spectral analysis of the proposed method. We verify the effectiveness of our iterative algorithm
via synthetic and real examples.
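A minimal sketch of the construction described above, assuming a precomputed kernel similarity matrix: a symmetric Sinkhorn-style scaling (a generic stand-in for the paper's fast scaling algorithm) produces the doubly stochastic filtering matrix W, from which the Laplacian I − W is built.

```python
import numpy as np

def doubly_stochastic(K, iters=100):
    """Symmetric Sinkhorn-style scaling of a nonnegative symmetric similarity
    matrix K toward a doubly stochastic matrix W = D K D."""
    d = np.ones(K.shape[0])
    for _ in range(iters):
        d = np.sqrt(d / (K @ d))        # damped fixed-point update of the scaling vector
    return K * np.outer(d, d)

def normalized_laplacian(W):
    """Laplacian built from the doubly stochastic filtering matrix: L = I - W."""
    return np.eye(W.shape[0]) - W

# Toy usage: similarity matrix from 1D "pixel" intensities.
x = np.random.rand(50)
K = np.exp(-(x[:, None] - x[None, :])**2 / 0.1)
W = doubly_stochastic(K)
L = normalized_laplacian(W)
```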
Camera lenses suffer from aberrations that can cause noticeable blur in captured images, especially when large apertures are used. With very small apertures, images become less sharp due to diffraction. The blur that is due to camera optics can be removed by deconvolution if PSFs that accurately characterize this blur are available. In this paper we describe a new system we developed that allows estimating optics blur PSFs in an efficient manner. It consists of a new test chart with black square tiles containing a small white random circle pattern, and software for fully automatic processing of captured images of this chart. The system can process a high-resolution image of the chart and produce PSF estimates for a dense set of field positions covering the entire image frame within a couple of minutes. It has been tested with several different lenses, and the estimated PSFs have been successfully used for removing optics blur from a variety of different images.
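The chart-processing software itself is not described in code; the following sketch only illustrates the underlying estimation idea, assuming a registered pair of a known sharp reference patch and its captured blurred counterpart, from which the PSF is recovered by linear least squares.

```python
import numpy as np

def estimate_psf(sharp, blurred, psf_size=11):
    """Estimate a PSF from a registered sharp/blurred patch pair by least squares.

    Builds the linear system blurred ~ sharp * psf (same-size convolution,
    interior pixels only) and solves for the PSF coefficients. A generic
    sketch, not the paper's chart-processing software.
    """
    r = psf_size // 2
    h, w = blurred.shape
    rows, target = [], []
    for y in range(r, h - r):
        for x in range(r, w - r):
            # flipped neighborhood so that row . psf == (sharp * psf)[y, x]
            rows.append(sharp[y - r:y + r + 1, x - r:x + r + 1][::-1, ::-1].ravel())
            target.append(blurred[y, x])
    A, b = np.asarray(rows), np.asarray(target)
    coeffs, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    psf = coeffs.reshape(psf_size, psf_size)
    return np.clip(psf, 0.0, None) / max(psf.sum(), 1e-12)
```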
Chromatic aberration distortions such as wavelength-dependent blur are caused by imperfections in photographic lenses. These distortions are much more severe in the case of joint color and near-infrared acquisition, as a wider band of wavelengths is captured. In this paper, we consider a scenario where the color image is in focus and the NIR image captured with the same lens and the same focus settings is out of focus and blurred. To reduce chromatic aberration distortions, we propose an algorithm that estimates the blur kernel and deblurs the NIR image using the sharp color image as a guide in both steps. In the deblurring step, we retrieve the lost details of the NIR image by exploiting the sharp edges of the color image, as the gradients of color and NIR images are often correlated. However, differences in scene reflectance and lighting between the visible and NIR bands cause the gradients of the color and NIR images to differ in some regions of the image. To handle this issue, our algorithm measures the similarities and differences between the gradients of the NIR and color channels. The similarity measures guide the deblurring algorithm to efficiently exploit the gradients of the color image in reconstructing high-frequency details of the NIR image, without discarding the inherent differences between these images. Simulation results verify the effectiveness of our algorithm, both in estimating the blur kernel and in deblurring the NIR image, without producing the ringing artifacts inherent to the results of most deblurring methods.
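The paper's similarity measure is not specified here; as an assumed illustration, the sketch below computes a per-pixel weight from the windowed correlation of gradient magnitudes between the NIR image and a luminance guide, which could gate how strongly the color gradients are used during deblurring.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_similarity(nir, guide, win=7, eps=1e-4):
    """Per-pixel similarity between NIR and color-guide gradients.

    Returns a weight in [0, 1] based on the local (windowed) correlation of
    gradient magnitudes: high where structures agree, low where visible and
    NIR appearance differ. An illustrative stand-in for the paper's measure.
    """
    def gradmag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    a, b = gradmag(nir), gradmag(guide)
    mean_a, mean_b = uniform_filter(a, win), uniform_filter(b, win)
    cov = uniform_filter(a * b, win) - mean_a * mean_b
    var_a = uniform_filter(a * a, win) - mean_a**2
    var_b = uniform_filter(b * b, win) - mean_b**2
    corr = cov / np.sqrt(np.maximum(var_a * var_b, eps))
    return np.clip(corr, 0.0, 1.0)
```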
This paper presents a novel image fusion algorithm for a visible image and a near-infrared (NIR) image. In the proposed fusion, the image is selected pixel by pixel based on local saliency, which is measured by local contrast. The gradient information is then fused, and the output image is constructed by Poisson image editing, which preserves the gradient information of both images. The effectiveness of the proposed fusion algorithm is demonstrated in various applications including denoising, dehazing, and image enhancement.
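A hedged sketch of the generic pipeline outlined above: per pixel, gradients are taken from whichever image has higher local contrast, and the fused gradient field is integrated with a Poisson solve (here in the Fourier domain under a periodic-boundary assumption, which is a simplification of Poisson image editing).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_poisson(vis, nir, win=9):
    """Saliency-driven gradient fusion followed by a periodic Poisson solve."""
    def contrast(img):                       # local contrast = windowed standard deviation
        m = uniform_filter(img, win)
        return np.sqrt(np.maximum(uniform_filter(img * img, win) - m * m, 0.0))

    def fgrad(img, axis):                    # forward difference with periodic wrap
        return np.roll(img, -1, axis=axis) - img

    def bdiv(gx, gy):                        # backward-difference divergence (adjoint)
        return (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))

    pick_nir = contrast(nir) > contrast(vis)
    gx = np.where(pick_nir, fgrad(nir, 1), fgrad(vis, 1))
    gy = np.where(pick_nir, fgrad(nir, 0), fgrad(vis, 0))

    h, w = vis.shape
    ky = np.fft.fftfreq(h)[:, None]
    kx = np.fft.fftfreq(w)[None, :]
    eig = (2 * np.cos(2 * np.pi * ky) - 2) + (2 * np.cos(2 * np.pi * kx) - 2)
    eig[0, 0] = 1.0                          # DC eigenvalue is zero; handled below
    U = np.fft.fft2(bdiv(gx, gy)) / eig
    U[0, 0] = np.fft.fft2(vis)[0, 0]         # keep the mean brightness of the visible image
    return np.real(np.fft.ifft2(U))
```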
A new method for upscaling high dynamic range (HDR) images is introduced in this paper. Overshooting artifacts are a common problem when using linear filters such as bicubic interpolation. This problem is visually more noticeable in HDR images, where there are more transitions from dark to bright. Our proposed method handles these artifacts by computing a simple gradient map which enables the filter to be locally adapted to the image content. This adaptation consists of, first, clustering pixels into regions with similar edge structures and, second, learning the shape and length of our symmetric linear filter for each of these pixel groups. The new filter can be implemented in a separable fashion, which fits hardware implementations well. Our experimental results show that training our filter with HDR images can effectively reduce the overshooting artifacts and improve upon the visual quality of existing linear upscaling approaches.
For cinematic and episodic productions, on-set look management is an important component of the creative process and involves iterative adjustments of the set, actors, lighting and camera configuration. Instead of using the professional motion picture camera to establish a particular look, the use of a smaller form factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics differ between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
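The 2D transform formalism is not reproduced here; the baseline 3x3 matrix emulation transform, however, can be fit by simple least squares between corresponding raw responses of the source and destination cameras, as sketched below with synthetic data.

```python
import numpy as np

def fit_matrix_emulation(src_rgb, dst_rgb):
    """Fit a 3x3 emulation matrix M so that src_rgb @ M approximates dst_rgb.

    src_rgb, dst_rgb: N x 3 arrays of corresponding raw camera responses,
    e.g. simulated from measured spectral sensitivities over a multispectral
    image database.
    """
    M, _, _, _ = np.linalg.lstsq(src_rgb, dst_rgb, rcond=None)
    return M

# Toy usage: emulate a destination camera from a source camera capture.
rng = np.random.default_rng(0)
src = rng.random((1000, 3))
dst = src @ np.array([[0.9, 0.05, 0.0],
                      [0.1, 0.85, 0.05],
                      [0.0, 0.1, 1.0]])
M = fit_matrix_emulation(src, dst)
emulated = src @ M
```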
Digital Photography and Image Quality I, Joint Session with Conferences 9396 and 9404
The so-called texture loss is a critical parameter in the objective image quality assessment of today's cameras. Especially cameras built into mobile phones show a significant loss of low-contrast, fine details, which is hard to describe using standard resolution measurement procedures. The combination of a very small form factor and a high pixel count leads to a high demand for noise reduction in the signal-processing pipeline of these cameras. Different working groups within ISO and IEEE are investigating methods to describe the texture loss with an objective metric. The so-called dead leaves pattern has been used for quite a while in this context. Image Engineering presented a new intrinsic approach at the Electronic Imaging Conference 2014, which promises to solve the open issue of the original approach, which could be influenced by noise and artifacts. In this paper, we present our experience with the new approach for a large set of different imaging devices. We show that some sharpening algorithms found in today's cameras can significantly influence the Spatial Frequency Response based on the dead leaves structure (SFRDeadLeaves) and therefore make an objective evaluation of the perceived image quality even harder. For an objective comparison of cameras, the resulting SFR needs to be reduced to a small set of numbers, ideally a single number. The observed sharpening algorithms lead to much better numerical results, while the image quality already degrades due to strong sharpening. So the measured, high SFRDeadLeaves result is not wrong, as it reflects the artificially enhanced SFR, but the numerical result cannot be used as the only number to describe the image quality. We propose to combine the SFRDeadLeaves measurement with other SFR measurement procedures as described in ISO 12233:2014. Based on the three different SFR functions using the dead leaves pattern, sinusoidal Siemens stars and slanted edges, it is possible to obtain a much better description of the perceived image quality. We propose a combination of SFRDeadLeaves, SFREdge and SFRSiemens measurements for an in-depth test of cameras and present our experience based on today's cameras.
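As a simplified illustration (not the exact ISO 12233 procedure), a dead-leaves-style SFR can be computed as the square root of the ratio of radially averaged power spectra of the captured target and the ideal target, with the noise power estimated from a flat patch subtracted first:

```python
import numpy as np

def dead_leaves_sfr(captured, reference, noise_patch, nbins=64):
    """Simplified dead-leaves SFR from same-size patches of the captured
    target, the ideal reference target, and a flat (uniform) patch used to
    estimate the noise power spectrum. Illustrative only."""
    def radial_psd(img):
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        psd = np.abs(f) ** 2
        h, w = img.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h / 2, x - w / 2)
        idx = np.digitize(r.ravel() / r.max(), np.linspace(0, 1, nbins + 1)) - 1
        sums = np.bincount(idx, psd.ravel(), minlength=nbins)[:nbins]
        counts = np.maximum(np.bincount(idx, minlength=nbins)[:nbins], 1)
        return sums / counts

    psd_cap = radial_psd(captured)
    psd_noise = radial_psd(noise_patch)
    psd_ref = radial_psd(reference)
    # SFR per normalized spatial-frequency bin, noise-compensated
    return np.sqrt(np.maximum(psd_cap - psd_noise, 0.0) / np.maximum(psd_ref, 1e-12))
```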
Digital Photography and Image Quality II, Joint Session with Conferences 9396 and 9404
Measuring the low light performance of today's cameras has become a challenge [1]. The increasing quality of noise reduction algorithms and other steps of the image pipeline makes it necessary to investigate the balance of image quality aspects. The first step in defining a measurement procedure is to capture images under low light conditions using a huge variety of cameras and to review the images as well as their metadata. Image quality parameters that are known to be affected by low light levels are noise, resolution, texture reproduction, color fidelity, and exposure. For each of these parameters, thresholds below which images become unacceptable need to be defined. Although this may later require a real psychophysical study to increase the precision of the thresholds, the current project tries to find out whether each parameter can be viewed as independent or whether multiple parameters need to be grouped to differentiate acceptable images from unacceptable ones. Another important aspect is the definition of camera settings, for example the longest acceptable exposure time and how it is affected by image stabilization. Cameras on a tripod may produce excellent images with multi-second exposures. After this ongoing analysis, the question is how the light level should be reported. All these aspects are currently being collected and will be incorporated into the upcoming ISO 19093 standard that defines the measurement procedure for the low light performance of cameras.
To determine the proper exposure, cameras generally use the concept of “film speed” – a number
representing the film’s sensitivity to light. For film, this number was a function of the emulsion and processing,
changeable only in batches. However, digital cameras essentially process each shot individually, so most adopted
the idea that the film speed of the sensor could be changed for each shot. The catch is that it isn’t clear that the
sensitivity of a sensor used in a digital camera can be adjusted at all: many digital cameras have been claimed
to be “ISO-less,” capable of producing similar images for the same exposure independent of the ISO setting used.
This paper will present the results of testing the ISO-less behavior of various digital cameras, concluding with
a simple proposal for how these results could be used to create a new paradigm for computing exposure and
processing parameters.
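One way such ISO-less testing can be expressed, assuming linear raw frames of the same scene and exposure are available: compare the shadow noise of a high-ISO capture against a base-ISO capture that is brightened digitally by the same factor. This is a sketch of a plausible test, not the paper's exact protocol.

```python
import numpy as np

def isoless_score(raw_high_iso, raw_base_iso, analog_gain_ratio, dark_mask):
    """Compare shadow noise after analog vs. digital gain.

    raw_high_iso, raw_base_iso: linear raw frames of the same scene and
        exposure, captured at high and base ISO (hypothetical data).
    analog_gain_ratio: ISO_high / ISO_base.
    dark_mask: boolean mask selecting a deep-shadow region.
    Returns the ratio of shadow noise (std dev) digital-gain / analog-gain;
    a ratio near 1 suggests "ISO-less" behavior.
    """
    pushed = raw_base_iso * analog_gain_ratio          # digital push of the base-ISO frame
    noise_digital = pushed[dark_mask].std()
    noise_analog = raw_high_iso[dark_mask].std()
    return noise_digital / max(noise_analog, 1e-12)
```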
In the presence of light bloom or glow, multiple peaks may appear in the focus profile and mislead the autofocus system
of a digital camera to an incorrect in-focus decision. We present a novel method to overcome the blooming effect. The
key idea behind the method is based on the observation that multiple peaks are generated due to the presence of false
features in the captured image, which, in turn, are due to the presence of fringe (or feather) of light extending from the
border of the bright image area. By detecting the fringe area and excluding it from focus measurement, the blooming
effect can be reduced. Experimental results show that the proposed anti-blooming method can indeed improve the
performance of an autofocus system.
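A minimal sketch of the idea: detect near-saturated regions, dilate them to cover the surrounding fringe, and evaluate a gradient-energy focus measure only outside that mask. The threshold and fringe width below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def focus_measure_antibloom(gray, bright_thresh=0.95, fringe_width=15):
    """Gradient-energy focus measure that excludes blooming fringe areas.

    Bright (near-saturated) regions are detected and dilated by
    `fringe_width` pixels to cover the surrounding fringe of light; the
    focus measure is computed only outside this mask.
    """
    bright = gray >= bright_thresh
    fringe = binary_dilation(bright, iterations=fringe_width)
    gy, gx = np.gradient(gray)
    energy = gx**2 + gy**2
    valid = ~fringe
    return energy[valid].mean() if valid.any() else 0.0
```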
Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users. Their performance as well as their versatility increases over time. This creates the opportunity to use such devices for more specific tasks, such as image processing in an industrial context. For the analysis of images, requirements such as image quality (blur, illumination, etc.) as well as a defined relative position between the device and the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis on mobile devices is significantly improved. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide the user in moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated: it is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
In backward compatible HDR image/video compression, a common approach is to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a two-piece second-order polynomial has better mapping accuracy than a single high-order polynomial or a two-piece linear model, but it is also the most time-consuming method because finding the optimal pivot point at which to split the LDR range into two pieces requires an exhaustive search. In this paper, we propose a fast algorithm that computes the optimal two-piece second-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as a sum of basic terms, which can be pre-calculated into look-up tables. Since solving the matrix reduces to looking up values in tables, the computation time barely differs regardless of the number of pivot candidates searched. Hence, we can carry out the most thorough pivot point search and find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while reducing computation time by a factor of 60 compared to the traditional exhaustive search for two-piece second-order polynomial inverse tone mapping with a continuity constraint.
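The look-up-table idea can be sketched as follows: prefix sums of x^k and x^k·y over the LDR codes make the normal-equation entries for any code range available in O(1), so every candidate pivot can be evaluated cheaply. The continuity constraint mentioned above is omitted in this simplified sketch.

```python
import numpy as np

def build_prefix_tables(ldr, hdr, levels=256):
    """Prefix sums over LDR codes of x^k, x^k*y and y^2, so the least-squares
    normal equations for any code range can be assembled in O(1)."""
    x = ldr.ravel().astype(np.int64)
    y = hdr.ravel().astype(np.float64)
    Sx = [np.cumsum(np.bincount(x, weights=np.float_power(x, k), minlength=levels))
          for k in range(5)]
    Sxy = [np.cumsum(np.bincount(x, weights=np.float_power(x, k) * y, minlength=levels))
           for k in range(3)]
    Syy = np.cumsum(np.bincount(x, weights=y * y, minlength=levels))
    return Sx, Sxy, Syy

def segment_fit(Sx, Sxy, Syy, lo, hi):
    """Fit y ~ a0 + a1*x + a2*x^2 for LDR codes in [lo, hi] from the tables."""
    rng = lambda S: S[hi] - (S[lo - 1] if lo > 0 else 0.0)
    A = np.array([[rng(Sx[i + j]) for j in range(3)] for i in range(3)])
    b = np.array([rng(Sxy[i]) for i in range(3)])
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    sse = rng(Syy) - 2 * c @ b + c @ A @ c      # residual error from the same tables
    return c, sse

def best_pivot(ldr, hdr, levels=256):
    Sx, Sxy, Syy = build_prefix_tables(ldr, hdr, levels)
    best = (np.inf, None)
    for p in range(2, levels - 3):              # O(1) work per candidate pivot
        c_lo, e_lo = segment_fit(Sx, Sxy, Syy, 0, p)
        c_hi, e_hi = segment_fit(Sx, Sxy, Syy, p + 1, levels - 1)
        if e_lo + e_hi < best[0]:
            best = (e_lo + e_hi, (p, c_lo, c_hi))
    return best[1]
```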
We present a novel technique for the problem of super-resolution of facial data. The method uses a patch-based technique, and for each low-resolution input image patch, we seek the best matching patches from a database of face images using the Coherency Sensitive Hashing technique. Coherency Sensitive Hashing relies on hashing to combine image coherence cues and image appearance cues to effectively find matching patches in images. This differs from existing methods that apply a high-pass filter on input patches to extract local features. We then perform a weighted sum of the best matching patches to get the enhanced image. We compare with state-of-the-art techniques and observe that the approach provides better performance in terms of both visual quality and reconstruction error.
This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity compared to the traditional black-and-white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations introduced by common image compression algorithms have on the decoding process. Results show that successful decoding can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
Image mosaicing is the act of combining two or more images and is used in many applications in computer vision, image processing, and computer graphics. It aims to combine images such that no obstructive boundaries exist around overlapped regions and to create a mosaic image that exhibits as little distortion as possible from the original images. Most existing algorithms are computationally complex and do not always produce good results when the images to be stitched differ in scale, lighting, and viewpoint. In this paper we consider an algorithm that increases the processing speed when stitching high-resolution images. We reduce the computational complexity by using edge image analysis and a saliency map on highly detailed areas. On the detected areas, the rotation angles, scaling factors, color correction coefficients and the transformation matrix are determined. We detect key points using the SURF detector and reject false correspondences based on correlation analysis. The proposed algorithm can combine images taken from arbitrary viewpoints with different color balances, shutter times and scales. We perform a comparative study and show that, statistically, the new algorithm delivers good-quality results compared to existing algorithms.
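The saliency analysis, color correction and correlation-based rejection steps are not reproduced here; the keypoint-matching and transformation-estimation core can be sketched with OpenCV as follows (SURF requires the opencv-contrib build; ORB is used as a fallback, which is a substitution, not the paper's choice).

```python
import cv2
import numpy as np

def estimate_homography(img1, img2):
    """Match keypoints between two images and estimate a homography with RANSAC."""
    try:
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        norm = cv2.NORM_L2
    except (AttributeError, cv2.error):
        detector = cv2.ORB_create(nfeatures=2000)   # fallback if SURF is unavailable
        norm = cv2.NORM_HAMMING
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(norm, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers

# H maps points of img1 into the frame of img2, e.g. for warping with
# cv2.warpPerspective before blending the mosaic.
```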