This PDF file contains the front matter associated with SPIE Proceedings Volume 6498, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Fluorescence molecular tomography (FMT) is an emerging modality for in vivo imaging of fluorescent probes
that improves upon existing planar photographic imaging techniques by quantitatively reconstructing fluorochrome
distributions in vivo. We present results from an FMT system capable of full-view imaging for arbitrary surface
geometries. Results compare single- and multiple-projection configurations and illustrate the need for
properly implemented non-negativity constraints.
Pinhole imaging is a promising approach for high-spatial-resolution single-gamma emission
imaging in situations where the required field of view (FOV) is small, as is the case in small
animal imaging. However, all pinhole collimators exhibit a steep decrease in sensitivity with
increasing angle of incidence from the pinhole axis. This in turn degrades the reconstructed
images and requires a higher dose of radiotracer. We developed a novel pinhole SPECT system
for small animal imaging that uses two opposing and offset small-cone-angle square
pinholes, each viewing half of the FOV. This design allows the pinholes to be placed closer to
the object and greatly increases detection efficiency and spatial resolution, while not requiring
larger detectors. Iterative image reconstruction algorithms for this system have been developed. Preliminary experimental data demonstrate marked improvement in contrast and spatial resolution.
In this work we propose a spatio-temporal approach for reconstruction of dynamic gated cardiac SPECT images.
As in traditional gated cardiac SPECT, the cardiac cycle is divided into a number of gate intervals, but the
tracer distribution is treated as a time-varying signal for each gate. Our goal is to produce a dynamic image
sequence that shows both cardiac motion and time-varying tracer distribution. To combat the ill-conditioned
nature of the problem, we use B-spline basis functions to regularize the time-activity curves, and apply a joint
MAP estimation approach based on motion compensation for reconstruction of all the different gates. We also
explore the benefit of using a time-varying regularization prior for the gated dynamic images. The proposed
approach is evaluated using a dynamic version of the 4D gated mathematical cardiac torso (gMCAT) phantom,
simulating a gated SPECT perfusion acquisition with Technetium-99m labeled Teboroxime.
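The temporal model can be made concrete with a small sketch: a clamped cubic B-spline basis is built over the scan interval, and a time-activity curve (TAC) is expressed through a handful of coefficients. The scan length, knot placement, and coefficient values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bspline_basis(t, knots, i, k):
    """Cox-de Boor recursion: the i-th B-spline of degree k evaluated at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + k] > knots[i]:
        val += ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(t, knots, i, k - 1))
    if knots[i + k + 1] > knots[i + 1]:
        val += ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                * bspline_basis(t, knots, i + 1, k - 1))
    return val

# Assumed setup: a 60 s acquisition, 60 frame times, 6 cubic spline coefficients.
degree, n_coef = 3, 6
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0.0, 60.0, n_coef - degree + 1),
                        np.full(degree, 60.0)])
coef = np.array([0.0, 2.0, 5.0, 4.0, 3.0, 2.5])  # illustrative kinetics

times = np.linspace(0.0, 59.9, 60)
B = np.array([[bspline_basis(t, knots, i, degree) for i in range(n_coef)]
              for t in times])
tac = B @ coef  # smooth time-activity curve at the frame times
```

With this parameterization the reconstruction estimates 6 coefficients per voxel instead of 60 per-frame activities, which is what regularizes the otherwise ill-conditioned dynamic problem.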
Models used to derive image reconstruction algorithms typically make assumptions designed to increase the
computational tractability of the algorithms while taking enough account of the physics to achieve desired
performance. As the models for the physics become more detailed, the algorithms typically increase in complexity,
often due to increases in the number of parameters in the models. When parameters are estimated from measured
data and models of increased complexity include those of lower complexity as special cases, then as the number of
parameters increases, model errors decrease and estimation errors increase. We adopt an information geometry
approach to quantify the loss due to model errors and Fisher information to quantify the loss due to estimation
errors. These are unified into one cost function. This approach is detailed in an X-ray transmission tomography
problem where all models are approximations to the underlying problem defined on the continuum. Computations
and simulations demonstrate the approach. The analysis provides tools for determining an appropriate model
complexity for a given problem and bounds on information that can be extracted.
Image stitching combines several images into one wide-angle mosaic image. Traditionally, mosaic
images have been constructed from a few separate photographs, but now that video recording has become
commonplace even on mobile phones, video sequences can also be considered as a source for mosaic images.
However, most stitching methods require vast amounts of computational resources that make them unusable on
mobile devices.
We present a novel panorama stitching method designed to create high-quality image mosaics from
both video clips and separate images, even on low-resource devices. The software can create both 360-degree
panoramas and perspective-corrected mosaics. Its features include, among others, detection of moving objects,
inter-frame color balancing, and rotation correction. The application selects only the highest-quality frames
for the final mosaic image; low-quality frames are dropped on the fly while recording frames for the mosaic.
The complete software is implemented in Matlab, but a mobile phone version also exists. We present a
complete solution from frame acquisition to panorama output, with different resource profiles to suit various
platforms.
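The on-the-fly frame selection can be sketched with a simple quality score. The variance-of-Laplacian sharpness measure and the 50% threshold below are assumptions chosen for illustration; the paper does not specify its exact quality metric.

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian: a common focus/blur score."""
    lap = (-4.0 * frame
           + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
    return lap.var()

# One sharp random-texture frame and a crudely blurred copy of it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = sharp.copy()
for _ in range(5):  # repeated 4-neighbor averaging as a stand-in for motion blur
    blurred = 0.25 * (blurred + np.roll(blurred, 1, 0)
                      + np.roll(blurred, -1, 0) + np.roll(blurred, 1, 1))

threshold = 0.5 * sharpness(sharp)  # assumed quality cutoff
keep = [f for f in (sharp, blurred) if sharpness(f) >= threshold]
```

Frames scoring below the threshold never enter the mosaic pipeline, which keeps both memory use and stitching time low on constrained devices.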
If multiple images of a scene are available instead of a single image, we can use the additional information
conveyed by the set of images to generate a higher quality image. This can be done along multiple dimensions.
Super-resolution algorithms use a set of shifted and rotated low resolution images to create a high resolution
image. High dynamic range imaging techniques combine images with different exposure times to generate an
image with a higher dynamic range. In this paper, we present a novel method to combine both techniques and
construct a high resolution, high dynamic range image from a set of shifted images with varying exposure times.
We first estimate the camera response function, and convert each of the input images to an exposure invariant
space. Next, we estimate the motion between the input images. Finally, we reconstruct a high resolution, high
dynamic range image by interpolating from the non-uniformly sampled pixels. Applications of such an
approach can be found in various domains, such as surveillance cameras and consumer digital cameras.
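The mapping to an exposure-invariant space can be sketched as follows, assuming for illustration a known gamma-type camera response (in the paper the response function is estimated from the images themselves):

```python
GAMMA = 2.2  # assumed response f(E) = E**(1/GAMMA); the paper instead
             # estimates the camera response function from the data

def to_irradiance(pixel, exposure_time):
    """Invert the response, then divide out the exposure time."""
    radiant_exposure = pixel ** GAMMA
    return radiant_exposure / exposure_time

# The same scene point, captured with two shutter speeds, should map to
# (nearly) the same exposure-invariant value.
E = 0.3                                        # "true" irradiance, arbitrary units
p_short = min((E * 0.01) ** (1 / GAMMA), 1.0)  # simulated pixel at t = 0.01 s
p_long = min((E * 0.05) ** (1 / GAMMA), 1.0)   # simulated pixel at t = 0.05 s

e1 = to_irradiance(p_short, 0.01)
e2 = to_irradiance(p_long, 0.05)
```

Once all input images live in this common irradiance domain, pixels from differently exposed frames can be registered and fused directly.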
This paper presents a multi-image registration method, which aims at recognizing and extracting multiple panoramas from an
unordered set of images without user input. A method for panorama recognition introduced by Lowe and Brown is based
on extraction of a full set of scale invariant image features and fast matching in feature space, followed by post-processing
procedures. We propose a different approach, where the full set of descriptors is not required, and a small number of them are
used to register a pair of images. We propose feature point indexing based on corner strength value. By matching descriptor
pairs with similar corner strengths we update clusters in rotation-scale accumulators, and a probabilistic approach determines
when these clusters are further processed with RANSAC to find inliers of image homography. If the number of inliers and
global similarity between images are sufficient, a fast geometry-guided point matching is performed to improve the accuracy
of registration. A global registration graph, whose node weights are proportional to the image similarity in the area of overlap,
is updated with each new registration. This allows the prediction of undiscovered image registrations by finding the shortest
paths and corresponding transformation chains. We demonstrate our approach using typical image collections containing
multiple panoramic sequences.
With the ever increasing computational power of modern day processors, it has become feasible to use more
robust and computationally complex algorithms that increase the resolution of images without distorting edges
and contours. We present a novel image interpolation algorithm that uses the new contourlet transform to
improve the regularity of object boundaries in the generated images. By using a simple wavelet-based linear
interpolation scheme as our initial estimate, we use an iterative projection process based on two constraints
to drive our solution towards an improved high-resolution image. Our experimental results show that our new
algorithm significantly outperforms linear interpolation in subjective quality, and in most cases, in terms of
PSNR as well.
We present a 4D spatiotemporal segmentation algorithm based on the Mumford-Shah functional coupled with
shape priors. When used in a clinical setting, our algorithm could greatly reduce the time that clinicians
must spend working with the acquired data to manually retrieve diagnostically meaningful measurements. The
advantage of the 4D algorithm is that segmentation occurs in both space and time simultaneously, improving
accuracy and robustness over existing 2D and 3D methods. The segmentation contour or hyper-surface is a zero
level set function in 4D space that exploits the coherence within continuous regions not only between spatial
slices, but between consecutive time samples as well. Shape priors are incorporated into the segmentation to limit
the result to a known shape. Variations in shape are computed using principal component analysis (PCA) of a
signed distance representation of the training data, derived from manual segmentation of 18 carefully selected data
sets. The automatic segmentation occurs by manipulating the parameters of this signed distance representation
to minimize a predetermined energy functional. Several tests are presented to show the consistency and accuracy
of the novel automatic 4D segmentation process.
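The PCA shape prior can be sketched in a few lines: stack signed distance maps of the training shapes, extract the principal modes with an SVD, and parameterize candidate segmentations by a small coefficient vector. The circles below are a stand-in for the 18 manually segmented training sets; `signed_distance_circle`, the grid size, and the radii are all illustrative.

```python
import numpy as np

def signed_distance_circle(n, cx, cy, r):
    """Signed distance to a circle on an n x n grid (negative inside)."""
    y, x = np.mgrid[0:n, 0:n]
    return np.hypot(x - cx, y - cy) - r

# Stand-in training set: 18 circles of slightly varying radius.
n = 32
train = np.stack([signed_distance_circle(n, 16, 16, 8 + 0.2 * i).ravel()
                  for i in range(18)])
mean = train.mean(axis=0)

# PCA via SVD of the centered maps; rows of Vt are the modes of variation.
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
modes = Vt[:3]

# A candidate segmentation is phi(w) = mean + w @ modes; only w is optimized.
w = np.array([1.0, 0.0, 0.0])
phi = (mean + w @ modes).reshape(n, n)
```

Because only the mode coefficients w vary during optimization, the segmentation is confined to the learned shape space.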
This paper describes the application of the expectation-maximization/maximization of the posterior marginals
(EM/MPM) algorithm to serial section images, which inherently represent three dimensional (3D) data. The images of
interest are electron micrographs of cross sections of a titanium alloy. To improve the accuracy of the resulting
segmentation images, the images are pre-filtered before being used as input to the EM/MPM algorithm. The output of
the pre-filter at a particular pixel represents an estimate of the entropy at that pixel, based on the grayscale values of
neighboring pixels. This filter tends to be biased towards higher entropy values if an edge is present within the window
being used. This causes edges in the final segmentation to move out from higher entropy regions and into lower entropy
regions. In order to preserve the locations of these edges, a multiscale technique involving the use of an adaptive filter
window has been developed. We present experimental results demonstrating the application of this technique.
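A fixed-window version of the entropy pre-filter can be sketched as below; the window half-size and the number of gray levels are assumed values, and the paper's contribution is precisely to make this window adaptive near edges.

```python
import numpy as np

def local_entropy(img, half, levels=8):
    """Histogram entropy of gray levels in a (2*half+1)^2 window per pixel."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize graylevels
    out = np.zeros(img.shape)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = q[max(0, y - half):y + half + 1,
                    max(0, x - half):x + half + 1]
            p = np.bincount(win.ravel(), minlength=levels) / win.size
            p = p[p > 0]
            out[y, x] = -(p * np.log2(p)).sum()
    return out

# Toy image: flat on the left, random texture on the right.
rng = np.random.default_rng(1)
img = np.zeros((16, 16))
img[:, 8:] = rng.random((16, 8))
ent = local_entropy(img, half=2)
```

On the test image the flat region scores zero entropy while the textured region scores high, which is the contrast the EM/MPM segmentation exploits.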
Image segmentation using the piecewise smooth variational model proposed by Mumford and Shah is both robust and computationally
expensive. Fortunately, both the intermediate segmentations computed in the process of the evolution, and the
final segmentation itself have a common structure. They typically resemble a linear combination of blurred versions of the
original image. In this paper, we present methods for fast approximations to Mumford-Shah segmentation using reduced
image bases. We show that the majority of the robustness of Mumford-Shah segmentation can be obtained without allowing
each pixel to vary independently in the implementation. We illustrate segmentations of real images that show how the
proposed segmentation method is both computationally inexpensive, and has comparable performance to Mumford-Shah
segmentations where each pixel is allowed to vary freely.
Representation and comparison of shapes is a problem with many applications in computer vision and imaging, including object recognition and medical diagnosis. We will discuss some constructions from the theory of conformal mapping which provide ways to represent and compare planar shapes. It is a remarkable fact that conformal maps from the unit disk to a planar domain encode the geometry of the domain in useful and tangible ways. Two examples of the relationship between conformal mapping and geometry are provided by the medial axis and the boundary curvature of a planar domain. Both the medial axis and the boundary curvature can be used in applications to compare and describe shapes and both appear clearly in the conformal structure. Here we introduce some results demonstrating how conformal mapping encodes the geometry of a planar domain.
The theory of compressive sensing (CS) enables the reconstruction of a sparse or compressible
image or signal from a small set of linear, non-adaptive (even random) projections. However, in
many applications, including object and target recognition, we are ultimately interested in making
a decision about an image rather than computing a reconstruction. We propose here a framework
for compressive classification that operates directly on the compressive measurements without first
reconstructing the image. We dub the resulting dimensionally reduced matched filter the smashed
filter. The first part of the theory maps traditional maximum likelihood hypothesis testing into the
compressive domain; we find that the number of measurements required for a given classification
performance level does not depend on the sparsity or compressibility of the images but only on
the noise level. The second part of the theory applies the generalized maximum likelihood method
to deal with unknown transformations such as the translation, scale, or viewing angle of a target
object. We exploit the fact that the set of transformed images forms a low-dimensional, nonlinear
manifold in the high-dimensional image space. We find that the number of measurements required
for a given classification performance level grows linearly in the dimensionality of the manifold but
only logarithmically in the number of pixels/samples and image classes. Using both simulations
and measurements from a new single-pixel compressive camera, we demonstrate the effectiveness
of the smashed filter for target classification using very few measurements.
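The heart of the smashed filter, hypothesis testing directly in the measurement domain, can be sketched as follows. The sizes, the two templates, and the nearest-template decision rule are illustrative assumptions; the paper additionally handles unknown transformations by searching over an image manifold.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 20                                  # pixels, compressive measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix

# Two hypothetical class templates, known at the classifier.
t0 = np.zeros(n); t0[:128] = 1.0
t1 = np.zeros(n); t1[128:] = 1.0

def smashed_filter(y, templates):
    """Choose the class whose *compressed* template lies closest to y."""
    dists = [np.linalg.norm(y - Phi @ t) for t in templates]
    return int(np.argmin(dists))

x = t1 + 0.01 * rng.standard_normal(n)  # noisy instance of class 1
y = Phi @ x                             # the image itself is never reconstructed
label = smashed_filter(y, [t0, t1])
```

Note that classification uses only the m = 20 compressive samples; the n = 256 pixel image is never reconstructed.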
We consider two natural extensions of the standard ℓ1 minimization framework for compressive sampling. The
two methods, one based on penalizing second-order derivatives and one based on the redundant wavelet transform,
can also be viewed as variational methods which extend the basic total-variation recovery program. A numerical
example illustrates that these methods tend to produce smoother images than the standard recovery programs.
Bayer pattern color filter arrays (CFAs) are widely used in digital photo and video cameras. The captured images
are generally corrupted by signal- and exposure-dependent quantum noise. Automatic in-camera processing usually
applies gamma correction, color correction, and interpolation; afterwards the noise in the image is no longer
quantum-limited and becomes spatially correlated, which drastically reduces the effectiveness of any posterior
noise reduction. Considerably better output quality can be achieved if the unprocessed Bayer pattern CFA data
(in RAW format) are extracted from the camera and processed on a PC: more effective noise reduction is possible,
and better image reconstruction algorithms can be applied. The main drawback of storing images in RAW format
is their rather large size, and existing lossless compression methods achieve compression ratios (CRs) of only
about 1.5 to 2 for such images. At the same time, posterior filtering introduces its own losses in addition to
reducing noise, so the use of lossy compression methods is defensible in this case as long as the resulting loss
of noise-reduction effectiveness is inessential. This paper describes a method of adaptively selecting the
quantization step for each block of a Bayer pattern CFA in DCT-based image compression. The method limits the
degradation of posterior noise reduction to only 0.25-0.3 dB, while the achieved CRs are 2.5 to 5 times higher
than those of strictly lossless compression methods.
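The block-adaptive quantization can be sketched as follows. The orthonormal DCT and round-to-step quantizer are standard; the specific rule q = 2*sigma tying the step to the local noise estimate is an assumption for illustration, since the paper's exact selection rule is not reproduced here.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

C = dct_matrix(8)

def code_block(block, noise_std):
    """Transform an 8x8 block and quantize with a noise-adaptive step.
    The rule q = 2*noise_std is an assumed stand-in for the paper's rule."""
    q = 2.0 * noise_std
    coef = C @ block @ C.T
    return np.round(coef / q), q

def decode_block(quant, q):
    return C.T @ (quant * q) @ C

rng = np.random.default_rng(3)
clean = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))  # smooth gradient block
noisy = clean + 0.02 * rng.standard_normal((8, 8))
quant, q = code_block(noisy, noise_std=0.02)
err = np.abs(decode_block(quant, q) - noisy).max()
```

Tying q to the noise standard deviation keeps the quantization error below the noise floor, which is why the posterior denoising loses only a fraction of a dB.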
In this work we develop a spectral imaging system and associated reconstruction methods that have been designed
to exploit the theory of compressive sensing. Recent work in this emerging field indicates that when the
signal of interest is very sparse (i.e. zero-valued at most locations) or highly compressible in some basis, relatively
few incoherent observations are necessary to reconstruct the most significant non-zero signal components.
Conventionally, spectral imaging systems measure complete data cubes and are subject to performance limiting
tradeoffs between spectral and spatial resolution. We achieve single-shot full 3D data cube estimates by using
compressed sensing reconstruction methods to process observations collected using an innovative, real-time,
dual-disperser spectral imager. The physical system contains a transmissive coding element located between
a pair of matched dispersers, so that each pixel measurement is the coded projection of the spectrum in the
corresponding spatial location in the spectral data cube. Using a novel multiscale representation of the spectral
image data cube, we are able to accurately reconstruct 256×256×15 spectral image cubes using just 256×256
measurements.
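Per detector pixel, the dual-disperser measurement model reduces to a coded sum over the spectral axis. A minimal sketch with assumed sizes and a random binary code (the real instrument's code and its shift structure are more constrained than this):

```python
import numpy as np

rng = np.random.default_rng(4)
nx, ny, nl = 8, 8, 15            # assumed spatial size and spectral band count
cube = rng.random((nx, ny, nl))  # hypothetical spectral data cube

# Each detector pixel records its own spatial location, spectrally modulated
# by the (here random, binary) code: y(x, y) = sum_l c(x, y, l) * f(x, y, l).
code = rng.integers(0, 2, size=(nx, ny, nl)).astype(float)
y = (code * cube).sum(axis=2)    # one 2-D frame encodes the 3-D cube
```

A single 8x8 frame thus encodes the 8x8x15 cube; recovering the cube from y is the underdetermined inverse problem that the sparse multiscale reconstruction solves.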
Commodity graphics hardware boards (GPUs) have achieved remarkable speedups in various sub-areas of Computed
Tomography (CT). This paper takes a close look at the GPU architecture and its programming model and describes a
successful acceleration of Feldkamp's cone-beam CT reconstruction algorithm. Further, we will also have a comparative
look at the new emerging Cell architecture in this regard, which similar to GPUs has also seen its first deployment in
gaming and entertainment. To complete the discussion on high-performance PC-based computing platforms, we will
also compare GPUs with FPGA (Field Programmable Gate Array) based medical imaging solutions.
Tomographic image reconstruction, such as the reconstruction of CT projection values, tomosynthesis data,
or PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative
reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited
by the memory bandwidth.
Recently, a novel general-purpose architecture optimized for distributed computing became available: the
Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical
performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock).
To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection
algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor [1-3]. In
addition, we implemented an optimized perspective forwardprojection on the CBE, which allows us to perform
statistical image reconstructions such as the ordered subset convex (OSC) algorithm [4].
Performance was measured using simulated data with 512 projections per rotation and 512² detector elements.
The data were backprojected into an image of 512³ voxels using our PC-based approaches and the new CBE-based
algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency.
On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the
perspective backprojection, and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp
reconstruction followed by 4 OSC iterations.
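The voxel-driven inner loop that dominates such timings can be sketched as an unfiltered parallel-beam backprojection; the nearest-neighbor interpolation and toy point phantom are simplifications of the optimized CBE kernels described above.

```python
import numpy as np

def backproject(sinogram, angles, n):
    """Unfiltered parallel-beam backprojection onto an n x n grid: the
    memory-bound inner loop that both the PC and CBE codes optimize."""
    recon = np.zeros((n, n))
    xs = np.arange(n) - (n - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    n_det = sinogram.shape[1]
    for view, theta in zip(sinogram, angles):
        # detector coordinate of every voxel for this view (nearest neighbor)
        t = X * np.cos(theta) + Y * np.sin(theta) + (n_det - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += view[idx]
    return recon / len(angles)

# Toy check: backprojecting a point at the isocenter should peak at the center.
n, n_det = 32, 32
angles = np.linspace(0.0, np.pi, 64, endpoint=False)
sino = np.zeros((64, n_det))
sino[:, (n_det - 1) // 2] = 1.0
img = backproject(sino, angles, n)
```

Each view touches every voxel once, which is why the step is memory-bandwidth bound and benefits from the SPEs' fast local stores.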
Three-dimensional iterative reconstruction of large CT data sets poses several challenges in terms of the associated
computational and memory requirements. In this paper, we present results obtained by implementing
a computational framework for reconstructing axial cone-beam CT data using a cluster of inexpensive dual-processor
PCs. In particular, we discuss our parallelization approach, which uses POSIX threads and message
passing (MPI) for local and remote load distribution, as well as the interaction of that load distribution with
the implementation of ordered subset based algorithms. We also consider a heuristic data-driven 3D focus of
attention algorithm that reduces the amount of data that must be considered for many data sets. Furthermore,
we present a modification to the SIRT algorithm that reduces the amount of data that must be communicated
between processes. Finally, we introduce a method of separating the work in such a way that some computation
can be overlapped with the MPI communication thus further reducing the overall run-time. We summarize the
performance results using reconstructions of experimental data.
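The SIRT update that the communication-reducing modification targets can be sketched on a toy dense system; the matrix sizes and iteration count below are arbitrary illustrative choices.

```python
import numpy as np

def sirt(A, b, n_iter=5000):
    """SIRT update x <- x + C A^T R (b - A x), where R and C hold the
    inverse row and column sums of the (nonnegative) system matrix A."""
    row = A.sum(axis=1)
    col = A.sum(axis=0)
    R = np.where(row > 0, 1.0 / row, 0.0)
    C = np.where(col > 0, 1.0 / col, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# Toy consistent system standing in for the projection operator.
rng = np.random.default_rng(5)
A = rng.random((40, 10))
x_true = rng.random(10)
b = A @ x_true
x_hat = sirt(A, b)
```

In the cluster setting, the expensive parts are the two matrix-vector products, A x and A^T r, whose partial results are what the processes must exchange; overlapping that exchange with computation is what reduces the overall run-time.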
Cone-beam reconstruction (CBR) is useful for producing volume images from projections in many fields including
medicine, biomedical research, baggage scanning, paleontology, and nondestructive manufacturing inspection. CBR
converts a set of two-dimensional (2-D) projections into a three-dimensional (3-D) image of the projected object. The
most common algorithm used for CBR is referred to as the Feldkamp-Davis-Kress (FDK) algorithm; this involves
filtering and cone-beam backprojection steps for each projection of the set. Over the past decade we have observed or
studied FDK on platforms based on many different processor types, both single-processor and parallel-multiprocessor
architectures. In this paper we review the different platforms, in terms of design considerations that include speed,
scalability, ease of programming, and cost. In the past few years, the availability of programmable special processors
(i.e. graphical processing units [GPUs] and Cell Broadband Engine [BE]), has resulted in platforms that meet all the
desirable considerations simultaneously.
Bilateral filtering [1, 2] has proven to be a powerful tool for adaptive denoising. Unlike conventional filters,
the bilateral filter defines the closeness of two pixels not only by geometric distance but also by
radiometric (graylevel) distance. In this paper, to further improve its performance and find new applications,
we make contact with a classic non-parametric image reconstruction technique called kernel regression [3], which
is based on local Taylor expansions of the regression function. We extend and generalize the kernel regression
method and show that bilateral filtering is a special case of this new class of adaptive image reconstruction
techniques, corresponding to a specific choice of weighting kernels and a zeroth-order Taylor approximation. We
show that improvements over classic bilateral filtering can be achieved by using higher-order local
approximations of the signal.
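The special case recovered by the zeroth-order expansion is the classic bilateral filter itself, sketched below with assumed kernel widths:

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, half=3):
    """Bilateral filter as zeroth-order kernel regression: a weighted average
    under the product of a spatial and a radiometric Gaussian kernel."""
    h, w = img.shape
    out = np.empty_like(img)
    d = np.arange(-half, half + 1)
    DX, DY = np.meshgrid(d, d)
    spatial = np.exp(-(DX ** 2 + DY ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, half, mode='edge')
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * half + 1, x:x + 2 * half + 1]
            radiometric = np.exp(-(win - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * radiometric
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out

# A noisy step edge: flats should smooth, the step should stay sharp.
rng = np.random.default_rng(6)
step = np.zeros((16, 16))
step[:, 8:] = 1.0
noisy = step + 0.05 * rng.standard_normal((16, 16))
den = bilateral(noisy)
```

Higher-order variants replace the locally constant model implicit in this weighted average with local polynomial fits under the same combined kernel.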
It is well-known that Total Variation (TV) minimization with L2 data fidelity terms (which corresponds to white
Gaussian additive noise) yields a restored image which presents some loss of contrast. The same behavior occurs
for TV models with non-convex data fidelity terms that represent speckle noise. In this note we propose a new
approach to cope with the restoration of Synthetic Aperture Radar images while preserving the contrast.
A nonlocal quadratic functional of weighted differences is examined. The weights are based on image features
and represent the affinity between different pixels in the image. By prescribing different formulas for the weights,
one can generalize many local and nonlocal linear denoising algorithms, including nonlocal means and bilateral
filters. The steepest descent for minimizing the functional can be interpreted as a nonlocal diffusion process. We
show state-of-the-art denoising results using the nonlocal flow.
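A minimal sketch of the quadratic nonlocal functional and its steepest-descent flow, using NL-means-style patch weights on a 1D signal. The patch size, kernel width `h`, and row normalization are illustrative assumptions, not the paper's exact weight formulas.

```python
import numpy as np

def nl_weights(f, patch=3, h=0.15):
    """Affinities w_ij from patch similarity (NL-means style): pixels whose
    surrounding patches look alike get large weights."""
    n = len(f)
    fp = np.pad(f, patch, mode='edge')
    P = np.stack([fp[i:i + 2 * patch + 1] for i in range(n)])
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / h**2)
    np.fill_diagonal(w, 0.0)
    return w

def nl_diffusion(f, steps=20, dt=0.5):
    """Steepest descent on J(u) = (1/4) sum_ij w_ij (u_i - u_j)^2, i.e. the
    nonlocal diffusion u' = W u - u after row-normalizing the weights."""
    w = nl_weights(f)
    w = w / w.sum(axis=1, keepdims=True)
    u = f.copy()
    for _ in range(steps):
        u += dt * (w @ u - u)
    return u

# Noisy step: cross-edge patches are dissimilar, so the edge is preserved.
rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(30), np.ones(30)]) + 0.05 * rng.standard_normal(60)
u = nl_diffusion(f)
```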
In any real optical imaging system, some portion of the entering light flux is
misdirected to undesired locations due to scattering from surface imperfections
and multiple reflections between optical elements. This unwanted light is called
stray light. Its effects include lower contrast, reduced detail, and color inaccuracy. Accurate removal of stray-flux effects first requires determination of the stray light point-spread function (PSF) of the system. For digital still cameras,
we assume a parametric, shift-variant, rotationally invariant PSF model. For
collection of data to estimate the parameters of this model, we use a light
source box that provides nearly uniform illumination behind a circular aperture.
Several images of this light source are captured with the source at different locations in the camera's field of
view. A second exposure of each scene, taken at a different shutter speed, provides detail in the darker
regions. A subset of the data obtained from these images is used in a nonlinear
optimization algorithm. After estimating the parameters of the PSF model, we present the results of applying the correction algorithm to images of real-world scenes.
This paper is devoted to a recent topic in image analysis: the decomposition of an image into a cartoon or
geometric part, and an oscillatory or texture part. Here, we propose a practical solution to the (BV,G) model
proposed by Y. Meyer. We impose that the cartoon is a function of bounded variation, while the texture is
represented as the Laplacian of some function whose gradient belongs to L∞. The problem thus becomes related
to absolutely minimizing Lipschitz extensions and the infinity Laplacian. Experimental results for image
denoising and cartoon + texture separation, together with details of the algorithm, are also presented.
The retinal image of a symmetric object is itself symmetric only for a small set of viewing directions. Interestingly, human
subjects have little difficulty in determining whether a given retinal image was produced by a symmetric object,
regardless of the viewing direction. We tested perception of planar (2D) symmetric figures (dotted patterns and
polygons) when the figures were slanted in depth. We found that symmetry could be detected reliably with polygons,
but not with dotted patterns. Next, we tested the role of image features representing the symmetry of the pattern itself
(orientation of the projected symmetry axis and symmetry lines) vs. those representing the 3D viewing direction
(orientation of the axis of rotation). We found that symmetry detection is improved when the projected symmetry axis
or lines are known to the subject, but not when the axis of rotation is known. Finally, we showed that performance with
orthographic images is higher than that with perspective images. A computational model, which measures the
asymmetry of the presented polygon based on its single orthographic or perspective image, is presented. Performance
of the model is similar to the performance of human subjects.
Discrimination of friendly or hostile objects is investigated using information-theoretic measures and metrics in images
that have been compromised by a number of factors. In aerial military images, objects with different orientations can
be reasonably approximated by a single identification signature consisting of the average histogram of the object under
rotations. Three different information-theoretic measures/metrics are studied as possible criteria to help classify the
objects. The first measure is the standard mutual information (MI) between the sampled object and the library object
signatures. A second measure is based on information efficiency, which differs from MI. Finally an information
distance metric is employed which determines the distance, in an information sense, between the sampled object and the
library object. It is shown that the three (parsimonious) information-theoretic variables introduced here form an
independent basis in the sense that any variable in the information channel can be uniquely expressed in terms of the
three parameters introduced here. The methodology discussed is tested on a sample set of standardized images to
evaluate their efficacy. A performance standardization methodology is presented which is based on manipulation of
contrast, brightness, and size attributes of the sample objects of interest.
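For reference, the three kinds of quantities the abstract names can be illustrated with textbook formulas computed from a joint distribution. The paper's exact definitions of "information efficiency" and "information distance" may differ, so the efficiency ratio and the variation-of-information metric below are assumptions for illustration.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a pmf given as an iterable."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def info_measures(joint):
    """Mutual information I(X;Y), the information distance H(X|Y) + H(Y|X)
    (a true metric), and an efficiency ratio I / H(X,Y), from a joint pmf."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    hxy = entropy(p for row in joint for p in row)
    mi = entropy(px) + entropy(py) - hxy
    dist = hxy - mi
    eff = mi / hxy if hxy > 0 else 0.0
    return mi, dist, eff

# Perfectly dependent variables: MI is maximal, information distance is zero.
mi, dist, eff = info_measures([[0.5, 0.0], [0.0, 0.5]])
```

The three quantities are not independent of the joint entropy (dist = H(X,Y) - I), which mirrors the abstract's point that a small set of information variables can span the channel description.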
In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the
Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and
morphing applications. This is a parameter-free method that utilizes all of the grayscale data in an image pair in a
symmetric fashion; no landmarks need to be specified for correspondence. In our work, we demonstrate a significant
improvement in computation time over the method originally proposed by Haker et al. [1]. The original algorithm was
based on gradient descent for removing the curl from an initial mass-preserving map regarded as a 2D vector field.
This requires inverting the Laplacian in each iteration, which is now computed using a full-multigrid technique,
improving computation time by a factor of two. Greater improvement is achieved by decimating the curl in a
multi-resolution framework. The algorithm was applied to 2D
short axis cardiac MRI images and brain MRI images for testing and comparison.
We study the effect of interpolation and restriction operators on the convergence of multigrid algorithms for
solving linear PDEs. Using a modal analysis of a subclass of these systems, we determine how two groups of
the modal components of the error are filtered and mixed at each step in the algorithm. We then show that the
convergence rate of the algorithm depends on both the properties of the interpolation and restriction operators
and the characteristics of the system. The analysis opens the problem of optimizing these operators: through
different choices of operators, we demonstrate a trade-off between optimizing the convergence rate and minimizing
the number of computations required per iteration.
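The interpolation and restriction operators under study can be sketched in 1D: full-weighting restriction annihilates the highest-frequency (oscillatory) mode, while a smooth mode survives a restrict-then-interpolate round trip almost unchanged. The grid size and this specific operator pair are illustrative choices, not the operators analyzed in the paper.

```python
import numpy as np

def restrict_fw(v):
    """Full-weighting restriction: coarse j <- (v[2j-1] + 2*v[2j] + v[2j+1]) / 4
    at interior points; boundary values are copied."""
    vc = v[::2].copy()
    vc[1:-1] = 0.25 * v[1:-2:2] + 0.5 * v[2:-1:2] + 0.25 * v[3::2]
    return vc

def interpolate(vc):
    """Linear interpolation back to the fine grid (coarse j -> fine 2j)."""
    v = np.zeros(2 * len(vc) - 1)
    v[::2] = vc
    v[1::2] = 0.5 * (vc[:-1] + vc[1:])
    return v

n = 17
x = np.linspace(0.0, 1.0, n)
smooth = np.sin(np.pi * x)       # low-frequency mode: survives the round trip
osc = (-1.0) ** np.arange(n)     # highest-frequency mode: killed by restriction
back = interpolate(restrict_fw(smooth))
```

This filtering of oscillatory versus smooth error components is exactly the mode mixing that the modal analysis in the paper tracks through each step of the algorithm.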
Advances in nanotechnology have resulted in a variety of exciting new nanomaterials, such as nanotubes,
nanosprings and suspended nanoparticles. Characterizing these materials is important for refining the manufacturing
process as well as for determining their optimal application. The scale of the nanocomponents makes
high-resolution imaging, such as electron microscopy, a preferred method for performing the analyses. This work
focuses on the specific problem of using transmission electron microscopy (TEM) and image processing techniques
to quantify the spatial distribution of nanoparticles suspended in a film. In particular, we focus on the
problem of determining whether or not the nanoparticles are located in a co-planar fashion. The correspondences
between particles in images acquired at different tilt angles are used as an estimate of co-planarity.
Image inpainting techniques have been widely used to remove undesired visual objects from images, such as damaged portions of photographs or people who have accidentally entered the frame. Conventionally, the missing parts of an image are completed by optimizing an objective function based on the sum of squared differences (SSD). However, the naive SSD-based objective function is not robust against intensity changes in an image, so unnatural intensity changes often appear in the completed regions. In addition, when an image has continuously changing texture patterns, the completed texture sometimes blurs due to inappropriate pattern matching. In this paper, in order to improve the quality of the completed texture, the conventional objective function is extended to account for intensity changes and spatial locality, preventing unnatural intensity changes and blur in the result. By minimizing the extended energy function, the missing regions can be completed without unnatural intensity changes or blur. In experiments, the effectiveness of the proposed method is demonstrated by applying it to various images and comparing the results with those obtained by the conventional method.
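The conventional SSD-based completion that the paper improves upon can be sketched as a 1D exemplar search: fill a hole with the source window whose surrounding context best matches the hole's context under SSD. The context width `ctx` and the periodic test signal are illustrative assumptions; the paper's intensity-change and locality terms are not modeled here.

```python
import numpy as np

def ssd_fill(y, lo, hi, ctx=4):
    """Complete y[lo:hi] by exemplar search: choose the source window whose
    context (ctx known samples on each side) minimizes SSD against the
    hole's context, then copy the source window into the hole."""
    n, w = len(y), hi - lo
    target = np.concatenate([y[lo - ctx:lo], y[hi:hi + ctx]])
    best, best_ssd = None, np.inf
    for s in range(ctx, n - w - ctx + 1):
        if s + w > lo - ctx and s < hi + ctx:
            continue                         # candidate touches the hole
        cand = np.concatenate([y[s - ctx:s], y[s + w:s + w + ctx]])
        ssd = float(np.sum((cand - target) ** 2))
        if ssd < best_ssd:
            best, best_ssd = s, ssd
    out = y.copy()
    out[lo:hi] = y[best:best + w]
    return out

# Periodic texture with a zeroed-out hole at samples 30..33.
t = np.sin(2 * np.pi * np.arange(64) / 8.0)
y = t.copy()
y[30:34] = 0.0
z = ssd_fill(y, 30, 34)
```

On this periodic texture a perfectly matching exemplar exists, so the hole is restored exactly; the abstract's point is that plain SSD fails once the texture's intensity drifts across the image.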
We study the problem of signal reconstruction from a periodic nonuniform set of samples. The considered
system takes samples of delayed versions of a continuous signal at a low sampling rate, with different fractional
delays for different channels. We design IIR synthesis filters so that the overall system approximates a sampling
system of high sampling rate, using techniques from the model-matching problem in control theory with available
software (such as Matlab). Unlike traditional signal processing methods, our approach uses techniques from
control theory that convert systems with fractional delays into H-norm-equivalent discrete-time systems. The
synthesis filters are designed to minimize the H∞ norm of the error system. As a consequence, the
induced error is uniformly small over all (band-limited and band-unlimited) input signals. The experiments are
also run for synthesized images.
We propose a stochastic grammar model for random-walk-like time series that has features at several temporal scales.
We use a tree structure to model these multiscale features. The inside-outside algorithm is used to estimate the model
parameters. We develop an algorithm to forecast the sign of the first difference of a time series. We illustrate the algorithm
using log-price series of several stocks and compare with linear prediction and a neural network approach. We furthermore
illustrate our algorithm using synthetic data and show that it significantly outperforms both the linear predictor and the
neural network. The construction of our synthetic data indicates what types of signals our algorithm is well suited for.
Inter-subject analysis of anatomical and functional brain imaging data requires the images to be registered to
a common coordinate system in which anatomical features are aligned. Intensity-based volume registration
methods can align subcortical structures well, but the variability in sulcal folding patterns typically results
in misalignment of the cortical surface. Conversely, surface-based registration using sulcal features can produce
excellent cortical alignment but the mapping between brains is restricted to the cortical surface. Here we describe
a method for volumetric registration that also produces a one-to-one point correspondence between cortical
surfaces. This is achieved by first parameterizing and aligning the cortical surfaces. We then use a constrained
harmonic mapping to define a volumetric correspondence between brains. Finally, the correspondence is refined
using an intensity-based warp. We evaluate the performance of our proposed method in terms of the inter-subject
alignment of expert-labeled sub-cortical structures after registration.
Analysis of functional magnetic resonance imaging (fMRI) data has been performed using both model-driven
(parametric) methods and data-driven methods. An advantage of model-driven methods is incorporation of prior
knowledge of spatial and temporal properties of the hemodynamic response (HDR). A novel analytical framework for
fMRI data has been developed that identifies multi-voxel regions of activation through iterative segmentation-based
optimization over HDR estimates for both individual voxels and regional groupings. Simulations using synthetic
activation embedded in autoregressive integrated moving average (ARIMA) noise reveal the proposed procedure to be
more sensitive and selective than conventional fMRI analysis methods (reference set: principal component analysis,
PCA; independent component analysis, ICA; k-means clustering, k=100; univariate t-test) in identification of active
regions over the range of average contrast-to-noise ratios of 0.5 to 4.0. Results of analysis of extant human data (for
which the average contrast-to-noise ratio is unknown) are further suggestive of greater statistical detection power.
Refinement of this new procedure is expected to reduce both false-positive and false-negative rates, without resorting to
filtering that can reduce the effective spatial resolution.
By acknowledging local decay and phase
evolution, single-shot parameter assessment by retrieval
from signal encoding (SS-PARSE) models each
datum as a sample from (k, t)-space rather than
k-space. This more accurate model promises better
performance at a price of more complicated reconstruction
computations. Normally, the conjugate-gradient method
is used to simultaneously estimate local image magnitude,
decay, and frequency. Each iteration of the
conjugate-gradient algorithm requires several evaluations
of the image synthesis function and one evaluation
of gradients. Because of local decay and frequency
and the non-Cartesian trajectory, fast algorithms
based on FFT cannot be effectively used to accelerate
the evaluation of the image synthesis function and gradients.
This paper presents a fast algorithm to compute
the image synthesis function and gradients by linear
combinations of FFTs. By polynomial approximation
of the exponential time function with local decay and
frequency as parameters, the image synthesis function
and gradients become linear combinations of non-
Cartesian Fourier transforms. In order to use the FFT,
one can interpolate non-Cartesian trajectories. The
quality of images reconstructed by the fast approach
presented in this paper matches that of the standard
conjugate-gradient method, with significantly
reduced computation time.
The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed
PET images. For efficient computation in reconstruction, the system model in PET can be factored into the product of
a geometric projection matrix and a detector blurring matrix, where the former is often computed analytically
and the latter is estimated using Monte Carlo simulations. In this work, we propose a method to estimate the
2D detector blurring matrix from experimental measurements. Point source data were acquired with high-count statistics
in the microPET II scanner using a computer-controlled 2-D motion stage. A monotonically convergent iterative
algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm
takes advantage of the rotational symmetry of the PET scanner with the modeling of the detector block structure. Since
the resulting blurring matrix stems from actual measurements, it can take into account the physical effects in the photon
detection process that are difficult or impossible to model in a Monte Carlo simulation. Reconstructed images of a line
source phantom show improved resolution with the new detector blurring matrix compared to the original one from the
Monte Carlo simulation. This method can be applied to other small-animal and clinical scanners.
This paper is concerned with high-resolution reconstruction in projection-reconstruction MR imaging from
angularly under-sampled k-space data. A similar problem has recently been addressed in the framework of compressed
sensing theory. Unlike the existing algorithms used in compressed sensing, this paper employs
the FOCal Underdetermined System Solver (FOCUSS), which was originally designed for EEG and MEG source
localization, to obtain sparse solutions by successively solving quadratic optimization problems. We show that FOCUSS
is very effective for projection-reconstruction MRI, because medical images are usually sparse in the image
domain, and the center region of the under-sampled radial k-space data still provides a meaningful low-resolution
image, which is essential for the convergence of FOCUSS. We applied FOCUSS to projection-reconstruction MR
imaging using a single coil. Extensive experiments confirm that high-resolution reconstructions virtually free
of angular aliasing artifacts can be obtained from severely under-sampled k-space data.
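A sketch of the basic FOCUSS iteration (Gorodnitsky and Rao's reweighted minimum-norm update) on a generic underdetermined system. The MRI sampling operator is replaced here by a random matrix, and the iteration count and ridge term `eps` are illustrative assumptions.

```python
import numpy as np

def focuss(A, b, iters=30, eps=1e-8):
    """Basic FOCUSS: reweighted minimum-norm iterations
    x_{k+1} = W_k (A W_k)^+ b with W_k = diag(x_k), which successively
    concentrates energy on a few components (a sparse solution)."""
    m, n = A.shape
    x = np.ones(n)                     # uninformative initial weighting
    for _ in range(iters):
        AW = A * x                     # A @ diag(x), columnwise scaling
        G = AW @ AW.T + eps * np.eye(m)
        q = AW.T @ np.linalg.solve(G, b)
        x = x * q                      # x_{k+1} = W_k q
    return x

# Underdetermined system (8 equations, 20 unknowns) with a 2-sparse truth.
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[3], x_true[11] = 1.5, -2.0
b = A @ x_true
x_hat = focuss(A, b)
```

Each iterate stays (approximately) feasible while the multiplicative reweighting shrinks small components toward zero, which is why the method drives toward sparse solutions of the underdetermined system.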
Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts
in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed
image quality when the dosage is low and the noise is therefore high. However, high computational
cost and long reconstruction times remain a barrier to the use of statistical reconstruction in practical applications.
Among the various iterative methods that have been studied for statistical reconstruction, iterative
coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its
fast convergence.
This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore
reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous
iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence
by focusing computation where it is most needed. Experimental results with real data indicate that the
method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
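The idea of focusing updates where they are most needed can be sketched with coordinate descent on a least-squares objective, where a greedy rule picks the coordinate with the largest gradient. This is only a stand-in illustration of the selection principle, not the paper's NH-ICD update order or its CT system model.

```python
import numpy as np

def coord_descent(A, b, sweeps=200, greedy=True):
    """Coordinate descent on 0.5*||Ax - b||^2. With greedy=True each update
    picks the coordinate with the largest gradient magnitude, i.e. it
    focuses computation where it is most needed; otherwise it cycles."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()       # running residual b - A x
    col2 = (A ** 2).sum(axis=0)      # per-coordinate curvatures ||A_j||^2
    for t in range(sweeps * n):
        g = A.T @ r                  # negative gradient
        j = int(np.argmax(np.abs(g))) if greedy else t % n
        step = g[j] / col2[j]        # exact line minimization along e_j
        x[j] += step
        r -= step * A[:, j]
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x_cd = coord_descent(A, b)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```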
Myosin filaments are important components of striated muscle and pack in a two-dimensional array.
The array can be imaged by electron microscopy of thin cross-sections which shows, for many species,
that the filaments adopt two orientations with an interesting statistical distribution. Analysis of the
micrographs and Monte Carlo modelling shows that the disorder maps to a frustrated Ising model.
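The frustration the abstract refers to can be seen on the smallest unit, a triangle of antiferromagnetic bonds, by brute-force enumeration: at best one bond per triangle stays unsatisfied, unlike on a bipartite square. This toy enumeration assumes a standard Ising energy and coupling J = 1; it is not the authors' micrograph model.

```python
from itertools import product

def energy(spins, bonds, J=1.0):
    """Antiferromagnetic Ising energy E = +J * sum over bonds of s_i * s_j:
    each satisfied bond (opposite spins) contributes -J."""
    return J * sum(spins[i] * spins[j] for i, j in bonds)

# A triangle is the minimal frustrated unit: its three AF bonds cannot all
# be satisfied, so the ground state keeps one unsatisfied bond (E = -J,
# not -3J). A square is bipartite and reaches the unfrustrated -4J.
triangle = [(0, 1), (1, 2), (2, 0)]
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
ground_tri = min(energy(s, triangle) for s in product([-1, 1], repeat=3))
ground_sq = min(energy(s, square) for s in product([-1, 1], repeat=4))
```

The resulting degenerate, disordered ground states are what make the two observed filament orientations adopt the "interesting statistical distribution" noted in the abstract.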
Conventional electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from
printed material such as books and magazines. We propose a novel descreening algorithm that removes a wide range of
Moiré-causing screen frequencies in a scanned document while preserving image sharpness and edge detail. We develop
two non-linear noise removal algorithms, resolution synthesis denoising (RSD) and modified SUSAN filtering, and use
the combination of the two to achieve a robust descreening performance. The RSD predictor is based on a stochastic
image model whose parameters are optimized in an offline training algorithm using pairs of spatially registered original
and scanned images obtained from real scanners and printers. The RSD algorithm works by classifying the local window
around the current pixel in the scanned image and then applying linear filters optimized for the selected classes. The
modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and
produces the final output of the descreening algorithm.
The performance of the descreening algorithm was evaluated on a variety of test documents obtained from different
printing sources. The experimental results demonstrate that the algorithm suppresses halftone noise without deteriorating
text and image quality.
This paper describes a computational model for image formation of in-vitro adult hippocampal progenitor (AHP)
cells in bright-field time-lapse microscopy. Although this microscopy modality barely generates sufficient contrast
for imaging translucent cells, we show that by using a stack of defocused image slices it is possible to extract
position and shape of spherically shaped specimens, such as the AHP cells. This inverse problem was solved
by modeling the physical objects and image formation system, and using an iterative nonlinear optimization
algorithm to minimize the difference between the reconstructed and measured image stack. By assuming that
the position and shape of the cells do not change significantly between two time instances, we can optimize
these parameters using the previous time instance in a Bayesian estimation approach. The 3D reconstruction
algorithm settings, such as focal sampling distance, and PSF, were calibrated using latex spheres of known size
and refractive index. From the residual between reconstructed and measured image intensities, we computed
a peak signal-to-noise ratio (PSNR) of 28 dB for the sphere stack. A biological specimen analysis was done on
an AHP cell, for which the reconstruction PSNR was likewise 28 dB. The cell was immuno-histochemically stained and
scanned in a confocal microscope, in order to compare our cell model to a ground truth. After convergence the
modelled cell volume had an error of less than one percent.
Multi-temporal earth-observation imagery is now available at sub-meter resolution and has been found very useful for
performing quick damage detection for urban areas affected by large-scale disasters. The detection of structural damage
using images taken before and after disaster events is usually modeled as a change detection problem. In this paper,
we propose a new perspective for performing change detection, where dissimilarity measures are used to extract urban
structural damage. First, image gradient magnitudes and spatial variances are used as a means to capture urban structural
features. Subsequently, a family of distribution dissimilarity measures, including Euclidean distance, cosine
dissimilarity, Jeffrey divergence, and Bhattacharyya distance, is used to extract structural damage. We particularly
focus on evaluating the performance of these dissimilarity-based change detection methods within a framework of
pattern classification and cross-validation, using a pair of bi-temporal satellite images captured before and after a major earthquake in Bam,
Iran. The paper concludes that the proposed change detection methods for urban structural damage detection, which
are conceptually simple and computationally efficient, outperform the traditional correlation analysis in terms of both
classification accuracy and tolerance to local alignment errors.
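The four dissimilarity measures can be written down directly for a pair of normalized histograms. Here the Jeffrey divergence follows the symmetrized form common in the image-retrieval literature, and the `eps` guard against log(0) is an implementation assumption.

```python
import math

def dissimilarities(p, q, eps=1e-12):
    """Euclidean distance, cosine dissimilarity, Jeffrey divergence, and
    Bhattacharyya distance between two normalized histograms p and q."""
    euclid = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    cosine = 1.0 - dot / norm
    jeffrey = sum(a * math.log((a + eps) / (m + eps))
                  + b * math.log((b + eps) / (m + eps))
                  for a, b in zip(p, q) for m in [(a + b) / 2])
    bhattacharyya = -math.log(sum(math.sqrt(a * b) for a, b in zip(p, q)) + eps)
    return euclid, cosine, jeffrey, bhattacharyya

same = dissimilarities([0.25] * 4, [0.25] * 4)     # identical histograms
diff = dissimilarities([1.0, 0.0, 0.0, 0.0],        # disjoint histograms
                       [0.0, 0.0, 0.0, 1.0])
```

All four vanish for identical histograms and grow as the pre- and post-event feature distributions diverge, which is what makes them usable as per-pixel change indicators.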
Sparse object supports are encountered in many imaging problems. For such sparse objects, recent compressed
sensing theory tells us that accurate reconstruction is possible even from a number of measurements drastically
smaller than the Nyquist sampling limit, by solving an L1 minimization problem.
This paper applies compressed sensing theory to cryo-electron microscopy (cryo-EM) single particle
reconstruction of virus particles. Cryo-EM single particle reconstruction is well suited to compressed
sensing for two reasons: 1) in some cases, due to the difficulty of sample collection, each
experiment yields micrographs with only a limited number of virus samples, providing undersampled projection
data; and 2) the nucleic acid of a virion is enclosed within a capsid composed of a few proteins, so the support
of the capsid in 3-D real space is quite sparse. To minimize the L1 cost function derived from compressed
sensing, we develop a novel L1 minimization method based on sliding mode control theory. Experimental
results using synthetic and real virus data confirm that our algorithm provides superior reconstructions of
3-D viral structures compared to conventional reconstruction algorithms.
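The L1-regularized recovery problem the abstract refers to, min_x 0.5||Ax − y||² + λ||x||₁, can be sketched with a generic solver. The following uses ISTA (iterative shrinkage-thresholding), a standard method, not the paper's sliding-mode approach; the toy measurement matrix and sparse signal are assumptions for illustration:

```python
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    """Iterative shrinkage-thresholding (ISTA) for
    min_x 0.5*||A x - y||^2 + lam*||x||_1.
    A generic L1 solver, not the paper's sliding-mode method."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                # gradient of the data term
        x = x - step * g
        # soft-thresholding = proximal operator of the L1 penalty
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x

# Toy undersampled recovery: 40 measurements of a 100-dim, 5-sparse signal.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 3.0 * rng.normal(size=5)
y = A @ x_true
x_hat = ista(A, y)
```

The sparse signal here plays the role of the sparse capsid support: far fewer measurements than unknowns still suffice for accurate recovery.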
Three-dimensional (3D) microscopic imaging techniques such as confocal microscopy have become a common tool for
measuring cellular structures. While volume visualization has advanced to a sophisticated level in medical
applications, far fewer studies have addressed data acquired by 3D microscopic imaging techniques. To
optimize the visualization of such data, it is important to consider its characteristics, such as the thinness of the data
volume. It is also attractive to apply modern GPU (graphics processing unit) technology to interactive volume rendering
of the data. In this paper, we discuss several texture-based techniques for visualizing confocal microscopy data that
account for these characteristics and exploit GPU support. One simple technique generates a single set of 2D textures
along the axial direction of image acquisition. An improved technique uses three sets of 2D textures along the three
principal directions and creates the rendered image as a weighted sum of the images generated by blending the
individual texture sets. In addition, we propose a new stencil-based approach in which texture blending is controlled by
a stencil: given the viewing condition, a texel is drawn only when its projection onto the image plane falls inside the
stencil area. Finally, we explore the use of multiple-channel datasets for flexible classification of objects. These studies
help optimize the visualization of 3D microscopic imaging data.
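The three-stack weighted-sum idea can be sketched on the CPU with NumPy. This is a simplification under stated assumptions: a cubic volume (so the three composites share a shape), no resampling into screen space, and ad hoc view weights; a real GPU renderer would blend textured slice polygons instead:

```python
import numpy as np

def composite_stack(vol, alpha, axis):
    """Back-to-front alpha compositing of the 2D slices along one axis,
    mimicking a stack of axis-aligned 2D textures (sketch only)."""
    stack = np.moveaxis(vol, axis, 0)
    acc = np.zeros(stack.shape[1:])
    for slc in stack:                        # back to front
        acc = slc * alpha + acc * (1.0 - alpha)
    return acc

def render(vol, view_dir, alpha=0.1):
    """Blend the three principal-axis composites with weights given by the
    view direction's alignment with each axis. A cubic volume is assumed
    so the composites share a shape; a real renderer would resample each
    stack into screen space rather than summing slice-plane images."""
    w = np.abs(np.asarray(view_dir, dtype=float))
    w = w / w.sum()
    return sum(w[ax] * composite_stack(vol, alpha, ax) for ax in range(3))

# Example: an 8x8x8 volume with a bright center region.
vol = np.zeros((8, 8, 8))
vol[3:5, 3:5, 3:5] = 1.0
img = render(vol, view_dir=(0.0, 0.3, 1.0))
```

Weighting by view-direction alignment reduces the popping artifacts that occur when a single axis-aligned stack is viewed edge-on.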
Automatic segmentation is an essential problem in biomedical imaging, and automatically segmenting biomedical
images with complex structures and compositions remains an open problem. This paper proposes a novel algorithm
called Gradient-Intensity Clusters and Expanding Boundaries (GICEB). The algorithm addresses the problem by
jointly considering image intensity, gradient, and spatial coherence in the image space. The solution combines a
two-dimensional histogram, domain connectivity in the image space, and segment region growing. The algorithm has
been tested and evaluated on real images.
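The first stage of such an approach, building the two-dimensional gradient-intensity histogram in which clusters are sought, can be sketched as follows. This is a hypothetical re-implementation of that stage only, with a synthetic test image; the clustering and boundary-expansion stages of GICEB are not reproduced:

```python
import numpy as np

def gradient_intensity_histogram(img, bins=32):
    """2D histogram over (intensity, gradient magnitude): the feature
    space in which GICEB-style methods look for clusters. Sketch of the
    histogram stage only, not the full algorithm."""
    gy, gx = np.gradient(img.astype(float))
    gmag = np.hypot(gx, gy)                  # gradient magnitude per pixel
    hist, i_edges, g_edges = np.histogram2d(
        img.ravel(), gmag.ravel(), bins=bins)
    return hist, i_edges, g_edges

# Synthetic image: a bright square on a dark background yields two
# low-gradient clusters (interior, background) plus a high-gradient edge band.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
hist, i_edges, g_edges = gradient_intensity_histogram(img, bins=16)
```

Homogeneous regions pile up in low-gradient histogram bins while boundary pixels spread into high-gradient bins, which is what makes this feature space useful for separating region interiors from edges before region growing.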