This PDF file contains the front matter associated with SPIE
Proceedings Volume 6974, including the Title Page, Copyright
information, Table of Contents, Introduction (if any), and the
Conference Committee listing.
We developed a simple and fast lookup-table-based method to derive and apply natural daylight colors to multiband night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble those in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band real-time night vision systems. One system provides co-aligned visual and near-infrared bands from two image intensifiers; the other provides co-aligned images from a digital image intensifier and an uncooled long-wave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a real-time lookup-table transform. The resulting colorized video streams can be displayed in real time on head-mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications such as surveillance, navigation and target detection.
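The lookup-table color transform described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the quantization depth, and the toy data are assumptions made for the example.

```python
def build_color_lut(samples, levels=4):
    """Average the reference RGB color observed for each quantized
    (band1, band2) intensity pair in the training samples."""
    sums, counts = {}, {}
    for (b1, b2), rgb in samples:
        key = (b1 * levels // 256, b2 * levels // 256)
        acc = sums.setdefault(key, [0, 0, 0])
        for i in range(3):
            acc[i] += rgb[i]
        counts[key] = counts.get(key, 0) + 1
    return {k: tuple(v // counts[k] for v in vals) for k, vals in sums.items()}

def apply_color_lut(lut, image, levels=4, default=(0, 0, 0)):
    """Colorize a dual-band image; each pixel is a (band1, band2) pair."""
    return [[lut.get((b1 * levels // 256, b2 * levels // 256), default)
             for (b1, b2) in row] for row in image]
```

Because colorization reduces to one table lookup per pixel, it runs comfortably in real time, and the output color depends only on the band intensities, which is why object colors stay stable under panning and are independent of scene content.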
We present a fast and efficient method to derive and apply natural colors to nighttime imagery from multiband sensors.
The color mapping is derived from the combination of a multiband image and a corresponding natural color reference
image. The mapping optimizes the match between the multiband image and the reference image, and yields a night-vision image with colors similar to those of the daytime image. The mapping procedure is simple and fast. Once it has been derived, the color mapping can be deployed in real time. Different color schemes can be used, tailored to the environment
and the application. The expectation is that by displaying nighttime imagery in natural colors human observers will be
able to interpret the imagery better and faster, thereby improving situational awareness and reducing reaction times.
Many image fusion systems involving passive sensors require the accurate registration of the sensor data prior to
performing fusion. Since depth information is not readily available in such systems, all registration algorithms are
intrinsically approximations based upon various assumptions about the depth field. Although often overlooked, many
registration algorithms can break down in certain situations and this may adversely affect the image fusion performance.
In this paper, we discuss a framework for quantifying the accuracy and robustness of image registration algorithms
which allows a more precise understanding of their shortcomings. In addition, some novel algorithms have been
investigated that overcome some of these limitations. A second aspect of this work has considered the treatment of
images from multiple sensors whose angular and spatial separation is large and where conventional registration
algorithms break down (typically greater than a few degrees of separation). A range of novel approaches is reported
which exploit the use of parallax to estimate depth information and reconstruct a geometrical model of the scene. The
imagery can then be combined with this geometrical model to render a variety of useful representations of the data.
These techniques (which we term Volume Registration) show great promise as a means of gathering and presenting 3D
and 4D scene information for both military and civilian applications.
An object-image metric is an extension of standard metrics in that it is constructed for matching and comparing
configurations of object features to configurations of image features. For the generalized weak perspective camera,
it is invariant to any affine transformation of the object or the image. Recent research in the exploitation of the
object-image metric suggests new approaches to Automatic Target Recognition (ATR). This paper explores the
object-image metric and its limitations. Through a series of experiments, we specifically seek to understand how
the object-image metric could be applied to the image registration problem, an enabling technology for ATR.
This contribution presents a fusion method for spectral series with the main purpose of obtaining 3D
information. The image series to be fused are combined stereo and spectral series acquired with a camera array. Therefore, in order to register them, features that are invariant with respect to the varying
gray values in the spectral images are extracted. The proposed approach is region based and uses
characteristics like size, position and shape for registration. The regions are identified using the watershed
transformation. The fusion problem is modeled using energy functionals that are to be optimized. They
take into consideration the size, position, shape and correlation of the regions. Using the implemented
algorithm, several scenes have been reconstructed. The experimental results show that the proposed
method delivers reliable and accurate dense depth maps of combined stereo and spectral series.
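The kind of gray-value-invariant region characteristics used for registration here (size, position, shape) can be illustrated with a small descriptor. The feature names and the bounding-box fill ratio as a shape measure are our own simplification, not the paper's energy functionals.

```python
def region_descriptor(mask_pixels):
    """Describe a segmented region by features that do not depend on the
    gray values of the spectral channel: pixel count (size), centroid
    (position), and bounding-box fill ratio (a crude shape measure)."""
    n = len(mask_pixels)
    rs = [r for r, c in mask_pixels]
    cs = [c for r, c in mask_pixels]
    box = (max(rs) - min(rs) + 1) * (max(cs) - min(cs) + 1)
    return {"size": n,
            "centroid": (sum(rs) / n, sum(cs) / n),
            "fill": n / box}
```

Matching such descriptors across channels gives correspondences even when the intensities of the same surface differ from one spectral band to the next.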
There are over 250 image steganography methods available on the Internet. In digital image steganalysis an analyst has
three goals, first determine if an embedded message exists, next determine the embedding method used to create the
stego image, and finally extract the hidden message. This paper addresses the second goal, that is, to
identify the embedding technique used to create the steganography image. Several detection systems currently exist, so
the identification problem becomes one of determining which detection system has correctly identified the embedding
method. In this work, the individual detection systems are fused using boosting. Boosting is a powerful technique for
combining an ensemble of base classifiers to produce a form of committee with improved performance over any of the
single classifiers in the ensemble. The results in this paper show that boosting takes advantage of the individual strengths
of each detection system, and classification performance is increased by 10%.
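The boosting scheme can be sketched as an AdaBoost-style committee over the prebuilt detection systems. This is a generic sketch under the assumption of two-class labels in {-1, +1}; the detectors, data, and round count are illustrative.

```python
import math

def boost_detectors(detectors, X, y, rounds=5):
    """AdaBoost-style fusion: each round reweights the training samples
    and keeps the detector with the lowest weighted error."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = min(detectors,
                   key=lambda d: sum(wi for wi, xi, yi in zip(w, X, y)
                                     if d(xi) != yi))
        err = sum(wi for wi, xi, yi in zip(w, X, y) if best(xi) != yi)
        err = min(max(err, 1e-10), 1.0 - 1e-10)  # guard against log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, best))
        # upweight samples the chosen detector got wrong, then normalize
        w = [wi * math.exp(-alpha * yi * best(xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def classify(ensemble, x):
    """Weighted committee vote of the boosted detectors."""
    return 1 if sum(a * d(x) for a, d in ensemble) >= 0 else -1
```

The committee's weighted vote is what allows the fused system to outperform any single detection system in the ensemble.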
The ability of a tracker to isolate the foreground target from the background of an image is crucially dependent on the set
of features selected for tracking. Collins & Liu [2] propose an on-line, adaptive approach to selecting the set of features
based on the insight that the set of features that best discriminate between target and background classes is the best set to
use for tracking. In previous work [10], we have proposed an approach based on Combinatorial Fusion Analysis for
selecting features for Real-Time tracking. We discuss the relative merits of the two methods and motivate their
combination to produce an improved tracking system. We show several results from a difficult tracking sequence with
human targets to demonstrate the effectiveness of the combined system.
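The discriminability criterion behind on-line feature selection can be illustrated with a simple variance-ratio score over foreground and background samples. This sketch is a simplification of the published methods; the function names and toy values are our own.

```python
def variance_ratio(fg_vals, bg_vals):
    """Ratio of total (pooled) variance to within-class variance of a
    feature; a higher score means better target/background separation."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    total = var(fg_vals + bg_vals)
    within = var(fg_vals) + var(bg_vals)
    return total / max(within, 1e-12)

def rank_features(feature_samples):
    """feature_samples: {name: (fg_vals, bg_vals)}. Returns feature names
    sorted by descending discriminability -- the on-line selection step."""
    return sorted(feature_samples,
                  key=lambda f: variance_ratio(*feature_samples[f]),
                  reverse=True)
```

Re-ranking features in this way every few frames lets the tracker adapt as the appearance of target and background changes.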
The registration of images from cameras of different types and/or at different locations is a well-researched topic. It is of great interest for both military and civilian applications. Researchers have developed pixel-level registration techniques that exploit intensity correlations to spatially align pixels from the two cameras. This is a computationally expensive approach, as it requires pixel-level operations on the images, which makes it difficult to register the images in real time. Furthermore, images from different types of cameras may have different intensity distributions for corresponding pixels, which degrades the registration accuracy. In this paper we propose to use a Multilayer Perceptron (MLP) neural network to solve the image registration problem. The experimental results show that the proposed method is well suited for registration in both speed and accuracy.
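A minimal sketch of the idea: a tiny two-input, two-output MLP trained on known coordinate correspondences learns the mapping from one camera's pixel coordinates to the other's, after which registration is a cheap forward pass. The architecture, learning rate, and training data below are illustrative assumptions, not the paper's configuration.

```python
import math, random

def train_registration_mlp(pairs, hidden=8, lr=0.05, epochs=2000):
    """Fit a 2-in/2-out MLP (one tanh hidden layer, linear output) that
    maps a pixel coordinate in camera A to the matching coordinate in
    camera B, via plain stochastic gradient descent."""
    rnd = random.Random(0)
    W1 = [[rnd.uniform(-0.5, 0.5), rnd.uniform(-0.5, 0.5)]
          for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [[rnd.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(2)]
    b2 = [0.0, 0.0]
    for _ in range(epochs):
        for (x, y), target in pairs:
            h = [math.tanh(W1[j][0] * x + W1[j][1] * y + b1[j])
                 for j in range(hidden)]
            out = [sum(W2[k][j] * h[j] for j in range(hidden)) + b2[k]
                   for k in range(2)]
            err = [out[k] - target[k] for k in range(2)]
            # hidden-layer gradient uses the pre-update output weights
            gh = [sum(err[k] * W2[k][j] for k in range(2)) * (1 - h[j] ** 2)
                  for j in range(hidden)]
            for k in range(2):
                for j in range(hidden):
                    W2[k][j] -= lr * err[k] * h[j]
                b2[k] -= lr * err[k]
            for j in range(hidden):
                W1[j][0] -= lr * gh[j] * x
                W1[j][1] -= lr * gh[j] * y
                b1[j] -= lr * gh[j]
    def register(x, y):
        h = [math.tanh(W1[j][0] * x + W1[j][1] * y + b1[j])
             for j in range(hidden)]
        return tuple(sum(W2[k][j] * h[j] for j in range(hidden)) + b2[k]
                     for k in range(2))
    return register
```

Once trained, registering a pixel costs only a handful of multiplications, avoiding the per-pixel intensity-correlation search of conventional techniques.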
Large scale sensor networks composed of many low-cost small sensors networked together with a small number of high
fidelity position sensors can provide a robust, fast and accurate air defense and warning system. The team has been
developing simulations of such large networks, and is now adding terrain data in an effort to provide more realistic
analysis of the approach. In this work, a heterogeneous sensor network simulation system with integrated terrain data for
real-time target detection in a three-dimensional environment is presented. The sensor network can be composed of large
numbers of low fidelity binary and bearing-only sensors, and small numbers of high fidelity position sensors, such as
radars. The binary and bearing-only sensors are randomly distributed over a large geographic region, while the position
sensors are distributed evenly. The elevations of the sensors are determined through the use of DTED Level 0 dataset.
The targets are located through fusing measurement information from all types of sensors modeled by the simulation.
The network simulation utilizes the same search-based optimization algorithm as in our previous two-dimensional sensor
network simulation with some significant modifications. The fusion algorithm is parallelized using a spatial decomposition approach: the entire surveillance area is divided into small regions and each region is assigned to one compute node. Each node processes sensor measurements and terrain data only for its assigned sub-region. A master
process combines the information from all the compute nodes to get the overall network state. The simulation results
have indicated that the distributed fusion algorithm is efficient enough so that an optimal solution can be reached before
the arrival of the next sensor data with a reasonable time interval, and real-time target detection can be achieved. The
simulation was performed on a Linux cluster with communication between nodes facilitated by the Message Passing
Interface (MPI). The input target information for the simulations is a set of modified target track data generated from a
realistic theater level air combat simulation. The probability of detection (POD), false alarm rate (FAR), and average
deviation (AVD) are used in evaluating the network performance.
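The spatial decomposition step can be sketched as a mapping from measurement coordinates to the compute node owning that sub-region, mimicking what the MPI scatter of measurements achieves. Function names, the grid layout, and the toy data are illustrative assumptions.

```python
def assign_region(x, y, area, grid):
    """Map a measurement at (x, y) to the node owning that sub-region.
    area = (x0, y0, x1, y1); grid = (rows, cols); nodes are numbered
    row-major."""
    (x0, y0, x1, y1), (rows, cols) = area, grid
    col = min(int((x - x0) / (x1 - x0) * cols), cols - 1)
    row = min(int((y - y0) / (y1 - y0) * rows), rows - 1)
    return row * cols + col

def scatter_measurements(measurements, area, grid):
    """Bucket measurements per compute node so each node fuses only its
    own sub-region; a master process later merges the per-node results."""
    buckets = {}
    for m in measurements:
        node = assign_region(m[0], m[1], area, grid)
        buckets.setdefault(node, []).append(m)
    return buckets
```

Because each node touches only its own bucket of measurements and terrain data, the fusion workload scales with the sub-region size rather than the whole surveillance area, which is what makes the real-time deadline reachable.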
In this paper we present the results of applying a general purpose feature combination framework for tracking
to the specific task of tracking vehicles in UAV data sets. In the fusion framework used (previously presented elsewhere [1]), vehicles' pixel-based features from multiple channels, specifically RGB and thermal IR, are split across
separate individual spatiogram trackers. The use of spatiograms allows embedding of some spatial information
into the models whilst also avoiding the exponential increase in computational load and memory requirements
associated with the more commonly used histogram. This tracking framework is embedded in a complete system
for detecting and tracking vehicles. The system first carries out pre-processing to ensure spatially and temporally
aligned visible spectrum and IR data prior to tracking. Vehicle detection in the initial two frames is achieved
by first compensating for camera motion, followed by frame differencing and post-processing (thresholding and
size filtering) to identify vehicle regions. Each vehicle is then described by a bounding box and this is used to
generate a set of spatiograms for each of the available data channels. The detected vehicle is then tracked using
the spatiogram tracker framework. Results of experiments on a variety of UAV data sets indicate the promising
performance of the overall system, even in the presence of significant illumination variation, partial and full
occlusions and significant camera motion and focus change. Results are particularly encouraging given that we
do not periodically re-initialise the detection phase and this points to the robustness of the tracking framework.
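A spatiogram, as used above, is a histogram whose bins additionally record where their pixels lie. The sketch below computes a second-order spatiogram (per-bin count, spatial mean, and per-axis variance) for a grayscale patch; the function name and data layout are our own illustration.

```python
def spatiogram(image, bins=8):
    """Second-order spatiogram of a grayscale patch: for each intensity
    bin, the pixel count plus the spatial mean and per-axis variance of
    the pixels falling in that bin."""
    stats = {}
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            b = v * bins // 256
            n, sr, sc, srr, scc = stats.get(b, (0, 0.0, 0.0, 0.0, 0.0))
            stats[b] = (n + 1, sr + r, sc + c, srr + r * r, scc + c * c)
    out = {}
    for b, (n, sr, sc, srr, scc) in stats.items():
        mr, mc = sr / n, sc / n
        out[b] = {"count": n,
                  "mean": (mr, mc),
                  "var": (srr / n - mr * mr, scc / n - mc * mc)}
    return out
```

The extra cost over a plain histogram is a few running sums per bin, which is why spatiograms add spatial information without the exponential blow-up of a joint color-position histogram.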
We propose a highly adaptive and auto-configurable, multi-layer network architecture for distributed information
fusion to address large volume surveillance challenges, assuming a multitude of different sensor types on multiple
mobile platforms for intelligence, surveillance and reconnaissance. Our focus is on network enabled operations
to efficiently manage and improve employment of a set of mobile resources, their information fusion engines
and networking capabilities under dynamically changing and essentially unpredictable conditions. A high-level
model of the proposed architecture is formally described in abstract functional and operational terms based on
the Abstract State Machine formalism. This description of the underlying design concepts provides a concise and
precise blueprint for reasoning about key system attributes at an intuitive level of understanding.
We discuss the problem of making sense of very large amounts of multi-sensor data in terms of information fusion and
situation awareness. Our focus is on the application layer of the GIG in support of ISR analysis, sometimes referred to as
level 2 fusion or cognitive fusion. We discuss an approach where the key ontological constructs are events, event
correlation, situations, and situation assessment. We extend classic Belief-Desire-Intention (BDI) agents with situation
awareness, as a result of which the actions of BDI agents are triggered by situations rather than single events. We discuss
our reasoning mechanism against the background of ontology and knowledge base development and provide a simple
illustration of the concept towards opportunistic reasoning.
A PRECARN partnership project, called CanCoastWatch (CCW), is bringing together a team of researchers from
industry, government, and academia to create an advanced simulation test bed for evaluating the
effectiveness of Network Enabled Operations in a Coastal Wide Area Surveillance situation. The test bed allows
experimenting with higher-level distributed information fusion, dynamic resource management and configuration
management given multiple constraints on the resources and their communications networks.
The test bed provides general services that are useful for testing many fusion applications. This includes a multi-layer
plug-and-play architecture, and a general multi-agent framework based on John Boyd's OODA loop.
Collecting data in any domain is often only the first step in creating intelligence. This is the beginning of a process which
includes fusing that data with other data sources to create something more than the individual data elements. This has been
historically difficult because collection sensors are optimized for their respective conventional domains. We have developed a
multi-sensor system combining an electro-optical camera and infrared camera, using a revolutionary data handling and
manipulating process to fuse multiple types of data to create intelligence.
To address the data fusion needs of such a system, we have developed an internationally recognized, open data fusion language
called Transducer Markup Language (TML). TML was used in conjunction with the multi-sensor system to describe data from
the EO and IR cameras and support sensors all within a self-contained mobile delivery system during military exercises in July
2007. The TML data was collected and disseminated to multiple users within a Service Oriented Architecture (SOA), after
which it was further distributed using Cursor-On-Target (COT) messages over a second network. This demonstrated both data
fusion of multiple sensor data sources and integration into existing data distribution infrastructures.
Key innovations of this demonstration included the use of multiple sensors (EO and IR cameras) and a single sensor data exchange language to capture and describe sensor output.
Detection and tracking of sophisticated targets, involving increased resolution and the use of synthetic aperture transmitter windows, require new and innovative approaches to active sensor technologies. The bistatic technique for target tracking is extended to form a multistatic radar system using multiple, separate transmitters and receivers that are sparsely located in a region. The fusion of the received data through a non-linear Kalman filter technique is discussed to predict tracks from observations of the multistatic system. Target positions and velocities are estimated, and the error is shown to converge to zero. The Munkres algorithm is utilized for data and track association to improve error minimization.
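The data-association step can be illustrated as a minimum-cost assignment of tracks to detections. For brevity the sketch below brute-forces the optimum over permutations; the Munkres (Hungarian) algorithm finds the same assignment in polynomial time. The cost function and toy coordinates are illustrative.

```python
from itertools import permutations

def associate(tracks, detections):
    """Optimal track-to-detection association minimizing total squared
    distance. Brute force here; Munkres gives the same result in O(n^3).
    Assumes len(detections) >= len(tracks)."""
    def cost(t, d):
        return (t[0] - d[0]) ** 2 + (t[1] - d[1]) ** 2
    best = min(permutations(range(len(detections)), len(tracks)),
               key=lambda p: sum(cost(tracks[i], detections[j])
                                 for i, j in enumerate(p)))
    return list(enumerate(best))
```

In the multistatic setting, the assignment decides which fused measurement updates which Kalman filter track, so a wrong association directly injects error into the state estimate.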
We propose using the smart antenna principle as the basis of a new design for smart optical receivers in LADAR
systems. This paper demonstrates the feasibility of designing a LADAR system with a receiver consisting of an array of
photodetectors, which leads to field-of-view enhancement and beamforming by fusing streams of video information
received from the detectors. As a proof of concept, we demonstrate this design by fusing several video information
streams from different fields of view using our Mathworks Simulink® model. The fusion algorithm uses the fuzzy logic
maximum operation on the data output from the cameras.
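The fuzzy-logic maximum fusion amounts to a pointwise maximum over the co-registered detector streams, treating each pixel intensity as a fuzzy membership value and taking their fuzzy OR. The sketch below shows one frame's worth; the data layout is an assumption for illustration.

```python
def fuse_frames(frames):
    """Fuzzy-maximum fusion: the fused pixel is the pointwise max of the
    co-registered frames from the different detectors (fuzzy OR)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[max(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]
```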
Maritime surveillance of coastal regions requires operational staff to integrate a large amount of information from a
variety of military and civilian sources. The diverse nature of the information sources makes complete automation
difficult. The volume of vessels tracked and the number of sources makes it difficult for the limited operation centre staff
to fuse all the information manually within a reasonable timeframe.
In this paper, a conceptual decision space is proposed to provide a framework for automating the process of operators
integrating the sources needed to maintain Maritime Domain Awareness. The decision space contains all potential pairs
of ship tracks that are candidates for fusion. The location of the candidate pairs in this defined space depends on the
value of the parameters used to make a decision. In the application presented, three independent parameters are used: the
source detection efficiency, the geo-feasibility, and the track quality. One of three decisions is applied to each candidate
track pair based on these three parameters:
1. to accept the fusion, in which case tracks are fused in one track,
2. to reject the fusion, in which case the candidate track pair is removed from the list of potential fusions, and
3. to defer the fusion, in which case no fusion occurs but the candidate track pair remains in the list of
potential fusions until sufficient information is provided.
This paper demonstrates in an operational setting how the proposed conceptual space is used to optimize the different
thresholds for automatic fusion decision while minimizing the list of unresolved cases when the decision is left to the
operator.
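The three-way decision over the (detection efficiency, geo-feasibility, track quality) space can be sketched as a thresholded rule; the threshold values below are illustrative assumptions, since tuning them is precisely what the paper's optimization addresses.

```python
def fusion_decision(detection_eff, geo_feasibility, track_quality,
                    accept=0.8, reject=0.3):
    """Place a candidate track pair in the decision space: accept when
    all three parameters clear the upper threshold, reject when any
    falls below the lower one, otherwise defer to the operator.
    Threshold values are illustrative, not operational settings."""
    params = (detection_eff, geo_feasibility, track_quality)
    if all(p >= accept for p in params):
        return "accept"
    if any(p <= reject for p in params):
        return "reject"
    return "defer"
```

Raising `accept` or lowering `reject` shrinks the automated regions and grows the deferred list, which is the trade-off the thresholds are optimized against.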
This paper proposes a Markov (stochastic) game theoretic level-3 data fusion approach for defensive counterspace.
Based on the Markov game theory and the advanced knowledge infrastructures for information fusion, the approach can
enhance threat detection, validation, and mitigation for future counterspace and space situational awareness (SSA)
operations. A Markov game is constructed to model the possible interactions between the dynamic and intelligent threats
and friendly satellites, and effects of various space weather conditions. To systematically solve the complicated Markov
game, a conversion from general Markov games into several Markov Decision Processes (MDPs) as well as some static
bi-matrix games is provided. The proposed Markov game model and innovative solution are demonstrated in a numerical
example.
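Once the Markov game has been converted into MDPs, each one can be solved by standard value iteration, as sketched below. The data structures and the toy two-state problem are illustrative, not the paper's counterspace model.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Solve an MDP by value iteration. P[s][a] is a list of
    (probability, next_state) pairs; R[s][a] is the immediate reward;
    gamma is the discount factor. Returns the optimal state values."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                     for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new
```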
This paper considers the applicability of algorithms, constraint solving and active structure across the spectrum of
complexity of information fusion applications. Information fusion is recast as a cognitive application using dynamic
structure building and constraint reasoning. The similarity between situation awareness and an undirected structure
responding to change is highlighted. The efficiency and speed of operation of cognitive information fusion are touched
on. A tsunami warning system provides an example which involves multiple threats and demonstrates the difference
between segmented algorithms making decisions without context, and the active use of knowledge.
This paper is about the fusion of multiple knowledge sources represented using default logic. More precisely, the focus
is on solving the problem that occurs when the standard-logic knowledge parts of the sources are contradictory, as default
theories trivialize in this case. To overcome this problem, several candidate policies are discussed. Among them, it is
shown that replacing each formula belonging to minimally unsatisfiable subformulas by a corresponding supernormal
default exhibits appealing features.
The US Military has been undergoing a radical transition from a traditional "platform-centric" force to one capable of
performing in a "Network-Centric" environment. This transformation will place all of the data needed to efficiently
meet tactical and strategic goals at the warfighter's fingertips. With access to this information, the challenge of fusing
data from across the battlespace into an operational picture for real-time Situational Awareness emerges. In such an
environment, centralized fusion approaches will have limited application due to the constraints of real-time
communications networks and computational resources. To overcome these limitations, we are developing a formalized
architecture for fusion and track adjudication that allows the distribution of fusion processes over a dynamically created
and managed information network. This network will support the incorporation and utilization of low level tracking
information within the Army Distributed Common Ground System (DCGS-A) or Future Combat System (FCS). The
framework is based on Bowman's Dual Node Network (DNN) architecture that utilizes a distributed network of
interlaced fusion and track adjudication nodes to build and maintain a globally consistent picture across all assets.
This paper proposes a novel adaptive learning method for data mining in support of decision-making systems. Due to the
inherent characteristics of information ambiguity/uncertainty, high dimensionality and noise in many homeland security and defense applications, such as surveillance, monitoring, net-centric battlefield, and others, it is critical to develop
autonomous learning methods to efficiently learn useful information from raw data to help the decision making process.
The proposed method is based on a dynamic learning principle in the feature spaces. Generally speaking, conventional
approaches of learning from high dimensional data sets include various feature extraction (principal component analysis,
wavelet transform, and others) and feature selection (embedded approach, wrapper approach, filter approach, and others)
methods. However, very limited understandings of adaptive learning from different feature spaces have been achieved.
We propose an integrative approach that takes advantage of feature selection and hypothesis ensemble techniques to
achieve our goal. Based on the training data distributions, a feature score function is used to provide a measurement of
the importance of different features for learning purpose. Then multiple hypotheses are iteratively developed in different
feature spaces according to their learning capabilities. Unlike the pre-set iteration steps in many of the existing ensemble
learning approaches, such as adaptive boosting (AdaBoost) method, the iterative learning process will automatically stop
when the intelligent system can not provide a better understanding than a random guess in that particular subset of
feature spaces. Finally, a voting algorithm is used to combine all the decisions from different hypotheses to provide the
final prediction results. Simulation analyses of the proposed method on classification of different US military aircraft
databases show the effectiveness of this method.
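The core loop described above can be sketched as follows: score features, train one weak hypothesis per feature subspace in score order, stop as soon as a hypothesis does no better than chance, and vote. The scoring function (class-mean separation), the decision-stump learner, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def feature_scores(X, y):
    """Score each feature by between-class mean separation over pooled std."""
    scores = []
    for j in range(X.shape[1]):
        xj = X[:, j]
        m0, m1 = xj[y == 0].mean(), xj[y == 1].mean()
        scores.append(abs(m0 - m1) / (xj.std() + 1e-12))
    return np.array(scores)

class Stump:
    """Decision stump on a single feature: a minimal weak hypothesis."""
    def fit(self, x, y):
        best = (0.0, x[0], 1)                 # (accuracy, threshold, sign)
        for thr in np.unique(x):
            for sign in (1, -1):
                pred = (sign * (x - thr) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best[0]:
                    best = (acc, thr, sign)
        self.acc, self.thr, self.sign = best
        return self

    def predict(self, x):
        return (self.sign * (x - self.thr) > 0).astype(int)

def adaptive_ensemble(X, y, max_rounds=10):
    order = np.argsort(feature_scores(X, y))[::-1]   # best features first
    members = []
    for j in order[:max_rounds]:
        h = Stump().fit(X[:, j], y)
        if h.acc <= 0.5:          # no better than a random guess: stop
            break
        members.append((j, h))
    return members

def vote(members, X):
    votes = np.stack([h.predict(X[:, j]) for j, h in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

The data-driven stopping rule (`h.acc <= 0.5`) is what distinguishes this loop from a fixed-round booster such as AdaBoost.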
McQ has developed a broad-based capability to fuse information from multiple sensors in a geographic area to build a
better understanding of the situation. This paper discusses the fusion architecture McQ has implemented to use many
sensors and share their information. This multi-sensor fusion architecture includes data sharing and analysis at the
individual sensor, at communications nodes that connect many sensors, at the system server/user interface, and
across multi-source information available through networked services. McQ presents a data fusion architecture that
integrates a "Feature Information Base" (FIB) with McQ's well-known Common Data Interchange Format (CDIF) data
structure. The distributed multi-sensor fusion provides enhanced situational awareness for the user.
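A toy sketch of the layered sharing described above: detections are reduced to a feature report at the sensor, and reports from many sensors are aggregated at a communications node. The record layout and field names below are hypothetical illustrations, not McQ's actual CDIF or FIB formats.

```python
def sensor_fuse(detections):
    """Sensor-level fusion: reduce raw detection levels to one report."""
    return {"count": len(detections),
            "mean_level": sum(detections) / len(detections)}

def node_fuse(reports):
    """Node-level fusion: aggregate feature reports from many sensors."""
    total = sum(r["count"] for r in reports)
    level = sum(r["mean_level"] * r["count"] for r in reports) / total
    return {"sensors": len(reports), "count": total, "mean_level": level}
```

The same aggregation pattern would repeat at the server/user-interface tier, each layer summarizing the one below it.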
In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ASDA) is proposed
for real-time three-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity
estimation by selecting an adaptive disparity search range, and it also improves the quality of the 3D
imagery. That is, by adaptively predicting the mutual correlation between the stereo image pair with the proposed
algorithm, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and the
predicted image can be effectively reconstructed from a reference image and the disparity vectors. Experiments
on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the
reconstructed image by about 4.8 dB and reduces its synthesis time by about 7.02 s compared with conventional algorithms.
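The speed-up from an adaptive search range can be illustrated with simple block matching: each block's disparity search is centred on its neighbour's disparity and widened only when the narrow match is poor. Block size, window widths, and the SAD fallback threshold below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def block_disparity(left, right, block=8, narrow=2, wide=16, bad_sad=1e3):
    """Per-block disparity of `left` vs `right` with adaptive search range."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        prev_d = 0                       # predict from the previous block
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y+block, x:x+block].astype(float)

            def search(lo, hi):
                best_d, best_sad = 0, np.inf
                for d in range(lo, hi + 1):
                    if 0 <= x - d and x - d + block <= w:
                        cand = right[y:y+block, x-d:x-d+block].astype(float)
                        sad = np.abs(ref - cand).sum()
                        if sad < best_sad:
                            best_d, best_sad = d, sad
                return best_d, best_sad

            d, sad = search(max(0, prev_d - narrow), prev_d + narrow)
            if sad > bad_sad:            # poor match: fall back to wide search
                d, sad = search(0, wide)
            disp[by, bx] = prev_d = d
    return disp
```

Most blocks are resolved inside the narrow window (a few SAD evaluations instead of `wide + 1`), which is the source of the reduced synthesis time.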
Recently, multi-sensor image fusion systems and related applications have been widely investigated. In an image fusion
system, robust and accurate multi-modal image registration is essential. In the conventional method, the registration
process starts with manually selected corresponding pairs in the two sensor images. Using these corresponding pairs, a
transform matrix is initialized and then refined through an optimization process. In this paper, we propose a new method
for automatically extracting such corresponding pairs. The Harris corner detector is employed to extract feature points in the
EO and IR images individually. Patches around the detected feature points are matched with a probabilistic criterion, mutual information
(MI), a preferred measure for image registration because of its robust and accurate performance. Simulation
results show that the proposed scheme has low time complexity and extracts corresponding pairs well.
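The matching criterion above can be sketched directly: mutual information between two patches computed from their joint grey-level histogram. The bin count and intensity range are assumptions for illustration; a real EO/IR matcher would evaluate this score over candidate patch pairs around Harris corners.

```python
import numpy as np

def mutual_information(p, q, bins=16):
    """MI between two equally sized 8-bit patches via the joint histogram."""
    joint, _, _ = np.histogram2d(p.ravel(), q.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of p
    py = pxy.sum(axis=0, keepdims=True)          # marginal of q
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Unlike SAD or correlation, MI rewards any consistent intensity mapping between the patches, which is why it survives the different radiometry of EO and IR imagery.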
This paper demonstrates the relevance of neurophysiologically motivated neuron arrays with flexibly programmable functions
and operations, in which the required accuracy, the type of nonlinear transformation, and the learning scheme can be selected.
We consider neuron designs and simulation results for multichannel spatio-temporal algebraic accumulation and integration of
optical signals, and show the advantages for nonlinear transformation and summation-integration. The proposed circuits are
simple and can have intelligent properties such as learning and adaptation.
The integrator-neuron is based on CMOS current mirrors and comparators. Its performance: power consumption
100-500 μW; signal period 0.1-1 ms; input optical signal power 0.2-20 μW; time delays below 1 μs; number
of optical signals 2-10; integration time 10-100 signal periods; integration accuracy (error) about 1%.
Various modifications of the neuron-integrators, with improved performance and for different applications, are also
considered in the paper.
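As a behavioural, software-level sketch (not a circuit model) of the accumulate-and-compare idea: multichannel inputs are summed each signal period, integrated across periods, and a comparator fires when the accumulated value crosses a threshold. All constants and the firing convention are illustrative assumptions.

```python
def integrator_neuron(channels, periods, threshold):
    """channels: one list of per-period input levels per optical channel.
    Returns the 1-based period at which the comparator fires, or None."""
    acc = 0.0
    for t in range(periods):
        acc += sum(ch[t] for ch in channels)   # algebraic accumulation
        if acc >= threshold:                   # comparator output
            return t + 1
    return None
```

In the CMOS realization the accumulation is performed by current mirrors on analog photocurrents; this discrete-time model only mirrors the input-output behaviour.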