In this paper we discuss the utilization of Principal Component Analysis (PCA) with projection slice synthetic discriminant function (PSDF) filters to reduce a data set that represents images from different sensor systems, in order to extract relevant information and features from the image set. PCA helps to emphasize the differences in each of the training images in a given class. These differences are encoded into the PSDF filters. The PSDF filters provide a premise for data fusion by utilization of the projection-slice theorem (PST). The PSDF is implemented with a few training images generated from the PCA, containing relevant information from all of the training images. The data in the principal components used to represent the entire data set can be emphasized by conditioning the eigenvalues of the basis vectors used to corroborate important data packets in the entire data set. The method of data fusion and preferred data emphasis in conjunction with the PST is discussed, and the fused images are presented.
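The PCA reduction step described above can be sketched in a few lines. This is an illustrative example only (the image sizes, component count, and toy data are assumptions, and the PSDF filter construction itself is not shown): flatten the training images, center them, and take the leading right singular vectors as the principal components whose eigenvalues can then be conditioned.

```python
import numpy as np

def pca_basis(images, n_components=2):
    """Compute a PCA basis from a stack of training images.

    images: array of shape (n_images, H, W). Returns (mean, components,
    eigenvalues), where components has shape (n_components, H*W).
    """
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal directions directly.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvals = s ** 2 / (len(images) - 1)
    return mean, Vt[:n_components], eigvals[:n_components]

# Toy example: four noisy 8x8 "training images" of the same pattern.
rng = np.random.default_rng(0)
base = np.zeros((8, 8)); base[2:6, 2:6] = 1.0
imgs = np.stack([base + 0.05 * rng.standard_normal((8, 8)) for _ in range(4)])
mean, comps, eigvals = pca_basis(imgs, n_components=2)
print(comps.shape)  # (2, 64)
```

Conditioning the eigenvalues (e.g. rescaling `eigvals` before projecting) is then a direct way to emphasize or suppress individual components.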
Many modern imaging and surveillance systems contain more than one sensor. For example, most modern airborne imaging pods contain at least visible and infrared sensors. Often these systems have a single display that is only capable of showing data from either camera, and thereby fail to exploit the benefit of having simultaneous multi-spectral data available to the user. It can be advantageous to capture all spectral features within each image and to display a fused result rather than single band imagery. This paper discusses the key processes necessary for an image fusion system and then describes how they were implemented in a real-time, rugged hardware system. The problems of temporal and spatial misalignment of the sensors and the process of electronic image warping must be solved before the image data is fused. The techniques used to align the two inputs to the fusion system are described and a summary is given of our research into automatic alignment techniques. The benefits of different image fusion schemes are discussed and those that were implemented are described. The paper concludes with a summary of the real-time implementation of image alignment and image fusion by Octec and Waterfall Solutions and the problems that have been encountered and overcome.
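As a rough illustration of the electronic image warping step such a system needs, here is a minimal inverse-mapping affine warp (the nearest-neighbour sampling, frame sizes, and the visible/IR toy images are assumptions, not the real-time implementation described in the paper):

```python
import numpy as np

def warp_affine(img, A, b):
    """Inverse-mapping electronic warp: each output pixel (y, x) samples the
    input at A @ [y, x] + b (nearest neighbour, zero outside the frame)."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = A @ np.stack([ys.ravel(), xs.ravel()]) + b[:, None]
    sy, sx = np.rint(coords).astype(int)
    ok = (sy >= 0) & (sy < H) & (sx >= 0) & (sx < W)
    out = np.zeros_like(img)
    flat = out.ravel()                  # view into `out`
    flat[ok] = img[sy[ok], sx[ok]]
    return out

# Align an IR frame shifted by (2, 3) pixels relative to the visible frame.
vis = np.zeros((16, 16)); vis[4:8, 4:8] = 1.0
ir = np.roll(vis, (2, 3), axis=(0, 1))
aligned = warp_affine(ir, np.eye(2), np.array([2.0, 3.0]))
print(np.allclose(aligned, vis))  # True
```

A real system would use a subpixel interpolating warp and a full affine or projective model, but the inverse-mapping structure is the same.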
Information fusion is a rapidly developing research area aimed at creating methods and tools capable of augmenting security and defense systems with state-of-the-art computational power and intelligence. An important part of information fusion, image fusion serves as the basis for fully automatic object and target recognition. Image fusion maps images of the same scene received from different sensors into a common reference system. Using sensors of different types gives rise to the problem of finding a set of invariant features that help overcome the imagery differences caused by the different sensor types. The paper describes an image fusion method based on the combination of a hybrid evolutionary algorithm and the image local response. The latter is defined as an image transform R(V) that maps an image into itself after a geometric transformation A(V), defined by a parameter vector V, is applied to the image. The transform R(V) identifies the dynamic content of the image, i.e., the salient features that are most responsive to the geometric transformation A(V). Moreover, since R(V) maps the image into itself, the result of the mapping is largely invariant to the type of sensor used to obtain the image. Image fusion is stated as the global optimization problem of finding a proper transformation A(V) that minimizes the difference between the images subject to fusion. A hybrid evolutionary algorithm can be applied to solve the problem. Since the search for the optimal parameter vector V is conducted in the response space rather than in the actual image space, the differences in sensor types can be significantly alleviated.
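A toy version of this idea can be sketched as follows. Here the "response" is taken to be the gradient magnitude (an assumption; the paper's R(V) is defined differently), A(V) is restricted to integer translations, and the hybrid evolutionary search is replaced by exhaustive search for clarity:

```python
import numpy as np

def response(img):
    """Stand-in sensor-invariant 'response': gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def register_shift(ref, moving, max_shift=3):
    """Search over integer shifts V = (dy, dx) minimizing the difference
    between response images (in place of the hybrid evolutionary search)."""
    r_ref = response(ref)
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.mean((r_ref - response(shifted)) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

img = np.zeros((16, 16)); img[5:10, 5:10] = 1.0
moved = np.roll(img, (2, -1), axis=(0, 1))
print(register_shift(img, moved))  # (-2, 1): the inverse of the applied shift
```

The key point carried over from the paper is that the cost function compares responses, not raw pixel values, so two sensors with different intensity mappings can still be aligned.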
Comparative evaluation of fused images is a critical step in evaluating the relative performance of different image fusion algorithms. Human visual inspection is often used to assess the quality of fused images. In this paper, we propose some variants of a new image quality metric based on the human visual system (HVS). The proposed measures evaluate the quality of a fused image by comparing its visual differences with the source images and require no knowledge of the ground truth. First, the images are transformed to the frequency domain. Second, the difference between the images in the frequency domain is weighted with a human contrast sensitivity function (CSF). Finally, the quality of a fused image is obtained by computing the MSE of the weighted difference images obtained from the fused image and source images. Our experimental results show that these metrics are consistent with perceptually obtained results.
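The three steps of the metric can be sketched directly. The particular CSF shape below is a generic band-pass placeholder, not the function used in the paper:

```python
import numpy as np

def csf_weight(shape, peak=4.0):
    """Radially symmetric, band-pass CSF-style weight (illustrative only)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fx, fy) * min(shape)   # radial frequency in cycles/image
    return f * np.exp(-f / peak)        # low weight at DC and at high f

def csf_weighted_mse(fused, source):
    """Step 1: FFT; step 2: CSF-weight the difference; step 3: MSE."""
    diff = csf_weight(fused.shape) * (np.fft.fft2(fused) - np.fft.fft2(source))
    return float(np.mean(np.abs(diff) ** 2))

def fusion_quality(fused, sources):
    """Lower is better: mean weighted MSE against each source image."""
    return float(np.mean([csf_weighted_mse(fused, s) for s in sources]))

a = np.random.default_rng(2).random((32, 32))
print(fusion_quality(a, [a, a]))  # 0.0 for a perfect match
```

Because the comparison is against the source images themselves, no ground-truth fused reference is required, which is the property the paper emphasizes.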
High resolution and highly sensitive colour digital sensors are desired for many applications, including military and civilian missions. Due to the limitation of spectral bandwidth, the sensitivity of a digital colour sensor is usually three times lower than that of a digital panchromatic sensor with a spectral bandwidth of the entire visible range or a range from visible to near infrared. This paper introduces a conceptual architecture for producing a triple sensitive colour digital frame sensor. Automatic image fusion techniques are involved to integrate colour and panchromatic images to increase the sensitivity of the colour sensor. Available satellite colour and panchromatic images are tested to prove the concept. The test results demonstrate that the introduced architecture is promising for developing real triple sensitive colour digital sensors.
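One classical way to realize the colour/panchromatic integration described here is Brovey-style component substitution; the sketch below is a generic stand-in, not the paper's fusion technique:

```python
import numpy as np

def brovey_fuse(rgb, pan, eps=1e-8):
    """Brovey-style component substitution: rescale each colour band by the
    ratio of the panchromatic intensity to the colour intensity, so the
    fused image inherits the pan sensor's brightness (and sensitivity)."""
    intensity = rgb.mean(axis=2)
    ratio = pan / (intensity + eps)
    return rgb * ratio[..., None]

rgb = np.full((4, 4, 3), 0.2); rgb[..., 0] = 0.4   # dim reddish patch
pan = np.full((4, 4), 0.8)                          # brighter pan image
fused = brovey_fuse(rgb, pan)
print(round(float(fused.mean(axis=2)[0, 0]), 6))    # 0.8
```

The colour ratios are preserved while the intensity is replaced by the panchromatic value, which mirrors the sensitivity-boosting goal stated in the abstract.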
In this paper we discuss the utilization of the Projection-Slice Theorem (PST) to reduce a data set that represents multiple spectral-band representations of an image and to extract variant features from those representations. Noise is removed from each of the one-dimensional projections of the images via the PST and a wavelet transform thresholding process. The extracted features emphasize differences in spectral information from the same image and are combined through synthesis via the inverse PST. This sensor fusion method facilitates the design of filters to recognize an image with characteristics similar to the relevant features from each of the bands that have been incorporated in a combined multispectral/fused image. We present our method of feature extraction, wavelet noise removal, and data synthesis.
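The projection-slice theorem underlying this method is easy to verify numerically: the 1-D Fourier transform of a projection of an image equals the corresponding central slice of the image's 2-D Fourier transform. A minimal check for the zero-angle projection:

```python
import numpy as np

img = np.random.default_rng(1).random((32, 32))

# 0-degree Radon projection: sum the image along its rows.
proj = img.sum(axis=0)

# Projection-slice theorem: the 1-D FFT of that projection equals the
# zero-vertical-frequency row of the image's 2-D FFT.
print(np.allclose(np.fft.fft(proj), np.fft.fft2(img)[0, :]))  # True
```

This identity is what lets the method do its denoising and feature extraction on cheap 1-D projections while still operating, in effect, on the 2-D spectrum.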
Full Spectrum Dominance, or as defined by Joint Vision 2020, the ability to be persuasive in peace, decisive in war and preeminent in any form of conflict, cannot be accomplished without the ability to know what the adversary is currently doing as well as the capacity to correctly anticipate the adversary's future actions. A key component in the ability to predict the adversary's intention is Situation Awareness (SA). In this paper we provide a discussion of an SA model, examine a specific instantiation of the model and demonstrate how it has been applied to two specific domains: Global Monitoring and Cyber Awareness. We conclude this paper with a discussion on future work.
Situation awareness involves the identification and monitoring of relationships among level-one objects. This problem in general is intractable (i.e., there is a potentially infinite number of relations that could be tracked) and thus requires additional constraints and guidance defined by the user if there is to be any hope of creating practical situation awareness systems. This paper describes a Situation Awareness Assistant (SAWA) that facilitates the development of user-defined domain knowledge in the form of formal ontologies and rule sets and then permits the application of the domain knowledge to the monitoring of relevant relations as they occur in evolving situations. SAWA includes tools for developing ontologies in OWL and rules in SWRL and provides runtime components for collecting event data, storing and querying the data, monitoring relevant relations and viewing the results through a graphical user interface. An application of SAWA to a scenario from the domain of supply logistics is also presented.
Fusion 2+ of air-to-air engagement involves pressing real-time constraints and very large amounts of imperfect data. Real-time data acquired during an air-to-air engagement will have different types of imperfection; two representative classes of imperfection are vagueness and ambiguity in the data. However, current approaches to managing Fusion 2+ are limited to utilizing either vague data or ambiguous data. The most popular fusion technique for vague data is fuzzy logic, and for ambiguous data, the Bayesian network. The challenge addressed in this proposal is to explore the framework of a hybrid-processing Fusion 2+ model that can formally process both vague (fuzzy) and ambiguous (probabilistic) data types. There are two major issues in building this Fusion 2+ model. The first issue is to mathematically integrate the heterogeneous models, which have different domains: probability and possibility. The second issue is to programmatically integrate two different software packages. For the first issue, this research explores and adopts two novel transformation methods between probability and possibility and compares the sensitivity of the methods. This research also provides an object-oriented tool for building a hybrid model through an application programming interface, so that we can model the complex (multi-to-multi) Fusion 2+ model of an air-to-air engagement.
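One widely used probability-to-possibility transformation that such a hybrid model could adopt is the Dubois-Prade transformation; whether it is one of the two methods adopted in the paper is not stated here, so treat this as a representative sketch:

```python
def prob_to_poss(p):
    """Dubois-Prade probability-to-possibility transformation:
    pi_i = sum_j min(p_i, p_j). The most probable outcome receives
    possibility 1, and the ordering of outcomes is preserved."""
    return [sum(min(pi, pj) for pj in p) for pi in p]

# Three hypothetical engagement outcomes with probabilities 0.5 / 0.3 / 0.2.
print([round(v, 6) for v in prob_to_poss([0.5, 0.3, 0.2])])  # [1.0, 0.8, 0.6]
```

A transformation of this kind is exactly what lets probabilistic (Bayesian) beliefs and possibilistic (fuzzy) memberships be moved into a common domain for joint processing.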
How well does an algorithm support its purpose and user base? Has automation provided the user with the ability to augment their production, quality, or responsiveness? In a number of systems today these questions can be answered by either Measures of Performance (MOP) or Measures of Effectiveness (MOE). However, the fusion community has not yet developed sufficient measures and has only recently devoted a concerted effort to addressing this deficiency. In this paper, we summarize work on metrics for the lower levels of fusion (object ID, tracking, etc.) and discuss whether these same metrics still apply to the higher levels (situation awareness), or whether other approaches are necessary. We conclude the paper with a set of future activities and directions.
Plan recognition has to be performed in a statistically robust manner over a possibly infinite number of tactical situations and different types of units. We need a generic model for tactical plan recognition in which we combine observations and a priori knowledge in a flexible manner, using suitable methodologies and taking a large hypothesis space into account. Threats, and therefore observed agents' plans, should be put into context.
Here we propose Multi-Entity Bayesian Networks (MEBN), introduced in [2], which enable the composition of Bayesian networks from network pieces, as the key methodology for designing flexible plan recognition models. However, Bayesian network pieces (fragments) must be compatible, and we therefore propose an ontology for generic plan recognition using Bayesian network fragments. Additionally, we claim that using multi-entity network fragments expands the hypothesis space, and that with this approach various multi-agent structures can be expressed. Our final contribution is the incorporation of explicit utilities in our plan recognition model.
The best method to track through a maneuver is to know the motion model of the maneuvering target. Unfortunately, the motion model of the maneuver is not usually known a priori. If it can be estimated quickly from the measurements, the resulting track estimate will be better than that produced by the a priori static model.
An adaptive function approximation technique to improve the motion model while tracking is analyzed for its potential to track through various maneuvers. The basic function approximation technique is that of a Gaussian sum. The Gaussian sum approximates the function which represents the error between the initial static model and the actual model of the maneuver. The parameters of the Gaussian sum are identified on-line using a Kalman filter identification scheme. This scheme, used in conjunction with a Kalman filter tracker, creates a coupled technique that can improve the motion model quickly.
The performance of this adaptive Gaussian sum approach to maneuver tracking is analyzed for three maneuvers: a maneuvering ballistic target, a target going through an s-curve, and a real target with a multiple-racetrack flight path. The results of these test cases demonstrate the capabilities of this approach to track maneuvering targets.
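The linear-in-weights structure of a Gaussian sum is what makes the identification step tractable. The sketch below fits the weights to a hypothetical model-error curve by batch least squares rather than the paper's online Kalman filter identification; the centers, width, and error curve are all assumptions for illustration:

```python
import numpy as np

def gaussian_sum(x, centers, width, weights):
    """Evaluate a Gaussian sum with fixed centers and width."""
    return sum(w * np.exp(-((x - c) ** 2) / (2 * width ** 2))
               for w, c in zip(weights, centers))

x = np.linspace(-1, 1, 200)
error_curve = np.sin(3 * x)          # hypothetical static-vs-actual model error
centers = np.linspace(-1, 1, 9)
width = 0.25

# The approximation is linear in the weights, so batch least squares finds
# them directly (the paper identifies them online with a Kalman filter).
Phi = np.stack([np.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers],
               axis=1)
weights, *_ = np.linalg.lstsq(Phi, error_curve, rcond=None)
residual = np.max(np.abs(gaussian_sum(x, centers, width, weights) - error_curve))
print(residual < 0.05)
```

Because the same linear structure holds at each time step, replacing the batch solve with a recursive (Kalman) update gives the on-line scheme described above.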
GNC algorithms have evolved from linear optimal control theory. This approach accommodates simple target maneuvers; however, it lacks robustness when advanced threats with intelligent target maneuvers are encountered. Kalman filters (KF) and extended Kalman filters (EKF) require a priori defined models or equations of motion for the objects being observed. Data Modeling autonomously assesses the physical characteristics of a tracked object from only its measured motion. Estimates of the object's mass, equivalent area, and probable control feedback loop parameters are obtained. The resulting equations become state and process models for Kalman filters.
This paper describes an algorithm for tracking ground targets, based mainly on measurements from an MTI radar. We examine the use of prior knowledge of target type to automatically improve the performance of a purely kinematic tracker. This tracker is an adaptation of a Variable Structure Interacting Multiple Model (VS-IMM) estimator acting within an S-dimensional assignment method. The algorithm can thus track several targets with false alarms, and the variable structure allows the use of only the dynamic models relevant to roads in the vicinity of the target's prediction. However, the set of possible on-road and off-road models does not interact as in the usual IMM mechanism. Instead, the on-road behaviour of a target across S-1 radar scans is optimally managed in a hypothesis tree, since we do not mix incompatible road models. Both global on-road and off-road behaviours are nevertheless handled by mode transition probabilities, making it possible to track targets that change their behaviour with respect to the road network within the same algorithm. Next, an automatic scheme for integrating measured type data is examined, which can be connected to the above kinematic framework. The incoming measured type data is modelled by a belief function to reflect uncertainty as well as the sensor's reliability. Simulation results show the operation of the kinematic tracker for road targets, illustrating the relevance of the variable structure dynamic model set.
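The mode transition probabilities mentioned above drive the standard IMM mixing step, which can be sketched as follows (the transition matrix and mode probabilities are illustrative values, not from the paper):

```python
import numpy as np

P = np.array([[0.95, 0.05],    # on-road -> {on-road, off-road}
              [0.10, 0.90]])   # off-road -> {on-road, off-road}
mu = np.array([0.7, 0.3])      # current mode probabilities

c = mu @ P                      # predicted mode probabilities
mix = (P * mu[:, None]) / c     # mix[i, j]: weight of model i's estimate
                                # when re-initializing model j's filter
print(c)                        # predicted probabilities
print(mix.sum(axis=0))          # each column sums to 1
```

Each per-model Kalman filter is then restarted from the mixture of the previous estimates weighted by the corresponding column of `mix`, which is how behaviour changes between on-road and off-road motion are tracked.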
Tracking multiple ground targets under clutter and in real time poses several challenges: vehicles often get masked by foliage or line-of-sight (LOS) problems, manifesting in misdetections and false alarms. Further complications arise when groups of vehicles merge or split. This paper presents an attempt to address these issues using a group tracking approach. Group tracking is a way to ameliorate, or at least soften, the impact of such issues, in the hope that at least partial information will be received from each target group even when the probability of detection (PD) of each individual member is low. A Strongest Neighbour Association (SNA) method of measurement-to-track association based on space-time reasoning and track-measurement similarity measures has been derived. We combine the association strengths of the space-time dynamics, the degree-of-overlap, and the historical affinity metrics to relate measurements and tracks. The state estimation is based on a standard Kalman filter. Lastly, Pairwise Historical Affinity Ratios (PHAR) are proposed for the detection of a split scenario. This method has been tested to work well on a simulated convoy-splitting scenario. Monte Carlo experiment runs of six different error rates with five different compositions of errors have been conducted to assess the performance of the tracker. Results indicate that the group tracker remains robust (>80% precision) even in the presence of individual source track error rates of up to 30%.
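A minimal sketch of combining the three association strengths into a strongest-neighbour choice follows; the weights, gate size, and score form are assumptions, and the paper's exact similarity measures are not reproduced:

```python
import numpy as np

def association_score(track, meas, history_affinity, w=(0.5, 0.3, 0.2)):
    """Combine space-time proximity, degree-of-overlap, and historical
    affinity into one association strength (weights are illustrative)."""
    d = np.linalg.norm(np.asarray(track["pos"]) - np.asarray(meas["pos"]))
    proximity = np.exp(-d ** 2 / (2 * track["gate"] ** 2))
    overlap = meas.get("overlap", 0.0)   # fraction of group footprint shared
    return w[0] * proximity + w[1] * overlap + w[2] * history_affinity

def strongest_neighbour(track, measurements, affinities):
    """Pick the measurement with the highest combined score (SNA)."""
    scores = [association_score(track, m, a)
              for m, a in zip(measurements, affinities)]
    return int(np.argmax(scores)), max(scores)

track = {"pos": (0.0, 0.0), "gate": 5.0}
meas = [{"pos": (1.0, 1.0), "overlap": 0.8},
        {"pos": (10.0, 0.0), "overlap": 0.1}]
idx, score = strongest_neighbour(track, meas, affinities=[0.9, 0.2])
print(idx)  # 0: the nearby, overlapping, historically affine measurement
```

A history term like the affinities above is also the ingredient the PHAR split test would monitor over time.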
Ground targets can be detected by multiple classes of sources in military surveillance. There are two main challenges in acquiring a ground situation picture from data collected by multiple sources. First, different sources provide different information that describes military entities at different granularities and accuracies. This makes processing of the data in one unified tracker difficult. Second, the data update rates of these sources vary; some update rates can be very low (on the order of hours), leading to greater difficulty in data association.
This paper presents our attempt at multi-source ground target tracking, taking the above two issues into consideration. Targets are tracked in groups, and multiple trackers are designed so that data of different granularities are processed by the respective trackers. Tracks from these trackers are then correlated to form the common picture. Two strategies are proposed to handle the problem of varying data update rates. The first strategy is to exploit different approaches to calculating the beliefs of data association according to update rates. When the update rate is high, the belief is calculated by a distance function based on estimated kinematic states. When the update rate is low, the belief of data association is computed using a Bayesian network, which infers the beliefs from observed information and domain knowledge. The second strategy is to exploit the complementary information in different trackers to improve data association. The first step is to find the correlation among tracks from different trackers. This track-track correlation information is fed back to modify the beliefs of data associations in the tracks. Experiments demonstrated that such a combination of multi-source information not only produces a more complete ground picture, but also helps to improve the data association accuracy in the respective trackers.
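The first strategy, switching the belief computation on the update rate, can be sketched as follows. The Gaussian distance belief, the attribute-matching stand-in for the Bayesian network, and the 60-second threshold are all assumptions for illustration:

```python
import math

def kinematic_belief(pred_pos, meas_pos, sigma=50.0):
    """High-rate case: belief from a Gaussian distance function on the
    predicted kinematic state."""
    d = math.dist(pred_pos, meas_pos)
    return math.exp(-d * d / (2 * sigma * sigma))

def attribute_belief(track_attrs, meas_attrs, match_weight=0.8):
    """Low-rate case: stand-in for the Bayesian-network inference, scoring
    agreement of observed attributes (type, heading sector, ...)."""
    matches = sum(track_attrs.get(k) == v for k, v in meas_attrs.items())
    return 1 - (1 - match_weight) ** matches if matches else 0.1

def association_belief(track, meas, update_interval, high_rate_s=60):
    """Choose the belief computation based on the source's update rate."""
    if update_interval <= high_rate_s:
        return kinematic_belief(track["pred"], meas["pos"])
    return attribute_belief(track["attrs"], meas["attrs"])

# A low-rate source: kinematics are stale, so attributes carry the belief.
print(association_belief({"pred": (0, 0), "attrs": {"type": "tank"}},
                         {"pos": (10, 10), "attrs": {"type": "tank"}},
                         update_interval=3600))
```

The second strategy would then adjust these beliefs using track-track correlation evidence fed back from the other trackers.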
We combined images from different sensors that were enhanced by multiscale products of the wavelet coefficients. Using the wavelet transform, we used a multiresolution analysis to form products of coefficients across scales. Then, a fusion rule was applied to the product images to determine how the original images could be combined. Using this approach, we were able to decrease the sensitivity of the fusion process to noise.
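The multiscale product idea can be demonstrated with a hand-rolled Haar transform: detail coefficients are multiplied across scales, so features that persist over scales dominate the product. The two-level decomposition and toy step signal below are illustrative assumptions, not the wavelet or fusion rule used in the paper:

```python
import numpy as np

def haar_detail(x):
    """One level of an (unnormalized) Haar transform on an even-length signal:
    returns (approximation, detail) coefficients."""
    return (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2

def multiscale_product(x, levels=2):
    """Product of detail coefficients across scales: large where a feature
    persists over scales, small where it appears at a single scale only."""
    details, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_detail(a)
        details.append(d)
    n = len(details[-1])
    # Decimate finer-scale details to the coarsest length before multiplying.
    aligned = [d.reshape(n, -1).mean(axis=1) for d in details]
    return np.prod(aligned, axis=0)

edge = np.r_[np.zeros(7), np.ones(9)]        # step edge near sample 7
p = multiscale_product(edge)
print(int(np.argmax(np.abs(p))))             # 1: the coarse cell with the step
```

In a fusion rule, coefficients would then be selected from whichever source image has the larger multiscale product at each location, which is what reduces sensitivity to single-scale noise.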
This paper explores classifier fusion problems where the task is selecting a subset of classifiers from a larger set with the goal of achieving optimal performance. To aid in the selection process we propose the use of several correlation-based diversity measures. We define measures that capture the correlation for n classifiers, as opposed to pairs of classifiers only. We then suggest a sequence of steps for selecting classifiers. This method avoids the exhaustive evaluation of all classifier combinations, which can become very large for larger sets of classifiers. We then report on observations made after applying the method to a data set from a real-world application. The classifier set chosen achieves close to optimal performance with a drastically reduced number of evaluation steps.
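A sketch of the approach, with an n-classifier correlation measure and greedy forward selection replacing exhaustive subset evaluation (the specific measure and selection order are assumptions, not the paper's definitions):

```python
import numpy as np

def ensemble_correlation(correct):
    """n-classifier correlation: fraction of samples on which all classifiers
    are simultaneously right or simultaneously wrong (1 = fully correlated)."""
    return float((correct.all(axis=0) | (~correct).all(axis=0)).mean())

def greedy_select(correct, k):
    """Start from the most accurate classifier, then repeatedly add the one
    that minimizes ensemble correlation; avoids exhaustive subset search."""
    chosen = [int(np.argmax(correct.mean(axis=1)))]
    while len(chosen) < k:
        rest = [j for j in range(len(correct)) if j not in chosen]
        chosen.append(min(
            rest, key=lambda j: ensemble_correlation(correct[chosen + [j]])))
    return chosen

# correct[i, s] = True if classifier i labels sample s correctly (toy data).
rng = np.random.default_rng(3)
correct = rng.random((5, 200)) < np.array([0.8, 0.8, 0.75, 0.7, 0.6])[:, None]
print(greedy_select(correct, 3))
```

Greedy selection evaluates O(n·k) candidate subsets instead of all 2^n combinations, which is the complexity saving the abstract points to.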
Prior knowledge helps to make a speaker recognition system more reliable and robust. This paper presents a uniform framework of feature-level fusion to incorporate prior knowledge for speaker recognition using gender information, based on dynamic Bayesian networks (DBN). DBNs are a statistical approach with the ability to handle hidden variables and missing data in a principled way and with high extensibility; thus, DBNs can describe prior knowledge conveniently. Our contribution is to apply DBNs to construct a general feature-level fusion scheme that combines a general acoustic feature such as MFCC and prior information such as gender into a single DBN for speaker identification. In our framework, gender information becomes additional observed data that influences both the hidden variables and the observed acoustic data. Experimental evaluation on a subset of the YOHO corpus shows promising results.
Several surveillance applications are characterized by the ability to gather information about the scene from more than one sensor modality, and heterogeneous sensor data must then be fused by the decision-maker. In this paper, we discuss the issues relevant to developing a model for fusion of information from audio and visual sensors, and present a framework to enhance decision-making capabilities. In particular, our methodology focuses on the issues of temporal reasoning, uncertainty representations, and coupling between features inferred from data streams coming from different sensors. We propose a conditional probability-based representation for uncertainty, along with fuzzy rules to assist decision-making, and a matrix representation of the coupling between sensor data streams. We also develop a fusion algorithm that utilizes these representations.
Over the last several years, the Naval Research Laboratory has been developing corrosion detection algorithms for assessing coatings conditions in tanks and voids on US Navy ships. The corrosion detection algorithm is based on four independent algorithms: two edge detection algorithms, a color algorithm, and a grayscale algorithm. Of these four, the color algorithm is the key algorithm and to some extent drives overall performance. The four independent algorithm results are fused with other features to first generate an image-level assessment of coatings damage. The image-level results are then aggregated across a tank or void image set to generate a single coatings damage value for the tank or void being inspected. The color algorithm, the algorithm fusion methodology, and the aggregation algorithm are key to the overall performance of the corrosion detection algorithm. This paper describes modifications made to these three components to increase the corrosion detection algorithm's overall operating range, to improve its ability to assess low coatings damage, and to improve the accuracy of coatings damage classification at both the individual image and the whole-tank level.
The multiscale Kalman smoother (MKS) is a globally optimal estimator for fusing remotely sensed data. The MKS algorithm can be readily parallelized because it operates on a Markov tree data structure. However, such an implementation requires a large amount of memory to store the parameters and estimates at each scale in the tree. This becomes particularly problematic in applications where the observations have very different resolutions and the finest scale data are sparse or aggregated. Such cases commonly arise when fusing data to capture both regional and local structure. In this work, we develop an efficient MKS algorithm and apply it to the fusion of topographic and bathymetric elevation data.
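The core per-node update of such an estimator is a minimum-variance merge of a coarse prior with fine observations. The sketch below shows only that merge, without the upward/downward tree sweeps of the full MKS; the grid size, variances, and the NaN convention for missing soundings are assumptions:

```python
import numpy as np

def fuse_estimates(coarse, var_coarse, fine, var_fine):
    """Minimum-variance (inverse-variance-weighted) fusion of a coarse prior
    with sparse fine-scale observations; NaNs in `fine` mark cells with no
    fine data, which keep the coarse estimate and variance unchanged."""
    fused = coarse.copy()
    var = np.full_like(coarse, var_coarse, dtype=float)
    has_fine = ~np.isnan(fine)
    w = var_coarse / (var_coarse + var_fine)
    fused[has_fine] = coarse[has_fine] + w * (fine[has_fine] - coarse[has_fine])
    var[has_fine] = (var_coarse * var_fine) / (var_coarse + var_fine)
    return fused, var

coarse = np.full((4, 4), 10.0)          # regional elevation prior
fine = np.full((4, 4), np.nan)
fine[1, 1] = 12.0                        # one sparse bathymetric sounding
fused, var = fuse_estimates(coarse, 4.0, fine, 1.0)
print(fused[1, 1], fused[0, 0])          # 11.6 and 10.0
```

The Markov tree structure of the MKS propagates exactly this kind of update between scales, which is why sparse fine data can still influence the regional estimate.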
This paper investigates methods of decision-making from uncertain and disparate data. The need for such methods arises in those sensing application areas in which multiple and diverse sensing modalities are available, but the information provided can be imprecise or only indirectly related to the effects to be discerned. Biological sensing for biodefense is an important instance of such applications. Information fusion in that context is the focus of a research program now underway at MIT Lincoln Laboratory. The paper outlines a multi-level, multi-classifier recognition architecture developed within this program, and discusses its components. Information source uncertainty is quantified and exploited for improving the quality of data that constitute the input to the classification processes. Several methods of sensor uncertainty exploitation at the feature-level are proposed and their efficacy is investigated. Other aspects of the program are discussed as well. While the primary focus of the paper is on biodefense, the applicability of concepts and techniques presented here extends to other multisensor fusion application domains.
Progress in sensor management technology is challenged by the geographic space it spans, the heterogeneity of the sensors, and the real-time timeframes within which plans controlling the assets are executed. This paper presents a new sensor management paradigm and demonstrates its application in a sensor management algorithm designed for a biometric access control system. The approach consists of an artificial intelligence (AI) algorithm focused on uncertainty measures, which makes the high-level decisions to reduce uncertainties and interfaces with the user, integrated cohesively with a bottom-up evolutionary algorithm, which optimizes the sensor network’s operation as directed by the AI algorithm. The sensor management algorithm presented is composed of a Bayesian network (the AI algorithm component) and a swarm optimization algorithm (the evolutionary algorithm). Thus, the algorithm can change its own performance goals in real time and will modify its own decisions based on observed measures within the sensor network. The definition of these measures, as well as the Bayesian network, determines the robustness of the algorithm and its utility in reacting dynamically to changes in the global system.
The performance of a multi-sensor data fusion system is inherently
constrained by the configuration of the given sensor suite.
Intelligent or adaptive control of sensor resources has been shown
to offer improved fusion performance in many applications. Common
approaches to sensor management select sensor observation tasks
that are optimal in terms of a measure of information. However,
optimising for information alone is inherently sub-optimal as it
does not take account of any other system requirements such as
stealth or sensor power conservation. We discuss the issues
relating to developing a suite of performance metrics for
optimising multi-sensor systems and propose some candidate
metrics. In addition, it may not always be necessary to maximize
information gain; in some cases small increases in information
gain may come at the cost of large sensor resource
requirements. Additionally, the problems of sensor tasking and
placement are usually treated separately, leading to a lack of
coherency between sensor management frameworks. We propose a novel
approach based on a high level decentralized information-theoretic
sensor management architecture that unifies the processes of
sensor tasking and sensor placement into a single framework.
Sensors are controlled using a minimax multiple objective
optimisation approach in order to address probability of target
detection, sensor power consumption, and sensor survivability
whilst maintaining a target estimation covariance threshold. We
demonstrate the potential of the approach through simulation of a
multi-sensor, target tracking scenario and compare the results
with a single objective information based approach.
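The minimax multiple-objective idea can be sketched in a few lines. This is a hedged illustration, not the paper's controller: the action names, objectives, and cost values are invented, and each objective is assumed pre-normalized to [0, 1].

```python
# Sketch of minimax multi-objective action selection: each candidate sensor
# action has normalized costs for several objectives (miss probability, power
# use, exposure/stealth risk); pick the action whose worst cost is smallest.

actions = {  # hypothetical candidate actions and their objective costs
    "sensor_A_on": {"miss_prob": 0.2, "power": 0.9, "exposure": 0.3},
    "sensor_B_on": {"miss_prob": 0.4, "power": 0.3, "exposure": 0.4},
    "both_on":     {"miss_prob": 0.1, "power": 1.0, "exposure": 0.6},
}

def minimax_choice(actions):
    """Return the action minimizing the maximum objective cost."""
    return min(actions, key=lambda a: max(actions[a].values()))

best = minimax_choice(actions)
# "sensor_B_on" wins: its worst cost (0.4) beats 0.9 and 1.0, even though
# a pure information criterion might prefer "both_on" for its low miss_prob.
```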
Rapid developments in sensor technology and its applications have energized research efforts towards devising a firm theoretical foundation for sensor management. Ubiquitous sensing, wide-bandwidth communications and distributed processing provide both opportunities and challenges for sensor and process control and optimization. Traditional optimization techniques do not have the ability to simultaneously consider the wildly non-commensurate measures involved in sensor management in a single optimization routine. Market-oriented programming provides a valuable and principled paradigm for designing systems to solve this dynamic and distributed resource allocation problem. We have modeled the sensor management scenario as a competitive market, wherein the sensor manager holds a combinatorial auction to sell the various items produced by the sensors and the communication channels. However, standard auction mechanisms have been found not to be directly applicable to the sensor management domain. For this purpose, we have developed a specialized market architecture, MASM (Market Architecture for Sensor Management). In MASM, the mission manager is responsible for deciding task allocations to the consumers and their corresponding budgets, and the sensor manager is responsible for resource allocation to the various consumers. In addition to having a modified combinatorial winner determination algorithm, MASM has specialized sensor network modules that address commensurability issues between consumers and producers in the sensor network domain. A preliminary multi-sensor, multi-target simulation environment has been implemented to test the performance of the proposed system. MASM outperformed the information-theoretic sensor manager in meeting the mission objectives in the simulation experiments.
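The winner determination problem at the heart of a combinatorial auction can be illustrated with a tiny brute-force solver. This is only a toy instance of the general problem class (MASM uses a modified algorithm not shown here); the bidders, item names, and prices are invented.

```python
# Toy winner determination for a combinatorial auction: each consumer bids a
# price for a bundle of sensor/channel items; choose a set of bids with
# pairwise-disjoint bundles that maximizes total price (brute force).

from itertools import combinations

bids = [  # (bidder, bundle of items, price) -- all hypothetical
    ("track_1", {"radar", "chan_1"}, 8.0),
    ("track_2", {"eo_cam", "chan_1"}, 6.0),
    ("id_task", {"radar"}, 5.0),
]

def winner_determination(bids):
    best_value, best_set = 0.0, ()
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            items = [it for _, bundle, _ in combo for it in bundle]
            if len(items) == len(set(items)):      # bundles must be disjoint
                value = sum(price for _, _, price in combo)
                if value > best_value:
                    best_value, best_set = value, combo
    return best_value, best_set

value, winners = winner_determination(bids)
# track_2 + id_task share no items and total 11.0, beating track_1's 8.0.
```

Exhaustive search is exponential in the number of bids, which is why practical systems use specialized winner determination algorithms.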
Sensitivity analysis in an uncertainty reasoning system helps establish the relationship between the system output and the system parameters under a given input condition. Much work has been done in Bayesian reasoning and, in particular, Bayesian networks in the last decade. However, little work has been done in other uncertainty reasoning frameworks. In this paper, we introduce a sensitivity analysis method that is built upon the Probabilistic Argument System (PAS) framework. With the help of a PAS, we developed both approximate and closed-form formulas for sensitivity analysis that achieve the same functionalities as those developed for Bayesian networks recently reported in the literature. However, our approach can be applied to non-Bayesian reasoning as well as Bayesian reasoning, as Bayesian reasoning can be considered a special case of PAS. For example, Dempster-Shafer (D-S) theory has a close tie with PAS. Therefore, the approach described in this paper can be used to develop D-S reasoning systems. We give examples using an incomplete probabilistic model in PAS to illustrate the methods described in this paper.
We study classical and quantum harmonic analysis of phase-space functions (classical observables) on the finite Heisenberg group HW_{2N+1}(Z_{mn}^N, Z_{mn}^N, Z_{mn}) over the ring Z_{mn}. This group is the discrete version of the real Heisenberg group HW_{2N+1}(R^N, R^N, R), where R is the real field. These functions have one-dimensional and m_1, m_2, ..., m_n-dimensional matrix-valued spectral components (for the irreducible representations of HW_{2N+1}). The family of all 1D representations gives the classical world (CW). The various m_i-dimensional representations (i = 1, 2, ..., n) map the classical world (CW) into quantum worlds QW(m_i) of the i-th resolution. The worlds QW(m_1) and QW(m_n) contain rough information and fine details about the quantum world, respectively. In this case the Fourier transform on the Heisenberg group can be considered as a Weyl quantization multiresolution procedure. We call this transform the natural quantum Fourier transform.
Advances in the development of imaging sensors depend upon (among other things) the testing capabilities of research laboratories. Sensors and sensor suites need to be rigorously tested under laboratory and field conditions before being put to use. Real-time dynamic simulation of real targets is a key component of such testing, as actual full-scale tests with real targets are extremely expensive and time consuming and are not suitable for early stages of development. Dynamic projectors simulate tactical images and scenes. Several technologies exist for projecting IR and visible scenes to simulate tactical battlefield patterns - large format resistor arrays, liquid crystal light valves, Eidophor type projecting systems, and micromirror arrays, for example. These technologies are slow, or are restricted either in the modulator array size or in spectral bandwidth. In addition, many operate only in specific bandwidth regions. Physical Optics Corporation is developing an alternative to current scene projectors. This projector is designed to operate over the visible, near-IR, MWIR, and LWIR spectra simultaneously, from 300 nm to 20 μm. The resolution is 2 megapixels, and the designed frame rate is 120 Hz (40 Hz in color). To ensure high-resolution visible imagery and pixel-to-pixel apparent temperature difference of 100°C, the contrast between adjacent pixels is >100:1 in the visible to near-IR, MWIR, and LWIR. This scene projector is designed to produce a flickerless analog signal, suitable for staring and scanning arrays, and to be capable of operation in a hardware-in-the-loop test system. Tests performed on an initial prototype demonstrated contrast of 250:1 in the visible with non-optimized hardware.
We describe a large sensor field whose mission is to protect coastal
waters by detecting objects like submarines. The system is buoy-based
and distributed over a littoral
area. The opportunities for detection are short and intermittent and
the signal to noise ratio is low. The topology of the field changes
with time due to currents, wind, tides and storms. The field has a
number of gateway nodes that have the capability to transmit
off-field through a satellite, a ship or a plane.
We propose an approach to fusion that includes on-buoy processing,
cooperative processing with nearest neighbors and the potential for
off-field processing. Each stage of processing tries both to minimize
false positive events and to maximize the probability of detection
when an object is present. It also tries to minimize
power used in order to prolong the life of the field.
We analyze the optimal placement of gateway nodes in the field to
minimize power consumption and maximize reliability and probability
of successful off-field transmission. We analyze the duty cycles of
the sensor and gateway nodes to optimize lifetime. We also analyze
the traffic that the field will be expected to handle in order to
support network control and coordination, distributed fusion,
off-field communication (including queries and responses, and
reporting of detection events), and the forwarding of traffic
through individual sensor nodes toward gateways or fusion points.
The paper considers the design of optoelectronic scalar-relation vector processors (SRVP) with time-pulse coding as base cells for homogeneous 1D and 2D computing media. The conception is founded on exploiting the advantages of time-pulse coding in hardware embodiments of multichannel analog neurobiologic devices and time-pulse photoconverters. The two-stage structure of the SRVP, which maps a generalized mathematical model of a quasi-universal relation between two vectors, is designed on a mathematical basis that includes generalized equivalence (non-equivalence) operations and the generalized t-norm and s-norm operations of neuro-fuzzy logic. It is shown that time-pulse coding allows quasi-universal elements of two-valued logic to be used as base blocks in both cascades of the processor. Four-input universal logical elements of two-valued logic (ULE TVL) with direct and complement outputs are used in the first cascade of the SRVP to process the analog components of the vectors. In a modified variant, the ULE TVL have direct and inverse digital outputs for the direct and complement time-pulse outputs and are supplied with additional drivers for optical signal conversion. The ULE TVL of the second cascade has 2n or 4n inputs, where n is the dimension of the processed vectors. The ULE TVL circuits are based on parallel analog-to-digital converters and digital circuits implemented with CMOS transistors, have optical inputs and outputs, and have the following characteristics: realized in 1.5 µm CMOS technology; input current range of 100 nA to 100 µA; supply voltage of 3 to 15 V; relative error of less than 0.5%; output voltage delay in the range of 10 to 100 ns.
We consider the structural design and circuitry of the SRVP base blocks and show that all principal components can be implemented with optoelectronic photocurrent transformers based on current-mirror comparators with two-threshold and multi-threshold discrimination.
When there is a limitation on the communication bandwidth between sensors and a fusion center, one needs to optimally pre-compress the sensor outputs (sensor observations or estimates) before transmission in order to obtain a constrained optimal estimate at the fusion center in terms of the linear minimum error variance criterion. This paper gives an analytic solution for the optimal linear dimensionality-compression matrix in the single-sensor case, analyzes the existence of the optimal matrix in the multisensor case, and shows how to implement a Gauss-Seidel algorithm to search for an optimal compression matrix.
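A scalar special case illustrates what "optimal pre-compression" means. This is an assumed toy setup, not the paper's derivation: two independent noisy measurements of one quantity must be compressed to a single number before transmission; for uncorrelated noise, inverse-variance weighting minimizes the error variance, so nothing is lost by compressing.

```python
# Toy pre-compression sketch: compress two scalar measurements y1, y2 of the
# same quantity into one scalar z = c1*y1 + c2*y2 before transmission.

def compress_weights(var1, var2):
    """Error-variance-minimizing compression weights for two independent
    measurements (classic inverse-variance weighting, normalized to sum 1)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    s = w1 + w2
    return w1 / s, w2 / s

def fused_variance(var1, var2):
    """Variance of the optimally compressed/fused scalar."""
    return 1.0 / (1.0 / var1 + 1.0 / var2)

c1, c2 = compress_weights(2.0, 1.0)   # trust the more precise sensor more
```

In the general vector case treated by the paper, the compression matrix plays the role of these weights and the Gauss-Seidel iteration searches for it coordinate-wise.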
In a military multi-agent system (MAS), every agent needs to analyze the temporal relationships among tasks or combat behaviors, and it is very important to reflect the battlefield situation in a timely manner. The temporal relations among agents are usually very complex, and we model them with an interval algebra (IA) network. An efficient temporal reasoning algorithm is therefore vital in a battle MAS model. Since the core of temporal reasoning is the path-consistency algorithm, an efficient path-consistency algorithm is necessary. In this paper we use the Interval Matrix Calculus (IMC) method to represent the temporal relations and, building on Allen's path-consistency algorithm, optimize path consistency by improving the efficiency with which temporal relations are propagated.
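Allen's full interval algebra has 13 base relations and a 13×13 composition table; as a smaller, runnable illustration of the path-consistency step it builds on, here is path consistency over the point algebra, whose relations are subsets of {<, =, >}. This is a simplification for illustration, not the IMC-based algorithm of the paper.

```python
# Path consistency over the point algebra: tighten each constraint rel[(i, j)]
# with the composition rel[(i, k)] o rel[(k, j)] until a fixed point.

BASE = {"<", "=", ">"}
COMP = {  # composition of base point relations
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): BASE,
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): BASE,  (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    out = set()
    for a in r1:
        for b in r2:
            out |= COMP[(a, b)]
    return out

def path_consistency(n, rel):
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if len({i, j, k}) < 3:
                        continue
                    tight = rel[(i, j)] & compose(rel[(i, k)], rel[(k, j)])
                    if tight != rel[(i, j)]:
                        rel[(i, j)] = tight
                        changed = True
    return rel

# x < y and y < z, with x ? z unknown: path consistency infers x < z.
rel = {(i, j): set(BASE) for i in range(3) for j in range(3) if i != j}
rel[(0, 1)] = {"<"}; rel[(1, 0)] = {">"}
rel[(1, 2)] = {"<"}; rel[(2, 1)] = {">"}
rel = path_consistency(3, rel)
```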
The wavelet transform is applied effectively to image fusion because of its properties of multiresolution analysis, accurate reconstruction, and similarity to human visual perception. A review of prior research suggests that, in many applications, wavelet-based fusion results can surpass those of earlier common fusion algorithms. This paper describes the principle and method of wavelet-based image fusion and analyzes current research and future trends from two perspectives: the form of the wavelet transform and the fusion rules.
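The fusion principle can be sketched on 1-D signals standing in for image rows, using a single-level Haar transform. The max-absolute-coefficient rule used here is one common fusion rule among the many the literature discusses; this sketch is illustrative, not a survey result.

```python
# Wavelet fusion sketch: transform both inputs, average the approximation
# coefficients, keep the stronger detail coefficient, then reconstruct.

def haar_forward(x):
    """Single-level Haar transform (assumes even length)."""
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(x, y):
    ax, dx = haar_forward(x)
    ay, dy = haar_forward(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]              # average coarse part
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]  # max-|.| detail
    return haar_inverse(a, d)

fused = fuse([1, 1, 8, 0], [2, 2, 2, 2])
# The sharp edge from the first signal survives in the fused result.
```

For real images a 2-D multilevel transform is used, but the coefficient-selection logic is the same.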
This paper illustrates how Bayes equations and frequency data may be used as a measure of performance for belief fusion algorithms. A review of Bayes equations for single and multiple sources is provided. A simple performance measure is then calculated and applied to some belief fusion examples from the literature. Their performance measures are qualitatively similar, but the quantitative differences among these techniques appear to be arbitrary.
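The multiple-source Bayes equation the paper reviews can be sketched directly: for conditionally independent sources, the posterior is proportional to the prior times the product of the per-source likelihoods. The hypotheses and numbers below are invented for illustration.

```python
# Bayes fusion of conditionally independent sources:
# posterior(h) ∝ prior(h) * Π_s likelihood_s(h), then normalize.

def bayes_fuse(prior, likelihoods):
    """prior[h] and likelihoods[s][h] over hypotheses h; returns posterior."""
    post = {}
    for h, p in prior.items():
        val = p
        for lik in likelihoods:
            val *= lik[h]
        post[h] = val
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

prior = {"target": 0.5, "clutter": 0.5}
src1 = {"target": 0.8, "clutter": 0.3}   # likelihood of source 1's report
src2 = {"target": 0.7, "clutter": 0.4}   # likelihood of source 2's report
posterior = bayes_fuse(prior, [src1, src2])
# Two weakly supporting sources combine into strong support for "target".
```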