This PDF file contains the front matter associated with SPIE Proceedings Volume 12122, including the Title Page, Copyright information, Table of Contents, and Committee Page.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
The class of Labeled Random Finite Set filters known as the delta-Generalized Labeled Multi-Bernoulli (dGLMB) filter represents the filtering density as a set of weighted hypotheses, with each hypothesis consisting of a set of labeled tracks, which are in turn pairs of a track label and a track kinematic density. Upon update with a batch of measurements, each hypothesis gives rise to many child hypotheses, and therefore truncation has to be performed for any practical application. A finite compute budget can lead to degeneracy that drops tracks. To mitigate this, we adopt a factored filtering density through the use of a novel Merge/Split algorithm. Merging has long been established in the literature; our splitting algorithm is enabled by an efficient and effective marginalization scheme that indexes a kinematic density by the measurement IDs (in a moving window) that have been used in its update. This allows us to determine when independence can be considered to hold approximately for a given tolerance, so that the "resolution" of tracking is adaptively chosen, from a single factor (dGLMB), to all-singleton factors (Labeled Multi-Bernoulli, LMB), and anywhere in between.
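To make the splitting criterion concrete, the sketch below shows one plausible reading of the measurement-ID bookkeeping described above: tracks whose (windowed) measurement-ID sets never overlap are treated as approximately independent and placed in separate factors, computed here as connected components via union-find. The function name and data layout are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): group labeled tracks into
# independent factors by checking overlap of the measurement IDs (within a
# moving window) used to update each track's kinematic density.
from collections import defaultdict

def split_into_factors(track_meas_ids):
    """track_meas_ids: dict mapping track label -> set of measurement IDs
    used in that track's updates over the moving window. Tracks whose ID
    sets are disjoint are treated as approximately independent and placed
    in separate factors (connected components of the sharing graph)."""
    labels = list(track_meas_ids)
    parent = {lab: lab for lab in labels}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Tracks that share any measurement ID must stay in the same factor.
    owner = defaultdict(list)
    for lab, ids in track_meas_ids.items():
        for mid in ids:
            owner[mid].append(lab)
    for labs in owner.values():
        for other in labs[1:]:
            union(labs[0], other)

    factors = defaultdict(set)
    for lab in labels:
        factors[find(lab)].add(lab)
    return list(factors.values())

# Example: tracks 1 and 2 shared measurement 'm3'; track 3 is independent.
print(split_into_factors({1: {'m1', 'm3'}, 2: {'m3', 'm4'}, 3: {'m7'}}))
```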
The sliding innovation filter (SIF) is a state and parameter estimation strategy based on sliding mode concepts. It has seen significant development and research activity in recent years. In an effort to improve upon the numerical stability of the SIF, a square-root formulation is derived. The square-root SIF is based on Potter's algorithm. The proposed formulation is computationally more efficient and reduces the risk of failure due to numerical instability. The new strategy is applied to target tracking scenarios for the purposes of state estimation. The results are compared with those of the popular Kalman filter.
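For context, a minimal sketch of Potter's classical square-root measurement update for a scalar measurement is given below; it propagates a square root S of the covariance (P = S Sᵀ) instead of P itself. The square-root SIF described above would additionally replace the Kalman-style gain with the SIF's saturated-innovation gain, a detail not shown in this sketch.

```python
import numpy as np

def potter_update(x, S, z, h, r):
    """Classical Potter square-root measurement update for one scalar
    measurement z = h @ x + v, v ~ N(0, r), with P = S @ S.T.
    Returns the updated state and covariance square root."""
    v = S.T @ h                        # n-vector
    sigma = 1.0 / (v @ v + r)
    K = sigma * (S @ v)                # Kalman-style gain
    x = x + K * (z - h @ x)            # state update
    beta = sigma / (1.0 + np.sqrt(r * sigma))
    S = S - beta * np.outer(S @ v, v)  # square-root covariance update
    return x, S
```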
In this paper, a new state and parameter estimation method is introduced based on the particle filter (PF) and the sliding innovation filter (SIF). The PF is a popular estimation method which makes use of distributed point masses to form an approximation of the probability density function (PDF). The SIF is a relatively new estimation strategy based on sliding mode concepts, formulated in a predictor-corrector format. It has been shown to be very robust to modeling errors and uncertainties. The combined method (PF-SIF) utilizes the estimates and state error covariance of the SIF to formulate the proposal distribution which generates the particles used by the PF. The PF-SIF method is applied to a nonlinear target tracking problem, where the results are compared with other popular estimation methods.
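A hedged sketch of the PF-SIF idea as described above: the SIF estimate and covariance define the proposal density from which particles are drawn, with importance weights correcting for the mismatch between proposal, transition prior, and likelihood. The `sif_filter` helper and the evaluation of the transition prior about the previous posterior mean are simplifying assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def pf_sif_step(particles, weights, z, f, h, Q, R, sif_filter):
    """One sketched PF-SIF cycle. f, h: process and measurement functions;
    Q, R: noise covariances; sif_filter(x_prev_mean, z) -> (x_sif, P_sif)
    is assumed to run a SIF predict/update from the previous posterior mean."""
    # 1) SIF pass provides the proposal distribution q(x) = N(x_sif, P_sif).
    x_prev_mean = np.average(particles, axis=0, weights=weights)
    x_sif, P_sif = sif_filter(x_prev_mean, z)

    # 2) Draw new particles from the SIF-based proposal.
    new_particles = mvn.rvs(mean=x_sif, cov=P_sif, size=len(particles))

    # 3) Importance weights: likelihood * transition prior / proposal
    #    (simplified: the prior is evaluated about the previous mean).
    new_weights = np.empty(len(particles))
    for i, xp in enumerate(new_particles):
        lik = mvn.pdf(z, mean=h(xp), cov=R)
        prior = mvn.pdf(xp, mean=f(x_prev_mean), cov=Q)
        prop = mvn.pdf(xp, mean=x_sif, cov=P_sif)
        new_weights[i] = lik * prior / prop
    new_weights /= new_weights.sum()
    return new_particles, new_weights
```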
The tracking and state estimation community is broad, with diverse interests. These range from algorithmic research and development, through applications that solve specific problems, to systems integration. Yet until recently, in contrast to similar communities, few tools for common development and testing were widespread. This was the motivation for the development of Stone Soup - the open source tracking and state estimation framework. The goal of Stone Soup is to conceive the solution of any tracking problem as a machine. This machine is built from components of varying degrees of sophistication for a particular purpose. The encapsulated nature and modularity of these components allow efficiency and reuse. Metrics give confidence in evaluation. The open nature of the code promotes collaboration. In April 2019, the initial Stone Soup beta version (v0.1b) was released, and though development continues apace, the framework is stable, versioned and subject to review. In this paper, we summarise the key features of and enhancements to Stone Soup - much advanced since the original beta release - and highlight several uses to which Stone Soup has been applied. These include a drone data fusion challenge, sensor management, target classification, and multi-object tracking in video using TensorFlow object detection. We also detail introductory and tutorial information of interest to a new user.
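A minimal sketch of the "machine built from components" idea using Stone Soup's modular classes; module paths follow recent releases and may differ between versions, and the parameter values are purely illustrative.

```python
# Minimal illustration of Stone Soup's component-based design.
import numpy as np
from stonesoup.models.transition.linear import (
    CombinedLinearGaussianTransitionModel, ConstantVelocity)
from stonesoup.models.measurement.linear import LinearGaussian
from stonesoup.predictor.kalman import KalmanPredictor
from stonesoup.updater.kalman import KalmanUpdater
from stonesoup.hypothesiser.distance import DistanceHypothesiser
from stonesoup.measures import Mahalanobis
from stonesoup.dataassociator.neighbour import GNNWith2DAssignment

# Each component is built for one purpose and can be swapped independently.
transition_model = CombinedLinearGaussianTransitionModel(
    [ConstantVelocity(0.05), ConstantVelocity(0.05)])
measurement_model = LinearGaussian(ndim_state=4, mapping=(0, 2),
                                   noise_covar=np.diag([0.25, 0.25]))
predictor = KalmanPredictor(transition_model)
updater = KalmanUpdater(measurement_model)
hypothesiser = DistanceHypothesiser(predictor, updater,
                                    measure=Mahalanobis(), missed_distance=3)
data_associator = GNNWith2DAssignment(hypothesiser)
# These components would then be handed to a tracker (with an initiator and
# deleter) to form the complete tracking "machine".
```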
The 3D trajectory estimation and observability problems of a target have previously been solved using angle-only measurements. In those works, the measurements were obtained in the thrusting/ballistic phase from a single fixed passive sensor. The present work solves the motion parameter estimation of a ballistic target in the reentry phase from a moving passive sensor on a fast aircraft. This is done with a 7-dimensional motion parameter vector (velocity azimuth angle, velocity elevation angle, drag coefficient, target speed and 3D position). The maximum likelihood (ML) estimator is used for the motion parameter estimation at the end of the observation interval. We can then predict the future position at an arbitrary time and the impact point of the target. The observability of the system is verified numerically via the invertibility of the Fisher information matrix. The Cramér–Rao lower bound for the estimated parameter vector is evaluated, and it shows that the estimates are statistically efficient. Simulation results show complete observability for the scenario considered, which illustrates that a single fast-moving sensor platform can estimate the target's motion parameters in the reentry phase.
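As a sketch of the observability and efficiency check described above (assuming an additive-Gaussian angle measurement model and finite-difference Jacobians, not the authors' code): the Fisher information matrix is accumulated over the observation interval, its invertibility indicates observability, and its inverse gives the CRLB.

```python
import numpy as np

def fisher_information(theta, h_funcs, R):
    """Sketch: FIM for a deterministic parameter vector theta observed
    through measurements z_k = h_k(theta) + w_k, w_k ~ N(0, R).
    J = sum_k H_k^T R^{-1} H_k, with H_k the Jacobian of h_k obtained by
    central differences."""
    n = len(theta)
    Rinv = np.linalg.inv(R)
    J = np.zeros((n, n))
    eps = 1e-6
    for h in h_funcs:
        m = len(h(theta))
        H = np.zeros((m, n))
        for j in range(n):
            d = np.zeros(n)
            d[j] = eps
            H[:, j] = (h(theta + d) - h(theta - d)) / (2 * eps)
        J += H.T @ Rinv @ H
    return J

# Usage: J = fisher_information(theta_true, h_funcs, R)
# observable = np.linalg.cond(J) < 1e12   # invertibility check
# crlb = np.linalg.inv(J)                 # lower bound on estimate covariance
```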
Multisensor Fusion, Multitarget Tracking, and Resource Management II
This paper addresses one of the key requirements for a successful terminal phase defense intercept, namely the ability to discriminate between the reentry vehicle (RV) and decoys using space-based infrared (IR) sensors. In the terminal phase, light objects (decoys) slow down faster due to atmospheric drag and follow substantially different trajectories than heavy objects (RVs). Therefore, the targets' velocity information is used to differentiate between the RV and decoy trajectories within a validating time window. The evaluation of the corresponding Cramér-Rao Lower Bound (CRLB) on the covariance of the estimates and the statistical tests on the results of simulations show that this method is statistically efficient.
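One standard way to verify claimed statistical efficiency is the normalized estimation error squared (NEES) chi-square test sketched below; this is the generic consistency check, offered only as an illustration of the kind of statistical test involved.

```python
import numpy as np
from scipy.stats import chi2

def nees_consistency(x_true, x_est, P_est, alpha=0.05):
    """Sketch of the standard NEES test: over N Monte Carlo runs the average
    normalized estimation error squared should fall inside a chi-square
    acceptance interval with N*n degrees of freedom.
    x_true, x_est: (N, n) arrays; P_est: (N, n, n) covariances."""
    N, n = x_est.shape
    errs = x_true - x_est
    nees = np.array([e @ np.linalg.solve(P, e) for e, P in zip(errs, P_est)])
    avg = nees.mean()
    lo = chi2.ppf(alpha / 2, N * n) / N
    hi = chi2.ppf(1 - alpha / 2, N * n) / N
    return avg, (lo <= avg <= hi)   # efficient if avg lies in [lo, hi]
```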
The sliding innovation filter (SIF) is a newly developed filter that may be applied to both linear and nonlinear systems. The SIF shares similar principles with sliding mode observers (SMO) and other variable structure filters such as the smooth variable structure filter (SVSF). The SIF utilizes the true trajectory as a hyperplane and forces the estimates to stay within a region of the hyperplane through the use of a discontinuous correction gain. In this paper, the SIF is applied to the well-known complex road estimation problem with a nonlinear system function. The results of the application are compared with the SVSF, and future work is discussed.
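A minimal sketch of the SIF correction described above, assuming a linear measurement matrix H and a boundary-layer width delta: the gain combines the pseudoinverse of H with an element-wise saturation of the normalized innovation, which is what keeps the estimate near the true-state hyperplane.

```python
import numpy as np

def sif_update(x_pred, z, H, delta):
    """Sketch of the SIF correction step (linear measurement case).
    delta is the sliding boundary-layer width; the saturation makes the
    gain discontinuous outside the boundary layer."""
    innov = z - H @ x_pred
    sat = np.clip(np.abs(innov) / delta, 0.0, 1.0)   # element-wise saturation
    K = np.linalg.pinv(H) @ np.diag(sat)             # SIF correction gain
    return x_pred + K @ innov
```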
In this paper, the newly developed sliding innovation filter (SIF) is reformulated to enable the extraction of hidden states. This is accomplished by using the well-known Luenberger technique, which is commonly used in observer design. The SIF is applied to a linear system that has fewer measurements than states. The results show that the proposed filter extracts the hidden state with a small RMSE, as low as 0.1, and a small MAE, as low as 1.
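A hedged sketch of the underlying Luenberger idea for reconstructing hidden states from fewer measurements than states (discrete-time form, with the gain found by pole placement on the dual system); this illustrates the observer structure, not the authors' SIF reformulation.

```python
import numpy as np
from scipy.signal import place_poles

def design_luenberger(A, C, poles):
    """Observer gain L via pole placement on the dual system, so that
    the error dynamics A - L @ C have the desired poles."""
    return place_poles(A.T, C.T, poles).gain_matrix.T

def observer_step(x_hat, u, y, A, B, C, L):
    """x_hat[k+1] = A x_hat + B u + L (y - C x_hat): the correction term
    drives the estimate of unmeasured (hidden) states using measured outputs."""
    return A @ x_hat + B @ u + L @ (y - C @ x_hat)
```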
In modern robotics and automation systems, control and estimation techniques are essential tasks. In this paper, the mathematical model of a non-linear RR manipulator is developed. To realize it, the circuit design of the system is first described in detail and then implemented on a field-programmable gate array (FPGA) prototyping board. The results show that the implementation of the system requires a minimal amount of FPGA resources.
Information Fusion Methodologies and Applications I
Fusing sensors that each test one hypothesis from a set of ambiguous, non-exclusive hypotheses about media manipulation using only combination rules like those in Dempster-Shafer fusion ignores information about the overlap of the sensors' beliefs, hypotheses, and evidence. We present a novel fusion approach for sensors that test ambiguous hypotheses using semantic evidence. Our approach measures the relevance of sensors to a given hypothesis and leverages the ambiguity of hypotheses and the similarity of evidence across sensors. These factors lead to a combination rule that captures the conceptual overlaps among hypotheses, evidence, and sensors.
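For reference, the baseline being improved upon is Dempster's rule of combination, sketched below for two mass functions over frozenset-valued hypotheses; the semantic relevance and overlap measures proposed in the paper are not shown.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Baseline Dempster's rule of combination. m1, m2 map frozenset
    hypotheses to mass; returns the normalized combined mass function."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example with two sensors over hypotheses {manipulated, authentic}:
m1 = {frozenset({'manipulated'}): 0.6,
      frozenset({'manipulated', 'authentic'}): 0.4}
m2 = {frozenset({'authentic'}): 0.3,
      frozenset({'manipulated', 'authentic'}): 0.7}
print(dempster_combine(m1, m2))
```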
In a multi-level (coarse-to-fine) classifier, decisions along with their estimated uncertainties can be reported at any or all levels. We define the concept of the "best" level from the standpoint of a user who must make a single class call and who seeks to maximize specificity while minimizing the cost of making a mistake.
Information Fusion Methodologies and Applications II
With the number of sensors constantly increasing, there is a great need for automating the processing of sensor data in order to reduce cognitive load and response time for manned systems and enable greater autonomy in unmanned systems. It is anticipated that the unprecedented access to sensor data (both in volume and variety) will lead to reduced false alarm rates and increased probability of detection of threats and targets. Effectively, this capability will support situational awareness and facilitate mission success. However, current signal and image processing systems largely ignore the scene context, which hinders their performance. In this paper, we describe a machine learning- and semantic reasoning-based system for target detection which incorporates this context. It combines state-of-the-art image and signal processing capability with leading-edge logic-based semantic reasoning technology. The main focus of this paper is on the value added by the semantic reasoning to machine learning.
Tactical operations like search and rescue or surveillance necessitate the rapid synthesis of physically dispersed assets and mobile compute nodes into a network capable of efficient and reliable information gathering, dissemination, and processing. We formalize this network synthesis problem as selecting one among a set of potentially deployable networks which optimally supports the distributed execution of complex applications. We present the NSDC (network synthesis for dispersed computing) framework, a general framework for studying this type of problem, and use it to provide a solution for one well-motivated variant. We discuss how the framework can be extended to support other objectives, parameters, and constraints, as well as more scalable solution approaches.
Artificial Intelligence/Deep Learning (AI/DL) techniques are based on learning a model from large available data sets. The data sets typically come from a single modality (e.g., imagery) and hence the model is based on a single modality. Likewise, multiple models are each built for a common scenario (e.g., video and natural language processing of text describing the situation). There are issues of robustness, efficiency, and explainability that need to be addressed. A second modality can improve efficiency (e.g., cueing), robustness (e.g., results cannot be fooled as easily by adversarial systems), and explainability from different sources. The challenge is how to organize the data needed for joint data training and model building. For example, what is needed is (1) structure for indexing data as an object file, (2) recording of metadata for effective correlation, and (3) supporting methods of analysis for model interpretability for users. The panel presents a variety of questions and responses discussed, explored, and analyzed for data-fusion-based AI tools.
The tactical edge, with its complicated electromagnetic environment, is a very important part of defense operations. In general, it contains a mix of friendly and adversarial radio frequency signal sources. A method for distinguishing the signals in the tactical arena would be very useful for telling blue and red teams apart. Functional data analysis (FDA) methods offer a promising approach to finding their underlying signatures. FDA comprises techniques for understanding and analyzing large and complex datasets with hidden underlying properties. It is particularly useful in situations in which one records data continuously during a time interval or intermittently at several discrete time points. It can also uncover nonlinear functional dependence hidden in such data.
In the current work, we use FDA techniques to uncover the hidden continuous functions in the noisy field data. The measured data are the result of the combination of the signal and noise introduced by solar, atmospheric, and other electromagnetic sources present in the surroundings. The report consists of the general theory behind FDA (Section 2), the steps in the analysis of the field data (Section 3), and numerical results (Section 4). Finally, in Section 5 we summarize the results and point out the next steps.
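As an illustration of the FDA smoothing step, a roughness-penalized spline can recover a smooth underlying function from a noisy recording; the synthetic signal and smoothing parameter below are assumptions for demonstration only, not the authors' field data or settings.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Treat each noisy recording as discrete samples of an underlying smooth
# function and recover it with a roughness-penalized spline smoother.
t = np.linspace(0.0, 1.0, 500)                        # sample times
signal = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)   # hidden smooth function
y = signal + 0.3 * np.random.default_rng(0).normal(size=t.size)  # noisy data

smooth = UnivariateSpline(t, y, k=3, s=len(t) * 0.09)  # s controls smoothness
recovered = smooth(t)                 # estimate of the continuous function
derivative = smooth.derivative()(t)   # FDA also gives access to derivatives
```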
Information Fusion Methodologies and Applications III
The data fusion information group (DFIG) model is widely popular, extending and replacing the Joint Directors of Laboratories (JDL) model as a data fusion processing framework that considers data/information exchange, user/team involvement, and mission/task design. The DFIG/JDL provides an initial design from which enhancements in analytics, learning, and teaming result in opportunities to improve data fusion methodologies. This paper compares recent artificial intelligence/machine learning (AI/ML), deep learning, reinforcement learning, and active learning capabilities with the DFIG model for analysis and systems engineering designs. The general DFIG construct is applicable to many AI/ML systems; however, the paper focuses on useful considerations for the data fusion community based on previously implemented approaches. The main ideas are: level 0 DFIG data preprocessing through AI/ML methods for data reduction; level 1/2/3 DFIG object/situation/impact assessment using AI/ML/DL methods for awareness; level 4 DFIG process refinement with reinforcement learning for control; and level 5/6 DFIG user/mission refinement with active learning for human-machine teaming.
Information Fusion Methodologies and Applications IV
Incidents related to insider threats are steadily increasing, especially technology thefts.1 Dr. Larry Ponemon wrote, "insider threats are not viewed as seriously as external threats, like a cyber-attack. But when companies had an insider threat, in general, they were much more costly than external incidents. This was largely because the insider that is smart has the skills to hide the crime, for months, for years, sometimes forever."2 Insider threat is a relatively rare occurrence, often perpetrated by revenge-seeking employees with a grievance against their employer. The Capitol assault was an eye-opening event in that it clearly demonstrated that insiders, in this case military and ex-military, were willing to engage in violence against the government. Detection of insider threat is a difficult problem, as data is limited and the factors surrounding insider threat are highly contextual. Previous research tends to focus on theoretical perspectives and threat mitigation, frequently emphasizing cyber-technical indicators of insider threat rather than focusing on the human behind the screen.3 More behavior-focused research has identified several psychosocial, individual-level risk factors for insider threat (e.g., disgruntlement, poor work performance, etc.) or has conducted personality assessments on known insiders post hoc.4,5 However, future directions in this line of research need to address early detection of mobilization as part of the defensive strategy against these threats. Subsequently, this paper summarizes the state of the knowledge in terms of research on behavioral factors and approaches for insider threat detection, highlighting methods for assessing social-cyber information to enable early detection of insider threat.
Developed within the framework of the ESTIA research and innovation project, the ESTIA platform is a versatile technological solution that allows the prediction, detection and management of incidents related to the risk of structural fires within cultural heritage (CH) settlements and sites. The ESTIA platform is a distributed system that consists of collaborating autonomous subsystems, ensuring a broad range of applications through the platform's potential to adapt to each deployment's specific needs and according to the requirements of the targeted end-users.
By incorporating advanced procedures for the semi-automatic digitization of the CH built environment, as well as an advanced system that simulates the development of the complex phenomena of fire propagation and human crowd behaviour, the platform is an effective tool that assists competent authorities in assessing fire-related risks and offers training to first responders and field officers. Additionally, the platform offers an effective fire incident management system that includes fire-detection capabilities and a specialised decision support system, supporting authorities in the management of a developing fire incident.
The ESTIA solution was validated against the requirements of its distinct use cases, including the use of the platform as a simulation-based fire risk assessment tool. Subsequently, an exploratory risk assessment study was conducted, aiming to establish and demonstrate a methodology for performing risk assessment studies using the provided technological solution.
This paper presents (i) the use of the ESTIA platform as a tool for the conduct of simulation-based assessment of risk related to fire incidents within a CH environment, (ii) the methodology for the technological validation of the ESTIA platform as a simulation-based fire risk assessment tool and (iii) the methodology for the conduct of the exploratory risk assessment study in the historical center of Xanthi (Greece).
NARRATION is a platform initially intended to digitalize the process of curation of digital artefacts and the creation of digital and hybrid exhibits. In this paper we present the platform and demonstrate how it can be used in the context of scenario creation required in vulnerability and risk assessment software tools. It is shown how the platform can be integrated with the wayGoo georeference platform to facilitate the creation of spatio-temporal scenarios.
The iCrowd human and crowd behavior simulator provides an integrated platform for simulating crowd behavior alongside the simulation of physical phenomena and their interaction with, and impact on, the behavior of humans, including cognitive and psychological aspects and information exchange. The iCrowd simulation platform has been applied to various complex scenarios, including evacuation of people from buildings and outdoor environments in case of fire, simulation and testing of risk-based security strategies and protocols, anomaly detection in security-sensitive environments based on human tracks, and others. Recently, iCrowd has become available in the form of a Simulation-as-a-Service (SaaS) environment implemented on virtual machines (VMs) and accessed remotely through secure HTTPS connections. The SaaS iCrowd environment has been tested by qualified end-users in designing and evaluating the performance of risk-based security strategies for border crossing, evaluating the performance of novel biometrics, and simulating crowd evacuation in an urban environment on the scale of a mid-size town with photorealistic modeling of the simulated environment. The feedback from end-users of the iCrowd SaaS environment has been very encouraging, and limited training was required to familiarize end-users with the use of the simulator. In this paper we report on the different use cases in which the iCrowd SaaS was used to train different end-user groups and on the evaluation results from this training in terms of learning and performance objectives.
Signal and Image Processing, and Information Fusion Applications I
Sensor data fusion has significant potential for advancing discovery, processing, and inspection of engineering materials. The paper reviews recent developments in data fusion with respect to materials inspection, highlights potential areas for growth, and shows results from an application of matching component analysis (MCA). The main contributions of the paper include an analysis of current fusion methods to uncover challenges and opportunities with respect to two inspection modalities (scanning acoustic microscopy and eddy current testing), and an extension of MCA, which was previously developed for other image modalities. The presentation of MCA highlights its benefits as a baseline method for SAM-EC fusion using the Multi-Scale Mixed Modality Microstructure Titanium Assessment Characterization (M4TAC) challenge dataset. Example results are presented along with current motivations for enhancements.
A variety of neural network architectures are being studied to tackle blur in images and videos caused by an unsteady camera and by motion of the objects being captured. In this paper, we present an overview of these existing networks and perform experiments to remove the blur caused by atmospheric turbulence. Our experiments aim to examine the reusability of existing networks and identify desirable aspects of the architecture in a system that is geared specifically towards atmospheric turbulence mitigation. We compare five different architectures, including a network trained in an end-to-end fashion, thereby removing the need for a stabilization step.
We investigate the relationship between paired SAR and optical images. SAR sensors can penetrate clouds and capture data at night, whereas optical sensors cannot. We are interested in the case where we have access to both modalities during training, but only the SAR imagery at test time. To that end, we developed a framework that takes a SAR image as input and predicts a Canny edge map of the optical image, which retains structural information while removing superfluous details. Our experiments show that by additionally using this predicted edge map for downstream tasks, we can outperform the same model using only the SAR image.
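A small sketch of how such an edge-map training target could be produced from the co-registered optical image with OpenCV's Canny detector; the thresholds and helper name are illustrative, not the authors' settings.

```python
import cv2
import numpy as np

def make_edge_target(optical_bgr, low=100, high=200):
    """Build a structural training target: a Canny edge map of the optical
    image, keeping structure while discarding superfluous detail."""
    gray = cv2.cvtColor(optical_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)          # uint8 edge map (0 or 255)
    return (edges > 0).astype(np.float32)       # binary target for training
```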
The paper proposes an algorithm for forming stable features and identifying the boundaries of objects in the analysis of scenes recorded in the infrared and optical ranges. The proposed approach is sequential and includes the following steps. First, the boundaries of objects captured in the visible range are analyzed. For images obtained in the far-infrared range (thermal imaging), a method of decreasing the dimensionality of the cluster spread is used, which makes it possible to reduce the data bit range while preserving the boundaries of objects; the ranges are reduced by analyzing histograms of color gradients and absorbing rarely occurring blocks into neighboring ones. Object masks and closed contours are then created. Next, the intersections of the object boundaries (visible range) and the formed masks (thermal images) are found, the areas where boundaries and objects cross are formed, and an average line is sought between each pair of generated boundaries. Finally, the resulting curve is plotted and any gaps between curves are closed; the criterion for breaking the curve is the distance between paired elements (a length of 3 pixels), and the connecting line is constructed from the edges of the formed boundaries by straight-line interpolation. Pairs of test images obtained by sensors with resolutions of 1024x768 (8-bit, color, visible range) and 320x240 (8-bit, color, thermal) are used as test data to assess the effectiveness. Images of simple shapes are used as the analyzed objects.
Signal and Image Processing, and Information Fusion Applications II
Recently, exercise analysis has gained strong interest in the sport industry, including from athletes and coaches, as a way to understand and improve performance and to prevent injuries caused by incorrect exercise form. This work describes a system, USquat, that utilizes computer vision and machine learning for understanding and analyzing a particular exercise, squatting, as a proof of concept for the creation of a detection-and-correction exercise assistant. Squatting was chosen because it is a complicated form of exercise and is often performed incorrectly. For USquat, a recurrent neural network is designed using Convolutional Neural Networks and Long Short-Term Memory (LSTM) networks. A sizable video library dataset containing numerous "bad" forms of squatting was created and curated to train the USquat system. When evaluated on test data, USquat achieved 90% accuracy on average. A developed Android application that uses the resulting model, upon detection of a "bad" squatting form, offers an instructive "good" video related specifically to the user's bad form. Results, including live application outcomes, are demonstrated, as well as challenging video results, problems, and areas of future work. Early work is also presented on a follow-on system to USquat that automatically generates a custom video of the user performing the correct action, for the purpose of learning proper activity performance. Additionally, early work on a different version of USquat that explores an attention-mechanism network is discussed.
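A hedged sketch of a CNN + LSTM video classifier in the spirit described above; the layer sizes, input shape, and binary good/bad output are illustrative assumptions, not the authors' exact USquat model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_squat_model(frames=30, height=112, width=112, channels=3):
    model = models.Sequential([
        # Per-frame spatial features via a small CNN applied to every frame.
        layers.TimeDistributed(layers.Conv2D(32, 3, activation='relu'),
                               input_shape=(frames, height, width, channels)),
        layers.TimeDistributed(layers.MaxPooling2D(2)),
        layers.TimeDistributed(layers.Conv2D(64, 3, activation='relu')),
        layers.TimeDistributed(layers.GlobalAveragePooling2D()),
        # Temporal modelling of the squat movement across frames.
        layers.LSTM(64),
        layers.Dense(1, activation='sigmoid'),  # good vs. bad form
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
```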
Mobile devices have distinct RF fingerprints, which are reflected by changes in the frequency of transmitted signals. The Short-Time Fourier Transform (STFT) is a suitable technique for evaluating this frequency content and thus identifying them. In this paper, we take advantage of STFT processing and perform room-level location classification. The raw in-phase and quadrature (IQ) signals and channel state information (CSI) frames have been collected using seven different cell phones. The data collection process was performed in eight different locations on the same floor of our engineering building, which contains indoor hallways and rooms of different sizes. Three software-defined radios (SDRs) are placed in three different locations to receive signals simultaneously but separately. The IQ and CSI frames have been concatenated together for training a neural network. A Multi-Layer Perceptron (MLP) network has been trained with the concatenated signals as input and their corresponding locations as labels. A challenging aspect is that our dataset does not contain the same number of samples per location. Moreover, several locations have insufficient training data due to signal attenuation. An imbalanced-learning method has been implemented on this dataset to overcome this limitation and improve the classification accuracy. The classification strategy involves one-vs-rest binary classification, i.e., each individual location versus the others. Using this approach, we obtain a mean accuracy of around 95%.
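A compact sketch of the described pipeline under stated assumptions: magnitude STFT features from IQ captures, SMOTE as one possible imbalanced-learning remedy (the paper's specific method is not stated), and a one-vs-rest MLP for location-versus-others classification.

```python
import numpy as np
from scipy.signal import stft
from imblearn.over_sampling import SMOTE          # one possible imbalance remedy
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier

def stft_features(iq, fs, nperseg=256):
    """Magnitude STFT of a complex IQ capture, flattened into a feature vector."""
    _, _, Z = stft(iq, fs=fs, nperseg=nperseg, return_onesided=False)
    return np.abs(Z).ravel()

def train_location_classifier(captures, labels, fs):
    X = np.stack([stft_features(x, fs) for x in captures])
    X_bal, y_bal = SMOTE().fit_resample(X, labels)        # rebalance classes
    clf = OneVsRestClassifier(MLPClassifier(hidden_layer_sizes=(128, 64),
                                            max_iter=300))
    return clf.fit(X_bal, y_bal)                           # location-vs-rest models
```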
As complexity and diversity of military assets increase, ensuring that military forces are well-trained becomes complex and costly when relying on traditional classroom instruction. Human instructors bear the burden of manually creating training datasets with tools that are often not geared to their missions, tasks, and objectives. Further, analyzing how well students learned their tasks often requires collecting and managing student performance over time, which may not be feasible in time-critical situations, and may consume instructor time and attention that could be spent facilitating learning. In addition, while one-to-one human tutoring has proven to be effective, it is costly and impractical to provide in every task domain. We present Multi-task Adaptive Learning Tutor (MALT), a concept for an intelligent tutoring system (ITS) for the psychomotor domain that flexibly responds to a user’s current tasking, information needs, and cognitive ability to interpret information. As the user performs a series of complex psychomotor sub-tasks drawn from flight procedures implemented in a simulator, MALT will learn to predict which features contributed most to their performance. In a proof-of-concept study, we trained MALT using data collected from pilots, ranging from new student pilots to Certified Flight Instructors, while performing different flight procedures. This paper presents the MALT concept, and methods and results associated with the proof-of-concept study focused on MALT’s diagnostic capability. We believe MALT to be among the first to expand the ITS beyond traditional cognitive tasks such as problem solving to include complex psychomotor tasks.
Signal and Image Processing, and Information Fusion Applications III
IoT has emerged as a method for cloud-enabled data sharing by connecting everyday objects to the internet. By further interconnecting data transmission, IoT sensors create a network of communication among objects, sensed data, and users. This work uses an IoT development board equipped with a microcontroller to perform sensor data collection, fusion, and processing to assess the motion, flexibility, and improvements of the human knee, toward the development and enhancement of a wearable sensor device. The signals are collected through simulated movements and processed through signal processing algorithms to record and analyze data that can then be used for potential therapeutic applications. To characterize the motion and its effect on the user, the three sensors targeted include an inertial measurement unit (IMU), pressure, and temperature. In this paper we demonstrate an Azure cloud-based IoT environment as well as sensor data collection and fusion from simulated knee joint motion, temperature, and location change.
The study of brain activity changes caused by physiological or other conditions, such as aging, is crucial not only to understand brain dynamics but also to identify those changes and distinguish subject groups. In this work, we apply a sliding window technique to energy landscape analysis to explore temporal signatures of the seven major resting-state networks, namely, the default mode (DMN), frontal-parietal (FPN), salience (SAN), attention (ATN), sensory-motor (SMN), visual (VIS) and auditory (AUD) networks. The dataset used for this study consists of 23 young adult and 47 old adult subjects with normal cognitive function. To study the dynamic behavior of the brain, we applied the sliding window technique to the time courses of the obtained fMRI data. With 90-second windows and 4-second shifts from a total 180-second time course, we obtain 24 windows of temporal energy landscape information, which is presented as a matrix of the energies of all possible connectivity states versus the sequence of sliding windows. A heat map was displayed using this matrix to examine the energy transitions of these states. We found that a few bands of connectivity states consistently have low energies among the different groups of subjects. One observation was that the states in these bands are only one or two Hamming distances away from each other, which means these connectivity states with consistently low energy values are close in terms of region-of-interest (ROI) connectivity. Also, SAN and ATN were working synchronously for both young and old subjects in all these bands. In summary, we use the sliding window technique with energy landscape analysis to uncover brain state dynamics for old and young subjects.
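A minimal sketch of the windowing and the energy computation under the pairwise maximum-entropy model used in energy landscape analysis; the sampling rate and the fitting of the model parameters (h, J) are assumptions left out of the sketch.

```python
import numpy as np
from itertools import product

def sliding_windows(ts, fs=0.5, win_s=90, step_s=4):
    """Cut ROI time courses (n_ROIs x n_samples) into 90-s windows shifted
    by 4 s. fs is the sampling rate in Hz (assumed value)."""
    win, step = int(win_s * fs), int(step_s * fs)
    return [ts[:, s:s + win] for s in range(0, ts.shape[1] - win + 1, step)]

def state_energies(h, J):
    """Energies of all 2^N binary activity patterns under the pairwise
    maximum-entropy model: E(sigma) = -sum_i h_i sigma_i
    - sum_{i<j} J_ij sigma_i sigma_j (J symmetric, zero diagonal)."""
    N = len(h)
    energies = {}
    for sigma in product([-1, 1], repeat=N):
        s = np.array(sigma)
        energies[sigma] = -h @ s - 0.5 * s @ J @ s
    return energies

# Per window: binarize each ROI about its mean, fit (h, J) by maximum
# likelihood (fit not shown), then tabulate energies to build the heat map.
```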
In recent years, the efficient design of Wireless Sensor Networks (WSN) has become one of the fundamental fields of research. These networks are made up of a large number of sensor nodes characterized by a limited amount of resources, so the energy aspect is a key point in their development. MAC protocols play an important role in energy management, trying to minimize physical-layer overhead by reducing the energy wasted in preamble transmission. In this paper, we have implemented two different MAC protocols on various WSN scenarios using the OMNeT++ simulator in combination with the INET framework. Our experiments show a comparison between the two MAC protocols in terms of received packets and power consumption among network nodes.
Signal and Image Processing, and Information Fusion Applications IV
Segmenting the human brain into networks has been a useful approach in analyzing functional connectivity. Brain network bundling can determine which regions are engaged and whether they are working together. The thalamus (THL) and basal ganglia (BSL) regions in the subcortical network are linked to multiple cortical areas due to their roles in the neural circuitry outlined in the cortico-basal ganglia-thalamo-cortical map. Here we explore their coupling with the default mode network (DMN), frontoparietal network (FPN), salience network (SAN), attention network (ATN), sensorimotor network (SSM), visual network (VIS), and auditory network (AUD) using the energy landscape technique. Energy landscape analysis helps identify the statistical differences in functional behaviors between the healthy control and patient groups, which are obtained from the fMRI activity time courses of the 9 networks. In this work, we focused on studying 107 schizophrenic patients and 86 healthy controls and obtained the constructed activity patterns and disconnectivity graphs of each subject. The differences between the two groups are compared. The results from bundling THL and BSL with the DMN, FPN, SAN, ATN, SSM, VIS, and AUD show that these regions are more strongly coupled in controls than in patients. After performing energy calculations and heat map generation, we observed several lower-energy band states that are common among all control and patient subjects. The potential implications of these common band states are discussed.
This paper introduces principal component signature design for correlation-and-bit-aware embedding. Simulation studies show superior performance in terms of bit error rate for L1-norm and L2-norm based signatures versus arbitrary signatures.
The surface grinding of critical parts is the most important operation, which largely determines the product's surface properties and quality. In the context of automated production, efficient monitoring of this operation is a critically important task. In this work, we propose a new approach to the monitoring of grinding: to simulate the processes generating vibro-acoustic signals during grinding and to divide the working grains of the grinding wheel into sharp grains and grains of low cutting ability. This division allows qualitative prediction of changes in the nature of the vibro-acoustic signals accompanying grinding in different operational conditions, such as dry grinding and grinding with coolants, and with the wear of the grinding wheel. The conclusions obtained from the phenomenological modeling are confirmed by experimental studies showing that the vibration signal parameters adequately reflect the current state of the technological process and the wear of the grinding wheel. In this work, a new indicator for monitoring the grinding of products with high requirements on the quality of the machined surface was identified and evaluated. The proposed approach is shown to yield a more informative diagnostic indicator for the finishing process than measurements of cutting forces, which are insufficiently effective in the case of finishing operations with minimum allowance. The indicator was found to be effective in the case of grinding of surfaces with roughness smaller than 0.4 μm. The relevance of this indicator has been evaluated and proven in a rigid grinding wheel-part-reference system, the use of which minimizes the probability of error.
In this work, the newly developed filtering technique referred to as the sliding innovation filter (SIF) is combined with multiple model strategies to enhance the performance of the filter when the system changes its structure and/or parameters. This is particularly useful when a system, such as an aerospace system, experiences a fault and continued operation is critical. The proposed method is tested on an aerospace actuator system and the results are discussed.
In this paper, the unscented Kalman filter (UKF) is used to estimate the states of a vehicle while it is moving on a road at different speeds. A field-programmable gate array (FPGA) prototyping board is used to implement the filter. The FPGA resources are optimized using different techniques. The overall filter performance is examined in further detail.
As Industry 4.0 evolves with the abundance of data, networking capabilities and new computing technologies, manufacturers are looking for ways to exploit this revolution. The demands of machine tools and their feed drive systems require manufacturers to optimally plan and schedule maintenance actions to minimize costs. These actions can be supplemented by capitalizing on machine data and the idea of cyber-physical systems, with the use of edge and cloud computing, by monitoring important machine characteristics. A substantial benefit to manufacturers would be the ability to monitor the health characteristics of machine tools to aid them in their maintenance planning. Some of the challenges manufacturers face with this are the computing time and effort needed to analyze and evaluate the vast amount of machine data available. A step towards real-time condition monitoring of machine characteristics includes rapid parameter estimation of CNC machine tool systems. The estimation of mass and friction allows for the monitoring of CNC feed drive health. This work proposes the estimation of such parameters from real-world industrial machine tool data. A feed drive testing procedure is developed for smart data acquisition. Data analysis and recursive least squares methods are used to extract key parameters representative of machine health that are realizable on edge computing devices.
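A hedged sketch of the recursive least squares update for such parameters, assuming an illustrative feed-drive regression model u = m·a + b·v + c·sign(v) (mass, viscous and Coulomb friction); the actual model structure and signals used in the paper may differ.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.995):
    """Recursive least squares with forgetting factor lam.
    Assumed regressor phi = [a, v, sign(v)], parameters theta = [m, b, c],
    and y = measured motor force/torque u (illustrative model)."""
    K = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + K * (y - phi @ theta)     # parameter update
    P = (P - np.outer(K, phi @ P)) / lam      # covariance update
    return theta, P

# Typical initialisation: theta = np.zeros(3), P = 1e3 * np.eye(3),
# then call rls_update once per sample of (acceleration, velocity, force).
```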
The SAFETY4RAILS2 project delivers methods and systems to increase the safety and recovery of track-based inter-city railway and intra-city metro transportation. When an incident occurs during heavy usage, metro and railway operators have to consider many aspects to ensure passenger safety and security. The EU-funded project SAFETY4RAILS aims to improve the handling of such events through a holistic approach by combining a wide range of analytic tools to detect, prevent, mitigate and respond to cyber-physical attacks on railway networks. In the context of assessing the impact on the crowd inside the rail/metro station and its surroundings from a cyberattack against a rail/metro infrastructure, and evaluating the effectiveness of mitigation measures in case of an attack, the iCrowd simulation platform is used in conjunction with external modules that simulate the cyber-physical attacks. This paper reports the results and lessons learnt from these simulations and provides insight into mitigation measures that may be necessary to reduce infrastructure vulnerabilities under different cyber-physical attack scenarios for several different rail/metro infrastructures.
Multi-biometrics have long been considered a means of providing irrefutable identification of a person, the different modalities involved providing the complementarity necessary to reduce false positives and increase correct identifications. Furthermore, non-contact biometrics have been considered essential for achieving identification on the fly without imposing unnecessary delays and inconvenience when checking one's ID. In the context of D4FLY, an EU-funded project under Horizon 2020, a corridor-like multi-biometric layout has been developed to allow non-contact identification on the go. The iCrowd crowd behavior simulator has been used to test the operational performance of the biometric corridor configuration in terms of throughput, delays and service times. This paper reports the quantitative performance results of the D4FLY biometric corridor.
The combination of digital twins and simulation, alongside a control, command, and information system, provides a powerful hybrid environment for operational testing and performance assessment of security systems under realistic conditions without interrupting the operation of the test environment. This paper summarizes the use of OCUSIM, a hybrid Control, Command & Information (C2I) and simulation environment, and the associated requirements for 3D modeling, simulation and data exchange in cyber-physical threat assessment, multi-biometrics performance evaluation, and risk-based access control in different security environments. OCUSIM is based on the integration of the OCULUS C2I system with the iCrowd simulation environment along the lines of the digital twin concept. Other use cases and different application domains for OCUSIM are also discussed.
Identification of target molecules can be achieved by comparison of measured spectra to signal templates having patterns associated with known materials. This report describes the concept of using IR spectra calculated using density functional theory (DFT) as signal templates. Specifically, it examines aspects of using DFT-calculated IR spectra as templates for comparison with IR spectral measurements associated with different types of detector schemes and complex spectral-signature backgrounds. In practice, comparison of DFT-calculated and measured IR spectra must consider that there exist artifacts due to computational errors and model assumptions in the case of DFT-calculated spectra, and artifacts due to measurement errors and experimental-design assumptions in the case of spectral measurements. This paper examines aspects of combining DFT-calculated and measured IR spectra as complementary information within a database for spectrum feature extraction, which is used for the identification of target molecules.