The complex permittivity of adobe is measured using a coaxial probe system versus frequency (1 GHz to 4 GHz) and moisture content (0% to 6%). Measurements are performed using adobe samples collected from adobe bricks. The variation of the adobe complex permittivity versus frequency is measured at discrete levels of moisture content using small adobe samples exposed to controlled levels of constant humidity in an environmental chamber. The typical moisture content profile versus depth for an adobe brick is also determined. It is shown that notable changes in material properties versus depth in the adobe wall result from moisture content variation in the adobe brick. Using this characterization of the adobe material, the application of Through-the-Wall Radar Imaging (TWRI) is considered for adobe walls. Matched illumination waveforms are derived, and the effects of optimal transmission waveforms are presented to illustrate the necessity of accurate material characterization for enhancement of TWRI applications. The results presented include simulation of an object located behind an adobe wall as well as experimental measurements, taken in an anechoic chamber, of an object located behind a wall section composed of adobe bricks. It is shown that enhanced TWRI performance may be obtained by utilizing knowledge of a material’s dielectric properties versus frequency and moisture level to reduce the two-way attenuation of a radar waveform through waveform shaping techniques such as matched illumination waveform design.
Understanding wave propagation is fundamental across numerous scientific domains, underpinning crucial tasks in acoustics, seismology, radar technology, materials science, and optics. Machine learning methods offer a promising avenue to deepen our understanding of wave propagation dynamics, providing insights into the behavior of near-field wave patterns. Moreover, well-trained machine learning models have the capacity to generalize beyond specific training data, allowing for predictions in scenarios not explicitly encountered during training. This paper presents a machine learning approach using time-series neural networks to predict the complex near-field wave patterns emerging from metasurface devices. The recurrent neural network (RNN) and long short-term memory (LSTM) models are presented along with a custom dataset that includes 3x3 configurations of meta-atoms. The experiment focuses on assessing the models’ capabilities with varying amounts of input data and explores the challenges posed by predicting propagating waves. Results indicate that the LSTM outperforms the RNN, most markedly in fitting the training data, highlighting its efficacy in capturing complex dependencies. Analysis of error metrics reveals insights into the impact of dataset size on model performance, with larger datasets posing computational challenges but potentially enhancing generalization. Overall, this study lays the foundation for advancing the use of time-series machine learning models for applications involving wave propagation, in photonics and beyond.
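The gating that distinguishes the LSTM from the vanilla RNN can be sketched in a few lines of pure Python. This is only an illustration of the two recurrences the abstract compares; the scalar weights, parameter names, and saturation values below are our own and are not taken from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rnn_step(x, h, w_x=0.5, w_h=0.5, b=0.0):
    # Vanilla RNN: the hidden state is fully rewritten at every step,
    # which makes long-range dependencies hard to retain.
    return math.tanh(w_x * x + w_h * h + b)

def lstm_step(x, h, c, p):
    # LSTM: gates control what the cell state c forgets, admits, and
    # emits, letting information persist across many time steps.
    f = sigmoid(p["wf"] * x + p["uf"] * h + p["bf"])    # forget gate
    i = sigmoid(p["wi"] * x + p["ui"] * h + p["bi"])    # input gate
    o = sigmoid(p["wo"] * x + p["uo"] * h + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h + p["bg"])  # candidate update
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

# With the forget gate saturated open and the input gate closed, the
# cell state carries its initial value across the sequence almost
# unchanged: the mechanism behind the LSTM's long-memory advantage.
params = {k: 0.0 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                           "wo", "uo", "bo", "wg", "ug", "bg")}
params["bf"] = 10.0   # forget gate ~1 (keep everything)
params["bi"] = -10.0  # input gate ~0 (admit nothing)
h, c = 0.0, 1.0
for t in range(20):
    h, c = lstm_step(math.sin(t), h, c, params)
```

After 20 steps the cell state remains close to its initial value of 1.0, whereas a vanilla RNN state is overwritten on every step.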
Standard scalar wave propagation techniques, such as Fourier optics, struggle with the multi-scale challenges inherent in the inverse design of optical metasurfaces. Conventional approaches often assume that the source and observation planes are axially aligned and share the same spatial size and discretization. This becomes problematic in metasurface design, where the source and observation planes are often on the order of the wavelength. Designing metasurfaces nanometer by nanometer for large-scale applications is computationally prohibitive. Current metasurface inverse design methods generally approximate amplitude and phase under the local phase approximation. However, this is insufficient when considering the intricate interactions among nearest neighbors in a metasurface. A more comprehensive understanding requires consideration of the complex near electric field, which holds richer information about the metasurface’s physics. Yet, computing the resultant complex field at a far distance from the metasurface is both essential for inverse design and challenging. This work presents an evaluation of three computational approaches, i.e., padded field propagation, shifted field propagation, and propagation by the chirp z-transform, for the explicit purpose of metasurface inverse design. These techniques are implemented in the PyTorch Lightning deep learning framework, facilitating optimization using the backpropagation algorithm.
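The motivation for the first of these approaches, padded field propagation, can be illustrated without any optics machinery: FFT-based propagation computes a circular convolution, while physical diffraction requires a linear one, and zero-padding reconciles the two. The direct-sum toy below (with arbitrary example sequences of our choosing) shows the wrap-around aliasing that padding removes; it is a sketch of the principle, not the paper's implementation.

```python
def circular_conv(a, b):
    # Circular convolution of two equal-length sequences: what an
    # unpadded FFT-multiply-IFFT pipeline implicitly computes.
    n = len(a)
    return [sum(a[m] * b[(k - m) % n] for m in range(n)) for k in range(n)]

def linear_conv(a, b):
    # Linear convolution: the physically meaningful operation for
    # free-space propagation of a finite field.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padded_circular_conv(a, b):
    # Zero-pad both sequences to the full linear-convolution length;
    # the circular convolution then equals the linear one exactly.
    n = len(a) + len(b) - 1
    ap = a + [0.0] * (n - len(a))
    bp = b + [0.0] * (n - len(b))
    return circular_conv(ap, bp)
```

For example, `linear_conv([1, 2, 3], [4, 5, 6])` gives `[4, 13, 28, 27, 18]`, the unpadded circular result `[31, 31, 28]` shows the ends wrapping onto each other, and the padded circular result matches the linear one.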
The integration of neural networks and differentiable scalar wave optics has facilitated a modern approach to the design of optical systems, where simulation and optimization are carried out concurrently. These techniques encode the equations of wavefront propagation and modulation directly as layers of a neural network, where the forward pass carries out simulation and the backward pass carries out optimization using the backpropagation algorithm. While this allows standard optical optimization as well as classifier-driven optimization of diffractive optics, it suffers from the ubiquitous simulation-to-reality gap. Identifying, characterizing, and ultimately reducing this simulation-to-reality gap is an ever-present objective; as the adage goes, “all models are wrong, some are useful.” To this end, this work extends recent advancements in physics-aware training, where an optimizable physical device is used alongside in-silico simulation. By comparing the simulation output with the measured result from the physical device, an additional error term is introduced into the optimization objective. This work studies the resulting multi-criteria loss function by varying its weighting terms and evaluating the resulting performance. It is found that minimizing this new error term reduces the simulation-to-reality gap, but at the cost of device performance. The optimizable device in this work is implemented using a reprogrammable spatial light modulator.
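The trade-off reported here, where weighting the sim-to-real error term more heavily closes the gap but degrades task performance, can be reproduced in a one-parameter toy model. Everything below (the linear "simulation", the deliberately mismatched "device", the target, and the weights) is our own illustrative construction, not the paper's setup.

```python
def train_device(w_gap, lr=0.1, steps=3000):
    """Gradient descent on a toy multi-criteria loss:
        L(t) = (sim(t) - target)^2 + w_gap * (sim(t) - real(t))^2
    with sim(t) = t, real(t) = 0.8 * t (a built-in model mismatch),
    and target output 1.0."""
    theta = 0.0
    for _ in range(steps):
        sim, real = theta, 0.8 * theta
        # d/dtheta of the two loss terms
        grad = 2.0 * (sim - 1.0) + w_gap * 2.0 * (sim - real) * 0.2
        theta -= lr * grad
    return theta

def gap(theta):
    # Simulation-to-reality discrepancy of the trained parameter.
    return abs(theta - 0.8 * theta)

def task_error(theta):
    # In-simulation task error relative to the target of 1.0.
    return (theta - 1.0) ** 2

theta_no_gap = train_device(w_gap=0.0)   # ignore the gap term
theta_gap = train_device(w_gap=25.0)     # weight the gap term heavily
```

With the gap term ignored the optimizer drives the task error to zero but leaves the full model mismatch; weighting the gap term halves the discrepancy while the task error rises, mirroring the qualitative finding of the abstract.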
Improving a machine’s ability to reason about the unknown has been a prominent commonality across the different emerging areas of modern supervised learning. While there are different approaches that formalize this problem, many focus on generalized target recognition tailored to the known-vs-unknown problem setting. Overall, these approaches have created a meaningful foundation that promotes algorithm enhancement with respect to factors like detection, robustness, and internal knowledge expression. However, one major shortcoming across numerous prior works is the question of how to make use of unknown classifications in an algorithm deployment setting. Herein, we address this shortcoming by proposing a self-supervised comparison assessment methodology for computer vision tasks. Specifically, we leverage the features of foundation models across different dimensionality spaces to facilitate a comparison analysis of unknown information. Preliminary results are encouraging and demonstrate that this process not only benefits computer vision applications but is also flexible to methodological alterations.
In the rapidly evolving landscape of military technology, the demand for autonomous vehicles (AVs) is increasing in both public and private sectors. These autonomous systems promise many benefits including enhanced efficiency, safety, and flexibility. To meet this demand, the development of autonomous vehicles that are resilient and versatile is essential to the transport and reconnaissance market. The sensory perception of autonomous vehicles of any kind is paramount to their ability to navigate and localize in their environment. Typically, the sensors used for localization and mapping include LIDAR, IMU, GPS, and radar. Each of these has inherent weaknesses that must be accounted for in a robust system. This paper presents quantified results of simulated perturbations, artificial noise models, and other sensor challenges on autonomous vehicle platforms. The research aims to establish a foundation for robust autonomous systems, accounting for sensor limitations, environmental noise, and defense against nefarious attacks.
While deep learning is a popular subfield of machine learning that has been used for decades, an availability gap exists in both knowledge and datasets for unstructured environments and in-the-wild applications. Knowledge of mobility in these free environments is an important stepping-stone for both Department of Defense applications and industrial autonomy applications. A few datasets exist for unstructured environments, such as RELLIS-3D for robotics, RUGD for navigation, and GOOSE for perception; however, due to the limited selection of datasets for this type of environment, most deep learning algorithms have not been thoroughly tested in this scenario. In this article, we implement multiple deep learning methods on an in-house dataset to evaluate performance. Specifically, this article investigates the performance of pretrained, publicly available YOLOv4, ResNet-50, and Single Shot Detector (SSD) models on detection of unknown object classes encountered in the wild for improved, safe, and reliable maneuverability with minimized impediment in unstructured environments. The models are tested using a dataset developed in-house for unstructured environment studies, and their performance is assessed with multiple metrics. The data used in this experiment was collected by the United States Army Corps of Engineers Engineer Research and Development Center.
The utilization of hyperspectral image data has contributed to improved performance of machine learning tasks by providing spectrally rich information that other, more common sensor data lacks. An issue that can arise when using hyperspectral imagery is that it can often be computationally burdensome to collect and process. This study investigates the incorporation of hyperspectral image data collected on a co-aligned VNIR-SWIR sensor for the purpose of hyperspectral image classification. The evaluation focuses on the distinct effects of the VNIR data, the SWIR data, and the combination of the two on hyperspectral image classification performance on vehicles. The experiments were run on data collected by the US Army Corps of Engineers Research and Development Center.
Recent years have seen the emergence of novel UAV swarm methodologies being developed for numerous applications within the Department of Defense. Such applications include, but are not limited to, search and rescue missions, intelligence, surveillance, and reconnaissance activities, and rapid disaster relief assessment. Herein, this article investigates an initial implementation of learning UAV swarm behaviors using reinforcement learning (RL). Specifically, we present a study implementing a leader-follower UAV swarm using RL-learned behaviors in a search-and-rescue task. Experiments are performed through simulations on synthetic data, specifically using a cross-platform flight simulator with an Unreal Engine virtual environment. Performance is assessed by measuring key objective metrics, such as time to complete the mission, redundant actions, stagnation time, and goal success. This article seeks to provide an increased understanding and assessment of current reinforcement learning strategies being developed for controlling (or at a minimum suggesting) UAV swarm behaviors.
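The flavor of RL-learned behavior described here can be sketched with tabular Q-learning on a toy problem. The one-dimensional corridor below, in which a follower agent learns to move toward a goal cell, is entirely our own stand-in: the paper's experiments use a flight simulator with an Unreal Engine environment, not this gridworld.

```python
import random

def train_q(n_cells=6, episodes=2000, alpha=0.5, gamma=0.9, seed=1):
    """Tabular Q-learning on a 1-D corridor: the agent must reach the
    goal cell at the right end (a stand-in for e.g. a rendezvous point).
    The behavior policy is uniform random; Q-learning is off-policy, so
    the greedy policy extracted afterward is still goal-directed."""
    rng = random.Random(seed)
    goal = n_cells - 1
    q = [[0.0, 0.0] for _ in range(n_cells)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = rng.randrange(goal)               # random non-goal start cell
        for _ in range(50):
            a = rng.randrange(2)              # explore uniformly at random
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0    # reward only at the goal
            # One-step Q-learning update toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            if s2 == goal:
                break
            s = s2
    return q

q = train_q()
# Greedy policy per non-goal cell: 1 means "move right, toward the goal".
greedy = [max((0, 1), key=lambda k: q[s][k]) for s in range(5)]
```

After training, the greedy policy moves right from every cell, i.e. the learned values encode the shortest route to the goal.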
The adoption of neural networks for optical component design has increased rapidly in recent years. In this design framework, the numerical simulation of optical wave propagation and material wave modulation are encoded directly as layers of a neural network. This direct encoding enables the optimization of physical quantities (e.g., the transmissivity values of the diffractive optical elements) by gradient descent and the backpropagation algorithm. For the body of work which uses these networks for simulation and optimization, there is a tendency to treat the training process as identical to that of traditional deep neural networks. However, to the best of our knowledge, there is not yet an explicit evaluation of training parameters to support this intuition. This work aims to help fill this gap by providing an exploration and evaluation of data variety to help accelerate those in the community who wish to use this emerging design framework.

The application of neural networks in optical component design has witnessed rapid growth in recent years. This design framework encodes the numerical simulation of optical wave propagation and material wave modulation directly within neural network layers, enabling the optimization of physical quantities, such as transmissivity values of diffractive optical elements, through gradient descent and backpropagation. Physics-informed neural networks have been employed in designing diffractive deep neural networks, optimizing holograms for near-eye displays, and creating multi-objective traditional optics. However, there remains a lack of evaluation of training parameters, and discrete sampling considerations are often overlooked. To address these gaps, this study examines the impact of dataset variety on physics-informed neural networks in optimizing lenses that either satisfy or violate the Nyquist sampling criterion. Results show that increased data variety enhances optimized lens performance across all cases.
Optimized lenses demonstrate improved imaging performance by reducing diffraction orders present in aliased analytical lenses. Moreover, we reveal that low data variety leads to overfit lenses that function as selective imagers, providing valuable insights for future lens design and optimization.
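The Nyquist criterion for a sampled lens referenced above reduces to a simple check: the local spatial frequency of the quadratic lens phase peaks at the aperture edge and must stay below half the sampling rate of the pixel grid. The sketch below states that check; the wavelength, focal length, and pitch values in the usage example are our own illustrative numbers.

```python
def max_aperture(focal_m, wavelength_m, pitch_m):
    # Largest lens diameter a grid of the given pitch can represent
    # without aliasing the quadratic phase: D_max = lambda * f / pitch.
    return wavelength_m * focal_m / pitch_m

def lens_is_nyquist_sampled(aperture_m, focal_m, wavelength_m, pitch_m):
    # The lens phase phi(x) = -pi * x^2 / (lambda * f) has local spatial
    # frequency nu(x) = |x| / (lambda * f), largest at the aperture edge
    # x = D/2. Nyquist requires nu_max <= 1 / (2 * pitch).
    nu_max = (aperture_m / 2.0) / (wavelength_m * focal_m)
    return nu_max <= 1.0 / (2.0 * pitch_m)
```

For example, at 633 nm wavelength, 100 mm focal length, and 8 um pixel pitch, the largest alias-free aperture is about 7.9 mm, so a 5 mm lens satisfies the criterion while a 12 mm lens violates it and exhibits the extra diffraction orders mentioned above.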
Optical metasurfaces enable devices to interact with light in unique ways by modulating phase, polarization, or intensity. A metasurface, composed of individual subwavelength scatterers known as meta-atoms, can be designed to provide unparalleled control of wavefronts for a variety of optical applications, yet the design of such devices is often unintuitive and challenging due to computationally expensive forward simulations and the number of free parameters. To overcome this, there is interest in developing inverse design methods as an alternative to conventional forward design. Inverse design leverages machine learning algorithms to effectively search a problem space, starting from the application and arriving at solution parameters. In this work, we adopt an inverse design approach that involves targeted forward simulations of arbitrary meta-atoms. To ensure that the dataset captures all possible shapes and rotations of near-field responses with second-order accuracy, it is constructed using meta-atoms with varying geometries and corresponding phase shifts, including the effect of nearest neighbors. A custom deep learning system is developed to extract meaningful features from this near-field response. The proposed framework provides flexibility to produce an inverse design paradigm for generalized metasurface applications without the need for repeated forward simulations. Additionally, the machine learning model is highly effective in reconstructing electric fields, irrespective of the loss function used.
KEYWORDS: Education and training, Convolution, Data modeling, Deep learning, Performance modeling, Object detection, Neural networks, Visual process modeling, Genetic algorithms, Army
With numerous technologies seeking to utilize deep learning-based object detection algorithms, there is an increased need for innovative approaches to compare one model to another. Often, models are compared in one of two overarching ways: through performance metrics or through statistical measures on the dataset. One common approach for training an object detector for a new problem is to transfer learn from a model initially trained extensively on the ImageNet dataset; however, why one feature backbone was selected over another is at times overlooked. Additionally, while it is noted whether a model was trained on ImageNet, COCO, or some other benchmark dataset, this is not necessarily considered by many practitioners outside the deep learning research community seeking to implement a state-of-the-art detector for their specific problem. This article proposes new strategies for comparing deep learning models that are associated with the same task, e.g., object detection.
Vehicle maneuverability is often supported in low-light scenarios through infrared (IR) imagery. However, if the imagery contains little temperature gradient, the raw images are of limited use. To maximize image effectiveness, a genetic algorithm (GA) is employed to explore various contrast enhancement operators and determine an optimal sequence of contrast enhancements. We propose a new image quality evaluator that incorporates the performance of a deep learning-based object detector and considers image spatial context through cell-structured configurations. The proposed technique is assessed both qualitatively and quantitatively for the task of maneuverability hazard detection.
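The GA search over operator sequences can be sketched in a few dozen lines. The operator names and the toy fitness function below are hypothetical placeholders: in the paper, fitness would come from the proposed detector-aware, cell-structured image quality evaluator, not from matching a known target chain.

```python
import random

# Hypothetical pool of contrast-enhancement operators (names are ours).
OPERATORS = ["hist_eq", "gamma_0.5", "gamma_2.0", "log", "unsharp", "identity"]

def evolve(fitness, seq_len=3, pop_size=30, gens=60, p_mut=0.3, seed=0):
    """Simple elitist GA over fixed-length operator sequences."""
    rng = random.Random(seed)
    pop = [[rng.choice(OPERATORS) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitism: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, seq_len)      # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < p_mut:             # point mutation
                child[rng.randrange(seq_len)] = rng.choice(OPERATORS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in fitness: positions matching a "best known" chain.
TARGET = ["hist_eq", "gamma_0.5", "log"]
def toy_fitness(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

best = evolve(toy_fitness)
```

Swapping `toy_fitness` for an evaluator that enhances an IR image with the candidate sequence and scores the detector output would reproduce the structure of the search described in the abstract.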
Object detection remains an important and ever-present component of computer vision applications. While deep learning has been the focal point for much of the research actively being conducted in this area, there still exists certain applications in which such a sophisticated and complex system is not required. For example, if a very specific object or set of objects are desired to be automatically identified, and these objects' appearances are known a priori, then a much simpler and more straightforward approach known as matched filtering, or template matching, can be a very accurate and powerful tool to employ for object detection. In our previous work, we investigated using machine learning, specifically the improved Evolution COnstructed features framework, to identify (near-)optimal templates for matched filtering given a specific problem. Herein, we explore how different search algorithms, e.g., genetic algorithm, particle swarm optimization, gravitational search algorithm, can derive not only (near-)optimal templates but also templates that are more efficient. Specifically, given a defined template for a particular object of interest, we investigate whether these search algorithms can identify a subset of template information that enables more efficient detection algorithms while minimizing degradation of detection performance. Performance is assessed in the context of algorithm efficiency, accuracy of the object detection algorithm and its associated false alarm rate, and search algorithm performance. Experiments are conducted on handpicked images of commercial aircraft from the xView dataset, one of the largest publicly available datasets of overhead imagery.
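The core operation here, matched filtering with an optional subset of template samples, can be sketched with a one-dimensional normalized cross-correlation. The mask argument mimics the "subset of template information" idea from the abstract; the signal and template values are our own toy example, and real use would be two-dimensional over imagery.

```python
import math

def ncc(signal, template, mask=None):
    """Slide template over signal; return the normalized cross-correlation
    at each offset. mask selects a subset of template samples, the kind of
    reduced template a search algorithm might promote for efficiency."""
    if mask is None:
        mask = [1] * len(template)
    idx = [i for i, m in enumerate(mask) if m]
    t = [template[i] for i in idx]
    t_mean = sum(t) / len(t)
    t_zm = [v - t_mean for v in t]
    t_norm = math.sqrt(sum(v * v for v in t_zm))
    scores = []
    for off in range(len(signal) - len(template) + 1):
        w = [signal[off + i] for i in idx]
        w_mean = sum(w) / len(w)
        w_zm = [v - w_mean for v in w]
        w_norm = math.sqrt(sum(v * v for v in w_zm))
        if w_norm == 0.0 or t_norm == 0.0:
            scores.append(0.0)  # flat window or flat template: no match
        else:
            scores.append(sum(a * b for a, b in zip(t_zm, w_zm))
                          / (t_norm * w_norm))
    return scores

template = [0.0, 2.0, 5.0, 2.0, 0.0]
signal = [0.0, 0.0, 0.0, 0.0, 2.0, 5.0, 2.0, 0.0, 0.0, 0.0]
```

With the full template, the score peaks at 1.0 where the target sits; with a mask keeping only three of five samples, each window costs fewer operations and the peak location survives, which is the efficiency-versus-accuracy trade the search algorithms explore.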
The well-being of a military unit depends in part on its ability to reliably detect threats and properly prepare for them. While a given sensor mounted on a ground vehicle can adequately capture threats in some scenarios, its viewpoint can be quite limiting. A potential solution to these limitations is mounting the sensor onto an unmanned aerial vehicle (UAV) to provide a more holistic view of the scene. However, this new perspective creates its own unique challenges. Herein, we investigate the performance of an RGB sensor mounted onto a UAV for object detection and classification to enable advanced situational awareness for a manned/unmanned ground vehicle trailing the UAV. To do this, we perform transfer learning with state-of-the-art deep learning models, e.g., ResNet50, Inception-v3. While object detection with machine learning has been actively researched, even on remotely sensed imagery, most of it has been through the context of scene classification. Therefore, it is worthwhile to explore the implications of this new camera perspective on the performance of object detection. Performance is assessed via route-based cross-validation on imagery collected by the U.S. Army ERDC at a test site over multiple days.
Object recognition is a critical component in most computer vision applications, specifically image classification tasks. Often, it is desired to design an approach that either learns from the data directly or extracts discriminative features from the imagery that can be used for object classification. Most active research in the field of computer vision is concerned with machine learning at some level, whether it be a completely automated process from start to finish via deep learning strategies, or the extraction of human-derived features from the imagery that is subjected to a machine learning-based classifier. However, there are numerous applications in which a particular known object is of interest. In such a setting where a relatively specific object and scene are known a priori, one can develop an extremely robust automatic target recognition (ATR) system using matched filtering. Herein, we consider the use of machine learning to help identify a near-optimal template for matched filtering for a given problem. Specifically, the improved Evolution Constructed (iECO) framework is employed to learn the discriminative target signature(s) to define the template that leads to improved ATR performance in terms of accuracy and a reduced false alarm rate (FAR). Experiments are conducted on ideal synthetic midwave infrared imagery, and results are reported via receiver operating characteristic curves.
Applications seeking to exploit electromagnetic scattering characteristics of an imaging or detection problem typically require a large number of electromagnetic simulations in order to understand relevant object phenomena. It has been shown in a previous work that deep learning may be used to increase the efficiency of creating such datasets by providing estimations comparable to simulation results. In this work, we further investigate the utility of deep learning for electromagnetic simulation prediction by adding to the existing training and testing dataset while also incorporating additional material properties. Specifically, we explore using artificial neural networks to learn the connection between a generic object and its resulting bistatic radar cross section, thereby removing the need to repeatedly perform time-consuming simulations. While deep learning can be seen as a computationally expensive technique, this cost is only experienced during the training of the system and not subsequently in the acquisition of results. The goal of this work is to further investigate the applicability of deep learning for electromagnetic simulation prediction as well as its potential limitations. Additionally, performance is compared for different data pre-processing techniques focused on data reduction.
Applications seeking to exploit electromagnetic scattering characteristics of an imaging or detection problem typically require a large number of electromagnetic simulations. Because these simulations are often computationally intensive, valuable resources are required to perform the simulations in an efficient and timely manner, which is not always freely available or accessible. In this work, we investigate the utility of deep learning for electromagnetic simulation prediction. Specifically, we explore using artificial neural networks to learn the connection between a generic object and its resulting bistatic radar cross section, thereby removing the need to repeatedly perform time-consuming simulations. Such a system would be trained in an offline setting and consequently enable rapid bistatic radar cross section predictions for new objects in the future. While deep learning can be seen as a computationally expensive technique, this cost is only experienced during the training of the system and not subsequently in the acquisition of results. The goal of this work is to assess the applicability of deep learning for electromagnetic simulation prediction as well as its potential limitations. Several simple objects are investigated, and a thorough statistical analysis is used to assess the performance of our proposed method.
Synthetic aperture radar (SAR) benefits from persistent imaging capabilities that are not reliant on factors such as weather or time of day. One area that may benefit from readily available imaging capabilities is the detection and assessment of road damage occurring from disasters such as earthquakes, sinkholes, or mudslides. This work investigates the performance of a pre-screener for an automatic detection system used to identify locations and quantify the severity of road damage present in SAR imagery. The proposed pre-screener comprises two components: advanced image processing and classification. Image processing is used to condition the data, removing non-pertinent information from the imagery, which helps the classifier achieve better performance. Specifically, we utilize shearlets; these are powerful filters that capture anisotropic features with good localization and high directional sensitivity. Classification is achieved through the use of a convolutional neural network, and performance is reported as classification accuracy. Experiments are conducted on satellite SAR imagery. Specifically, we investigate Sentinel-1 imagery containing both damaged and non-damaged roads.
One promising technique for improving tunnel detection is the use of spotlight synthetic aperture radar (SL-SAR) in conjunction with focusing techniques. Still, clutter arises from surface variations, while severe attenuation of the target signal occurs due to the dielectric properties of the soil. To combat these ill effects, this work aims to improve imaging and detection of underground tunnels by examining the feasibility of matched illumination waveform design for tunnel detection applications. The tunnel impulse response is incorporated in an optimum waveform derivation scheme which aims to maximize the signal-to-interference-plus-noise ratio (SINR) at the receiver output. Numerical electromagnetic simulations are used to consider wave propagation in realistic soil scenarios which include uniform and non-uniform moisture profiles. It is demonstrated that by considering matched illumination waveforms for transmission in SL-SAR systems, an improvement in the detection and imaging capabilities is achieved through enhanced SINR.
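In the simplified white-noise case, the matched illumination idea reduces to an eigenproblem: the unit-energy waveform maximizing echo energy through a known impulse response is the dominant eigenvector of the response's Gram matrix. The sketch below illustrates that special case with a made-up impulse response; the full SINR-optimal derivation in the paper additionally accounts for the interference and noise covariance.

```python
def conv_matrix(h, n):
    """Toeplitz matrix H such that H @ s is the convolution of a length-n
    waveform s with the target impulse response h."""
    rows = len(h) + n - 1
    H = [[0.0] * n for _ in range(rows)]
    for i, hv in enumerate(h):
        for j in range(n):
            H[i + j][j] = hv
    return H

def power_iteration(A, iters=500):
    """Dominant eigenvector of a small symmetric PSD matrix."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        nrm = sum(x * x for x in w) ** 0.5
        v = [x / nrm for x in w]
    return v

def matched_waveform(h, n):
    """Unit-energy waveform maximizing echo energy ||H s||^2: the
    white-noise special case of SINR-optimal matched illumination."""
    H = conv_matrix(h, n)
    m = len(H)
    G = [[sum(H[k][i] * H[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]  # Gram matrix G = H^T H
    return power_iteration(G)

def echo_energy(h, s):
    H = conv_matrix(h, len(s))
    return sum(sum(H[i][j] * s[j] for j in range(len(s))) ** 2
               for i in range(len(H)))

h = [1.0, 0.5, -0.25]  # illustrative target impulse response (ours)
s_opt = matched_waveform(h, n=4)
```

Compared against a flat pulse of the same energy, the shaped waveform returns at least as much echo energy, which is the mechanism behind the SINR gain reported above.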