Measuring hemoglobin (Hb) levels is required for the assessment of different health conditions, such as anemia, a condition in which there are not enough healthy red blood cells to carry adequate oxygen to the body's tissues. Conventional measurement requires drawing a blood sample, which is then sent to a laboratory for analysis, an invasive procedure that complicates continuous monitoring of Hb levels. Noninvasive techniques, including imaging and photoplethysmography (PPG) signals combined with machine learning, are being investigated for continuous measurement of Hb. However, the limited availability of real data for training the algorithms hinders the generalization and implementation of such techniques in healthcare settings. In this work, we present a computational model based on Monte Carlo simulations that can generate multispectral PPG signals covering a broad range of Hb levels. These signals are then used to train a Deep Learning (DL) model to estimate hemoglobin levels. Through this approach, the DL model learns valuable insights about the relationships between PPG signals, oxygen saturation, and Hb levels. The signals were generated by propagating a source through a volume that encodes the skin tissue properties and the target physiological parameters. The source consisted of plane waves at the 660 nm and 890 nm wavelengths. Hb values ranging from 6 g/dL to 18 g/dL were used to generate 468 PPG signals to train a Convolutional Neural Network (CNN). Initial results show high accuracy in detecting low Hb levels. To the best of our knowledge, the complexity of the biological interactions involved in measuring hemoglobin levels has yet to be fully modeled. The presented model offers an alternative approach to studying the effects of changes in Hb levels on PPG signal morphology and its interaction with other physiological parameters present in the optical path of the measured signals.
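As a rough illustration of how such multispectral signals can be synthesized, the sketch below generates two-wavelength PPG traces from a simplified Beer–Lambert attenuation model rather than the full Monte Carlo photon-transport simulation used in this work; all extinction coefficients, path lengths, and waveform parameters are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: synthetic two-wavelength PPG signals from a simplified
# Beer-Lambert attenuation model. The paper uses full Monte Carlo photon
# transport; every numeric value below is an illustrative assumption.

# Approximate molar extinction coefficients [L / (mmol * cm)] for
# deoxyhemoglobin (Hb) and oxyhemoglobin (HbO2) at the two wavelengths.
EPS = {660: {"Hb": 3.23, "HbO2": 0.32},
       890: {"Hb": 0.79, "HbO2": 1.15}}

def synthetic_ppg(hb_g_dl, spo2, wavelength, fs=100, seconds=10, hr_bpm=72):
    """Transmitted-intensity PPG for a given total Hb (g/dL) and SpO2."""
    t = np.arange(0, seconds, 1.0 / fs)
    # Optical path length (cm): static tissue plus a cardiac pulse term.
    pulse = (0.5 + 0.5 * np.sin(2 * np.pi * hr_bpm / 60.0 * t)) ** 3
    path = 0.4 + 0.02 * pulse
    # 1 g/dL of hemoglobin is about 0.155 mmol/L (molar mass ~64,500 g/mol).
    c_total = hb_g_dl * 0.155
    c_hbo2, c_hb = spo2 * c_total, (1.0 - spo2) * c_total
    eps = EPS[wavelength]
    absorbance = (eps["Hb"] * c_hb + eps["HbO2"] * c_hbo2) * path
    return 10.0 ** (-absorbance)  # Beer-Lambert transmitted intensity

# Example: anemic (7 g/dL) vs. normal (14 g/dL) subject at 660 nm.
ppg_low = synthetic_ppg(hb_g_dl=7.0, spo2=0.97, wavelength=660)
ppg_norm = synthetic_ppg(hb_g_dl=14.0, spo2=0.97, wavelength=660)
```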
Recent advances in data fusion provide the capability to obtain enhanced hyperspectral data with high spatial and spectral information content, thus allowing for improved classification accuracy. Although hyperspectral image classification is a highly investigated topic in remote sensing, each classification technique presents different advantages and disadvantages. For example, methods based on morphological filtering are particularly good at classifying human-made structures with simple geometric shapes, such as houses and buildings. On the other hand, methods based on spectral information tend to perform better in natural scenery with more shape diversity, such as vegetation and soil areas. Moreover, classes with mixed pixels, small training sets, or objects with similar reflectance values pose a greater challenge to obtaining high classification accuracy. Therefore, it is difficult to find a single technique that provides the highest classification accuracy for every class present in an image. This work proposes a decision fusion approach that aims to increase the classification accuracy of enhanced hyperspectral images by integrating the results of multiple classifiers. Our approach is performed in two steps: 1) machine learning algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and class-dependent sparse representation generate initial classification data; 2) a decision fusion scheme based on a Convolutional Neural Network (CNN) integrates all the classification results into a unified classification rule. In particular, the CNN receives as input the per-pixel class probabilities from each implemented classifier and, using a softmax activation function, estimates the final decision. We present results showing the performance of our method on different hyperspectral image datasets.
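A minimal sketch of the fusion step is shown below, assuming each of the K base classifiers outputs a per-pixel probability map with one channel per class; the layer sizes and depth are illustrative assumptions, not the exact network used in this work.

```python
import torch
import torch.nn as nn

# Minimal sketch of CNN-based decision fusion: the K per-pixel probability
# maps from the base classifiers (SVM, DNN, sparse representation) are
# stacked as input channels, and a softmax over the class dimension
# produces the unified decision rule. The architecture is illustrative.

class FusionCNN(nn.Module):
    def __init__(self, n_classifiers, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            # Stack the K probability maps as K * C input channels.
            nn.Conv2d(n_classifiers * n_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, n_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, prob_maps):
        logits = self.net(prob_maps)
        # Softmax over the class dimension yields the fused probabilities.
        return torch.softmax(logits, dim=1)

# Example: fuse 3 classifiers over a 16-class scene of size 64 x 64.
fusion = FusionCNN(n_classifiers=3, n_classes=16)
x = torch.rand(1, 3 * 16, 64, 64)   # stacked per-classifier probabilities
fused = fusion(x)                   # shape (1, 16, 64, 64)
labels = fused.argmax(dim=1)        # final per-pixel class decisions
```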
Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high spatial resolution multispectral image with a lower spatial resolution hyperspectral image to generate a hyperspectral image with both high spatial and high spectral resolution. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion in multiple steps that transition from low to high spatial resolution, using an intermediate step that reduces spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images using four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with spatial resolution increased from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy using several classifiers: minimum distance, Support Vector Machines, class-dependent sparse representation, and CNN classification. The classification results show better overall and average accuracy for the images generated with the multi-scale LSTM fusion than for the CNN fusion.
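For concreteness, the sketch below shows a generic single-step CNN fusion baseline of this kind (upsample the hyperspectral image to the multispectral grid, concatenate, and regress the fused bands); it is an assumption-laden stand-in, not the exact multi-scale LSTM or CNN architectures characterized in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative single-step CNN fusion baseline. Assumed inputs: a
# low-spatial-resolution HSI with hsi_bands bands and a co-registered
# high-spatial-resolution MSI with msi_bands bands.

class CNNFusion(nn.Module):
    def __init__(self, hsi_bands, msi_bands):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(hsi_bands + msi_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, hsi_bands, 3, padding=1),  # fused HSI bands
        )

    def forward(self, hsi_lr, msi_hr):
        # Upsample the HSI to the MSI grid (e.g., 8 m -> 1 m), then let
        # the CNN inject the MSI spatial detail into every spectral band.
        hsi_up = F.interpolate(hsi_lr, size=msi_hr.shape[-2:],
                               mode="bicubic", align_corners=False)
        return self.net(torch.cat([hsi_up, msi_hr], dim=1))

# Example: a 100-band HSI at 32 x 32 fused with a 4-band MSI at 256 x 256
# (an 8x spatial resolution increase, as in the 8 m -> 1 m case).
model = CNNFusion(hsi_bands=100, msi_bands=4)
fused = model(torch.rand(1, 100, 32, 32), torch.rand(1, 4, 256, 256))
```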
Hyperspectral images (HSIs) contain spectral information at hundreds of different wavelengths, providing information beyond the visible range. Such spectral sensitivity is often used for the classification of objects of interest within a spatial scene in fields such as studies of the atmosphere, vegetation and agriculture, and coastal environments. The classification task involves processing high-dimensional data, which fuels the need for efficient algorithms that make better use of computational resources. Classification algorithms based on sparse representation achieve high accuracy by incorporating all the relevant information of a given scene in a sparse domain. However, such an approach requires solving a computationally expensive optimization problem with time complexity Ω(n²). We propose a method that approximates the least squares solution of the sparse representation classification problem for HSIs using the Moore–Penrose pseudoinverse. The resulting time complexity of this approach reduces to O(n²). The impact on classification accuracy and execution time is compared to state-of-the-art methods on three varied datasets. Our experimental results show that it is possible to obtain classification performance comparable to current methods, with as much as an 82% reduction in execution time, opening the door for the adoption of this technology in scenarios where classification of high-dimensional data is time critical.
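The core of the pseudoinverse shortcut can be sketched as follows; the per-class dictionaries, shapes, and minimum-residual rule below are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Minimal sketch: instead of solving an L1-regularized sparse recovery
# problem per pixel, approximate the representation coefficients with the
# least-squares solution from a precomputed Moore-Penrose pseudoinverse
# of each class dictionary, then assign the class of smallest residual.

def classify_pixels(pixels, class_dicts):
    """pixels: (n_pixels, n_bands); class_dicts: list of (n_bands, n_atoms)."""
    # Precompute the pseudoinverses once; per-pixel classification then
    # reduces to matrix multiplications.
    pinvs = [np.linalg.pinv(D) for D in class_dicts]
    residuals = []
    for D, D_pinv in zip(class_dicts, pinvs):
        coeffs = pixels @ D_pinv.T   # least-squares coefficients
        recon = coeffs @ D.T         # reconstruction from the class atoms
        residuals.append(np.linalg.norm(pixels - recon, axis=1))
    return np.argmin(np.stack(residuals, axis=1), axis=1)

# Example with random data: 5 classes, 200 bands, 30 atoms per class.
rng = np.random.default_rng(0)
dicts = [rng.normal(size=(200, 30)) for _ in range(5)]
labels = classify_pixels(rng.normal(size=(1000, 200)), dicts)
```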
Improved benthic habitat mapping is needed to monitor coral reefs around the world and to assist coastal zone management programs. A fundamental challenge in remotely sensed mapping of coastal shallow waters is the significant variability in the optical properties of the water column caused by the interaction between the coast and the sea. The objects to be classified produce weak signals that interact with turbid waters containing sediments. In real scenarios, the absorption and backscattering coefficients are unknown and subject to different sources of variability (river discharges and coastal interactions), and the depth of the shallow water is typically another unknown. This paper presents the development of algorithms for retrieving information and their application to the classification and mapping of objects under coastal shallow waters with different unknown concentrations of sediments. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium, and the sensor. The retrieval of information requires the development of mathematical models and processing tools in the areas of inversion, image reconstruction, and classification of hyperspectral data. The algorithms developed were applied to a set of real hyperspectral imagery taken in a tank filled with water and TiO2 that emulates turbid coastal shallow waters. The Tikhonov regularization method was used in the inversion process to estimate the bottom albedo of the water tank, using a priori information in the form of previously measured, stored spectral signatures of objects of interest.
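A minimal sketch of the inversion step follows, assuming a linearized forward model b = Ax + noise relating the bottom albedo spectrum x to the sensed signal b; the forward matrix here is a synthetic placeholder for the simplified radiative transfer model, and the library matching mirrors the use of stored signatures as a priori information.

```python
import numpy as np

# Minimal sketch of Tikhonov-regularized inversion of a linearized
# forward model (A would encode water-column attenuation and depth;
# here it is a synthetic, ill-conditioned placeholder).

def tikhonov_inverse(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

def match_library(albedo, library):
    """Index of the stored signature with the smallest spectral angle."""
    cos = library @ albedo / (
        np.linalg.norm(library, axis=1) * np.linalg.norm(albedo) + 1e-12)
    return int(np.argmax(cos))

# Example with synthetic data: 120 bands, ill-conditioned forward model.
rng = np.random.default_rng(1)
A = rng.normal(size=(120, 120)) @ np.diag(1.0 / np.arange(1, 121))
x_true = np.abs(rng.normal(size=120))
b = A @ x_true + 0.01 * rng.normal(size=120)
x_hat = tikhonov_inverse(A, b, lam=0.1)
library = np.vstack([x_true, np.abs(rng.normal(size=(4, 120)))])
best = match_library(x_hat, library)  # expected: 0 (the true signature)
```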
Compressive Sensing has attracted great recent interest for efficient signal acquisition, manipulation, and reconstruction in settings where sensing resources are scarce and valuable. The current work shows that approaches based on this technology can improve the efficiency of the manipulation, analysis, and storage processes already established for hyperspectral imagery, with little discernible loss in data performance upon reconstruction. We present the results of a comparative analysis of classification performance between a hyperspectral data cube acquired by traditional means and one obtained through reconstruction from compressively sampled data points. To obtain a broad measure of the classification performance of compressively sensed cubes, we classify a scene widely used in the evaluation of hyperspectral image processing algorithms with a set of five classifiers commonly used in hyperspectral image classification. Global accuracy statistics are presented and discussed, as well as class-specific statistical properties of the evaluated data set.
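The comparison protocol can be sketched as follows; a simple least-squares recovery and a minimum-distance classifier stand in here for the actual reconstruction algorithm and the five classifiers used in the study, and all data are synthetic.

```python
import numpy as np

# Sketch of the comparison: classify the original spectra and spectra
# recovered from compressive samples with the same minimum-distance
# classifier, then compare overall accuracy. Least-squares recovery is
# an illustrative stand-in for a proper sparse reconstruction.

def min_distance_classify(X, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(5)
centroids = rng.normal(size=(4, 100))                  # 4 classes, 100 bands
y = rng.integers(0, 4, 2000)
X = centroids[y] + 0.3 * rng.normal(size=(2000, 100))  # original spectra

Phi = rng.normal(size=(40, 100)) / np.sqrt(40)         # 40% sampling rate
X_rec = (np.linalg.pinv(Phi) @ (Phi @ X.T)).T          # stand-in recovery

acc_orig = (min_distance_classify(X, centroids) == y).mean()
acc_rec = (min_distance_classify(X_rec, centroids) == y).mean()
print(f"original: {acc_orig:.3f}  reconstructed: {acc_rec:.3f}")
```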
Compressive Sensing (CS)-based technologies have shown potential to improve the efficiency of acquisition, manipulation, analysis, and storage processes for signals and imagery, with little discernible loss in data performance. The CS framework relies on the reconstruction of signals that are presumed sparse in some domain from a relatively small collection of linear projections of the signal of interest. As a result, solving the underdetermined linear system that arises in this paradigm makes it possible to estimate the original signal with high accuracy. One common approach to solving the linear system is based on methods that minimize the L1-norm, and several fast algorithms have been developed for this purpose. This paper presents a study on the use of CS in high-resolution reflectance confocal microscopy (RCM) images of the skin. RCM offers a cellular resolution level similar to that used in histology to identify cellular patterns for the diagnosis of skin diseases. However, imaging large areas (required for effective clinical evaluation) at such high resolution can turn image capture, processing, and storage into time-consuming procedures, which may limit its use in clinical settings. We present an analysis of the compression ratios that may allow for a simpler capture approach while still reconstructing the cellular resolution required for clinical use. We provide a comparative study of compressive sensing and estimate its effectiveness in terms of compression ratio versus image reconstruction accuracy. Preliminary results show that by using as little as 25% of the original number of samples, cellular resolution may be reconstructed with high accuracy.
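One of the fast L1-minimization algorithms referred to above is the iterative shrinkage-thresholding algorithm (ISTA); a minimal sketch, assuming Gaussian random measurements and sparsity in the DCT domain (step size, penalty, and iteration count are illustrative), is shown below.

```python
import numpy as np
from scipy.fft import dct, idct

# Minimal ISTA sketch for CS recovery: the signal is assumed sparse in
# the (orthonormal) DCT domain and measured with a random Gaussian matrix.

def ista(y, Phi, lam=0.05, n_iter=300):
    """Solve min_a 0.5||y - Phi Psi a||^2 + lam ||a||_1, Psi = inverse DCT."""
    m, n = Phi.shape
    a = np.zeros(n)
    L = np.linalg.norm(Phi, 2) ** 2   # Lipschitz constant of the gradient
    for _ in range(n_iter):
        residual = Phi @ idct(a, norm="ortho") - y
        grad = dct(Phi.T @ residual, norm="ortho")
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return idct(a, norm="ortho")      # reconstructed signal

# Example: recover a DCT-sparse signal from 25% of the usual samples.
rng = np.random.default_rng(2)
n, m = 256, 64                        # 25% of the original sample count
coeffs = np.zeros(n)
coeffs[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
x = idct(coeffs, norm="ortho")
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = ista(Phi @ x, Phi)
```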
The Hyperspectral Image Analysis Toolbox (HIAT) is a collection of algorithms that extends the capabilities of the MATLAB numerical computing environment for the processing of hyperspectral and multispectral imagery. The purpose of HIAT is to provide information extraction algorithms to users of hyperspectral and multispectral imagery in environmental and biomedical applications. HIAT has been developed as part of the NSF Center for Subsurface Sensing and Imaging Systems (CenSSIS) Solutionware, which seeks to develop a repository of reliable and reusable software tools that can be shared by researchers across research domains. HIAT provides easy access to supervised and unsupervised classification algorithms developed at LARSIP over the last eight years.
Feature extraction, implemented as a linear projection from a higher dimensional space to a lower dimensional subspace, is a very important issue in hyperspectral data analysis. This reduction must be done in a manner that minimizes redundancy while maintaining the information content. This paper proposes methods for feature extraction and band subset selection based on relative entropy criteria. The main objective of the implemented feature extraction and band selection methods is to reduce the dimensionality of the data while maintaining the capability to discriminate objects of interest from the cluttered background. These methods accomplish this goal by maximizing the difference between the data distribution in the lower dimensional subspace and the standard Gaussian distribution, measured using relative entropy, also known as information divergence. An unsupervised Projection Pursuit algorithm based on optimizing the relative entropy is presented, along with an unsupervised version for selecting bands in hyperspectral data. The relative entropy criterion measures the information divergence between the probability density function of the feature subset and the Gaussian probability density function, which increases the separability of the unknown clusters in the lower dimensional space. One advantage of these methods is that no labeled samples are used. The methods were tested using simulated data as well as remotely sensed data.
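A minimal sketch of the relative entropy projection index follows, using a histogram estimate of the projected density and a random search over directions in place of the paper's optimization algorithm; bin counts and trial counts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch: estimate the relative entropy (KL divergence) between
# the distribution of 1-D projected data and the standard Gaussian from
# a histogram, and search for the direction that maximizes it.

def relative_entropy_index(x, bins=30):
    """KL(p_hat || N(0,1)) for standardized 1-D projected data x."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    hist, edges = np.histogram(x, bins=bins, density=True)
    widths = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist * widths                # empirical bin probabilities
    q = norm.pdf(centers) * widths   # standard Gaussian bin probabilities
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def projection_pursuit(data, n_trials=2000, seed=0):
    """Random-search stand-in for the optimization of the index."""
    rng = np.random.default_rng(seed)
    best_w, best_idx = None, -np.inf
    for _ in range(n_trials):
        w = rng.normal(size=data.shape[1])
        w /= np.linalg.norm(w)
        idx = relative_entropy_index(data @ w)
        if idx > best_idx:
            best_w, best_idx = w, idx
    return best_w, best_idx

# Example: a bimodal (two-cluster) data set scores far from Gaussian
# along the direction that separates the clusters.
rng = np.random.default_rng(3)
data = np.vstack([rng.normal(-2, 1, (500, 10)), rng.normal(2, 1, (500, 10))])
w, score = projection_pursuit(data)
```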
Feature extraction, implemented as a linear projection from a higher dimensional space to a lower dimensional subspace, is a very important issue in hyperspectral data analysis. The projection must be done in a manner that minimizes redundancy while maintaining the information content. In hyperspectral data analysis, a relevant objective of feature extraction is to reduce the dimensionality of the data while maintaining the capability to discriminate objects of interest from the cluttered background. This paper presents a comparative study of different unsupervised feature extraction mechanisms and shows their effects on unsupervised detection and classification. The mechanisms implemented and compared are an unsupervised SVD-based band subset selection mechanism, Projection Pursuit, and Principal Component Analysis. To validate the unsupervised methods, supervised mechanisms such as Discriminant Analysis and supervised band subset selection using the Bhattacharyya distance were implemented, and their results were compared with those of the unsupervised methods. Unsupervised band subset selection based on the SVD automatically chooses the most independent set of bands. The Projection Pursuit-based feature extraction algorithm automatically searches for projections that optimize a projection index; the index we optimize measures the information divergence between the probability density function of the projected data and the Gaussian probability density function. This produces a projection where the probability density function of the whole data set is multi-modal instead of a uni-modal Gaussian, which increases the separability of the unknown clusters in the lower dimensional space. Finally, these methods were compared with the well-known and widely used Principal Component Analysis. The methods were tested using synthetic data as well as remotely sensed data obtained from AVIRIS and LANDSAT, and were compared using unsupervised classification methods in a known ground truth area.
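One common way to realize SVD-based selection of the most independent bands is rank-revealing QR with column pivoting on the pixels-by-bands matrix; the sketch below is an illustrative stand-in for the exact procedure used in this work, with a synthetic cube as the example.

```python
import numpy as np
from scipy.linalg import qr

# Illustrative stand-in for unsupervised SVD-based band subset selection:
# QR with column pivoting ranks the columns (bands) of the pixels-by-bands
# matrix by how linearly independent they are from the bands chosen so far.

def select_bands(cube, k):
    """cube: (rows, cols, bands); returns indices of k most independent bands."""
    X = cube.reshape(-1, cube.shape[-1])   # pixels x bands
    X = X - X.mean(axis=0)                 # remove the per-band mean
    _, _, piv = qr(X, mode="economic", pivoting=True)
    return np.sort(piv[:k])

# Example: pick 10 bands from a synthetic 220-band cube whose bands are
# noisy mixtures of 10 underlying spectra (so ~10 independent bands exist).
rng = np.random.default_rng(4)
sources = rng.normal(size=(64 * 64, 10))
mix = rng.normal(size=(10, 220))
cube = (sources @ mix + 0.01 * rng.normal(size=(64 * 64, 220)))
cube = cube.reshape(64, 64, 220)
bands = select_bands(cube, k=10)
```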