Accurate crop type and crop growth stage maps are essential for agricultural monitoring and ensuring food security. A wide variety of airborne and spaceborne sensors now provide images of high spatial, spectral, and temporal resolution, which are vital for crop mapping and monitoring. Crop type and growth stage can be characterized by spectral, spatial, and temporal features. Previous studies have treated the classification of crop types and growth stages as independent tasks. However, the growth stage of a crop is an important cue for identifying the crop, and vice versa. A multi-task learning (MTL) framework is proposed in this work to classify the crop type and its growth stage simultaneously. A hybrid convolutional neural network and temporal convolutional network (CNN-TCN) architecture is presented to process the multitude of features relevant to both tasks. To learn spatio-spectral features, the hyperspectral input is fed into 3D convolution blocks and the multispectral input into 2D convolution blocks. These multi-channel features are then reshaped into two dimensions and fed into the temporal convolutional network, followed by two fully connected branches, one per task. MTL frameworks were developed for multispectral (Mx), hyperspectral (Hx), and combined Hx-Mx images to model crop type and crop growth stage classification. Results reveal that the proposed Hx-Mx model outperformed the best single-task model by 13% in crop growth stage classification and 8% in crop type classification. Compared to single-task models, the proposed model can exploit the high spectral information of Hx images and the high spatial information of Mx images, making it well suited to unmanned-aerial-vehicle-based crop mapping.
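The two fully connected branches described above are typically trained jointly by combining the per-task losses into a single objective. As a minimal sketch (the task weights and logit values below are illustrative assumptions, not values from the paper), the multi-task objective can be written as a weighted sum of two cross-entropy losses:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy of one sample's logits against an integer class label."""
    z = logits - logits.max()                      # stabilise the exponentials
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def multi_task_loss(type_logits, stage_logits, type_label, stage_label,
                    w_type=0.5, w_stage=0.5):
    """Weighted sum of the crop-type and growth-stage losses.
    The equal weights here are an illustrative assumption."""
    return (w_type * softmax_cross_entropy(type_logits, type_label)
            + w_stage * softmax_cross_entropy(stage_logits, stage_label))

# Hypothetical logits from the two fully connected branches.
loss = multi_task_loss(np.array([2.0, 0.1, -1.0]),       # crop-type head
                       np.array([0.3, 1.5, 0.2, -0.5]),  # growth-stage head
                       type_label=0, stage_label=1)
```

Because both heads share the CNN-TCN backbone, gradients from each task regularize the shared spatio-spectral-temporal features, which is the usual rationale for hard-parameter-sharing MTL.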
The DInSAR technique has long been used to observe and monitor ground surface changes over large areas, using multi-temporal SAR image datasets. To overcome the signal decorrelation issues associated with DInSAR-based processing, the Persistent Scatterer InSAR (PSInSAR) technique has gained prominence in recent years, enabling deformation measurements at small spatial scales and fine (mm-level) accuracy. In this paper, ground deformation of the area near Naples, Italy, has been estimated for the year 2014 using a spatial-correlation-based PSInSAR method. Cosmo-SkyMed (CSK) SLC images (ascending orbit, single polarisation, stripmap Himage mode) of Very High Resolution (VHR) were used owing to their capability to detect small-scale ground deformation signals over an urban area. The PSInSAR processing used here involves a two-stage selection of PS points (stable scatterers) from the coregistered SLCs and differential interferograms, using amplitude and phase analysis. The PSInSAR results were validated against time series data from two continuous GPS (cGPS) stations over the same period for the study area. The mean deformation rate over the study area varied from -15 to +18 mm/year along the Cosmo-SkyMed line of sight (LOS). Comparison between the PSI-derived deformation time series and the cGPS measurements reveals a good correlation with minimal discrepancies, although the PS-based LOS displacements differed more distinctly from the cGPS observations at one station than at the other.
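The first stage of PS candidate selection by amplitude analysis is commonly based on the amplitude dispersion index, the ratio of the temporal standard deviation to the temporal mean of each pixel's amplitude across the SLC stack. The following is a minimal sketch of that criterion; the synthetic amplitude stack and the 0.25 threshold (a value commonly used in the PSInSAR literature) are illustrative, not taken from this study:

```python
import numpy as np

def amplitude_dispersion(amplitudes):
    """Amplitude dispersion index D_A = sigma_A / mu_A per pixel,
    computed over the temporal stack (axis 0)."""
    return amplitudes.std(axis=0) / amplitudes.mean(axis=0)

# Synthetic stand-in for a coregistered stack: 30 SLC amplitudes, 4x4 pixels.
rng = np.random.default_rng(0)
stack = rng.gamma(shape=20.0, scale=1.0, size=(30, 4, 4))

adi = amplitude_dispersion(stack)
candidates = adi < 0.25   # pixels with stable amplitude become PS candidates
```

Pixels passing this amplitude test are then refined in the second stage using phase analysis of the differential interferograms.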
The goal of our work is to use visual attention to enhance autonomous driving performance. We present two methods of predicting visual attention maps. The first is a supervised learning approach in which we collect eye-gaze data for the driving task and use it to train a model that predicts the attention map. The second is a novel unsupervised approach in which a model learns to predict attention as it learns to drive a car. Finally, we present a comparative study of our results and show that incorporating the supervised attention predictions yields better driving performance than the other approaches.
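Supervised attention prediction of this kind typically requires converting discrete eye-gaze fixation points into a dense training target. A common recipe, sketched below (the image size, fixation coordinates, and Gaussian width are illustrative assumptions), is to place a Gaussian at each fixation and normalise the sum:

```python
import numpy as np

def gaze_heatmap(fixations, shape, sigma=10.0):
    """Dense attention target: sum of Gaussians centred on (row, col)
    fixation points, normalised so the peak value is 1."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape)
    for r, c in fixations:
        heat += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()

# Two hypothetical fixations on a 96x128 frame.
target = gaze_heatmap([(30, 40), (60, 80)], shape=(96, 128))
```

A model trained against such targets (e.g. with a pixel-wise regression or KL-divergence loss) then produces the attention maps that are fed back into the driving policy.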